Category Archives: security

Beyond Passwords: 2FA, U2F and Google Advanced Protection

Presently sponsored by: Netsparker - a scalable and dead accurate web application security solution. Scan thousands of web applications within just hours.


Last week I wrote a couple of different pieces on passwords, firstly about why we're going to be stuck with them for a long time yet and then secondly, about how we all bear some responsibility for making good password choices. A few people took some of the points I made in those posts as being contentious, although on reflection I suspect it was more a case of lamenting that we shouldn't be in a position where we're still dependent on passwords and people needing to understand good password management practices in order for them to work properly.

This week, I wanted to focus on going beyond passwords and talk about 2FA. Per the title, not just any old 2FA but U2F and in particular, Google's Advanced Protection Program. This post will be partly about 2FA in general, but also specifically about Google's program because of the masses of people dependent on them for Gmail. Your email address is the skeleton key to your life (not just "online" life) so protecting that is absolutely paramount.

Let's start with defining some terms because they tend to be used a little interchangeably. Before I do that, a caveat: every single time I see discussion on what these terms mean, it descends into arguments about the true meaning and mechanics of each. Let's not get bogged down in that and instead focus on the practical implications of each.

2FA, MFA, 2-Step

They may all be familiar, but there are important differences that warrant explanation and we'll start with the acronym we most commonly see:

2FA is two-factor authentication. For some quick perspective, a password alone is 1FA in that when you authenticate merely by entering a secret, all you require is one factor - "something that you know". If someone obtains the thing that you know then it's (probably) game over and they have access to your account. Adding a second factor typically means either requiring "something that you have" or "something that you are". The former is a physical device, for example I had one of these old RSA tokens more than a decade ago back in corporate land:


When I logged onto the work VPN, I needed to enter not just my Active Directory credentials but also the 6-digit number shown in the token above known as a time-based one-time password (TOTP). The bars on the left of the LCD screen would count down after which the digits would be rotated and I'd need to enter a different TOTP when authenticating. This was a pretty solid way of doing auth, albeit a bit clunky and still not foolproof. For every good security solution, someone will always find a way of screwing it up:
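Those rotating digits are a time-based one-time password as standardised in RFC 6238 (itself built on RFC 4226's HOTP): the token and the server share a secret, and both derive the current code from that secret plus the index of the current 30-second time window. Here's a minimal sketch in Python, using the RFC's published test vector rather than any real secret:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = timestamp // step                         # 30-second time window index
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter (RFC 4226)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # "dynamic truncation" per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238's published test secret and timestamp, not real credentials:
print(totp(b"12345678901234567890", timestamp=59))  # → 287082
```

The thing to notice is that the whole scheme rests on a shared secret: anyone who copies that secret can mint identical codes, a point that matters again shortly when distinguishing 2FA from 2-step.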


When it comes to "something you are", we're talking biometrics so think fingerprints, face, etc. Requiring, for example, both a password and a fingerprint would be a 2FA implementation.

Here's a very consumer-friendly way of describing 2FA: withdrawing money from an ATM requires two factors being your bank card (something you have) and your PIN (something you know). The bank card alone is useless as is the PIN; it's only a combination of the 2 that is usable.

MFA is multi-factor authentication. Strictly speaking, 2FA is MFA in that obviously, it's more than one factor. It's a subset of MFA. In consumer-facing implementations, most MFA is 2FA such as the RSA token example above. Same again with verification via SMS (don't lose your minds just yet, I'll come back to this). But, of course, MFA could also be 3FA and combine all the above factors, or even 4FA using an additional factor such as physical location.

2-Step authentication does not necessarily require 2 discrete factors. Entering 2 different passwords, for example, might be 2-step but is entirely predicated on "something you know". Let me draw from an excellent Security Stack Exchange explanation of this:

A good example of this is the two-step authentication required by Gmail. After providing the password you've memorized, you're required to also provide the one-time password displayed on your phone. While the phone may appear to be "something you have", from a security perspective it's still "something you know". This is because the key to the authentication isn't the device itself, but rather information stored on the device which could in theory be copied by an attacker. So, by copying both your memorized password and the OTP configuration, an attacker could successfully impersonate you without actually stealing anything physical.

You'll also regularly see arguments stating that what many consider to be 2FA is really just 2-step. For example, if you physically have someone's mobile phone in your hand and it's unlocked, you could login to an account by initiating a password reset, receiving the email in their email client then entering the "2nd factor" token sent via SMS or generated by a soft token app on the device. Using a soft token generated on your phone per the Stack Exchange explanation is, strictly speaking, not 2FA. 1Password has a great explanation of this (it's worth reading the entire "Second factor? No" section of that page):

We need to make the distinction between one time passwords and second factor security. One time passwords are often part of second factor security systems, but using one time passwords doesn’t automatically give you second factor security. Indeed, when you store your TOTP secret in the same place that you keep your password for a site, you do not have second factor security.

That's not to say that this model is "bad"; by any reasonable definition, it's a massive improvement over 1FA and would prevent the vast majority of account takeover attacks I see today. But many arguments have already been had about these definitions and as I said earlier, I don't want to get bogged down in that; let's talk specifics instead. Let's talk about what makes for good authentication practices.

The Hierarchy of Auth

I want to go through 5 separate levels of auth using common approaches, explain briefly how they work and then some common threats they're at risk of. Let's start somewhere familiar:

Password alone: This constitutes a single factor of auth and if someone else gets hold of it then there's a very good chance you're going to have a bad day. Passwords suffer from all the problems you're probably already aware of: they're often weak, they're regularly reused and they're also readily obtainable through attacks such as social engineering (phishing, smishing, vishing, etc.)

Password and SMS: I see a lot of derogatory comments about this pattern but let's be clear about one thing: a password and an SMS is always going to be better than a password alone. Those derogatory comments often relate to the prevalence of SIM porting, where the attacker manages to port the victim's number to their own device and is subsequently able to receive SMSs destined for the victim. It's most damaging when account recovery can be facilitated via SMS alone (i.e. you forget your password and can reset it with nothing more than a code sent to your phone).

Password and soft token: A very popular approach these days and it leverages an app such as Google Authenticator or Authy to generate the TOTP. The SIM porting risk is avoided (if you configure it properly), plus there's not the dedicated hardware dependency of the likes of an RSA token and consequently not the financial burden of acquiring hardware devices. This is probably the best balance of security, usability and cost we have going for us today.

Password and hard token: A similar situation to soft tokens but it requires a physical device which meets even a strict definition of 2FA. It addresses weaknesses with how many people configure soft tokens but it also introduces a cost barrier and requires you to have it physically present. It's also not immune to phishing: if a victim enters their first factor (password) into a malicious page and the attacker then requests the TOTP and (quickly) uses those on the target site, you've got a problem. Oh - you're still laughing about the webcam pointed at the tokens from earlier, so here's another one complete with instructions on how to set it up and even OCR the digits on the token:


Password and U2F: This is where I want to focus on the remainder of this blog because it solves all the problems raised above with the other approaches. Let's drop straight into understanding what U2F does and why you want it.

Understanding U2F

U2F is Universal 2nd Factor and according to Wikipedia:

is an open authentication standard that strengthens and simplifies two-factor authentication (2FA) using specialized Universal Serial Bus (USB) or near-field communication (NFC) devices based on similar security technology found in smart cards.

I suspect a word that's more familiar to most people is YubiKey:


YubiKey is a popular implementation of U2F and per the Wikipedia piece above, both Yubico and Google were responsible for the original development of it. The standard is now managed by the FIDO Alliance (Fast IDentity Online) and you'll see that name appear again as we progress. FIDO has released protocols that not only allow U2F devices like the one above to communicate over USB, but also over Bluetooth and NFC.

The value proposition of a U2F device like the YubiKey is that not only must it be physically present ("something that you have"), there's also no TOTP for the user to hand over to a third-party service which could still be a phishing page. (You also don't get very far by pointing a webcam at it!) There's a lot more to them than that, but for the purposes of this post, the key point is that U2F solves the phishing problem.
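A simplified model of why that is: during authentication the browser tells the device which origin it's on, and that origin is baked into what gets signed. The sketch below uses HMAC as a stand-in for the ECDSA P-256 signature a real U2F device produces, purely to illustrate the origin binding:

```python
import hashlib
import hmac

# Stand-in for the per-site key held inside the device; real U2F uses ECDSA P-256.
DEVICE_KEY = b"secret key that never leaves the U2F device"

def device_sign(origin, challenge):
    """The token signs a digest that binds the browser-reported origin into the response."""
    app_param = hashlib.sha256(origin.encode()).digest()
    return hmac.new(DEVICE_KEY, app_param + challenge, hashlib.sha256).digest()

def server_verify(expected_origin, challenge, signature):
    """The genuine site only accepts responses bound to its own origin."""
    app_param = hashlib.sha256(expected_origin.encode()).digest()
    expected = hmac.new(DEVICE_KEY, app_param + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"random-server-nonce"
# A response produced while the user is on the real site verifies...
assert server_verify("https://accounts.google.com", challenge,
                     device_sign("https://accounts.google.com", challenge))
# ...but one obtained via a phishing page on a lookalike origin does not.
assert not server_verify("https://accounts.google.com", challenge,
                         device_sign("https://accounts-google.evil.example", challenge))
```

Because the phishing page's origin hashes to a different application parameter, whatever the attacker harvests is useless against the genuine site; no amount of user carelessness changes that.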

All of this so far has just been to establish the problem with various means of authentication and to talk about the solution that is U2F. Let's move onto Google's use of it.

Google Advanced Protection Program

There was actually an incident that led to this journey down the U2F path and it relates to this:

This genuinely was a relative (no, not "a friend"!) and they'd been locked out of their account and no longer had access to the old email address that was used for recovery. At about that time, they received a notification saying that their Google password had been changed and they subsequently lost access to their Gmail. On the face of it, things looked bad and whilst I'd be inclined to reach the same conclusion that you probably already have in reading this, they also had SMS-based 2FA turned on and still had access to their mobile account. I assumed it was then either a case of someone phishing the TOTP sent via SMS or.... they'd inadvertently changed the password and locked themselves out. It turned out to be the latter, but it really got me thinking more about Google account security.

During the process of recovering their account, I spent a bit of time reading about Google's Advanced Protection Program which goes beyond SMS-based 2FA (or 2-step or whichever term you wish to use) all the way to U2F. The 2-minute video below is a pretty good summary of what it's about:

I decided to go through the process myself, putting myself in the shoes of the normal everyday consumer who'd be interested in turning on a feature like this; in other words, simply following their instructions (more on alternatives later on). Also, just to be clear, Google's Advanced Protection goes beyond just 2FA via U2F; there are also controls it turns on around the types of apps that can interact with your account and changes to the way account recovery works should you ever need to go down that path.

First up, you'll need 2 U2F keys where one will act as a primary and the other a backup. Plus, one will slot into a USB socket whilst the other will connect to a mobile device. Google linked me straight through to a 2-in-1 bundled offering on Amazon by Feitian for $40:


Feitian is a Chinese company that builds a bunch of different hardware security products. The white key above is their MultiPass FIDO unit in that it works across multiple communication protocols: USB, NFC and Bluetooth. The black key is their ePass FIDO and whilst compact, only supports USB so you're never going to be using this on, say, an iPhone (at least not without an additional dongle). You can use other keys but again, I wanted to go down the "follow the instructions" consumer route and experience the process as a normal everyday non-techie would. I'll ultimately be using these on both PCs and iOS devices so the Feitian route also aligns with Google's FAQ on keys:


I ended up needing to order them via eBay instead of Google's recommended Amazon item (thanks screwy Aussie GST laws), but it's the same product and a week later, it was on my doorstep:


So let's get into it! Kicking off on the enrolment page, first, you have to (re)auth to Google:


And just as a reminder, you need the keys:


Clicking the "enrol" button, it's now on (and I also get an email confirming this), but the keys themselves have yet to be enrolled:


I already have the keys so let's register them:


For the first key:


Key goes in and I tap:


Chrome then pops up and asks for access to the key:


The key is registered so now give it a name:


Job done! I add the second key in the same fashion, albeit at the end of the provided micro USB cable:


I name that one appropriately:


Both keys are now successfully registered:


Now we turn it on:


And just in case you got to all of this by accident, are you really, really sure?


And we're done!


In case anyone wants to read through that "how to sign in with a security key" guidance, I've linked to it so you can read it in your own time.

I'm well and truly signed out, both on the web and in Chrome, which now shows a status of "Paused":


I click the sign in link and fill the password from 1Password after which I'm faced with the second factor challenge:


I insert the ePass key, tap the button and wait:


And I'm back in!


I repeat the process for my two laptops, no dramas. This is actually much easier than either SMS or TOTP solutions which require you to re-enter a number into the page whilst the clock is counting down. One of those rare moments where more security also means more usability! Seriously, signing in once this is set up is dead simple, but that's just the PCs so far, so let's do the mobile things.

Over on the iPhone, I've been signed out of the YouTube app so let's use that as a starting point:


Once I attempt to sign in, the Google Smart Lock app makes an appearance:


At least on iOS, auth'ing back into your Google account requires the Google app. No biggy, grab the app and begin the pairing:


Now because this is an iPhone without a USB socket, I'll be using the MultiPass unit with Bluetooth to auth the phone so I grab that and follow the instructions:


Once the key is in pairing mode, it shows up in the Google app with its 6-character identifier:


This same identifier is on the back of the key. Speaking of which:


And then finally, Apple pops up a native challenge that identifies the key via the same 6-letter name as before and asks for the PIN:


And we're done!


The sign-in over in the YouTube app can now use Google Smart Lock and I'm back into the account. Repeat the same thing on the iPad and all the mobile devices are now successfully auth'd.

Google's implementation is just lovely. It's fast, easy and for most people, something they're going to do very infrequently anyway. Gmail users are the big winners out of all this: as I said earlier, email is the skeleton key to your life and at least for tech folks who understand the importance of protecting accounts, this seems like a no-brainer. I wouldn't want to have to go through account recovery because I get the distinct impression that it wouldn't be a fun experience, but that's also kinda the idea: gaining access to an account without sufficient identity proof should be hard! This is why the SMS porting thing is a real problem; the number alone should never be used for account recovery because being in control of just a phone number simply isn't enough to verify identity with any reasonable degree of confidence.

There's an increasing array of other services out there that enable 2FA using U2F too, including:

  1. Dropbox
  2. Twitter
  3. Facebook
  4. GitHub

As I mentioned earlier, this was really "the consumer" path to U2F on Google. There are other U2F keys out there and I mentioned YubiKey earlier on. They've just launched the 5 series which is a pretty slick implementation that includes iPhone / iPad compatible NFC and supports FIDO2. But it's also $45 for one key and you're still going to need another one to enrol in Google's U2F program.

Lastly, let me leave you with an anecdote related to going beyond 1FA: I recently had someone contact me and complain that GitHub was warning them about their choice of passwords following their integration with my Pwned Passwords service. "I should be able to use any password I want", he lamented. "I've got 2FA turned on so what does it matter?!" Now, hopefully the problem here is already self-evident, but let's just be crystal clear anyway: adding a second step to authentication should not be seen as an excuse to weaken the first step. I'm hesitant to call this guy's approach 2FA (if it's true MFA at all); it's more like 1.5FA or something thereabouts. The point is, use the approaches above as additional security controls, not as an excuse to weaken existing ones!

When Accounts are “Hacked” Due to Poor Passwords, Victims Must Share the Blame



It's just another day on the internet when the news is full of headlines about accounts being hacked. Yesterday was a perfect example of that with 2 separate noteworthy stories adorning my early morning Twitter feed. The first one was about HSBC disclosing a "security incident" which, upon closer inspection, boiled down to this:

The security incident that HSBC described in its letter seems to fit the characteristics of brute-force password-guessing attempts, also known as a credentials stuffing attack. This is when hackers try usernames and password combos leaked in data breaches at other companies, hoping that some users might have reused usernames and passwords across services.
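To make the mechanics concrete, here's a toy illustration in Python of why reuse is the enabler; every account and password here is entirely made up:

```python
# Credentials exposed in a breach of some unrelated service (hypothetical data).
breached = [("alice@example.com", "hunter2"),
            ("bob@example.com", "p@ssw0rd")]

# Passwords the same people chose at the service now under attack (hypothetical data).
other_service = {"alice@example.com": "hunter2",                      # reused
                 "bob@example.com": "correct horse battery staple"}   # unique

# The attacker simply replays every breached pair against the second service.
compromised = [email for email, password in breached
               if other_service.get(email) == password]
print(compromised)  # → ['alice@example.com']
```

No code is exploited and no vulnerability is involved; the only person who reused a password is the only one who loses their account.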

The second story was about a number of verified Twitter accounts having been "hacked" and then leveraged in Bitcoin scams. On the face of it they're pretty similar stories with both resulting in unauthorised access to a small percentage of the respective organisations' user accounts. Like the quote in the HSBC article above, Occam's Razor (the simplest solution tends to be the correct one) would suggest that the Twitter situation has the same root cause: people choosing poor passwords which they then reuse across services and attackers then use those exposed in one location to break into accounts in another location. If that was the case, the "hack" would not constitute some sophisticated exploit of vulnerable code as the term suggests to many people, rather it's made possible due to the victim's choice of password. As such, I proposed the headlines as they stood were likely inaccurate:

Now, for the most part there was much support for this and clearly very many likes. But there was also a theme that popped up that needs addressing, and it boiled down to this:

You're victim blaming. Stop victim blaming.

Yes, I am and no, I won't. This issue - the one that implies there's no responsibility on behalf of the victims in these incidents - needs addressing because frankly, it's an absolute cop out. I'll come back to that but firstly, let's all agree on who has a role to play here:

  1. The person breaking into the account
  2. The organisation responsible for the accounts
  3. The account holders themselves

Let's talk about the responsibility of each and we'll start with the attacker. Without doubt, blame lies with them. Their activity not only causes harm to the next two roles in the list, it's outright illegal and in cases like the two stories above, there's a good chance it'll end in jail time if they're caught. Their actions are selfish, malicious and should be punished.

The organisation also has a role to play and some blame must lie with them for facilitating the account takeover. In fact, the FTC in the US has been very clear about this: if customer data was put at risk by credential stuffing, then being the innocent corporate victim is no defence to an enforcement case. I'm sympathetic to the organisation because it's a hard problem to solve (stopping an attacker who fronts up with a victim's legit credentials), but this is today's reality of managing online accounts.

And then there's the account holder, the one who chose the password. The one who - assuming a credential stuffing attack - used that same password somewhere else. It's not just reused but almost certainly also weak by any reasonable definition of the term; of the 517M passwords I manage in Pwned Passwords, they overwhelmingly meet this definition and I'm using precisely the same sources as attackers are to break into services like HSBC and Twitter. (Incidentally, one of the reasons they're weak is that many come from successful hash cracking exercises against data breaches such as LinkedIn's, which stored them as SHA-1. Whilst that may now be a totally unsuitable means of storing passwords, strong ones not previously seen before - such as my own which is in that breach - still aren't getting cracked.)

The account holder is the victim but they must also share the blame. They made a decision of their own free volition which put them at risk and now they're suffering as a result. To suggest that somehow "victim blaming" is a bad thing is an absolute falsehood when their actions enabled the outcome. I just can't wrap my head around why anyone would think that people should be able to take whatever shortcuts they want with their personal security and somehow, magically, have absolutely no responsibility whatsoever for the outcome.

At this point, I want to make a tangential comment on this term "victim blaming" because if I don't, it will inevitably be raised in the comments: in no way, shape or form is the term as it's used above analogous to how it's often applied to victims of sexual assault. In doing a bit of reading, apparently the coining of the phrase originally related to racism and you'll find that Wikipedia article full of references to rape, hate crimes and domestic abuse. Clearly, these are fundamentally different situations to people's choice of password and any attempt to draw parallels between them will almost certainly be a terrible IRL analogy attempting to explain a digital concept. Let's not go there; it's a much more serious and fundamentally different proposition to using your cat's name as a password.

The problem with the term "victim blaming" is the willingness for people to misappropriate it from the origins discussed in the previous paragraph and apply it to cases where victims do indeed have blame to wear. If I crash my car after driving like a lunatic, I am both a victim and worthy of blame. If I pat the poisonous things in my Aussie back yard and get bitten it's the same again. DON'T PAT THE POISONOUS THINGS! My kids understand this, why are some adults struggling with the cause and effect of poor password choices? I'm not trying to go down that poor IRL analogy path myself, but rather demonstrate that the words "victim" and "blame" are not mutually exclusive by any reasonable definition of the words.

The issue I continually came back to when reflecting on the hacking "victim blaming" comments was that they implied people were not responsible for the personal security decisions they made. That somehow, those decisions wouldn't influence the outcome of an attack because if they did - if they had the ability to make conscious decisions on their own behalf - they could make both good and bad decisions. You just can't have it both ways where on the one hand the victim blaming brigade says "you should focus on educating people so that they're able to make good decisions" but then on the other hand say "nobody should ever be accountable for making bad decisions". That's just not how life works and furthermore, it's not consistent with what the vast majority of people believe:

Just in case you're reading this before that poll wraps up, at the time of publishing it was showing 83% of people agreeing that at least some responsibility lies with the person making the decision about their own personal security posture. Clearly, I'm with the masses here: we all have the ability to make decisions that impact our security posture and I'm going to keep doing my utmost to help educate people about how to make the best possible ones. But when they don't, I'm also going to tell them to take responsibility for their actions and even as a victim, acknowledge some fault.

Despite an overwhelming majority of respondents to that poll agreeing with my stance, there was still much "robust" discussion to the contrary. It's worth reading and even though I don't agree with much of it, I appreciate the perspectives being shared. A common theme that emerged was "but people don't understand password security so they can't be responsible". I vehemently reject the premise that not understanding something becomes a get out of jail free card. Further, I counter with the suggestion that pretty much everyone creating online accounts has at least some understanding of the basics: that passwords shouldn't be easily guessable and you shouldn't use the same one everywhere. That doesn't necessarily mean that they adhere to those principles, but let's not pretend that you can go through online life without ever seeing password rules on a website or being told that the one you just entered "isn't strong enough". Further to that, the terms and conditions we abide by when using online services agree. For example, here's Twitter's terms of service:

You are responsible for safeguarding your account, so use a strong password and limit its use to this account. We cannot and will not be liable for any loss or damage arising from your failure to comply with the above.

So Twitter isn't responsible for someone's poor password practices, the person with the poor password practices is! Here's Amazon's ToS:

You are responsible for maintaining the confidentiality of your account and password and for restricting access to your account, and you agree to accept responsibility for all activities that occur under your account or password.

Who's responsible? The person creating the password! Just like on Google as well:

To protect your Google Account, keep your password confidential. You are responsible for the activity that happens on or through your Google Account.

And just before you chime in via the comments below, remember that when you login to Disqus to do so, you're solely responsible for your choice of password:

You are solely responsible for the activity that occurs on your account, and you must keep your account password secure. We encourage you to use “strong” passwords (passwords that use a combination of upper and lowercase letters, numbers and symbols) with your account.

Now I'm not suggesting that people actually read terms of service (they almost certainly don't), but if push comes to shove and your Twitter or Amazon or Google or Disqus account is compromised by precisely the means presented in that earlier poll, you are responsible and like it or not, you agreed to be. None of these terms say "you are responsible but if you don't understand passwords very well then that's cool, you're not responsible any more"! That's just not how any of this works.

Finally, I want to touch briefly on our responsibility to help lead people creating accounts down the path of success. We need to get better at designing systems that are more resilient to credential stuffing attacks (companies like Shape Security are focused on this), better at helping people make good password choices at signup (for example, GitHub's use of Pwned Passwords) and better at educating individuals about tools like password managers (1Password has a heap of great consumer-facing content). We should drive the adoption of multi-factor auth but also recognise its limitations (particularly for the less technically adept), and we should provide those who have a harder time with technology viable alternatives such as password books. Let's work more towards equipping everyone with the knowledge to make good security decisions and recognise that everyone is empowered to do so.
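As a concrete example of that signup check, the Pwned Passwords range API uses a k-anonymity model: the client hashes the candidate password with SHA-1 and sends only the first five hex characters, then matches the returned suffixes locally. A sketch of the client-side half in Python (the network call is shown as a comment rather than performed):

```python
import hashlib

def pwned_range_query(password):
    """Split a password's SHA-1 hash for a k-anonymity range lookup:
    only the first five hex characters would ever leave the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # A real client would now GET https://api.pwnedpasswords.com/range/<prefix>
    # and look for <suffix> among the returned hash suffixes.
    return prefix, suffix

prefix, suffix = pwned_range_query("password")
print(prefix)  # → 5BAA6
```

Because only the 5-character prefix is ever transmitted and the remaining 35 characters are matched on the client, neither the password nor even its full hash leaves the machine.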

So by all means, call it victim blaming if you must, but when applied to making poor security decisions of the kind discussed above, the responsibility is a shared one.

Edit: I expanded on this further verbally in my weekly update the following day. It's worth a watch, particularly if you've made it here and disagree with my position on this as the video does a much better job of conveying sentiment IMHO.

Here’s Why [Insert Thing Here] Is Not a Password Killer



These days, I get a lot of messages from people on security related things. Often it's related to data breaches or sloppy behaviour on behalf of some online service playing fast and loose with HTTPS or passwords or some other easily observable security posture. But on a fairly regular basis, I get an email from someone which effectively boils down to this:

Hey, have you seen [insert thing here]? It's totally going to kill passwords!

No, it's not and to save myself from repeating the same message over and over again, I want to articulate precisely why passwords have a lot of life left in them yet. But firstly, let me provide a high-level overview of the sort of product I'm talking about and I'll begin with recognising the problem it's trying to solve:

People suck at passwords. I know, massive shock, right? They suck at making genuinely strong ones, they suck at making unique ones and they suck at handling them in a secure fashion. This leads to everything from simple account takeover (someone else now controls their eBay or their Spotify or whatever else), to financial damages (goods or services bought or sold under their identity) to full-on data breaches (many of these occur due to admins reusing credentials). There is no escaping the fact that passwords remain high-risk security propositions for the vast majority of people. Part of the solution to this is to give people the controls to do password-based authentication better, for example by using a password manager and enabling 2FA. But others believe that passwords themselves have to go completely to solve the problem which brings us to proposed alternatives.
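As an aside, the 2FA codes that authenticator apps generate are typically time-based one-time passwords per RFC 6238: an HMAC-SHA1 over a 30-second time counter, dynamically truncated to a few digits. A minimal sketch of the derivation (the secret here is the ASCII test vector from the RFC, not anything you'd use in production):

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, dynamically truncated."""
    counter = struct.pack(">Q", timestamp // step)          # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                 # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890" at t=59
print(totp(b"12345678901234567890", 59))  # → 287082
```

Both the server and the app derive the same code from a shared secret and the current time, which is why the codes work offline but are phishable in real time (a point U2F was designed to fix).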

I don't want to single any one product out here because the piece I'm writing is bigger than that, so let's talk about patterns instead. I'm referring to passwordless solutions that involve things like QR codes, pictorial representations, third-party mobile apps, dedicated hardware devices or "magic" links sent via email. I'm sure there are others but for the purpose of this post, any pattern that doesn't involve entering a username and a password into a couple of input fields is in scope. To their credit, some of these solutions are genuinely very good - technically very good - but what proponents of them seem to regularly miss is that "technically" isn't enough. Despite their respective merits, every one of these solutions has a massive shortcoming that severely limits their viability and it's something they simply can't compete with:

Despite its many flaws, the one thing that the humble password has going for it over technically superior alternatives is that everyone understands how to use it. Everyone.

This is where we need to recognise that decisions around things like auth schemes go well beyond technology merits alone. Arguably, the same could be said about any security control and I've made the point many times before that these things need to be looked at from a very balanced viewpoint. There are merits and there are deficiencies and unless you can recognise both (regardless of how much you agree with them), it's going to be hard to arrive at the best outcome.

Let me put this in perspective: assume you're tasked with building a new system which has a requirement for registration and subsequently, authentication. You go to the Marketing Manager and say "hey, there's this great product called [insert thing here] that replaces passwords and all you have to do to sign in is...". And you've already lost the argument because the foremost thing on the Marketing Manager's mind is reducing friction. Their number one priority is to get people signing up to the service and using it because ultimately, that's what generates revenue or increases brand awareness or customer loyalty or achieves whatever the objective was for creating the service in the first place. As soon as you ask people to start doing something they're not familiar with, the risk of them simply abandoning the process increases, which defeats the whole point of having the service at all.

This isn't a new discussion either, the one about considering usability alongside security. In fact, we have this discussion every time we build a system that defines minimum criteria for passwords; yes, a min of 20 chars would make for much stronger passwords but no, we very rarely do this because we actually want customers! If you look at the minimum password length requirements by large online services you can see the results of this discussion covering a fairly broad spread with Google and Microsoft demanding at least 8 characters, Amazon and Twitter wanting 6 and Netflix requiring... 4. But I'm hesitant to berate Netflix for what seems like an extremely low number because they're also dealing with the usability challenge that is people logging on to TVs with remote controls. The point of all this is that usability is an absolutely essential attribute of the auth discussion.

What I often find when I have these discussions is a myopic focus on technical merits. I'll give you an example from earlier last year where someone reached out and espoused the virtues of the solution they'd built. They were emphatic that passwords were no longer required due to the merits of [insert thing here] and were frustrated that the companies they were approaching weren't entertaining the idea of using their product. I replied and explained pretty much what's outlined above - the conversation is going to get shut down as soon as you start asking companies to impose friction on their users but try as I might, they simply couldn't get the message. "What barrier? There is no barrier!" They went on to say that companies not willing to embrace products like this and educate their users about alternative auth schemes are the real problem and that they should adjust their behaviour accordingly. I countered with what remains a point that's very hard to argue against:

If your product is so awesome, have you stopped to consider why no one is using it?

Now in fairness, it may not be precisely "no one" but in this case and so many of the other [insert things here], I'd never seen them in use before and I do tend to get around the internet a bit. Maybe they're used in very niche corners of the web, but the point is that none of these products are exactly taking the industry by storm and there's a very simple reason for that: there's a fundamental usability problem. This particular discussion ended when they replied with this:

I think it is only negativity that doesn’t allow positiveness to excel

Ugh. I'm negative about stuff that's no good, yes. I dropped out of the discussion at that point.

Almost a year ago, I travelled to Washington DC and sat in front of a room full of congressmen and congresswomen and explained why knowledge-based authentication (KBA) was such a problem in the age of the data breach. I was asked to testify because of my experience in dealing with data breaches, many of which exposed personal data attributes such as people's date of birth. You know, the thing companies ask you for in order to verify that you are who you say you are! We all recognise the flaws in using static KBA (knowledge of something that can't be changed), but just in case the penny hasn't yet dropped, do a find for "dates of birth" on the list of pwned websites in Have I Been Pwned. So why do we still use such a clearly fallible means of identity verification? For precisely the same reason we still use the humble password and that's simply because every single person knows how to use it.

This is why passwords aren't going anywhere in the foreseeable future and why [insert thing here] isn't going to kill them. No amount of focusing on how bad passwords are or how many accounts have been breached or what it costs when people can't access their accounts is going to change that. Nor will the technical prowess of [insert thing here] change the discussion because it simply can't compete with passwords on that one metric organisations are so focused on: usability. Sure, there'll be edge cases and certainly there remain scenarios where higher friction can be justified due to either the nature of the asset being protected or the demographic of the audience, but you're not about to see your everyday e-commerce, social media or even banking sites changing en masse.

Now, one more thing: if I don't mention biometrics and WebAuthn they'll continually show up in the comments anyway. On the biometrics front, I'm a big supporter of things like Face ID and Windows Hello (I love my Logitech BRIO for that). But they don't replace passwords, rather they provide you with an alternate means of authenticating. I still have my Apple account password and my Microsoft password, I just don't use them as frequently. WebAuthn has the potential to be awesome, not least because it's a W3C initiative and not a vendor pushing their cyber thing. But it's also extremely early days and even then, as with [insert things here], it will lead to a change in process that brings with it friction. The difference though - the great hope - is that it might redefine authentication to online services in an open, standardised way and ultimately achieve broad adoption. But that's many years out yet.

So what are we left with? What's actually being recommended and, indeed, adopted? Well to start with, there's lots of great guidance from the likes of the National Cyber Security Centre (NCSC) in the UK and the National Institute of Standards and Technology (NIST) in the US. I summarised many of the highlights in my post last year about Passwords Evolved: Authentication Guidance for the Modern Era. As a result of that piece (and following one of the NIST recommendations), I subsequently launched the free Pwned Passwords service which is being used by a heap of online entities including EVE Online and GitHub. This (along with many of the other NCSC or NIST recommendations) improves the password situation rather than solves it and it does it without fundamentally changing the way people authenticate. Every single solution I've seen that claims to "solve the password problem" just adds another challenge in its place thus introducing a new set of problems.
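For those curious how a service can check passwords against Pwned Passwords without ever sending the password anywhere, the API uses a k-anonymity model: the client sends only the first 5 characters of the password's SHA-1 hash and checks the returned suffixes locally. A rough sketch of the client-side logic (the sample response body below is illustrative; real counts come from a GET to the range endpoint at api.pwnedpasswords.com):

```python
import hashlib


def hash_parts(password: str) -> tuple:
    """Split the uppercase SHA-1 hex of a password into the 5-char prefix
    sent to the range API and the suffix that never leaves the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def count_in_range_response(suffix: str, body: str) -> int:
    """Scan a 'SUFFIX:COUNT' range response for our suffix; 0 means not found."""
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


prefix, suffix = hash_parts("password")
print(prefix)  # → 5BAA6 (only this much is ever transmitted)

# Illustrative response body, not real data from the service
sample = (
    "003D68EB55068C33ACE09247EE4C639306B:3\n"
    "1E4C9B93F3F0682250B6CF8331B7EE68FD8:9545824"
)
print(count_in_range_response(suffix, sample) > 0)  # → True
```

Because hundreds of suffixes share each 5-character prefix, the service never learns which password was being checked, which is what makes it palatable for signup forms.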

This is why [insert thing here] is not a password killer and why, for the foreseeable future, we're just going to have to continue getting better at the one authentication scheme that everyone knows how to use: passwords.

New Pluralsight Course: Adapting to the New Normal: Embracing a Security Culture of Continual Change

I take more pleasure than I probably should in watching the bewilderment within organisations as the technology landscape rapidly changes and rushes ahead of them. Perhaps "pleasure" isn't the right word; is it more "amusement"? Or even "curiosity"? Whichever it is, I find myself rhetorically asking "so you just expected everything to stay the same forever, did you?"

A case in point: you should look for the green padlock on a website so that you know it's safe. Except that you can't say that anymore because so many phishing sites are using HTTPS (remember, encryption is morally neutral) which is why Barclays Bank had their ad pulled earlier this year. You also can't say "green padlock" anymore because after Chrome 69 hit earlier this month, they're all grey. Unless, that is, you're on iOS and using Safari in which case it's green but only if it's an EV cert and even that changed just a couple of weeks ago when Apple killed off the company name in the address bar for EV. Which, of course, also means you can't tell people to look for the company name in the address bar either!

The point of all this is that technology - including information security - is a rapidly changing landscape and that's exactly what I'm talking about in the latest Security Culture episode on Pluralsight. This is the quarterly series we launched earlier this year which aims to help organisations better understand how to create a culture of security. Not just the hard skills, but the knowledge people need to really ingrain security into the organisation. Best of all, we've made this series easily accessible to everyone:

This course and the entire Security Culture series is still 100% free!

Adapting to the New Normal: Embracing a Security Culture of Continual Change is now live!