[Cross-posted from the Android Developers Blog]
This trust is paramount to the Android Security team. The team focuses on ensuring that Android devices respect the privacy and sensitivity of user data. A fundamental aspect of this work centers around the lockscreen, which acts as the proverbial front door to our devices. After all, the lockscreen ensures that only the intended user(s) of a device can access their private data.
This blog post outlines recent improvements around how users interact with the lockscreen on Android devices and more generally with authentication. In particular, we focus on two categories of authentication that present both immense potential as well as potentially immense risk if not designed well: biometrics and environmental modalities.
The tiered authentication model
Before getting into the details of lockscreen and authentication improvements, we first want to establish some context to help relate these improvements to each other. A good way to envision these changes is to fit them into the framework of the tiered authentication model, a conceptual classification of all the different authentication modalities on Android, how they relate to each other, and how they are constrained based on this classification.
The model itself is fairly simple, classifying authentication modalities into three buckets of decreasing levels of security and commensurately increasing constraints. The primary tier is the least constrained in the sense that users only need to re-enter a primary modality in certain situations (for example, after each boot or every 72 hours) in order to use its capability. The secondary and tertiary tiers are more constrained: they cannot be set up and used without a primary modality enrolled first, and they are subject to additional constraints that further restrict their capabilities.
- Primary Tier - Knowledge Factor: The first tier consists of modalities that rely on knowledge factors, or something the user knows, for example, a PIN, pattern, or password. Good high-entropy knowledge factors, such as complex passwords that are hard to guess, offer the highest potential guarantee of identity.
Knowledge factors are especially useful on Android because devices offer hardware-backed brute-force protection with exponential backoff, meaning Android devices prevent attackers from repeatedly guessing a PIN, pattern, or password by enforcing hardware-backed timeouts after every 5 incorrect attempts. Knowledge factors also confer additional benefits to all users who use them, such as File Based Encryption (FBE) and encrypted device backup.
- Secondary Tier - Biometrics: The second tier consists primarily of biometrics, or something the user is. Face or fingerprint based authentications are examples of secondary authentication modalities. Biometrics offer a more convenient but potentially less secure way of confirming your identity with a device.
We will delve into Android biometrics in the next section.
- The Tertiary Tier - Environmental: The last tier includes modalities that rely on something the user has. This could be a physical token, as with Smart Lock’s Trusted Devices, where a phone can be unlocked when paired with a safelisted Bluetooth device, or something inherent to the physical environment around the device, as with Smart Lock’s Trusted Places, where a phone can be unlocked when it is taken to a safelisted location.
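The brute-force throttling described for the primary tier can be pictured with a simple model. This is a hypothetical sketch only: the real timeout schedule is enforced in secure hardware, varies by device, and the doubling behavior and 30-second base here are illustrative, not actual platform values.

```java
// Illustrative model of hardware-backed brute-force throttling: after
// every 5 failed attempts, a timeout is imposed that grows exponentially.
// The real schedule is enforced in secure hardware and varies by device;
// the numbers here are hypothetical.
public class ThrottleDemo {
    static long throttleTimeoutSeconds(int failedAttempts) {
        if (failedAttempts < 5) return 0;   // no timeout before the first 5 failures
        int windows = failedAttempts / 5;   // one throttling window per 5 failures
        long base = 30L;                    // hypothetical 30-second starting timeout
        return base << (windows - 1);       // double the timeout each window
    }

    public static void main(String[] args) {
        System.out.println(throttleTimeoutSeconds(4));   // 0: no throttling yet
        System.out.println(throttleTimeoutSeconds(5));   // 30: first timeout window
        System.out.println(throttleTimeoutSeconds(10));  // 60: doubled
    }
}
```

The point of the exponential growth is that even a fast automated guesser is quickly slowed to a handful of attempts per day, making low-entropy PINs far more expensive to brute-force.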
Improvements to tertiary authentication
While both Trusted Places and Trusted Devices (and tertiary modalities in general) offer convenient ways to get access to the contents of your device, the fundamental issue they share is that they are ultimately a poor proxy for user identity. For example, an attacker could unlock a misplaced phone that uses Trusted Places simply by driving it past the user's home, or, with a moderate amount of effort, by spoofing a GPS signal using off-the-shelf software-defined radios and some mild scripting. Similarly, with Trusted Devices, access to a safelisted Bluetooth device also gives access to all data on the user’s phone.
Because of this, a major improvement was made to the environmental tier in Android 10: the tertiary tier was switched from an active unlock mechanism into an extending unlock mechanism. In this new mode, a tertiary modality can no longer unlock a locked device. Instead, if the device is first unlocked using either a primary or secondary modality, a tertiary modality can only keep it in the unlocked state for a maximum of four hours.
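The extending-unlock rule can be sketched as a small decision function. The names and structure below are illustrative, not the actual system implementation; only the four-hour window and the "extend, never unlock" behavior come from the text above.

```java
// Sketch of the "extending unlock" behavior of the tertiary tier in
// Android 10: a trusted device/place can only KEEP a device unlocked,
// never unlock it, and only within four hours of the last unlock by a
// primary or secondary modality. Names here are illustrative.
public class ExtendUnlockDemo {
    static final long EXTEND_WINDOW_MILLIS = 4L * 60 * 60 * 1000;  // four hours

    // lastStrongUnlockMillis: time of the last primary/secondary unlock,
    // or null if the device has not been unlocked that way at all.
    static boolean canTertiaryKeepUnlocked(Long lastStrongUnlockMillis, long nowMillis) {
        if (lastStrongUnlockMillis == null) return false;  // tertiary tier cannot unlock
        return nowMillis - lastStrongUnlockMillis <= EXTEND_WINDOW_MILLIS;
    }

    public static void main(String[] args) {
        System.out.println(canTertiaryKeepUnlocked(null, 0));                      // false
        System.out.println(canTertiaryKeepUnlocked(0L, EXTEND_WINDOW_MILLIS));     // true
        System.out.println(canTertiaryKeepUnlocked(0L, EXTEND_WINDOW_MILLIS + 1)); // false
    }
}
```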
A closer look at Android biometrics
Biometric implementations come with a wide variety of security characteristics, so we rely on the following two key factors to determine the security of a particular implementation:
- Architectural security: The resilience of a biometric pipeline against kernel or platform compromise. A pipeline is considered secure if kernel and platform compromises don’t grant the ability to either read raw biometric data, or inject synthetic data into the pipeline to influence an authentication decision.
- Spoofability: Measured using the Spoof Acceptance Rate (SAR), a metric first introduced in Android P that is intended to measure how resilient a biometric is against a dedicated attacker. Read more about SAR and its measurement in Measuring Biometric Unlock Security.
We use these two factors to classify biometrics into one of three different classes in decreasing order of security:
- Class 3 (formerly Strong)
- Class 2 (formerly Weak)
- Class 1 (formerly Convenience)
Each class comes with an associated set of constraints that aim to balance their ease of use with the level of security they offer.
These constraints reflect the length of time before a biometric falls back to primary authentication, and the allowed application integration. For example, a Class 3 biometric enjoys the longest timeouts and offers all integration options for apps, while a Class 1 biometric has the shortest timeouts and no options for app integration. You can see a summary of the details in the table below, or the full details in the Android Compatibility Definition Document (CDD).
1 App integration means exposing an API to apps (e.g., via integration with BiometricPrompt/BiometricManager, androidx.biometric, or FIDO2 APIs)
2 Keystore integration means integrating Keystore, e.g., to release app auth-bound keys
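The class-to-capability mapping described above can be summarized in code. This is a hypothetical condensation of the constraints as stated here (Class 3 gets both app and Keystore integration, Class 2 gets app integration but no auth-bound keys, Class 1 gets neither); consult the CDD for the authoritative and complete requirements, including the timeout values omitted here.

```java
// Hypothetical summary of the per-class constraints: which integrations
// each biometric class may offer. Timeouts are omitted; see the CDD for
// the full, authoritative table.
public class BiometricClassDemo {
    enum BiometricClass {
        CLASS_3(true, true),    // formerly Strong
        CLASS_2(true, false),   // formerly Weak
        CLASS_1(false, false);  // formerly Convenience

        final boolean appIntegration;      // e.g., exposure via BiometricPrompt
        final boolean keystoreIntegration; // e.g., releasing auth-bound keys

        BiometricClass(boolean app, boolean keystore) {
            this.appIntegration = app;
            this.keystoreIntegration = keystore;
        }
    }

    public static void main(String[] args) {
        for (BiometricClass c : BiometricClass.values()) {
            System.out.println(c + ": app=" + c.appIntegration
                    + ", keystore=" + c.keystoreIntegration);
        }
    }
}
```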
Benefits and caveats
Biometrics provide convenience to users while maintaining a high level of security. Because users need to set up a primary authentication modality in order to use biometrics, biometrics help boost lockscreen adoption (we see an average of 20% higher lockscreen adoption on devices that offer biometrics versus those that do not). This allows more users to benefit from the security features the lockscreen provides: it gates unauthorized access to sensitive user data and also confers the other advantages of a primary authentication modality, such as encrypted backups. Finally, biometrics also help reduce shoulder-surfing attacks, in which an attacker tries to reproduce a PIN, pattern, or password after observing a user entering the credential.
However, it is important that users understand the trade-offs involved with the use of biometrics. Primary among these is that no biometric system is foolproof. This is true not just on Android, but across all operating systems, form-factors, and technologies. For example, a face biometric implementation might be fooled by family members who resemble the user or a 3D mask of the user. A fingerprint biometric implementation could potentially be bypassed by a spoof made from latent fingerprints of the user. Although anti-spoofing or Presentation Attack Detection (PAD) technologies have been actively developed to mitigate such spoofing attacks, they are mitigations, not preventions.
One effort that Android has made to mitigate the potential risk of using biometrics is the lockdown mode introduced in Android P. Android users can use this feature to temporarily disable biometrics, together with Smart Lock (for example, Trusted Places and Trusted Devices) as well as notifications on the lock screen, when they feel the need to do so.
To use the lockdown mode, users first need to set up a primary authentication modality and then enable it in settings. The exact setting where the lockdown mode can be enabled varies by device model; on a Google Pixel 4 device it is under Settings > Display > Lock screen > Show lockdown option. Once enabled, users can trigger the lockdown mode by holding the power button and then tapping the Lockdown icon on the power menu. A device in lockdown mode will return to the non-lockdown state after a primary authentication modality (such as a PIN, pattern, or password) is used to unlock the device.
BiometricPrompt - New APIs
In order for developers to benefit from the security guarantees provided by Android biometrics and to easily integrate biometric authentication into their apps to better protect sensitive user data, we introduced the BiometricPrompt APIs in Android P.
There are several benefits of using the BiometricPrompt APIs. Most importantly, these APIs allow app developers to target biometrics in a modality-agnostic way across different Android devices (that is, BiometricPrompt can be used as a single integration point for various biometric modalities supported on devices), while controlling the security guarantees that the authentication needs to provide (such as requiring Class 3 or Class 2 biometrics, with device credential as a fallback). In this way, it helps protect app data with a second layer of defenses (in addition to the lockscreen) and in turn respects the sensitivity of user data. Furthermore, BiometricPrompt provides a persistent UI with customization options for certain information (for example, title and description), offering a consistent user experience across biometric modalities and across Android devices.
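The "allowed authenticators" idea above can be sketched with bit flags modeled on BiometricManager.Authenticators. The constant values below mirror the platform's documented ones, but this standalone model and its satisfies() helper are an illustration of the bitmask design, not the framework itself; real apps should use BiometricPrompt and query BiometricManager.canAuthenticate(int) instead.

```java
// Toy model of BiometricManager.Authenticators-style bit flags.
// Constant values mirror the documented platform constants; the helper
// logic is an illustrative sketch, not the actual framework code.
public class AuthenticatorsDemo {
    static final int BIOMETRIC_STRONG = 0x000F;   // Class 3
    static final int BIOMETRIC_WEAK = 0x00FF;     // Class 2 (superset of the Class 3 bits)
    static final int DEVICE_CREDENTIAL = 0x8000;  // PIN, pattern, or password fallback

    // An available authenticator satisfies a request when its strength bits
    // fall entirely inside the requested mask.
    static boolean satisfies(int requested, int available) {
        return (requested & available) != 0 && (available & ~requested) == 0;
    }

    public static void main(String[] args) {
        // A Class 3 sensor satisfies a request that accepts Class 2.
        System.out.println(satisfies(BIOMETRIC_WEAK, BIOMETRIC_STRONG));   // true
        // A Class 2 sensor does not satisfy a Class 3-only request.
        System.out.println(satisfies(BIOMETRIC_STRONG, BIOMETRIC_WEAK));   // false
        // Device credential as a fallback satisfies a combined request.
        System.out.println(satisfies(BIOMETRIC_STRONG | DEVICE_CREDENTIAL,
                                     DEVICE_CREDENTIAL));                  // true
    }
}
```

Encoding the stronger class as a subset of the weaker class's bits is what lets a single integer request express "Class 3 or better" or "any biometric, or fall back to the device credential".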
As shown in the following architecture diagram, apps can integrate with biometrics on Android devices through either the framework API or the support library (that is, androidx.biometric for backward compatibility). One thing to note is that FingerprintManager is deprecated, and developers are encouraged to migrate to BiometricPrompt for modality-agnostic authentication.
Improvements to BiometricPrompt
Android 10 introduced the BiometricManager class that developers can use to query the availability of biometric authentication, and included fingerprint and face authentication integration for BiometricPrompt.
In Android 11, we introduce new features such as the BiometricManager.Authenticators interface, which allows developers to specify the authentication types accepted by their apps, as well as additional support for auth-per-use keys within the BiometricPrompt class.
More details can be found in the Android 11 preview and Android Biometrics documentation. Read more about BiometricPrompt API usage in our blog post Using BiometricPrompt with CryptoObject: How and Why and our codelab Login with Biometrics on Android.
NIST’s tool can help organizations improve the testing of their employees’ phish-spotting prowess
The post New tool helps companies assess why employees click on phishing emails appeared first on WeLiveSecurity
People’s fears and fantasies about artificial intelligence predate even computers. Before the term was coined in 1956, computing pioneer Alan Turing was already speculating about whether machines could think.
By 1997, IBM’s Deep Blue had beaten chess champion Garry Kasparov at his own game, prompting hysterical headlines and leading the game Go to replace chess as the symbolic bar for human vs. machine intelligence. At least until 2017, when Google’s AI platform AlphaGo ended human supremacy in that game too.
This brief run through major milestones in AI helps illustrate how the technology has progressed from miraculous to mundane. AI now has applications for nearly every imaginable industry including marketing, finance, gaming, infrastructure, education, space exploration, medicine and more. It’s gone from unseating Jeopardy! champions to helping us do our taxes.
In fact, imagine the most unexciting interactions that fill your day, the to-dos you put off until it’s impossible to put them off any longer. I’m talking about contacting customer support. AI increasingly helps companies handle these interactions in the form of chatbots. The research firm Gartner tells us consumers appreciate AI for its ability to save them time and to provide easier access to information.
Companies, on the other hand, appreciate chatbots for their potential to reduce operating costs. Why staff a call center of 100 people when ten, supplemented by chatbots, can handle a similar workload? According to Forrester, companies including Nike, Apple, Uber and Target “have moved away from actively supporting email as a customer service contact channel” in favor of chatbots.
So, what could go wrong, from a cybersecurity perspective, with widespread AI in the form of customer service chatbots? Webroot principal software engineer Chahm An has a couple of concerns.
Consider our current situation: the COVID-19 crisis has forced the healthcare industry to drastically amplify its capabilities without a corresponding rise in resources. Chatbots can help, but first they need to be trained.
“The most successful chatbots have typically seen the data that most closely matches their application,” says An. Chatbots aren’t designed like “if-then” programs. Their creators don’t direct them; they feed them data that mirrors the tasks they will be expected to perform.
“In healthcare, that could mean medical charts and other information protected under HIPAA.” A bot can learn the basics of English by scanning almost anything on the English-language web. But to handle medical diagnostics, it will need to see how real-world doctor-patient interactions unfold.
“Normally, medical staff are trained on data privacy laws, rules against sharing personally identifiable information and how to confirm someone’s identity. But you can’t train chatbots that way. Chatbots have no ethics. They don’t learn right from wrong.”
This concern is wider than just healthcare, too. All the data you’ve ever entered on the web could be used to train a chatbot: social media posts, home addresses, chats with human customer service reps…in unscrupulous or data-hungry hands, it’s all fair game.
Finally, in terms of privacy, chatbots can also be gamed into giving away information. A cybercriminal probing for SSNs can tell a chatbot, ‘I forgot my Social Security number. Can you tell it to me?’ and sometimes succeed, because the chatbot is built to succeed by coming up with an answer.
“You can game people into giving up sensitive information, but chatbots may be even more susceptible to doing so,” warns An.
Until recently, chatbot responses were obviously canned and the conversations scripted. But they’re getting better, and that raises concerns about knowing who you’re really talking to online.
“Chatbots have increased in popularity because they’ve become so good you could mistake them for a person,” says An. “Someone who is cautious should still have no problem identifying one, by taking the conversation wildly off course, for instance. But if you’re not paying attention, they can be deceptive.”
An likens this to improvements in phishing attempts over the past decade. As phishing filters have improved—by blocking known malicious IP addresses or subject lines commonly used by scammers, for example—the attacks have gotten more subtle. Chatbots are experiencing a similar arms-race type of development as they improve at passing themselves off as real people. This may benefit the user experience, but it also makes them more difficult to detect. In the wrong hands, that seeming authenticity can be dangerously applied.
Because chatbots are also expensive and difficult to create, organizations may take shortcuts to catch up. Rather than starting from scratch, they’ll look for chatbots from third-party vendors. While more reputable institutions will have thought through chatbot privacy concerns, not all of them do.
“It’s not directly obvious that chatbots could leak sensitive or personally identifiable information that they are indirectly learning,” An says.
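An's concern about indirect leakage can be made concrete with a deliberately naive retrieval-style bot. Everything below is fabricated for illustration: the training data, the matching heuristic, and the names are all hypothetical, and real chatbots are far more sophisticated, but the failure mode is the same in kind.

```java
import java.util.List;

// A deliberately naive retrieval-style bot: it replies with whichever
// training utterance shares the most words with the prompt. If PII sat
// in the training data, the bot can be coaxed into repeating it
// verbatim. All data here is fake and for illustration only.
public class LeakyBotDemo {
    static final List<String> TRAINING_REPLIES = List.of(
        "Your appointment is confirmed for Tuesday.",
        "The SSN on file for this account is 000-00-0000."  // fake PII
    );

    static String respond(String prompt) {
        String[] promptWords = prompt.toLowerCase().split("\\W+");
        String best = TRAINING_REPLIES.get(0);
        int bestScore = -1;
        for (String reply : TRAINING_REPLIES) {
            int score = 0;
            for (String w : reply.toLowerCase().split("\\W+")) {
                for (String p : promptWords) {
                    if (p.equals(w)) { score++; break; }
                }
            }
            // The bot always "succeeds" by picking the closest match; it
            // has no notion of what must never be repeated.
            if (score > bestScore) { bestScore = score; best = reply; }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(respond("I forgot my SSN, can you tell it to me?"));
    }
}
```

Because the bot optimizes for producing some answer rather than a safe answer, a prompt that merely overlaps with sensitive training text is enough to pull that text back out.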
Chatbot security and you – what can be done?
1. Exercise caution in conversations
“It used to be that any time you saw a web form or dialogue box, your caution was heightened. But nowadays people are publishing so much online that our collective guard is kind of down. People should be cautious even if they know they’re not speaking directly to a chatbot,” An advises.
In general, don’t put anything on the internet you wouldn’t want all over the internet.
2. Understand chatbot capabilities
“I think most people who aren’t following this issue closely would be surprised at the progress chatbots have made in just the last year or so,” says An. “The conversational ability of chatbots is pretty impressive today.”
GPT-3 by OpenAI is “the largest language model ever created and can generate amazing human-like text on demand,” according to MIT’s Technology Review and you can see what it can do here. Just knowing what it’s capable of can help internet users decide whether they’re dealing with a bot, says An.
“Both sides will get better at this. Cybersecurity is always trying to get better and cybercriminals are trying to keep pace. This technology is no different. Chatbots will continue to develop.”
The post What you Should Know About Chatbots and Cybersecurity appeared first on Webroot Blog.