Category Archives: Best Practices

Security Monitoring Saves the Day

For the second week of National Cyber Security Awareness Month, we would like to focus on a very important part of a good website security posture: monitoring.

How can security monitoring save your day?

Most people only start caring about their website security after something bad has already happened. But how can you tell when something is attempting to harm your website? Sometimes it is a very noticeable issue, such as:

  • website defacement – when the home page of the website is wiped out and something else appears in front of the visitor’s eyes;
  • unresponsive website – when the website pages respond too slowly or stop loading altogether;
  • SEO spam – when the website listing in search engines shows unrelated spam keywords, often pharma keywords; or
  • a website blacklist warning – when a red warning page shows all your visitors that the website they are about to go to is not secure.

Continue reading Security Monitoring Saves the Day at Sucuri Blog.

October Cybersecurity Month

Since 2003, October has been recognized as National Cybersecurity Awareness Month. It is an annual campaign to raise awareness about the importance of cybersecurity and being a better digital citizen.

October has just started, and a majority of security companies are promoting internet security. With the holidays fast approaching, it is a crucial time for website owners, especially those with e-commerce websites, to be cyber secure.

The end of the year is also the season when hackers try to profit the most.

Continue reading October Cybersecurity Month at Sucuri Blog.

PCI for SMB: Requirement 7 & 8 – Implement Strong Access Control Measures

This is the fifth post in a series of articles on understanding the Payment Card Industry Data Security Standard – PCI DSS. We are halfway there! In the previous articles about PCI, we covered the following:

  • Requirement 1: Build and Maintain a Secure Network – Install and maintain a firewall configuration to protect cardholder data.
  • Requirement 2: Build and Maintain a Secure Network – Do not use vendor-supplied defaults for system passwords or other security parameters.

Continue reading PCI for SMB: Requirement 7 & 8 – Implement Strong Access Control Measures at Sucuri Blog.

SSL vs. Website Security

Having a website today is way easier than it was 10 or 15 years ago. Tools like content management systems (CMS), website builders, static site generators, and the like remove a lot of the friction around building and maintaining sites. But is there a price for such convenience?

I would dare to say that one of the downsides of bringing such convenience to the masses is the creation of misconceptions. The biggest misconception is about what makes a website secure versus insecure.

Continue reading SSL vs. Website Security at Sucuri Blog.

E-Commerce Security – Planning for Disasters

This is the last post in our series on E-commerce Security:

  • Intro to Securing an Online Store – Part 1
  • Intro to Securing an Online Store – Part 2

Today, let’s expand on some of the suggestions made during a webinar I hosted recently about steps you can take to secure your online store.

So far in this series, we have touched on how to identify potential risks and how to defend against threats via WAF technologies.

Continue reading E-Commerce Security – Planning for Disasters at Sucuri Blog.

How to Improve Your Website Security Posture – Part II

In the first post of this series, we discussed some of the main website security threats. Knowing the website security environment is a vital part of a good website security posture. However, it is also important to be aware of what you can do to strengthen your website.

Today, we are going to give you some practical tips on how to improve your website security posture.

We highly recommend that, as a website owner, you apply the principle of least privilege. It is a computer science principle that can be applied at every level of a system, and its benefits strengthen your website security posture.

Continue reading How to Improve Your Website Security Posture – Part II at Sucuri Blog.

How to Improve Your Website Security Posture – Part I

Have you ever wondered whether your website security posture is adequate?

The risk of a website compromise is never going to be zero. However, as a webmaster, you can play an important role in minimizing the chances of a website hack. A good security posture entails understanding the importance of securing a website and knowing how to implement security measures.

Correcting a poor security posture means recognizing problems that you might not notice.

Continue reading How to Improve Your Website Security Posture – Part I at Sucuri Blog.

When "ASLR" Is Not Really ASLR – The Case of Incorrect Assumptions and Bad Defaults

As a vulnerability analyst at the CERT Coordination Center, I am interested not only in software vulnerabilities themselves, but also in exploits and exploit mitigations. Working in this field, it doesn't take too long to realize that there will never be an end to software vulnerabilities. That is to say, software defects are not going away. For this reason, software exploit mitigations are usually much more valuable than individual software fixes. Being able to mitigate entire classes of software vulnerabilities is a powerful capability. One of the reasons why we strongly promote mitigation tools like EMET or Windows Defender Exploit Guard, which is the replacement for EMET on the Windows 10 platform, is that exploit mitigation protections are not limited to the specific vulnerability du jour.

While looking at a recent exploit for VLC on Windows, I noticed some unexpected behaviors. In this blog post, I will describe how my journey led me to the discovery of several flaws that put users of many applications at unnecessary risk. VLC isn't the only victim here.

Memory Corruption Exploit Mitigations

When discussing memory corruption exploit mitigations, two rudimentary building blocks are Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR). Both techniques must be used together to form the basis of an effective exploit mitigation strategy.

DEP

DEP, also referred to as NX or W^X on other platforms, ensures that a CPU executes only instructions that have been explicitly marked as executable. In other words, a system should execute only code; it should refuse to execute bytes that are provided in the content of a document, for example.

ASLR

ASLR ensures memory locations associated with a process are not predictable. This capability is important to help prevent the use of Return Oriented Programming (ROP) by an attacker.

Confirming the Presence of DEP and ASLR

In the past, I used CERT BFF to find some exploitable crashes in the Oracle Outside In library. I investigated several applications that use Outside In to demonstrate proof-of-concept (PoC) exploits for the vulnerabilities I discovered. I settled on two forensics investigation applications: Guidance EnCase and AccessData FTK. Why did I choose these applications? On one hand, the concept of exploiting vulnerabilities in forensics software seemed interesting to me, but more importantly, both applications had failed their due diligence with respect to exploit mitigations.

The simplest way to check for the presence of DEP and ASLR is by using Microsoft Sysinternals Process Explorer. You can configure Process Explorer to display columns for DEP and ASLR. When a process is running, you can check these columns for the status of DEP and ASLR for the process. First let's check EnCase running on Windows XP. This was 2013, so Windows XP was still supported and prevalent.

encase_procexp_xp.png

EnCase didn't even enable DEP on Windows XP. You couldn't ask for an easier target to exploit!

On Windows 7, the situation is a little bit better, as DEP is enabled. However, Process Explorer reports that ASLR is not enabled for the EnCase.exe process, as shown in this figure:

encase_procexp_sm.png

Creating a PoC for Windows 7 required a little more work than the PoC for Windows XP, but it was still possible to achieve. As noted by Microsoft, neither DEP by itself nor ASLR by itself provides viable protection against exploitation. Both must be present together to provide any meaningful protection against the exploitation of memory corruption vulnerabilities.

The VLC Exploit

Earlier this month, a PoC exploit for VLC 2.2.8 was posted to Exploit Database. This exploit was tagged as "E-DB Verified," which indicates that the exploit was confirmed to be functional. I tested out the exploit in a fully-patched 64-bit Windows 10 virtual machine that was connected to a CERT Tapioca Wildcard VM for network connectivity (since I don't fully trust what the exploit claims to do). Sure enough, opening the MKV file resulted in calc.exe popping, as shown in this figure:

vlc-calc.png

Let's take a look to see if VideoLAN was negligent in opting in to exploit mitigations:

procexp_vlc.png

Something is quite strange here. The binary vlc.exe opts in to both DEP and ASLR, and all of its dependencies are ASLR-enabled as well. This means that there should be no predictable code locations for an attacker to leverage in a ROP exploit. But notice that the base address for vlc.exe is 0x00400000, which is a special address in the world of Microsoft Windows. This address is an indicator that a process has not been loaded at an address randomized by ASLR. Why is Process Explorer reporting that VLC is using ASLR when the process is loading at a non-randomized address?

Looking at the exploit for VLC 2.2.8, we can see that it uses a ROP chain with fixed offsets that would live within the space of a process loaded at 0x00400000:

rop_gadgets = {
        '64': {
            '2.2.8': [
                0x004037ac,             # XCHG EAX,ESP # ROL BL,90H # CMP WORD PTR [RCX],5A4DH # JE VLC+0X37C0 # XOR EAX,EAX # RET
                0x00403b60,             # POP RCX # RET
                target_address,         # lpAddress
                0x004011c2,             # POP RDX # RET
                0x00001000,             # dwSize
                0x0040ab70,             # JMP VirtualProtect
                target_address + 0x500, # Shellcode
            ],
        },
}

PE File Header Investigation

Using Process Explorer to check an EXE for ASLR compatibility is a convenient shortcut. But clearly something is wrong in the case of VLC. By using the Microsoft Visual Studio tool dumpbin, we can check the PE headers of the vlc.exe binary. This is where it gets interesting, as shown in this figure:

dynamicbase_reloc_pic.png

Here we have an executable file that has two contradicting properties:

  • Dynamic base - This property indicates that the binary was linked with the /DYNAMICBASE flag, which opts the binary in to ASLR randomization by the OS.
  • Relocations stripped - This property indicates that the binary has had its relocation table removed. A traditional Windows executable with no relocation table cannot be randomized by Windows ASLR. Even if "mandatory ASLR" is enforced via EMET or Windows Defender Exploit Guard, these executables cannot be randomized by Windows.
    (Note: The exceptions to this rule are .NET executables. If executed on a Windows 8 or newer platform, a .NET executable with a stripped relocation table will still be relocated. On the Windows 7 platform, these .NET executables do not receive any benefit from ASLR.)

These two attributes of an executable file on Windows are at odds with each other. If a tool is simply looking for the presence of "Dynamic base" to determine whether an executable is using ASLR, this check alone is not sufficient. Microsoft Sysinternals Process Explorer versions 16.12 and earlier make this mistake. The impact of this flaw is that an analyst may be given a false sense of security about the exploit mitigations present in a product.
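
To illustrate the kind of check a tool needs to perform, here is a minimal sketch that flags PE files claiming "Dynamic base" while having their relocations stripped. This is an independent illustration, not the checkaslr.py script mentioned later in this post, and it assumes the third-party pefile module is available.

# Minimal sketch: flag PE files that claim "Dynamic base" but have had
# their relocation table stripped. Assumes the third-party "pefile"
# module is installed; this is not the author's checkaslr.py script.
import sys
import pefile

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # linked with /DYNAMICBASE
IMAGE_FILE_RELOCS_STRIPPED = 0x0001             # "Relocations stripped"

def check(path):
    pe = pefile.PE(path, fast_load=True)
    dynamic_base = bool(pe.OPTIONAL_HEADER.DllCharacteristics &
                        IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
    relocs_stripped = bool(pe.FILE_HEADER.Characteristics &
                           IMAGE_FILE_RELOCS_STRIPPED)
    if dynamic_base and relocs_stripped:
        print("%s: claims ASLR but cannot be relocated (image base 0x%08X)" %
              (path, pe.OPTIONAL_HEADER.ImageBase))

if __name__ == "__main__":
    for filename in sys.argv[1:]:
        check(filename)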

mingw-w64 and its Bad Defaults

How did we end up with a VLC executable that contains two different properties that contradict each other? To find the answer, look at the build chain that was used to compile the code: mingw-w64.

The build chain mingw-w64 allows the GCC compiler and its related tools to create executable code that functions on Microsoft Windows. Because GCC runs on multiple platforms, I can use mingw-w64 on a Linux system to create a Windows EXE file, for example.

The problem is that mingw-w64 doesn't produce a relocation table when linking executables with the "Dynamic base" option.

  • This limitation has been publicly known since 2013. This conversation thread ended in June 2013 with the comment: "Just a gentle reminder - any chance someone has been able to look at this?"
  • In 2014, a developer with the Tor Project suggested adding an optional flag to generate the relocation section required for proper ASLR functionality. The last activity on this ticket was from September 17, 2014, where the binutils developer requested some additional modifications to be made before the changes would be accepted.
  • In 2015, a ticket was filed for binutils that outlines several bad default settings for mingw-w64. One of the suggestions in this ticket is, "Never strip the reloc section."

A Workaround to Get Proper ASLR Compatibility with mingw-w64

Until binutils is updated to fix the generation of Windows code with mingw-w64, developers must take specific actions to produce executable files that are compatible with ASLR. While testing a simple "Hello world" application produced with mingw-w64, I found that adding the following line before the main function resulted in an executable file that retained its relocation table:

__declspec(dllexport)

Apparently, exporting a function in an executable file causes mingw-w64 to link the executable in a way that does not strip the relocation table. With the relocation table present, Windows can properly randomize the load address of the executable using ASLR.

Conclusion and Recommendations

VideoLAN was a victim of using a toolchain (mingw-w64) that has insecure defaults. Because the VLC executable could not be randomized with ASLR, a fully functional and reliable exploit could be written for one of its vulnerabilities. Had mingw-w64 used secure default settings, which in this case means not stripping the relocation table of an executable that claims to be ASLR compatible, the public exploit that uses fixed ROP offsets would not be viable.

This situation is made worse by the fact that several tools that check for ASLR compatibility assume that the presence of the "Dynamic base" PE header is sufficient for ASLR compatibility. Because Process Explorer does not check that a relocation table is present, its indication of "ASLR" for a running process may be incorrect, and it may provide a false sense of security.

Note that a missing relocation table is not the only way that executable code may fail to be compatible with ASLR despite providing the "Dynamic base" PE header. Some executable content protected by WIBU-Systems tools appears to be ASLR compatible, based on both the presence of the "Dynamic base" header and the presence of a relocation table. However, such code may not be compatible with ASLR in practice. For example, WIBU IxProtector encrypts functions, and as a result the code produced may not be properly formed as far as Windows is concerned. Windows software defenses may not behave as designed in cases like this.

Recommendation for Security Researchers:

  • Unless you are certain that your security tool is checking for the presence of both the "Dynamic base" PE header and the presence of a relocation table, do not rely on that tool's indication of whether an executable is using ASLR or not. The PE headers, which are viewable via dumpbin and other tools, are the definitive source of this information.
  • Some malformed executable code may appear to be ASLR compatible at a glance, but in practice is not compatible with ASLR. For this reason, it is important to verify the addresses at which executable code (both EXE and DLL files) actually gets loaded. With properly functioning ASLR, code will be loaded at an address other than the "image base" that is statically specified in the PE header.

Recommendation for Developers:

  • Ensure that your executables contain both the "Dynamic base" PE header, and also contain a relocation table. Without a relocation table, vulnerabilities in any executable that you produce will be easier to exploit due to ASLR incompatibility. If you are using the mingw-w64 toolchain, the __declspec(dllexport) workaround outlined above appears to produce ASLR-compatible executables.
  • If you are using software obfuscation tools, such as WIBU IxProtector, be sure to verify that the code being produced is compatible with Windows ASLR. This can be verified by confirming that the executable code is loaded at an address other than the image base that is specified within the binary itself. Code without ASLR protection is more vulnerable to exploitation by attackers.

Whether you are a security researcher, a developer, or anybody else interested in the security of executables on your system, you can check for PE files that contain the problematic combination of "Dynamic base" and a stripped relocation table using the following Python script that I have created (checkaslr.py):

checkaslr2.png

This tool lists problematic code files, along with the image base that is specified in each file.

Additional information about the mingw-w64 issue is available in vulnerability note VU#307144.

Automatically Stealing Password Hashes with Microsoft Outlook and OLE

Back in 2016, a coworker of mine was using CERT BFF, and he asked how he could turn a seemingly exploitable crash in Microsoft Office into a proof-of-concept exploit that runs calc.exe. Given Address Space Layout Randomization (ASLR) on modern Windows platforms, this isn't as easy as it used to be. One strategy to bypass ASLR that is possible in some cases is to leverage a memory leak to disclose memory addresses. Another strategy that is sometimes possible is to brute-force attack the vulnerability to make ASLR irrelevant. In this case, however, the path of least resistance was to simply use Rich Text Format (RTF) content to leverage Object Linking and Embedding (OLE) to load a library that doesn't opt in to using ASLR.

As I started pulling the thread of RTF and OLE, I uncovered a weakness that is much more severe than an ASLR bypass. Continue reading to follow my path of investigation, which leads to crashed Windows systems and stolen passwords.

Before getting into the details of my analysis, let's cover some of the basics involved:

OLE

OLE is a technology that Microsoft released in 1990 that allows content from one program to be embedded into a document handled by another program. For example, in Windows 3.x Microsoft Write provides the ability to embed a "Paintbrush Picture" object, as well as a "Sound" or a "Package." These are the three available OLE objects that can be inserted into a Write document:

insert_paint_object.png

Once inserted, we now have a Write document that has embedded Paintbrush content. Neat!

paint_OLE_in_write.png

Server Message Block (SMB)

SMB is a protocol that extends the DOS API (Interrupt 21h) for local file access to include network capabilities. That is, a file on a remote server can be accessed in much the same way that a file on a local drive can be accessed. Microsoft included SMB capabilities in Windows for Workgroups 3.1, which was released in 1992.

Microsoft Outlook

Microsoft Outlook is an email client that comes with Microsoft Office. Outlook includes the ability to send rich text (RTF) email messages. These messages can include OLE objects in them.

outlook_insert_object.png

When viewed with the Microsoft Outlook client, these rich text email messages are displayed in all of their rich-text glory.

Putting the Pieces Together

You may already have an idea of where I am going. If it's not clear, let's summarize what we know so far:

  1. Microsoft Outlook can create and render RTF email messages.
  2. RTF documents (including email messages) can include OLE objects.
  3. Due to SMB, OLE objects can live on remote servers.

Observing Microsoft Outlook Behavior

HTML email messages on the Internet are much more common than rich text email, so let's first look at the behavior of Microsoft Outlook when viewing an HTML message that has a remote image on a web server:

html_message_annotated.png

Here we can see that the remote image is not loaded automatically. Why is this the case? The reason is that if Outlook allowed remote images to load automatically, it could leak the client system's IP address and other metadata, such as the time that an email is viewed. This restriction helps to protect against a web bug being used in email messages.

Now let's try the same sort of message, except in rich text format. And rather than a remote image file, it's an OLE document that is loaded from a remote SMB server:

rtf_message_annotated.png

Well this is unexpected. Outlook blocks remote web content due to the privacy risk of web bugs. But with a rich text email, the OLE object is loaded with no user interaction. Let's look at the traffic in Wireshark to see what exactly is being leaked as the result of this automatic remote object loading:

wireshark_annotated.png

Here we can see that an SMB connection is being automatically negotiated. The only action that triggers this negotiation is Outlook previewing an email that is sent to it. In the screenshot above, I can see that the following things are being leaked:

  1. IP address
  2. Domain name
  3. User name
  4. Host name
  5. SMB session key

A remote OLE object in a rich text email message functions like a web bug on steroids! At this point in my analysis, in late 2016, I notified Microsoft about the issue.

Impacts of an OLE Web Bug

This bug results in two main problems, described below.

Crashing the Client

We know at this point that we can trigger Outlook to initiate an SMB connection to an arbitrary host. On February 1, 2017, a Windows SMB client vulnerability (VU#867968) was disclosed. Upon connecting to a malicious SMB server, Windows would crash. What if we created a rich text email in Outlook, but pointed the OLE object at an SMB server that exploits this vulnerability?

smbcrash.gif

Once Outlook previews such an email message, Windows will crash with a Blue Screen of Death (BSOD) such as the above. And beyond that, every time Outlook is launched after encountering such a scenario, Windows will BSOD crash again because Outlook remembers the last email message that was open. This is quite a denial of service attack. At this point I shared the attack details with Microsoft. Eventually Microsoft fixed this SMB vulnerability, and luckily we did not hear about any mass email-based exploitation of it.

Collecting Password Hashes

SMB vulnerabilities aside, I decided to dig deeper into the risks of a client attempting to initiate an SMB connection to an attacker's server. Based on what I saw in Wireshark, I already knew that it leaked much more than just the IP address of the victim. This time I used both Responder and John the Ripper.

First I sent an RTF email that has a remote OLE object that points to a system running Responder. On the Responder system I saw the following as soon as the email was previewed in Outlook:

[SMB] NTLMv2-SSP Client   : 192.168.153.136
[SMB] NTLMv2-SSP Username : DESKTOP-V26GAHF\test_user
[SMB] NTLMv2-SSP Hash : test_user::DESKTOP-V26GAHF:1122334455667788:571EE693342B161C50A73D502EB49B5A:010100000000000046E1992B4BB2D301FFADACA3241B6E090000000002000A0053004D0042003100320001000A0053004D0042003100320004000A0053004D0042003100320003000A0053004D0042003100320005000A0053004D004200310032000800300030000000000000000100000000200000D3BDB30B62A8937256327776471E072C7C6DE9F4F98458D1FEA17CBBB6AFBA770A001000000000000000000000000000000000000900280063006900660073002F003100390032002E003100360038002E003100350033002E003100330038000000000000000000

Here we have an NTLMv2 hash that we can hand off to John the Ripper. As shown below, I copied and pasted the hash into a file called test_user.john:

john test_user.john
Using default input encoding: UTF-8
Loaded 1 password hash (netntlmv2, NTLMv2 C/R [MD4 HMAC-MD5 32/64])
Will run 24 OpenMP threads
Press 'q' or Ctrl-C to abort, almost any other key for status
test (test_user)
Session completed

In approximately 1 second, I was able to determine that the password for the user "test_user" who opened my RTF email was test. The hash for a stronger password (longer and more types of characters) will take longer to crack. I've performed some basic testing on how long it takes to bruteforce the entire solution space of an 8-character password on a single mid-range GPU (NVIDIA GTX 960):

  • Lowercase letters - 16 minutes
  • Mixed-case letters - 3 days
  • Mixed-case letters and digits - 12 days
  • Mixed-case letters, digits, and symbols - 1 year

The statistics above are the worst-case scenarios for bruteforce cracking randomly-generated passwords. Any passwords that are words (like "test") or patterns (like "asdf") are much easier to crack than randomly-generated passwords, since most cracking tools have rules to check for such things.

Also, an attacker may have access to systems with multiple high-end GPUs that can cut their times into fractions of the above numbers. Each character that is added to the password length has an exponential effect on the time it takes to bruteforce the password, though. For example, while my mid-range GPU takes 1 year to exhaust the entire solution space of an 8-character password (with mixed case letters, digits, and symbols), increasing the password length to 9 characters also increases the time it takes to exhaust the solution space to 84 years!
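
As a rough illustration of where these numbers come from, the following sketch computes the keyspace for each character set and the worst-case cracking time at an assumed guess rate. The rate below is a placeholder chosen to roughly reproduce the 8-character figures above; actual NetNTLMv2 cracking speed depends on the GPU and tooling used.

# Back-of-the-envelope keyspace math for brute-forcing a password.
# The guess rate is an assumed placeholder, not a measured benchmark
# of the GTX 960 mentioned above.
GUESSES_PER_SECOND = 200e6  # assumed NetNTLMv2 rate for one mid-range GPU

charsets = {
    "Lowercase letters": 26,
    "Mixed-case letters": 52,
    "Mixed-case letters and digits": 62,
    "Mixed-case letters, digits, and symbols": 95,
}

for name, size in charsets.items():
    for length in (8, 9):
        keyspace = size ** length
        days = keyspace / GUESSES_PER_SECOND / 86400
        print("%-40s length %d: %.1e guesses, ~%.1f days worst case" %
              (name, length, keyspace, days))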

Microsoft's Fix

Microsoft released a fix for the issue of Outlook automatically loading remote OLE content (CVE-2018-0950). Once this fix is installed, previewed email messages will no longer automatically connect to remote SMB servers. This fix helps to prevent the attacks outlined above. It is important to realize that even with this patch, a user is still a single click away from falling victim to the types of attacks described above. For example, if an email message has a UNC-style link that begins with "\\", clicking the link initiates an SMB connection to the specified server.

outlook_smb_link.png

Additional details are available in CERT Vulnerability note VU#974272.

Conclusion and Recommendations

On the Windows platform, there are several ways to cause the client to initiate an SMB connection. Any time an SMB connection is initiated, the client's IP address, domain name, user name, host name, and password hash may be leaked. To help protect against attacks that involve causing a victim's machine to initiate an SMB connection, please consider the following mitigations:

  • Install Microsoft update CVE-2018-0950. This update prevents automatic retrieval of remote OLE objects in Microsoft Outlook when rich text email messages are previewed. If a user clicks on an SMB link, however, this behavior will still cause a password hash to be leaked.
  • Block inbound and outbound SMB connections at your network border. This can be accomplished by blocking ports 445/tcp, 137/tcp, 139/tcp, as well as 137/udp and 139/udp.
  • Block NTLM Single Sign-on (SSO) authentication, as specified in Microsoft Security Advisory ADV170014. Starting with Windows 10 and Server 2016, if the EnterpriseAccountSSO registry value is created and set to 0, SSO authentication will be disabled for external and unspecified network resources. With this registry change, accessing SMB resources is still allowed, but external and unspecified SMB resources will require the user to enter credentials as opposed to automatically attempting to use the hash of the currently logged-on user.
  • Assume that at some point your client system will attempt to make an SMB connection to an attacker's server. For this reason, make sure that any Windows login has a sufficiently complex password so that it is resistant to cracking. The following two strategies can help achieve this goal:
    1. Use a password manager to help generate complex random passwords. This strategy can help ensure the use of unique passwords across resources that you use, and it can ensure that the passwords are of a sufficient complexity and randomness.
    2. Use longer passphrases (with mixed-case letters, numbers and symbols) instead of passwords. This strategy can produce memorable credentials that do not require additional software to store and retrieve.

The CERT Guide to Coordinated Vulnerability Disclosure

We are happy to announce the release of the CERT® Guide to Coordinated Vulnerability Disclosure (CVD). The guide provides an introduction to the key concepts, principles, and roles necessary to establish a successful CVD process. It also provides insights into how CVD can go awry and how to respond when it does so.

As a process, CVD is intended to minimize adversary advantage while an information security vulnerability is being mitigated. And it is important to recognize that CVD is a process, not an event. Releasing a patch or publishing a document are important events within the process, but do not define it.

CVD participants can be thought of as repeatedly asking these questions: What actions should I take in response to knowledge of this vulnerability in this product? Who else needs to know what, and when do they need to know it? The CVD process for a vulnerability ends when the answers to these questions are nothing, and no one.

If we have learned anything in nearly three decades of coordinating vulnerability reports at the CERT/CC, it is that there is no single right answer to many of the questions and controversies surrounding the disclosure of information about software and system vulnerabilities. The CERT Guide to CVD is a summary of what we know about a complex social process that surrounds humans trying to make the software and systems they use more secure. It's about what to do (and what not to) when you find a vulnerability, or when you find out about a vulnerability. It's written for vulnerability analysts, security researchers, developers, and deployers; it's for both technical staff and their management alike. While we discuss a variety of roles that play a part in the process, we intentionally chose not to focus on any one role; instead we wrote for any party that might find itself engaged in coordinating a vulnerability disclosure.

In a sense, this report is a travel guide for what might seem a foreign territory. Maybe you've passed through once or twice. Maybe you've only heard about the bad parts. You may be uncertain of what to do next, nervous about making a mistake, or even fearful of what might befall you. If you count yourself as one of those individuals, we want to reassure you that you are not alone; you are not the first to experience events like these or even your reaction to them. We're locals. We've been doing this for a while. Here's what we know.

Abstract

Security vulnerabilities remain a problem for vendors and deployers of software-based systems alike. Vendors play a key role by providing fixes for vulnerabilities, but they have no monopoly on the ability to discover vulnerabilities in their products and services. Knowledge of those vulnerabilities can increase adversarial advantage if deployers are left without recourse to remediate the risks they pose. Coordinated Vulnerability Disclosure (CVD) is the process of gathering information from vulnerability finders, coordinating the sharing of that information between relevant stakeholders, and disclosing the existence of software vulnerabilities and their mitigations to various stakeholders including the public. The CERT Coordination Center has been coordinating the disclosure of software vulnerabilities since its inception in 1988. This document is intended to serve as a guide to those who want to initiate, develop, or improve their own CVD capability. In it, the reader will find an overview of key principles underlying the CVD process, a survey of CVD stakeholders and their roles, and a description of CVD process phases, as well as advice concerning operational considerations and problems that may arise in the provision of CVD and related services.

The CERT® Guide to Coordinated Vulnerability Disclosure is available in the SEI Digital Library.

The Consequences of Insecure Software Updates

In this blog post, I discuss the impact of insecure software updates as well as several related topics, including mistakes made by software vendors in their update mechanisms, how to verify the security of a software update, and how vendors can implement secure software updating mechanisms.

Earlier in June of this year, I published two different vulnerability notes about applications that update themselves insecurely:

  • VU#846320 - Samsung Magician fails to update itself securely
  • VU#489392 - Acronis True Image fails to update itself securely

In both of these cases, the software apparently assumes that it is operating on a trusted network and that no attacker would attempt to tamper with the process of checking for or downloading updates. This is not a valid assumption to make.

Consider the situation where you think you're on your favorite coffee shop's WiFi network. If you have a single application with an insecure update mechanism, you are at risk of becoming a victim of what's known as an evilgrade attack. That is, you may inadvertently install code from an attacker during the software update process. Given that software updaters usually run with administrative privileges, the attacker's code will subsequently run with administrative privileges as well. In some cases, this can happen with no user interaction. This risk has been publicly known since 2010.

The dangers of an insecure update process are much more severe than an individual user being compromised while on an untrusted network. As the recent ExPetr/NotPetya/Petya wiper malware has shown, the impact of insecure updates can be catastrophic. Let me expand on this topic by discussing several important aspects of software updates:

  1. What mistakes can software vendors make when designing a software update mechanism?
  2. How can I verify that my software performs software updates securely?
  3. How should vendors implement a secure software update mechanism?

Insecure Software Updates and Petya

Microsoft has reported that the Petya malware originated from the update mechanism for the Ukrainian tax software called MEDoc. Because of the language barrier, I was not able to thoroughly exercise the software. However, with the help of CERT Tapioca, I was able to see the network traffic that was being used for software updates. This software makes two critical mistakes in its update mechanism: (1) it uses the insecure HTTP protocol, and (2) the updates are not digitally signed.

These are the same two mistakes that the vulnerable Samsung Magician software (VU#846320) and the vulnerable Acronis True Image software (VU#489392) contain. Let's dig into the two main flaws in software updaters:

Use of an Insecure Transport Layer

HTTP is used in a number of software update mechanisms, as opposed to the more-secure HTTPS protocol. What makes HTTPS more secure? Properly-configured HTTPS communications attempt to achieve three goals:

  1. Confidentiality - Traffic is protected against eavesdropping.
  2. Integrity - Traffic content is protected against modification.
  3. Authenticity - The client verifies the identity of the server being contacted.

Let's compare an update mechanism that uses properly configured HTTPS with one that uses either HTTP or HTTPS without certificate validation:

Properly-configured HTTPS:
  • Confidentiality – software update request content is not visible to attackers.
  • Integrity – software update requests and responses cannot be modified by an attacker.
  • Authenticity – software update traffic cannot be redirected by an attacker.

HTTP, or HTTPS without certificate validation:
  • Confidentiality – an attacker can view the contents of software update requests.
  • Integrity – an attacker can modify update requests or responses to change the update behavior or outcome.
  • Authenticity – an attacker can intercept and redirect software update requests to a malicious server.

Based on its inability to address these three goals, it is clear that HTTP is not a viable solution for software update traffic.

Lack of Digital Signatures for Updates

If software updates are not digitally signed, or if the software update mechanism does not validate signatures, an attacker can replace a software update with malware. A simple test that I like to perform when testing software updaters is to check whether the software will download and install calc.exe from Windows instead of the expected update. If calc.exe pops up when the update occurs, we have evidence of a vulnerable update mechanism!

Verifying Software Updaters

CERT Tapioca can be used to easily observe if an application uses HTTP or non-validated HTTPS traffic for updates. Any update traffic that appears in the Tapioca mitmproxy window is an indication that the update uses an insecure transport layer. As long as the mitmproxy root CA certificate is not installed on the client system, the only traffic that will appear in the mitmproxy window will be HTTP traffic and HTTPS traffic that is not checked for a trusted root CA, both of which are insecure.

Determining whether an application validates the digital signatures of updates requires a little more work. Essentially, you need to intercept the update and redirect it to an update under your control that is either unsigned or signed by another vendor.
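
Since CERT Tapioca is built on mitmproxy, one way to perform that interception is with a small mitmproxy addon script. The following sketch illustrates the idea under the assumption that the updater fetches its payload over plain HTTP or non-validated HTTPS; the URL substring and the replacement file are placeholders, not details of any particular product.

# replace_update.py - illustrative mitmproxy addon that swaps an
# intercepted update download with a benign test file of our choosing.
# The "update" URL match and the replacement filename are placeholders.
from mitmproxy import http

REPLACEMENT_FILE = "benign_test_payload.exe"  # e.g., a copy of calc.exe

def response(flow: http.HTTPFlow) -> None:
    if "update" in flow.request.pretty_url.lower():
        with open(REPLACEMENT_FILE, "rb") as f:
            flow.response.content = f.read()
        flow.response.headers["content-type"] = "application/octet-stream"

Running this addon with mitmdump -s replace_update.py on the intercepting system and then triggering the update is one way to test: if the application installs the replacement file, it is not validating update signatures.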

A "Belt and Suspenders" Approach to Updates

If an update mechanism uses HTTPS, it should be communicating only with a legitimate update server, right? And shouldn't that be enough to ensure a secure update? Well, not exactly.

First, HTTPS is not without flaws. There have been a number of flaws in various protocols and ciphersuites supported by HTTPS, including FREAK, DROWN, BEAST, CRIME, BREACH, and POODLE. These known flaws, which are addressed by HTTPS communications that use modern TLS protocols and configurations, can weaken the confidentiality, integrity, and authenticity goals that HTTPS aims to provide. It is also important to realize that even without such flaws, HTTPS without pinning can ensure website authenticity only to the level that the PKI and certificate authority architecture allow. See Moxie Marlinspike's post SSL and the Future of Authenticity for more details.

HTTPS flaws and other weaknesses aside, using HTTPS without signature-validated updates leaves a large open hole that can be attacked. What happens if an attacker can compromise an update server? If software update signatures are not validated, a compromise of a single server can result in malicious software being deployed to all clients. There is evidence that this is exactly what happened in the case of MEDoc.

virus_attack.png (Source: archive.org / Google Translate)

Using HTTPS can help to ensure both the transport layer security as well as the identity of the update server. Validating digital signatures of the updates themselves can help limit the damage even if the update server is compromised. Both of these aspects are vital to the operation of a secure software update mechanism.
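
As an illustration of what a belt-and-suspenders update check might look like on the client side, here is a minimal sketch using the third-party requests and cryptography packages. The URLs, filenames, and the idea of publishing a detached signature alongside the update are assumptions for the example, not a description of any particular vendor's updater.

# Minimal sketch of a "belt and suspenders" update check:
# 1) fetch the update and a detached signature over validated HTTPS,
# 2) verify the signature with a public key pinned inside the client.
# All URLs and filenames are illustrative placeholders.
import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

UPDATE_URL = "https://updates.example.com/myapp-2.0.exe"
SIGNATURE_URL = UPDATE_URL + ".sig"

# Public key shipped with the installed application, never downloaded.
with open("vendor_update_key.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

# requests validates the server certificate chain by default (verify=True).
update = requests.get(UPDATE_URL, timeout=30)
signature = requests.get(SIGNATURE_URL, timeout=30)
update.raise_for_status()
signature.raise_for_status()

# Raises InvalidSignature if the update was tampered with or replaced.
public_key.verify(
    signature.content,
    update.content,
    padding.PKCS1v15(),
    hashes.SHA256(),
)

with open("myapp-2.0.exe", "wb") as f:
    f.write(update.content)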

Conclusion and Recommendations

Despite evilgrade being a publicly-known attack for about seven years now, it is still a problem. Because of the nature of how software is traditionally installed on Windows and Mac systems, we live in a world where every application may end up implementing its own auto-update mechanism independently. And as we are seeing, software vendors are frequently making mistakes in these updaters. Linux and other operating systems that use package-management systems appear to be less affected by this problem, due to the centralized channel through which software is installed.

Recommendations for Users

So what can end users do to protect themselves? Here are my recommendations:

  • Especially when on untrusted networks, be wary of automatic updates. When possible, retrieve updates using your web browser from the vendor's HTTPS website. Popular applications from major vendors are less likely to contain vulnerable update mechanisms, but any software update mechanism has the possibility of being susceptible to attack.
  • For any application you use that contains an update mechanism, verify that the update is at least not vulnerable to a MITM attack by performing the update using CERT Tapioca as an internet source. Virtual machines make this testing easier.

Recommendations for Developers

What can software developers do to provide software updates that are secure? Here are my recommendations:

  • If your software has an update mechanism, ensure that it both:
    • Uses properly validated HTTPS for a communications channel instead of HTTP
    • Validates digital signatures of retrieved updates before installing them
  • To protect against an update server compromise affecting your users, ensure that the code signing key is not present on the update server itself. To increase protection against malicious updates, ensure the code signing key is offline, or otherwise unavailable to an attacker.

The Twisty Maze of Getting Microsoft Office Updates

While investigating the fixes for the recent Microsoft Office OLE vulnerability, I encountered a situation that led me to believe that Office 2016 was not properly patched. However, after further investigation, I realized that the update process of Microsoft Update has changed. If you are not aware of these changes, you may end up with a Microsoft Office installation that is missing security updates. With the goal of preventing others from making the same mistakes that I did, I outline in this blog post how the way Microsoft Office receives updates has changed.

The Bad Old Days

Let's go back about 15 years in Windows computing to the year 2002. You've got a shiny new desktop with Windows XP and Office XP as well. If you knew where the option was in Windows, you could turn on Automatic Updates to download OS updates and notify you when they were available. What happens when there is a security update for Office? If you happened to know about the OfficeUpdate website, you could run an ActiveX control to check for Microsoft Office updates. Notice that the Auto Update link is HTTP instead of HTTPS. These were indeed dark times. But we had Clippy to help us through them!

officexp_clippy.png

Microsoft Update: A New Hope

Let's fast-forward to the year 2005. We now have Windows XP Service Pack 2, which enables a firewall by default. Windows XP SP2 also encourages you to enable Automatic Updates for the OS. But what about our friend Microsoft Office? As it turns out, an enhanced version of Windows Update, called Microsoft Update, was also released in 2005. The new Microsoft Update, instead of checking for updates for only the OS itself, now also checks for updates for other Microsoft software, such as Microsoft Office. If you enabled this optional feature, then updates for Microsoft Windows and Microsoft Office would be installed.

Microsoft Update in Modern Windows Systems

Enough about Windows XP, right? How does Microsoft Update factor into modern, supported Windows platforms? Microsoft Update is still supported through current Windows 10 platforms. But in each of these versions of Windows, Microsoft Update continues to be an optional component, as illustrated in the following screen shots for Windows 7, 8.1, and 10.

Windows 7

win7_windows_update.png

win7_microsoft_update.png

Once this dialog is accepted, we can now see that Microsoft Update has been installed. We will now receive updates for Microsoft Office through the usual update mechanisms for Windows.

win7_microsoft_update_installed.png

Windows 8.1

Windows 8.1 has Microsoft Update built-in; however, the option is not enabled by default.

win8_microsoft_update.png

Windows 10

Like Windows 8.1, Windows 10 also includes Microsoft Update, but it is not enabled by default.

win10_microsoft_update.png

Microsoft Click-to-Run

Microsoft Click-to-Run is a feature where users "... don't have to download or install updates. The Click-to-Run product seamlessly updates itself in the background." The Microsoft Office 2016 installation that I obtained through MSDN is apparently packaged in Click-to-Run format. How can I tell this? If you view the Account settings in Microsoft Office, a Click-to-Run installation looks like this:

office16_about.png

Additionally, you should notice a process called OfficeClickToRun.exe running:

procmon-ctr.png

Microsoft Office Click-to-Run and Updates

The interaction between a Click-to-Run version of Microsoft Office and Microsoft Update is confusing. For the past dozen years or so, when a Windows machine completed running Microsoft Update, you could be pretty sure that Microsoft Office was up to date. As a CERT vulnerability analyst, my standard process on a Microsoft patch Tuesday was to restore my virtual machine snapshots, run Microsoft Update, and then consider that machine to have fully patched Microsoft software.

I first noticed a problem when my "fully patched" Office 2016 system still executed calc.exe when I opened my proof-of-concept exploit for CVE-2017-0199. Only after digging into the specific version of Office 2016 that was installed on my system did I realize that it did not have the April 2017 update installed, despite having completed Microsoft Update and rebooting. After setting up several VMs with Office 2016 installed, I was frequently presented with a screen like this:

office2016_no_updates_smaller.png

The problem here is obvious:

  • Microsoft Update is indicating that the machine is fully patched when it isn't.
  • The version of Office 2016 that is installed is from September 2015, which is outdated.
  • The above screenshot was taken on May 3, 2017, and shows Microsoft Update reporting that no updates were available when updates actually were.

I would love to have determined why my machines were not automatically retrieving updates. But unfortunately there appear to be too many variables at play to pinpoint the issue. All I can conclude is that my Click-to-Run installations of Microsoft Office did not receive updates for Microsoft Office 2016 until as late as 2.5 weeks after the patches were released to the public. And in the case of the April 2017 updates, there was at least one vulnerability that was being exploited in the wild, with exploit code being publicly available. This amount of time is a long window of exposure.

It is worth noting that the manual update button within the Click-to-Run Office 2016 installation does correctly retrieve and install updates. The problem I see here is that it requires manual user interaction to be sure that your software is up to date. Microsoft has indicated to me that this behavior is by design:

[Click-to-Run] updates are pushed automatically through gradual rollouts to ensure the best product quality across the wide variety of devices and configurations that our customers have.

Personally, I wish that the update paths for Microsoft Office were more clearly documented.

Update: April 11, 2018

Microsoft Office Click-to-Run updates are not necessarily released on the official Microsoft "Patch Tuesday" dates. For this reason, Click-to-Run Office users may have to wait additional time to receive security updates.

Conclusions and Recommendations

To prevent this problem from happening to you, I recommend that you do the following:

  • Enable Microsoft Update to ensure that you receive updates for software beyond just the core Windows operating system. This switch can be automated using the technique described here: https://msdn.microsoft.com/en-us/aa826676.aspx (a minimal sketch of the same opt-in appears after this list).
  • If you have a Click-to-Run version of Microsoft Office installed, be aware that it will not receive updates via Microsoft Update.
  • If you have a Click-to-Run version of Microsoft Office and want to ensure timely installation of security updates, manually check for updates rather than relying on the automatic update capability of Click-to-Run.
  • Enterprise customers should refer to Deployment guide for Office 365 ProPlus to ensure that updates for Click-to-Run installations meet their security compliance timeframes.
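
The opt-in technique on that MSDN page uses the Windows Update Agent COM API. A minimal sketch of the same idea in Python, assuming the pywin32 package and using the service GUID that Microsoft documents for Microsoft Update, might look like this:

# Sketch: opt a machine in to Microsoft Update (rather than Windows
# Update only) via the Windows Update Agent COM API. Assumes the
# pywin32 package and administrative privileges.
import win32com.client

MICROSOFT_UPDATE_SERVICE_ID = "7971f918-a847-4430-9279-4a52d1efe18d"
# asfAllowPendingRegistration | asfAllowOnlineRegistration | asfRegisterServiceWithAU
FLAGS = 7

service_manager = win32com.client.Dispatch("Microsoft.Update.ServiceManager")
service_manager.ClientApplicationID = "Opt-in example"
service_manager.AddService2(MICROSOFT_UPDATE_SERVICE_ID, FLAGS, "")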

The Risks of Google Sign-In on iOS Devices

The Google Identity Platform is a system that allows you to sign in to applications and other services by using your Google account. Google Sign-In is one such method for providing your identity to the Google Identity Platform. Google Sign-In is available for Android applications and iOS applications, as well as for websites and other devices.

Users of Google Sign-In find that it integrates well with the Android platform, but iOS users (iPhone, iPad, etc.) do not have the same experience. The user experience when logging in to a Google account on an iOS application is not only more tedious than the Android experience, but it also conditions users to engage in behaviors that put their Google accounts at risk!

Minding Your Credentials

Before getting into the details of Google Sign-In on iOS, it is worth taking a brief detour to discuss when it is appropriate to provide your account credentials in general. Whether you are dealing with a web page, a desktop application, a mobile application, or anything else, there are certain questions that you should answer before you can safely provide authentication information:

  1. What application am I interacting with?
  2. Where does this information go when it's submitted?
  3. Is the information protected in transit?

You can't always answer all of these questions. Keep in mind, however, that any time you can't answer a question, you are placed in a situation where an attacker may be able to obtain unexpected access to your account.

What happens if you can't answer the question, "What application am I interacting with?" If you don't know what application is providing the dialog you're working with, then you can't begin to guess what will happen with the information that you provide. For example, see CERT Vulnerability Note VU#490708 - Microsoft Internet Explorer window.createPopup() method creates chromeless windows.

Also, what happens if you can't answer the question "Where does this information go when it's submitted?" This is a difficult question to answer. When dealing with web pages, the easiest starting point is to know what website you are currently viewing. If you're viewing look-alike content on a domain that is other than what you expect it to be, then it's possible that you may end up providing credentials to an attacker. Most phishing attacks use this technique. To complicate the matter further, a website that is vulnerable to cross-site scripting (XSS) may allow an attacker to obtain your credentials, even if you are currently viewing the legitimate website in question. For example, see CERT Vulnerability Note VU#808921 - eBay contains a cross-site scripting vulnerability.

Finally, what happens if you can't answer the question, "Is the information protected in transit?" Especially on mobile platforms, this question can also be difficult to answer. If an application chooses to not use encryption, or if it improperly implements encryption, this situation can allow an attacker to observe sensitive data in transit. In a case where you are providing credentials to an account, an attacker may be able to compromise the account. For example, see CERT Vulnerability Note VU#432608 - IBM Notes Traveler for Android transmits user credentials over HTTP.

Especially when you can't answer one or more of the above questions, it's important to consider the risks that may be involved with providing your account credentials. In this blog post I explain why the Google Sign-In experience on the iOS platform forces you to take unnecessary risks.

OAuth Basics

Google APIs use the OAuth 2.0 protocol for authentication and authorization. Without getting into the nitty gritty details, OAuth allows you to authenticate with a system without needing to divulge your credentials to a third party. For example, if you want to authenticate yourself with the Pokémon Go application by using your Google account, OAuth should allow you to do it without ever telling Niantic your Google credentials.
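
To make that concrete, here is a small sketch of how a client typically starts the OAuth 2.0 authorization code flow with Google. The client ID, redirect URI, scope, and state values are placeholders; the point is that the user authenticates on accounts.google.com, and the application only ever receives an authorization code, never the Google password.

# Sketch of building a Google OAuth 2.0 authorization request.
# The client_id, redirect_uri, scope, and state values are placeholders.
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

params = {
    "client_id": "1234567890-example.apps.googleusercontent.com",  # placeholder
    "redirect_uri": "com.example.app:/oauth2redirect",              # placeholder
    "response_type": "code",   # request an authorization code, not credentials
    "scope": "openid email profile",
    "state": "random-anti-csrf-value",                              # placeholder
}

# The user signs in on Google's page; the app later exchanges the returned
# authorization code for tokens and never sees the user's password.
authorization_url = AUTH_ENDPOINT + "?" + urlencode(params)
print(authorization_url)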

At least, that's the way it works with the Android version of the game. Why is iOS different? To understand the differences, let's look at the sign-in process for each.

Google Sign-In on Android

With most Android devices, the OS itself is associated with a Google account. When you turn on your new Android phone, one of the first steps is a prompt to log in with your Google account credentials. This account is tied so closely with the OS that many Android phones force a factory reset if the Google account is removed from the phone.

Google Sign-In on an Android device leverages the Google account that is associated with the device. When the getSignInIntent method is invoked, the authentication process is handed off to the Android OS itself. Here you can select which Google account to use. No password is required. If the application needs permissions beyond profile, email, or openid, then you are prompted to grant access to the additional resources.

Because you are not prompted for any credentials, the three questions outlined above are not terribly important:

  1. What application am I interacting with?
  2. Where does this information go when it's submitted?
  3. Is the information protected in transit?

The reason that these questions are not important is because you are not directly providing sensitive information. You are simply choosing whether or not to grant access to the account access levels that are requested.

Google Sign-In on iOS

With iOS devices, there is no OS-level association with a Google account. Therefore, there is no already-authenticated component that Google Sign-In can leverage to achieve its goal. As a result, you must enter your Google username and password directly into a screen that is presented by the application.

Let me be clear about what happens during the sign-in process:

  1. The user runs an application.
  2. The application presents a login screen to the user.
  3. The user enters Google credentials into the screen presented by the application.

Doesn't this sign-in process violate one of the main aspects of OAuth? That is, with OAuth you shouldn't have to enter your credentials into a third-party application or site. As it turns out, when Google Sign-In is used on iOS, the dialog is indeed on the Google website. So the third-party application is not receiving your credentials. But how do you know this for sure? To find out, I launch an application and click a button that looks like this:

btn_google_signin_dark_normal_web.png

As a result, I'm presented with a screen that looks like this:

signin_webview.png

Update (August 10, 2016): The above screenshot is from an iOS 8.x device.

At this point, I think that the dialog is presenting content from accounts.google.com. But I have no way of verifying it. Even if clicking the "accounts.google.com" text did something, how do I know that the whole screen isn't being faked?

As a user, I have no idea what is providing the screen that I'm seeing. All I know is that I launched an application that I may or may not trust, and it's now prompting me for the credentials to my Google account. This situation is much like website spoofing on PCs, but it's even worse on mobile devices due to the limited screen real estate and limited information presented to the user.

I can't answer the three questions outlined above easily.

  1. What application am I interacting with?
    All I know is that after pressing a button in a third-party application, I'm presented with a dialog requesting my Google account credentials. While the screen may look like I'm interacting with Safari, it's not Safari. If I know enough to double-tap the home button, I can at least see which application is hosting the screen I'm looking at.
  2. Where does this information go when it's submitted?
    It appears that my credentials are sent to accounts.google.com, but I can't tell for sure. Because the third-party application provided the screen, it could put whatever it wants there.
  3. Is the information protected in transit?
    I have no way of knowing. I don't see a padlock, so for all I know my credentials are being sent over cleartext HTTP.

Looking at the Google Sign-In for iOS code, it appears that the sign-in dialog is created using a UIWebView object. A UIWebView does not share cookies with Safari, though. This means that even if Safari is logged into Google on my iOS device, the Google Sign-In code will not know it.
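To illustrate why this matters, here is a minimal Swift sketch of what an embedded web-view sign-in looks like from the application's side. The view controller, URL, and structure here are my own simplified assumptions for illustration, not the Google Sign-In library's actual code.

```swift
import UIKit

class EmbeddedSignInViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()

        // A UIWebView renders pages using this app's own cookie storage
        // (the app's NSHTTPCookieStorage), which is completely separate from
        // Safari's. Any Google session that exists in Safari is invisible here,
        // so the page always asks for a username and password.
        let webView = UIWebView(frame: view.bounds)
        view.addSubview(webView)

        // Illustrative URL only; the real library builds a full OAuth request.
        if let url = URL(string: "https://accounts.google.com/") {
            webView.loadRequest(URLRequest(url: url))
        }
    }
}
```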

Update (August 10, 2016): The UIWebView object is only used on iOS versions earlier than 9.x (SFSafariViewController is not available before iOS 9). On iOS 9.x devices, Google Sign-In library version 2.3 (released September 29, 2015) and newer will use the SFSafariViewController object, which will leverage Safari cookies. This means that if the user is already logged in with Safari, the application that uses the Google Sign-In library will not prompt the user for credentials. Versions 2.0 through 2.2 of the Google Sign-In library used a UIWebView object to avoid the limitations introduced in iOS 9.0, where applications were not allowed to launch Safari for the purpose of logging in.
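For contrast, here is a minimal sketch of the SFSafariViewController approach. Again, the URL and surrounding code are simplified assumptions; the point is that the page is rendered by Safari's own machinery rather than by a view the application controls.

```swift
import SafariServices
import UIKit

class SafariSignInViewController: UIViewController {
    func startSignIn() {
        // Illustrative URL only; a real flow would open a full OAuth
        // authorization URL rather than the bare accounts page.
        guard let url = URL(string: "https://accounts.google.com/") else { return }

        // SFSafariViewController is backed by Safari and, on the iOS versions
        // discussed in this post, shares Safari's cookies. If the user is
        // already signed in to Google in Safari, no password prompt appears.
        let safariController = SFSafariViewController(url: url)
        present(safariController, animated: true, completion: nil)
    }
}
```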

If an app author chooses to use something like SFSafariViewController, or uses Google Sign-In version 2.3 or newer, then the Google authentication process can leverage an already-authenticated Safari browser. For example, here is a screenshot of an application that appears to use this technique:

[Image: Google authorization prompt rendered via Safari, showing an already-signed-in account (safari_signin.png)]

Here I can see that the authentication process already knows who I am, and I simply have to choose "Allow" or "Deny." I don't have to enter my password, which turns out to be a good thing, because Safari and its associated components appear to provide no way of viewing an SSL certificate. Even if I tap the Safari icon in the bottom-right corner, which opens the same page in the full Safari browser, the padlock does nothing when I touch it.

Even though I can't view the site certificate, I'm comfortable clicking either "Allow" or "Deny" here. Even if the application is malicious, all it can do is view my email address and my basic profile information. The important thing to realize is that I did not enter my credentials anywhere. Had I entered my Google credentials into the application, it could have done anything it wanted with my account.

This is exactly what happened with the Pokémon Go application when it was first released. When a user launched the application and clicked the "Sign up with Google" button, the user was presented with a screen where a Google username and password had to be entered. After doing this, Adam Reeve noticed that the application had full access to his Google account.

This incident made me wonder how the application was able to gain full access without explicitly asking for it. The answer is perhaps obvious: the user entered Google credentials into a screen presented by the application. It doesn't matter whether the recommended Google authentication technique was used or not. The user gave Google credentials to the application.

Niantic has since published a statement explaining that the application erroneously requested full-access permission to the Google account and that it accessed only the user ID and email address. I didn't reverse engineer the original version of the application, so I'll just have to take their word for it.

Facebook Authentication

Google isn't the only organization that provides an authentication API for third-party developers. Facebook Login provides similar capabilities on Android, iOS, websites, and other platforms. There's an important difference between the Google Sign-In and Facebook Login tools, though: Facebook Login works on iOS devices in the same way that it does on Android devices. That is, when an application uses the Facebook Login API, it launches the Facebook application (if installed) on the phone, and within that application I can see what is going on:

[Image: Facebook Login permission prompt shown inside the Facebook application (facebook_login.png)]

Let's go back to our three questions:

  1. What application am I interacting with?
  2. Where does this information go when it's submitted?
  3. Is the information protected in transit?

In the case of Facebook Login, it's pretty clear that I'm dealing with the Facebook application, as I am already logged in there. I can verify this by double-tapping the home button if I like. The remaining two questions are more difficult to answer, but if I trust the Facebook application, I'm inclined to believe that it is doing the right thing.

Being the security-conscious person that I am, I would weigh the risk of providing my credentials here, since I can't answer all of the above questions. But because I'm not providing my Facebook credentials at all, I simply review the account access the app is requesting and choose to proceed or cancel based on the level of access I'm comfortable granting.

Google Sign-In vs. Facebook Login

Why is Facebook able to handle authentication properly on iOS while Google is not? The Facebook application has registered itself as a handler for Facebook Login requests. Any third-party application that attempts to use Facebook Login causes the Facebook application to launch and handle the request. Because the Facebook application itself is already authenticated with the Facebook servers, it can delegate privileges to third-party applications without any need to share credentials, which is exactly how OAuth is supposed to work.

If a user doesn't have the Facebook application installed, Facebook Login falls back to using Safari. In this case, if I've already used Safari to log in to Facebook, I still won't be prompted for my Facebook credentials; I simply see a web page similar to the Facebook application screen above. I'm only prompted for Facebook credentials if I have never logged in to Facebook using Safari and I don't have the Facebook application installed on my iOS phone.
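As a rough sketch of this app-switch pattern, here is what the delegation logic might look like from a third-party application's perspective. The URL scheme, login URLs, and class here are hypothetical placeholders used only to illustrate the pattern; they are not Facebook's actual SDK internals.

```swift
import SafariServices
import UIKit

class LoginCoordinator {
    // Hypothetical URL scheme and login URLs, used only to illustrate the pattern.
    // (On iOS 9 and newer, the scheme queried below must also be declared in the
    // app's LSApplicationQueriesSchemes list.)
    let appSwitchURL = URL(string: "example-fb-login://authorize?client_id=APP_ID")!
    let webFallbackURL = URL(string: "https://www.facebook.com/dialog/oauth?client_id=APP_ID")!

    func logIn(from viewController: UIViewController) {
        let application = UIApplication.shared
        if application.canOpenURL(appSwitchURL) {
            // The Facebook app is installed and registered for this scheme, so the
            // request is handed off to an application that is already authenticated.
            application.open(appSwitchURL, options: [:], completionHandler: nil)
        } else {
            // Fall back to a Safari-backed view; if the user is already logged in
            // to Facebook in Safari, no credentials are requested here either.
            let safariController = SFSafariViewController(url: webFallbackURL)
            viewController.present(safariController, animated: true, completion: nil)
        }
    }
}
```

In this model, the authorization result comes back to the third-party application through its own registered URL scheme, handled in its app delegate; at no point does the third-party application see the user's Facebook password.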

Conclusions

When implemented properly, OAuth can help protect account credentials. Google Sign-In on iOS does not achieve this goal, however. By conditioning users to enter their Google account credentials whenever a third-party application asks for them, Google has created an environment where users are more likely to have their accounts compromised.

Recommendations

  • iOS users should make sure they are running iOS 9.x or newer. These newer iOS versions can allow Google Sign-In to leverage Safari cookies, which can remove the need to prompt the user for credentials.
  • Users should be aware of the risks present when providing credentials to any application. Especially with third-party applications that may be untrusted, it is important to understand that providing Google account credentials is equivalent to giving that application full access to your account.
  • Application developers should use the SFSafariViewController object or Safari itself to perform the Google authentication process, which means using Google Sign-In 2.3 or newer. Either approach can leverage the cookies present in an already-authenticated Safari browser, which prevents users from needing to type in credentials themselves (see the sketch after this list).
  • Google should modify its existing iOS application, or create a dedicated iOS application, that registers itself to handle the Google authentication steps, much as the Facebook application currently does. Doing so would prevent users from needing to type in their Google credentials, making their accounts more resistant to hijacking.
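As a minimal sketch of that developer recommendation, here is one way an application could build a standard OAuth 2.0 authorization request and hand it to SFSafariViewController instead of an embedded web view. The client ID, redirect URI, and scope values are placeholders; the endpoint shown is Google's documented OAuth 2.0 authorization endpoint.

```swift
import SafariServices
import UIKit

func presentGoogleAuthorization(from viewController: UIViewController) {
    // Standard OAuth 2.0 authorization request; all parameter values are placeholders.
    var components = URLComponents(string: "https://accounts.google.com/o/oauth2/v2/auth")!
    components.queryItems = [
        URLQueryItem(name: "client_id", value: "YOUR_IOS_CLIENT_ID"),
        URLQueryItem(name: "redirect_uri", value: "com.example.yourapp:/oauth2callback"),
        URLQueryItem(name: "response_type", value: "code"),
        URLQueryItem(name: "scope", value: "openid email profile"),
        URLQueryItem(name: "state", value: UUID().uuidString)  // basic CSRF protection
    ]

    // Because the page is rendered by Safari's machinery, an existing Google
    // session can be reused, and the user never has to type a password into
    // anything the third-party application controls.
    let safariController = SFSafariViewController(url: components.url!)
    viewController.present(safariController, animated: true, completion: nil)
}

// The authorization response would arrive via the app's registered redirect URI
// scheme, handled in the app delegate's application(_:open:options:) method.
```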