Category Archives: Best Practices

Naughty or Nice Websites

Santa Claus is coming! Was your website naughty or nice this year?

Here is a quick checklist of the top 10 bad things that can harm your website security and the top 10 good things that can improve your website security.

Naughty Websites List

If your website falls into any of these categories, this is the perfect time of year to start thinking about improving your security posture.

1 – My website has outdated software.

Continue reading Naughty or Nice Websites at Sucuri Blog.

Professionally Evil Insights: Professionally Evil CISSP Certification: Breaking the Bootcamp Model

ISC2 describes the CISSP as a way to prove “you have what it takes to effectively design, implement and manage a best-in-class cybersecurity program”. It is one of the primary certifications used as a stepping stone in a cybersecurity career. Traditionally, students have two options for earning this certification: self-study or a bootcamp. Both options have pros and cons, but neither is ideal.

Bootcamps are a popular way to cram for the certification exam. Students spend five days in total immersion in the topics of the CBK. For many students, this is an easy way to pass the exam because it keeps them focused on the CISSP study materials for the duration of the bootcamp. But this model has a few negatives. The first is the significant cost: typical prices run between $3,500 and $5,000, with outliers approaching $7,000. The second is that it takes students away from their lives for the week. Finally, most people finish a bootcamp with the knowledge to pass the exam, but because the material is crammed in, they quickly forget most of it.

Self-study is the other common way to prepare for the CISSP exam. It allows a dedicated student to learn the material at their own pace and on their own schedule, and to decide how much to spend; from books to online videos and practice exams, costs vary. The main problem with this method is that students often get distracted by life and work along the way.

But there is an answer that combines the benefits of both options. Secure Ideas has developed a mentorship program that provides the knowledge needed to pass the certification while working through the common body of knowledge (CBK), all in a manner that encourages retention. And it is #affordabletraining!

The mentorship program is a series of weekly mentor-led discussion and review sessions, supported by various student communication channels, spanning a total of nine weeks. Together, these give the student a solid foundation that not only helps in passing the certification but also serves as a lasting reference for everyday work. The class covers the eight domains of the ISC2 CBK:

  • Security and Risk Management
  • Asset Security
  • Security Architecture and Engineering
  • Communication and Network Security
  • Identity and Access Management (IAM)
  • Security Assessment and Testing
  • Security Operations
  • Software Development Security

The Professionally Evil CISSP Mentorship program uses multiple communication and knowledge sharing paths to build a comprehensive learning environment focused on both passing the CISSP certification and gaining a deep understanding of the CBK.

The program consists of the following parts:

  • Official study guide book
  • Weekly live session with instructor(s)
    • Live session will also be recorded
  • Private Slack team for students and instructors to communicate regularly
  • Practice exams
  • While we believe students will pass on their first try, we also include the option for students to take the program as many times as they want, any time we offer it.  🙂

You can sign up for the course for only $1,000. Our early bird pricing is $800 and is good until January 31; just use the coupon code EARLYBIRD at checkout. Veterans, active duty military, and first responders also get a significant discount. Email us for more information.

OWASP Top 10 Security Risks – Part III

To bring awareness to what threatens the integrity of websites, we are continuing a series of posts on the OWASP top 10 security risks.

The OWASP Top 10 list consists of the ten most commonly seen application vulnerabilities:

  1. Injection
  2. Broken Authentication
  3. Sensitive Data Exposure
  4. XML External Entities (XXE)
  5. Broken Access Control
  6. Security Misconfigurations
  7. Cross-Site Scripting (XSS)
  8. Insecure Deserialization
  9. Using Components with Known Vulnerabilities
  10. Insufficient Logging and Monitoring

In our previous posts, we explained the first four items on the OWASP Top 10 list.
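To make the first item on the list concrete, here is a generic sketch of SQL injection and its standard fix, parameterized queries. This is an illustration written for this post, not code from the article; it uses Python's built-in sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable pattern: attacker-controlled input concatenated into SQL.
attacker_input = "' OR '1'='1"
unsafe_query = f"SELECT role FROM users WHERE name = '{attacker_input}'"
leaked = conn.execute(unsafe_query).fetchall()  # the OR clause matches every row

# Safe pattern: the driver treats the value strictly as data, never as SQL.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (attacker_input,)
).fetchall()  # no user is literally named "' OR '1'='1", so nothing matches
```

The same principle applies to any database driver: pass untrusted values as bound parameters rather than building query strings.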

Continue reading OWASP Top 10 Security Risks – Part III at Sucuri Blog.


Avoid Coal in Your Digital Stocking — Here’s How to Improve Your Security Posture in 2019

As 2018 draws to a close, it’s time to reflect on the strides the cybersecurity industry made over the past year, and how far companies around the world still have to go to improve their security posture. Throughout the year, businesses were plagued by cybersecurity risks and hit with massive data breaches. In the lead-up to the holiday season, security leaders across industries are wishing for a quiet 2019 with no negative data breach headlines.

5 Cybersecurity Missteps That Put Enterprises at Risk in 2018

What lessons did we learn in 2018? And as we look forward, what best practices can we implement to improve defenses in the new year? We asked industry experts where they observe the worst security practices that still leave enterprises exposed to cybersecurity risks, and they offered advice to help companies and users enjoy a merrier, brighter, more secure 2019.

1. Poor Password Policies

Although passwords are far from perfect as a security mechanism, they are still used pervasively in the enterprise and in personal life. Yet password policies are still rife with problems around the globe.

Idan Udi Edry, CEO of Trustifi, said the most foundational — and also most disregarded — cybersecurity practice is maintaining a strong password.

“A unique password should be utilized for every account and not reused,” said Edry. “It is important to update passwords every 30–90 days. Passwords should never include a significant word, such as a pet’s name, or a significant date, such as a birthdate.”

Deploying devices and appliances and then leaving default passwords in place is also still a shockingly common practice. A threat actor with knowledge of a manufacturer or service provider’s default password conventions can do a lot of damage to an organization with factory settings still in place.

Edry advised enterprises to employ two-factor authentication (2FA) to add more security to their access strategy. Douglas Crawford, digital privacy adviser for BestVPN, meanwhile, recommended encouraging employees to use a password manager.

“It is hard to remember strong passwords for every website and service we use, so people simply stop bothering,” said Crawford. “Use of ‘123456’ as a password is still scarily common. And then we use the same password on every website we visit. This [is] particularly irksome, as this entire security nightmare can be easily remedied through use of password manager apps or services, which do the heavy lifting for us.”
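The advice above can be sketched as a simple policy check. The length threshold and the tiny blocklist below are illustrative assumptions, not any vendor's actual rules:

```python
# A minimal password-policy sketch: reject short passwords, entries from a
# common-password list, and reuse of earlier passwords.
COMMON_PASSWORDS = {"123456", "password", "qwerty", "letmein"}

def password_issues(password, previously_used):
    """Return a list of policy violations for a candidate password."""
    issues = []
    if len(password) < 12:
        issues.append("shorter than 12 characters")
    if password.lower() in COMMON_PASSWORDS:
        issues.append("found in common-password list")
    if password in previously_used:
        issues.append("reused from an earlier password")
    return issues
```

A real deployment would check against a much larger breached-password corpus and pair this with 2FA, as recommended above.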

2. Misconfigured Cloud Storage

Earlier this year, researchers from Digital Shadows uncovered more than 1.5 billion sensitive files stored in publicly available locations, such as misconfigured websites and unsecured network-attached storage (NAS) drives.

“Unfortunately, many administrators misconfigure [these buckets] rendering the contents publicly-accessible,” wrote Michael Marriott, senior strategy and research analyst with Digital Shadows.

The information uncovered included a treasure trove of personal data, such as payroll, tax return and health care information — all available to prying eyes thanks to overlooked security best practices in cloud storage.

“With the rise of mobility and cloud usage in enterprises, one of the worst security practices is leaving critical cloud services and SaaS applications open to the internet,” said Amit Bareket, co-founder and CEO of Perimeter 81.

It’s time to get proactive to analyze potential exposures in storage and then devise a plan to address cloud data risks to your organization. It’s also important to remember that with any connected service, it is often better not to deploy than to deploy insecurely.
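As a sketch of what "getting proactive" can look like, the following function flags public grants in an S3 bucket ACL. It assumes the ACL response has already been fetched (for example with boto3's get_bucket_acl) and only inspects the returned dictionary; the bucket contents and names in the usage example are made up:

```python
# The two grantee URIs AWS uses to mean "everyone" and "any AWS account".
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return (grantee URI, permission) pairs that expose the bucket publicly."""
    findings = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEES:
            findings.append((grantee["URI"], grant.get("Permission")))
    return findings
```

Running a check like this across every bucket in an account is a cheap way to catch the misconfigurations described above before someone else does.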

3. Ineffective Cyber Awareness Training

Security begins and ends with your employees — but how much do they know about security? Specifically, how much do they know about the risks they are facing and how their actions could set your business up for a potential incident?

“At this time of the year, it’s critically important to ensure proper employee awareness of the risks related to travel,” said Baan Alsinawi, president and founder of TalaTek, a Washington-based risk management firm. “Using public Wi-Fi at airports or hotels to access corporate data, possible loss of personally-held devices such as an iPad, iPhone or corporate laptop, especially if not encrypted, talking to strangers about work issues or projects over a glass of wine can expose confidential information.”

Of course, a robust awareness program needs to be in place year-round. Data from London-based advisory and solutions company Willis Towers Watson found that employees are the cause of 66 percent of all cyberbreaches, either through negligence or deliberate offense.

Employees should be regularly educated on phishing, social engineering techniques and other attack vectors that could put corporate data at risk. If awareness training isn’t part of your security strategy, 2019 is the time to learn what an effective awareness program looks like and implement one to promote security best practices in your organization.

4. Poor Oversight of Third-Party Cybersecurity Risks

Third-party vendors and partners can be a source of compromise if criminals can access your organization’s sensitive information through their poorly secured systems. If you’re working with third-party vendors and partners, your security is only as good as theirs. If their systems are breached, your data is also at risk.

“Attackers seeking access to hardened company systems can pivot to breaching an integrated third party, establishing a beachhead there and then leveraging the trust implicit in the integration to gain access,” explained Ralph R. Russo, director of applied computing programs and professor of practice of IT management and cybersecurity at Tulane University School of Professional Advancement.

In 2019, evaluate the state of your third-party risk management. Make it a priority to identify gaps that may put you at risk if you are working with less-than-secure vendors. Implement a vigorous vetting process to determine the security level of your trusted partners.

5. Lack of an Incident Response Plan

A formal, regularly tested cybersecurity incident response plan is essential, yet many organizations continue to operate without one. In fact, 77 percent of companies do not have any formal plan.

Without a written and tested incident response plan, you're unprepared for the worst-case scenario. It is not enough to focus on prevention; it is essential to establish a comprehensive incident response plan that is clear, detailed, and flexible; that includes multiple stakeholders; and that is tested and updated regularly.

Improve Your Security Posture in 2019 and Beyond

If your organization engages in any of these poor practices, it may be time to brush up on your basic cyber hygiene best practices. By following the recommendations outlined here, you can confidently resolve to close gaps in risk mitigation and establish more effective strategies to improve your company’s security posture in 2019 and beyond.

The post Avoid Coal in Your Digital Stocking — Here’s How to Improve Your Security Posture in 2019 appeared first on Security Intelligence.

Fear, Uncertainty, and Doubt

There’s a term for the practice of scaring potential customers into purchasing products or services they don’t need: FUD (fear, uncertainty, and doubt). This practice is widespread in the computer/IT industries at large, but it is especially prevalent in the security industry.

People don’t want to get hacked—but may also not understand the issues and forces at play. This makes them easy targets for overzealous sales representatives who see an opportunity to use misinformation to increase their paycheck via commission payouts.

Continue reading Fear, Uncertainty, and Doubt at Sucuri Blog.

Navigating Data Responsibility

As we take a step back and think about how much the Internet has grown over the past 20 years, we realize how much content/data has been made available to everyone.

Moving forward, there’s no reason to expect data availability to slow down. In fact, insideBIGDATA claims:

There are many sources that predict exponential data growth toward 2020 and beyond. Yet they are all in broad agreement that the size of the digital universe will double every two years at least, a 50-fold growth from 2010 to 2020.
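As a sanity check on the quoted figures: doubling every two years yields only a 32-fold increase over a decade, so 50-fold growth from 2010 to 2020 actually implies a doubling period somewhat under two years. The arithmetic:

```python
import math

# Growth after 10 years if data doubles every 2 years: 2 ** (10 / 2).
decade_growth_if_doubling = 2 ** (10 / 2)  # 32.0, i.e. 32-fold

# Doubling period implied by 50-fold growth over 10 years:
# 2 ** (10 / T) = 50  =>  T = 10 / log2(50)
implied_doubling_years = 10 / math.log2(50)  # roughly 1.77 years
```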

Continue reading Navigating Data Responsibility at Sucuri Blog.

A Scam-Free Cyber Monday for Online Businesses

Every year we see an increase in website attacks during the holidays. 

While business owners see their sales go up due to promotional Black Friday and Cyber Monday campaigns, hackers are in the background working nonstop to create malicious, fraudulent websites as well as take advantage of legitimate ones.

Main Cyber Monday Threats
Phishing Pages

One of the major risks to consumers is phishing campaigns.

Carefully crafted phishing login pages convince users they are logging into a valid service.
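One reason such pages fool users is lookalike hostnames. This naive check (the domain and the sample URLs are made up for illustration) shows how a legitimate subdomain differs from a hostname that merely embeds the trusted name:

```python
from urllib.parse import urlparse

def is_expected_host(url, expected_domain):
    """True only if the URL's host is the expected domain or a subdomain of it."""
    host = urlparse(url).hostname or ""
    return host == expected_domain or host.endswith("." + expected_domain)
```

A substring test would accept "example.com.evil.test"; anchoring the comparison at the end of the hostname is what makes the check meaningful.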

Continue reading A Scam-Free Cyber Monday for Online Businesses at Sucuri Blog.

PCI for SMB: Requirement 9 – Implement Strong Access Control Measures

Welcome to the sixth post in a series on understanding the Payment Card Industry Data Security Standard (PCI DSS). We want to show how PCI DSS affects anyone going through the compliance process using the PCI SAQs (Self-Assessment Questionnaires).

In the previous articles written about PCI, we covered the following:

  • Requirement 1: Build and Maintain a Secure Network – Install and maintain a firewall configuration to protect cardholder data.
  • Requirement 2: Build and Maintain a Secure Network – Do not use vendor-supplied defaults for system passwords or other security parameters.

Continue reading PCI for SMB: Requirement 9 – Implement Strong Access Control Measures at Sucuri Blog.

10 Tips to Improve Your Website Security

Having a website has become easier than ever due to the proliferation of great tools and services in the web development space. Content management systems (CMS) like WordPress, Joomla!, Drupal, Magento, and others allow business owners to build an online presence rapidly. These platforms’ highly extensible architectures, rich plugins, and effective modules have reduced the need to spend years learning web development before starting to build a website.

The ease of launching an online business or personal website is great.

Continue reading 10 Tips to Improve Your Website Security at Sucuri Blog.

New WordPress Security Email Course

Recent statistics show that over 32% of websites across the web run on WordPress.

Unfortunately, the CMS’s popularity comes at a price — attackers often seek out vulnerabilities to exploit and target unhardened WordPress sites. If a site is compromised, it often becomes a host for malware or spam campaigns, harming your website’s reputation and visitors in the process.

Knowledge is power, and we’re here to help! We’ve created a new WordPress Security Email Course to help improve your website’s security posture and reduce the risk of a security incident.

Continue reading New WordPress Security Email Course at Sucuri Blog.

Website Security Tips for Marketers

In our previous post, we discussed why marketers should take a proactive approach to website security. Today we are going to cover some security tips marketers can put into practice. In the simplest terms, website security means three things here at Sucuri:

  • Protecting your website from compromises.
  • Monitoring for issues so you can react quickly.
  • Having a documented emergency response plan.

Marketers should champion these initiatives so they can be prioritized by their business development team.

Continue reading Website Security Tips for Marketers at Sucuri Blog.

Web Marketers Should Learn Security

Most online marketers think of themselves as T-shaped individuals. The theory behind this concept is that individuals possess a wide range of skills, with some abilities running deeper than others.

Website security awareness is in short supply and we need more champions — especially among small and medium-sized businesses. Digital marketers are in a prime position to add security know-how to their diverse toolkit.

Source: The T-Shaped Web Marketer by Rand Fishkin

It makes sense for marketers to want to secure their websites.

Continue reading Web Marketers Should Learn Security at Sucuri Blog.

OWASP Top 10 Security Risks – Part II

It is National Cyber Security Awareness Month and in order to bring awareness to what threatens the integrity of websites, we have started a series of posts on the OWASP top 10 security risks.

The OWASP Top 10 list consists of the ten most commonly seen application vulnerabilities:

  1. Injection
  2. Broken Authentication
  3. Sensitive Data Exposure
  4. XML External Entities (XXE)
  5. Broken Access Control
  6. Security Misconfigurations
  7. Cross-Site Scripting (XSS)
  8. Insecure Deserialization
  9. Using Components with Known Vulnerabilities
  10. Insufficient Logging and Monitoring

In our previous post, we explained the first two items on the OWASP Top 10 list: injection and broken authentication.

Continue reading OWASP Top 10 Security Risks – Part II at Sucuri Blog.

Creating a Response Plan You Can Trust

As a website owner, you may have experienced your website being down for any number of reasons: errors in code, server-related difficulties, or even an attack by bad actors.

I once shared my own experience of a hacked website in a webinar. Whether you have one site or hundreds, when restoring your online presence it is imperative to have a process in place.

If Your Website Gets Hacked, What is Your Plan?

Continue reading Creating a Response Plan You Can Trust at Sucuri Blog.

When "ASLR" Is Not Really ASLR – The Case of Incorrect Assumptions and Bad Defaults

As a vulnerability analyst at the CERT Coordination Center, I am interested not only in software vulnerabilities themselves, but also exploits and exploit mitigations. Working in this field, it doesn't take too long to realize that there will never be an end to software vulnerabilities. That is to say, software defects are not going away. For this reason, software exploit mitigations are usually much more valuable than individual software fixes. Being able to mitigate entire classes of software vulnerabilities is a powerful capability. One of the reasons why we strongly promote mitigation tools like EMET or Windows Defender Exploit Guard, which is the replacement for EMET on the Windows 10 platform, is because exploit mitigation protections are not limited to the specific vulnerability du jour.

While looking at a recent exploit for VLC on Windows, I noticed some unexpected behaviors. In this blog post, I will describe how my journey led me to the discovery of several flaws that put users of many applications at unnecessary risk. VLC isn't the only victim here.

Memory Corruption Exploit Mitigations

When discussing memory corruption exploit mitigations, two rudimentary building blocks are Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR). Both techniques must be used together for the basis of an effective exploit mitigation strategy.


DEP, also referred to as NX or W^X on other platforms, ensures that a CPU executes only instructions that have been explicitly marked as executable. In other words, a system should execute only memory that is marked as code; it should refuse to execute bytes provided in the content of a document, for example.


ASLR ensures memory locations associated with a process are not predictable. This capability is important to help prevent the use of Return Oriented Programming (ROP) by an attacker.

Confirming the Presence of DEP and ASLR

In the past I used CERT BFF to find some exploitable crashes in the Oracle Outside In library. I investigated several applications that use Outside In to demonstrate proof-of-concept (PoC) exploits for the vulnerabilities I discovered. I settled on two forensics investigation applications: Guidance EnCase and AccessData FTK. Why did I choose these applications? On one hand, the concept of exploiting vulnerabilities in forensics software seemed interesting to me, but more importantly it was because both applications failed their due diligence with respect to exploit mitigations.

The simplest way to check for the presence of DEP and ASLR is with Microsoft Sysinternals Process Explorer. You can configure Process Explorer to display columns for DEP and ASLR, and while a process is running, check those columns for its DEP and ASLR status. First, let's check EnCase running on Windows XP. This was 2013, so Windows XP was still supported and prevalent.


EnCase didn't even enable DEP on Windows XP. You couldn't ask for an easier target to exploit!

On Windows 7, the situation is a little bit better, as DEP is enabled. However, Process Explorer reports that ASLR is not enabled for the EnCase.exe process, as shown in this figure:


Creating a PoC for Windows 7 required a little more work than the PoC for Windows XP, but it was still possible. As noted by Microsoft, neither DEP by itself nor ASLR by itself provides viable protection against exploitation. Both must be present together to provide any meaningful protection against the exploitation of memory corruption vulnerabilities.

The VLC Exploit

Earlier this month, a PoC exploit for VLC 2.2.8 was posted to Exploit Database. This exploit was tagged as "E-DB Verified," which indicates that the exploit was confirmed to be functional. I tested out the exploit in a fully-patched 64-bit Windows 10 virtual machine that was connected to a CERT Tapioca Wildcard VM for network connectivity (since I don't fully trust what the exploit claims to do). Sure enough, opening the MKV file resulted in calc.exe popping, as shown in this figure:


Let's take a look to see if VideoLAN was negligent in opting in to exploit mitigations:


Something is quite strange here. The binary vlc.exe opts in to both DEP and ASLR, and all of its dependencies are ASLR-enabled as well. This means that there should be no predictable code locations for an attacker to leverage in a ROP exploit. But notice that the base address for vlc.exe is 0x00400000, which is a special address in the world of Microsoft Windows. This address is an indicator that a process has not been loaded at an address that has been randomized by ASLR. Why is Process Explorer reporting that VLC is using ASLR, but it is also loading at a non-randomized address?

Looking at the exploit for VLC 2.2.8, we can see that it uses a ROP chain with fixed offsets that would live within the space of a process loaded at 0x00400000:

rop_gadgets = {
        '64': {
            '2.2.8': [
                0x004037ac,             # XCHG EAX,ESP # ROL BL,90H # CMP WORD PTR [RCX],5A4DH # JE VLC+0X37C0 # XOR EAX,EAX # RET
                0x00403b60,             # POP RCX # RET
                target_address,         # lpAddress
                0x004011c2,             # POP RDX # RET
                0x00001000,             # dwSize
                0x0040ab70,             # JMP VirtualProtect
                target_address + 0x500, # Shellcode
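The values in this chain are absolute virtual addresses, which hit the intended gadgets only if the image really loads at its preferred base of 0x00400000. A quick sketch of that relationship (the randomized base below is hypothetical):

```python
IMAGE_BASE = 0x00400000  # vlc.exe's preferred, never-randomized base

# Gadget addresses quoted in the exploit excerpt, and their RVAs
# (offsets relative to the image base).
gadgets = [0x004037AC, 0x00403B60, 0x004011C2, 0x0040AB70]
rvas = [g - IMAGE_BASE for g in gadgets]

# If ASLR actually rebased the image, the hardcoded gadget values would
# point into the wrong memory; a working exploit would instead need the
# rebased address plus each RVA.
randomized_base = 0x7FF6_1234_0000  # hypothetical rebased load address
relocated = [randomized_base + r for r in rvas]
```

Because vlc.exe always loads at 0x00400000, the exploit can skip any information leak and use the fixed addresses directly.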

PE File Header Investigation

Using Process Explorer to check an EXE for ASLR compatibility is a convenient shortcut. But clearly something is wrong in the case of VLC. By using the Microsoft Visual Studio tool dumpbin, we can check the PE headers of the vlc.exe binary. This is where it gets interesting, as shown in this figure:


Here we have an executable file that has two contradicting properties:

  • Dynamic base - This property indicates that the binary was linked with the /DYNAMICBASE flag, which opts the binary in to ASLR randomization by the OS.
  • Relocations stripped - This property indicates that the binary has had its relocation table removed. A traditional Windows executable with no relocation table cannot be randomized by Windows ASLR. Even if "mandatory ASLR" is enforced via EMET or Windows Defender Exploit Guard, these executables cannot be randomized by Windows.
    (Note: The exceptions to this rule are .NET executables. If executed on a Windows 8 or newer platform, a .NET executable with a stripped relocation table will still be relocated. On the Windows 7 platform, these .NET executables do not receive any benefit from ASLR.)

These two attributes of an executable file on Windows are at odds with each other. If a tool is simply looking for the presence of "Dynamic base" to determine whether an executable is using ASLR, this check alone is not sufficient. Microsoft Sysinternals Process Explorer versions 16.12 and earlier make this mistake. The impact of this flaw is that an analyst may be given a false sense of security about the exploit mitigations present in a product.
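A sufficient check must look at both flags. The following standard-library Python sketch (an illustration written for this point, not the author's tool) reads the two fields straight from the raw PE headers, using field offsets from the PE/COFF specification:

```python
import struct

IMAGE_FILE_RELOCS_STRIPPED = 0x0001            # COFF Characteristics bit
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # Optional-header DllCharacteristics bit

def check_aslr(data):
    """Inspect raw PE file bytes and report effective ASLR compatibility."""
    if data[:2] != b"MZ":
        raise ValueError("not a PE file")
    # Offset of the PE signature is stored at 0x3C in the DOS header.
    e_lfanew = struct.unpack_from("<I", data, 0x3C)[0]
    if data[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("missing PE signature")
    # COFF header follows the 4-byte signature; Characteristics is at
    # offset 18 within it.
    characteristics = struct.unpack_from("<H", data, e_lfanew + 4 + 18)[0]
    # Optional header follows the 20-byte COFF header; DllCharacteristics
    # sits at offset 70 in both PE32 and PE32+ layouts.
    dll_chars = struct.unpack_from("<H", data, e_lfanew + 4 + 20 + 70)[0]
    dynamic_base = bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)
    relocs_stripped = bool(characteristics & IMAGE_FILE_RELOCS_STRIPPED)
    return {
        "dynamic_base": dynamic_base,
        "relocs_stripped": relocs_stripped,
        # ASLR only works if the flag is set AND relocations survive.
        "effective_aslr": dynamic_base and not relocs_stripped,
    }
```

A vlc.exe built with the flawed mingw-w64 defaults would report dynamic_base and relocs_stripped both true, and therefore no effective ASLR.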

mingw-w64 and its Bad Defaults

How did we end up with a VLC executable that contains two different properties that contradict each other? To find the answer, look at the build chain that was used to compile the code: mingw-w64.

The build chain mingw-w64 allows the GCC compiler and its related tools to create executable code that functions on Microsoft Windows. Because GCC runs on multiple platforms, this means that I can use mingw-w64 on a Linux platform to create a Windows EXE file, for example.

The problem is that mingw-w64 doesn't produce a relocation table when linking executables with the "Dynamic base" option.

  • This limitation has been publicly known since 2013. This conversation thread ended in June 2013 with the comment: "Just a gentle reminder - any chance someone has been able to look at this?"
  • In 2014, a developer with the Tor Project suggested adding an optional flag to generate the relocation section required for proper ASLR functionality. The last activity on this ticket was from September 17, 2014, where the binutils developer requested some additional modifications to be made before the changes would be accepted.
  • In 2015, a ticket was filed for binutils that outlines several bad default settings for mingw-w64. One of the suggestions in this ticket is, "Never strip the reloc section."

A Workaround to Get Proper ASLR Compatibility with mingw-w64

Until binutils is updated to fix the generation of Windows code with mingw-w64, developers must take specific actions to produce executable files that are compatible with ASLR. While testing a simple "Hello world" application produced with mingw-w64, the addition of the following line before the main function resulted in an executable file that retained its relocation table:


Apparently, exporting a function in an executable file causes mingw-w64 to link the executable in a way that does not strip the relocation table. With the relocation table present, Windows can properly randomize the load address of the executable using ASLR.

Conclusion and Recommendations

VideoLAN was a victim of using a toolchain (mingw-w64) that has insecure defaults. Because the VLC executable wasn't able to be randomized with ASLR, this allowed a fully functional and reliable exploit to be written for one of its vulnerabilities. Had mingw-w64 used secure default settings, which in this case means not stripping the relocation table of an executable that claims to be ASLR compatible, the public exploit that uses fixed ROP offsets would not be viable.

This situation is made worse by the fact that several tools that check for ASLR compatibility assume that the presence of the "Dynamic base" PE header is sufficient for ASLR compatibility. Because Process Explorer does not check that a relocation table is present, its indication of "ASLR" for a running process may be incorrect, and it may provide a false sense of security.

Note that a missing relocation table is not the only way executable code can fail to be compatible with ASLR despite carrying the “Dynamic base” PE header. Some executable content protected by WIBU-Systems tools appears to be ASLR compatible, judging by both the presence of the “Dynamic base” header and the relocation table. However, such code may not be compatible with ASLR in practice. For example, WIBU IxProtector encrypts functions, and as a result the code produced may not be properly formed as far as Windows is concerned. Windows software defenses may not behave as designed in cases like this.

Recommendation for Security Researchers:

  • Unless you are certain that your security tool is checking for the presence of both the "Dynamic base" PE header and the presence of a relocation table, do not rely on that tool's indication of whether an executable is using ASLR or not. The PE headers, which are viewable via dumpbin and other tools, are the definitive source of this information.
  • Some malformed executable code may appear to be ASLR compatible at a glance but in practice is not. For this reason, it is important to verify the addresses at which executable code (both EXEs and DLLs) is actually loaded. With properly functioning ASLR, code will be loaded at an address other than the “image base” statically specified in the PE header.

Recommendation for Developers:

  • Ensure that your executables contain both the "Dynamic base" PE header, and also contain a relocation table. Without a relocation table, vulnerabilities in any executable that you produce will be easier to exploit due to ASLR incompatibility. If you are using the mingw-w64 toolchain, the __declspec(dllexport) workaround outlined above appears to produce ASLR-compatible executables.
  • If you are using software obfuscation tools, such as WIBU IxProtector, be sure to verify that the code being produced is compatible with Windows ASLR. You can verify this by confirming that the executable code is loaded at an address other than the image base specified within the binary itself. Code without ASLR protection is more vulnerable to exploitation by attackers.

Whether you are a security researcher, a developer, or anybody else interested in the security of executables on your system, you can check for PE files that contain the problematic combination of "Dynamic base" and a stripped relocation table using the following Python script that I have created:


This tool lists problematic code files, along with the image base that is specified in each file.
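The core of such a check needs nothing beyond the Python standard library. The sketch below is an illustration, not the script itself: it reads the two relevant PE fields directly, with offsets following the PE/COFF layout and flag constants taken from winnt.h:

```python
import struct

# Flag constants from winnt.h
IMAGE_FILE_RELOCS_STRIPPED = 0x0001             # COFF header Characteristics
IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # Optional header DllCharacteristics

def aslr_status(data: bytes):
    """Return (dynamic_base, relocs_stripped) for a PE image in memory."""
    if data[:2] != b"MZ":
        raise ValueError("not an MZ/PE file")
    # e_lfanew at offset 0x3C points to the "PE\0\0" signature
    (pe_off,) = struct.unpack_from("<I", data, 0x3C)
    if data[pe_off:pe_off + 4] != b"PE\0\0":
        raise ValueError("missing PE signature")
    coff = pe_off + 4  # 20-byte COFF header follows the signature
    (characteristics,) = struct.unpack_from("<H", data, coff + 18)
    # DllCharacteristics sits at offset 70 of the optional header
    # in both the PE32 and PE32+ layouts.
    (dll_chars,) = struct.unpack_from("<H", data, coff + 20 + 70)
    return (bool(dll_chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE),
            bool(characteristics & IMAGE_FILE_RELOCS_STRIPPED))

def is_problematic(path: str) -> bool:
    """True if a file claims "Dynamic base" but has no relocations."""
    with open(path, "rb") as f:
        dynamic_base, relocs_stripped = aslr_status(f.read())
    return dynamic_base and relocs_stripped
```

A file that reports `(True, True)`, claiming "Dynamic base" while its relocation table is stripped, is exactly the problematic combination described above.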

Additional information about the mingw-w64 issue is available in vulnerability note VU#307144.

Automatically Stealing Password Hashes with Microsoft Outlook and OLE

Back in 2016, a coworker of mine was using CERT BFF, and he asked how he could turn a seemingly exploitable crash in Microsoft Office into a proof-of-concept exploit that runs calc.exe. Given Address Space Layout Randomization (ASLR) on modern Windows platforms, this isn't as easy as it used to be. One strategy to bypass ASLR that is possible in some cases is to leverage a memory leak to disclose memory addresses. Another strategy that is sometimes possible is to brute-force attack the vulnerability to make ASLR irrelevant. In this case, however, the path of least resistance was to simply use Rich Text Format (RTF) content to leverage Object Linking and Embedding (OLE) to load a library that doesn't opt in to using ASLR.

As I started pulling the thread of RTF and OLE, I uncovered a weakness that is much more severe than an ASLR bypass. Continue reading to follow my path of investigation, which leads to crashed Windows systems and stolen passwords.

Before getting into the details of my analysis, let's cover some of the basics involved:


OLE is a technology that Microsoft released in 1990 that allows content from one program to be embedded into a document handled by another program. For example, in Windows 3.x Microsoft Write provides the ability to embed a "Paintbrush Picture" object, as well as a "Sound" or a "Package." These are the three available OLE objects that can be inserted into a Write document:


Once inserted, we now have a Write document that has embedded Paintbrush content. Neat!


Server Message Block (SMB)

SMB is a protocol that extends the DOS API (Interrupt 21h) for local file access to include network capabilities. That is, a file on a remote server can be accessed in much the same way that a file on a local drive can be accessed. Microsoft included SMB capabilities in Windows for Workgroups 3.1, which was released in 1992.

Microsoft Outlook

Microsoft Outlook is an email client that comes with Microsoft Office. Outlook includes the ability to send rich text (RTF) email messages. These messages can include OLE objects in them.


When viewed with the Microsoft Outlook client, these rich text email messages are displayed in all of their rich-text glory.

Putting the Pieces Together

You may already have an idea of where I am going. If it's not clear, let's summarize what we know so far:

  1. Microsoft Outlook can create and render RTF email messages.
  2. RTF documents (including email messages) can include OLE objects.
  3. Due to SMB, OLE objects can live on remote servers.
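On the defensive side, those same three facts suggest a simple triage check: flag rich text that both embeds an OLE object and references a UNC path. The Python sketch below is a toy illustration only; real samples frequently hex-encode the target inside \objdata, which this naive pattern will not catch:

```python
import re

# A \object control word marks an embedded OLE object in RTF.
OBJECT_RE = re.compile(rb"\\object\b")
# A UNC-style \\server\share target; RTF escaping doubles backslashes,
# so accept runs of two or more.
UNC_RE = re.compile(rb"\\{2,}[\w.-]+(?:\\+[\w$.~-]+)+")

def remote_ole_targets(rtf: bytes):
    """Return plainly visible UNC paths in an RTF that embeds OLE objects."""
    if not OBJECT_RE.search(rtf):
        return []
    return [m.decode("ascii", "replace") for m in UNC_RE.findall(rtf)]
```

Anything this returns for an inbound message deserves a closer look before the message reaches an Outlook preview pane.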

Observing Microsoft Outlook Behavior

HTML email messages on the Internet are much more common than rich text email, so let's first look at the behavior of Microsoft Outlook when viewing an HTML message that has a remote image on a web server:


Here we can see that the remote image is not loaded automatically. Why is this the case? The reason is that if Outlook allowed remote images to load automatically, it could leak the client system's IP address and other metadata, such as the time that an email is viewed. This restriction helps protect against web bugs being used in email messages.

Now let's try the same sort of message, except in rich text format. And rather than a remote image file, it's an OLE document that is loaded from a remote SMB server:


Well this is unexpected. Outlook blocks remote web content due to the privacy risk of web bugs. But with a rich text email, the OLE object is loaded with no user interaction. Let's look at the traffic in Wireshark to see what exactly is being leaked as the result of this automatic remote object loading:


Here we can see that an SMB connection is being automatically negotiated. The only action that triggers this negotiation is Outlook previewing an email that is sent to it. In the screenshot above, I can see that the following things are being leaked:

  1. IP address
  2. Domain name
  3. User name
  4. Host name
  5. SMB session key

A remote OLE object in a rich text email message functions like a web bug on steroids! At this point in my analysis in late 2016, I notified Microsoft about the issue.

Impacts of an OLE Web Bug

This bug results in two main problems, described below.

Crashing the Client

We know at this point that we can trigger Outlook to initiate an SMB connection to an arbitrary host. On February 1, 2017, a Windows SMB client vulnerability (VU#867968) was disclosed. Upon connecting to a malicious SMB server, Windows would crash. What if we created a rich text email in Outlook, but pointed it at an SMB server that exploits this vulnerability?


Once Outlook previews such an email message, Windows will crash with a Blue Screen of Death (BSOD) such as the above. And beyond that, every time Outlook is launched after encountering such a scenario, Windows will BSOD crash again because Outlook remembers the last email message that was open. This is quite a denial of service attack. At this point I shared the attack details with Microsoft. Eventually Microsoft fixed this SMB vulnerability, and luckily we did not hear about any mass email-based exploitation of it.

Collecting Password Hashes

SMB vulnerabilities aside, I decided to dig deeper into the risks of a client attempting to initiate an SMB connection to an attacker's server. Based on what I saw in Wireshark, I already knew that it leaked much more than just the IP address of the victim. This time I used both Responder and John the Ripper.

First I sent an RTF email that has a remote OLE object that points to a system running Responder. On the Responder system I saw the following as soon as the email was previewed in Outlook:

[SMB] NTLMv2-SSP Client   :
[SMB] NTLMv2-SSP Username : DESKTOP-V26GAHF\test_user
[SMB] NTLMv2-SSP Hash : test_user::DESKTOP-V26GAHF:1122334455667788:571EE693342B161C50A73D502EB49B5A:010100000000000046E1992B4BB2D301FFADACA3241B6E090000000002000A0053004D0042003100320001000A0053004D0042003100320004000A0053004D0042003100320003000A0053004D0042003100320005000A0053004D004200310032000800300030000000000000000100000000200000D3BDB30B62A8937256327776471E072C7C6DE9F4F98458D1FEA17CBBB6AFBA770A001000000000000000000000000000000000000900280063006900660073002F003100390032002E003100360038002E003100350033002E003100330038000000000000000000

Here we have an NTLMv2 hash that we can hand off to John the Ripper. As shown below, I copied and pasted the hash into a file called test_user.john:

john test_user.john
Using default input encoding: UTF-8
Loaded 1 password hash (netntlmv2, NTLMv2 C/R [MD4 HMAC-MD5 32/64])
Will run 24 OpenMP threads
Press 'q' or Ctrl-C to abort, almost any other key for status
test (test_user)
Session completed
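The netntlmv2 line that Responder emits (and john consumes) is simply colon-delimited fields, so a few lines of Python show exactly what was leaked; the field names below follow the john netntlmv2 input format:

```python
def parse_netntlmv2(line: str) -> dict:
    """Split a john-format NTLMv2 line into its component fields."""
    user, rest = line.split("::", 1)
    domain, server_challenge, nt_proof, blob = rest.split(":", 3)
    return {
        "user": user,                          # account name sent by the client
        "domain": domain,                      # domain or workstation name
        "server_challenge": server_challenge,  # 8-byte challenge from the server
        "nt_proof": nt_proof,                  # HMAC-MD5 over challenge + blob
        "blob": blob,                          # timestamp, target info, etc.
    }
```

Every one of these fields crossed the network merely because Outlook previewed the message.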

In approximately 1 second, I was able to determine that the password for the user "test_user" who opened my RTF email was test. The hash for a stronger password (longer and more types of characters) will take longer to crack. I've performed some basic testing on how long it takes to bruteforce the entire solution space of an 8-character password on a single mid-range GPU (NVIDIA GTX 960):

  • Lowercase letters - 16 minutes
  • Mixed-case letters - 3 days
  • Mixed-case letters and digits - 12 days
  • Mixed-case letters, digits, and symbols - 1 year

The statistics above are the worst-case scenarios for bruteforce cracking randomly-generated passwords. Any passwords that are words (like "test") or patterns (like "asdf") are much easier to crack than randomly-generated passwords, since most cracking tools have rules to check for such things.

Also, an attacker may have access to systems with multiple high-end GPUs that can cut their times into fractions of the above numbers. Each character that is added to the password length has an exponential effect on the time it takes to bruteforce the password, though. For example, while my mid-range GPU takes 1 year to exhaust the entire solution space of an 8-character password (with mixed case letters, digits, and symbols), increasing the password length to 9 characters also increases the time it takes to exhaust the solution space to 84 years!
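The scaling is easy to sanity-check: the worst-case search space is charset_size ** length, so every additional character multiplies the work by the size of the character set. A small back-of-the-envelope helper (the guess rate here is a placeholder, not a measured GPU benchmark):

```python
def search_space(charset_size: int, length: int) -> int:
    """Number of candidate passwords of exactly `length` characters."""
    return charset_size ** length

def worst_case_seconds(charset_size: int, length: int,
                       guesses_per_second: float) -> float:
    """Seconds to exhaust the full space at a fixed guess rate."""
    return search_space(charset_size, length) / guesses_per_second

# Placeholder rate -- NOT a measured GPU figure.
RATE = 1e9

# With roughly 84 printable characters, a ninth character
# multiplies the worst-case work by 84:
assert search_space(84, 9) == 84 * search_space(84, 8)
```

With an 84-character printable set, this is exactly the jump from 1 year to 84 years between 8 and 9 characters noted above.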

Microsoft's Fix

Microsoft released a fix for the issue of Outlook automatically loading remote OLE content (CVE-2018-0950). Once this fix is installed, previewed email messages will no longer automatically connect to remote SMB servers. This fix helps to prevent the attacks outlined above. It is important to realize that even with this patch, a user is still a single click away from falling victim to the types of attacks described above. For example, if an email message has a UNC-style link that begins with "\\", clicking the link initiates an SMB connection to the specified server.


Additional details are available in CERT Vulnerability note VU#974272.

Conclusion and Recommendations

On the Windows platform, there are several ways to cause the client to initiate an SMB connection. Any time an SMB connection is initiated, the client's IP address, domain name, user name, host name, and password hash may be leaked. To help protect against attacks that involve causing a victim's machine to initiate an SMB connection, please consider the following mitigations:

  • Install Microsoft update CVE-2018-0950. This update prevents automatic retrieval of remote OLE objects in Microsoft Outlook when rich text email messages are previewed. If a user clicks on an SMB link, however, this behavior will still cause a password hash to be leaked.
  • Block inbound and outbound SMB connections at your network border. This can be accomplished by blocking ports 445/tcp, 137/tcp, 139/tcp, as well as 137/udp and 139/udp.
  • Block NTLM Single Sign-on (SSO) authentication, as specified in Microsoft Security Advisory ADV170014. Starting with Windows 10 and Server 2016, if the EnterpriseAccountSSO registry value is created and set to 0, SSO authentication will be disabled for external and unspecified network resources. With this registry change, accessing SMB resources is still allowed, but external and unspecified SMB resources will require the user to enter credentials as opposed to automatically attempting to use the hash of the currently logged-on user.
  • Assume that at some point your client system will attempt to make an SMB connection to an attacker's server. For this reason, make sure that any Windows login has a sufficiently complex password so that it is resistant to cracking. The following two strategies can help achieve this goal:
    1. Use a password manager to help generate complex random passwords. This strategy can help ensure the use of unique passwords across resources that you use, and it can ensure that the passwords are of a sufficient complexity and randomness.
    2. Use longer passphrases (with mixed-case letters, numbers and symbols) instead of passwords. This strategy can produce memorable credentials that do not require additional software to store and retrieve.

The CERT Guide to Coordinated Vulnerability Disclosure

We are happy to announce the release of the CERT® Guide to Coordinated Vulnerability Disclosure (CVD). The guide provides an introduction to the key concepts, principles, and roles necessary to establish a successful CVD process. It also provides insights into how CVD can go awry and how to respond when it does so.

As a process, CVD is intended to minimize adversary advantage while an information security vulnerability is being mitigated. And it is important to recognize that CVD is a process, not an event. Releasing a patch or publishing a document are important events within the process, but do not define it.

CVD participants can be thought of as repeatedly asking these questions: What actions should I take in response to knowledge of this vulnerability in this product? Who else needs to know what, and when do they need to know it? The CVD process for a vulnerability ends when the answers to these questions are nothing, and no one.

If we have learned anything in nearly three decades of coordinating vulnerability reports at the CERT/CC, it is that there is no single right answer to many of the questions and controversies surrounding the disclosure of information about software and system vulnerabilities. The CERT Guide to CVD is a summary of what we know about a complex social process that surrounds humans trying to make the software and systems they use more secure. It's about what to do (and what not to) when you find a vulnerability, or when you find out about a vulnerability. It's written for vulnerability analysts, security researchers, developers, and deployers; it's for both technical staff and their management alike. While we discuss a variety of roles that play a part in the process, we intentionally chose not to focus on any one role; instead we wrote for any party that might find itself engaged in coordinating a vulnerability disclosure.

In a sense, this report is a travel guide for what might seem a foreign territory. Maybe you've passed through once or twice. Maybe you've only heard about the bad parts. You may be uncertain of what to do next, nervous about making a mistake, or even fearful of what might befall you. If you count yourself as one of those individuals, we want to reassure you that you are not alone; you are not the first to experience events like these or even your reaction to them. We're locals. We've been doing this for a while. Here's what we know.


Security vulnerabilities remain a problem for vendors and deployers of software-based systems alike. Vendors play a key role by providing fixes for vulnerabilities, but they have no monopoly on the ability to discover vulnerabilities in their products and services. Knowledge of those vulnerabilities can increase adversarial advantage if deployers are left without recourse to remediate the risks they pose. Coordinated Vulnerability Disclosure (CVD) is the process of gathering information from vulnerability finders, coordinating the sharing of that information between relevant stakeholders, and disclosing the existence of software vulnerabilities and their mitigations to various stakeholders including the public. The CERT Coordination Center has been coordinating the disclosure of software vulnerabilities since its inception in 1988. This document is intended to serve as a guide to those who want to initiate, develop, or improve their own CVD capability. In it, the reader will find an overview of key principles underlying the CVD process, a survey of CVD stakeholders and their roles, and a description of CVD process phases, as well as advice concerning operational considerations and problems that may arise in the provision of CVD and related services.

The CERT® Guide to Coordinated Vulnerability Disclosure is available in the SEI Digital Library.

The Consequences of Insecure Software Updates

In this blog post, I discuss the impact of insecure software updates as well as several related topics, including mistakes made by software vendors in their update mechanisms, how to verify the security of a software update, and how vendors can implement secure software updating mechanisms.

Earlier in June of this year, I published two different vulnerability notes about applications that update themselves insecurely:

  • VU#846320 - Samsung Magician fails to update itself securely
  • VU#489392 - Acronis True Image fails to update itself securely

In both of these cases, the software apparently assumes that it is operating on a trusted network and that no attacker would tamper with the process of checking for or downloading updates. That is not a valid assumption to make.

Consider the situation where you think you're on your favorite coffee shop's WiFi network. If you have even a single application with an insecure update mechanism, you are at risk of falling victim to what's known as an evilgrade attack. That is, you may inadvertently install code from an attacker during the software update process. Given that software updaters usually run with administrative privileges, the attacker's code will subsequently run with administrative privileges as well. In some cases, this can happen with no user interaction. This risk has been publicly known since 2010.

The dangers of an insecure update process are much more severe than an individual user being compromised while on an untrusted network. As the recent ExPetr/NotPetya/Petya wiper malware has shown, the impact of insecure updates can be catastrophic. Let me expand on this topic by discussing several important aspects of software updates:

  1. What mistakes can software vendors make when designing a software update mechanism?
  2. How can I verify that my software performs software updates securely?
  3. How should vendors implement a secure software update mechanism?

Insecure Software Updates and Petya

Microsoft has reported that the Petya malware originated from the update mechanism for the Ukrainian tax software called MEDoc. Because of the language barrier, I was not able to exercise the software thoroughly. However, with the help of CERT Tapioca, I was able to observe the network traffic used for software updates. This software makes two critical mistakes in its update mechanism: (1) it uses the insecure HTTP protocol, and (2) its updates are not digitally signed.

These are the same two mistakes that the vulnerable Samsung Magician software (VU#846320) and the vulnerable Acronis True Image software (VU#489392) contain. Let's dig in to the two main flaws in software updaters:

Use of an Insecure Transport Layer

HTTP is used in a number of software update mechanisms, as opposed to the more-secure HTTPS protocol. What makes HTTPS more secure? Properly-configured HTTPS communications attempt to achieve three goals:

  1. Confidentiality - Traffic is protected against eavesdropping.
  2. Integrity - Traffic content is protected against modification.
  3. Authenticity - The client verifies the identity of the server being contacted.

Let's compare an update mechanism that uses properly configured HTTPS with one that uses HTTP, or HTTPS without certificate validation:

Properly-configured HTTPS:

  • Confidentiality: Software update request content is not visible to attackers.
  • Integrity: Software update requests cannot be modified by an attacker.
  • Authenticity: Software update traffic cannot be redirected by an attacker.

HTTP, or HTTPS without certificate validation:

  • Confidentiality: An attacker can view the contents of software update requests.
  • Integrity: An attacker can modify update requests or responses to change the update behavior or outcome.
  • Authenticity: An attacker can intercept and redirect software update requests to a malicious server.

Based on its inability to address these three goals, it is clear that HTTP is not a viable solution for software update traffic.

Lack of Digital Signatures for Updates

If software updates are not digitally signed, or if the software update mechanism does not validate signatures, an attacker can replace a software update with malware. A simple test that I like to do when testing software updaters is to see whether the software will download and install calc.exe from Windows instead of the expected update. If calc.exe pops up when the update occurs, we have evidence of a vulnerable update mechanism!

Verifying Software Updaters

CERT Tapioca can be used to easily observe if an application uses HTTP or non-validated HTTPS traffic for updates. Any update traffic that appears in the Tapioca mitmproxy window is an indication that the update uses an insecure transport layer. As long as the mitmproxy root CA certificate is not installed on the client system, the only traffic that will appear in the mitmproxy window will be HTTP traffic and HTTPS traffic that is not checked for a trusted root CA, both of which are insecure.

Determining whether an application validates the digital signatures of updates requires a little more work. Essentially, you need to intercept the update and redirect it to an update under your control that is either unsigned or signed by another vendor.

A "Belt and Suspenders" Approach to Updates

An update mechanism that uses HTTPS ensures that it is communicating only with a legitimate update server, right? And shouldn't that be enough to ensure a secure update? Well, not exactly.

First, HTTPS is not without flaws. There have been a number of flaws in various protocols and ciphersuites supported by HTTPS, including FREAK, DROWN, BEAST, CRIME, BREACH, and POODLE. These flaws, since fixed in HTTPS communications that use modern TLS protocols, weakened the confidentiality, integrity, and authenticity goals that HTTPS aims to provide. It is also important to realize that even without such flaws, HTTPS without pinning can ensure website authenticity only to the level that the PKI and certificate authority architecture allow. See Moxie Marlinspike's post SSL and the Future of Authenticity for more details.

HTTPS flaws and other weaknesses aside, using HTTPS without signature-validated updates leaves a large open hole that can be attacked. What happens if an attacker can compromise an update server? If software update signatures are not validated, a compromise of a single server can result in malicious software being deployed to all clients. There is evidence that this is exactly what happened in the case of MEDoc.


Using HTTPS can help to ensure both the transport layer security as well as the identity of the update server. Validating digital signatures of the updates themselves can help limit the damage even if the update server is compromised. Both of these aspects are vital to the operation of a secure software update mechanism.

Conclusion and Recommendations

Despite evilgrade being a publicly-known attack for about seven years now, it is still a problem. Because of the nature of how software is traditionally installed on Windows and Mac systems, we live in a world where every application may end up implementing its own auto-update mechanism independently. And as we are seeing, software vendors are frequently making mistakes in these updaters. Linux and other operating systems that use package-management systems appear to be less affected by this problem, due to the centralized channel through which software is installed.

Recommendations for Users

So what can end users do to protect themselves? Here are my recommendations:

  • Especially when on untrusted networks, be wary of automatic updates. When possible, retrieve updates using your web browser from the vendor's HTTPS website. Popular applications from major vendors are less likely to contain vulnerable update mechanisms, but any software update mechanism has the possibility of being susceptible to attack.
  • For any application you use that contains an update mechanism, verify that the update is at least not vulnerable to a MITM attack by performing the update with CERT Tapioca interposed as the machine's internet source. Virtual machines make this testing easier.

Recommendations for Developers

What can software developers do to provide software updates that are secure? Here are my recommendations:

  • If your software has an update mechanism, ensure that it both:
    • Uses properly validated HTTPS for a communications channel instead of HTTP
    • Validates digital signatures of retrieved updates before installing them
  • To protect against an update server compromise affecting your users, ensure that the code signing key is not present on the update server itself. To increase protection against malicious updates, ensure the code signing key is offline, or otherwise unavailable to an attacker.
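A minimal Python sketch of those two recommendations follows. The urllib call validates the server certificate by default, and as a standard-library stand-in for a real vendor signature check (such as Authenticode), the sketch pins a SHA-256 digest of the expected payload; the scheme check and digest pinning are illustrative choices, not a complete updater:

```python
import hashlib
import hmac
import ssl
import urllib.request

def verify_digest(payload: bytes, expected_sha256: str) -> bool:
    """Check a downloaded update against a pinned SHA-256 digest.

    NOTE: a production updater should verify a real digital signature
    (e.g. Authenticode) instead; a pinned digest is only the simplest
    stdlib stand-in for "never install unverified bytes".
    """
    actual = hashlib.sha256(payload).hexdigest()
    return hmac.compare_digest(actual, expected_sha256.lower())

def fetch_update(url: str) -> bytes:
    """Download an update over HTTPS with full certificate validation."""
    if not url.startswith("https://"):
        raise ValueError("refusing non-HTTPS update URL")
    # create_default_context() verifies the certificate chain and hostname.
    ctx = ssl.create_default_context()
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()
```

An updater built this way fails closed: a downgrade to HTTP, an invalid certificate, or a tampered payload all abort the install rather than running attacker-supplied code.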

The Twisty Maze of Getting Microsoft Office Updates

While investigating the fixes for the recent Microsoft Office OLE vulnerability, I encountered a situation that led me to believe that Office 2016 was not properly patched. However, after further investigation, I realized that the update process of Microsoft Update has changed. If you are not aware of these changes, you may end up with a Microsoft Office installation that is missing security updates. With the goal of preventing others from making similar mistakes as I have, I outline in this blog post how the way Microsoft Office receives updates has changed.

The Bad Old Days

Let's go back about 15 years in Windows computing to the year 2002. You've got a shiny new desktop with Windows XP and Office XP as well. If you knew where the option was in Windows, you could turn on Automatic Updates to download and notify you when OS updates are made available. What happens when there is a security update for Office? If you happened to know about the OfficeUpdate website, you could run an ActiveX control to check for Microsoft Office updates. Notice that the Auto Update link is HTTP instead of HTTPS. These were indeed dark times. But we had Clippy to help us through it!


Microsoft Update: A New Hope

Let's fast-forward to the year 2005. We now have Windows XP Service Pack 2, which enables a firewall by default. Windows XP SP2 also encourages you to enable Automatic Updates for the OS. But what about our friend Microsoft Office? As it turns out, an enhanced version of Windows Update, called Microsoft Update, was also released in 2005. The new Microsoft Update, instead of checking for updates for only the OS itself, now also checks for updates for other Microsoft software, such as Microsoft Office. If you enabled this optional feature, then updates for Microsoft Windows and Microsoft Office would be installed.

Microsoft Update in Modern Windows Systems

Enough about Windows XP, right? How does Microsoft Update factor into modern, supported Windows platforms? Microsoft Update is still supported through current Windows 10 platforms. But in each of these versions of Windows, Microsoft Update continues to be an optional component, as illustrated in the following screen shots for Windows 7, 8.1, and 10.

Windows 7



Once this dialog is accepted, we can now see that Microsoft Update has been installed. We will now receive updates for Microsoft Office through the usual update mechanisms for Windows.


Windows 8.1

Windows 8.1 has Microsoft Update built-in; however, the option is not enabled by default.


Windows 10

Like Windows 8.1, Windows 10 also includes Microsoft Update, but it is not enabled by default.


Microsoft Click-to-Run

Microsoft Click-to-Run is a feature where users "... don't have to download or install updates. The Click-to-Run product seamlessly updates itself in the background." The Microsoft Office 2016 installation that I obtained through MSDN is apparently packaged in Click-to-Run format. How can I tell this? If you view the Account settings in Microsoft Office, a Click-to-Run installation looks like this:


Additionally, you should notice a process called OfficeClickToRun.exe running:


Microsoft Office Click-to-Run and Updates

The interaction between a Click-to-Run version of Microsoft Office and Microsoft Update is confusing. For the past dozen years or so, when a Windows machine completed running Microsoft Update, you could be pretty sure that Microsoft Office was up to date. As a CERT vulnerability analyst, my standard process on a Microsoft patch Tuesday was to restore my virtual machine snapshots, run Microsoft Update, and then consider that machine to have fully patched Microsoft software.

I first noticed a problem when my "fully patched" Office 2016 system still executed calc.exe when I opened my proof-of-concept exploit for CVE-2017-0199. Only after digging into the specific version of Office 2016 that was installed on my system did I realize that it did not have the April 2017 update installed, despite having completed Microsoft Update and rebooting. After setting up several VMs with Office 2016 installed, I was frequently presented with a screen like this:


The problem here is obvious:

  • Microsoft Update is indicating that the machine is fully patched when it isn't.
  • The version of Office 2016 that is installed is from September 2015, which is outdated.
  • The above screenshot was taken on May 3, 2017, and shows updates being reported as unavailable when they actually were available.

I would love to have determined why my machines were not automatically retrieving updates. But unfortunately there appear to be too many variables at play to pinpoint the issue. All I can conclude is that my Click-to-Run installations of Microsoft Office did not receive updates for Microsoft Office 2016 until as late as 2.5 weeks after the patches were released to the public. And in the case of the April 2017 updates, there was at least one vulnerability that was being exploited in the wild, with exploit code being publicly available. This amount of time is a long window of exposure.

It is worth noting that the manual update button within the Click-to-Run Office 2016 installation does correctly retrieve and install updates. The problem I see here is that it requires manual user interaction to be sure that your software is up to date. Microsoft has indicated to me that this behavior is by design:

[Click-to-Run] updates are pushed automatically through gradual rollouts to ensure the best product quality across the wide variety of devices and configurations that our customers have.

Personally, I wish that the update paths for Microsoft Office were more clearly documented.

Update: April 11, 2018

Microsoft Office Click-to-Run updates are not necessarily released on the official Microsoft "Patch Tuesday" dates. For this reason, Click-to-Run Office users may have to wait additional time to receive security updates.

Conclusions and Recommendations

To prevent this problem from happening to you, I recommend that you do the following:

  • Enable Microsoft Update to ensure that you receive updates for software beyond just the core Windows operating system. This switch can be automated using the technique described here.
  • If you have a Click-to-Run version of Microsoft Office installed, be aware that it will not receive updates via Microsoft Update.
  • If you have a Click-to-Run version of Microsoft Office and want to ensure timely installation of security updates, manually check for updates rather than relying on the automatic update capability of Click-to-Run.
  • Enterprise customers should refer to Deployment guide for Office 365 ProPlus to ensure that updates for Click-to-Run installations meet their security compliance timeframes.
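If you would rather script the "am I actually patched?" check than trust the update UI, comparing the installed build string (visible under File > Account in a Click-to-Run installation) against a known-patched baseline is straightforward. Both version strings in this example are placeholders, not real patch baselines:

```python
def version_tuple(version: str) -> tuple:
    """Turn a dotted build string like '16.0.4229.1024' into a sortable tuple."""
    return tuple(int(part) for part in version.split("."))

def is_patched(installed: str, baseline: str) -> bool:
    """True if the installed build is at or beyond the patched baseline."""
    return version_tuple(installed) >= version_tuple(baseline)

# Example with placeholder build numbers:
assert not is_patched("16.0.4229.1024", "16.0.4639.1000")
```

Tuple comparison sidesteps the classic string-comparison trap where "16.0.10000.0" sorts before "16.0.4229.1024".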