
What You Need to Know about Cyberbullying


Have you noticed a decrease in your child's happiness or an increase in their anxiety? Cyberbullying might be the cause of these behavioral changes.

Bullying is no longer confined to school playgrounds and neighborhood alleys. Thanks to easy access to technology, it has long since moved into the online world. With Twitter, Snapchat, TikTok, Instagram, WhatsApp, and even standard SMS texts, emails, and instant messages, cyberbullies have an overwhelming number of technical avenues to exploit.

While cyberbullying can happen to anyone, studies have shown that teens are usually more susceptible to it. According to data from the Cyberbullying Research Center, the percentage of U.S. middle and high school students who have experienced cyberbullying at some point nearly doubled (from 19% to 37%) between 2007 and 2019.

Before you teach your kids how to respond to cyberbullying, it is important to know what it entails.

Check out our Cybersecurity Education Resources

What is Cyberbullying?

Cyberbullying is bullying that takes place over digital devices like cell phones, tablets, or computers. Even smaller devices like smartwatches and iPods can facilitate cyberbullying. Today, social media platforms act as a breeding ground for cyberbullying.

Cyberbullying usually begins with teasing that turns to harassment. From there it can evolve in many ways, such as impersonation and catfishing, doxxing, or even blackmail through the use of compromising photos.

Catfishing is the practice of creating a fake identity online and using it to lure people into a relationship. Teens often impersonate others online to humiliate their targets; this, too, is a form of cyberbullying.

Doxxing is a method of attack that involves searching for, collecting, and publishing personal or identifying information about someone on the internet.

Identifying the Warning Signs

When it comes to cyberbullying, just like traditional bullying, there are warning signs for parents to watch for in their child. Although the warning signs may vary, Nemours Children’s Health System has identified the most common ones as:

  • being upset or emotional during or after internet or phone time
  • being overly protective of their digital life and mobile devices
  • withdrawal from family members, friends, and activities
  • missing or avoiding school 
  • a dip in school performance
  • changes in mood, behavior, sleep, or appetite
  • suddenly avoiding the computer or cellphone
  • being nervous or jumpy when getting an instant message, text, or email
  • avoiding conversations about their cell phone activities

Remember, there are free software and apps available to help you restrict content, block domains, or even monitor your child’s online activity.

While having a child who is being cyberbullied is every parent’s nightmare, it’s equally important to understand if your child is cyberbullying others.

Do you believe your child is a cyberbully? That difficult and delicate situation needs its own blog post—but don’t worry, we have you covered.

You’ll also find many cyberbullying prevention and resolution resources on both federal and local levels, as well as support from parents going through similar issues on our community forum.

Preparing your kids for a world where cyberbullying is a reality isn’t easy, but it is necessary. By creating a safe space for your child to talk to you about cyberbullying, you’re setting the foundation to squash this problem quickly if it arises.


Securing Emerging Payment Channels


Securing emerging payment channels is a core pillar in the PCI Security Standards Council’s (PCI SSC) strategic framework, which guides how the Council achieves its mission and supports the needs of the global payments industry. In this interview with PCI SSC Standards Officer Emma Sutcliffe, we discuss this pillar and how it’s shaping Council priorities.

Cyber Security Roundup for November 2019

In recent years, politically motivated cyber-attacks during elections have become an expected norm, so it was no real surprise when the Labour Party reported it was hit with two DDoS cyber-attacks in the run-up to the UK general election, a story well publicised by the media. What wasn't well publicised, however, is that both the Conservative Party and the Liberal Democrats were also hit with cyber-attacks. These weren't nation-state-orchestrated attacks either; the black hat hacking group Lizard Squad, well known for its high-profile DDoS attacks, is believed to be the culprit.

The launch of Disney Plus didn't go exactly to plan: within hours of the streaming service going live, compromised Disney Plus account credentials were being sold on the black market for as little as £2.30 a pop. Disney suggested hackers had obtained the credentials from previous leaks, where customers had reused identical credentials on other compromised or insecure websites, and from keylogging malware. It's worth noting that Disney Plus doesn't use multi-factor authentication (MFA); in my view, implementing MFA to protect customer accounts would have prevented the vast majority of Disney Plus account compromises.

Trend Micro reported that an insider stole the details of around 100,000 customer accounts, data which cyber con artists then used to make convincing scam phone calls impersonating the company to a number of its customers. In a statement, Trend Micro said it determined the attack was an inside job: an employee used fraudulent means to access its customer support databases, retrieved the data, and then sold it on. "Our open investigation has confirmed that this was not an external hack, but rather the work of a malicious internal source that engaged in a premeditated infiltration scheme to bypass our sophisticated controls," the company said. The employee behind it was identified and fired, and Trend Micro said it is working with law enforcement in an ongoing investigation.

Security researchers found 4 billion records on 1.2 billion people on an unsecured Elasticsearch server. The personal information includes names, home and mobile phone numbers, and email addresses, along with what may be information scraped from LinkedIn, Facebook, and other social media sources.

T-Mobile reported a data breach affecting some of its prepaid account customers. A T-Mobile spokesman said, "Our cybersecurity team discovered and shut down malicious, unauthorized access to some information related to your T-Mobile prepaid wireless account. We promptly reported this to authorities."

A French hospital was hit hard by a ransomware attack that caused "very long delays in care". According to a spokesman, medical staff at Rouen University Hospital Centre (CHU) abandoned PCs the ransomware had rendered unusable and returned to the "old-fashioned method of paper and pencil". No details about the strain of ransomware have been released.

Microsoft released patches for 74 vulnerabilities in November, including 13 rated as critical. One of these was an Internet Explorer vulnerability (CVE-2019-1429) known to be actively exploited via malicious websites.

It was a busy month for blog articles and threat intelligence news, all are linked below.

BLOG
NEWS
VULNERABILITIES AND SECURITY UPDATES
AWARENESS, EDUCATION AND THREAT INTELLIGENCE
HUAWEI NEWS AND THREAT INTELLIGENCE

An Update on Android TLS Adoption

Posted by Bram Bonné, Senior Software Engineer, Android Platform Security & Chad Brubaker, Staff Software Engineer, Android Platform Security


Android is committed to keeping users, their devices, and their data safe. One of the ways that we keep data safe is by protecting network traffic that enters or leaves an Android device with Transport Layer Security (TLS).

Android 7 (API level 24) introduced the Network Security Configuration in 2016, allowing app developers to configure the network security policy for their app through a declarative configuration file. To ensure apps are safe, apps targeting Android 9 (API level 28) or higher automatically have a policy set by default that prevents unencrypted traffic for every domain.

Today, we’re happy to announce that 80% of Android apps are encrypting traffic by default. The percentage is even greater for apps targeting Android 9 and higher, with 90% of them encrypting traffic by default.

Percentage of apps that block cleartext by default.

Since November 1, 2019, all app updates, as well as all new apps on Google Play, must target at least Android 9. As a result, we expect these numbers to continue improving. Network traffic from these apps is secure by default, and any use of unencrypted connections is the result of an explicit choice by the developer.

The latest releases of Android Studio and Google Play’s pre-launch report warn developers when their app includes a potentially insecure Network Security Configuration (for example, when they allow unencrypted traffic for all domains or when they accept user provided certificates outside of debug mode). This encourages the adoption of HTTPS across the Android ecosystem and ensures that developers are aware of their security configuration.

Example of a warning shown to developers in Android Studio.

Example of a warning shown to developers as part of the pre-launch report.

What can I do to secure my app?

For apps targeting Android 9 and higher, the out-of-the-box default is to encrypt all network traffic in transit and trust only certificates issued by an authority in the standard Android CA set without requiring any extra configuration. Apps can provide an exception to this only by including a separate Network Security Config file with carefully selected exceptions.

If your app needs to allow traffic to certain domains, it can do so by including a Network Security Config file that only includes these exceptions to the default secure policy. Keep in mind that you should be cautious about the data received over insecure connections as it could have been tampered with in transit.

<network-security-config>
    <base-config cleartextTrafficPermitted="false" />
    <domain-config cleartextTrafficPermitted="true">
        <domain includeSubdomains="true">insecure.example.com</domain>
        <domain includeSubdomains="true">insecure.cdn.example.com</domain>
    </domain-config>
</network-security-config>

If your app needs to accept user-specified certificates for testing purposes (for example, connecting to a local server during testing), make sure to wrap your <certificates> element inside a <debug-overrides> element. This ensures the connections in the production version of your app are secure.

<network-security-config>
    <debug-overrides>
        <trust-anchors>
            <certificates src="user"/>
        </trust-anchors>
    </debug-overrides>
</network-security-config>

What can I do to secure my library?

If your library directly creates secure/insecure connections, make sure that it honors the app's cleartext settings by checking isCleartextTrafficPermitted before opening any cleartext connection.
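As a minimal sketch of what that check might look like in a library, the following Kotlin snippet uses the platform's NetworkSecurityPolicy API (available for per-host checks on API level 24 and higher). The helper function name and error handling are illustrative assumptions, not a prescribed pattern:

import android.security.NetworkSecurityPolicy
import java.io.IOException
import java.net.HttpURLConnection
import java.net.URL

// Illustrative helper: refuse to open a cleartext connection when the
// app's Network Security Config (or the platform default) forbids it.
fun openConnectionRespectingPolicy(url: URL): HttpURLConnection {
    val isCleartext = url.protocol.equals("http", ignoreCase = true)
    if (isCleartext &&
        !NetworkSecurityPolicy.getInstance().isCleartextTrafficPermitted(url.host)) {
        throw IOException("Cleartext traffic to ${url.host} is blocked by the app's policy")
    }
    return url.openConnection() as HttpURLConnection
}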

Android’s built-in networking libraries and other popular HTTP libraries such as OkHttp or Volley have built-in Network Security Config support.

Giles Hogben, Nwokedi Idika, Android Platform Security, Android Studio and Pre-Launch Report teams

Passwords: Our First Line of Defense

Darin Roberts // “Why do you recommend a 15-character password policy when (name your favorite policy here) recommends only 8-character minimum passwords?” I have had this question posed to me a couple of times in the very recent past. Two separate policies were shown to me when these questions were asked. First was […]


Excelerating Analysis – Tips and Tricks to Analyze Data with Microsoft Excel

Incident response investigations don’t always involve standard host-based artifacts with fully developed parsing and analysis tools. At FireEye Mandiant, we frequently encounter incidents that involve a number of systems and solutions that utilize custom logging or artifact data. Determining what happened in an incident involves taking a dive into whatever type of data we are presented with, learning about it, and developing an efficient way to analyze the important evidence.

One of the most effective tools to perform this type of analysis is one that is in almost everyone’s toolkit: Microsoft Excel. In this article we will detail some tips and tricks with Excel to perform analysis when presented with any type of data.

Summarizing Verbose Artifacts

Tools such as FireEye Redline include handy timeline features to combine multiple artifact types into one concise timeline. When we use individual parsers or custom artifact formats, it may be tricky to view multiple types of data in the same view. Normalizing artifact data with Excel to a specific set of easy-to-use columns makes for a smooth combination of different artifact types.

Consider trying to review parsed file system, event log, and Registry data in the same view using the following data.

$SI Created: 2019-10-14 23:13:04
$SI Modified: 2019-10-14 23:33:45
File Name: Default.rdp
File Path: C:\Users\attacker\Documents\
File Size: 485
File MD5: c482e563df19a401941c99888ac2f525
File Attributes: Archive
File Deleted: FALSE

Event Gen Time: 2019-10-14 23:13:06
Event ID: 4648
Event Message:
A logon was attempted using explicit credentials.

Subject:
   Security ID:  DomainCorp\Administrator
   Account Name:  Administrator
   Account Domain:  DomainCorp
   Logon ID:  0x1b38fe
   Logon GUID:  {00000000-0000-0000-0000-000000000000}
Account Whose Credentials Were Used:
   Account Name:  VictimUser
   Account Domain:  DomainCorp
   Logon GUID:  {00000000-0000-0000-0000-000000000000}
Target Server:
   Target Server Name: DestinationServer
   Additional Information:
Process Information:
   Process ID:  0x5ac
   Process Name:  C:\Program Files\Internet Explorer\iexplore.exe
Network Information:
   Network Address: -
   Port:   -
Event Category: Logon
Event User: Administrator
Event System: SourceSystem

KeyModified: 2019-10-14 23:33:46
Key Path: HKEY_USER\Software\Microsoft\Terminal Server Client\Servers\
KeyName: DestinationServer
ValueName: UsernameHInt
ValueText: VictimUser
Type: REG_SZ

Since these raw artifact data sets have different column headings and data types, they would be difficult to review in one timeline. If we format the data using Excel string concatenation, we can make the data easy to combine into a single timeline view. To format the data, we can use the “&” operator with a formula to join the information we may need into a “Summary” field.

An example formula to join the relevant file system data, using “ | ” as a delimiter between fields, could be “=D2 & " | " & C2 & " | " & E2 & " | " & F2 & " | " & G2 & " | " & H2”. Combining this summary with a “Timestamp” and “Timestamp Type” column will complete everything we need for streamlined analysis.

Timestamp: 2019-10-14 23:13:04
Timestamp Type: $SI Created
Event: C:\Users\attacker\Documents\ | Default.rdp | 485 | c482e563df19a401941c99888ac2f525 | Archive | FALSE

Timestamp: 2019-10-14 23:13:06
Timestamp Type: Event Gen Time
Event: 4648 | A logon was attempted using explicit credentials.

Subject:
   Security ID:  DomainCorp\Administrator
   Account Name:  Administrator
   Account Domain:  DomainCorp
   Logon ID:  0x1b38fe
   Logon GUID:  {00000000-0000-0000-0000-000000000000}
Account Whose Credentials Were Used:
   Account Name:  VictimUser
   Account Domain:  DomainCorp
   Logon GUID:  {00000000-0000-0000-0000-000000000000}
Target Server:
   Target Server Name: DestinationServer
   Additional Information:
Process Information:
   Process ID:  0x5ac
   Process Name:  C:\Program Files\Internet Explorer\iexplore.exe
Network Information:
   Network Address: -
   Port:   - | Logon | Administrator | SourceSystem

Timestamp: 2019-10-14 23:33:45
Timestamp Type: $SI Modified
Event: C:\Users\attacker\Documents\ | Default.rdp | 485 | c482e563df19a401941c99888ac2f525 | Archive | FALSE

Timestamp: 2019-10-14 23:33:46
Timestamp Type: KeyModified
Event: HKEY_USER\Software\Microsoft\Terminal Server Client\Servers\ | DestinationServer | UsernameHInt | VictimUser

After sorting by timestamp, we can see evidence of the “DomainCorp\Administrator” account connecting from “SourceSystem” to “DestinationServer” with the “DomainCorp\VictimUser” account via RDP across three artifact types.
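If scripting is preferred over spreadsheet work, the same normalize-and-sort technique translates directly to code. Below is a minimal Kotlin sketch of the idea (not a Mandiant tool); the data class, the hard-coded sample rows, and the output format are illustrative assumptions:

// Normalize heterogeneous artifact rows into (timestamp, type, summary)
// records and sort them into a single timeline, mirroring the "&"-based
// Summary column built in Excel above.
data class TimelineRow(val timestamp: String, val type: String, val summary: String)

fun main() {
    val fileFields = listOf("C:\\Users\\attacker\\Documents\\", "Default.rdp", "485",
        "c482e563df19a401941c99888ac2f525", "Archive", "FALSE")
    val registryFields = listOf(
        "HKEY_USER\\Software\\Microsoft\\Terminal Server Client\\Servers\\",
        "DestinationServer", "UsernameHInt", "VictimUser")

    val timeline = listOf(
        TimelineRow("2019-10-14 23:13:04", "\$SI Created", fileFields.joinToString(" | ")),
        TimelineRow("2019-10-14 23:33:45", "\$SI Modified", fileFields.joinToString(" | ")),
        TimelineRow("2019-10-14 23:33:46", "KeyModified", registryFields.joinToString(" | "))
    ).sortedBy { it.timestamp }  // "yyyy-MM-dd HH:mm:ss" strings sort chronologically

    timeline.forEach { println("${it.timestamp}  ${it.type}  ${it.summary}") }
}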

Time Zone Conversions

One of the most critical elements of incident response and forensic analysis is timelining. Temporal analysis will often turn up new evidence by identifying events that precede or follow an event of interest. Equally critical is producing an accurate timeline for reporting. Timestamps and time zones can be frustrating, and things can get confusing when the systems being analyzed span multiple time zones. Mandiant tracks all timestamps in Coordinated Universal Time (UTC) format in its investigations to eliminate any confusion over time zones and time adjustments such as daylight saving and regional summer time.

Of course, various sources of evidence do not always log time the same way. Some may be local time, some may be UTC, and as mentioned, data from sources in various geographical locations complicates things further. When compiling timelines, it is important to first know whether the evidence source is logged in UTC or local time. If it is logged in local time, we need to confirm which local time zone the evidence source is from. Then we can use the Excel TIME()  formula to convert timestamps to UTC as needed.

This example scenario is based on a real investigation where the target organization was compromised via phishing email, and employee direct deposit information was changed via an internal HR application. In this situation, we have three log sources: email receipt logs, application logins, and application web logs. 

The email logs are recorded in UTC and contain the following information:

The application logins are recorded in Eastern Daylight Time (EDT) and contain the following:

The application web logs are also recorded in Eastern Daylight Time (EDT) and contain the following:

To take this information and turn it into a master timeline, we can use the CONCAT function (an alternative to the ampersand concatenation used previously) to make a summary of the columns in one cell for each log source, such as this example formula for the email receipt logs:

This is where checking our time zones for each data source is critical. If we took the information as it is presented in the logs and assumed the timestamps were all in the same time zone and created a timeline of this information, it would look like this:

As shown in the previous screenshot, we have some login events to the HR application, which may look like normal activity for the employees. Then, later in the day, they receive some suspicious emails. If this were hundreds of lines of log events, we would risk the login and web log events being overlooked, as the time of activity precedes our suspected initial compromise vector by a few hours. If this were a timeline used for reporting, it would also be inaccurate.

When we know which time zone our log sources are in, we can adjust the timestamps accordingly to reflect UTC. In this case, we confirmed through testing that the application logins and web logs are recorded in EDT, which is four hours behind UTC, or “UTC-4”. To change these to UTC time, we just need to add four hours to the time. The Excel TIME function makes this easy. We can just add a column to the existing tables, and in the first cell we type “=A2+TIME(4,0,0)”. Breaking this down:

  • =A2
    • Reference cell A2 (in this case our EDT timestamp). Note this is not an absolute reference, so we can use this formula for the rest of the rows.
  • +TIME
    • This tells Excel to take the value of the data in cell A2 as a “time” value type and add the following amount of time to it:
  • (4,0,0)
    • The TIME function in this instance requires three values, which are, from left to right: hours, minutes, seconds. In this example, we are adding 4 hours, 0 minutes, and 0 seconds.

Now we have a formula that takes the EDT timestamp and adds four hours to it to make it UTC. Then we can replicate this formula for the rest of the table. The end result looks like this:

When we have all of our logs in the same time zone, we are ready to compile our master timeline. Taking the UTC timestamps and the summary events we made, our new, accurate timeline looks like this:

Now we can clearly see suspicious emails sent to (fictional) employees Austin and Dave. A few minutes later, Austin’s account logs into the HR application and adds a new bank account. After this, we see the same email sent to Jake. Soon after this, Jake’s account logs into the HR application and adds the same bank account information as Austin’s. Converting all our data sources to the same time zone with Excel allowed us to quickly link these events together and easily identify what the attacker did. Additionally, it provided us with more indicators, such as the known-bad bank account number to search for in the rest of the logs.

Pro Tip: Be sure to account for log data spanning changes in UTC offset due to regional events such as daylight saving or summer time. For example, the adjustment for logs recorded in United States Eastern Time (such as Virginia, USA) needs to change from +TIME(5,0,0) to +TIME(4,0,0) on the second Sunday in March every year, and back from +TIME(4,0,0) to +TIME(5,0,0) on the first Sunday in November, to account for the shifts between daylight and standard time.
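Outside of Excel, a time zone database can handle both the offset and the daylight saving transitions from the Pro Tip automatically. Here is a hedged Kotlin sketch using java.time; the "yyyy-MM-dd HH:mm:ss" log format is an assumption:

import java.time.LocalDateTime
import java.time.ZoneId
import java.time.ZoneOffset
import java.time.format.DateTimeFormatter

// Convert a US Eastern local timestamp to UTC. The IANA zone
// "America/New_York" selects EDT (UTC-4) or EST (UTC-5) based on the
// date, so the March and November shifts are accounted for automatically.
fun easternToUtc(local: String): String {
    val fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss")  // assumed log format
    val eastern = LocalDateTime.parse(local, fmt).atZone(ZoneId.of("America/New_York"))
    return eastern.withZoneSameInstant(ZoneOffset.UTC).toLocalDateTime().format(fmt)
}

fun main() {
    println(easternToUtc("2019-10-14 23:13:06"))  // EDT: adds 4 hours -> 2019-10-15 03:13:06
    println(easternToUtc("2019-12-14 23:13:06"))  // EST: adds 5 hours -> 2019-12-15 04:13:06
}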

CountIf for Log Baselining

When reviewing logs that record authentication in the form of a user account and timestamp, we can use COUNTIF to establish simple baselines to identify those user accounts with inconsistent activity.  

In the example of user logons that follows, we'll use the formula "=COUNTIF($B$2:$B$25,B2)" to establish a historical baseline. Here is a breakdown of the parameters for this COUNTIF formula located in C2 in our example: 

  • COUNTIF 
    • This Excel formula counts how many times a value exists in a range of cells. 
  • $B$2:$B$25 
    • This is the entire range of all cells, B2 through B25, that we want to use as a range to search for a specific value. Note the use of "$" to ensure that the start and end of the range are an absolute reference and are not automatically updated by Excel if we copy this formula to other cells. 
  • B2 
    • This is the cell that contains the value we want to search for and count occurrences of in our range of $B$2:$B$25. Note that this parameter is not an absolute reference with a preceding "$". This allows us to fill the formula down through all rows and ensure that we are counting the applicable user name. 

To summarize, this formula will search the username column of all logon data and count how many times the user of each logon has logged on in total across all data points. 
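If the logon data is exported from the spreadsheet, the same baseline is easy to compute in code. A minimal Kotlin sketch of the equivalent grouping follows; the record fields and sample values are assumptions for illustration:

// Total logons per user, the coded equivalent of
// =COUNTIF($B$2:$B$25,B2) filled down the count column.
data class Logon(val timestamp: String, val user: String)

fun main() {
    val logons = listOf(  // assumed sample data, already in time order
        Logon("2019-10-14 08:01:00", "austin"),
        Logon("2019-10-14 08:03:00", "dave"),
        Logon("2019-10-14 09:15:00", "austin"),
        Logon("2019-10-14 11:42:00", "jake")
    )
    val totals: Map<String, Int> = logons.groupingBy { it.user }.eachCount()
    logons.forEach { println("${it.timestamp}  ${it.user}  total=${totals[it.user]}") }
}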

When most user accounts log on regularly, a compromised account being used to log on for the first time may clearly stand out when reviewing total logon counts. If we have a specific time frame in mind, it may be helpful to know which accounts first logged on during that time.

The COUNTIF formula can help track accounts through time to identify their first logon, which can help identify rarely used credentials that were abused for a limited time frame.

We'll start with the formula "=COUNTIF($B$2:$B2,B2)" in cell D2. Here is a breakdown of the parameters for this COUNTIF formula. Note that the use of "$" for absolute referencing is slightly different for the range used, and that is an important nuance:

  • COUNTIF 
    • This Excel formula counts how many times a value exists in a range of cells. 
  • $B$2:$B2 
    • This is the range of cells, B2 through B2, that we want to start with. Since we want to increase our range as we go through the rows of the log data, the ending cell row number (2 in this example) is not made absolute. As we fill this formula down through the rest of our log data, it will automatically expand the range to include the current log record and all previous logs. 
  • B2 
    • This cell contains the value we want to search for and provides a count of occurrences found in our defined range. Note that this parameter B2 is not an absolute reference with a preceding "$". This allows us to fill the formula down through all rows and ensure that we are counting the applicable user name. 

To summarize, this formula will search the username column of all logon data before and including the current log and count how many times the user of each logon has logged on up to that point in time. 

The following example illustrates how Excel automatically updated the range for D15 to $B$2:$B15 using the fill handle.  
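The expanding-range formula also has a simple coded equivalent: a running count per user, where a value of 1 marks that account's first logon in the data. Here is a Kotlin sketch with assumed sample data:

// Running logon count per user, equivalent to the expanding-range
// formula =COUNTIF($B$2:$B2,B2) filled down: each row counts only
// itself and the rows above it.
fun main() {
    val users = listOf("austin", "dave", "austin", "jake", "jake")  // assumed, in time order
    val runningCounts = mutableMapOf<String, Int>()
    for (user in users) {
        val count = (runningCounts[user] ?: 0) + 1
        runningCounts[user] = count
        val marker = if (count == 1) "  <-- first logon in the data" else ""
        println("$user  count=$count$marker")
    }
}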


To help visualize a large data set, let's add color scale conditional formatting to each row individually. To do so: 

  1. Select only the cells we want to compare with the color scale (such as D2 to D25). 
  2. On the Home menu, click the Conditional Formatting button in the Styles area. 
  3. Click Color Scales. 
  4. Click the type of color scale we would like to use. 

The following examples set the lowest values to red and the highest values to green. We can see how: 

  • Users with lower authentication counts contrast against users with more authentications. 
  • The first authentication times of users stand out in red. 

Whichever colors are used, be careful not to assume that one color, such as green, implies safety and another color, such as red, implies maliciousness.

Conclusion

The techniques described in this post are just a few ways to utilize Excel to perform analysis on arbitrary data. While these techniques may not leverage some of the more powerful features of Excel, as with any variety of skill set, mastering the fundamentals enables us to perform at a higher level. Employing fundamental Excel analysis techniques can empower an investigator to work through analysis of any presented data type as efficiently as possible.

Software Patching Statistics for 2019: Common Practices and Vulnerabilities

Wondering about software patching statistics and what the current state of affairs on updates is? This is where you will find all the relevant data as soon as experts reveal it, as well as stats based on our own customer data.

I will keep updating this list of software patching statistics periodically so it’s easier to see both the necessity of patching and how well companies worldwide do it (or not).

Without question, difficulties and delays in applying software patches remain one of the biggest threats to companies today. Apps and software lacking the latest updates are some of the easiest targets for any hacker who wants to infiltrate an organization.

Experts keep saying it over and over, but people have a hard time getting to those never-ending software updates. It’s both a matter of prioritization and a matter of difficulty (in the absence of a tool which can successfully automate software patching).

So, here are the most important truths about updates and how we apply them (or fail to). I have broken down the software patching statistics of recent years into sections pertaining to the behavior or phenomena they describe.

Why Software Patching is Important, in Statistics and Data:

  • 80% of companies who had a data breach or a failed audit could have prevented it by patching on time or doing configuration updates – Voke Media survey, 2016.
  • Upon a breach or failed audit, nearly half of companies (46%) took longer than 10 days to remedy the situation and apply patches, because deploying updates in the entire organization can be difficult – Voke Media survey, 2016.
  • Devastating malware and ransomware which could have been prevented by patching software on time: WannaCry, NotPetya, SamSam.
  • 20% of all vulnerabilities caused by unpatched software are classified as High Risk or Critical – Edgescan Stats Report, 2018.
  • The average time for organizations to close a discovered vulnerability (caused by unpatched software and apps) is 67 days – Edgescan Stats Report, 2018.
  • 18% of all network-level vulnerabilities are caused by unpatched applications – Apache, Cisco, Microsoft, WordPress, BSD, PHP, etc. – Edgescan Stats Report, 2018.
  • 37% of organizations admitted that they don’t even scan for vulnerabilities – Ponemon Report, 2018.
  • 58% of organizations run on ‘legacy systems’ – platforms which are no longer supported with patches but which would still be too expensive to replace in the near future – 0patch Survey Report, 2017.
  • 64% of organizations say that they plan to hire more people on the vulnerability response team, although the average headcount is already 28, representing about 29% of all security human resources – Ponemon State of Vulnerability Report, 2018.
  • Still, this is something known in the industry as the ‘patching paradox’ – hiring more people will not make software vulnerabilities easier to handle – Ponemon State of Vulnerability Report, 2018.
  • Microsoft reports that most of its customers are breached via vulnerabilities that had patches released years ago – Microsoft’s Security Intelligence Report, 2015.
  • Since 2002, the total number of software vulnerabilities has grown year by year by the thousands. The peak year seems to have been 2018 for now, but the figures keep rising – ENISA report for 2018.

Why Is Software Patching So Difficult?

The main reason patching is difficult is that manual updates (or manually coordinating the updates) take an enormous amount of time.

According to the Ponemon Institute study for 2018:

  • More than half of all companies (55%) say they spend more time manually navigating the various processes involved than actually patching the vulnerabilities;
  • On average, it takes 12 days for teams to coordinate on applying a patch across all devices;
  • Most companies (61%) feel that they are at a disadvantage because they rely on manual processes for applying software patches;
  • Nearly two-thirds of all companies (65%) say that it is currently too difficult for them to decide correctly on the priority level of each software patch (aka which update is of critical importance and should be applied first).

Considering that it doesn't make sense for most organizations to keep highly trained security experts on their payroll, difficulties in prioritizing patches are to be expected. In the best scenario, security and IT professionals define priorities simply by following CVSS scoring, as in the sketch below.
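To make that prioritization rule concrete, here is a minimal Kotlin sketch of ordering pending patches by CVSS base score; the data class, field names, and sample values are hypothetical assumptions for illustration:

// Order pending patches by CVSS base score, most severe first,
// so critical updates are applied ahead of the rest.
data class PendingPatch(val name: String, val cvssBaseScore: Double)

fun main() {
    val pending = listOf(  // assumed sample data
        PendingPatch("office-suite-update", 5.4),
        PendingPatch("browser-rce-fix", 9.8),
        PendingPatch("media-player-patch", 7.5)
    )
    pending.sortedByDescending { it.cvssBaseScore }
        .forEach { println("${it.name}  CVSS=${it.cvssBaseScore}") }
}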

While CVSS scoring is a reliable measure of patch importance, organizations that automate software patching are still better off, both in terms of security and of time spent.

Why Do Companies Choose to Delay Applying Software Patches and Updates?

It’s not just that it’s difficult. Some managers don’t want to apply the patches.

Organizations are not just late in applying patches because it takes time; some managers are reluctant to apply the patches for other reasons. According to the 0patch Survey Report, 2017:

  • 88% say they would apply patches faster if they had the option to quickly un-patch if needed;
  • 79% say decoupling security patches from functional ones would help them apply security patches faster;
  • 72% of managers are afraid to apply security patches right away because they could ‘break stuff’;
  • 52% of managers say they don’t want the functionality changes which come with security patching.

Even more worrying is that not everyone is aware of how dangerous delaying can be. One of the most baffling software patching statistics of the past year comes again from the Ponemon Institute, this time in its 2019 report. According to it, only 39% of organizations are aware that actual breaches are linked to known vulnerabilities.

Of course, not wanting the hassle of updating software or systems is a legitimate attitude, albeit a very dangerous one. But it's only a hassle if you plan on doing the updates alone, manually.

Our Own Software Patching Statistics:

We have hundreds of thousands of enterprise endpoints which are kept secure and up to date through our patch management automation solution, X-Ploit Resilience. While our fast response and implementation times allow us to keep them all updated at a much higher rate compared to industry benchmarks, there are still interesting insights to be gleaned from our data.

This is what we can boast:

  • A new patch reaches the endpoints secured with our patch management system within 4 hours of its release (if the endpoint is available to receive it);
  • By automatically applying patches, the X-Ploit Resilience technology effectively closes system vulnerabilities in an enterprise environment, taking away about 85% of all possible attack vectors;
  • At the moment, the X-Ploit Resilience patch management system covers 112 of the most common software and apps, with several apps and software being added to the list every year.

And this is what we and our customers need to work on together for an even better performance:

  • During the last 3 months, our corporate customers took a while to apply the patches we made available through our system (either for a lack of activity on the endpoints or as a conscious decision to delay), but still did so at a rate 4 times faster than the global average.

Wrapping up:

If there's one thing that the latest software patching statistics reflect, it's that the field is very non-homogeneous. Some organizations react fast(er) to new patches but take a long time applying them, or apply them in the incorrect order. Others have complicated assignment procedures, but once a patch is set to be applied, it goes fast and smoothly. Some apply only critical system updates and completely reject other patches to avoid functionality changes, even if it puts them at some risk.

The bottom line is that whatever your organization's unique flavor may be, we know patches can be overwhelming in one way or another. That's why we leverage the scaling power of technology to help keep our customers covered with all software patches and zero inconveniences.

Our X-Ploit Resilience module will handle all software updates and patches within 4 hours of their launch, silently, in the background, with no interruptions. You can set it and forget it, as we like to say, or set a few preferences (like excluding updates for one app or category, being asked before a patch is applied to all endpoints within your organization, or deploying and patching your own custom software through the platform).
