Daily Archives: June 20, 2019

Live From Gartner Security & Risk Mgmt Summit: Starting an AppSec Program, Part 2

This is part two of a two-part blog series on a presentation by Hooper Kincannon, Cyber Security Engineer at Unum Group, on “Secure from the Start: A Case Study on Software Security” at the Gartner Security & Risk Management Summit in National Harbor, MD. In this presentation, Hooper provided a great blueprint for starting a DevSecOps program. In part one, I summarized how Hooper got buy-in for his program and his overall plan for the initiative. In this blog, we delve into the details.

Using Different Assessment Types for the Right Purpose

Hooper kindly shared his slides with us. Here is his helpful comparison of different assessment types, focusing on static analysis, dynamic analysis and manual penetration testing:

You have to choose which route you’d like to take. In Hooper’s case, he decided to build static and dynamic application security testing into the SDLC.

Dynamic and Static Analysis Workflow

For dynamic analysis testing, Hooper recommends the following workflow:

To make your DAST assessments successful, he recommended using a consistent scan duration, considering the various authentication mechanisms, and using the testing credentials only for testing.

For static analysis testing, he recommended the following workflow:

His recommendations for static analysis testing included being conscious of how you define applications, being aware of compilation instructions, and keeping the process consistent.

Understanding Remediation vs. Mitigation

After you have identified a vulnerability, you can address it in two different ways:

  • Remediation: Fixing the security defect by changing the code that contains the defect or making a configuration change. This eliminates the risk (a short code illustration follows this list).
  • Mitigation: Implementing controls to make it less likely that the vulnerability is exploited. This reduces the risk but does not eliminate it because the vulnerability is still present in the code.
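
To make the distinction concrete, here is a minimal, hypothetical sketch of remediating a SQL injection flaw by changing the vulnerable code itself. SQLite’s C API and the table and column names are used purely for illustration; they are assumptions, not part of Hooper’s program.

```cpp
#include <sqlite3.h>
#include <string>

// Vulnerable pattern (the defect): building SQL by string concatenation, e.g.
//   std::string sql = "SELECT id FROM users WHERE name = '" + userInput + "';";
// Remediation changes the code so user input can never alter the query structure.
bool findUser(sqlite3* db, const std::string& userInput, long long& idOut) {
    sqlite3_stmt* stmt = nullptr;
    const char* sql = "SELECT id FROM users WHERE name = ?1;";
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK) return false;

    // Bind the untrusted value as data, not as SQL text.
    sqlite3_bind_text(stmt, 1, userInput.c_str(), -1, SQLITE_TRANSIENT);

    bool found = false;
    if (sqlite3_step(stmt) == SQLITE_ROW) {
        idOut = sqlite3_column_int64(stmt, 0);
        found = true;
    }
    sqlite3_finalize(stmt);
    return found;
}
```

A mitigation, by contrast, might leave this code untouched and rely on a WAF rule or input filter in front of it; the flaw stays in the code, so the risk is reduced but not eliminated.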

Working With Scanning Results

How you use your scan results can make or break your program. If you’re fortunate, you’ll scan your application and get back a low volume of flaws. If you’re unlucky, it may be the opposite.

Hooper’s biggest recommendation is not to panic: The overall goal is to reduce risk, and that won’t happen overnight. Take your time to digest the results and discuss how best to prioritize them. For example, consider fixing dynamic results first because they are easier for an attacker to discover. Decide what you accept as trusted sources, especially in the case of input validation, and have a process for handling exceptions, such as acceptable risk, mitigations, and false positives. Hooper also recommends doing a readout of the results with the stakeholders.

Picking the Right Metrics to Report On

Metrics are probably the most important deliverable coming out of your program. Security itself is difficult to measure; reduction in risk is a bit easier.

Metrics that worked for Hooper are:

  • Flaw density (a sample calculation follows this list)
  • Risk reduced (vulnerability severity reduced)
  • Most common flaw types (use to guide education efforts)
  • Compliance over time
  • Onboarding time and other operational metrics
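
As a simple illustration of how one of these metrics can be tracked, the sketch below computes flaw density as open flaws per MB of analyzed code; the per-MB definition and the numbers are assumptions for illustration, not necessarily how Unum reports it.

```cpp
#include <cstdio>

// Hypothetical scan summary for one application.
struct ScanResult {
    int openFlaws;          // flaws still open after triage
    double analyzedSizeMB;  // size of the analyzed code, in MB
};

// Flaw density: open flaws per MB of analyzed code (one common definition).
double flawDensity(const ScanResult& r) {
    return r.analyzedSizeMB > 0 ? r.openFlaws / r.analyzedSizeMB : 0.0;
}

int main() {
    ScanResult previous{88, 12.0}, current{42, 12.5};
    std::printf("Flaw density: %.2f flaws/MB (was %.2f)\n",
                flawDensity(current), flawDensity(previous));
    // A falling density over time is one way to show risk being reduced.
    return 0;
}
```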

When presenting to the different stakeholders of the program, be aware of what each constituency is interested in – because it varies:

  • CISO + senior management: Profitability of the investment
  • Business leaders: Resource allocation
  • Development: Staying on top of flaws

Keeping a regular cadence is vital. Hooper has made these activities part of his program:

  • Monthly scorecards
  • Monthly executive dashboards
  • Annual reviews
  • Real-time dashboards for developers

Optimizing the Program in Year Two

One year after starting the program, Hooper had reached success with external high-risk applications. Next, he moved on to internal high-risk applications. In addition, he started to automate more and more of the program to make it repeatable and easier to manage. For most organizations, he recommends starting out with automation from day one, but even if you start out manually, you’re taking a step in the right direction.

Here is a picture of how Unum Group integrates Veracode into their SDLC:

For More Information

If you’re interested in starting your own application security program, read our take on Everything You Need To Know About Getting Application Security Buy-In.

Live From Gartner Security & Risk Mgmt Summit: Starting a Web Application Security Program

Bootstrapping an application security program is hard. Technology is only one part of the equation. You need to inventory your applications, get stakeholders on board, and then execute on the holy trinity of people, process, and technology. That’s why I was excited to see Hooper Kincannon, Cyber Security Engineer at Unum Group, present on “Secure from the Start: A Case Study on Software Security” at the Gartner Security & Risk Management Summit in National Harbor, MD. Hooper provided a great blueprint for starting a DevSecOps program.

Sixty Vulnerabilities Are Reported Every Day, 27 Percent Are Never Fixed

Hooper began his presentation by outlining the current state of both software and software security. He points out that while software is changing the world, it is also fundamentally flawed from a security perspective.

He points to some highlights from a study by Risk Based Security:

  • More than 22,000 vulnerabilities were disclosed in 2018 – that’s about 60 per day.
  • Almost a third of these (27%) were never fixed, so security professionals can’t just deploy a patch to improve their security posture.
  • Web-related vulnerabilities accounted for nearly half of all reported security flaws, and more than two thirds were related to insufficient or improper validation of input.
  • 33% received a severity rating of seven or above.
  • The OWASP Top 10 still accounts for two-thirds of the reported vulnerabilities.

What can we do about it? We can develop a secure software development lifecycle and try to stem the flow of the vulnerabilities being published in the first place. This is becoming increasingly difficult because more lines of code are being written than ever before (111 billion lines of code in 2016, trending up).

Software Is Becoming Mission Critical: Making the Case for AppSec

So what if Alexa won’t work or my app crashes? Both would probably only be minor annoyances, but software is also impacting us on a much larger scale. Not too long ago, people would be lucky if they had only a two-minute warning that a tornado was coming. Today, weather monitoring and modeling software can predict the formation and path of a tornado with stunning accuracy. Better still, these systems can send text messages to those in danger – providing precious minutes to find shelter.

Farming is being transformed by software as well. Software monitors the moisture levels in soil, and irrigation systems connected to these sensors release the optimal amount of water into the soil. This way, the crops have what they need to grow, and not a drop of water is wasted. There are technologies that monitor crop growth and health and even harvest crops. In other words, software is tackling world hunger. That’s something worth protecting.

When you want to demonstrate to your stakeholders why application security is important to your organization, go back to your company’s mission and ladder up your argument to this ultimate goal. Unum offers disability, life and financial protection to its customers. If your mission is to help people at their most vulnerable moments in life, you need to ensure that they don’t have to worry about their identity being stolen as the result of a data breach in addition to having to figure out medical payments. Making this connection with the core mission can really help tell a story of why application security is crucial to the business.

Starting Out With the Right Questions

Before you can dive head first into your DevSecOps program, you need to ask yourself the right questions:

  • Do you know your application portfolio?
  • Do you have web application security policies defined?
  • Who is responsible for the web application security program?
  • Who is going to fund the program?
  • What is your goal?

Only once you have answered these questions will you be able to find the right formula for your organization. Hooper laid out his program in the rest of the talk, but your organization may differ, so make sure that you ask these questions at the outset.

Building a DevSecOps Program from Scratch

Hooper started at Unum about three years ago as a member of their threat and vulnerability management team. At that point in time, they didn’t have a true web application security program, but they had a relationship with Veracode to assess their top-tier applications, and they were doing basic dynamic analysis with another vendor. Around that time, Hooper was fortunate enough to get funding to help expand and mature the program.

Unum’s primary goal was to reduce risk, so he set out to discover and rate the risk of all of their applications. He helped define security policies for all web applications, including expectations and remediation SLAs. They also decided that security should be responsible for the administration of the AppSec program, and development would cover remediation. 

Hooper chose to expand his relationship with Veracode, covering SAST, DAST, SCA, and eLearning. He also partnered with Veracode to provide live trainings for developers, and signed up for their program management and application security consulting services, which help onboard scrum teams and help developers fix security defects if they get stuck.

In a follow-up blog, we will delve into the details of Hooper’s AppSec program and his path to AppSec maturity.

Blocking DDoS Attacks Using Automation

Guest article by Adrian Taylor, Regional Vice President at A10 Networks

DDoS attacks can be catastrophic, but the right knowledge and tactics can drastically improve your chances of successfully mitigating attacks. In this article, we’ll explore five ways, listed below, that automation can significantly improve response times during a DDoS attack, and then assess the means to block such attacks.

Response time is critical for every enterprise because, in our hyper-connected world, DDoS attacks cause downtime, and downtime means money lost. The longer your systems are down, the more your profits will sink.

Let’s take a closer look at all the ways that automation can put time on your side during a DDoS attack. But first, let’s clarify just how much time an automated defence system can save.

Automated vs. Manual Response Time

Sure, automated DDoS defence is faster than manual DDoS defence, but by how much?

Andy Shoemaker, founder and CEO of NimbusDDoS, recently conducted a study to find out. The results spoke volumes: automated DDoS defence improves attack response time five-fold.

The average response time using automated defence was just six minutes, compared to 35 minutes using manual processes, a staggering 29-minute difference. In some cases, the automated defence was even able to eliminate response time completely.

An automated defence system cuts down on response time in five major ways. Such systems can:

  • Instantly detect incoming attacks: Using the data it has collected during peace time, an automated DDoS defence system can instantly identify suspicious traffic that could easily be missed by human observers.
  • Redirect traffic accordingly: In a reactive deployment, once an attack has been detected, an automated DDoS defence system can redirect the malicious traffic to a shared mitigation scrubbing center – no more manual BGP routing announcements of suspicious traffic.
  • Apply escalation mitigation strategies: During the attack’s onslaught of traffic, an automated DDoS defence system will take action based on your defined policies in an adaptive fashion while minimising collateral damage to legitimate traffic.
  • Identify patterns within attack traffic: By carefully inspecting vast amounts of attack traffic in a short period of time, an automated DDoS defence system can extract patterns in real-time to block zero-day botnet attacks.
  • Apply current DDoS threat intelligence: An automated DDoS defence system can access real-time, research-driven IP blocklists and DDoS weapon databases and apply that intelligence to all network traffic destined for the protected zone.

An intelligent automated DDoS defence system doesn’t stop working after an attack, either. Once the attack has been successfully mitigated, it will generate detailed reports that you and your stakeholders can use for forensic analysis and follow-up communication.

Although DDoS attackers will never stop innovating and adapting, neither will automated and intelligent DDoS protection systems.

By using an automated system to rapidly identify and mitigate threats with the help of up-to-date threat intelligence, enterprises can defend themselves from DDoS attacks as quickly as bad actors can launch them.

Three key strategies to block DDoS attacks

While it’s crucial to have an automated system in place that can quickly respond to attacks, it’s equally important to implement strategies that help achieve your goal of ensuring service availability to legitimate users.

After all, DDoS attacks are asynchronous in nature: You can’t prevent the attacker from launching an attack, but with three critical strategies in place, you can be resilient to the attack, while protecting your users.

Each of the three methods listed below is known as a source-based DDoS mitigation strategy. Source-based strategies use the cause of the traffic, its source, as the basis for choosing what to block. The alternative, destination-based mitigation, relies on traffic shaping to keep the targeted system from falling over.

While destination traffic shaping is effective at keeping the system from being overwhelmed during an attack, it also inflicts indiscriminate collateral damage on legitimate users.

  • Tracking deviation: A tracking deviation strategy works by observing traffic on an ongoing basis to learn what qualifies as normal and what represents a threat. Specifically, a defence system can analyse the data rate or query rate across multiple characteristics (e.g. BPS, PPS, SYN-FIN ratio, session rate) to determine which traffic is legitimate and which is malicious, or it may identify bots or spoofed traffic by their inability to answer challenge questions. (A simplified sketch of this approach follows this list.)
  • Pattern recognition: A pattern recognition strategy uses machine learning to parse unusual patterns of behaviour commonly exhibited by DDoS botnets and reflected amplification attacks in real time. For example, DDoS attacks are initiated by a motivated attacker who leverages an orchestration platform to give the distributed weapons instructions on how to flood the victim with unwanted traffic. This common command-and-control (C&C) and distributed attack structure exhibits patterns that can be leveraged as a causal blocking strategy.
  • Reputation: To use reputation as a source-based blocking strategy, a DDoS defence system draws on threat intelligence provided by researchers about DDoS botnet IP addresses, as well as the tens of millions of exposed servers used in reflected amplification attacks. The system then uses that intelligence to block any matching IP addresses during an attack.
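
To make the tracking-deviation idea more tangible, here is a deliberately simplified sketch. The characteristics come from the list above, but the three-times-baseline threshold and the numbers are illustrative assumptions, not A10’s actual algorithm.

```cpp
#include <cstdio>

// Baseline learned during "peace time" for one protected service.
struct TrafficBaseline {
    double bps;          // bits per second
    double pps;          // packets per second
    double synFinRatio;  // ratio of SYN packets to FIN packets
};

struct TrafficSample {
    double bps;
    double pps;
    double synFinRatio;
};

// Flag traffic that deviates from the learned baseline by more than `factor`.
// Real systems track many more characteristics and adapt thresholds over time.
bool looksLikeAttack(const TrafficBaseline& base, const TrafficSample& now,
                     double factor = 3.0) {
    return now.bps > base.bps * factor ||
           now.pps > base.pps * factor ||
           now.synFinRatio > base.synFinRatio * factor;
}

int main() {
    TrafficBaseline base{2e9, 250000, 1.1};
    TrafficSample spike{9e9, 4e6, 30.0};  // SYN-flood profile: high PPS, skewed SYN-FIN ratio
    std::printf("Deviation detected: %s\n", looksLikeAttack(base, spike) ? "yes" : "no");
    return 0;
}
```
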
Any of these three source-based DDoS mitigation strategies requires more computing capabilities than indiscriminate destination protection.

They do, however, have the significant advantage of not blocking legitimate users, thereby reducing downtime and preventing unnecessary loss of profits.

Knowing that, it’s safe to say that these three mitigation strategies are all well worth the investment.

Adrian Taylor, Regional Vice President at A10 Networks

Process Reimaging: A Cybercrook’s New Disguise for Malware

As of early 2019, Windows 10 is running on more than 700 million devices, including PCs, tablets, phones, and even some gaming consoles. However, it turns out the widespread Windows operating system has some inconsistencies in how it determines process image file locations on disk. Our McAfee Advanced Threat Research team decided to analyze these inconsistencies and as a result uncovered a new cyberthreat called process reimaging. Similar to process doppelganging and process hollowing, this technique evades security measures, but with greater ease since it doesn’t require code injection. Specifically, this technique affects the ability of a Windows endpoint security solution to detect whether a process executing on the system is malicious or benign, allowing a cybercrook to go about their business on the device undetected.

Let’s dive into the details of this threat. Process reimaging leverages built-in Windows APIs, or application programming interfaces, which allow applications and the operating system to communicate with one another. One API, dubbed K32GetProcessImageFileName, allows endpoint security solutions, like Windows Defender, to verify whether the EXE file associated with a process contains malicious code. However, with process reimaging, a cybercriminal can subvert the security solution’s trust in the Windows operating system APIs so that they return inconsistent FILE_OBJECT names and paths. Consequently, Windows Defender misunderstands which file name or path it is looking at and can no longer tell whether a process is trustworthy. By using this technique, cybercriminals can keep malicious processes running on a user’s device without the user even knowing it.
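
As a rough illustration of the API in question, the sketch below shows how an endpoint tool might resolve a process ID to the image file that Windows reports for it. It is a minimal user-mode example, not Windows Defender’s implementation, and the PID argument is arbitrary.

```cpp
#include <windows.h>
#include <psapi.h>   // GetProcessImageFileNameW; link Psapi.lib on older SDKs
#include <cstdio>
#include <cstdlib>
#include <cwchar>

int main(int argc, char** argv) {
    DWORD pid = (argc > 1) ? static_cast<DWORD>(std::atoi(argv[1])) : GetCurrentProcessId();
    HANDLE hProc = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (!hProc) {
        std::printf("OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    // On current Windows this forwards to kernel32's K32GetProcessImageFileNameW
    // and returns a native device path such as \Device\HarddiskVolume3\...\app.exe.
    wchar_t path[MAX_PATH] = L"";
    if (GetProcessImageFileNameW(hProc, path, MAX_PATH)) {
        // A scanner would now inspect this file; process reimaging makes the
        // reported path stale, so the wrong (benign) file may be scanned.
        wprintf(L"PID %lu -> %ls\n", pid, path);
    }
    CloseHandle(hProc);
    return 0;
}
```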

So, the next question is — what can Windows users do to protect themselves from this potential threat? Check out these insights to help keep your device secure:

  • Update your software. Microsoft has issued a partial fix that stops cybercriminals from exploiting file names to disguise malicious code, which addresses at least part of the issue for Windows Defender. And while file paths are still viable for exploitation, it’s worth updating your software regularly to ensure you always have the latest security patches, as this is a solid practice to work into your cybersecurity routine.
  • Work with your endpoint security vendor. To help ensure you’re protected from this threat, contact your endpoint security provider to see if they protect against process reimaging.

And, as always, to stay updated on all of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post Process Reimaging: A Cybercrook’s New Disguise for Malware appeared first on McAfee Blogs.

The evolution of Microsoft Threat Protection, June update

Since our announcement of Microsoft Threat Protection at Microsoft Ignite, our goal has been to execute and deliver on our promise of helping organizations protect themselves from today’s sophisticated and complex threat landscape. As we close out our fiscal year, we’ve continued progress on developing Microsoft Threat Protection, launching new capabilities and services. Hopefully, you’ve had a chance to follow our monthly updates.

As we previously shared, Microsoft Threat Protection enables your organization to:

This month, we want to share new capabilities that are starting public previews.

Efficient remediation and response for identity threats

Efficient and effective response to identity threats is crucial, and Microsoft Threat Protection is built on the industry’s most widely used and comprehensive identity security service. As more organizations adopt hybrid environments, data is spread across multiple applications, lives both on-premises and in the cloud, and is accessed by multiple devices (often personal devices) and users. Most organizations no longer have a defined network perimeter, making traditional security tools obsolete. Identity is the control plane that is consistent across all elements of the modern organization.

At RSA, we announced a new unified Identity Threat Investigation experience between Azure Active Directory (Azure AD) Identity Protection, Azure Advanced Threat Protection (ATP), and Microsoft Cloud App Security. This experience will go into public preview this month.

Part of the new experience is enabled through Azure AD’s new integration with Azure ATP. In addition, integration between Azure AD and Microsoft Cloud App Security enables continuous monitoring of user behavior from sign-in through the entire session. Microsoft Threat Protection’s identity services leverage user behavior analytics to create a dynamic investigation priority score (Figure 1) based on signals from Azure AD, Microsoft Cloud App Security, and Azure ATP. The investigation priority is calculated by assessing security alerts, abnormal activities, and potential business and asset impact related to each user. This score can help Security Operations (SecOps) teams focus on and respond to the top user threats in the organization.

Figure 1. The investigation priority view.
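
Microsoft has not published the exact formula, so the following is a purely hypothetical sketch of the kind of weighted scoring described above; the inputs and weights are invented for illustration only.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical per-user inputs to an investigation priority score.
struct UserSignals {
    double alertSeverity;     // 0..1, derived from security alerts
    double abnormalActivity;  // 0..1, derived from behavior analytics
    double assetImpact;       // 0..1, business/asset sensitivity of the user
};

// Weighted combination scaled to 0..100; the weights are illustrative assumptions.
int investigationPriority(const UserSignals& s) {
    double score = 0.45 * s.alertSeverity +
                   0.35 * s.abnormalActivity +
                   0.20 * s.assetImpact;
    return static_cast<int>(std::min(1.0, score) * 100);
}

int main() {
    UserSignals exec{0.9, 0.7, 1.0};    // e.g. a targeted executive account
    UserSignals intern{0.2, 0.1, 0.2};
    std::printf("exec=%d intern=%d\n",
                investigationPriority(exec), investigationPriority(intern));
    return 0;
}
```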

To learn more, read Investigating identity threats in hybrid cloud environments.

Game-changing capabilities for endpoint security

Every month, Microsoft Threat Protection detects over 5 billion endpoint threats through its Microsoft Defender ATP service. Customers have long asked us to extend our industry-leading endpoint security beyond the Windows OS. This was a major driving force for us to deliver endpoint security natively for macOS in limited preview earlier this year. We’re excited to announce that Microsoft Defender ATP for macOS is in public preview.

Microsoft Threat Protection customers who have turned on the Microsoft Defender ATP preview features can access Microsoft Defender ATP for Mac via the onboarding section in the Microsoft Defender Security Center. For more information and resources, including system requirements, prerequisites, and a list of improvements and new features, check out the Microsoft Defender ATP for Mac documentation.

To further enhance your endpoint security, “live response,” our new incident response action for SecOps teams, is currently in public preview. Today, your employees often work beyond the corporate network boundary, whether from home or while traveling. The risk for compromise is potentially higher when a user is remote. Imagine the executive who connects their laptop to hotel Wi-Fi and is compromised. With current endpoint security services, SecOps would need to wait until the executive got back to the office, leaving a high-value laptop exposed. With our new live response, SecOps teams gain instant access to a compromised machine regardless of location, as well as the ability to gather any required forensic information.

This powerful feature allows you to:

  • Gather a snapshot of connections, drivers, scheduled tasks, and services, as well as search for specific files or request file analysis to reach a verdict (clean, malicious, or suspicious).
  • Download malware files for reverse-engineering.
  • Create a tenant-level library of forensic tools like PowerShell scripts and third-party binaries that allows SecOps to gather forensic information like the MFT table, firewall logs, event logs, process memory dumps, and more.
  • Run remediation activities such as quarantine file, stop process, remove registry, remove scheduled task, and more.

To learn more, try the live response DIY or read Investigate entities on machines using live response.

Figure 2. Run remediation commands.

Experience the evolution of Microsoft Threat Protection

Take a moment to learn more about Microsoft Threat Protection, read our previous monthly updates, and visit the Microsoft Threat Protection webpage. Organizations, like Telit, have already transitioned to Microsoft Threat Protection and our partners are also leveraging its powerful capabilities.

Begin a trial of Microsoft Threat Protection services, which also includes our newly launched SIEM, Azure Sentinel, to experience the benefits of the most comprehensive, integrated, and secure threat protection solution for the modern workplace.

The post The evolution of Microsoft Threat Protection, June update appeared first on Microsoft Security.

3 Tips Venmo Users Should Follow to Keep Their Transactions Secure

You’ve probably heard of Venmo, the quick and convenient peer-to-peer mobile payments app. From splitting the check when eating out with friends to dividing the cost of bills, Venmo is an incredibly easy way to share money. However, users’ comfort with the app can sometimes result in a few negligent security practices. In fact, computer science student Dan Salmon recently scraped seven million Venmo transactions to prove that users’ public activity can be easily obtained if they don’t have the right security settings flipped on. Let’s explore his findings.

By scraping the company’s developer API, Salmon was able to download millions of transactions across a six-month span. That means he was able to see who sent money to whom, when they sent it, and why – just as long as the transaction was set to “public.” Mind you, Salmon’s download comes just a year after a German researcher downloaded over 200 million transactions from the public-by-default app.

These data scrapes, if anything, act as a demonstration. They prove to users just how crucial it is to set up online mobile payment apps with caution and care. Therefore, if you’re a Venmo or other mobile payment app user, make sure to follow these tips in order to keep your information secure:

  • Set your settings to “private” immediately. Only the sender and receiver should know about a monetary transaction in the works. So, whenever you go to send money on Venmo or any other mobile payment app, make sure the transaction is set to “private.” For Venmo users specifically, you can flip from “public” to “private” by just toggling the setting at the bottom right corner of the main “Pay or Request” page.
  • Limit the amount of data you share. Just because something is designed to be social doesn’t mean it should become a treasure trove of personal data. No matter the type of transaction you’re making, always try to limit the amount of personal information you include in the corresponding message. That way, any potential cybercriminals out there won’t be able to learn about your spending habits.
  • Add on extra layers of security. Beyond flipping on the right in-app security settings, it’s important to take any extra precautions you can when it comes to protecting your financial data. Create complex logins for your mobile payment apps, use biometric options if available, and ensure your mobile device itself has a passcode as well. This will all help ensure no one has access to your money but you.

And, as always, to stay updated on all of the latest consumer and mobile security threats, follow @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post 3 Tips Venmo Users Should Follow to Keep Their Transactions Secure appeared first on McAfee Blogs.

Why Process Reimaging Matters

As this blog goes live, Eoin Carroll will be stepping off the stage at Hack in Paris having detailed the latest McAfee Advanced Threat Research (ATR) findings on Process Reimaging. Admittedly, this technique probably lacks a catchy name, but be under no illusion: the technique is significant and worth paying very close attention to.

Plain and simple, the objective of malicious threat actors is to bypass endpoint security. It is this exact game of cat and mouse that the security industry has been playing with malware writers for years, and one that, quite frankly, will continue. This ongoing battle will shape the future of cyber, and drive innovation in attack techniques and the ways in which we defend against them.  As part of this process it is crucial that we, the McAfee ATR team, continually identify techniques that could be used by malicious actors successfully.  It is this work that has led to the identification of a technique we call Process Reimaging, which was successful in bypassing endpoint security solutions (ESSs). To be clear, our objective is to stay ahead of malicious actors in identifying evasion techniques, with the broader goal of providing a safer computing environment for all organizations.

This technique is detailed by Eoin in a comprehensive technical blog titled In NTDLL I Trust – Process Reimaging and Endpoint Security Solution Bypass. The following is a summary of the findings.

Process Reimaging targets non-EDR ESSs. It’s a post-exploitation technique, meaning it targets users who have already fallen victim, for example to a phishing or drive-by-download attack, so that the process can execute undetected and dwell on an endpoint for a significant period of time. The Windows kernel exports functionality to support the user-mode components of ESSs, which depend on it for protection and detection capabilities. Numerous APIs, such as K32GetProcessImageFileName, allow an ESS “to verify a process attribute to determine whether it contains malicious binaries and whether it can be trusted to call into its infrastructure.” Our research focused on this functionality: because these APIs return stale and inconsistent FILE_OBJECT paths, a malicious actor can potentially bypass the process attribute verification undertaken by the Windows operating system. To be more precise, this allowed McAfee ATR to develop a proof-of-concept that was not detected by Windows Defender, and that will not be detected until a signature is created to block the file on disk before the process itself is created or a full scan is executed.

This technique succeeds because the ESS relies on the Windows operating system to verify process attributes. The ESS naturally trusts a process whose file on disk is non-malicious, since it assumes the OS has identified the correct file on disk associated with that process for the ESS to scan.

Releasing details of the technique

With the public release of security research, there is always a significant risk that any released information can be utilized by adversaries for nefarious activities. The balance of security research versus irresponsible disclosure is an issue we continually wrestle with, and these findings are no different. In the process of conducting due diligence, we were able to identify the use of Process Doppelganging, with Process Hollowing as its fallback defense evasion technique, within the SynAck ransomware in 2018. Since the Process Doppelganging technique was weaponized within SynAck ransomware less than five months after its disclosure at Black Hat Europe in 2017, we can only assume that the Process Reimaging technique is, or soon will be, close to being used by threat actors to bypass detection.

The post Why Process Reimaging Matters appeared first on McAfee Blogs.

In NTDLL I Trust – Process Reimaging and Endpoint Security Solution Bypass

Process Reimaging Overview

The Windows Operating System has inconsistencies in how it determines process image FILE_OBJECT locations, which impacts the ability of non-EDR (Endpoint Detection and Response) Endpoint Security Solutions, such as Microsoft Defender Real-time Protection, to detect the correct binaries loaded in malicious processes. This inconsistency has led McAfee’s Advanced Threat Research to develop a new post-exploitation evasion technique we call “Process Reimaging”. This technique is equivalent in impact to Process Hollowing or Process Doppelganging within the Mitre ATT&CK Defense Evasion category; however, it is much easier to execute, as it requires no code injection. While this bypass has been successfully tested against current versions of Microsoft Windows and Defender, it is highly likely that the bypass will work against any endpoint security vendor or product implementing the APIs discussed below.

The Windows Kernel, ntoskrnl.exe, exposes functionality through NTDLL.dll APIs to support User-mode components such as Endpoint Security Solution (ESS) services and processes. One such API is K32GetProcessImageFileName, which allows ESSs to verify a process attribute to determine whether it contains malicious binaries and whether it can be trusted to call into its infrastructure. The Windows Kernel APIs return stale and inconsistent FILE_OBJECT paths, which enable an adversary to bypass Windows operating system process attribute verification. We have developed a proof-of-concept which exploits this FILE_OBJECT location inconsistency by hiding the physical location of a process EXE.

The PoC allowed us to persist a malicious process (post exploitation) which does not get detected by Windows Defender.

The Process Reimaging technique cannot be detected by Windows Defender until it has a signature for the malicious file and blocks it on disk before process creation, or until it performs a full scan on the suspect machine post-compromise to detect the file on disk. In addition to Process Reimaging weaponization and protection recommendations, this blog includes a technical deep dive on reversing the Windows Kernel APIs for process attribute verification and on Process Reimaging attack vectors. We use the SynAck ransomware as a case study to illustrate the impact of Process Reimaging relative to Process Hollowing and Doppelganging; this illustration does not relate to Windows Defender’s ability to detect Process Hollowing or Doppelganging, but to the subverting of trust for process attribute verification.

Antivirus Scanner Detection Points

When an Antivirus scanner is active on a system, it will protect against infection by detecting running code which contains malicious content, and by detecting a malicious file at write time or load time.

The actual sequence for loading an image is as follows:

  • FileCreate – the file is opened so that it can be mapped into memory.
  • Section Create – the file is mapped into memory.
  • Cleanup – the file handle is closed, leaving a kernel object which is used for PAGING_IO.
  • ImageLoad – the file is loaded.
  • CloseFile – the file is closed.

If the Antivirus scanner is active at the point of load, it can use any one of the above steps (1, 2, and 4) to protect the operating system against malicious code. If the virus scanner is not active when the image is loaded, or it does not contain definitions for the loaded file, it can query the operating system for information about which files make up the process and scan those files. Process Reimaging is a mechanism which circumvents virus scanning at step 4, or when the virus scanner either misses the launch of a process or has inadequate virus definitions at the point of loading.

There is currently no documented method to securely identify the underlying file associated with a running process on Windows.

This is due to Windows’ inability to retrieve the correct image filepath from the NTDLL APIs. This can be shown to evade Defender (MsMpEng.exe/MpEngine.dll) where the file being executed is a “Potentially Unwanted Program” such as mimikatz.exe. If Defender is enabled during the launch of mimikatz, it detects at phase 1 or 2 correctly. If Defender is not enabled, or if the launched program is not recognized by its current signature files, then the file is allowed to launch. Once Defender is enabled, or the signatures are updated to include detection, then Defender uses K32GetProcessImageFileName to identify the underlying file. If the process has been created using our Process Reimaging technique, then the running malware is no longer detected. Therefore, any security service auditing running programs will fail to identify the files associated with the running process.

Subverting Trust

The Mitre ATT&CK model specifies post-exploitation tactics and techniques used by adversaries, based on real-world observations for Windows, Linux and macOS Endpoints per figure 1 below.

Figure 1 – Mitre Enterprise ATT&CK

Once an adversary gains code execution on an endpoint, before lateral movement, they will seek to gain persistence, privilege escalation and defense evasion capabilities. They can achieve defense evasion using process manipulation techniques to get code executing in a trusted process. Process manipulation techniques have existed for a long time and evolved from Process Injection to Hollowing and Doppelganging with the objective of impersonating trusted processes. There are other Process manipulation techniques as documented by Mitre ATT&CK and Unprotect Project,  but we will focus on Process Hollowing and Process Doppelganging. Process manipulation techniques exploit legitimate features of the Windows Operating System to impersonate trusted process executable binaries and generally require code injection.

ESSs place inherent trust in the Windows Operating System for capabilities such as digital signature validation and process attribute verification. As demonstrated by Specter Ops, ESSs’ trust in the Windows Operating system could be subverted for digital signature validation.

Similarly, Process Reimaging subverts an ESSs’ trust in the Windows Operating System for process attribute verification.

When a process is trusted by an ESS, it is perceived to contain no malicious code and may also be trusted to call into the ESS trusted infrastructure.

McAfee ATR uses the Mitre ATT&CK framework to map adversarial techniques, such as defense evasion, with associated campaigns. This insight helps organizations understand adversaries’ behavior and evolution so that they can assess their security posture and respond appropriately to contain and eradicate attacks. McAfee ATR creates and shares Yara rules based on threat analysis to be consumed for protect and detect capabilities.

Process Manipulation Techniques (SynAck Ransomware)

McAfee Advanced Threat Research analyzed SynAck ransomware in 2018 and discovered it used Process Doppelganging with Process Hollowing as its fallback defense evasion technique. We use this malware to explain the Process Hollowing and Process Doppelganging techniques, so that they can be compared to Process Reimaging based on a real-world observation.

Process Manipulation defense evasion techniques continue to evolve. Process Doppelganging was publicly announced in 2017, requiring advancements in ESSs for protection and detection capabilities. Because process manipulation techniques generally exploit legitimate features of the Windows operating system, they can be difficult to defend against if the Antivirus scanner does not block the file prior to process launch.

Process Hollowing

“Process hollowing occurs when a process is created in a suspended state then its memory is unmapped and replaced with malicious code. Execution of the malicious code is masked under a legitimate process and may evade defenses and detection analysis” (see figure 2 below).

Figure 2 – SynAck Ransomware Defense Evasion with Process Hollowing

Process Doppelganging

“Process Doppelgänging involves replacing the memory of a legitimate process, enabling the veiled execution of malicious code that may evade defenses and detection. Process Doppelgänging’s use of Windows Transactional NTFS (TxF) also avoids the use of highly-monitored API functions such as NtUnmapViewOfSection, VirtualProtectEx, and SetThreadContext” (see figure 3 below).

Figure 3 – SynAck Ransomware Defense Evasion with Process Doppelganging

Process Reimaging Weaponization

The Windows Kernel APIs return stale and inconsistent FILE_OBJECT paths which enable an adversary to bypass Windows operating system process attribute verification. This allows an adversary to persist a malicious process (post exploitation) by hiding the physical location of a process EXE (see figure 4 below).

Figure 4 – SynAck Ransomware Defense Evasion with Process Reimaging

Process Reimaging Technical Deep Dive

NtQueryInformationProcess retrieves process information from EPROCESS structure fields in the kernel, and NtQueryVirtualMemory retrieves information from the Virtual Address Descriptors (VADs) field in the EPROCESS structure.

The EPROCESS structure contains filename and path information at the following fields/offsets (see figure 5 below):

  • +0x3b8 SectionObject (filename and path)
  • +0x448 ImageFilePointer* (filename and path)
  • +0x450 ImageFileName (filename)
  • +0x468 SeAuditProcessCreationInfo (filename and path)

* this field is only present in Windows 10

Figure 5 – Code Complexity IDA Graph Displaying NtQueryInformationProcess Filename APIs within NTDLL

Kernel API NtQueryInformationProcess is consumed by the following kernelbase/NTDLL APIs:

  • K32GetModuleFileNameEx
  • K32GetProcessImageFileName
  • QueryFullProcessImageName

The VADs hold a pointer to FILE_OBJECT for all mapped images in the process, which contains the filename and filepath (see figure 6 below).

Kernel API NtQueryVirtualMemory is consumed by the following kernelbase/NTDLL API:

  • GetMappedFileName

Figure 6 – Code Complexity IDA Graph Displaying NtQueryVirtualMemory Filename API within NTDLL

Windows fails to update any of the above kernel structure fields when a FILE_OBJECT filepath is modified post-process creation. Windows does update FILE_OBJECT filename changes for some of the above fields.

The VADs reflect any filename change for a loaded image after process creation, but they don’t reflect any rename of the filepath.

The EPROCESS fields also fail to reflect any renaming of the process filepath and only the ImageFilePointer field reflects a filename change.

As a result, the APIs exported by NtQueryInformationProcess and NtQueryVirtualMemory return incorrect process image file information when called by ESSs or other Applications (see Table 1 below).

Table 1 OS/Kernel version and API Matrix
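
To see the user-mode side of these lookups, the sketch below queries one process through three of the APIs listed above; under a reimaged process, the returned paths can disagree with the file actually backing the process. This is an illustrative harness written against the documented Win32 APIs, not the McAfee proof-of-concept.

```cpp
#include <windows.h>
#include <psapi.h>   // GetProcessImageFileNameW, GetModuleFileNameExW
#include <cstdio>
#include <cstdlib>
#include <cwchar>

int main(int argc, char** argv) {
    DWORD pid = (argc > 1) ? static_cast<DWORD>(std::atoi(argv[1])) : GetCurrentProcessId();
    HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (!h) return 1;

    wchar_t devicePath[MAX_PATH] = L"";   // native path, e.g. \Device\HarddiskVolumeX\...
    GetProcessImageFileNameW(h, devicePath, MAX_PATH);        // K32GetProcessImageFileName

    wchar_t modulePath[MAX_PATH] = L"";   // Win32 path of the main module
    GetModuleFileNameExW(h, nullptr, modulePath, MAX_PATH);   // K32GetModuleFileNameEx

    wchar_t fullPath[MAX_PATH] = L"";
    DWORD len = MAX_PATH;
    QueryFullProcessImageNameW(h, 0, fullPath, &len);         // QueryFullProcessImageName

    // For a reimaged process, one or more of these can point at a stale or
    // renamed location that no longer matches the file on disk.
    wprintf(L"K32GetProcessImageFileName: %ls\n", devicePath);
    wprintf(L"K32GetModuleFileNameEx    : %ls\n", modulePath);
    wprintf(L"QueryFullProcessImageName : %ls\n", fullPath);

    CloseHandle(h);
    return 0;
}
```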

Prerequisites for all Attack Vectors

Process Reimaging targets the post-exploitation phase, whereby a threat actor has already gained access to the target system. This is the same prerequisite as for the Process Hollowing and Doppelganging techniques within the Defense Evasion category of the Mitre ATT&CK framework.

Process Reimaging Attack Vectors
FILE_OBJECT Filepath Changes

Simply renaming the filepath of an executing process results in Windows OS returning the incorrect image location information for all APIs (See figure 7 below).  This impacts all Windows OS versions at the time of testing.

Figure 7 FILE_OBJECT Filepath Changes – Filepath Changes Impact all Windows OS versions

FILE_OBJECT Filename Changes

Filename Change >= Windows 10

Simply renaming the filename of an executing process results in Windows OS returning the incorrect image information for K32GetProcessImageFileName API (See figure 8.1.1 below). This has been confirmed to impact Windows 10 only.

Figure 8.1.1 FILE_OBJECT Filename Changes – Filename Changes impact Windows >= Windows 10

Per figure 8.1.2 below, GetModuleFileNameEx and QueryFullProcessImageName will get the correct filename changes due to a new EPROCESS field, ImageFilePointer, at offset 0x448. The instruction there (mov r12, [rbx+448h]) references the ImageFilePointer at offset 0x448 into the EPROCESS structure.

Figure 8.1.2 NtQueryInformationProcess (Windows 10) – Windows 10 RS1 x64 ntoskrnl version 10.0.14393.0

Filename Change < Windows 10

Simply renaming the filename of an executing process results in Windows OS returning the incorrect image information for the K32GetProcessImageFileName, GetModuleFileNameEx and QueryFullProcessImageName APIs (see figure 8.2.1 below). This has been confirmed to impact Windows 7 and Windows 8.

Figure 8.2.1 FILE_OBJECT Filename Changes – Filename Changes Impact Windows < Windows 10

Per Figure 8.2.2 below, GetModuleFileNameEx and QueryFullProcessImageName will get the incorrect filename (PsReferenceProcessFilePointer references EPROCESS offset 0x3b8, SectionObject).

Figure 8.2.2 NtQueryInformationProcess (Windows 7 and 8) – Windows 7 SP1 x64 ntoskrnl version 6.1.7601.17514

LoadLibrary FILE_OBJECT reuse

LoadLibrary FILE_OBJECT reuse leverages the fact that when a LoadLibrary or CreateProcess is called after a LoadLibrary and FreeLibrary on an EXE or DLL, the process reuses the existing image FILE_OBJECT in memory from the prior LoadLibrary.

The exact sequence is:

  1. LoadLibrary (path\filename)
  2. FreeLibrary (path\filename)
  3. LoadLibrary (renamed path\filename) or CreateProcess (renamed path\filename)

This results in Windows creating a VAD entry in the process at step 3 above, which reuses the same FILE_OBJECT still in process memory that was created at step 1. The VAD now has incorrect filepath information for the file on disk, and therefore the GetMappedFileName API will return the incorrect location on disk for the image in question. (A minimal code sketch of this sequence follows the prerequisites below.)

The following prerequisites are required to evade detection successfully:

  • The LoadLibrary or CreateProcess must use the exact same file on disk as the initial LoadLibrary
  • Filepath must be renamed (dropping the same file into a newly created path will not work)
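
A minimal sketch of this sequence, following the steps and prerequisites above, might look like the following. The directory and file names are placeholder assumptions, and this is an illustration of the described ordering using standard Win32 calls, not the McAfee proof-of-concept code.

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // Placeholder path: the same file on disk, reachable under an old and a new path.
    const wchar_t* originalExe = L"C:\\staging\\sample.exe";

    // Step 1: LoadLibrary the EXE so the process acquires a FILE_OBJECT for it.
    HMODULE mod = LoadLibraryW(originalExe);
    if (!mod) { std::printf("LoadLibrary failed: %lu\n", GetLastError()); return 1; }

    // Step 2: FreeLibrary; the image FILE_OBJECT can linger in process memory.
    FreeLibrary(mod);

    // Prerequisite: rename the filepath (the containing directory), not a copy of the file.
    if (!MoveFileW(L"C:\\staging", L"C:\\elsewhere")) {
        std::printf("MoveFile failed: %lu\n", GetLastError());
        return 1;
    }

    // Step 3: CreateProcess against the renamed path; the stale FILE_OBJECT is reused,
    // so lookups such as GetMappedFileName report the old, now-incorrect location.
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"C:\\elsewhere\\sample.exe";
    if (CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE, 0, nullptr, nullptr, &si, &pi)) {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}
```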

The Process Reimaging technique can be used in two ways with LoadLibrary FILE_OBJECT reuse attack vector:

  1. LoadLibrary (see figure 9 below): When an ESS or application calls the GetMappedFileName API to retrieve a memory-mapped image file, Process Reimaging will cause the Windows OS to return the incorrect path. This impacts all Windows OS versions at the time of testing.
Figure 9 LoadLibrary FILE_OBJECT Reuse (LoadLibrary) – Process Reimaging Technique Using LoadLibrary Impacts all Windows OS Versions

  2. CreateProcess (see figure 10 below): When an ESS or application calls the GetMappedFileName API to retrieve the process image file, Process Reimaging will cause the Windows OS to return the incorrect path. This impacts all Windows OS versions at the time of testing. (A sketch of this ESS-side lookup follows figure 10 below.)

Figure 10 LoadLibrary FILE_OBJECT Reuse (CreateProcess) – Process Reimaging Technique using CreateProcess Impacts all Windows OS Versions
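
The ESS-side lookup being subverted here can be illustrated with a small sketch that asks Windows which file backs the main module of a target process via GetMappedFileName. Again, this is illustrative rather than any vendor’s production code.

```cpp
#include <windows.h>
#include <psapi.h>   // EnumProcessModules, GetMappedFileNameW
#include <cstdio>
#include <cstdlib>
#include <cwchar>

int main(int argc, char** argv) {
    DWORD pid = (argc > 1) ? static_cast<DWORD>(std::atoi(argv[1])) : GetCurrentProcessId();
    HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (!h) return 1;

    HMODULE mainModule = nullptr;
    DWORD needed = 0;
    // The first module returned is the process executable.
    if (EnumProcessModules(h, &mainModule, sizeof(mainModule), &needed)) {
        wchar_t mapped[MAX_PATH] = L"";
        // GetMappedFileName resolves through the VAD-backed FILE_OBJECT, which is
        // exactly what goes stale under the LoadLibrary FILE_OBJECT reuse vector.
        if (GetMappedFileNameW(h, mainModule, mapped, MAX_PATH)) {
            wprintf(L"PID %lu main module maps to: %ls\n", pid, mapped);
        }
    }
    CloseHandle(h);
    return 0;
}
```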

Process Manipulation Techniques Comparison

Windows Defender Process Reimaging Filepath Bypass Demo

This video simulates zero-day malware (a Mimikatz PUP sample) being dropped to disk and executed as the malicious process “phase1.exe”. Using the Process Reimaging filepath attack vector, we demonstrate that even if Defender is updated with a signature for the malware on disk, it will not detect the running malicious process. Therefore, for non-EDR ESSs such as Defender Real-time Protection (used by consumers and enterprises alike), the malicious process can dwell on a Windows machine until a reboot or until the machine receives a full scan after a signature update.

CVSS and Protection Recommendations

CVSS

If a product uses any of the APIs listed in table 1 for the following use cases, then it is likely vulnerable:

  1. Process reputation of a remote process – any product using the APIs to determine if executing code is from a malicious file on disk

CVSS score 5.0 (Medium)  https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:L/AC:L/PR:L/UI:R/S:U/C:N/I:H/A:N (same score as Doppelganging)

  2. Trust verification of a remote process – any product using the APIs to verify trust of a calling process

CVSS score will be higher than 5.0; scoring specific to Endpoint Security Solution architecture

Protection Recommendations

McAfee Advanced Threat Research submitted the Process Reimaging technique to Microsoft on June 5th, 2018. Microsoft released a partial mitigation to Defender in the June 2019 cumulative update, for the Process Reimaging FILE_OBJECT filename changes attack vector only. This update was only for Windows 10 and does not address the vulnerable APIs in Table 1 at the OS level; therefore, ESSs are still vulnerable to Process Reimaging. Defender also remains vulnerable to the FILE_OBJECT filepath changes attack vector executed in the bypass demo video, and this attack vector affects all Windows OS versions.

New and existing Process Manipulation techniques which abuse legitimate operating system features for defense evasion are difficult to prevent dynamically by monitoring specific API calls, as doing so can lead to false positives such as preventing legitimate processes from executing.

A process which has been manipulated by Process Reimaging will be trusted by the ESS unless it has been traced by EDR or by a memory scan, which may provide deeper insight.

Mitigations recommended to Microsoft

  1. File System Synchronization (EPROCESS structures out of sync with the filesystem or File Control Block (FCB) structure)
    1. Allow the EPROCESS structure fields to reflect filepath changes as is currently implemented for the filename in the VADs and EPROCESS ImageFilePointer fields.
    2. There are other EPROCESS fields which do not reflect changes to filenames and need to be updated, such as K32GetModuleFileNameEx on Windows 10 through the ImageFilePointer.
  2. API Usage (most returning file info for process creation time)
    1. Defender (MpEngine.dll) currently uses K32GetProcessImageFileName to get process image filename and path when it should be using K32GetModuleFileNameEx.
    2. Consolidate the duplicate APIs exposed from NtQueryInformationProcess to provide easier management and guidance to consumers that need to retrieve process filename information. For example, clearly state that GetMappedFileName should only be used for DLLs and not for the EXE backing a process.
    3. Differentiate in API description whether the API is only limited to retrieving the filename and path at process creation or real-time at time of request.
  3. Filepath Locking
    1. Lock the filepath and name, similar to how file modification is locked while a process is executing, to prevent modification.
    2. Standard user at a minimum should not be able to rename binary paths for its associated executing process.
  4. Reuse of existing FILE_OBJECT with LoadLibrary API (Prevent Process Reimaging)
    1. LoadLibrary should verify that any existing FILE_OBJECT it reuses has the most up-to-date filepath at load time.
  5. Short-term mitigation: Defender should at least flag that it found malicious process activity but could not find the associated malicious file on disk (right now it fails open, providing no notification as to any potential threats found in memory or on disk).

Mitigation recommended to Endpoint Security Vendors

The FILE_OBJECT ID must be tracked from FileCreate, because the process closes its handle for the filename by the time the image is loaded at ImageLoad.

This ID must be managed by the Endpoint Security Vendor so that it can be leveraged to determine if a process has been reimaged when performing process attribute verification.

The post In NTDLL I Trust – Process Reimaging and Endpoint Security Solution Bypass appeared first on McAfee Blogs.

Make Seqrite UTM the first line of defense for your enterprise

Network security has traditionally been a number one priority for enterprises. As reliance on the Internet has increased, enterprises have invested in traditional network security solutions which aim to protect trusted internal networks from external actors. For this purpose, many deploy a firewall that stands at the perimeter of the company’s network and monitors and controls incoming and outgoing network traffic. Similarly, organizations have also invested in Unified Threat Management (UTM) solutions, which combine and integrate multiple security functions for protection.

Enterprises can consider Seqrite’s Unified Threat Management (UTM) which combines multi-layered cybersecurity strategies for businesses, thereby safeguarding the entire IT framework while rendering it productive, secure and stable. Seqrite is one reliable security service provider that offers UTM as a gateway security solution. Seqrite’s UTM offers a host of features for enterprises in areas of networking, administration, content filtering, VPN, monitoring and reporting, mail protection, firewall, security services and user authentication.

Unified Threat Management is a holistic service that brings together features like content filtering, VPN, firewall and antivirus protection under a single dashboard. Some of the key features of UTM which can serve as the first line of defense for your enterprise are:

  • Gateway Antivirus

The Gateway Antivirus feature scans all incoming and outgoing network traffic at the gateway level. This helps to augment existing virus solutions by reducing the window of vulnerability (WoV) as threats are detected and dealt with right at the network level, hence preventing their entry into the rest of the enterprise.

  • IPS

Through the Intrusion Prevention System (IPS) feature, network traffic is scanned in real-time. This helps prevent a broad range of Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks even before they can penetrate the network. IPS can also configure rules, policies and required actions upon capturing these alarms.

  • Firewall Protection

With the best-in-class firewall protection, network administrators can permit or block access for traffic between internal and external networks based on enterprise compliance policies.

  • URL Filtering

When it comes to selecting a functional UTM solution, spam blocking and URL filtering need to be prioritized. These components are the building blocks of an enterprise-level network security solution and a key feature within reliable UTM products. URL filtering helps block risky websites and when paired with spam filtering, can also block the entry of spam mails and certain forms of phishing attacks. Seqrite UTM’s URL Filtering feature allows blocking of non-business related web traffic including streaming media sites, downloads, instant messaging etc. in order to reduce unnecessary load on enterprise bandwidth.

  • Gateway Mail Protection

Thanks to the Gateway Mail Protection features, enterprises can be sure that they are protected from malicious emails and Business Email Compromise (BEC) attacks. This feature scans incoming/outgoing emails or attachments at the gateway level to block spam and phishing emails before they enter the network.

  • Load Balancing

This feature allows the distribution of bandwidth across multiple ISPs within the enterprise network and enables these ISPs to operate over the same gateway channels. Traffic is balanced across the ISP lines based on weightage and priority.
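
As a generic illustration of weight-based distribution (not Seqrite’s actual implementation), the sketch below selects an outbound ISP link in proportion to configured weights.

```cpp
#include <cstdio>
#include <random>
#include <string>
#include <vector>

// A link with a relative weight: higher weight receives proportionally more sessions.
struct IspLink {
    std::string name;
    int weight;
};

// Pick a link at random, in proportion to its weight.
const IspLink& pickLink(const std::vector<IspLink>& links, std::mt19937& rng) {
    int total = 0;
    for (const auto& l : links) total += l.weight;
    std::uniform_int_distribution<int> dist(1, total);
    int roll = dist(rng);
    for (const auto& l : links) {
        roll -= l.weight;
        if (roll <= 0) return l;
    }
    return links.back();
}

int main() {
    std::vector<IspLink> links = {{"ISP-A", 3}, {"ISP-B", 1}};  // roughly a 3:1 split
    std::mt19937 rng(std::random_device{}());
    for (int i = 0; i < 8; ++i)
        std::printf("session %d -> %s\n", i, pickLink(links, rng).name.c_str());
    return 0;
}
```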

The above points make it clear that Seqrite Unified Threat Management (UTM) gives enterprises the power and tools required to make it their first line of defense against cyber attacks.

The post Make Seqrite UTM the first line of defense for your enterprise appeared first on Seqrite Blog.

Medical debt collection agency files for bankruptcy protection after data breach

A US medical bill and debt collection agency has filed for Chapter 11 bankruptcy protection after suffering a data breach that exposed the sensitive personal data of at least 20 million people.

Compromised data included names, addresses, dates of birth and Social Security numbers – data that could be used to commit fraud and identity theft.

RMCB (the Retrieval-Masters Creditors Bureau) – the parent company of AMCA (the American Medical Collection Agency) – listed assets and liabilities of up to $10 million and estimated that it had between 100 and 199 creditors.

The company’s founder and CEO Russell H. Fuchs said in a court declaration that the breach had prompted a “cascade of events” resulting in “enormous expenses that were beyond [its] ability […] to bear”.

These included spending more than $3.8 million on notifying more than 7 million individuals that their personal data had potentially been compromised – $2.5 million of which Fuchs loaned the company himself.

Chapter 11 filings help businesses restructure their debts and assets, and wind up their affairs in an orderly manner.

Undetected data breach

AMCA was hacked over an eight-month period from 1 August 2018 to 30 March 2019.

Gemini Advisory, which alerted AMCA to the incident, explains that it first identified information stolen from the company on 28 February.

The next day, it “made several unsuccessful attempts to contact AMCA in order to alert the victims” before informing federal law enforcement.

Databreaches.net first reported the incident on 10 May, using information provided by Gemini Advisory, but was unable to elicit any comment from AMCA.

Customer data exposed

According to ZDNet, companies that used AMCA’s payment portal to bill their medical customers include Quest Diagnostics (12 million exposed records), LabCorp (8 million), BioReference Laboratories (423,000), Carecentrix (500,000) and Sunrise Laboratories (unknown number).

All have either “terminated or substantially curtailed their business relationships” with AMCA, Fuchs said.

The real price of a data breach

RMCB/AMCA has been in business since 1977. Following the breach, it was forced to cut its headcount from 113 to just 25. Moreover, it is not “optimistic that it will be able to rehabilitate its business”.

After more than 40 years, this will be a bitter blow.

The lesson to be learned is that all organisations are at risk from cyber attacks and that the results can be disastrous.

Defending against cyber attacks is therefore critical.

Cyber security boot camp

If you need to improve your cyber security quickly, you can get all the support you need on our cyber security boot camp.

Download our free Cyber Security Combat Plan and discover:

  • The five defensive measures you should take to protect your organisation from cyber attacks;
  • The benefits and the risks associated with each of them; and
  • How to build a business case for implementing them.

Enlist now >>


The post Medical debt collection agency files for bankruptcy protection after data breach appeared first on IT Governance Blog.

Welcoming Cloudbric’s new CSO for strategic planning and investor relations

We’re excited to announce that Yujin (Gin) Hyeon has joined Cloudbric as Chief Strategic Officer (CSO).

As Cloudbric’s new CSO, Gin will drive corporate strategy and investor relations as the company takes its next big step forward. A veteran of the tech industry, he brings experience guiding early-stage companies from growth through IPO, which will be pivotal for Cloudbric’s continued development. Given his track record, we are very excited to have him join our roster.

To give some background, Gin was a Co-Founder of Com2uS, one of the world’s first mobile gaming companies, which was established in 1996 and known for games like Summoners War: Sky Arena, Ace Fishing, Golf Star, and Tiny Farm.

His role was twofold, with one hand on the business side and the other on product development. On the business side, Gin focused on acquiring funding for the team and expanding the business overseas, including opening regional offices in London, Bangalore, Los Angeles, and Singapore.

On the product development end, he helped the company achieve its strategic goals and drove product development by co-developing the product lineup.

Highlights of his time at Com2uS include developing and negotiating contractual agreements with license holders, mobile telecommunications carriers, and other strategic partners, ultimately completing agreements with 64 mobile operators across 32 countries.

Notably, he also developed strategic partnerships with Samsung Electronics, Nokia, Motorola, Sony Ericsson, Siemens, Sun Microsystems, Qualcomm, and YAHOO.

It doesn’t end with just Com2uS!

Gin worked for INKA ENTWORKS, which specializes in security solutions and is known worldwide as one of the leading DRM (Digital Rights Management) technology companies.

With the launch of AppSealing, a solution that protects mobile applications against hacking, Gin, as Senior VP, oversaw the strategic and business plans for the product while also implementing a China strategy before launching it into an incubation program.

He also served as COO of ASCAN, a Korea-based document management and record archiving company that uses AI.

While balancing operations and strategic planning, Gin also helped the company acquire overseas funding.

Beyond the tech industry, Gin has also accumulated over 15 years of experience in consulting and grant writing.

He has consulted for companies such as Fairways Consulting Services, a firm focused on preparing businesses to enter the Indian market, and Brilliant Rise, a Hong Kong-based company of former technology and internet executives that provides consulting, project management, and merger & acquisition services.

Most recently, as CSO of the Korean product design and development startup PiQuant, he oversaw investor relations and the company’s strategic partnerships.

With his extensive experience building company strategies across various industries and his broad investor network, we are excited to bring Gin onto the team as someone who can help us continue to grow.

Gin is joining the team at a time when Cloudbric’s workforce is growing by 50%.

Welcome

환영합니다

स्वागत

Bienvenue Gin!

(Though a Korean national, Gin spent 32 years in India. He speaks five languages: English, Korean, Hindi, French, and Spanish.)

As a company, we’re also excited to continue growing across various industries.

Recently, Cloudbric began providing security services to cryptocurrency exchanges and blockchain businesses, and it has now moved into operating blockchain wallet nodes, drawing on its know-how in cloud computing services such as AWS.

We aim to build blockchain nodes in our existing data centers and servers around the world to grow this business.


Make sure to follow us on our social media platforms (LinkedIn, Twitter, and Facebook) and our recently opened Telegram Announcement Channel for the latest updates!

The post Welcoming Cloudbric’s new CSO for strategic planning and investor relations appeared first on Cloudbric.

Upcoming cybersecurity events featuring BH Consulting

Here, we list upcoming events, conferences, webinars and training featuring members of the BH Consulting team presenting about cybersecurity, risk management, data protection, GDPR, and privacy. 

ISACA Last Tuesday: Dublin, 25 June

BH Consulting COO Valerie Lyons will present a talk on building an emotionally intelligent security team, and the role that leadership plays in influencing team style. It will be an interactive and fun session with several takeaways and directions to free online tools to help analyse team member roles. The evening event will take place at the Carmelite Community Centre on Aungier Street in Dublin 2. Attendance is free; to register, visit this link

Data Protection Officer certification course: Vilnius/Maastricht June/July

BH Consulting contributes to this specialised hands-on training course that provides the knowledge needed to carry out the role of a data protection officer under the GDPR. This course awards the ECPC DPO certification from Maastricht University. Places are still available at the courses scheduled for June and July, and a link to book a place is available here

IAM Annual Conference: Dublin, 28-30 August

Valerie Lyons is scheduled to speak at the 22nd annual Irish Academy of Management Conference, taking place at the National College of Ireland. The event will run across three days, and its theme considers how business and management scholarship can help to solve societal challenges. For more details and to register, visit the IAM conference page

The post Upcoming cybersecurity events featuring BH Consulting appeared first on BH Consulting.