Author Archives: Richard Bejtlich

MITRE ATT&CK Tactics Are Not Tactics



Just what are "tactics"?

Introduction


MITRE ATT&CK is a great resource, but something about it has bothered me since I first heard about it several years ago. It's a minor point, but I wanted to document it in case it confuses anyone else.

The MITRE ATT&CK Design and Philosophy document from March 2020 says the following:

At a high-level, ATT&CK is a behavioral model that consists of the following core components:

• Tactics, denoting short-term, tactical adversary goals during an attack;
• Techniques, describing the means by which adversaries achieve tactical goals;
• Sub-techniques, describing more specific means by which adversaries achieve tactical goals at a lower level than techniques; and
• Documented adversary usage of techniques, their procedures, and other metadata.

My concern is with MITRE's definition of "tactics" as "short-term, tactical adversary goals during an attack," which is oddly recursive.

The key word in the tactics definition is goals. According to MITRE, "tactics" are "goals."

Examples of ATT&CK Tactics


ATT&CK lists the following as "Enterprise Tactics":

MITRE ATT&CK "Tactics," https://attack.mitre.org/tactics/enterprise/

Looking at this list, the first 11 items could indeed be seen as goals. The last item, Impact, is not a goal. That item is an artifact of trying to shoehorn more information into the ATT&CK structure. That's not my primary concern though.

Military Theory and Definitions


As a service academy graduate who sat through many lectures on military theory, and who participated in small unit exercises, I find the idea of tactics as "goals" nonsensical.

I'd like to share three resources that offer a different perspective on tactics. Although all three are military, my argument does not depend on that association.

The DOD Dictionary of Military and Associated Terms defines tactics as "the employment and ordered arrangement of forces in relation to each other. See also procedures; techniques. (CJCSM 5120.01)" (emphasis added)

In his book On Tactics, B. A. Friedman defines tactics as "the use of military forces to achieve victory over opposing enemy forces over the short term." (emphasis added)

Dr. Martin van Creveld, scholar and author from the military strategy world, wrote the excellent Encyclopedia Britannica entry on tactics. His article includes the following:

"Tactics, in warfare, the art and science of fighting battles on land, on sea, and in the air. It is concerned with the approach to combat; the disposition of troops and other personalities; the use made of various arms, ships, or aircraft; and the execution of movements for attack or defense...

The word tactics originates in the Greek taxis, meaning order, arrangement, or disposition -- including the kind of disposition in which armed formations used to enter and fight battles. From this, the Greek historian Xenophon derived the term tactica, the art of drawing up soldiers in array. Likewise, the Tactica, an early 10th-century handbook said to have been written under the supervision of the Byzantine emperor Leo VI the Wise, dealt with formations as well as weapons and the ways of fighting with them.

The term tactics fell into disuse during the European Middle Ages. It reappeared only toward the end of the 17th century, when “Tacticks” was used by the English encyclopaedist John Harris to mean 'the Art of Disposing any Number of Men into a proposed form of Battle...'"

From these three examples, it is clear that tactics are about the use and disposition of forces or capabilities during engagements. Goals are entirely different. Tactics are the methods by which leaders achieve goals.

How Did This Happen?


I was not a fly on the wall when the MITRE team designed ATT&CK. Perhaps the team fixated on the phrase "tactics, techniques, and procedures," or "TTPs," again derived from military usage, when designing ATT&CK? TTPs became hot during the 2000s as incident responders with military experience drew on that language when developing concepts like indicators of compromise. That fixation might have led MITRE to use "tactics" for its top-level structure.

It would have made more sense for MITRE to have just said "goal" or "objective," but "GTP" isn't recognized by the digital defender world.

It's Not Just the Military


Some readers might think "ATT&CK isn't a military tool, so your military examples don't apply." I use the military references to show that the word tactic has military origins, like the word "strategy," from the Greek strategos (plural strategoi; Greek: στρατηγός, pl. στρατηγοί; Doric Greek: στραταγός, stratagos), meaning "army leader."

That said, I would be surprised to see the word tactics used as "goals" anywhere else. For example, none of these examples from the non-military world involve tactics as goals:

This Harvard Business Review article defines tactics as "the day-to-day and month-to-month decisions required to manage a business." 

This guide for ice hockey coaches mentions tactics like "give and go’s, crossing attacks, cycling the puck, chipping the puck to space and overlapping."

This guide for small business marketing lists tactics like advertising, grass-roots efforts, trade shows, website optimization, and email and social marketing.

In the civilian world, tactics are how leaders achieve goals or objectives.

Conclusion


In the big picture, it doesn't matter that much to ATT&CK content that MITRE uses the term "tactics" when it really means "goals." 

However, I wrote this article because the ATT&CK design and philosophy emphasizes a common language, e.g., ATT&CK "succinctly organizes adversary tactics and techniques along with providing a common language used across security disciplines."

If we want to share a common language, it's important that we recognize that the ATT&CK use of the term "tactics" is an anomaly. Perhaps a future edition will change the terminology, but I doubt it given how entrenched it is at this point.

Greg Rattray Invented the Term Advanced Persistent Threat

 



I was so pleased to read this Tweet yesterday from Greg Rattray:

"Back in 2007, I coined the term “Advanced Persistent Threat” to characterize emerging adversaries that we needed to work with the defense industrial base to deal with... Since then both the APT term and the nature of our adversaries have evolved. What hasn’t changed is that in cyberspace, advanced attackers will persistently go after targets with assets they want, no matter the strength of defenses."

Background


First, some background. Who is Greg Rattray?

You could call him Colonel or Doctor. I will use Col, as that was the last title I used with him, although these days when we chat I call him Greg.

Col Rattray served 21 years in the Air Force and also earned his PhD in international security from Tufts University. His thesis formed the content for his 2001 book Strategic Warfare in Cyberspace, which I reviewed in 2002 and rated 4 stars. (Ouch -- I was a bit stingy with the stars back then. I was more of an operator and less of a theorist or historian in those days. Such was my bias I suppose.)

Col Rattray is also a 1984 graduate of the Air Force Academy. He studied history and political science there and returned as an assistant professor in the early 1990s. He was one of my instructors when I was a cadet there. (I graduated in 1994 with degrees in history and political science.) Col Rattray then earned a master of public policy degree at Harvard Kennedy School. (I did the same, in 1996.) 

Do you see a pattern here? He is clearly a role model. Of course, I did not stay in the Air Force as long, earn the same rank, or survive my PhD program!

After the Academy, Col Rattray served as commander of the 23rd Information Operations Squadron on Security Hill in San Antonio, Texas. I was working in the AFCERT at the time.

One of the last duties I had in uniform was to travel to Nellis AFB outside Las Vegas and participate in a doctrine writing project for information warfare. At the time I was not a fan of the idea, but Col Rattray convinced me someone needed to write down how we did computer network defense in the AFCERT. 

He didn't order me to participate, which I always appreciated. Years later I told him it was a good idea to organize that project and that I was probably just grumpy because of the way the Air Force personnel system had treated me at the end of my military career.

Why The Tweet Matters


For years I've had to dance around the issue of who invented the term "APT." In most narratives I say that an Air Force colonel invented the term in 2006. I based this on discussions I had with colleagues in the defense industrial base who were working with said colonel and his team from the Air Force. I did not know back then that it was Col Rattray and his team from the Air Force Information Warfare Center. 

Years later I learned of Rattray's role, but not directly from him. Only this year did Col Rattray confirm to me that he had invented the term, and that 2007 was the correct year. I encouraged him to say something, because as an historian I appreciate the value of facts and narrative. As I Tweeted after seeing Greg's Tweet:

"Security, like any other field, has HISTORY, which means there are beginnings, and stories, and discoveries, and innovators, and leaders, and first steps, and pioneers. I'm so pleased to see people like @GregRattray_ feel comfortable enough after all these years to say something."

I don't think many people in the security field think about history. Security tends to be obsessed with the "new" and the "shiny." Not enough people wonder how we got to this point, or what decisions led to the current situation. The security scene in 2020 is very different from the scene in 1960, or 1970, or 1980, or 1990, or 2000, or even 2010. This is not the time to describe how or why that is the case. I'm just glad a very important piece of the puzzle is now public.

More on the APT



If you'd like to learn more about the history of the APT, check out my newest book -- The Best of TaoSecurity Blog, Volume 2. I devote an entire chapter to blog posts and new commentary on the APT. Volume 1 arrived a few months before this new book, and I'm working on Volume 3 now.

The FBI Intrusion Notification Program

The FBI intrusion notification program is one of the most important developments in cyber security during the last 15 years. 

This program achieved mainstream recognition on 24 March 2014, when Ellen Nakashima reported on it for the Washington Post in her story U.S. notified 3,000 companies in 2013 about cyberattacks.

The story noted the following:

"Federal agents notified more than 3,000 U.S. companies last year that their computer systems had been hacked, White House officials have told industry executives, marking the first time the government has revealed how often it tipped off the private sector to cyberintrusions...

About 2,000 of the notifications were made in person or by phone by the FBI, which has 1,000 people dedicated to cybersecurity investigations among 56 field offices and its headquarters. Some of the notifications were made to the same company for separate intrusions, officials said. Although in-person visits are preferred, resource constraints limit the bureau’s ability to do them all that way, former officials said...

Officials with the Secret Service, an agency of the Department of Homeland Security that investigates financially motivated cybercrimes, said that they notified companies in 590 criminal cases opened last year, officials said. Some cases involved more than one company."

The reason this program is so important is that it shattered the delusion that some executives used to reassure themselves. When the FBI visits your headquarters to tell you that you are compromised, you can't pretend that intrusions are "someone else's problem."

It may be difficult for some readers to appreciate how prevalent this mindset was, from the beginnings of IT to about the year 2010.

I do not know exactly when the FBI began notifying victims, but I believe the mid-2000s is a safe estimate. I can personally attest to the program around that time.

I was reminded of the importance of this program by Andy Greenberg's new story The FBI Botched Its DNC Hack Warning in 2016—but Says It Won’t Next Time.

I strongly disagree with this "botched" characterization. Andy writes:

"[S]omehow this breach [of the Democratic National Committee] had come as a terrible surprise—despite an FBI agent's warning to [IT staffer Yared] Tamene of potential Russian hacking over a series of phone calls that had begun fully nine months earlier.

The FBI agent's warnings had 'never used alarming language,' Tamene would tell the Senate committee, and never reached higher than the DNC's IT director, who dismissed them after a cursory search of the network for signs of foul play."

As with all intrusions, criminal responsibility lies with the intruder. However, I do not see why the FBI is supposed to carry the blame for how this intrusion unfolded. 

According to investigatory documents and this CrowdStrike blog post on their involvement, at least seven months passed between the time the FBI notified the DNC (sometime in September 2015) and the time the DNC contacted CrowdStrike (30 April 2016). That is ridiculous.

If I received a call from the FBI even hinting at a Russian presence in my network, I would be on the phone with a professional incident response firm right after I briefed the CEO about the call.

I'm glad the FBI continues to improve its victim notification procedures, but it doesn't make much of a difference if the individuals running IT and the organization are negligent, either through incompetence or inaction.

Note: Fixed year typo.

New Book! The Best of TaoSecurity Blog, Volume 2


 

I published a new book!

The Best of TaoSecurity Blog, Volume 2: Network Security Monitoring, Technical Notes, Research, and China and the Advanced Persistent Threat

It's in the Kindle Store, and if you're a Kindle Unlimited member, it's free. Print edition to follow.

The book lists as having 413 pages (for the Kindle edition, at least) and runs almost 95,000 words. I started working on it in June after finishing Volume 1.

Here is the book description:

Since 2003, cybersecurity author Richard Bejtlich has been writing posts on TaoSecurity Blog, a site with 15 million views since 2011. Now, after re-reading over 3,000 posts and approximately one million words, he has selected and republished the very best entries from 17 years of writing. 

In the second volume of the TaoSecurity Blog series, Mr. Bejtlich addresses how to detect and respond to intrusions using third party threat intelligence sources, network data, application and infrastructure data, and endpoint data. He assesses government and private security initiatives and applies counterintelligence and counteradversary mindsets to defend digital assets. He documents the events of the last 20 years of Chinese hacking from the perspective of a defender on the front lines, in the pre- and post-APT era. 

This volume contains some of Mr. Bejtlich’s favorite posts, such as histories of threat hunting, so-called black and white hat budgeting, attribution capabilities and limits, and rating information security incidents. He has written new commentaries to accompany each post, some of which would qualify as blog entries in their own right.  Read how the security industry, defensive methodologies, and strategies to improve national security have evolved in this new book, written by one of the authors who has seen it all and survived to blog about it.

I have a third volume planned. I will publish it by the end of the year. 


If you have any questions about the book, let me know. Currently you can see the table of contents via the "Look Inside" function, and there is a sample that lets you download and read some of the book. Enjoy!

One Weird Trick for Reviewing Zeek Logs on the Command Line!

Are you a network security monitoring dinosaur like me? Do you prefer to inspect your Zeek logs using the command line instead of a Web-based SIEM?

If yes, try this one weird trick!

I store my Zeek logs in JSON format. Sometimes I like to view the output using jq.
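If your Zeek deployment doesn't already write JSON, one way to enable it is Zeek's LogAscii::use_json option (a sketch; the local.zeek path below is an assumption that varies by install, and some of the extra fields in my logs, like _system_name, come from my particular sensor configuration rather than stock Zeek):

$ # path is an assumption; adjust to your install
$ echo 'redef LogAscii::use_json = T;' >> /usr/local/zeek/share/zeek/site/local.zeek

After redeploying (e.g., zeekctl deploy), the ASCII writer emits one JSON object per line, which is exactly the format jq expects.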

If I need to search directories of logs for a string, like a UID, I might* use something like zgrep with the following syntax:

$ zgrep "CLkXf2CMo11hD8FQ5" 2020-08-16/*

2020-08-16/conn_20200816_06:00:00-07:00:00+0000.log.gz:{"_path":"conn","_system_name":"ds61","_write_ts":"2020-08-16T06:26:10.266225Z","_node":"worker-01","ts":"2020-08-16T06:26:01.485394Z","uid":"CLkXf2CMo11hD8FQ5","id.orig_h":"192.168.2.76","id.orig_p":53380,"id.resp_h":"196.216.2.24","id.resp_p":21,"proto":"tcp","service":"ftp","duration":3.780829906463623,"orig_bytes":184,"resp_bytes":451,"conn_state":"SF","local_orig":true,"local_resp":false,"missed_bytes":0,"history":"ShAdDafF","orig_pkts":20,"orig_ip_bytes":1232,"resp_pkts":17,"resp_ip_bytes":1343,"community_id":"1:lEESxqaSVYqFZvWNb4OccTa9sTs="}
2020-08-16/ftp_20200816_06:26:04-07:00:00+0000.log.gz:{"_path":"ftp","_system_name":"ds61","_write_ts":"2020-08-16T06:26:04.077276Z","_node":"worker-01","ts":"2020-08-16T06:26:03.553287Z","uid":"CLkXf2CMo11hD8FQ5","id.orig_h":"192.168.2.76","id.orig_p":53380,"id.resp_h":"196.216.2.24","id.resp_p":21,"user":"anonymous","password":"ftp@example.com","command":"EPSV","reply_code":229,"reply_msg":"Entering Extended Passive Mode (|||31746|).","data_channel.passive":true,"data_channel.orig_h":"192.168.2.76","data_channel.resp_h":"196.216.2.24","data_channel.resp_p":31746}
2020-08-16/ftp_20200816_06:26:04-07:00:00+0000.log.gz:{"_path":"ftp","_system_name":"ds61","_write_ts":"2020-08-16T06:26:05.117287Z","_node":"worker-01","ts":"2020-08-16T06:26:04.597290Z","uid":"CLkXf2CMo11hD8FQ5","id.orig_h":"192.168.2.76","id.orig_p":53380,"id.resp_h":"196.216.2.24","id.resp_p":21,"user":"anonymous","password":"ftp@example.com","command":"RETR","arg":"ftp://196.216.2.24/pub/stats/afrinic/delegated-afrinic-extended-latest.md5","file_size":74,"reply_code":226,"reply_msg":"Transfer complete.","fuid":"FueF95uKPrUuDnMc4"}

That is tough on the eyes. I cannot simply pipe that output to jq, however:

$ zgrep "CLkXf2CMo11hD8FQ5" 2020-08-16/* | jq .
parse error: Invalid numeric literal at line 1, column 28

What I need to do is strip out the filename and colon before the JSON. I learned how to use sed to do this thanks to this post.

$ zgrep "CLkXf2CMo11hD8FQ5" 2020-08-16/* | sed 's/.*gz://' | jq .

{
  "_path": "conn",
  "_system_name": "ds61",
  "_write_ts": "2020-08-16T06:26:10.266225Z",
  "_node": "worker-01",
  "ts": "2020-08-16T06:26:01.485394Z",
  "uid": "CLkXf2CMo11hD8FQ5",
  "id.orig_h": "192.168.2.76",
  "id.orig_p": 53380,
  "id.resp_h": "196.216.2.24",
  "id.resp_p": 21,
  "proto": "tcp",
  "service": "ftp",
  "duration": 3.780829906463623,
  "orig_bytes": 184,
  "resp_bytes": 451,
  "conn_state": "SF",
  "local_orig": true,
  "local_resp": false,
  "missed_bytes": 0,
  "history": "ShAdDafF",
  "orig_pkts": 20,
  "orig_ip_bytes": 1232,
  "resp_pkts": 17,
  "resp_ip_bytes": 1343,
  "community_id": "1:lEESxqaSVYqFZvWNb4OccTa9sTs="
}
{
  "_path": "ftp",
  "_system_name": "ds61",
  "_write_ts": "2020-08-16T06:26:04.077276Z",
  "_node": "worker-01",
  "ts": "2020-08-16T06:26:03.553287Z",
  "uid": "CLkXf2CMo11hD8FQ5",
  "id.orig_h": "192.168.2.76",
  "id.orig_p": 53380,
  "id.resp_h": "196.216.2.24",
  "id.resp_p": 21,
  "user": "anonymous",
  "password": "ftp@example.com",
  "command": "EPSV",
  "reply_code": 229,
  "reply_msg": "Entering Extended Passive Mode (|||31746|).",
  "data_channel.passive": true,
  "data_channel.orig_h": "192.168.2.76",
  "data_channel.resp_h": "196.216.2.24",
  "data_channel.resp_p": 31746
}
{
  "_path": "ftp",
  "_system_name": "ds61",
  "_write_ts": "2020-08-16T06:26:05.117287Z",
  "_node": "worker-01",
  "ts": "2020-08-16T06:26:04.597290Z",
  "uid": "CLkXf2CMo11hD8FQ5",
  "id.orig_h": "192.168.2.76",
  "id.orig_p": 53380,
  "id.resp_h": "196.216.2.24",
  "id.resp_p": 21,
  "user": "anonymous",
  "password": "ftp@example.com",
  "command": "RETR",
  "arg": "ftp://196.216.2.24/pub/stats/afrinic/delegated-afrinic-extended-latest.md5",
  "file_size": 74,
  "reply_code": 226,
  "reply_msg": "Transfer complete.",
  "fuid": "FueF95uKPrUuDnMc4"
}

Maybe this will help you too.

*I use the find command in other circumstances.
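For instance, one possibility (an illustrative sketch; the starting directory and name pattern are assumptions to adjust for your own layout) is using find to identify which compressed logs contain a given UID before drilling into any one file:

$ # -l prints only the names of matching files; the path is illustrative
$ find . -name "*.log.gz" -exec zgrep -l "CLkXf2CMo11hD8FQ5" {} +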

Update: Twitter user @captainGeech42 noted that I could use grep -h and omit the sed pipe, e.g.:

$ zgrep -h "CLkXf2CMo11hD8FQ5" 2020-08-16/* | jq .

Thanks for the tip!
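One more trick: once the output is clean JSON, jq can project just the fields you care about. For example, using the field names from the conn log above (note the quoted accessor syntax jq requires for keys containing dots):

$ zgrep -h "CLkXf2CMo11hD8FQ5" 2020-08-16/* | jq '{ts: .ts, path: ._path, orig: ."id.orig_h", resp: ."id.resp_h"}'

That yields one small object per log entry, which is much easier to eyeball than the full records.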

I Did Not Write This Book


Fake Book

Someone published a "book" on Amazon and claimed that I wrote it! I had NOTHING to do with this. I am working with Amazon now to remove it, or at least remove my name. Stay away from this garbage!

Update: Thankfully, within a day or so of this post, the true author of this work removed it from Amazon. It has not returned, at least as far as I have seen.

New Book! The Best of TaoSecurity Blog, Volume 1



I'm very pleased to announce that I've published a new book!

It's The Best of TaoSecurity Blog, Volume 1: Milestones, Philosophy and Strategy, Risk, and Advice. It's available now in the Kindle Store, and if you're a member of Kindle Unlimited, it's currently free. I may also publish a print version. If you're interested, please tell me on Twitter.



The book lists at 332 pages and is over 83,000 words. I've been working on it since last year, but I've used the time in isolation to carry the first volume over the finish line.

The Amazon.com description says:

Since 2003, cybersecurity author Richard Bejtlich has been writing posts on TaoSecurity Blog, a site with 15 million views since 2011. Now, after re-reading over 3,000 posts and approximately one million words, he has selected and republished the very best entries from 17 years of writing.

In the first volume of the TaoSecurity Blog series, Bejtlich addresses milestones, philosophy and strategy, risk, and advice. Bejtlich shares his thoughts on leadership, the intruder's dilemma, managing burnout, controls versus assessments, insider versus outsider threats, security return on investment, threats versus vulnerabilities, controls and compliance, the post that got him hired at a Fortune 5 company as their first director of incident response, and much more.

He has written new commentaries to accompany each post, some of which would qualify as blog entries in their own right.  Read how the security industry, defensive methodologies, and strategies to improve career opportunities have evolved in this new book, written by one of the authors who has seen it all and survived to blog about it.

Finally, if you're interested in subsequent volumes, I have two planned.


I may also have a few other book projects in the pipeline. I'll have more to say on that in the coming weeks.

If you have any questions about the book, let me know. Currently you can see the table of contents via the "Look Inside" function, and there is a sample that lets you download and read some of the book. Enjoy!

If You Can’t Patch Your Email Server, You Should Not Be Running It

CVE-2020-0688 Scan Results, per Rapid7

tl;dr -- it's the title of the post: "If You Can't Patch Your Email Server, You Should Not Be Running It."

I read a disturbing story today with the following news:

"Starting March 24, Rapid7 used its Project Sonar internet-wide survey tool to discover all publicly-facing Exchange servers on the Internet and the numbers are grim.

As they found, 'at least 357,629 (82.5%) of the 433,464 Exchange servers' are still vulnerable to attacks that would exploit the CVE-2020-0688 vulnerability.

To make matters even worse, some of the servers that were tagged by Rapid7 as being safe against attacks might still be vulnerable given that 'the related Microsoft update wasn’t always updating the build number.'

Furthermore, 'there are over 31,000 Exchange 2010 servers that have not been updated since 2012,' as the Rapid7 researchers observed. 'There are nearly 800 Exchange 2010 servers that have never been updated.'

They also found 10,731 Exchange 2007 servers and more than 166,321 Exchange 2010 ones, with the former already running End of Support (EoS) software that hasn't received any security updates since 2017 and the latter reaching EoS in October 2020."

In case you were wondering, threat actors have already been exploiting these flaws for weeks, if not months.

Email is one of the most sensitive and important systems upon which organizations of all shapes and sizes rely, if not the most important. Email servers are, by virtue of their function, inherently exposed to the Internet, meaning they are within range of every targeted or opportunistic intruder, worldwide.

In this particular case, unpatched servers are also vulnerable to any actor who can download and update Metasploit, which is virtually 100% of them.

It is the height of negligence to run such an important system in an unpatched state, when there are much better alternatives -- namely, outsourcing your email to a competent provider, like Google, Microsoft, or several others.

I expect some readers are saying "I would never put my email in the hands of those big companies!" That's fine, and I know several highly competent individuals who run their own email infrastructure. The problem is that they represent the small fraction of individuals and organizations who can do so. Even being extremely generous with the numbers, it appears that less than 20%, and probably less than 15% according to other estimates, can even keep their Exchange servers patched, let alone properly configured.

If you think it's still worth the risk, and your organization isn't able to patch, because you want to avoid megacorp email providers or government access to your email, you've made a critical miscalculation. You've essentially decided that it's more important for you to keep your email out of megacorp or government hands than it is to keep it from targeted or opportunistic intruders across the Internet.

Incidentally, you've made another mistake. Those same governments you fear, at least many of them, will just leverage Metasploit to break into your janky email server anyway.

The bottom line is that unless your organization is willing to commit the resources, attention, and expertise to maintaining a properly configured and patched email system, you should outsource it. Otherwise you are being negligent with not only your organization's information, but the information of anyone with whom you exchange emails.

Seeing Book Shelves on Virtual Calls


I have a confession... for me, the best part of virtual calls, or seeing any reporter or commentator working from home, is being able to check out their book shelves. I never use computer video, because I want to preserve the world's bandwidth. That means I don't share what my book shelves look like when I'm on a company call. Therefore, I thought I'd share my book shelves with the world.

My big categories of books are martial arts, mixed/miscellaneous, cybersecurity and intelligence, and military and Civil War history. I've cataloged about 400 print books and almost 500 digital titles. Over the years I've leaned towards buying Kindle editions of any book that is mostly print, in order to reduce my footprint.

For the last many years, my book shelving has consisted of three units, each with five shelves. Looking at the topic distribution, as of 2020 I have roughly 6 shelves for martial arts, 4 for mixed/miscellaneous, 3 for cybersecurity and intelligence, and 2 for military and Civil War history.

This is interesting to me because I can compare my mix from five years ago, when I did an interview for the now defunct Warcouncil Warbooks project.


In that image from 2015, I can see 2 shelves for martial arts, 4 for mixed/miscellaneous, 7 for cybersecurity and intelligence, and 2 for military and Civil War history.

What happened to all of the cybersecurity and intelligence books? I donated a bunch of them, and the rest I'm selling on Amazon, along with books (in new or like new condition) that my kids decided they didn't want anymore.

I've probably donated hundreds, possibly approaching a thousand, cyber security and IT books over the years. These were mostly books sent by publishers, although some were those that I bought and no longer needed. Some readers from northern Virginia might remember me showing up at ISSA or NoVASec meetings with boxes of books that I would leave on tables. I would say "I don't want to come home with any of these. Please be responsible." And guess what -- everyone was!

If anyone would like to share their book shelves, the best place would be as a reply to my Tweet on this post. I look forward to seeing your book shelves, fellow bibliophiles.

Skill Levels in Digital Security


Two posts in one day? These are certainly unusual times.

I was thinking about words to describe different skill levels in digital security. Rather than invent something, I decided to review terms that have established meaning. Thanks to Google Books I found this article in a 1922 edition of the Archives of Psychology that mentioned four key terms:

  1. The novice is a (person) who has no trade ability whatever, or at least none that could not be paralleled by practically any intelligent (person).
  2. An apprentice has acquired some of the elements of the trade but is not sufficiently skilled to be trusted with any important task.
  3. The journey(person) is qualified to perform almost any work done by members of the trade.
  4. An expert can perform quickly and with superior skill any work done by (people) in the trade.
I believe these four categories can apply to some degree to the needs of the digital security profession.

At GE-CIRT we had three levels -- event analyst, incident analyst, and incident handler. We did not hire novices, so those three roles map in some ways to apprentice, journeyperson, and expert. 

One difference with the classical description applies to how we worked with apprentices. We trusted apprentices, or event analysts, with specific tasks. We thought of this work as important, just as every role on a team is important. It may not have been leading an incident response, but without the work of the event and incident analysts, we may not have discovered many incidents!

Crucially, we encouraged event analysts, and incident analysts for that matter, to always be looking to exceed the parameters of their assigned duties.

However, we stipulated that if a person was working beyond their assigned duties, they had to have their work product reviewed by the next level of analysis. This enabled mentoring among the various groups. It also helped identify people who were candidates for promotion. If a person consistently worked beyond their assigned duties, and eventually reached a near-perfect or perfect ability to do that work, that proved he or she was ready to assume the next level.

This ability to access work beyond assigned duties is one reason I have problems with limiting data by role. I think everyone who works in a CIRT should have access to all of the data, assuming there are no classification, privacy, or active investigation constraints.

One of my laws is the following:

Analysts are good because they have good data. An expert with bad data is helpless. An apprentice with good data has a chance to do good work.

I've said it more eloquently elsewhere but this is the main point. 

For more information on the apprenticeship model, this article might be useful.

When You Should Blog and When You Should Tweet


I saw my like-minded, friend-that-I've-never-met Andrew Thompson Tweet a poll, posted above.

I was about to reply with the following Tweet:

"If I'm struggling to figure out how to capture a thought in just 1 Tweet, that's a sign that a blog post might be appropriate. I only use a thread, and no more than 2, and hardly ever 3 (good Lord), when I know I've got nothing more to say. "1/10," "1/n," etc. are not for me."

Then I realized I had something more to say, namely, other reasons blog posts are better than Tweets. For the briefest moment I considered adding a second Tweet, making, horror of horrors, a THREAD, and then I realized I would be breaking my own guidance.

Here are three reasons to consider blogging over Tweeting.

1. If you find yourself trying to pack your thoughts into a 280 character limit, then you should write a blog post. You might have a good idea, and instead of expressing it properly, you're falling into the trap of letting the medium define the message, aka the PowerPoint trap. I learned this from Edward Tufte: let the message define the medium, not the other way around.

2. Twitter threads lose the elegance and readability of the English language as our ancestors created it, for our benefit. They gave us structures, like sentences, lists, indentation, paragraphs, chapters, and so on. What does Twitter provide? 280 character chunks. Sure, you can apply feeble "1/n" annotations, but you've lost all that structure and readability, and for what?

3. In the event you're writing a Tweet thread that's really worth reading, writing it via Twitter virtually guarantees that it's lost to history. Twitter is an abomination for citation, search, and future reference. For delivering content to current researchers and future generations, the hierarchy is the following, from lowest to highest:

  • "Transient," "bite-sized" social media, e.g., Twitter, Instagram, Facebook, etc. posts
  • Blog posts
  • Whitepapers
  • Academic papers in "electronic" journals
  • Electronic (e.g., Kindle) only formatted books
  • Print books (that may be stand-alone works, or which may contain journal articles)

Print books are the apex communication medium because we have such references going back hundreds of years. Hundreds of years from now, I doubt the first five formats above will be easily accessible, or accessible at all. However, in a library or personal collection somewhere, printed books will endure.

The bottom line is that if you think what you're writing is important enough to start a "1/n" Tweet thread, you've already demonstrated that Twitter is the wrong medium.

The natural follow-on might be: what is Twitter good for? Here are my suggestions:

  • Announcing a link to another, in-depth news resource, like a news article, blog post, whitepaper, etc.
  • Offering a comment on an in-depth news resource, or replying to another person's announcement.
  • Asking a poll question.
  • Asking for help on a topic.
  • Engaging in a short exchange with another user. Long exchanges on hot topics typically devolve into a confusing mess of messages and replies, the delivery of which Twitter has never really managed to figure out.

I understand the seduction of Twitter. I use it every day. However, when it really matters, blogging is preferable, followed by the other media I listed in point 3 above.

Update 0930 ET 27 Mar 2020: I forgot to mention that in extenuating circumstances, like live-Tweeting an emergency, Twitter threads on significant matters are fine because the urgency of the situation and the convenience or plain logistical limitations of the situation make Twitter indispensable. I'm less thrilled by live-Tweeting in conferences, although I'm guilty of it in the past. I'd prefer a thoughtful wrap-up post following the event, which I did a lot before Twitter became popular.

COVID-19 Phishing Tests: WRONG

Malware Jake Tweeted a poll last night which asked the following:

"I have an interesting ethical quandary. Is it ethically okay to use COVID-19 themed phishing emails for assessments and user awareness training right now? Please read the thread before responding and RT for visibility. 1/"

Ultimately he decided:

"My gut feeling is to not use COVID-19 themed emails in assessments/training, but to TELL users to expect them, though I understand even that might discourage consumption of legitimate information, endangering public health. 6/"

I responded by saying this was the right answer.

Thankfully there were many people who agreed, despite the fact that voting itself was skewed towards the "yes" answer.

There were an uncomfortable number of responses to the Tweet that said there's nothing wrong with red teams phishing users with COVID-19 emails. For example:

"Do criminals abide by ethics? Nope. Neither should testing."

"Yes. If it's in scope for the badguys [sic], it's in scope for you."

"Attackers will use it. So I think it is fair game."

Those are the wrong answers. As a few others outlined well in their responses, the fact that a criminal or intruder employs a tactic does not mean that it's appropriate for an offensive security team to use it too.

I could imagine several COVID-19 phishing lures that could target school districts and probably cause high double-digit click-through rates. What's the point of that? For a "community" that supposedly considers fear, uncertainty, and doubt (FUD) to be anathema, why introduce FUD via a phishing test?

I've grown increasingly concerned over the past few years that there's a "cult of the offensive" that justifies its activities with the rationale that "intruders do it, so we should too." This is directly observable in the replies to Jake's Tweet. It's a thin veneer that covers bad behavior, offering a small benefit to high-end, 1% security shops at massive cost to the vast majority of networked global organizations.

This is a selfish, insular mindset that is reinforced by the echo chamber of the so-called "infosec community." This "tribe" is detached from the concerns and ethics of the larger society. It tells itself that what it is doing is right, oblivious or unconcerned with the costs imposed on the organizations they are supposedly "protecting" with their backwards actions.

We need people with feet in both worlds to tell this group that their approach is not welcome in the broader human community, because the costs it imposes vastly outweigh the benefits.

I've written here about ethics before, usually in connection with the only real value I saw in the CISSP -- its code of ethics. Reviewing the "code," as it appears now, shows the following:

"There are only four mandatory canons in the Code. By necessity, such high-level guidance is not intended to be a substitute for the ethical judgment of the professional.

Code of Ethics Preamble:

The safety and welfare of society and the common good, duty to our principals, and to each other, requires that we adhere, and be seen to adhere, to the highest ethical standards of behavior.
Therefore, strict adherence to this Code is a condition of certification.

Code of Ethics Canons:

Protect society, the common good, necessary public trust and confidence, and the infrastructure.
Act honorably, honestly, justly, responsibly, and legally.
Provide diligent and competent service to principals.
Advance and protect the profession."

This is almost worthless. The only actionable item in the "code" is the word "legally," implying that if a CISSP holder was convicted of a crime, he or she could lose their certification. Everything else is subject to interpretation.

Contrast that with the USAFA Code of Conduct:

"We will not lie, steal, or cheat, nor tolerate among us anyone who does."

While it still requires an Honor Board to determine if a cadet has lied, stolen, cheated, or tolerated, there's much less gray in this statement of the Academy's ethics. Is it perfect? No. Is it more actionable than the CISSP's version? Absolutely.

I don't have "solutions" to the ethical bankruptcy manifesting in some people practicing what they consider to be "information security." However, this post is a step towards creating red lines that those who are not already hardened in their ways can observe and integrate.

Perhaps at some point we will have an actionable code of ethics that helps newcomers to the field understand how to properly act for the benefit of the human community.

Seven Security Strategies, Summarized

This is the sort of story that starts as a comment on Twitter, then becomes a blog post when I realize I can't fit all the ideas into one or two Tweets. (You know how much I hate Tweet threads, and how I encourage everyone to capture deep thoughts in blog posts!)

In the interest of capturing the thought, and not in the interest of thinking too deeply or comprehensively (at least right now), I offer seven security strategies, summarized.

When I mention the risk equation, I'm talking about the idea that one can conceptually imagine the risk of some negative event using this "formula": Risk (of something) is the product of some measurements of Vulnerability X Threat X Asset Value, or R = V x T x A.
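A quick hypothetical illustration: if, on a 0-to-1 scale, vulnerability is 0.5, threat is 0.8, and asset value is 1.0, then R = 0.5 x 0.8 x 1.0 = 0.4. Because risk is a product, driving any single factor to zero drives the whole product to zero, which is exactly what strategies 4 through 6 below attempt.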

  1. Denial and/or ignorance. This strategy assumes the risk due to loss is low, because those managing the risk assume that one or more of the elements of the risk equation are zero or almost zero, or they are apathetic to the cost.
  2. Loss acceptance. This strategy may assume the risk due to loss is low, or more likely those managing the risk assume that the cost of risk realization is low. In other words, incidents will occur, but the cost of the incident is acceptable to the organization.
  3. Loss transferal. This strategy may also assume the risk due to loss is low, but in contrast with risk acceptance, the organization believes it can buy an insurance policy which will cover the cost of an incident, and the cost of the policy is cheaper than alternative strategies.
  4. Vulnerability elimination. This strategy focuses on driving the vulnerability element of the risk equation to zero or almost zero, through secure coding, proper configuration, patching, and similar methods.
  5. Threat elimination. This strategy focuses on driving the threat element of the risk equation to zero or almost zero, through deterrence, dissuasion, co-option, bribery, conversion, incarceration, incapacitation, or other methods that change the intent and/or capabilities of threat actors. 
  6. Asset value elimination. This strategy focuses on driving the asset value element of the risk equation to zero or almost zero, through minimizing data or resources that might be valued by adversaries.
  7. Interdiction. This is a hybrid strategy which welcomes contributions from vulnerability elimination, primarily, but is open to assistance from loss transferal, threat elimination, and asset value elimination. Interdiction assumes that prevention eventually fails, but that security teams can detect and respond to incidents post-compromise and pre-breach. In other words, some classes of intruders will indeed compromise an organization, but it is possible to detect and respond to the attack before the adversary completes his mission.
As you might expect, I am most closely associated with the interdiction strategy. 

I believe the denial and/or ignorance and loss acceptance strategies are irresponsible.

I believe the loss transferal strategy continues to gain momentum with the growth of cybersecurity breach insurance policies. 

I believe the vulnerability elimination strategy is important but ultimately, on its own, ineffective and historically shown to be impossible. When used in concert with other strategies, it is absolutely helpful.

I believe the threat elimination strategy is generally beyond the scope of private organizations. As the state retains the monopoly on the use of force, usually only law enforcement, military, and sometimes intelligence agencies can truly eliminate or mitigate threats. (Threats are not vulnerabilities.)

I believe asset value elimination is powerful but has not gained the ground I would like to see. This is my "If you can’t protect it, don’t collect it" message. The limitation here is obviously one's raw computing elements. If one were to magically strip down every computing asset into basic operating systems on hardware or cloud infrastructure, the fact that those assets exist and are networked means that any adversary can abuse them for mining cryptocurrencies, or as infrastructure for intrusions, or for any other uses of raw computing power.

Please notice that none of the strategies listed tools, techniques, tactics, or operations. Those are important but below the level of strategy in the conflict hierarchy. I may have more to say on this in the future. 

Five Thoughts on the Internet Freedom League

In the September/October issue of Foreign Affairs magazine, Richard Clarke and Rob Knake published an article titled "The Internet Freedom League: How to Push Back Against the Authoritarian Assault on the Web," based on their recent book The Fifth Domain. The article proposes the following:

The United States and its allies and partners should stop worrying about the risk of authoritarians splitting the Internet. 

Instead, they should split it themselves, by creating a digital bloc within which data, services, and products can flow freely, excluding countries that do not respect freedom of expression or privacy rights, engage in disruptive activity, or provide safe havens to cybercriminals...

The league would not raise a digital Iron Curtain; at least initially, most Internet traffic would still flow between members and nonmembers, and the league would primarily block companies and organizations that aid and abet cybercrime, rather than entire countries. 

Governments that fundamentally accept the idea of an open, tolerant, and democratic Internet but that struggle to live up to such a vision would have an incentive to improve their enforcement efforts in order to join the league and secure connectivity for their companies and citizens. 

Of course, authoritarian regimes in China, Russia, and elsewhere will probably continue to reject that vision. 

Instead of begging and pleading with such governments to play nice, from now on, the United States and its allies should lay down the law: follow the rules, or get cut off.

My initial reaction to this line of thought was not encouraging. Rather than continue exchanging Twitter messages, Rob and I had a very pleasant phone conversation to help each other understand our points of view. Rob asked me to document my thoughts in a blog post, so this is the result.

Rob explained that the main goal of the IFL is to create leverage to influence those who do not implement an open, tolerant, and democratic Internet (summarized below as OTDI). I agree that leverage is certainly lacking, but I wondered if the IFL would accomplish that goal. My reservations included the following.

1. Many countries that currently reject the OTDI might only be too happy to be cut off from the Western Internet. These countries do not want their citizens accessing the OTDI. Currently dissidents and others seeking news beyond their local borders must often use virtual private networks and other means to access the OTDI. If the IFL went live, those dissidents and others would be cut off, thanks to their government's resistance to OTDI principles.

2. Elites in anti-OTDI countries would still find ways to access the Western Internet, whether for personal, business, political, military, or intelligence reasons. The common person would be the most likely to suffer.

3. Segregating the OTDI would increase the incentives for "network traffic smuggling," whereby anti-OTDI elites would compromise, bribe, or otherwise corrupt Western Internet resources to establish surreptitious methods to access the OTDI. This would increase the intrusion pressure upon organizations with networks in OTDI and anti-OTDI locations.

4. Privacy and Internet freedom groups would likely strongly reject the idea of segregating the Internet in this manner. They are vocal and would apply heavy political pressure, similar to recent net neutrality arguments.

5. It might not be technically possible to segregate the Internet as desired by the IFL. Global business does not neatly differentiate between Western and anti-OTDI networks. Similar to the expected resistance from privacy and freedom groups, I expect global commercial lobbies to strongly reject the IFL on two grounds. First, global businesses cannot disentangle themselves from anti-OTDI locations, and second, Western businesses do not want to lose access to markets in anti-OTDI countries.

Rob and I had a wide-ranging discussion, but these five points in written form provide a platform for further analysis.

What do you think about the IFL? Let Rob and me know on Twitter, via @robknake and @taosecurity.

Happy Birthday TaoSecurity.com


Nineteen years ago this week I registered the domain taosecurity.com:

Creation Date: 2000-07-04T02:20:16Z

This was 2 1/2 years before I started blogging, so I don't have much information from that era. I did create the first taosecurity.com Web site shortly thereafter.

I first started hosting it on space provided by my then-ISP, Road Runner of San Antonio, TX. According to archive.org, it looked like this in February 2002.


That is some fine-looking vintage hand-crafted HTML. Because I lived in Texas I apparently reached for the desert theme with the light tan background. Unfortunately I didn't have the "under construction" gif working for me.

As I got deeper into the security scene, I decided to simplify and adopt a dark look. By this time I had left Texas and was in the DC area, working for Foundstone. According to archive.org, the site looked like this in April 2003.


Notice I've replaced the oh-so-cool picture of me doing American Kenpo in the upper-left-hand corner with the classic Bruce Lee photo from the cover of The Tao of Jeet Kune Do. This version marks the first appearance of my classic TaoSecurity logo.

A little more than two years later, I decided to pursue TaoSecurity as an independent consultant. To launch my services, I painstakingly created more hand-written HTML and graphics to deliver this beauty. According to archive.org, the site looked like this in May 2005.


I mean, can you even believe how gorgeous that site is? Look at the subdued gray TaoSecurity logo, the red-highlighted menu boxes, etc. I should have kept that site forever.

We know that's not what happened, because that wonder of a Web site only lasted about a year. Still to this day not really understanding how to use CSS, I used a free online template by Andreas Viklund to create a new site. According to archive.org, the site appeared in this form in July 2006.


After four versions in four years, my primary Web site stayed that way... for thirteen years. Oh, I modified the content, SSH'ing into the server hosted by my friend Phil Hagen, manually editing the HTML using vi (and careful not to touch the CSS).

Then, I attended AWS re:Inforce the last week of June 2019. I decided that although I had tinkered with Amazon Web Services as early as 2010, and was keeping an eye on it as early as 2008, I had never hosted any meaningful workloads there. A migration of my primary Web site to AWS seemed like a good way to learn a bit more about AWS and an excuse to replace my teenage Web layout with something that rendered a bit better on a mobile device.

After working with Mobirise, AWS S3, AWS CloudFront, AWS Certificate Manager, AWS Route 53, my previous domain name servers, and my domain registrar, I'm happy to say I have a new TaoSecurity.com Web site. The front page looks like this:


The background is an image of MILNET from the late 1990s. I apologize for the giant logo in the upper left. It should be replaced by a resized version later today when the AWS CloudFront cache expires.

Scrolling down provides information on my books, which I figured is what most people who visit the site care about.


For reference, I moved the content (which I haven't updated) about news, press, and research to individual TaoSecurity Blog posts.

It's possible you will not see the site, if your DNS servers have the old IP addresses cached. That should all expire no later than tomorrow afternoon, I imagine.

Let's see if the new site lasts another thirteen years?

Reference: TaoSecurity News

I started speaking publicly about digital security in 2000. I used to provide this information on my Web site, but since I don't keep that page up-to-date anymore, I decided to publish it here.
  • 2017
    • Mr. Bejtlich led a podcast titled Threat Hunting: Past, Present, and Future, in early July 2017. He interviewed four of the original six GE-CIRT incident handlers. The audio is posted on YouTube. Thank you to Sqrrl for making the reunion possible.
    • Mr. Bejtlich's latest book was inducted into the Cybersecurity Canon.
    • Mr. Bejtlich is doing limited security consulting. See this blog post for details.
  • 2016
    • Mr. Bejtlich organized and hosted the Management track (now "Executive track") at the 7th annual Mandiant MIRCon (now "FireEye Cyber Defense Summit") on 29-30 November 2016.
    • Mr. Bejtlich delivered the keynote to the 2016 Air Force Senior Leaders Orientation Conference at Joint Base Andrews on 29 July 2016.
    • Mr. Bejtlich delivered the keynote to the FireEye Cyber Defense Live Tokyo event in Tokyo on 12 July 2016.
    • Mr. Bejtlich delivered the keynote to the New Zealand Cyber Security Summit in Auckland on 6 May 2016.
    • Mr. Bejtlich delivered the keynote to the Lexpo Summit in Amsterdam on 21 April 2016. Video posted here.
    • Mr. Bejtlich discussed cyber security campaigns at the 2016 War Studies Cumberland Lodge Conference near London on 30 March 2016.
    • Mr. Bejtlich offered a guest lecture to the Wilson Center Congressional Cybersecurity Lab on 5 February 2016.
    • Mr. Bejtlich delivered the keynote to the SANS Cyber Threat Intelligence Summit on 4 February 2016. Slides and video available.
  • 2015
  • 2014
  • 2013
    • Mr. Bejtlich taught Network Security Monitoring 101 at Black Hat Seattle 2013: 9-10 December 2013 / Seattle, WA.
    • Mr. Bejtlich offered a guest lecture on digital security at George Washington University on 23 November 2013.
    • Mr. Bejtlich spoke about digital security at the Mid-Atlantic CIO Council on 21 November 2013.
    • Mr. Bejtlich was a panelist at the Brookings Institute on 19 November 2013.
    • Mr. Bejtlich offered several guest lectures on digital security at the Massachusetts Institute of Technology on 18 November 2013.
    • Mr. Bejtlich was a panelist at the Atlantic Council on 15 November 2013.
    • Mr. Bejtlich organized and hosted the Management track at the 4th annual Mandiant MIRCon on 5-6 November 2013.
    • Mr. Bejtlich was a panelist at the Free Thinking Film Festival on 2 November 2013.
    • Mr. Bejtlich offered the keynote at the Cyber Ark user conference on 30 October 2013.
    • Mr. Bejtlich was a panelist at the Indiana University Center for Applied Cybersecurity Research on 21 October 2013.
    • Mr. Bejtlich spoke at the national ISSA conference on 10 October 2013.
    • Mr. Bejtlich was a panelist at the Politico Cyber 7 event on 8 October 2013.
    • Mr. Bejtlich offered the keynote at the BSides August 2013 conference on 14 September 2013.
    • Mr. Bejtlich taught Network Security Monitoring 101 at Black Hat USA 2013: 27-28 and 29-30 July 2013 / Las Vegas, NV.
    • Mr. Bejtlich was a panelist at the Chatham House Cyber Security Conference in London, England on 10 June 2013.
    • Mr. Bejtlich appeared in the documentary Hacked, first available 7 June 2013.
    • Mr. Bejtlich was interviewed at the Center for National Policy, with video archived, on 15 May 2013.
    • Mr. Bejtlich delivered a keynote at the IT Web Security Summit in Johannesburg, South Africa on 8 May 2013.
    • Mr. Bejtlich was a panelist at The George Washington University and US News & World Report Cybersecurity Conference on 26 April 2013.
    • Mr. Bejtlich testified to the House Committee on Foreign Affairs on 21 March 2013.
    • Mr. Bejtlich testified to the House Committee on Homeland Security on 20 March 2013.
    • Mr. Bejtlich testified to the Senate Armed Services Committee on 19 March 2013.
    • Mr. Bejtlich shared his thoughts on the APT1 report with the Federalist Society on 12 March 2013. The conference call was recorded as Cybersecurity And the Chinese Hacker Problem - Podcast.
  • 2012
    • Mr. Bejtlich taught TCP/IP Weapons School 3.0 at Black Hat Abu Dhabi 2012: 3-4 Dec / Abu Dhabi, UAE.
    • Mr. Bejtlich spoke at a Mandiant breakfast event in Calgary, AB on 28 Nov 2012.
    • Mr. Bejtlich spoke at AppSecUSA in Austin, TX on 26 Oct 2012. The talk Incident Response: Security After Compromise is posted as a video (42 min).
    • Mr. Bejtlich organized and hosted the Management track at the 3rd annual Mandiant MIRCon on 17-18 October 2012.
    • Mr. Bejtlich spoke at a SANS event in Baltimore, MD on 5 Oct 2012.
    • Mr. Bejtlich spoke at a Mandiant breakfast event in Dallas, TX on 13 Sep 2012.
    • Mr. Bejtlich taught TCP/IP Weapons School 3.0 at Black Hat USA 2012: 21-22 and 23-24 Jul / Las Vegas, NV.
    • Mr. Bejtlich taught a compressed version of TCP/IP Weapons School 3.0 at a U.S. Cyber Challenge Summer Camp in Ballston, VA on 28 Jun 2012.
    • Mr. Bejtlich participated on a panel titled Hackers vs Executives at the Forrester conference in Las Vegas on 25 May 2012.
    • Mr. Bejtlich spoke at the Cyber Security for Executive Leadership: What Every CEO Should Know event in Raleigh, NC on 11 May 2012.
    • Mr. Bejtlich participated on a panel titled SEC Cyber Security Guidelines: A New Basis for D&O Exposure? at the 8th Annual National Directors & Officers Insurance ExecuSummit in Uncasville, CT on 8 May 2012.
    • Mr. Bejtlich delivered the keynote to the 2012 National Cyber Crime Conference in Norwood, MA on 30 Apr 2012.
    • Mr. Bejtlich spoke at the FOSE conference on a panel discussing new attacks on 4 Apr 2012.
    • Mr. Bejtlich testified to the US-China Economic and Security Review Commission on 26 Mar 2012.
    • Mr. Bejtlich spoke at the Air Force Association CyberFutures conference (audio mp3) on 23 Mar 2012.
    • Mr. Bejtlich delivered the keynote to the IANS Research Mid-Atlantic conference on 21 Mar 2012.
    • Mr. Bejtlich spoke at a Mandiant breakfast event with Secretary Michael Chertoff in New York, NY on 15 Mar 2012.
    • Mr. Bejtlich spoke to the Augusta, GA ISSA chapter on 8 Mar 2012.
    • Mr. Bejtlich participated on a panel about digital threats at the RSA Executive Security Action Forum on 27 Feb 2012.
    • Mr. Bejtlich spoke at a Mandiant breakfast event with Gen (ret.) Michael Hayden in Washington, DC on 22 Feb 2012.
    • Mr. Bejtlich spoke at the ShmooCon Epilogue conference on 30 Jan 2012.
    • Mr. Bejtlich spoke at a Mandiant breakfast event with Secretary Michael Chertoff in Houston, TX on 12 Jan 2012.
  • 2011
  • 2010
  • 2009
  • 2008
  • 2007
    • Mr. Bejtlich offered a guest lecture on digital security at George Mason University on 29 November 2007.
    • Network Security Operations: 27-29 August 2007 / public 3 day class / Chicago, IL
    • Mr. Bejtlich spoke to the Chicago Electronic Crimes Task Force and the Chicago Snort Users Group on 30 and 29 August 2007, respectively.
    • Mr. Bejtlich taught Network Security Operations on 21-23 August 2007 / Cincinnati, OH
    • Mr. Bejtlich taught TCP/IP Weapons School (layers 4-7) at USENIX Security 2007: 6-7 August 2007 / Boston, MA.
    • Mr. Bejtlich taught TCP/IP Weapons School at Black Hat USA 2007: 28-29 and 30-31 July 2007 / Caesars Palace, Las Vegas, NV.
    • USENIX 2007: 20-22 June 2007 / Network Security Monitoring and TCP/IP Weapons School (Layers 2-3) tutorials / Santa Clara, CA
    • Mr. Bejtlich briefed GFIRST 2007: 25-26 June 2007 / Network Incident Response and Forensics (two half-day tutorials) and Traditional IDS Should Be Dead conference presentation / Orlando, FL
    • Mr. Bejtlich taught TCP/IP Weapons School (Layers 2-3) and briefed Open Source Network Forensics at Techno Security 2007: 5-7 June 2007 / Myrtle Beach, SC.
    • Mr. Bejtlich briefed Open Source Network Forensics at ISS World Spring 2007: 31 May 2007 / Washington, DC
    • Mr. Bejtlich briefed Network Incident Response and Forensics at AusCERT 2007: 23-24 May 2007 / Gold Coast, Australia.
    • Mr. Bejtlich taught Network Security Monitoring: 25 May 2007 / Sydney, Australia.
    • Mr. Bejtlich briefed at CONFIDENCE 2007: 13 May 2007 / Krakow, Poland.
    • Mr. Bejtlich briefed at ShmooCon: 24 March 2007 / Washington, DC; video here.
  • 2006
  • 2005
    • Mr. Bejtlich presented three full-day tutorials at USENIX LISA 2005 in San Diego, CA, from 6-8 December 2005. He taught network security monitoring, incident response, and forensics.
    • Mr. Bejtlich spoke at the Cisco Fall 2005 System Engineering Security Virtual Team Meeting in San Jose, CA on 10 October 2005.
    • Mr. Bejtlich spoke at the Net Optics Think Tank at the Hilton Santa Clara in Santa Clara, CA on 21 September 2005. He discussed network forensics, with a preview of material in his next book Real Digital Forensics.
    • Mr. Bejtlich taught network security monitoring to security analysts from the Pentagon with Special Ops Security on 23 and 24 August 2005 in Rosslyn, VA.
    • Mr. Bejtlich spoke at the InfraGard 2005 National Conference on 9 August 05 in Washington, DC on the basics of network forensics.
    • Mr. Bejtlich taught a one day course on network incident response, with his forensics book as the background material, at USENIX Security 05 on 1 August 2005 in Baltimore, MD.
    • Mr. Bejtlich taught a one day course on network security monitoring, with his NSM book as the background material, at USENIX Security 05 on 31 July 2005 in Baltimore, MD.
    • Mr. Bejtlich offered a guest lecture on digital security at George Washington University on 23 June 2005.
    • Mr. Bejtlich spoke at the Techno Security 2005 conference on 13 June 2005 in Myrtle Beach, SC. He was invited by Tenable Security to appear at their evening social event.
    • Mr. Bejtlich spoke at the Net Optics Think Tank on 18 May 2005 in Sunnyvale, CA.
    • Mr. Bejtlich presented Keeping FreeBSD Up-To-Date and More Tools for Network Security Monitoring at BSDCan 2005 on 13 May 2005.
    • Mr. Bejtlich spoke to the Pentagon Security Forum on 19 April 2005.
    • Mr. Bejtlich taught a one day course on network security monitoring, with his book as the background material, at USENIX 05 on 14 April 2005 in Anaheim, CA.
    • Mr. Bejtlich spoke to the Government Forum of Incident Response and Security Teams (GFIRST) on 5 April 2005 in Orlando, FL.
    • Mr. Bejtlich spoke to the Information Systems Security Association of Northern Virginia (ISSA-NoVA) on 17 February 2005 in Reston, VA.
    • Mr. Bejtlich spoke at the 2005 DoD Cybercrime Conference on 13 January 2005 in Palm Harbor, FL.
  • 2004
    • Mr. Bejtlich spoke to the DC Systems Administrators Guild (DC-SAGE) on 21 October 2004 about Sguil.
    • Mr. Bejtlich spoke to the DC Linux Users Group on 15 September 2004 about Sguil.
    • Mr. Bejtlich spoke to the High Technology Crime Investigation Association International Conference and Expo 2004 on 13 September 2004 in Washington, DC about Sguil.
    • Mr. Bejtlich taught a one day course on network security monitoring, with his first book as the background material, at USENIX Security 04 on 9 August 2004 in San Diego.
    • Mr. Bejtlich spoke to the DC Snort User's Group on 24 Jun 2004 about Sguil.
    • Mr. Bejtlich presented Network Security Monitoring with Sguil (.pdf) at BSDCan on 14 May 2004.
    • Mr. Bejtlich spoke to the SANS Local Mentor program in northern Virginia for two hours on 11 May 2004 about NSM using Sguil. Joe Bowling invited him.
    • Mr. Bejtlich gave a lightning talk demo of Sguil at CanSecWest 04 on 22 April 2004.
  • 2003
    • Mr. Bejtlich spoke to ISSA-CT about network security monitoring on 9 December 2003.
    • Mr. Bejtlich taught Foundstone's Ultimate Hacking Expert class at Black Hat Federal 2003 in Tyson's Corner, 29-30 September 2003.
    • Mr. Bejtlich recorded a second webcast on network security monitoring for SearchSecurity.com. He posted the slides here.
    • Mr. Bejtlich taught the first day of Foundstone's Ultimate Hacking Expert class at Black Hat USA 2003 Training in Las Vegas on 28 July 2003.
    • Mr. Bejtlich spoke on 21 July 2003 in Washington, DC at the SANS NIAL conference.
    • Mr. Bejtlich discussed digital security in Toronto on 13 March 2003 and in Washington, DC on Tuesday, 25 March 2003 at the request of Watchguard.
    • Mr. Bejtlich taught days four, five, and six of the SANS intrusion detection track in San Antonio, Texas from 28-30 January 2003.
  • 2002
    • Mr. Bejtlich recorded a webcast on network security monitoring (PDF slides) with his friend Bamm Visscher for SearchSecurity.com and answered questions submitted by listeners. A SearchSecurity editor commented on the talk as well.
    • Mr. Bejtlich helped teach Foundstone's Ultimate Hacking class at Black Hat USA 2002 Training in Las Vegas on 29-30 July 2002.
    • Mr. Bejtlich taught days one, two, and three of the SANS intrusion detection track in San Antonio, Texas from 15-17 July 2002.
    • Mr. Bejtlich taught day four of the SANS intrusion detection track in Toronto, Ontario on 16 May 2002.
    • On 11 April 2002 Mr. Bejtlich briefed the South Texas ISSA chapter on Snort.
    • Mr. Bejtlich helped teach day four of the SANS intrusion detection track in San Antonio, Texas on 14 March 2002 after Marty Roesch was unable to teach the class.
  • 2000-2001
    • On 24-25 October 2001 Mr. Bejtlich spoke to the Houston InfraGard chapter at their 2001 conference.
    • In August and September 2001 Mr. Bejtlich briefed analysts at the AFCERT on Interpreting Network Traffic.
    • On 19 October 2000 Mr. Bejtlich was invited back to speak at the SANS Network Security 2000 Technical Conference.
    • During 14-16 August 2000 Mr. Bejtlich participated in the Cyber Summit 2000 sponsored by the Air Intelligence Agency. Mr. Bejtlich was a captain in the AFCERT. You will find him in the middle of this picture.
    • In June 2000 Mr. Bejtlich signed a letter protesting the Council of Europe draft treaty on Crime in Cyberspace.
    • In June 2000 Mr. Bejtlich briefed FIRST on third party effects. This predated CAIDA's 2001 USENIX "backscatter" paper.
    • On 25 March 2000 Mr. Bejtlich presented Interpreting Network Traffic: A Network Intrusion Detector's Look at Suspicious Events at the SANS 2000 Technical Conference.

Reference: TaoSecurity Research

I started publishing my thoughts and findings on digital security in 1999. I used to provide this information on my Web site, but since I don't keep that page up-to-date anymore, I decided to publish it here.

2015 and later:

  • Please visit Academia.edu for Mr. Bejtlich's most recent research.
2014 and earlier:

Reference: TaoSecurity Press

I started appearing in media reports in 2000. I used to provide this information on my Web site, but since I don't keep that page up-to-date anymore, I decided to publish it here.

Know Your Limitations

At the end of the 1973 Clint Eastwood movie Magnum Force, after Dirty Harry watches his corrupt police captain explode in a car, he says "a man's got to know his limitations."

I thought of this quote today as the debate rages about compromising municipalities and other information technology-constrained yet personal information-rich organizations.

Several years ago I wrote If You Can't Protect It, Don't Collect It. I argued that if you are unable to defend personal information, then you should not gather and store it.

In a similar spirit, here I argue that if you are unable to securely operate information technology that matters, then you should not be supporting that IT.

You should outsource it to a trustworthy cloud provider, and concentrate on managing secure access to those services.

If you cannot outsource it, and you remain incapable of defending it natively, then you should integrate a capable managed security provider.

It's clear to me that a large portion of those running PI-processing IT are simply not capable of doing so in a secure manner, and they do not bear the full cost of PI breaches.

They have too many assets, with too many vulnerabilities, and are targeted by too many threat actors.

These organizations lack sufficient people, processes, and technologies to mitigate the risk.

They have successes, but they are generally due to the heroics of individual IT and security professionals, who often feel out-gunned by their adversaries.

If you can't patch a two-year-old vulnerability prior to exploitation, or detect an intrusion and respond to the adversary before he completes his mission, then you are demonstrating that you need to change your entire approach to information technology.

The security industry seems to think that throwing more people at the problem is the answer, yet year after year we read about several million job openings that remain unfilled. This is a sign that we need to change the way we are doing business. The fact is that those organizations that cannot defend themselves need to recognize their limitations and change their game.

I recognize that outsourcing is not a panacea. Note that I emphasized "IT" in my recommendation. I do not see how one could outsource the critical technology running on-premise in the industrial control system (ICS) world, for example. Those operations may need to rely more on outsourced security providers, if they cannot sufficiently detect and respond to intrusions using in-house capabilities.

Remember that the vast majority of organizations do not exist to run IT. They run IT to support their lines of business. Many older organizations have indeed been migrating legacy applications to the cloud, and most new organizations are cloud-native. These are hopeful signs, as the older organizations could potentially "age out" over time.

This puts a burden on the cloud providers, who fall into the "managed service provider" category that I wrote about in my recent Corelight blog. However, the more trustworthy providers have the people, processes, and technology in place to handle their responsibilities in a more secure way than many organizations that are struggling with on-premise legacy IT.

Everyone's got to know their limitations.

Dissecting Weird Packets

I was investigating traffic in my home lab yesterday, and noticed that about 1% of the traffic was weird. Before I describe the weird, let me show you a normal frame for comparison's sake.


This is a normal frame with Ethernet II encapsulation. It begins with 6 bytes of the destination MAC address, 6 bytes of the source MAC address, and 2 bytes of an Ethertype, which in this case is 0x0800, indicating an IP packet follows the Ethernet header. There is no TCP payload as this is an ACK segment.
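A few lines of Python make that layout concrete. Here is a minimal sketch using the first 14 bytes of this frame, taken from the Tshark hex dump below:

# Dissect the 14-byte Ethernet II header of frame 4238.
header = bytes.fromhex("fcecda49e01038baf8127dbb0800")

dst = header[0:6]                                 # destination MAC address
src = header[6:12]                                # source MAC address
ethertype = int.from_bytes(header[12:14], "big")  # 2-byte Ethertype

print("dst :", dst.hex(":"))       # fc:ec:da:49:e0:10 (Ubiquiti)
print("src :", src.hex(":"))       # 38:ba:f8:12:7d:bb (Intel)
print(f"type: 0x{ethertype:04x}")  # 0x0800, so an IPv4 packet follows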

You can also see this in Tshark.

$ tshark -Vx -r frame4238.pcap

Frame 1: 66 bytes on wire (528 bits), 66 bytes captured (528 bits)
    Encapsulation type: Ethernet (1)
    Arrival Time: May  7, 2019 18:19:10.071831000 UTC
    [Time shift for this packet: 0.000000000 seconds]
    Epoch Time: 1557253150.071831000 seconds
    [Time delta from previous captured frame: 0.000000000 seconds]
    [Time delta from previous displayed frame: 0.000000000 seconds]
    [Time since reference or first frame: 0.000000000 seconds]
    Frame Number: 1
    Frame Length: 66 bytes (528 bits)
    Capture Length: 66 bytes (528 bits)
    [Frame is marked: False]
    [Frame is ignored: False]
    [Protocols in frame: eth:ethertype:ip:tcp]
Ethernet II, Src: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb), Dst: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
    Destination: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
        Address: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)
    Source: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb)
        Address: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb)
        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)
    Type: IPv4 (0x0800)
Internet Protocol Version 4, Src: 192.168.4.96, Dst: 52.21.18.219
    0100 .... = Version: 4
    .... 0101 = Header Length: 20 bytes (5)
    Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT)
        0000 00.. = Differentiated Services Codepoint: Default (0)
        .... ..00 = Explicit Congestion Notification: Not ECN-Capable Transport (0)
    Total Length: 52
    Identification: 0xd98c (55692)
    Flags: 0x4000, Don't fragment
        0... .... .... .... = Reserved bit: Not set
        .1.. .... .... .... = Don't fragment: Set
        ..0. .... .... .... = More fragments: Not set
        ...0 0000 0000 0000 = Fragment offset: 0
    Time to live: 64
    Protocol: TCP (6)
    Header checksum: 0x553f [validation disabled]
    [Header checksum status: Unverified]
    Source: 192.168.4.96
    Destination: 52.21.18.219
Transmission Control Protocol, Src Port: 38828, Dst Port: 443, Seq: 1, Ack: 1, Len: 0
    Source Port: 38828
    Destination Port: 443
    [Stream index: 0]
    [TCP Segment Len: 0]
    Sequence number: 1    (relative sequence number)
    [Next sequence number: 1    (relative sequence number)]
    Acknowledgment number: 1    (relative ack number)
    1000 .... = Header Length: 32 bytes (8)
    Flags: 0x010 (ACK)
        000. .... .... = Reserved: Not set
        ...0 .... .... = Nonce: Not set
        .... 0... .... = Congestion Window Reduced (CWR): Not set
        .... .0.. .... = ECN-Echo: Not set
        .... ..0. .... = Urgent: Not set
        .... ...1 .... = Acknowledgment: Set
        .... .... 0... = Push: Not set
        .... .... .0.. = Reset: Not set
        .... .... ..0. = Syn: Not set
        .... .... ...0 = Fin: Not set
        [TCP Flags: ·······A····]
    Window size value: 296
    [Calculated window size: 296]
    [Window size scaling factor: -1 (unknown)]
    Checksum: 0x08b0 [unverified]
    [Checksum Status: Unverified]
    Urgent pointer: 0
    Options: (12 bytes), No-Operation (NOP), No-Operation (NOP), Timestamps
        TCP Option - No-Operation (NOP)
            Kind: No-Operation (1)
        TCP Option - No-Operation (NOP)
            Kind: No-Operation (1)
        TCP Option - Timestamps: TSval 26210782, TSecr 2652693036
            Kind: Time Stamp Option (8)
            Length: 10
            Timestamp value: 26210782
            Timestamp echo reply: 2652693036
    [Timestamps]
        [Time since first frame in this TCP stream: 0.000000000 seconds]
        [Time since previous frame in this TCP stream: 0.000000000 seconds]

0000  fc ec da 49 e0 10 38 ba f8 12 7d bb 08 00 45 00   ...I..8...}...E.
0010  00 34 d9 8c 40 00 40 06 55 3f c0 a8 04 60 34 15   .4..@.@.U?...`4.
0020  12 db 97 ac 01 bb e3 42 2a 57 83 49 c2 ea 80 10   .......B*W.I....
0030  01 28 08 b0 00 00 01 01 08 0a 01 8f f1 de 9e 1c   .(..............
0040  e2 2c   

You can see Wireshark understands what it is seeing. It decodes the IP header and the TCP header.

So far so good. Here is an example of the weird traffic I was seeing.



Here is what Tshark thinks of it.

$ tshark -Vx -r frame4241.pcap
Frame 1: 66 bytes on wire (528 bits), 66 bytes captured (528 bits)
    Encapsulation type: Ethernet (1)
    Arrival Time: May  7, 2019 18:19:10.073296000 UTC
    [Time shift for this packet: 0.000000000 seconds]
    Epoch Time: 1557253150.073296000 seconds
    [Time delta from previous captured frame: 0.000000000 seconds]
    [Time delta from previous displayed frame: 0.000000000 seconds]
    [Time since reference or first frame: 0.000000000 seconds]
    Frame Number: 1
    Frame Length: 66 bytes (528 bits)
    Capture Length: 66 bytes (528 bits)
    [Frame is marked: False]
    [Frame is ignored: False]
    [Protocols in frame: eth:llc:data]
IEEE 802.3 Ethernet
    Destination: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
        Address: Ubiquiti_49:e0:10 (fc:ec:da:49:e0:10)
        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)
    Source: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb)
        Address: IntelCor_12:7d:bb (38:ba:f8:12:7d:bb)
        .... ..0. .... .... .... .... = LG bit: Globally unique address (factory default)
        .... ...0 .... .... .... .... = IG bit: Individual address (unicast)
    Length: 56
        [Expert Info (Error/Malformed): Length field value goes past the end of the payload]
            [Length field value goes past the end of the payload]
            [Severity level: Error]
            [Group: Malformed]
Logical-Link Control
    DSAP: Unknown (0x45)
        0100 010. = SAP: Unknown
        .... ...1 = IG Bit: Group
    SSAP: LLC Sub-Layer Management (0x02)
        0000 001. = SAP: LLC Sub-Layer Management
        .... ...0 = CR Bit: Command
    Control field: U, func=Unknown (0x0B)
        000. 10.. = Command: Unknown (0x02)
        .... ..11 = Frame type: Unnumbered frame (0x3)
Data (49 bytes)
    Data: 84d98d86b5400649eec0a80460341512db97ac0d0be3422a...
    [Length: 49]

0000  fc ec da 49 e0 10 38 ba f8 12 7d bb 00 38 45 02   ...I..8...}..8E.
0010  0b 84 d9 8d 86 b5 40 06 49 ee c0 a8 04 60 34 15   ......@.I....`4.
0020  12 db 97 ac 0d 0b e3 42 2a 57 83 49 c2 ea c8 ec   .......B*W.I....
0030  01 28 17 6f 00 00 01 01 08 0a 01 8f f1 de ed 7f   .(.o............
0040  a5 4a                                             .J

What's the problem? This frame begins with 6 bytes of the destination MAC address and 6 bytes of the source MAC address, as we saw before. However, the next two bytes are 0x0038, not the Ethertype of 0x0800 we saw earlier. 0x0038 is decimal 56, which would seem to indicate a frame length. But the frame only carries 52 bytes after the 14-byte header (66 bytes total), so a length field of 56 runs past the end of the frame, which is exactly why Wireshark reports that the length field "goes past the end of the payload."
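Wireshark's choice between the two interpretations follows the standard convention: a value of 1536 (0x0600) or greater in those two bytes is an Ethertype, while a value of 1500 (0x05dc) or less is an IEEE 802.3 length field. A minimal Python sketch of that logic:

def classify(frame: bytes) -> str:
    """Interpret the two bytes at offset 12 of an Ethernet frame."""
    field = int.from_bytes(frame[12:14], "big")
    if field >= 0x0600:     # 1536 and up: an Ethertype (Ethernet II)
        return f"Ethernet II, Ethertype 0x{field:04x}"
    if field <= 0x05dc:     # 1500 and below: an IEEE 802.3 length field
        return f"IEEE 802.3, length {field}"
    return "undefined (1501-1535)"

# Frame 4238 carries 0x0800: "Ethernet II, Ethertype 0x0800"
# Frame 4241 carries 0x0038: "IEEE 802.3, length 56"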

Wireshark decides to treat this frame as not being Ethernet II, but instead as IEEE 802.3 Ethernet. I had to refer to appendix A of my first book to see what this meant.

For comparison, here is the frame format for Ethernet II (page 664):

This was what we saw with frame 4238 earlier -- Dst MAC, Src MAC, Ethertype, then data.

Here is the frame format for IEEE 802.3 Ethernet.


This is much more complicated: Dst MAC, Src MAC, length, and then DSAP, SSAP, Control, and data.

It turns out that this format doesn't seem to fit what is happening in frame 4241, either. While the length field appears to be in the ballpark, Wireshark's assumption that the next bytes are DSAP, SSAP, Control, and data doesn't fit. The clue for me was seeing that 0x45 followed the presumed length field. I recognized 0x45 as the beginning of an IPv4 header, with 4 meaning IPv4 and 5 meaning five 4-byte words (20 bytes) in the IP header.
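Splitting that byte into its two nibbles shows the logic:

b = 0x45
version = b >> 4    # high nibble: 4, meaning IPv4
ihl = b & 0x0f      # low nibble: 5 words of 4 bytes, a 20-byte header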

If we take a manual byte-by-byte comparative approach we can better understand what may be happening with these two frames. (I broke the 0x45 byte into two "nibbles" in one case.)

Note how much of each frame is exactly the same.
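You can reproduce the comparison with a few lines of Python, using the byte values from the two Tshark hex dumps above; the script flags each offset where the frames differ:

frame4238 = bytes.fromhex(
    "fcecda49e01038baf8127dbb08004500"
    "0034d98c40004006553fc0a804603415"
    "12db97ac01bbe3422a578349c2ea8010"
    "012808b000000101080a018ff1de9e1c"
    "e22c")

frame4241 = bytes.fromhex(
    "fcecda49e01038baf8127dbb00384502"
    "0b84d98d86b5400649eec0a804603415"
    "12db97ac0d0be3422a578349c2eac8ec"
    "0128176f00000101080a018ff1deed7f"
    "a54a")

# Print each byte pair, flagging the offsets where the frames differ.
for offset, (a, b) in enumerate(zip(frame4238, frame4241)):
    flag = "" if a == b else "  <-- differs"
    print(f"{offset:3d}: {a:02x}  {b:02x}{flag}")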


This analysis shows that these two frames are very similar, especially in places where I would not expect them to be similar. This caused me to hypothesize that frame 4241 was a corrupted version of frame 4238.

I can believe that the frames would share MAC addresses, IP addresses, and certain IP and TCP defaults. However, it is unusual for them to have the same high source port (38828) but different destination ports (443 and 3339). Very telling is the fact that they have the same TCP sequence and acknowledgement numbers. They also carry the same TCP timestamp value (TSval).

Notice one field that is not identical -- the IP ID value. Frame 4238 has 0xd98c and frame 4241 has 0xd98d. The perfectly incremented IP ID prompted me to believe that frame 4241 is a corrupted retransmission, at the IP layer, of the same TCP segment.
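One way to test the corruption hypothesis is to run the Internet checksum over each frame's 20 IP header bytes: an intact IPv4 header, with its checksum field included, folds to 0xffff under the ones' complement algorithm. A sketch using the header bytes from the hex dumps above; for what it's worth, the normal header verifies while the header embedded in frame 4241 does not, consistent with corruption somewhere along the capture path:

def ip_header_ok(header: bytes) -> bool:
    """Ones' complement sum over the header, checksum field included,
    folds to 0xffff when the header is intact (RFC 1071)."""
    total = sum(int.from_bytes(header[i:i+2], "big")
                for i in range(0, len(header), 2))
    while total > 0xffff:                  # fold carries back into 16 bits
        total = (total & 0xffff) + (total >> 16)
    return total == 0xffff

normal = bytes.fromhex("45000034d98c40004006553fc0a80460341512db")  # frame 4238
weird  = bytes.fromhex("45020b84d98d86b5400649eec0a80460341512db")  # frame 4241

print(ip_header_ok(normal))   # True
print(ip_header_ok(weird))    # False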

However, I really don't know what to think. These frames were captured in an Ubuntu 16.04 VirtualBox VM by netsniff-ng. Is this a problem with netsniff-ng, or the guest Linux, or VirtualBox, or the Linux host operating system running VirtualBox?

I'd like to thank the folks at ask.wireshark.org for their assistance with my attempts to decode this frame (and others) as 802.3 raw Ethernet. What's that? It's basically a format that Novell used with IPX, where the frame is Dst MAC, Src MAC, length, data.

I wanted to see if I could tell Wireshark to decode the odd frames as 802.3 raw Ethernet, rather than IEEE 802.3 Ethernet with LLC headers.

Sake Blok helpfully suggested I change the pcap's link layer type to User0, and then tell Wireshark how to interpret the frames. I did it this way, per his direction:

$ editcap -T user0 excerpt.pcap excerpt-user0.pcap

Next I opened the trace in Wireshark and saw frame 4241 (here listed as frame 3) as shown below:


DLT 147 corresponds to the link layer type for User0. Wireshark doesn't know how to handle it. We fix that by right-clicking on the yellow field and selecting Protocol Preferences -> Open DLT User preferences:

Next I created an entry for User 0 (DLT=147) with Payload protocol "ip" and Header size "14" as shown:

After clicking OK, I returned to Wireshark. Here is how frame 4241 (again listed here as frame 3) appeared:


You can see Wireshark is now making sense of the IP header, but it doesn't know how to handle the TCP header which follows. I tried different values and options to see if I could get Wireshark to understand the TCP header too, but this went far enough for my purposes.

The bottom line is that I believe there is some sort of packet capture problem, either with the software used or with the traffic presented to the software by the bridged NIC created by VirtualBox. As this is a lab environment and the weird traffic is 1% of the overall capture, I am not worried about the results.

I am fairly sure that the weird traffic is not on the wire. I tried capturing on the host OS sniffing NIC and did not see anything resembling this traffic.

Have you seen anything like this? Let me know in a comment here or on Twitter.

PS: I found the frame.number==X Wireshark display filter helpful, along with the frame.len>Y display filter, when researching this activity.

Troubleshooting NSM Virtualization Problems with Linux and VirtualBox

I spent a chunk of the day troubleshooting a network security monitoring (NSM) problem. I thought I would share the problem and my investigation in the hopes that it might help others. The specifics are probably less important than the general approach.

It began with ja3. You may know ja3 as a set of Zeek scripts developed by the Salesforce engineering team to profile client and server TLS parameters.

I was reviewing Zeek logs captured by my Corelight appliance and by one of my lab sensors running Security Onion. I had coverage of the same endpoint in both sensors.

I noticed that the SO Zeek logs did not have ja3 hashes in the ssl.log entries. Both sensors did have ja3s hashes. My first thought was that SO was somehow misconfigured to not record ja3 hashes. I quickly dismissed that, because it made no sense. Besides, verifying that intuition would have required me to start troubleshooting near the top of the software stack.

I decided to start at the bottom, or close to the bottom. I had a sinking suspicion that, for some reason, Zeek was only seeing traffic sent from remote systems, and not traffic originating from my network. That would account for the creation of ja3s hashes, for traffic sent by remote systems, but not ja3 hashes, as Zeek was not seeing traffic sent by local clients.

I was running SO in VirtualBox 6.0.4 on Ubuntu 18.04. I started sniffing TCP network traffic on the SO monitoring interface using Tcpdump. As I feared, it didn't look right. I ran a new capture with filters for ICMP and a remote IP address. On another system I tried pinging the remote IP address. Sure enough, I only saw ICMP echo replies, and no ICMP echoes. Oddly, I also saw doubles and triples of some of the ICMP echo replies. That worried me, because unpredictable behavior like that could indicate some sort of software problem.
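The same directional test can be scripted. Here is a sketch using Scapy instead of Tcpdump, assuming the monitoring interface is named enp0s8 and using 192.0.2.1 as a stand-in for the remote address:

from scapy.all import ICMP, sniff

# Tally ICMP echoes versus echo replies seen on the monitoring interface.
counts = {"echo": 0, "reply": 0}

def tally(pkt):
    if pkt.haslayer(ICMP):
        if pkt[ICMP].type == 8:       # echo request (outbound ping)
            counts["echo"] += 1
        elif pkt[ICMP].type == 0:     # echo reply (inbound response)
            counts["reply"] += 1

sniff(iface="enp0s8", filter="icmp and host 192.0.2.1",
      prn=tally, timeout=30)
print(counts)   # a healthy sensor sees both sides, in equal numbers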

My next step was to "get under" the VM guest and determine if the VM host could see traffic properly. I ran Tcpdump on the Ubuntu 18.04 host on the monitoring interface and repeated my ICMP tests. It saw everything properly. That meant I did not need to bother checking the switch span port that was feeding traffic to the VirtualBox system.

It seemed I had a problem somewhere between the VM host and guest. On the same VM host I was also running an instance of RockNSM. I ran my ICMP tests on the RockNSM VM and, sadly, I got the same one-sided traffic as seen on SO.

Now I was worried. If the problem had only been present in SO, then I could have fixed SO. Because the problem was present in both SO and RockNSM, it had to lie with VirtualBox -- and I might not be able to fix it.

I reviewed my configurations in VirtualBox, ensuring that the "Promiscuous Mode" under the Advanced options was set to "Allow All". At this point I worried that there was a bug in VirtualBox. I did some Google searches and reviewed some forum posts, but I did not see anyone reporting issues with sniffing traffic inside VMs. Still, my use case might have been weird enough to not have been reported.

I decided to try a different approach. I wondered if running VirtualBox with elevated privileges might make a difference. I did not want to take ownership of my user VMs, so I decided to install a new VM and run it with elevated privileges.

Let me stop here to note that I am breaking one of the rules of troubleshooting. I'm introducing two new variables, when I should have introduced only one. I should have built a new VM but run it with the same user privileges with which I was running the existing VMs.

I decided to install a minimal edition of Debian 9, with VirtualBox running via sudo. When I started the VM and sniffed traffic on the monitoring port, lo and behold, my ICMP tests revealed both sides of the traffic as I had hoped. Unfortunately, from this I erroneously concluded that running VirtualBox with elevated privileges was the answer to my problems.

I took ownership of the SO VM in my elevated VirtualBox session, started it, and performed my ICMP tests. Womp womp. Still broken.

I realized I needed to separate the two variables that I had entangled, so I stopped VirtualBox, and changed ownership of the Debian 9 VM to my user account. I then ran VirtualBox with user privileges, started the Debian 9 VM, and ran my ICMP tests. Success again! Apparently elevated privileges had nothing to do with my problem.

By now I was glad I had not posted anything to any user forums describing my problem and asking for help. There was something about the monitoring interface configurations in both SO and RockNSM that resulted in the inability to see both sides of traffic (and avoid weird doubles and triples).

I started my SO VM again and looked at the script that configured the interfaces. I commented out all the entries below the management interface as shown below.

$ cat /etc/network/interfaces

# This configuration was created by the Security Onion setup script.
#
# The original network interface configuration file was backed up to:
# /etc/network/interfaces.bak.
#
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# loopback network interface
auto lo
iface lo inet loopback

# Management network interface
auto enp0s3
iface enp0s3 inet static
  address 192.168.40.76
  gateway 192.168.40.1
  netmask 255.255.255.0
  dns-nameservers 192.168.40.1
  dns-domain localdomain

#auto enp0s8
#iface enp0s8 inet manual
#  up ip link set $IFACE promisc on arp off up
#  down ip link set $IFACE promisc off down
#  post-up ethtool -G $IFACE rx 4096; for i in rx tx sg tso ufo gso gro lro; do ethtool -K $IFACE $i off; done
#  post-up echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6

#auto enp0s9
#iface enp0s9 inet manual
#  up ip link set $IFACE promisc on arp off up
#  down ip link set $IFACE promisc off down
#  post-up ethtool -G $IFACE rx 4096; for i in rx tx sg tso ufo gso gro lro; do ethtool -K $IFACE $i off; done
#  post-up echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6

I rebooted the system and brought the enp0s8 interface up manually using this command:

$ sudo ip link set enp0s8 promisc on arp off up

Fingers crossed, I ran my ICMP sniffing tests, and voila, I saw what I needed -- traffic in both directions, without doubles or triples no less.

So, there appears to be some sort of problem with the way SO and RockNSM set parameters for their monitoring interfaces, at least as far as they interact with VirtualBox 6.0.4 on Ubuntu 18.04. You can see in the network script that SO disables a bunch of NIC options. I imagine one or more of them is the culprit, but I didn't have time to work through them individually.

I tried taking a look at the corresponding network script in RockNSM, but it runs CentOS, and I could not figure out where to look. I'm sure the configuration is there somewhere, but I didn't have the time to find it.

The moral of the story is that I should have immediately checked after installation that both SO and RockNSM were seeing both sides of the traffic I expected them to see. I had taken that for granted for many previous deployments, but something broke recently and I don't know exactly what. My workaround will hopefully hold for now, but I need to take a closer look at the NIC options because I may have introduced another fault.

A second moral is to be careful of changing two or more variables when troubleshooting. When you do that you might fix a problem, but not know what change fixed the issue.

Thoughts on OSSEC Con 2019

Last week I attended my first OSSEC conference. I first blogged about OSSEC in 2007, and wrote other posts about it in the following years.

OSSEC is a host-based intrusion detection and log analysis system with correlation and active response features. It is cross-platform, such that I can run it on my Windows and Linux systems. The moving force behind the conference was a company local to me called Atomicorp.

In brief, I really enjoyed this one-day event. (I had planned to attend the workshop on the second day but my schedule did not cooperate.) The talks were almost uniformly excellent and informative. I even had a chance to talk jiu-jitsu with OSSEC creator Daniel Cid, who despite hurting his leg managed to travel across the country to deliver the keynote.

I'd like to share a few highlights from my notes.

First, I had been worried that OSSEC was in some ways dead. I saw that the Security Onion project had replaced OSSEC with a fork called Wazuh, which I learned is apparently pronounced "wazoo." To my delight, I learned OSSEC is decidedly not dead, and that Wazuh has been suffering stability problems. OSSEC has a lot of interesting development ahead of it, which you can track on their Github repo.

For example, the development roadmap includes eliminating Logstash from the pipeline used by many OSSEC users. OSSEC would feed directly into Elasticsearch. One speaker noted that Logstash has a 1.7 GB memory footprint, which astounded me.

On a related note, the OSSEC team is planning to create a new Web console, with a design goal to have it run in an "AWS t2.micro" instance. The team said that instance offers 2 GB of memory, which doesn't match AWS's specifications; they likely meant the t2.micro with 1 GB of RAM, which is the free tier, or the t2.small with 2 GB. Either way, I'm excited to see this later in 2019.

Second, I thought the presentation by security personnel from USA Today offered an interesting insight. One design goal they had for monitoring their Google Cloud Platform (GCP) was to not install OSSEC on every container or on Kubernetes worker nodes. Several times during the conference, speakers noted that the transient nature of cloud infrastructure is directly antithetical to standard OSSEC usage, whereby OSSEC is installed on servers with long uptime and years of service. Instead, USA Today used OSSEC to monitor HTTP logs from the GCP load balancer, logs from Google Kubernetes Engine, and monitored processes by watching output from successive kubectl invocations.

Third, a speaker from Red Hat brought my attention to an aspect of containers that I had not considered. Docker and containers have made software testing and deployment a lot easier for everyone. However, those who provide containers have effectively become Linux distribution maintainers. In other words, who is responsible when a security or configuration vulnerability in a Linux component is discovered? Will the container maintainers be responsive?

Another speaker emphasized the difference between "security of the cloud," offered by cloud providers, and "security in the cloud," which is supposed to be the customer's responsibility. This makes sense from a technical point of view, but I expect that in the long term this differentiation will no longer be tenable from a business or legal point of view.

Customers are not going to have the skills or interest to secure their software in the cloud, as they outsource ever more technical talent to the cloud providers and their infrastructure. I expect cloud providers to continue to develop, acquire, and offer more security services, and to compete on providing a "complete security environment."

I look forward to more OSSEC development and future conferences.

Thoughts on Cloud Security

Recently I've been reading about cloud security and security with respect to DevOps. I'll say more about the excellent book I'm reading, but I had a moment of déjà vu during one section.

The book described how cloud security is a big change from enterprise security because it relies less on IP-address-centric controls and more on users and groups. The book talked about creating security groups, and adding users to those groups in order to control their access and capabilities.

As I read that passage, it reminded me of a time long ago, in the late 1990s, when I was studying for the MCSE, then called the Microsoft Certified Systems Engineer. I read Tom Sheldon's Windows NT Security Handbook, published in 1996. It described the exact same security process of creating security groups and adding users. This was core to the new NT 4 role-based access control (RBAC) implementation.

Now, fast forward a few years, or all the way to today, and consider the security challenges facing the majority of legacy enterprises: securing Windows assets and the data they store and access. How could this wonderful security model, based on decades of experience (from the 1960s and 1970s no less), have failed to work in operational environments?

There are many reasons one could cite, but I think the following are at least worthy of mention.

The systems enforcing the security model are exposed to intruders.

Furthermore:

Intruders are generally able to gain code execution on systems participating in the security model.

Finally:

Intruders have access to the network traffic which partially contains elements of the security model.

From these weaknesses, a large portion of the security countermeasures of the last two decades have been derived as compensating controls and visibility requirements.

The question then becomes:

Does this change with the cloud?

In brief, I believe the answer is largely "yes," thankfully. Generally, the systems upon which the security model is being enforced are not able to access the enforcement mechanism, thanks to the wonders of virtualization.

Should an intruder find a way to escape from their restricted cloud platform and gain hypervisor or management network access, then they find themselves in a situation similar to the average Windows domain network.

This realization puts a heavy burden on the cloud infrastructure operators. The major players are likely able to acquire and apply the expertise and resources to make their infrastructure far more resilient and survivable than their enterprise counterparts.

The weakness will likely be their personnel.

Once the compute and network components are sufficiently robust from externally sourced compromise, then internal threats become the next most cost-effective and return-producing vectors for dedicated intruders.

Is there anything users can do as they hand their compute and data assets to cloud operators?

I suggest four moves.

First, small- to mid-sized cloud infrastructure users will likely have to piggyback or free-ride on the initiatives and influence of the largest cloud customers, who have the clout and hopefully the expertise to hold the cloud operators responsible for the security of everyone's data.

Second, lawmakers may also need improved whistleblower protection for cloud employees who feel threatened by revealing material weaknesses they encounter while doing their jobs.

Third, government regulators will have to ensure that no cloud provider assumes a monopoly, and that no two providers assume a duopoly. We may end up with the three major players and a smattering of smaller ones, as is the case with many mature industries.

Fourth, users should use every means at their disposal to select cloud operators not only on their compute features, but on their security and visibility features. The more logging and visibility exposed by the cloud provider, the better. I am excited by new features like the Azure network tap and hope to see equivalent features in other cloud infrastructure.

Remember that security has two main functions: planning/resistance, to try to stop bad things from happening, and detection/response, to handle the failures that inevitably occur. "Prevention eventually fails" is one of my long-time mantras. We don't want prevention to fail silently in the cloud. We need ways to know that failure is happening so that we can plan and implement new resistance mechanisms, and then validate their effectiveness via detection and response.

Update: I forgot to mention that the material above assumed that the cloud users and operators made no unintentional configuration mistakes. If users or operators introduce exposures or vulnerabilities, then those will be the weaknesses that intruders exploit. We've already seen a lot of this happening and it appears to be the most common problem. Procedures and tools which constantly assess cloud configurations for exposures and vulnerabilities due to misconfiguration or poor practices are a fifth move which all involved should make.

A corollary is that complexity can drive problems. When the cloud infrastructure offers too many knobs to turn, then it's likely the users and operators will believe they are taking one action when in reality they are implementing another.

Forcing the Adversary to Pursue Insider Theft

Jack Crook pointed me toward a story by Christopher Burgess about intellectual property theft by "Hongjin Tan, a 35 year old Chinese national and U.S. legal permanent resident... [who] was arrested on December 20 and charged with theft of trade secrets. Tan is alleged to have stolen the trade secrets from his employer, a U.S. petroleum company," according to the criminal complaint filed by the US DoJ.

Tan's former employer and the FBI allege that Tan "downloaded restricted files to a personal thumb drive." I could not tell from the complaint if Tan downloaded the files at work or at home, but the thumb drive ended up at Tan's home. His employer asked Tan to bring it to their office, which Tan did. However, he had deleted all the files from the drive. Tan's employer recovered the files using commercially available forensic software.

This incident, by definition, involves an "insider threat." Tan was an employee who appears to have copied information that was outside the scope of his work responsibilities, resigned from his employer, and was planning to return to China to work for a competitor, having delivered his former employer's intellectual property.

When I started GE-CIRT in 2008 (officially "initial operating capability" on 1 January 2009), one of the strategies we pursued involved insider threats. I've written about insiders on this blog before but I couldn't find a description of the strategy we implemented via GE-CIRT.

We sought to make digital intrusions more expensive than physical intrusions.

In other words, we wanted to make it easier for the adversary to accomplish his mission using insiders. We wanted to make it more difficult for the adversary to accomplish his mission using our network.

In a cynical sense, this makes security someone else's problem. Suddenly the physical security team is dealing with the worst of the worst!

This is a win for everyone, however. Consider the many advantages the physical security team has over the digital security team.

The physical security team can work with human resources during the hiring process. HR can run background checks and identify suspicious job applicants prior to granting employment and access.

Employees are far more exposed than remote intruders. Employees, even under cover, expose their appearance, likely residence, and personalities to the company and its workers.

Employees can be subject to far more intensive monitoring than remote intruders. Employee endpoints can be instrumented. Employee workspaces are instrumented via access cards, cameras at entry and exit points, and other measures.

Employers can cooperate with law enforcement to investigate and prosecute employees. They can control and deter theft and other activities.

In brief, insider theft, like all "close access" activities, is incredibly risky for the adversary. It is a win for everyone when the adversary must resort to using insiders to accomplish their mission. Digital and physical security must cooperate to leverage these advantages, while collaborating with human resources, legal, information technology, and business lines to wring the maximum results from this advantage.

Fixing Virtualbox RDP Server with DetectionLab

Yesterday I posted about DetectionLab, but noted that I was having trouble with the RDP servers offered by Virtualbox. If you remember, DetectionLab builds four virtual machines:

root@LAPTOP-HT4TGVCP C:\Users\root>"c:\Program Files\Oracle\VirtualBox\VBoxManage" list runningvms
"logger" {3da9fffb-4b02-4e57-a592-dd2322f14245}
"dc.windomain.local" {ef32d493-845c-45dc-aff7-3a86d9c590cd}
"wef.windomain.local" {7cd008b7-c6e0-421d-9655-8f92ec98d9d7}
"win10.windomain.local" {acf413fb-6358-44df-ab9f-cc7767ed32bd}

I was having a problem with two of the VMs sharing the same port for the RDP server offered by Virtualbox. This meant I could not access one of them. (Below, port 5932 has the conflict.)

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo logger | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 5955, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address  = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo dc.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo wef.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo win10.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 5981, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

To fix this, I explicitly added port values to the configuration in the Vagrantfile. Here is one example:

      vb.customize ["modifyvm", :id, "--vrde", "on"]             # enable the VRDE (RDP) server
      vb.customize ["modifyvm", :id, "--vrdeaddress", "0.0.0.0"] # listen on all host addresses
      vb.customize ["modifyvm", :id, "--vrdeport", "60101"]      # explicit, unique port per VM

After a 'vagrant reload', the RDP servers were now listening on new ports, as I hoped.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo logger | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 60101, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address  = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo dc.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 60102, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo wef.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 60103, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage" showvminfo win10.windomain.local | findstr /I vrde | findstr /I address
VRDE:                        enabled (Address 0.0.0.0, Ports 60104, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
VRDE property               : TCP/Address = "0.0.0.0"

This is great, but I am still encountering a problem with avoiding port collisions when Vagrant remaps ports for services on the VMs.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant status
Current machine states:

logger                    running (virtualbox)
dc                        running (virtualbox)
wef                       running (virtualbox)
win10                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant port logger
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.

    22 (guest) => 2222 (host)

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant port dc
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.

  3389 (guest) => 3389 (host)
    22 (guest) => 2200 (host)
  5985 (guest) => 55985 (host)
  5986 (guest) => 55986 (host)

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant port wef
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.

  3389 (guest) => 2201 (host)
    22 (guest) => 2202 (host)
  5985 (guest) => 2203 (host)
  5986 (guest) => 2204 (host)

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant port win10
The forwarded ports for the machine are listed below. Please note that
these values may differ from values configured in the Vagrantfile if the
provider supports automatic port collision detection and resolution.

  3389 (guest) => 2205 (host)
    22 (guest) => 2206 (host)
  5985 (guest) => 2207 (host)
  5986 (guest) => 2208 (host)

The problem is the dc entry that maps guest port 3389 to host port 3389. Vagrant should not map that port on the host, because the RDP server on my Windows 10 host already listens on port 3389.

I tried telling Vagrant by hand in the Vagrantfile to map port 3389 elsewhere, but nothing worked. (I tried entries like the following.)

    config.vm.network :forwarded_port, guest: 3389, host: 5789

I also searched to see if there might be a configuration outside the Vagrantfile that I was missing. Here is what I found:

ds61@ds61:~/DetectionLab-master$ find . | xargs grep "3389" *
./Terraform/Method1/main.tf:    from_port   = 3389
./Terraform/Method1/main.tf:    to_port     = 3389
./Packer/vagrantfile-windows_2016.template:    config.vm.network :forwarded_port, guest: 3389, host: 3389, id: "rdp", auto_correct: true
./Packer/scripts/enable-rdp.bat:netsh advfirewall firewall add rule name="Open Port 3389" dir=in action=allow protocol=TCP localport=3389
./Packer/vagrantfile-windows_10.template:    config.vm.network :forwarded_port, guest: 3389, host: 3389, id: "rdp", auto_correct: true

I wonder if those Packer templates have anything to do with it, or if I am encountering a problem with Vagrant itself. Note that both templates define the forwarded port with id: "rdp" and auto_correct: true; if Vagrant merges forwarded port definitions by id, then my override may have needed to carry the same "rdp" id to replace the template's mapping rather than add a second one. I have seen many people experience similar issues, so I don't know.

It's not a big deal, though. Now that I can directly access the virtual screens for each VM on Virtualbox via the RDP server, I don't need to RDP to port 3389 on each Windows VM in order to interact with it.

If anyone has any ideas, though, I'm interested!

Trying DetectionLab

Many security professionals run personal labs. Trying to create an environment that includes fairly modern Windows systems can be a challenge. In the age of "infrastructure as code," there should be a simpler way to deploy systems in a repeatable, virtualized way -- right?

Enter DetectionLab, a project by Chris Long. Briefly, Chris built a project that uses Packer and Vagrant to create an instrumented lab environment. Chris explained the project in late 2017 in a Medium post, which I recommend reading.

I can't even begin to describe all the functionality packed into this project. So much of it is new, but this is a great way to learn about it. In this post, I would like to show how I got a version of DetectionLab running.

My build environment included a modern laptop with 16 GB RAM and Windows 10 professional. I had already installed Virtualbox 6.0 with the appropriate VirtualBox Extension Pack. I had also enabled the native OpenSSH server and performed all DetectionLab installation functions over an OpenSSH session.

Install Chocolatey

My first step was to install Chocolatey, a package manager for Windows. I wanted to use it to install the Git client I would need to clone the DetectionLab repo. Commands I typed at each stage follow the prompts below.

root@LAPTOP-HT4TGVCP C:\Users\root>@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
Getting latest version of the Chocolatey package for download.
Getting Chocolatey from https://chocolatey.org/api/v2/package/chocolatey/0.10.11.
Downloading 7-Zip commandline tool prior to extraction.
Extracting C:\Users\root\AppData\Local\Temp\chocolatey\chocInstall\chocolatey.zip to C:\Users\root\AppData\Local\Temp\chocolatey\chocInstall...
Installing chocolatey on this machine
Creating ChocolateyInstall as an environment variable (targeting 'Machine')
  Setting ChocolateyInstall to 'C:\ProgramData\chocolatey'
WARNING: It's very likely you will need to close and reopen your shell
  before you can use choco.
Restricting write permissions to Administrators
We are setting up the Chocolatey package repository.
The packages themselves go to 'C:\ProgramData\chocolatey\lib'
  (i.e. C:\ProgramData\chocolatey\lib\yourPackageName).
A shim file for the command line goes to 'C:\ProgramData\chocolatey\bin'
  and points to an executable in 'C:\ProgramData\chocolatey\lib\yourPackageName'.

Creating Chocolatey folders if they do not already exist.

WARNING: You can safely ignore errors related to missing log files when
  upgrading from a version of Chocolatey less than 0.9.9.
  'Batch file could not be found' is also safe to ignore.
  'The system cannot find the file specified' - also safe.
chocolatey.nupkg file not installed in lib.
 Attempting to locate it from bootstrapper.
PATH environment variable does not have C:\ProgramData\chocolatey\bin in it. Adding...
WARNING: Not setting tab completion: Profile file does not exist at 'C:\Users\root\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1'.
Chocolatey (choco.exe) is now ready.
You can call choco from anywhere, command line or powershell by typing choco.
Run choco /? for a list of functions.
You may need to shut down and restart powershell and/or consoles
 first prior to using choco.
Ensuring chocolatey commands are on the path
Ensuring chocolatey.nupkg is in the lib folder

root@LAPTOP-HT4TGVCP C:\Users\root>choco
Chocolatey v0.10.11
Please run 'choco -?' or 'choco <command> -?' for help menu.

Install Git

With Chocolatey installed, I could install Git.

root@LAPTOP-HT4TGVCP C:\Users\root>choco install git -params '"/GitAndUnixToolsOnPath"'
Chocolatey v0.10.11
Installing the following packages:
git
By installing you accept licenses for the packages.
Progress: Downloading git.install 2.20.1... 100%
Progress: Downloading chocolatey-core.extension 1.3.3... 100%
Progress: Downloading git 2.20.1... 100%

chocolatey-core.extension v1.3.3 [Approved]
chocolatey-core.extension package files install completed. Performing other installation steps.
 Installed/updated chocolatey-core extensions.
 The install of chocolatey-core.extension was successful.
  Software installed to 'C:\ProgramData\chocolatey\extensions\chocolatey-core'

git.install v2.20.1 [Approved]
git.install package files install completed. Performing other installation steps.
The package git.install wants to run 'chocolateyInstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[N]o/[P]rint): y

@{Inno Setup CodeFile: Path Option=CmdTools; PSPath=Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\Git_
is1; PSParentPath=Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall; PSChildName=Git_is1; PSDrive=HKLM; PS
Provider=Microsoft.PowerShell.Core\Registry}
Using Git LFS
Installing 64-bit git.install...
git.install has been installed.
git.install installed to 'C:\Program Files\Git'
  git.install can be automatically uninstalled.
Environment Vars (like PATH) have changed. Close/reopen your shell to
 see the changes (or in powershell/cmd.exe just type `refreshenv`).
 The install of git.install was successful.
  Software installed to 'C:\Program Files\Git\'

git v2.20.1 [Approved]
git package files install completed. Performing other installation steps.
 The install of git was successful.
  Software install location not explicitly set, could be in package or
  default install location if installer.

Chocolatey installed 3/3 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Clone DetectionLab

With Git installed, I could clone the DetectionLab repo from GitHub.

root@LAPTOP-HT4TGVCP C:\Users\root>mkdir git

root@LAPTOP-HT4TGVCP C:\Users\root>cd git

root@LAPTOP-HT4TGVCP C:\Users\root\git>mkdir detectionlab

root@LAPTOP-HT4TGVCP C:\Users\root\git>cd detectionlab

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab>git clone https://github.com/clong/DetectionLab.git
'git' is not recognized as an internal or external command,
operable program or batch file.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab>refreshenv
Refreshing environment variables from registry for cmd.exe. Please wait...Finished..

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab>git clone https://github.com/clong/DetectionLab.git
Cloning into 'DetectionLab'...
remote: Enumerating objects: 1, done.
remote: Counting objects: 100% (1/1), done.
remote: Total 1163 (delta 0), reused 0 (delta 0), pack-reused 1162
Receiving objects: 100% (1163/1163), 11.81 MiB | 12.24 MiB/s, done.
Resolving deltas: 100% (568/568), done.

Install Vagrant

Before going any further, I needed to install Vagrant.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab>cd ..\..\

root@LAPTOP-HT4TGVCP C:\Users\root>choco install vagrant
Chocolatey v0.10.11
Installing the following packages:
vagrant
By installing you accept licenses for the packages.
Progress: Downloading vagrant 2.2.3... 100%

vagrant v2.2.3 [Approved]
vagrant package files install completed. Performing other installation steps.
The package vagrant wants to run 'chocolateyinstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[N]o/[P]rint): y

Downloading vagrant 64 bit
  from 'https://releases.hashicorp.com/vagrant/2.2.3/vagrant_2.2.3_x86_64.msi'
Progress: 100% - Completed download of C:\Users\root\AppData\Local\Temp\chocolatey\vagrant\2.2.3\vagrant_2.2.3_x86_64.msi (229.22 MB).
Download of vagrant_2.2.3_x86_64.msi (229.22 MB) completed.
Hashes match.
Installing vagrant...
vagrant has been installed.
Repairing currently installed global plugins. This may take a few minutes...
Installed plugins successfully repaired!
  vagrant may be able to be automatically uninstalled.
Environment Vars (like PATH) have changed. Close/reopen your shell to
 see the changes (or in powershell/cmd.exe just type `refreshenv`).
 The install of vagrant was successful.
  Software installed as 'msi', install location is likely default.

Chocolatey installed 1/1 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Packages requiring reboot:
 - vagrant (exit code 3010)

The recent package changes indicate a reboot is necessary.
 Please reboot at your earliest convenience.

root@LAPTOP-HT4TGVCP C:\Users\root>shutdown /r /t 0

Installing DetectionLab

Now we are finally at the point where we can install DetectionLab. Note that in my example, I downloaded boxes already built by Chris, rather than building my own, in order to save time. You can follow his instructions to build boxes yourself, as sketched below.

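For reference, here is a minimal sketch of the self-build path, assuming Packer is installed and that the repo's templates keep the names I believe they had at the time (windows_10.json and windows_2016.json):

    cd C:\Users\root\git\detectionlab\DetectionLab\Packer
    packer build --only=virtualbox-iso windows_10.json
    packer build --only=virtualbox-iso windows_2016.json
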
I saw an error regarding the win10 host, but that did not appear to be a real problem.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab>powershell
Windows PowerShell
Copyright (C) Microsoft Corporation. All rights reserved.

PS C:\Users\root\git\detectionlab\DetectionLab> .\build.ps1 -ProviderName virtualbox -VagrantOnly
[preflight_checks] Running..
[preflight_checks] Checking if Vagrant is installed
[preflight_checks] Checking for pre-existing boxes..
[preflight_checks] Checking for vagrant instances..
[preflight_checks] Checking disk space..
[preflight_checks] Checking if vagrant-reload is installed..
The vagrant-reload plugin is required and not currently installed. This script will attempt to install it now.
Installing the 'vagrant-reload' plugin. This can take a few minutes...
Installed the plugin 'vagrant-reload (0.0.1)'!
[preflight_checks] Finished.
[download_boxes] Running..
[download_boxes] Downloading windows_10_virtualbox.box

[download_boxes] Downloading windows_2016_virtualbox.box
[download_boxes] Getting filehash for: windows_10_virtualbox.box
[download_boxes] Getting filehash for: windows_2016_virtualbox.box
[download_boxes] Checking Filehashes..
[download_boxes] Finished.
[main] Running vagrant_up_host for: logger
[vagrant_up_host] Running for logger
Attempting to bring up the logger host using Vagrant
[vagrant_up_host] Finished for logger. Got exit code: 0
[main] vagrant_up_host finished. Exitcode: 0
Good news! logger was built successfully!
[main] Finished for: logger
[main] Running vagrant_up_host for: dc
[vagrant_up_host] Running for dc
Attempting to bring up the dc host using Vagrant
[vagrant_up_host] Finished for dc. Got exit code: 0
[main] vagrant_up_host finished. Exitcode: 0
Good news! dc was built successfully!
[main] Finished for: dc
[main] Running vagrant_up_host for: wef
[vagrant_up_host] Running for wef
Attempting to bring up the wef host using Vagrant
[vagrant_up_host] Finished for wef. Got exit code: 0
[main] vagrant_up_host finished. Exitcode: 0
Good news! wef was built successfully!
[main] Finished for: wef
[main] Running vagrant_up_host for: win10
[vagrant_up_host] Running for win10
Attempting to bring up the win10 host using Vagrant
[vagrant_up_host] Finished for win10. Got exit code: 1
[main] vagrant_up_host finished. Exitcode: 1
WARNING: Something went wrong while attempting to build the win10 box.
Attempting to reload and reprovision the host...
[main] Running vagrant_reload_host for: win10
[vagrant_reload_host] Running for win10
[vagrant_reload_host] Finished for win10. Got exit code: 1
C:\Users\root\git\detectionlab\DetectionLab\build.ps1 : Failed to bring up win10 after a reload. Exiting
At line:1 char:1
+ .\build.ps1 -ProviderName virtualbox -VagrantOnly
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,build.ps1

[main] Running post_build_checks
[post_build_checks] Running Caldera Check.
[download] Running for https://192.168.38.105:8888, looking for
[download] Found at https://192.168.38.105:8888
[post_build_checks] Cladera Result: True
[post_build_checks] Running Splunk Check.
[download] Running for https://192.168.38.105:8000/en-US/account/login?return_to=%2Fen-US%2F, looking for This browser is not supported by Splunk
[download] Found This browser is not supported by Splunk at https://192.168.38.105:8000/en-US/account/login?return_to=%2Fen-US%2F
[post_build_checks] Splunk Result: True
[post_build_checks] Running Fleet Check.
[download] Running for https://192.168.38.105:8412, looking for Kolide Fleet
[download] Found Kolide Fleet at https://192.168.38.105:8412
[post_build_checks] Fleet Result: True
[post_build_checks] Running MS ATA Check.
[download] Running for https://192.168.38.103, looking for
[post_build_checks] ATA Result: True
[main] Finished post_build_checks

Checking the VMs

I used the VirtualBox command-line program to check the status of the new VMs.

root@LAPTOP-HT4TGVCP c:\Program Files\Oracle\VirtualBox>VBoxManage list runningvms
"logger" {3da9fffb-4b02-4e57-a592-dd2322f14245}
"dc.windomain.local" {ef32d493-845c-45dc-aff7-3a86d9c590cd}
"wef.windomain.local" {7cd008b7-c6e0-421d-9655-8f92ec98d9d7}
"win10.windomain.local" {acf413fb-6358-44df-ab9f-cc7767ed32bd}

Interacting with Vagrant and the Logger Host

Next I decided to use Vagrant to check on the status of the boxes, and to interact with one if I could. I wanted to find the Bro and Suricata logs.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab>cd Vagrant

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant status
Current machine states:

logger                    running (virtualbox)
dc                        running (virtualbox)
wef                       running (virtualbox)
win10                     running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.


root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant ssh logger

Welcome to Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-131-generic x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

29 packages can be updated.
24 updates are security updates.


Last login: Sun Jan 27 19:24:05 2019 from 10.0.2.2

root@logger:~# /opt/bro/bin/broctl status
Name         Type    Host             Status    Pid    Started
manager      manager localhost        running   9848   27 Jan 17:19:15
proxy        proxy   localhost        running   9893   27 Jan 17:19:16
worker-eth1-1 worker  localhost        running   9945   27 Jan 17:19:17
worker-eth1-2 worker  localhost        running   9948   27 Jan 17:19:17

vagrant@logger:~$ ls -al /opt/bro
total 32
drwxr-xr-x 8 root root 4096 Jan 27 17:19 .
drwxr-xr-x 5 root root 4096 Jan 27 17:19 ..
drwxr-xr-x 2 root root 4096 Jan 27 17:19 bin
drwxrwsr-x 2 root bro  4096 Jan 27 17:19 etc
drwxr-xr-x 3 root root 4096 Jan 27 17:19 lib
drwxrws--- 3 root bro  4096 Jan 27 18:00 logs
drwxr-xr-x 4 root root 4096 Jan 27 17:19 share
drwxrws--- 8 root bro  4096 Jan 27 17:19 spool

vagrant@logger:~$ ls -al /opt/bro/logs
ls: cannot open directory '/opt/bro/logs': Permission denied

vagrant@logger:~$ sudo bash

root@logger:~# ls -al /opt/bro/logs/
2019-01-27/ current/

root@logger:~# ls -al /opt/bro/logs/current/
total 3664
drwxr-sr-x 2 root bro    4096 Jan 27 19:20 .
drwxrws--- 8 root bro    4096 Jan 27 17:19 ..
-rw-r--r-- 1 root bro     475 Jan 27 19:19 capture_loss.log
-rw-r--r-- 1 root bro     127 Jan 27 17:19 .cmdline
-rw-r--r-- 1 root bro   83234 Jan 27 19:30 communication.log
-rw-r--r-- 1 root bro 1430714 Jan 27 19:30 conn.log
-rw-r--r-- 1 root bro    1340 Jan 27 19:00 dce_rpc.log
-rw-r--r-- 1 root bro  185114 Jan 27 19:28 dns.log
-rw-r--r-- 1 root bro     310 Jan 27 17:19 .env_vars
-rw-r--r-- 1 root bro  139387 Jan 27 19:30 files.log
-rw-r--r-- 1 root bro  544416 Jan 27 19:30 http.log
-rw-r--r-- 1 root bro     224 Jan 27 19:05 known_services.log
-rw-r--r-- 1 root bro     956 Jan 27 19:19 notice.log
-rw-r--r-- 1 root bro       5 Jan 27 17:19 .pid
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.capture_loss
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.communication
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.conn
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.conn-summary
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.dce_rpc
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.dns
-rw-r--r-- 1 root bro      18 Jan 27 18:00 .rotated.dpd
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.files
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.http
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.kerberos
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.known_certs
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.known_hosts
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.known_services
-rw-r--r-- 1 root bro      18 Jan 27 18:00 .rotated.loaded_scripts
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.notice
-rw-r--r-- 1 root bro      18 Jan 27 18:00 .rotated.packet_filter
-rw-r--r-- 1 root bro      18 Jan 27 18:00 .rotated.reporter
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.smb_files
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.smb_mapping
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.software
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.ssl
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.stats
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.weird
-rw-r--r-- 1 root bro      18 Jan 27 19:00 .rotated.x509
-rw-r--r-- 1 root bro    1311 Jan 27 19:24 smb_mapping.log
-rw-r--r-- 1 root bro   15767 Jan 27 19:30 ssl.log
-rw-r--r-- 1 root bro      58 Jan 27 17:19 .startup
-rw-r--r-- 1 root bro   11326 Jan 27 19:30 stats.log
-rwx------ 1 root bro      18 Jan 27 17:19 .status
-rw-r--r-- 1 root bro      80 Jan 27 19:00 stderr.log
-rw-r--r-- 1 root bro     188 Jan 27 17:19 stdout.log
-rw-r--r-- 1 root bro 1141860 Jan 27 19:30 weird.log
-rw-r--r-- 1 root bro    2799 Jan 27 19:20 x509.log

root@logger:~# cd /opt/bro/logs/

root@logger:/opt/bro/logs# ls
2019-01-27  current

root@logger:/opt/bro/logs# cd current

root@logger:/opt/bro/logs/current# ls

capture_loss.log   conn.log     dns.log    http.log            notice.log     smb_mapping.log  stats.log   stdout.log  x509.log
communication.log  dce_rpc.log  files.log  known_services.log  smb_files.log  ssl.log          stderr.log  weird.log

root@logger:/opt/bro/logs/current# jq  -c . dce_rpc.log  | head

{"ts":1548615615.836272,"uid":"CEmNr31qusp3G7GFg4","id.orig_h":"192.168.38.104","id.orig_p":49758,"id.resp_h":"192.168.38.102","id.resp_p":135,"named_pipe":
"135","endpoint":"epmapper","operation":"ept_map"}
{"ts":1548615615.83961,"uid":"CJO7xe4JUo43IjGG01","id.orig_h":"192.168.38.104","id.orig_p":49759,"id.resp_h":"192.168.38.102","id.resp_p":49667,"rtt":0.0003
57,"named_pipe":"49667","endpoint":"lsarpc","operation":"LsarLookupSids3"}
{"ts":1548615615.851544,"uid":"CJO7xe4JUo43IjGG01","id.orig_h":"192.168.38.104","id.orig_p":49759,"id.resp_h":"192.168.38.102","id.resp_p":49667,"rtt":0.000
596,"named_pipe":"49667","endpoint":"lsarpc","operation":"LsarLookupSids3"}
{"ts":1548615615.835628,"uid":"CgEcizh05xJ1ricP8","id.orig_h":"192.168.38.104","id.orig_p":49758,"id.resp_h":"192.168.38.102","id.resp_p":135,"named_pipe":"
135","endpoint":"epmapper","operation":"ept_map"}
{"ts":1548615615.839587,"uid":"CVV6WZ3vgpE673rl6a","id.orig_h":"192.168.38.104","id.orig_p":49759,"id.resp_h":"192.168.38.102","id.resp_p":49667,"rtt":0.000
382,"named_pipe":"49667","endpoint":"lsarpc","operation":"LsarLookupSids3"}
{"ts":1548615615.852193,"uid":"CVV6WZ3vgpE673rl6a","id.orig_h":"192.168.38.104","id.orig_p":49759,"id.resp_h":"192.168.38.102","id.resp_p":49667,"rtt":0.000
382,"named_pipe":"49667","endpoint":"lsarpc","operation":"LsarLookupSids3"}
{"ts":1548618003.200283,"uid":"CGYDww34wYz8eCYr96","id.orig_h":"192.168.38.103","id.orig_p":63295,"id.resp_h":"192.168.38.102","id.resp_p":135,"named_pipe":
"135","endpoint":"epmapper","operation":"ept_map"}
{"ts":1548618003.200403,"uid":"CrTZMz2nCXsiY5WOF8","id.orig_h":"192.168.38.103","id.orig_p":63295,"id.resp_h":"192.168.38.102","id.resp_p":135,"named_pipe":
"135","endpoint":"epmapper","operation":"ept_map"}

root@logger:~# head /var/log/suricata/fast.log

01/27/2019-17:19:08.133030  [**] [1:2013028:4] ET POLICY curl User-Agent Outbound [**] [Classification: Attempted Information Leak] [Priority: 2] {TCP} 10.0
.2.15:51574 -> 195.135.221.134:80
01/27/2019-17:19:08.292747  [**] [1:2013504:5] ET POLICY GNU/Linux APT User-Agent Outbound likely related to package management [**] [Classification: Not Su
spicious Traffic] [Priority: 3] {TCP} 10.0.2.15:55260 -> 99.84.178.103:80
01/27/2019-17:19:08.356618  [**] [1:2013504:5] ET POLICY GNU/Linux APT User-Agent Outbound likely related to package management [**] [Classification: Not Su
spicious Traffic] [Priority: 3] {TCP} 10.0.2.15:55260 -> 99.84.178.103:80
01/27/2019-17:19:08.432477  [**] [1:2013504:5] ET POLICY GNU/Linux APT User-Agent Outbound likely related to package management [**] [Classification: Not Su
spicious Traffic] [Priority: 3] {TCP} 10.0.2.15:46630 -> 91.189.95.83:80
01/27/2019-17:19:08.448249  [**] [1:2013504:5] ET POLICY GNU/Linux APT User-Agent Outbound likely related to package management [**] [Classification: Not Su
spicious Traffic] [Priority: 3] {TCP} 10.0.2.15:53932 -> 91.189.88.161:80
...trimmed...
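
Suricata can also write JSON output that jq can slice, just like the Bro logs above. This is a hedged sketch, assuming the default eve.json output is enabled at /var/log/suricata/eve.json (a path I did not verify in this lab):

    jq -c 'select(.event_type == "alert")' /var/log/suricata/eve.json | head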

Docker on the Logger Host

Chris is using Docker to provide some of the Logger host functions, e.g.:

root@logger:~# docker container ls
CONTAINER ID        IMAGE                    COMMAND                   CREATED             STATUS              PORTS                              NAMES
343c18f933d9        kolide/fleet:latest      "sh -c 'echo '\\n' | …"   3 hours ago         Up 3 hours          0.0.0.0:8412->8412/tcp             kolidequic
kstart_fleet_1
513cb0d61401        mysql:5.7                "docker-entrypoint.s…"    3 hours ago         Up 3 hours          3306/tcp, 33060/tcp                kolidequic
kstart_mysql_1
b0278855b130        mailhog/mailhog:latest   "MailHog"                 3 hours ago         Up 3 hours          1025/tcp, 0.0.0.0:8025->8025/tcp   kolidequic
kstart_mailhog_1
ddcd3e59dda2        redis:3.2.4              "docker-entrypoint.s…"    3 hours ago         Up 3 hours          6379/tcp                           kolidequic
kstart_redis_1
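
Standard Docker commands work against these containers; for example, to tail the Fleet container's logs, using the container ID from the listing above:

    docker logs --tail 20 343c18f933d9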

Troubleshooting Localhost Bindings

One of the issues I encountered involved the IP addresses to which the VMs bound their VirtualBox Remote Display Protocol (VRDP) servers. The default configuration bound them to localhost on my Windows laptop. That was fine when I was working at the laptop in person, but I was doing this work remotely.

I could RDP to the laptop, and then RDP from the laptop to the VMs. That worked, but it was a slight hassle to log into the Windows Server 2016 system, which required a ctrl-alt-del sequence. (In a nested RDP session, you can send that by running the on-screen keyboard, osk.exe, on the remote laptop and typing ctrl-alt-end.) I wanted an easier solution.

Dustin Lee, who has done a lot of work customizing DetectionLab to include Security Onion (a future post, maybe?), suggested I modify the Vagrantfile with the content shown below. The two vb.customize "--vrde" lines are the additions. This example is for the wef host in the Vagrantfile.

    cfg.vm.provider "virtualbox" do |vb, override|
      vb.gui = true
      vb.name = "wef.windomain.local"
      vb.default_nic_type = "82545EM"
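      # The next two "--vrde" lines are the additions: enable the VRDE server and bind it to all interfaces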
      vb.customize ["modifyvm", :id, "--vrde", "on"]
      vb.customize ["modifyvm", :id, "--vrdeaddress", "0.0.0.0"]
      vb.customize ["modifyvm", :id, "--memory", 2048]
      vb.customize ["modifyvm", :id, "--cpus", 2]
      vb.customize ["modifyvm", :id, "--vram", "32"]
      vb.customize ["modifyvm", :id, "--clipboard", "bidirectional"]
      vb.customize ["setextradata", "global", "GUI/SuppressMessages", "all" ]
    end

Basically, add those two entries wherever you see a "virtualbox" provider block, to bind each VM's VRDE server to 0.0.0.0, meaning all IP addresses, including the laptop's public IP, as I wanted.
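
If you would rather not edit the Vagrantfile, the same change can be made directly with VBoxManage while a VM is powered off. This is a sketch using the modifyvm options --vrde and --vrdeaddress:

    "c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm wef.windomain.local --vrde on --vrdeaddress 0.0.0.0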

Before I restarted the wef host, you can see below that the VRDE server was listening only on localhost, on port 5932.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" showvminfo wef.windomain.local | findstr /I vrde

VRDE:                        enabled (Address 127.0.0.1, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
...trimmed...

After changing the Vagrantfile, I restarted the wef host using Vagrant.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant reload wef
==> wef: Attempting graceful shutdown of VM...
==> wef: Clearing any previously set forwarded ports...
==> wef: Fixed port collision for 3389 => 3389. Now on port 2201.
==> wef: Fixed port collision for 22 => 2222. Now on port 2202.
==> wef: Fixed port collision for 5985 => 55985. Now on port 2203.
==> wef: Fixed port collision for 5986 => 55986. Now on port 2204.
==> wef: Clearing any previously set network interfaces...
==> wef: Preparing network interfaces based on configuration...
    wef: Adapter 1: nat
    wef: Adapter 2: hostonly
==> wef: Forwarding ports...
    wef: 3389 (guest) => 2201 (host) (adapter 1)
    wef: 22 (guest) => 2202 (host) (adapter 1)
    wef: 5985 (guest) => 2203 (host) (adapter 1)
    wef: 5986 (guest) => 2204 (host) (adapter 1)
==> wef: Running 'pre-boot' VM customizations...
==> wef: Booting VM...
==> wef: Waiting for machine to boot. This may take a few minutes...
    wef: WinRM address: 127.0.0.1:2203
    wef: WinRM username: vagrant
    wef: WinRM execution_time_limit: PT2H
    wef: WinRM transport: negotiate
==> wef: Machine booted and ready!
==> wef: Checking for guest additions in VM...
    wef: The guest additions on this VM do not match the installed version of
    wef: VirtualBox! In most cases this is fine, but in rare cases it can
    wef: prevent things such as shared folders from working properly. If you see
    wef: shared folder errors, please make sure the guest additions within the
    wef: virtual machine match the version of VirtualBox you have installed on
    wef: your host and reload your VM.
    wef:
    wef: Guest Additions Version: 5.2.16
    wef: VirtualBox Version: 6.0
==> wef: Setting hostname...
==> wef: Configuring and enabling network interfaces...
==> wef: Mounting shared folders...

    wef: /vagrant =>  

The forwarded-port entry "3389 (guest) => 2201 (host)" means I can log into the wef host as user vagrant / password vagrant, over RDP, directly from another computer, by connecting to the laptop's port 2201.

After restarting the wef host, I checked which IP address and port the VRDE server was listening on:

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>"c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" showvminfo wef.windomain.local | findstr /I vrde
VRDE:                        enabled (Address 0.0.0.0, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
...trimmed...

This should allow me to access the "screen" of the VM via port 5932 and the IP address of the host laptop. Unfortunately, there was some sort of conflict: the domain controller had reserved the same port for its VRDE server.

root@LAPTOP-HT4TGVCP C:\Users\root>"c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" showvminfo dc.windomain.local | findstr /I vrde

VRDE:                        enabled (Address 0.0.0.0, Ports 5932, MultiConn: off, ReuseSingleConn: off, Authentication type: null)
...trimmed...

I encountered a related issue with the domain controller: Vagrant did not fix the collision between the dc guest's forwarded RDP port and port 3389, which was already in use by the host system's own RDP server.

root@LAPTOP-HT4TGVCP C:\Users\root\git\detectionlab\DetectionLab\Vagrant>vagrant reload dc
==> dc: Attempting graceful shutdown of VM...
==> dc: Clearing any previously set forwarded ports...
==> dc: Fixed port collision for 22 => 2222. Now on port 2200.
==> dc: Clearing any previously set network interfaces...
==> dc: Preparing network interfaces based on configuration...
    dc: Adapter 1: nat
    dc: Adapter 2: hostonly
==> dc: Forwarding ports...
    dc: 3389 (guest) => 3389 (host) (adapter 1)
    dc: 22 (guest) => 2200 (host) (adapter 1)
    dc: 5985 (guest) => 55985 (host) (adapter 1)
    dc: 5986 (guest) => 55986 (host) (adapter 1)

I haven't solved these problems yet. I wonder if it's a result of using pre-built VMs, which bundle the 5.x series of VirtualBox Guest Additions, while my VirtualBox installation runs 6.0?
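
If the root cause is simply two VMs claiming the same VRDE port, one workaround I have not tested would be to pin each powered-off VM to its own port with the modifyvm --vrdeport option (the port numbers below are arbitrary choices of mine):

    "c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm dc.windomain.local --vrdeport 5933
    "c:\Program Files\Oracle\VirtualBox\VBoxManage.exe" modifyvm wef.windomain.local --vrdeport 5934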

Summary

These are minor issues, though; the result of all this work is four systems offering Windows client and server features, plus instrumentation. That could be another topic for discussion. I'm also excited by the prospect of running all of this in the cloud. Furthermore, Dustin Lee has a fork of DetectionLab that replaces some of the instrumentation with Security Onion!

Leveraging the Power of Solutions and Intelligence

Welcome to my first post as a FireEye™ employee! Many of you have asked me what I think of FireEye's acquisition of Mandiant. One of the aspects of the new company that I find most exciting is our increased threat intelligence capabilities. This post will briefly explore what that means for our customers, prospects, and the public.

By itself, Mandiant generates threat intelligence in a distinctive manner from three primary sources. First, our professional services division learns about adversary tools, tactics, and procedures (TTPs) by assisting intrusion victims. This "boots on the ground" offering is unlike any other, in terms of efficiency (a small number of personnel required), speed (days or weeks onsite, instead of weeks or months), and effectiveness (we know how to remove advanced foes). By having consultants inside a dozen or more leading organizations every week of the year, Mandiant gains front-line experience of cutting-edge intrusion activity. Second, the Managed Defense™ division operates our software and provides complementary services on a multi-year subscription basis. This team develops long-term counter-intrusion experience by constantly assisting another set of customers in a managed security services model. Finally, Mandiant's intelligence team acquires data from a variety of sources, fusing it with information from professional services and managed defense. The output of all this work includes deliverables such as the annual M-Trends report and last year's APT1 document, both of which are free to the public. Mandiant customers have access to more intelligence through our software and services.

As a security software company, FireEye deploys powerful appliances into customer environments to inspect and (if so desired) quarantine malicious content. Most customers choose to benefit from the cloud features of the FireEye product suite. This decision enables community self-defense and exposes a rich collection of the world's worst malware. As millions more instances of FireEye's MVX technology expand to mobile, cloud and data center environments, all of us benefit in terms of protection and visibility. Furthermore, FireEye's own threat intelligence and services components generate knowledge based on their visibility into adversary software and activity. Recent examples include breaking news on Android malware, identifying Yahoo! systems serving malware, and exploring "cyber arms" dealers. Like Mandiant, FireEye's customers benefit from intelligence embedded into the MVX platforms.

Many have looked at the Mandiant and FireEye combination from the perspective of software and services. While these are important, both ultimately depend on access to the best threat intelligence available. As a combined entity, FireEye can draw upon nearly 2,000 employees in 40 countries, with a staff of security consultants, analysts, engineers, and experts not found in any other private organization. Stay tuned to the FireEye and Mandiant blogs as we work to provide an integrated view of adversary activity throughout 2014.

I hope you can attend the FireEye + Mandiant - 4 Key Steps to Continuous Threat Protection webinar on Wednesday, Jan 29 at 2 pm ET. During the webinar, Manish Gupta, FireEye SVP of Products, and Dave Merkel, Mandiant CTO and VP of Products, will discuss why traditional IT security defenses are no longer the safeguards they once were, and what's now needed to protect against today's advanced threats.