
MITRE ATT&CK Tactics Are Not Tactics



Just what are "tactics"?

Introduction


MITRE ATT&CK is a great resource, but something about it has bothered me since I first heard about it several years ago. It's a minor point, but I wanted to document it in case it confuses anyone else.

The MITRE ATT&CK Design and Philosophy document from March 2020 says the following:

At a high-level, ATT&CK is a behavioral model that consists of the following core components:

• Tactics, denoting short-term, tactical adversary goals during an attack;
• Techniques, describing the means by which adversaries achieve tactical goals;
• Sub-techniques, describing more specific means by which adversaries achieve tactical goals at a lower level than techniques; and
• Documented adversary usage of techniques, their procedures, and other metadata.

My concern is with MITRE's definition of "tactics" as "short-term, tactical adversary goals during an attack," which is oddly recursive.

The key word in the tactics definition is goals. According to MITRE, "tactics" are "goals."

Examples of ATT&CK Tactics


ATT&CK lists the following as "Enterprise Tactics":

MITRE ATT&CK "Tactics," https://attack.mitre.org/tactics/enterprise/

Looking at this list, the first 11 items could indeed be seen as goals. The last item, Impact, is not a goal. That item is an artifact of trying to shoehorn more information into the ATT&CK structure. That's not my primary concern though.

Military Theory and Definitions


As a service academy graduate who sat through many lectures on military theory and participated in small unit exercises, I find that the idea of tactics as "goals" does not make any sense.

I'd like to share three resources that offer a different perspective on tactics. Although all three are military, my argument does not depend on that association.

The DOD Dictionary of Military and Associated Terms defines tactics as "the employment and ordered arrangement of forces in relation to each other. See also procedures; techniques. (CJCSM 5120.01)" (emphasis added)

In his book On Tactics, B. A. Friedman defines tactics as "the use of military forces to achieve victory over opposing enemy forces over the short term." (emphasis added)

Dr. Martin van Creveld, scholar and author from the military strategy world, wrote the excellent Encyclopedia Britannica entry on tactics. His article includes the following:

"Tactics, in warfare, the art and science of fighting battles on land, on sea, and in the air. It is concerned with the approach to combat; the disposition of troops and other personalities; the use made of various arms, ships, or aircraft; and the execution of movements for attack or defense...

The word tactics originates in the Greek taxis, meaning order, arrangement, or disposition -- including the kind of disposition in which armed formations used to enter and fight battles. From this, the Greek historian Xenophon derived the term tactica, the art of drawing up soldiers in array. Likewise, the Tactica, an early 10th-century handbook said to have been written under the supervision of the Byzantine emperor Leo VI the Wise, dealt with formations as well as weapons and the ways of fighting with them.

The term tactics fell into disuse during the European Middle Ages. It reappeared only toward the end of the 17th century, when “Tacticks” was used by the English encyclopaedist John Harris to mean 'the Art of Disposing any Number of Men into a proposed form of Battle...'"

From these three examples, it is clear that tactics are about use and disposition of forces or capabilities during engagements. Goals are entirely different. Tactics are the methods by which leaders achieve goals. 

How Did This Happen?


I was not a fly on the wall when the MITRE team designed ATT&CK. Perhaps the MITRE team fixated on the phrase "tactics, techniques, and procedures," or "TTPs," again derived from military examples, when they were designing ATT&CK? TTPs became a hot term during the 2000s as incident responders with military experience drew on that language when developing concepts like indicators of compromise. That fixation might have led MITRE to use "tactics" for their top-level structure.

It would have made more sense for MITRE to have just said "goal" or "objective," but "GTP" isn't recognized by the digital defender world.

It's Not Just the Military


Some readers might think "ATT&CK isn't a military tool, so your military examples don't apply." I use the military references to show that the word tactic has military origins, like the word "strategy," from the Greek strategos (plural strategoi; Greek: στρατηγός, pl. στρατηγοί; Doric Greek: στραταγός, stratagos), meaning "army leader."

That said, I would be surprised to see the word tactics used as "goals" anywhere else. For example, none of these examples from the non-military world involve tactics as goals:

This Harvard Business Review article defines tactics as "the day-to-day and month-to-month decisions required to manage a business." 

This guide for ice hockey coaches mentions tactics like "give and go’s, crossing attacks, cycling the puck, chipping the puck to space and overlapping."

This guide for small business marketing lists tactics like advertising, grass-roots efforts, trade shows, website optimization, and email and social marketing.

In the civilian world, tactics are how leaders achieve goals or objectives.

Conclusion


In the big picture, it doesn't matter that much to ATT&CK content that MITRE uses the term "tactics" when it really means "goals." 

However, I wrote this article because the ATT&CK design and philosophy emphasizes a common language, e.g., ATT&CK "succinctly organizes adversary tactics and techniques along with providing a common language used across security disciplines."

If we want to share a common language, it's important that we recognize that the ATT&CK use of the term "tactics" is an anomaly. Perhaps a future edition will change the terminology, but I doubt it given how entrenched it is at this point.

The FBI Intrusion Notification Program

The FBI intrusion notification program is one of the most important developments in cyber security during the last 15 years. 

This program achieved mainstream recognition on 24 March 2014 when Ellen Nakashima reported on it for the Washington Post in her story U.S. notified 3,000 companies in 2013 about cyberattacks.

The story noted the following:

"Federal agents notified more than 3,000 U.S. companies last year that their computer systems had been hacked, White House officials have told industry executives, marking the first time the government has revealed how often it tipped off the private sector to cyberintrusions...

About 2,000 of the notifications were made in person or by phone by the FBI, which has 1,000 people dedicated to cybersecurity investigations among 56 field offices and its headquarters. Some of the notifications were made to the same company for separate intrusions, officials said. Although in-person visits are preferred, resource constraints limit the bureau’s ability to do them all that way, former officials said...

Officials with the Secret Service, an agency of the Department of Homeland Security that investigates financially motivated cybercrimes, said that they notified companies in 590 criminal cases opened last year, officials said. Some cases involved more than one company."

The reason this program is so important is that it shattered the delusion that some executives used to reassure themselves. When the FBI visits your headquarters to tell you that you are compromised, you can't pretend that intrusions are "someone else's problem."

It may be difficult for some readers to appreciate how prevalent this mindset was, from the beginnings of IT to about the year 2010.

I do not know exactly when the FBI began notifying victims, but I believe the mid-2000s is a safe date. I can personally attest to the program around that time.

I was reminded of the importance of this program by Andy Greenberg's new story The FBI Botched Its DNC Hack Warning in 2016—but Says It Won’t Next Time.

I strongly disagree with this "botched" characterization. Andy writes:

"[S]omehow this breach [of the Democratic National Committee] had come as a terrible surprise—despite an FBI agent's warning to [IT staffer Yared] Tamene of potential Russian hacking over a series of phone calls that had begun fully nine months earlier.

The FBI agent's warnings had 'never used alarming language,' Tamene would tell the Senate committee, and never reached higher than the DNC's IT director, who dismissed them after a cursory search of the network for signs of foul play."

As with all intrusions, criminal responsibility lies with the intruder. However, I do not see why the FBI is supposed to carry the blame for how this intrusion unfolded. 

According to investigatory documents and this Crowdstrike blog post on their involvement, at least seven months passed between the time the FBI notified the DNC (sometime in September 2015) and the time the DNC contacted Crowdstrike (30 April 2016). That is ridiculous.

If I received a call from the FBI even hinting at a Russian presence in my network, I would be on the phone with a professional incident response firm right after I briefed the CEO about the call.

I'm glad the FBI continues to improve its victim notification procedures, but it doesn't make much of a difference if the individuals running IT and the organization are negligent, either through incompetence or inaction.

Note: Fixed year typo.

If You Can’t Patch Your Email Server, You Should Not Be Running It

CVE-2020-0688 Scan Results, per Rapid7

tl;dr -- it's the title of the post: "If You Can't Patch Your Email Server, You Should Not Be Running It."

I read a disturbing story today with the following news:

"Starting March 24, Rapid7 used its Project Sonar internet-wide survey tool to discover all publicly-facing Exchange servers on the Internet and the numbers are grim.

As they found, 'at least 357,629 (82.5%) of the 433,464 Exchange servers' are still vulnerable to attacks that would exploit the CVE-2020-0688 vulnerability.

To make matters even worse, some of the servers that were tagged by Rapid7 as being safe against attacks might still be vulnerable given that 'the related Microsoft update wasn’t always updating the build number.'

Furthermore, 'there are over 31,000 Exchange 2010 servers that have not been updated since 2012,' as the Rapid7 researchers observed. 'There are nearly 800 Exchange 2010 servers that have never been updated.'

They also found 10,731 Exchange 2007 servers and more than 166,321 Exchange 2010 ones, with the former already running End of Support (EoS) software that hasn't received any security updates since 2017 and the latter reaching EoS in October 2020."
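The quoted figures are internally consistent; a quick back-of-the-envelope check of the Rapid7 numbers confirms the percentage:

```python
# Verify the Rapid7 claim quoted above: 357,629 of 433,464
# Internet-facing Exchange servers were still vulnerable to CVE-2020-0688.

vulnerable = 357_629
total = 433_464

share = vulnerable / total
print(f"{share:.1%}")  # 82.5%
```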

In case you were wondering, threat actors have already been exploiting these flaws for weeks, if not months.

Email is one of the most sensitive and important systems upon which organizations of all shapes and sizes rely, if not the most important. Email servers are, by virtue of their function, inherently exposed to the Internet, meaning they are within the range of every targeted or opportunistic intruder, worldwide.

In this particular case, unpatched servers are also vulnerable to any actor who can download and update Metasploit, which is virtually 100% of them.

It is the height of negligence to run such an important system in an unpatched state, when there are much better alternatives -- namely, outsourcing your email to a competent provider, like Google, Microsoft, or several others.

I expect some readers are saying "I would never put my email in the hands of those big companies!" That's fine, and I know several highly competent individuals who run their own email infrastructure. The problem is that they represent the small fraction of individuals and organizations who can do so. Even being extremely generous with the numbers, it appears that less than 20%, and probably less than 15% according to other estimates, can even keep their Exchange servers patched, let alone properly configured.

If you think it's still worth the risk, and your organization isn't able to patch, because you want to avoid megacorp email providers or government access to your email, you've made a critical miscalculation. You've essentially decided that it's more important for you to keep your email out of megacorp or government hands than it is to keep it from targeted or opportunistic intruders across the Internet.

Incidentally, you've made another mistake. Those same governments you fear, at least many of them, will just leverage Metasploit to break into your janky email server anyway.

The bottom line is that unless your organization is willing to commit the resources, attention, and expertise to maintaining a properly configured and patched email system, you should outsource it. Otherwise you are being negligent with not only your organization's information, but the information of anyone with whom you exchange emails.

When You Should Blog and When You Should Tweet


I saw my like-minded, friend-that-I've-never-met Andrew Thompson Tweet a poll, posted above.

I was about to reply with the following Tweet:

"If I'm struggling to figure out how to capture a thought in just 1 Tweet, that's a sign that a blog post might be appropriate. I only use a thread, and no more than 2, and hardly ever 3 (good Lord), when I know I've got nothing more to say. "1/10," "1/n," etc. are not for me."

Then I realized I had something more to say, namely, other reasons blog posts are better than Tweets. For the briefest moment I considered adding a second Tweet, making, horror of horrors, a THREAD, and then I realized I would be breaking my own guidance.

Here are three reasons to consider blogging over Tweeting.

1. If you find yourself trying to pack your thoughts into a 280 character limit, then you should write a blog post. You might have a good idea, and instead of expressing it properly, you're falling into the trap of letting the medium define the message, aka the PowerPoint trap. I learned this from Edward Tufte: let the message define the medium, not the other way around.

2. Twitter threads lose the elegance and readability of the English language as our ancestors created it, for our benefit. They gave us structures, like sentences, lists, indentation, paragraphs, chapters, and so on. What does Twitter provide? 280 character chunks. Sure, you can apply feeble "1/n" annotations, but you've lost all that structure and readability, and for what?

3. In the event you're writing a Tweet thread that's really worth reading, writing it via Twitter virtually guarantees that it's lost to history. Twitter is an abomination for citation, search, and future reference. For delivering content to current researchers and future generations, the hierarchy is the following, from lowest to highest:

  • "Transient," "bite-sized" social media, e.g., Twitter, Instagram, Facebook, etc. posts
  • Blog posts
  • Whitepapers
  • Academic papers in "electronic" journals
  • Electronic (e.g., Kindle) only formatted books
  • Print books (that may be stand-alone works, or which may contain journal articles)

Print books are the apex communication medium because we have such references going back hundreds of years. Hundreds of years from now, I doubt the first five formats above will be easily accessible, or accessible at all. However, in a library or personal collection somewhere, printed books will endure.

The bottom line is that if you think what you're writing is important enough to start a "1/n" Tweet thread, you've already demonstrated that Twitter is the wrong medium.

The natural follow-on might be: what is Twitter good for? Here are my suggestions:

  • Announcing a link to another, in-depth news resource, like a news article, blog post, whitepaper, etc.
  • Offering a comment on an in-depth news resource, or replying to another person's announcement.
  • Asking a poll question.
  • Asking for help on a topic.
  • Engaging in a short exchange with another user. Long exchanges on hot topics typically devolve into a confusing mess of messages and replies, the delivery of which Twitter has never really managed to figure out.

I understand the seduction of Twitter. I use it every day. However, when it really matters, blogging is preferable, followed by the other media I listed in point 3 above.

Update 0930 ET 27 Mar 2020: I forgot to mention that in extenuating circumstances, like live-Tweeting an emergency, Twitter threads on significant matters are fine because the urgency of the situation and the convenience or plain logistical limitations of the situation make Twitter indispensable. I'm less thrilled by live-Tweeting in conferences, although I'm guilty of it in the past. I'd prefer a thoughtful wrap-up post following the event, which I did a lot before Twitter became popular.

COVID-19 Phishing Tests: WRONG

Malware Jake Tweeted a poll last night which asked the following:

"I have an interesting ethical quandary. Is it ethically okay to use COVID-19 themed phishing emails for assessments and user awareness training right now? Please read the thread before responding and RT for visibility. 1/"

Ultimately he decided:

"My gut feeling is to not use COVID-19 themed emails in assessments/training, but to TELL users to expect them, though I understand even that might discourage consumption of legitimate information, endangering public health. 6/"

I responded by saying this was the right answer.

Thankfully there were many people who agreed, despite the fact that voting itself was skewed towards the "yes" answer.

There were an uncomfortable number of responses to the Tweet that said there's nothing wrong with red teams phishing users with COVID-19 emails. For example:

"Do criminals abide by ethics? Nope. Neither should testing."

"Yes. If it's in scope for the badguys [sic], it's in scope for you."

"Attackers will use it. So I think it is fair game."

Those are the wrong answers. As a few others outlined well in their responses, the fact that a criminal or intruder employs a tactic does not mean that it's appropriate for an offensive security team to use it too.

I could imagine several COVID-19 phishing lures that could target school districts and probably cause high double-digit click-through rates. What's the point of that? For a "community" that supposedly considers fear, uncertainty, and doubt (FUD) to be anathema, why introduce FUD via a phishing test?

I've grown increasingly concerned over the past few years that there's a "cult of the offensive" that justifies its activities with the rationale that "intruders do it, so we should too." This is directly observable in the replies to Jake's Tweet. It's a thin veneer that covers bad behavior; the small benefit accrued to high-end, "1%" security shops is vastly outweighed by the massive costs suffered by the vast majority of networked global organizations.

This is a selfish, insular mindset that is reinforced by the echo chamber of the so-called "infosec community." This "tribe" is detached from the concerns and ethics of the larger society. It tells itself that what it is doing is right, oblivious or unconcerned with the costs imposed on the organizations they are supposedly "protecting" with their backwards actions.

We need people with feet in both worlds to tell this group that their approach is not welcome in the broader human community, because the costs it imposes vastly outweigh the benefits.

I've written here about ethics before, usually in connection with the only real value I saw in the CISSP -- its code of ethics. Reviewing the "code," as it appears now, shows the following:

"There are only four mandatory canons in the Code. By necessity, such high-level guidance is not intended to be a substitute for the ethical judgment of the professional.

Code of Ethics Preamble:

The safety and welfare of society and the common good, duty to our principals, and to each other, requires that we adhere, and be seen to adhere, to the highest ethical standards of behavior.
Therefore, strict adherence to this Code is a condition of certification.

Code of Ethics Canons:

Protect society, the common good, necessary public trust and confidence, and the infrastructure.
Act honorably, honestly, justly, responsibly, and legally.
Provide diligent and competent service to principals.
Advance and protect the profession."

This is almost worthless. The only actionable item in the "code" is the word "legally," implying that if a CISSP holder was convicted of a crime, he or she could lose their certification. Everything else is subject to interpretation.

Contrast that with the USAFA Code of Conduct:

"We will not lie, steal, or cheat, nor tolerate among us anyone who does."

While it still requires an Honor Board to determine if a cadet has lied, stolen, cheated, or tolerated, there's much less gray in this statement of the Academy's ethics. Is it perfect? No. Is it more actionable than the CISSP's version? Absolutely.

I don't have "solutions" to the ethical bankruptcy manifesting in some people practicing what they consider to be "information security." However, this post is a step towards creating red lines that those who are not already hardened in their ways can observe and integrate.

Perhaps at some point we will have an actionable code of ethics that helps newcomers to the field understand how to properly act for the benefit of the human community.

Seven Security Strategies, Summarized

This is the sort of story that starts as a comment on Twitter, then becomes a blog post when I realize I can't fit all the ideas into one or two Tweets. (You know how much I hate Tweet threads, and how I encourage everyone to capture deep thoughts in blog posts!)

In the interest of capturing the thought, and not in the interest of thinking too deeply or comprehensively (at least right now), I offer seven security strategies, summarized.

When I mention the risk equation, I'm talking about the idea that one can conceptually model the risk of some negative event using this "formula": Risk (of something) is the product of some measurements of Vulnerability X Threat X Asset Value, or R = V x T x A.

  1. Denial and/or ignorance. This strategy assumes the risk due to loss is low, because those managing the risk assume that one or more of the elements of the risk equation are zero or almost zero, or they are apathetic to the cost.
  2. Loss acceptance. This strategy may assume the risk due to loss is low, or more likely those managing the risk assume that the cost of risk realization is low. In other words, incidents will occur, but the cost of the incident is acceptable to the organization.
  3. Loss transferal. This strategy may also assume the risk due to loss is low, but in contrast with risk acceptance, the organization believes it can buy an insurance policy which will cover the cost of an incident, and the cost of the policy is cheaper than alternative strategies.
  4. Vulnerability elimination. This strategy focuses on driving the vulnerability element of the risk equation to zero or almost zero, through secure coding, proper configuration, patching, and similar methods.
  5. Threat elimination. This strategy focuses on driving the threat element of the risk equation to zero or almost zero, through deterrence, dissuasion, co-option, bribery, conversion, incarceration, incapacitation, or other methods that change the intent and/or capabilities of threat actors. 
  6. Asset value elimination. This strategy focuses on driving the asset value element of the risk equation to zero or almost zero, through minimizing data or resources that might be valued by adversaries.
  7. Interdiction. This is a hybrid strategy which welcomes contributions from vulnerability elimination, primarily, but is open to assistance from loss transferal, threat elimination, and asset value elimination. Interdiction assumes that prevention eventually fails, but that security teams can detect and respond to incidents post-compromise and pre-breach. In other words, some classes of intruders will indeed compromise an organization, but it is possible to detect and respond to the attack before the adversary completes his mission.
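The product form of the risk equation explains why strategies 4 through 6 work the way they do: because risk is a product, driving any single factor toward zero drives the whole product toward zero. A minimal sketch (the 0-10 scales and sample values are my illustration, not something the strategies specify):

```python
# Risk as the product R = V x T x A, per the "formula" above.
# Vulnerability, threat, and asset value are scored on an assumed 0-10 scale.

def risk(vulnerability: float, threat: float, asset_value: float) -> float:
    """Return risk as the product of the three factors."""
    return vulnerability * threat * asset_value

# Baseline: a vulnerable, targeted, high-value system.
baseline = risk(vulnerability=8, threat=6, asset_value=9)

# Strategy 4 (vulnerability elimination): patching drives V toward zero,
# and the multiplicative structure zeroes the overall risk.
after_patching = risk(vulnerability=0, threat=6, asset_value=9)

print(baseline)        # 432
print(after_patching)  # 0
```

The same structure shows the limits of each single-factor strategy: if you cannot actually reach zero on your chosen factor, residual risk scales with the other two, which is one way to read the argument for the hybrid interdiction strategy.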
As you might expect, I am most closely associated with the interdiction strategy. 

I believe the denial and/or ignorance and loss acceptance strategies are irresponsible.

I believe the loss transferal strategy continues to gain momentum with the growth of cybersecurity breach insurance policies. 

I believe the vulnerability elimination strategy is important but ultimately, on its own, ineffective and historically shown to be impossible. When used in concert with other strategies, it is absolutely helpful.

I believe the threat elimination strategy is generally beyond the scope of private organizations. As the state retains the monopoly on the use of force, usually only law enforcement, military, and sometimes intelligence agencies can truly eliminate or mitigate threats. (Threats are not vulnerabilities.)

I believe asset value elimination is powerful but has not gained the ground I would like to see. This is my "If you can’t protect it, don’t collect it" message. The limitation here is obviously one's raw computing elements. If one were to magically strip down every computing asset into basic operating systems on hardware or cloud infrastructure, the fact that those assets exist and are networked means that any adversary can abuse them for mining cryptocurrencies, or as infrastructure for intrusions, or for any other uses of raw computing power.

Please notice that none of these strategies mention tools, techniques, tactics, or operations. Those are important but below the level of strategy in the conflict hierarchy. I may have more to say on this in the future.

Reference: TaoSecurity News

I started speaking publicly about digital security in 2000. I used to provide this information on my Web site, but since I don't keep that page up-to-date anymore, I decided to publish it here.
  • 2017
    • Mr. Bejtlich led a podcast titled Threat Hunting: Past, Present, and Future, in early July 2017. He interviewed four of the original six GE-CIRT incident handlers. The audio is posted on YouTube. Thank you to Sqrrl for making the reunion possible.
    • Mr. Bejtlich's latest book was inducted into the Cybersecurity Canon.
    • Mr. Bejtlich is doing limited security consulting. See this blog post for details.
  • 2016
    • Mr. Bejtlich organized and hosted the Management track (now "Executive track") at the 7th annual Mandiant MIRCon (now "FireEye Cyber Defense Summit") on 29-30 November 2016.
    • Mr. Bejtlich delivered the keynote to the 2016 Air Force Senior Leaders Orientation Conference at Joint Base Andrews on 29 July 2016.
    • Mr. Bejtlich delivered the keynote to the FireEye Cyber Defense Live Tokyo event in Tokyo on 12 July 2016.
    • Mr. Bejtlich delivered the keynote to the New Zealand Cyber Security Summit in Auckland on 6 May 2016.
    • Mr. Bejtlich delivered the keynote to the Lexpo Summit in Amsterdam on 21 April 2016. Video posted here.
    • Mr. Bejtlich discussed cyber security campaigns at the 2016 War Studies Cumberland Lodge Conference near London on 30 March 2016.
    • Mr. Bejtlich offered a guest lecture to the Wilson Center Congressional Cybersecurity Lab on 5 February 2016.
    • Mr. Bejtlich delivered the keynote to the SANS Cyber Threat Intelligence Summit on 4 February 2016. Slides and video available.
  • 2015
  • 2014
  • 2013
    • Mr. Bejtlich taught Network Security Monitoring 101 at Black Hat Seattle 2013: 9-10 December 2013 / Seattle, WA.
    • Mr. Bejtlich offered a guest lecture on digital security at George Washington University on 23 November 2013.
    • Mr. Bejtlich spoke about digital security at the Mid-Atlantic CIO Council on 21 November 2013.
    • Mr. Bejtlich was a panelist at the Brookings Institute on 19 November 2013.
    • Mr. Bejtlich offered several guest lectures on digital security at the Massachusetts Institute of Technology on 18 November 2013.
    • Mr. Bejtlich was a panelist at the Atlantic Council on 15 November 2013.
    • Mr. Bejtlich organized and hosted the Management track at the 4th annual Mandiant MIRCon on 5-6 November 2013.
    • Mr. Bejtlich was a panelist at the Free Thinking Film Festival on 2 November 2013.
    • Mr. Bejtlich offered the keynote at the Cyber Ark user conference on 30 October 2013.
    • Mr. Bejtlich was a panelist at the Indiana University Center for Applied Cybersecurity Research on 21 October 2013.
    • Mr. Bejtlich spoke at the national ISSA conference on 10 October 2013.
    • Mr. Bejtlich was a panelist at the Politico Cyber 7 event on 8 October 2013.
    • Mr. Bejtlich offered the keynote at the BSides August 2013 conference on 14 September 2013.
    • Mr. Bejtlich taught Network Security Monitoring 101 at Black Hat USA 2013: 27-28 and 29-30 July 2013 / Las Vegas, NV.
    • Mr. Bejtlich was a panelist at the Chatham House Cyber Security Conference in London, England on 10 June 2013.
    • Mr. Bejtlich appeared in the documentary Hacked, first available 7 June 2013.
    • Mr. Bejtlich was interviewed at the Center for National Policy, with video archived, on 15 May 2013.
    • Mr. Bejtlich delivered a keynote at the IT Web Security Summit in Johannesburg, South Africa on 8 May 2013.
    • Mr. Bejtlich was a panelist at The George Washington University and US News & World Report Cybersecurity Conference on 26 April 2013.
    • Mr. Bejtlich testified to the House Committee on Foreign Affairs on 21 March 2013.
    • Mr. Bejtlich testified to the House Committee on Homeland Security on 20 March 2013.
    • Mr. Bejtlich testified to the Senate Armed Services Committee on 19 March 2013.
    • Mr. Bejtlich shared his thoughts on the APT1 report with the Federalist Society on 12 March 2013. The conference call was recorded as Cybersecurity And the Chinese Hacker Problem - Podcast.
  • 2012
    • Mr. Bejtlich taught TCP/IP Weapons School 3.0 at Black Hat Abu Dhabi 2012: 3-4 Dec / Abu Dhabi, UAE.
    • Mr. Bejtlich spoke at a Mandiant breakfast event in Calgary, AB on 28 Nov 2012.
    • Mr. Bejtlich spoke at AppSecUSA in Austin, TX on 26 Oct 2012. The talk Incident Response: Security After Compromise is posted as a video (42 min).
    • Mr. Bejtlich organized and hosted the Management track at the 3rd annual Mandiant MIRCon on 17-18 October 2012.
    • Mr. Bejtlich spoke at a SANS event in Baltimore, MD on 5 Oct 2012.
    • Mr. Bejtlich spoke at a Mandiant breakfast event in Dallas, TX on 13 Sep 2012.
    • Mr. Bejtlich taught TCP/IP Weapons School 3.0 at Black Hat USA 2012: 21-22 and 23-24 Jul / Las Vegas, NV.
    • Mr. Bejtlich taught a compressed version of TCP/IP Weapons School 3.0 at a U.S. Cyber Challenge Summer Camp in Ballston, VA on 28 Jun 2012.
    • Mr. Bejtlich participated on a panel titled Hackers vs Executives at the Forrester conference in Las Vegas on 25 May 2012.
    • Mr. Bejtlich spoke at the Cyber Security for Executive Leadership: What Every CEO Should Know event in Raleigh, NC on 11 May 2012.
    • Mr. Bejtlich participated on a panel titled SEC Cyber Security Guidelines: A New Basis for D&O Exposure? at the 8th Annual National Directors & Officers Insurance ExecuSummit in Uncasville, CT on 8 May 2012.
    • Mr. Bejtlich delivered the keynote to the 2012 National Cyber Crime Conference in Norwood, MA on 30 Apr 2012.
    • Mr. Bejtlich spoke at the FOSE conference on a panel discussing new attacks on 4 Apr 2012.
    • Mr. Bejtlich testified to the US-China Economic and Security Review Commission on 26 Mar 2012.
    • Mr. Bejtlich spoke at the Air Force Association CyberFutures conference (audio mp3) on 23 Mar 2012.
    • Mr. Bejtlich delivered the keynote to the IANS Research Mid-Atlantic conference on 21 Mar 2012.
    • Mr. Bejtlich spoke at a Mandiant breakfast event with Secretary Michael Chertoff in New York, NY on 15 Mar 2012.
    • Mr. Bejtlich spoke to the Augusta, GA ISSA chapter on 8 Mar 2012.
    • Mr. Bejtlich participated on a panel about digital threats at the RSA Executive Security Action Forum on 27 Feb 2012.
    • Mr. Bejtlich spoke at a Mandiant breakfast event with Gen (ret.) Michael Hayden in Washington, DC on 22 Feb 2012.
    • Mr. Bejtlich spoke at the ShmooCon Epilogue conference on 30 Jan 2012.
    • Mr. Bejtlich spoke at a Mandiant breakfast event with Secretary Michael Chertoff in Houston, TX on 12 Jan 2012.
  • 2011
  • 2010
  • 2009
  • 2008
  • 2007
    • Mr. Bejtlich offered a guest lecture on digital security at George Mason University on 29 November 2007.
    • Network Security Operations: 27-29 August 2007 / public 3 day class / Chicago, IL
    • Mr. Bejtlich spoke to the Chicago Electronic Crimes Task Force and the Chicago Snort Users Group on 30 and 29 August 2007, respectively.
    • Mr. Bejtlich taught Network Security Operations on 21-23 August 2007 / Cincinnati, OH
    • Mr. Bejtlich taught TCP/IP Weapons School (layers 4-7) at USENIX Security 2007: 6-7 August 2007 / Boston, MA.
    • Mr. Bejtlich taught TCP/IP Weapons School at Black Hat USA 2007: 28-29 and 30-31 July 2007 / Caesars Palace, Las Vegas, NV.
    • USENIX 2007: 20-22 June 2007 / Network Security Monitoring and TCP/IP Weapons School (Layers 2-3) tutorials / Santa Clara, CA
    • Mr. Bejtlich briefed GFIRST 2007: 25-26 June 2007 / Network Incident Response and Forensics (two half-day tutorials) and Traditional IDS Should Be Dead conference presentation / Orlando, FL
    • Mr. Bejtlich taught TCP/IP Weapons School (Layers 2-3) and briefed Open Source Network Forensics at Techno Security 2007: 5-7 June 2007 / Myrtle Beach, SC.
    • Mr. Bejtlich briefed Open Source Network Forensics at ISS World Spring 2007: 31 May 2007 / Washington, DC
    • Mr. Bejtlich briefed Network Incident Response and Forensics at AusCERT 2007: 23-24 May 2007 / Gold Coast, Australia.
    • Mr. Bejtlich taught Network Security Monitoring: 25 May 2007 / Sydney, Australia.
    • Mr. Bejtlich briefed at CONFIDENCE 2007: 13 May 2007 / Krakow, Poland.
    • Mr. Bejtlich briefed at ShmooCon: 24 March 2007 / Washington, DC; video here.
  • 2006
  • 2005
    • Mr. Bejtlich presented three full-day tutorials at USENIX LISA 2005 in San Diego, CA, from 6-8 December 2005. He taught network security monitoring, incident response, and forensics.
    • Mr. Bejtlich spoke at the Cisco Fall 2005 System Engineering Security Virtual Team Meeting in San Jose, CA on 10 October 2005.
    • Mr. Bejtlich spoke at the Net Optics Think Tank at the Hilton Santa Clara in Santa Clara, CA on 21 September 2005. He discussed network forensics, with a preview of material in his next book Real Digital Forensics.
    • Mr. Bejtlich taught network security monitoring to security analysts from the Pentagon with Special Ops Security on 23 and 24 August 2005 in Rosslyn, VA.
    • Mr. Bejtlich spoke at the InfraGard 2005 National Conference on 9 August 05 in Washington, DC on the basics of network forensics.
    • Mr. Bejtlich taught a one day course on network incident response, with his forensics book as the background material, at USENIX Security 05 on 1 August 2005 in Baltimore, MD.
    • Mr. Bejtlich taught a one day course on network security monitoring, with his NSM book as the background material, at USENIX Security 05 on 31 July 2005 in Baltimore, MD.
    • Mr. Bejtlich offered a guest lecture on digital security at George Washington University on 23 June 2005.
    • Mr. Bejtlich spoke at the Techno Security 2005 conference on 13 June 2005 in Myrtle Beach, SC. He was invited by Tenable Security to appear at their evening social event.
    • Mr. Bejtlich spoke at the Net Optics Think Tank on 18 May 2005 in Sunnyvale, CA.
    • Mr. Bejtlich presented Keeping FreeBSD Up-To-Date and More Tools for Network Security Monitoring at BSDCan 2005 on 13 May 2005.
    • Mr. Bejtlich spoke to the Pentagon Security Forum on 19 April 2005.
    • Mr. Bejtlich taught a one day course on network security monitoring, with his book as the background material, at USENIX 05 on 14 April 2005 in Anaheim, CA.
    • Mr. Bejtlich spoke to the Government Forum of Incident Response and Security Teams (GFIRST) on 5 April 2005 in Orlando, FL.
    • Mr. Bejtlich spoke to the Information Systems Security Association of Northern Virginia (ISSA-NoVA) on 17 February 2005 in Reston, VA.
    • Mr. Bejtlich spoke at the 2005 DoD Cybercrime Conference on 13 January 2005 in Palm Harbor, FL.
  • 2004
    • Mr. Bejtlich spoke to the DC Systems Administrators Guild (DC-SAGE) on 21 October 2004 about Sguil.
    • Mr. Bejtlich spoke to the DC Linux Users Group on 15 September 2004 about Sguil.
    • Mr. Bejtlich spoke to the High Technology Crime Investigation Association International Conference and Expo 2004 on 13 September 2004 in Washington, DC about Sguil.
    • Mr. Bejtlich taught a one day course on network security monitoring, with his first book as the background material, at USENIX Security 04 on 9 August 2004 in San Diego.
    • Mr. Bejtlich spoke to the DC Snort User's Group on 24 Jun 2004 about Sguil.
    • Mr. Bejtlich presented Network Security Monitoring with Sguil (.pdf) at BSDCan on 14 May 2004.
    • Mr. Bejtlich spoke to the SANS Local Mentor program in northern Virginia for two hours on 11 May 2004 about NSM using Sguil. Joe Bowling invited him.
    • Mr. Bejtlich gave a lightning talk demo of Sguil at CanSecWest 04 on 22 April 2004.
  • 2003
    • Mr. Bejtlich spoke to ISSA-CT about network security monitoring on 9 December 2003.
    • Mr. Bejtlich taught Foundstone's Ultimate Hacking Expert class at Black Hat Federal 2003 in Tyson's Corner, 29-30 September 2003.
    • Mr. Bejtlich recorded a second webcast on network security monitoring for SearchSecurity.com. He posted the slides here.
    • Mr. Bejtlich taught the first day of Foundstone's Ultimate Hacking Expert class at Black Hat USA 2003 Training in Las Vegas on 28 July 2003.
    • Mr. Bejtlich spoke on 21 July 2003 in Washington, DC at the SANS NIAL conference.
    • Mr. Bejtlich discussed digital security in Toronto on 13 March 2003 and in Washington, DC on Tuesday, 25 March 2003 at the request of Watchguard.
    • Mr. Bejtlich taught days four, five, and six of the SANS intrusion detection track in San Antonio, Texas from 28-30 January 2003.
  • 2002
    • Mr. Bejtlich recorded a webcast on network security monitoring (PDF slides) with his friend Bamm Visscher for SearchSecurity.com and answered questions submitted by listeners. A SearchSecurity editor commented on the talk as well.
    • Mr. Bejtlich helped teach Foundstone's Ultimate Hacking class at Black Hat USA 2002 Training in Las Vegas on 29-30 July 2002.
    • Mr. Bejtlich taught days one, two, and three of the SANS intrusion detection track in San Antonio, Texas from 15-17 July 2002.
    • Mr. Bejtlich taught day four of the SANS intrusion detection track in Toronto, Ontario on 16 May 2002.
    • On 11 April 2002 Mr. Bejtlich briefed the South Texas ISSA chapter on Snort.
    • Mr. Bejtlich helped teach day four of the SANS intrusion detection track in San Antonio, Texas on 14 March 2002 after Marty Roesch was unable to teach the class.
  • 2000-2001
    • On 24-25 October 2001 Mr. Bejtlich spoke to the Houston InfraGard chapter at their 2001 conference.
    • In August and September 2001 Mr. Bejtlich briefed analysts at the AFCERT on Interpreting Network Traffic.
    • On 19 October 2000 Mr. Bejtlich was invited back to speak at the SANS Network Security 2000 Technical Conference.
    • During 14-16 August 2000 Mr. Bejtlich participated in the Cyber Summit 2000 sponsored by the Air Intelligence Agency. Mr. Bejtlich was a captain in the AFCERT. You will find him in the middle of this picture.
    • In June 2000 Mr. Bejtlich signed a letter protesting the Council of Europe draft treaty on Crime in Cyberspace.
    • In June 2000 Mr. Bejtlich briefed FIRST on third party effects. This predated CAIDA's 2001 USENIX "backscatter" paper.
    • On 25 March 2000 Mr. Bejtlich presented Interpreting Network Traffic: A Network Intrusion Detector's Look at Suspicious Events at the SANS 2000 Technical Conference.

Reference: TaoSecurity Research

I started publishing my thoughts and findings on digital security in 1999. I used to provide this information on my Web site, but since I don't keep that page up-to-date anymore, I decided to publish it here.

2015 and later:

  • Please visit Academia.edu for Mr. Bejtlich's most recent research.
2014 and earlier:

Reference: TaoSecurity Press

I started appearing in media reports in 2000. I used to provide this information on my Web site, but since I don't keep that page up-to-date anymore, I decided to publish it here.

Know Your Limitations

At the end of the 1973 Clint Eastwood movie Magnum Force, after Dirty Harry watches his corrupt police captain explode in a car, he says "a man's got to know his limitations."

I thought of this quote today as the debate rages about compromising municipalities and other information technology-constrained yet personal information-rich organizations.

Several years ago I wrote If You Can't Protect It, Don't Collect It. I argued that if you are unable to defend personal information, then you should not gather and store it.

In a similar spirit, here I argue that if you are unable to securely operate information technology that matters, then you should not be supporting that IT.

You should outsource it to a trustworthy cloud provider, and concentrate on managing secure access to those services.

If you cannot outsource it, and you remain incapable of defending it natively, then you should integrate a capable managed security provider.

It's clear to me that a large portion of those running PI-processing IT are simply not capable of doing so in a secure manner, and they do not bear the full cost of PI breaches.

They have too many assets, with too many vulnerabilities, and are targeted by too many threat actors.

These organizations lack sufficient people, processes, and technologies to mitigate the risk.

They have successes, but they are generally due to the heroics of individual IT and security professionals, who often feel out-gunned by their adversaries.

If you can't patch a two-year-old vulnerability prior to exploitation, or detect an intrusion and respond to the adversary before he completes his mission, then you are demonstrating that you need to change your entire approach to information technology.

The security industry seems to think that throwing more people at the problem is the answer, yet year after year we read about several million job openings that remain unfilled. This is a sign that we need to change the way we are doing business. The fact is that those organizations that cannot defend themselves need to recognize their limitations and change their game.

I recognize that outsourcing is not a panacea. Note that I emphasized "IT" in my recommendation. I do not see how one could outsource the critical technology running on-premise in the industrial control system (ICS) world, for example. Those operations may need to rely more on outsourced security providers, if they cannot sufficiently detect and respond to intrusions using in-house capabilities.

Remember that the vast majority of organizations do not exist to run IT. They run IT to support their lines of business. Many older organizations have indeed been migrating legacy applications to the cloud, and most new organizations are cloud-native. These are hopeful signs, as the older organizations could potentially "age-out" over time.

This puts a burden on the cloud providers, who fall into the "managed service provider" category that I wrote about in my recent Corelight blog. However, the more trustworthy providers have the people, processes, and technology in place to handle their responsibilities in a more secure way than many organizations who are struggling with on-premise legacy IT.

Everyone's got to know their limitations.

Thoughts on Cloud Security

Recently I've been reading about cloud security and security with respect to DevOps. I'll say more about the excellent book I'm reading, but I had a moment of déjà vu during one section.

The book described how cloud security is a big change from enterprise security because it relies less on IP-address-centric controls and more on users and groups. The book talked about creating security groups, and adding users to those groups in order to control their access and capabilities.

As I read that passage, it reminded me of a time long ago, in the late 1990s, when I was studying for the MCSE, then called the Microsoft Certified Systems Engineer. I read the book at left, Windows NT Security Handbook, published in 1996 by Tom Sheldon. It described the exact same security process of creating security groups and adding users. This was core to the new NT 4 role based access control (RBAC) implementation.
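The group-based model from that era can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor's implementation; all names (Directory, can_access, "Payroll-Admins") are hypothetical.

```python
# Minimal sketch of group-based access control, in the spirit of the
# NT 4 model described above: create groups, add users to groups, and
# grant a resource to groups rather than to individual users.

class Directory:
    def __init__(self):
        self.groups = {}  # group name -> set of member usernames

    def create_group(self, group):
        self.groups.setdefault(group, set())

    def add_user(self, user, group):
        # Membership in the group, not user identity, carries the privilege.
        self.groups.setdefault(group, set()).add(user)

    def is_member(self, user, group):
        return user in self.groups.get(group, set())

def can_access(directory, user, resource_acl):
    """A resource is reachable if the user belongs to any group on its ACL."""
    return any(directory.is_member(user, g) for g in resource_acl)

d = Directory()
d.create_group("Payroll-Admins")
d.add_user("alice", "Payroll-Admins")

print(can_access(d, "alice", ["Payroll-Admins"]))  # True
print(can_access(d, "bob", ["Payroll-Admins"]))    # False
```

The appeal of the model, then and now, is that access decisions reference group membership, so revoking or granting capability is a one-line change to the directory rather than an edit to every resource.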

Now, fast forward a few years, or all the way to today, and consider the security challenges facing the majority of legacy enterprises: securing Windows assets and the data they store and access. How could this wonderful security model, based on decades of experience (from the 1960s and 1970s no less), have failed to work in operational environments?

There are many reasons one could cite, but I think the following are at least worthy of mention.

The systems enforcing the security model are exposed to intruders.

Furthermore:

Intruders are generally able to gain code execution on systems participating in the security model.

Finally:

Intruders have access to the network traffic which partially contains elements of the security model.

From these weaknesses, a large portion of the security countermeasures of the last two decades have been derived as compensating controls and visibility requirements.

The question then becomes:

Does this change with the cloud?

In brief, I believe the answer is largely "yes," thankfully. Generally, the systems upon which the security model is being enforced are not able to access the enforcement mechanism, thanks to the wonders of virtualization.

Should an intruder find a way to escape from their restricted cloud platform and gain hypervisor or management network access, then they find themselves in a situation similar to the average Windows domain network.

This realization puts a heavy burden on the cloud infrastructure operators. The major players are likely able to acquire and apply the expertise and resources to make their infrastructure far more resilient and survivable than their enterprise counterparts.

The weakness will likely be their personnel.

Once the compute and network components are sufficiently robust from externally sourced compromise, then internal threats become the next most cost-effective and return-producing vectors for dedicated intruders.

Is there anything users can do as they hand their compute and data assets to cloud operators?

I suggest four moves.

First, small- to mid-sized cloud infrastructure users will likely have to piggyback or free-ride on the initiatives and influence of the largest cloud customers, who have the clout and hopefully the expertise to hold the cloud operators responsible for the security of everyone's data.

Second, lawmakers may also need improved whistleblower protection for cloud employees who feel threatened by revealing material weaknesses they encounter while doing their jobs.

Third, government regulators will have to ensure no cloud provider assumes a monopoly, and no two providers assume a duopoly. We may end up with three major players and a smattering of smaller ones, as is the case with many mature industries.

Fourth, users should use every means at their disposal to select cloud operators not only on their compute features, but on their security and visibility features. The more logging and visibility exposed by the cloud provider, the better. I am excited by new features like the Azure network tap and hope to see equivalent features in other cloud infrastructure.

Remember that security has two main functions: planning/resistance, to try to stop bad things from happening, and detection/response, to handle the failures that inevitably happen. "Prevention eventually fails" is one of my long-time mantras. We don't want prevention to fail silently in the cloud. We need ways to know that failure is happening so that we can plan and implement new resistance mechanisms, and then validate their effectiveness via detection and response.

Update: I forgot to mention that the material above assumed that the cloud users and operators made no unintentional configuration mistakes. If users or operators introduce exposures or vulnerabilities, then those will be the weaknesses that intruders exploit. We've already seen a lot of this happening and it appears to be the most common problem. Procedures and tools which constantly assess cloud configurations for exposures and vulnerabilities due to misconfiguration or poor practices are a fifth move which all involved should make.
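A configuration-assessment tool of the kind described above can be as simple as a loop over resource settings looking for known-bad combinations. This is a hypothetical sketch: the field names (public_read, encrypted, access_logging) are invented for illustration and are not tied to any real cloud provider's API.

```python
# Hypothetical sketch of a cloud configuration-assessment check:
# scan storage-bucket settings for common exposures caused by
# misconfiguration, and report each finding.

def find_exposures(buckets):
    """Return (bucket, issue) pairs for settings that violate basic hygiene."""
    findings = []
    for b in buckets:
        if b.get("public_read"):
            findings.append((b["name"], "publicly readable"))
        if not b.get("encrypted"):
            findings.append((b["name"], "unencrypted at rest"))
        if not b.get("access_logging"):
            findings.append((b["name"], "no access logging"))
    return findings

config = [
    {"name": "payroll-data", "public_read": True,
     "encrypted": True, "access_logging": False},
    {"name": "static-site", "public_read": True,
     "encrypted": True, "access_logging": True},
]

for name, issue in find_exposures(config):
    print(f"{name}: {issue}")
```

Run continuously against live configuration rather than once at deployment, a check like this catches the drift and operator error that, as noted above, appear to be the most common cloud weaknesses.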

A corollary is that complexity can drive problems. When the cloud infrastructure offers too many knobs to turn, then it's likely the users and operators will believe they are taking one action when in reality they are implementing another.

Forcing the Adversary to Pursue Insider Theft

Jack Crook pointed me toward a story by Christopher Burgess about intellectual property theft by "Hongjin Tan, a 35 year old Chinese national and U.S. legal permanent resident... [who] was arrested on December 20 and charged with theft of trade secrets. Tan is alleged to have stolen the trade secrets from his employer, a U.S. petroleum company," according to the criminal complaint filed by the US DoJ.

Tan's former employer and the FBI allege that Tan "downloaded restricted files to a personal thumb drive." I could not tell from the complaint if Tan downloaded the files at work or at home, but the thumb drive ended up at Tan's home. His employer asked Tan to bring it to their office, which Tan did. However, he had deleted all the files from the drive. Tan's employer recovered the files using commercially available forensic software.

This incident, by definition, involves an "insider threat." Tan was an employee who appears to have copied information that was outside the scope of his work responsibilities, resigned from his employer, and was planning to return to China to work for a competitor, having delivered his former employer's intellectual property.

When I started GE-CIRT in 2008 (officially "initial operating capability" on 1 January 2009), one of the strategies we pursued involved insider threats. I've written about insiders on this blog before but I couldn't find a description of the strategy we implemented via GE-CIRT.

We sought to make digital intrusions more expensive than physical intrusions.

In other words, we wanted to make it easier for the adversary to accomplish his mission using insiders. We wanted to make it more difficult for the adversary to accomplish his mission using our network.

In a cynical sense, this makes security someone else's problem. Suddenly the physical security team is dealing with the worst of the worst!

This is a win for everyone, however. Consider the many advantages the physical security team has over the digital security team.

The physical security team can work with human resources during the hiring process. HR can run background checks and identify suspicious job applicants prior to granting employment and access.

Employees are far more exposed than remote intruders. Employees, even under cover, expose their appearance, likely residence, and personalities to the company and its workers.

Employees can be subject to far more intensive monitoring than remote intruders. Employee endpoints can be instrumented. Employee workspaces are instrumented via access cards, cameras at entry and exit points, and other measures.

Employers can cooperate with law enforcement to investigate and prosecute employees. They can control and deter theft and other activities.

In brief, insider theft, like all "close access" activities, is incredibly risky for the adversary. It is a win for everyone when the adversary must resort to using insiders to accomplish their mission. Digital and physical security must cooperate to leverage these advantages, while collaborating with human resources, legal, information technology, and business lines to wring the maximum results from this advantage.