
Cyber Security: Three Parts Art, One Part Science

As I reflect upon my almost 40 years as a cyber security professional, I think of the many instances where the basic tenets of cyber security—the ones we assume everyone understands the same way—require a lot of additional explanation. For example, what is a vulnerability assessment? If five cyber professionals are sitting around a table discussing this question, you will end up with seven or eight answers. One will say that a vulnerability assessment is vulnerability scanning only. Another will say an assessment is much bigger than scanning, and addresses ethical hacking and internal security testing. Another will say that it is a passive review of policies and controls. All are correct in some form, but the answer really depends on the requirements or criteria you are trying to achieve. It also depends on the skills and experience of the risk owner, auditor, or assessor. Is your head spinning yet? I know mine is! Hence the “three parts art.”

There is quite a bit of subjectivity in the cyber security business. One auditor will look at evidence and agree you are in compliance; another will say you are not. If you are going to protect sensitive information, do you encrypt it, obfuscate it, or segment it off and place it behind very tight identification and access controls before allowing users to access the data? Yes. As we advise our client base, it is essential that we have all the context necessary to make good risk-based decisions and recommendations.

Let’s talk about Connection’s artistic methodology. We start with a canvas that holds the core components of cyber security: protection, detection, and reaction. By addressing each of these three pillars, we ensure a complete conversation about how people, process, and technology work together to form a comprehensive risk strategy.


Protection:

People
Users understand threat and risk, and know what role they play in the protection strategy. For example, if you see something, say something. Don’t let someone tailgate in behind you through a badge-checked entry. And don’t even think about shutting off your endpoint anti-virus or firewall.

Process
Policies are established, documented, and socialized. For example, personal laptops should never be connected to the corporate network. Also, don’t send sensitive information to your personal email account so you can work from home.

Technology
Some examples of the barriers used to deter attackers and breaches are edge security with firewalls, intrusion detection and prevention, sandboxing, and advanced threat detection.

Detection:

The mean time to identify an active incident in a network is 197 days. The mean time to contain an incident is 69 days.

People
Incident response teams need to be identified and trained, and all employees need to be trained on the concept of “if you see something, say something.” Detection is a proactive process.

Process
What happens when an alert occurs? Who sees it? What is the documented process for taking action?

Technology
What is in place to ensure you are detecting malicious activity? Is it configured to ignore noise and alert you only to a real event? Will it help you bring that 197-day mean time to detection way down?

Reaction:

People
What happens when an event occurs? Who responds? How do you recover? Does everyone understand their role? Do you war-game to ensure you are prepared for WHEN an incident occurs?

Process
What is the documented process to reduce the kill chain—the mean time to detect and contain—from 69 days to 69 minutes? Do you have a Business Continuity and Disaster Recovery Plan to ensure you can react to a natural disaster, a significant cyber breach such as ransomware or a DDoS attack, or—dare I say it—a pandemic?

Technology
What cyber security consoles have been deployed that allow you to quickly patch a system, change a firewall rule, update a switch ACL or an endpoint policy setting, or track a security incident through the triage process?

All of these things are important to create a comprehensive InfoSec Program. The science is the technology that will help you build a layered, in-depth defense approach. The art is how to assess the threat, define and document the risk, and create a strategy that allows you to manage your cyber risk as it applies to your environment, users, systems, applications, data, customers, supply chain, third party support partners, and business process.

More Art – Are You a Risk Avoider or Risk Transference Expert?

A better way to state that is, “Do you avoid all risk responsibility or do you give your risk responsibility to someone else?” Hint: I don’t believe in risk avoidance or risk transference.

Yes, there is an art to risk management. There is also science if you use, for example, the Carnegie Mellon risk tools. But a good risk owner and manager documents risk, prioritizes it by criticality, turns it into a risk register or roadmap plan, remediates what is necessary, and accepts what is reasonable from a business and cyber security perspective. Oh, and by the way, those same five cyber security professionals we talked about earlier? They have 17 definitions of risk.

As we wrap up this conversation, let’s talk about the importance of selecting a risk framework. It’s kind of like going to a baseball game: the program helps you know the players and the stats. What framework will you pick? Do you paint in watercolors or oils? Are you a National Institute of Standards and Technology (NIST) artist, an International Organization for Standardization (ISO) artist, or have you developed your own framework like the Nardone puzzle chart? I developed that chart several years ago when I was the CTO/CSO of the Commonwealth of Massachusetts. It has been artistically enhanced over the years to incorporate more security components, but it is loosely based on the NIST 800-53 and ISO 27001 standards.

When it comes to selecting a security framework as a CISO, I lean towards the NIST Cyber Security Framework (CSF). This framework is comprehensive, and it provides a scoring model that allows risk owners to measure and target the risk level they believe they need to achieve based on their business model, threat profile, and risk tolerance. It has five functional focus areas: Identify, Protect, Detect, Respond, and Recover. The ISO 27001 framework is also a very solid and frequently used model. Both frameworks can result in a certificate of attestation demonstrating adherence to the standard. Many commercial corporations do an annual ISO 27001 assessment for that very reason. More and more are leaning towards the NIST CSF, especially commercial corporations doing work with the government.

The art in cyber security is in the interpretation of the rules, standards, and requirements that are primarily based on a foundation in science in some form. The more experience one has in the cyber security industry, the more effective the art becomes. As a last thought, keep in mind that Connection’s Technology Solutions Group Security Practice has over 150 years of cyber security expertise on tap to apply to that art.


Troubleshooting NSM Virtualization Problems with Linux and VirtualBox

I spent a chunk of the day troubleshooting a network security monitoring (NSM) problem. I thought I would share the problem and my investigation in the hopes that it might help others. The specifics are probably less important than the general approach.

It began with ja3. You may know ja3 as a set of Zeek scripts developed by the Salesforce engineering team to profile client and server TLS parameters.

I was reviewing Zeek logs captured by my Corelight appliance and by one of my lab sensors running Security Onion. I had coverage of the same endpoint in both sensors.

I noticed that the SO Zeek logs did not have ja3 hashes in the ssl.log entries. Both sensors did have ja3s hashes. My first thought was that SO was somehow misconfigured to not record ja3 hashes. I quickly dismissed that, because it made no sense. Besides, verifying that intuition required me to start troubleshooting near the top of the software stack.

I decided to start at the bottom, or close to the bottom. I had a sinking suspicion that, for some reason, Zeek was only seeing traffic sent from remote systems, and not traffic originating from my network. That would account for the creation of ja3s hashes, for traffic sent by remote systems, but not ja3 hashes, as Zeek was not seeing traffic sent by local clients.
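A quick way to check that theory from the logs themselves (a sketch, not what I actually ran; the field names assume the stock ja3 package, and on older installs zeek-cut is still called bro-cut) is to pull the relevant columns out of ssl.log:

$ cat ssl.log | zeek-cut id.orig_h id.resp_h ja3 ja3s

If the ja3 column shows only "-" values while ja3s is populated, the sensor is hashing server hellos but never seeing the client hellos.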

I was running SO in VirtualBox 6.0.4 on Ubuntu 18.04. I started sniffing TCP network traffic on the SO monitoring interface using Tcpdump. As I feared, it didn't look right. I ran a new capture with filters for ICMP and a remote IP address. On another system I tried pinging the remote IP address. Sure enough, I only saw ICMP echo replies, and no ICMP echoes. Oddly, I also saw doubles and triples of some of the ICMP echo replies. That worried me, because unpredictable behavior like that could indicate some sort of software problem.
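The ICMP test was essentially the following (a sketch; the interface name and address are placeholders rather than my actual lab values):

$ sudo tcpdump -n -i enp0s8 icmp and host 203.0.113.50

From another system, ping 203.0.113.50 and watch the capture. A healthy span should show matched pairs of "ICMP echo request" and "ICMP echo reply" lines; what I describe above showed only replies, some of them duplicated.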

My next step was to "get under" the VM guest and determine if the VM host could see traffic properly. I ran Tcpdump on the Ubuntu 18.04 host on the monitoring interface and repeated my ICMP tests. It saw everything properly. That meant I did not need to bother checking the switch span port that was feeding traffic to the VirtualBox system.

It seemed I had a problem somewhere between the VM host and guest. On the same VM host I was also running an instance of RockNSM. I ran my ICMP tests on the RockNSM VM and, sadly, I got the same one-sided traffic as seen on SO.

Now I was worried. If the problem had only been present in SO, then I could have fixed SO. If the problem was present in both SO and RockNSM, then it had to be with VirtualBox -- and I might not be able to fix it.

I reviewed my configurations in VirtualBox, ensuring that the "Promiscuous Mode" under the Advanced options was set to "Allow All". At this point I worried that there was a bug in VirtualBox. I did some Google searches and reviewed some forum posts, but I did not see anyone reporting issues with sniffing traffic inside VMs. Still, my use case might have been weird enough to not have been reported.
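For reference, the same setting can be checked or changed with VBoxManage (a sketch; the VM name and adapter number are examples, and modifyvm requires the VM to be powered off):

$ VBoxManage showvminfo "SecurityOnion" | grep -i promisc
$ VBoxManage modifyvm "SecurityOnion" --nicpromisc2 allow-all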

I decided to try a different approach. I wondered if running VirtualBox with elevated privileges might make a difference. I did not want to take ownership of my user VMs, so I decided to install a new VM and run it with elevated privileges.

Let me stop here to note that I am breaking one of the rules of troubleshooting. I'm introducing two new variables, when I should have introduced only one. I should have built a new VM but run it with the same user privileges with which I was running the existing VMs.

I decided to install a minimal edition of Debian 9, with VirtualBox running via sudo. When I started the VM and sniffed traffic on the monitoring port, lo and behold, my ICMP tests revealed both sides of the traffic as I had hoped. Unfortunately, from this I erroneously concluded that running VirtualBox with elevated privileges was the answer to my problems.

I took ownership of the SO VM in my elevated VirtualBox session, started it, and performed my ICMP tests. Womp womp. Still broken.

I realized I needed to separate the two variables that I had entangled, so I stopped VirtualBox, and changed ownership of the Debian 9 VM to my user account. I then ran VirtualBox with user privileges, started the Debian 9 VM, and ran my ICMP tests. Success again! Apparently elevated privileges had nothing to do with my problem.

By now I was glad I had not posted anything to any user forums describing my problem and asking for help. There was something about the monitoring interface configurations in both SO and RockNSM that prevented seeing both sides of the traffic (and that caused those weird doubles and triples).

I started my SO VM again and looked at the script that configured the interfaces. I commented out all the entries below the management interface as shown below.

$ cat /etc/network/interfaces

# This configuration was created by the Security Onion setup script.
#
# The original network interface configuration file was backed up to:
# /etc/network/interfaces.bak.
#
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# loopback network interface
auto lo
iface lo inet loopback

# Management network interface
auto enp0s3
iface enp0s3 inet static
  address 192.168.40.76
  gateway 192.168.40.1
  netmask 255.255.255.0
  dns-nameservers 192.168.40.1
  dns-domain localdomain

#auto enp0s8
#iface enp0s8 inet manual
#  up ip link set $IFACE promisc on arp off up
#  down ip link set $IFACE promisc off down
#  post-up ethtool -G $IFACE rx 4096; for i in rx tx sg tso ufo gso gro lro; do ethtool -K $IFACE $i off; done
#  post-up echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6

#auto enp0s9
#iface enp0s9 inet manual
#  up ip link set $IFACE promisc on arp off up
#  down ip link set $IFACE promisc off down
#  post-up ethtool -G $IFACE rx 4096; for i in rx tx sg tso ufo gso gro lro; do ethtool -K $IFACE $i off; done
#  post-up echo 1 > /proc/sys/net/ipv6/conf/$IFACE/disable_ipv6

I rebooted the system and brought the enp0s8 interface up manually using this command:

$ sudo ip link set enp0s8 promisc on arp off up

Fingers crossed, I ran my ICMP sniffing tests, and voila, I saw what I needed -- traffic in both directions, without doubles or triples no less.

So, there appears to be some sort of problem with the way SO and RockNSM set parameters for their monitoring interfaces, at least as far as they interact with VirtualBox 6.0.4 on Ubuntu 18.04. You can see in the network script that SO disables a bunch of NIC options. I imagine one or more of them is the culprit, but I didn't have time to work through them individually.
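If you want to chase it down, one way (a sketch, assuming the monitoring interface is enp0s8) is to confirm what the script changed and then re-apply the disabled offload options one at a time, re-running the ICMP test after each change:

$ sudo ethtool -k enp0s8
$ sudo ethtool -K enp0s8 gro off

The lowercase -k shows the current offload settings; the uppercase -K toggles one of them (gro here), and you repeat with tso, gso, lro, and so on until the capture breaks again.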

I tried taking a look at the network script in RockNSM, but it runs CentOS, and I'll be darned if I can figure out where to look. I'm sure it's there somewhere, but I didn't have the time to track it down.
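If I had to guess (and it is only a guess; RockNSM may manage its interfaces some other way, for example through NetworkManager), the per-interface settings would live in the usual CentOS location:

$ ls /etc/sysconfig/network-scripts/ifcfg-*
$ grep -ril promisc /etc/sysconfig/network-scripts/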

The moral of the story is that I should have immediately checked after installation that both SO and RockNSM were seeing both sides of the traffic I expected them to see. I had taken that for granted for many previous deployments, but something broke recently and I don't know exactly what. My workaround will hopefully hold for now, but I need to take a closer look at the NIC options because I may have introduced another fault.

A second moral is to be careful of changing two or more variables when troubleshooting. When you do that, you might fix a problem but not know which change fixed it.