Monthly Archives: November 2017

The Hay CFP Management Method – Part 2

I’ve had a lot of positive feedback from my first post, which explained how to create the Trello board to track your Call For Paper (CFP) due dates, submissions, and results. In this post, I’ll explain how to create the cards and populate them with the required data to better manage your CFP pipeline.

To start your first card, click the ‘Add a card…’ link in the CFP Open swim lane.

Type in the name of the conference and select the ‘Add’ button.

Once the card is added, click the pencil icon to add more context.

Within the card, place the location of the conference in the ‘Add a more detailed description…’ section and select the Save button. Note: I strongly advise that you follow a consistent location naming convention (e.g. Houston, TX or Houston, TX, USA) to make visualizing the data easier later on.

Now we have to add the CFP due date. Select the ‘Due Date’ button.

When I input the CFP due date, I often use the date prior to the published due date (I also set the time to 11:59pm) as a way to ensure I don’t leave the submission to the absolute last minute.

After the date is selected I fill the card with more CFP-specific information that I find from the event website, Twitter, or a third-party CFP site. I also paste the URL for the CFP submission form into the card so that I don’t have to hunt for it later (it is automatically saved as an attachment). If other information, such as important dates, conference details, or comments about the event, is available I often add it in the ‘Add Comment’ section. Just make sure to hit the ‘Save’ button or the data won’t be added to the card.

Optionally, you can leverage the ‘Labels’ button to assign color-coded tags that denote different things. For example, I’ve used these to denote the audience type; the continent, country, or state/province where the event is located; and whether or not travel and expenses (T&E) are covered. These are really just informational to help you prioritize events.

Click the ‘X’ at the top right-hand side of the card or click somewhere else on the board to close the card.

You now have your first conference CFP card that can be moved through the board calendar pipeline – something that I’ll discuss in my next blog post.

toolsmith #129 – DFIR Redefined: Deeper Functionality for Investigators with R – Part 2

You can have data without information, but you cannot have information without data. ~Daniel Keys Moran

Here we resume our discussion of DFIR Redefined: Deeper Functionality for Investigators with R as begun in Part 1.
First, now that my presentation season has wrapped up, I've posted the related material on GitHub for this content. I've specifically posted the most recent version as presented at SecureWorld Seattle, which included Eric Kapfhammer's contributions and a bit of his forward thinking about next steps in this approach.
When we left off last month I parted company with you in the middle of an explanation of the analysis of emotional valence, or "the intrinsic attractiveness (positive valence) or averseness (negative valence) of an event, object, or situation", using R and the Twitter API. It's probably worth your time to go back and refresh with the end of Part 1. Our last discussion point was specific to the popularity of negative tweets versus positive tweets, with a cluster of emotionally neutral retweets, two positive retweets, and a load of negative retweets. This type of analysis can quickly give us a better understanding of an attacker collective's sentiment, particularly where the collective is vocal via social media. Teeing off the popularity of negative versus positive sentiment, we can assess the actual words fueling such sentiment analysis. It doesn't take much R code to achieve our goal using the apply family of functions. The likes of apply, lapply, and sapply allow you to manipulate slices of data from matrices, arrays, lists, and data frames in a repetitive way without having to use loops. We use code here directly from Michael Levy, social scientist, and his Playing with Twitter Data post.

library(magrittr)  # provides the %>% pipe used below
library(tm)        # provides stripWhitespace()

polWordTables = 
  sapply(pol, function(p) {
    words = c(positiveWords = paste(p[[1]]$pos.words[[1]], collapse = ' '), 
              negativeWords = paste(p[[1]]$neg.words[[1]], collapse = ' '))
    gsub('-', '', words)  # Get rid of nothing found's "-"
  }) %>%
  apply(1, paste, collapse = ' ') %>% 
  stripWhitespace() %>% 
  strsplit(' ') %>%
  sapply(table)

par(mfrow = c(1, 2))
invisible(
  lapply(1:2, function(i) {
    dotchart(sort(polWordTables[[i]]), cex = .5)
    mtext(names(polWordTables)[i])
  }))

The result is a tidy visual representation of exactly what we learned at the end of Part 1, results as noted in Figure 1.

Figure 1: Positive vs negative words
Content including words such as killed, dangerous, infected, and attacks is definitely more interesting to readers than words such as good and clean. Sentiment like this could definitely be used to assess potential attacker outcomes and behaviors just prior to, or in the midst of, an attack, particularly in DDoS scenarios. Couple sentiment analysis with the ability to visualize networks of retweets and mentions, and you could zoom in on potential leaders or organizers. The larger the network node, the more retweets, as seen in Figure 2.
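If you want to experiment with building such a network yourself, a rough sketch follows using the igraph package. The tweets data frame (with screenName and text columns) is a hypothetical stand-in for whatever your Twitter API pull returns; this is not the code behind Figure 2.

library(igraph)

# tweets: hypothetical data frame of Twitter API results with
# screenName and text columns (placeholder, not the data behind Figure 2)
rt <- grepl('RT @', tweets$text)
edges <- data.frame(
  from = tweets$screenName[rt],
  to   = sub('.*RT @([A-Za-z0-9_]+).*', '\\1', tweets$text[rt]))
g <- graph_from_data_frame(edges)
# Scale node size by in-degree: the more retweets, the larger the node
plot(g, vertex.size = 2 + 2 * degree(g, mode = 'in'),
     vertex.label.cex = .6, edge.arrow.size = .3)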

Figure 2: Who is retweeting who?
Remember, our initial premise, as described in Part 1, was that attacker groups often use associated hashtags and handles, and the minions that want to be "part of" often retweet and use the hashtag(s). Individual attackers either freely give themselves away, or often become easily identifiable or associated, via Twitter. Note that our dominant retweets are for @joe4security, @HackRead, and @defendmalware (not actual attackers, but bloggers talking about attacks, used here for example's sake). Figure 3 shows us who is mentioning who.

Figure 3: Who is mentioning who?
Note that @defendmalware mentions @HackRead. If these were actual attackers it would not be unreasonable to imagine a possible relationship between Twitter accounts that are actively retweeting and mentioning each other before or during an attack. Now let's assume @HackRead might be a possible suspect and you'd like to learn a bit more about possible additional suspects. In reality @HackRead HQ is in Milan, Italy. Perhaps Milan then might be a location for other attackers. I can feed in Twitter handles from my retweet and mentions network above, query the Twitter API with a very specific geocode, and lock it within five miles of the center of Milan.
The results are immediate per Figure 4.

Figure 4: GeoLocation code and results
Obviously, as these Twitter accounts aren't actual attackers, their retweets aren't actually pertinent to our presumed attack scenario, but they definitely retweeted @computerweekly (seen in retweets and mentions) from within five miles of the center of Milan. If @HackRead were the leader of an organization, and we believed that associates were assumed to be within geographical proximity, geolocation via the Twitter API could be quite useful. Again, these are all used as thematic examples, no actual attacks should be related to any of these accounts in any way.
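For reference, the kind of geocoded query behind Figure 4 looks roughly like the following minimal sketch using the twitteR package; the coordinates approximate the center of Milan and all values are placeholders.

library(twitteR)
# Assumes setup_twitter_oauth() has already been called with your API keys

# Approximate center of Milan, locked to a five-mile radius (illustrative values)
milan <- '45.4642,9.1900,5mi'
hits  <- searchTwitter('@computerweekly', n = 100, geocode = milan)
head(twListToDF(hits)[, c('screenName', 'text', 'created')])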

Fast Frugal Trees (decision trees) for prioritizing criticality

With the abundance of data, and often subjective or biased analysis, there are occasions where a quick, authoritative decision can be quite beneficial. Fast-and-frugal trees (FFTs) to the rescue. FFTs are simple algorithms that facilitate efficient and accurate decisions based on limited information.
Nathaniel D. Phillips, PhD created FFTrees for R to allow anyone to easily create, visualize and evaluate FFTs. Malcolm Gladwell has said that "we are suspicious of rapid cognition. We live in a world that assumes that the quality of a decision is directly related to the time and effort that went into making it.” FFTs, and decision trees at large, counter that premise and aid in the timely, efficient processing of data with the intent of a quick but sound decision. As with so much of information security, there is often a direct correlation with medical, psychological, and social sciences, and the use of FFTs is no different. Often, predictive analysis is conducted with logistic regression, used to "describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables." Would you prefer logistic regression or FFTs?

Figure 5: Thanks, I'll take FFTs
Here's a textbook information security scenario, often rife with subjectivity and bias. After a breach, and a subsequent third-party risk assessment that generated a ton of CVSS data, you need to make a fast decision about which treatments to apply first. Because everyone loves CVSS.

Figure 6: CVSS meh
Nothing like a massive table, scored by base, impact, exploitability, temporal, environmental, modified impact, and overall scores, all assessed by a third-party assessor who may not fully understand the complexities or nuances of your environment. Let's say our esteemed assessor has decided that there are 683 total findings, of which 444 are non-critical and 239 are critical. Will FFTrees agree? Nay! First, a wee bit of R code.

library("FFTrees")
cvss <- c:="" coding="" csv="" p="" r="" read.csv="" rees="">cvss.fft <- data="cvss)</p" fftrees="" formula="critical">plot(cvss.fft, what = "cues")
plot(cvss.fft,
     main = "CVSS FFT",
     decision.names = c("Non-Critical", "Critical"))


Guess what: the model landed right on impact and exploitability as the most important inputs, and not just because it's logically so, but because of where they fall when assessed against the area under the curve (AUC), where the specific curve is the receiver operating characteristic (ROC). The ROC is a "graphical plot that illustrates the diagnostic ability of a binary classifier system as its discrimination threshold is varied." As for the AUC, accuracy is measured by the area under the ROC curve, where an area of 1 represents a perfect test and an area of .5 represents a worthless test. Simply put, the closer to 1, the better. For this model and data, impact and exploitability are the most accurate, as seen in Figure 7.

Figure 7: Cue rankings prefer impact and exploitability
The fast-and-frugal tree made its decision: findings with impact and exploitability scores of 2 or less were non-critical, and exploitability greater than 2 was labeled critical, as seen in Figure 8.

Figure 8: The FFT decides
Ah hah! Our FFT sees things differently than our assessor. With a 93% average for performance fitting (this is good), our tree, making decisions on impact and exploitability, decides that there are 444 non-critical findings and 222 critical findings, a 17 point differential from our assessor. Can we all agree that mitigating and remediating critical findings can be an expensive proposition? If you, with just a modicum of data science, can make an authoritative decision that saves you time and money without adversely impacting your security posture, would you count it as a win? Yes, that was rhetorical.

Note that the FFTrees function automatically builds several versions of the same general tree that make different error trade-offs with variations in performance fitting and false positives. This gives you the option to test variables and make potentially even more informed decisions within the construct of one model. Ultimately, fast frugal trees make very fast decisions on 1 to 5 pieces of information and ignore all other information. In other words, "FFTrees are noncompensatory, once they make a decision based on a few pieces of information, no additional information changes the decision."
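To see those trade-offs for yourself, the fitted object from the earlier code block exposes them; a minimal sketch (tree numbering will vary with your data):

# Inspect the family of candidate trees and their accuracy statistics
print(cvss.fft)

# Plot an alternative tree from the same model to compare error trade-offs
plot(cvss.fft, tree = 2,
     main = "CVSS FFT, alternate tree",
     decision.names = c("Non-Critical", "Critical"))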

Finally, let's take a look at monitoring user logon anomalies in high-volume environments with Time Series Regression (TSR). Much of this work comes courtesy of Eric Kapfhammer, lead data scientist on our Microsoft Windows and Devices Group Blue Team. The ideal Windows Event ID for such activity is clearly 4624: an account was successfully logged on. This event is typically one of the top 5 events in terms of volume in most environments, and has multiple type codes including Network, Service, and RemoteInteractive.
User accounts will begin to show patterns over time, in aggregate, including:
  • Seasonality: day of week, patch cycles
  • Trend: volume of logons increasing/decreasing over time
  • Noise: randomness
You could look at 4624 with a Z-score model, which sets a threshold based on the number of standard deviations away from an average count over a given period of time; the higher the value, the greater the degree of “anomalousness”. But this is a fairly simple model.
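To make that concrete, a z-score check over daily counts takes only a few lines; the counts below are made up for illustration.

# Hypothetical daily 4624 counts for one account (made-up data)
logons <- c(12, 15, 11, 14, 13, 12, 16, 14, 13, 42)
z <- (logons - mean(logons)) / sd(logons)
which(abs(z) > 2)  # flag days more than 2 standard deviations from the mean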
Preferably, via Time Series Regression (TSR), your feature set is richer:
  • Statistical method for predicting a future response based on the response history (known as autoregressive dynamics) and the transfer of dynamics from relevant predictors
  • Understand and predict the behavior of dynamic systems from experimental or observational data
  • Commonly used for modeling and forecasting of economic, financial and biological systems
How to spot the anomaly in a sea of logon data?
Let's imagine our user, DARPA-549521, in the SUPERSECURE domain, with 90 days of aggregate 4624 Type 10 events by day.

Figure 9: User logon data
With 210 lines of R, including comments, log read, file output, and graphing, we can visualize and alert on DARPA-549521's data as seen in Figure 10.

Figure 10: User behavior outside the confidence interval
We can detect when a user’s account exhibits changes in its seasonality relative to a confidence interval established (learned) over time. In this case, on 27 AUG 2017, the user topped her threshold of 19 logons, triggering an exception. Now imagine using this model to spot anomalous user behavior across all users and you get a good feel for the model's power.
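A heavily condensed sketch of the idea follows, using the forecast package. This is not Eric's 210-line implementation; the simulated counts, weekly seasonality, and thresholds are all illustrative.

library(forecast)

# Simulate 90 days of aggregate 4624 Type 10 counts with weekly seasonality
# (illustrative data, not DARPA-549521's real counts)
set.seed(42)
counts <- ts(rpois(90, lambda = 12) + rep(c(0, 0, 0, 0, 0, 4, 4), length.out = 90),
             frequency = 7)

# Time series regression on trend + seasonality, with a 95% confidence band
fit  <- tslm(counts ~ trend + season)
pred <- forecast(fit, h = 1, level = 95)

# Alert when the next observed daily count falls outside the learned band
new_count <- 25
new_count > pred$upper[1] || new_count < pred$lower[1]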
Eric points out that there are, of course, additional options for modeling including:
  • Seasonal and Trend Decomposition using Loess (STL)
    • Handles any type of seasonality ~ can change over time
    • Smoothness of the trend-cycle can also be controlled by the user
    • Robust to outliers
  • Classification and Regression Trees (CART)
    • Supervised learning approach: teach trees to classify anomaly / non-anomaly
    • Unsupervised learning approach: focus on top-day hold-out and error check
  • Neural Networks
    • LSTM / Multiple time series in combination
These are powerful next steps for your capabilities. Be brave, be creative: go forth and add elements of data science and visualization to your practice. R and Python are well supported and broadly used for this mission, and can definitely help you detect attackers faster, contain incidents more rapidly, and enhance your in-house detection and remediation mechanisms.
All the code I can share is here; sorry, I can only share the TSR example without the source.
All the best in your endeavors!
Cheers...until next time.

QOTD – Security & The Business – Which Objective(s) Are You Meeting?

When meeting with security leaders, directors should ask how their cybersecurity plan will help the company meet one or some of these objectives: revenue, cost, margin, customer satisfaction, employee efficiency, or strategy. While these terms are familiar to board members and business executives, security leaders may need guidance on how to frame their department’s duties in the context of business operations.
-- Sam Curry, Chief Security Officer at Cybereason

Src: HBR: Boards Should Take Responsibility for Cybersecurity. Here’s How to Do It 

Books I’d give to my 30yr old self

A good friend/co-worker recently turned 30. In preparation for his birthday party I gave some thought to my own 30th birthday, the things I now know or have an idea about, and what I wish I had known at that point in my life.

I decided to buy him a few books that had impacted my life since my 30th birthday and that I wish I had known about or read earlier in life.

I'll split the post into two parts: computer books and life/metaphysical books.

Computer books

This is by no means an exhaustive list.  A more exhaustive list can be found here (recently updated).

He already had The Web Application Hacker's Handbook; had he not, I would have purchased a copy for him. There are lots of web hacking books, but WAHH is probably the best and most comprehensive one.

The other books I did purchase were The Phoenix Project  and Zero to One.

The Phoenix Project is absolutely one of the best tech books I've read in the last few years. Working for Silicon Valley companies, I think it can be easy to take for granted the whole idea of DevOps: the power of infrastructure as code, microservices, and the flexibility DevOps brings to prototyping and developing code and projects. There is also the "security guy" in the story who serves as the guy we never want to be but sometimes end up being, unbeknownst to us.

The running joke is that Zero to One is in the Hipster starter kit, but I thought it was a great book. The quick summary: Peter Thiel describes businesses that iterate on a known problem and can be successful, and businesses that create solutions to problems we didn't know we had. Examples of the latter are companies like Google, Facebook, PayPal, and Uber. It's a short book and should be required reading for anyone thinking of starting a business.


The following is life stuff, so if all you care about is tech shit, feel free to eject at this point.















still here?

Metaphysics

1st, Many Lives Many Masters by Brian Weiss. A nice, gentle introduction to the idea that we reincarnate and that our souls are eternal. Written by a psychiatrist who more or less stumbled onto the fact that people have past lives while doing normal psychiatry work.

From Amazon:
"As a traditional psychotherapist, Dr. Brian Weiss was astonished and skeptical when one of his patients began recalling past-life traumas that seemed to hold the key to her recurring nightmares and anxiety attacks. His skepticism was eroded, however, when she began to channel messages from the “space between lives,” which contained remarkable revelations about Dr. Weiss’ family and his dead son. Using past-life therapy, he was able to cure the patient and embark on a new, more meaningful phase of his own career."


2nd, A New Earth by Eckhart Tolle. This is the best book I read in 2016 and I've been sharing it with everyone I can. Everyone in infosec should read this book to understand the way the ego works in our day-to-day lives.

From Amazon:
In A New Earth, Tolle expands on these powerful ideas to show how transcending our ego-based state of consciousness is not only essential to personal happiness, but also the key to ending conflict and suffering throughout the world. Tolle describes how our attachment to the ego creates the dysfunction that leads to anger, jealousy, and unhappiness, and shows readers how to awaken to a new state of consciousness and follow the path to a truly fulfilling existence.

3rd, Self Observation by Red Hawk. The practical application guide if you got something from A New Earth: an instruction manual for self-observation.

From Amazon:
"This book is an in-depth examination of the much needed process of 'self'-study known as self observation. We live in an age where the "attention function" in the brain has been badly damaged by TV and computers - up to 90 percent of the public under age 35 suffers from attention-deficit disorder! This book offers the most direct, non-pharmaceutical means of healing attention dysfunction. The methods presented here are capable of restoring attention to a fully functional and powerful tool for success in life and relationships. This is also an age when humanity has lost its connection with conscience. When humanity has poisoned the Earth's atmosphere, water, air and soil, when cancer is in epidemic proportions and is mainly an environmental illness, the author asks: What is the root cause? And he boldly answers: failure to develop conscience! Self-observation, he asserts, is the most ancient, scientific, and proven means to develop this crucial inner guide to awakening and a moral life. This book is for the lay-reader, both the beginner and the advanced student of self observation. No other book on the market examines this practice in such detail. There are hundreds of books on self-help and meditation, but almost none on self-study via self observation, and none with the depth of analysis, wealth of explication, and richness of experience which this book offers."

Finance

Rich Dad Poor Dad; I talked about this in 2013:  http://carnal0wnage.attackresearch.com/2013/12/best-non-technical-book-i-read-this-year.html

Unprotecting VBS Password Protected Office Files

Hi folks,
today I'd like to share a nice trick for unprotecting password-protected VB scripts embedded in Office files. Nowadays it's common to find malicious content wrapped in OLE files, since that file format can link objects into documents and vice versa. An object could be a simple external link, a document itself, or a more complex script (such as Visual Basic Script), and it can easily interact with the original document (its container) to change contents and values.

Attackers frequently use embedded VB scripts to perform malicious actions such as (but not limited to) payload downloading, landing steps, environment preparation, and payload execution. The technique needs the user's agreement before execution takes place, but once the user grants the linked code the freedom to execute on the machine (see the following image), the VB script is free to download content from a malicious website and later execute it on the victim machine.

Enable "Scripting" Content

Cyber security analysts often need to read the raw code by opening it and digging into its obfuscation and anti-analysis techniques in order to figure out what it really does. Indeed, contemporary malware uses evasive techniques that make simple sandbox execution useless, and advanced attackers are smart enough to lock VB code behind complex, strong passwords. Those techniques make raw code analysis hard if the unlocking password is unknown. But again, a cyber security analyst really needs to open the document and dig into the raw code in order to defend victims. How would I approach this problem?

What follows is a simple method to help cyber security analysts (NB: this is a well-known technique) bypass password-protected VB scripts.

Let's suppose you have an Excel file with Visual Basic code in it, and you want to read the password-protected VB script. Let's call this first Excel file: victim_file.

As a first step you need to open the victim_file. After opening it, you need to create an additional Excel file. Let's call it: injector_file.xlsm. Open the VB editor and add the following code into Module1.




Now create a new module, Module2, with the following code. It represents the "calling function". Run it and don't close it.



It's time to come back to your original victim_file. Open the VB editor and: here we go! Your code is in plain, clear text!

At this point you are probably wondering how this code works, so let's have a quick and dirty explanation. When the VBProject is opened, it displays a dialog box asking for a password (a String). The WinAPI then checks whether the input string equals the encoded static string (stored in the file body, not the code body) and returns "True" if the strings are equal or "False" if they are not. The Hook() function overrides the return value of User32.dll's DialogBoxParamA, making it always return "True".

Technically speaking:
  • Line 45 saves the original call's (User32.dll DialogBoxParamA) parameters into TmpBytes
  • If the password is correct, TempBytes(0) gets the right pointer to the current process
  • If the password is not correct, the script saves the original bytes into OriginalBytes (length 6)
  • Line 50 takes the address of MwDialogBoxProgram
  • Line 52 forces the right handler
  • Line 53 saves the current value
  • Line 54 forces the return parameter to True
  • Line 56 moves the just-crafted parameters into the right location in user32.dll
Happy VBA password un-protection :D

Disclaimer:
This is a well-known method: it is not new.
I wrote it down because it is useful for cyber security analysts fighting Office macro malware. Don't use it unlawfully.
Do not use it to break into legitimate documents.
I am not assuming any responsibility for the usage of such a script.
It works on my machine :D and I will not try to get it working on yours :D (programming horror humor)

TEDxMilano: What a great adventure !

Hi folks, 
today I want to share the "output" of a super nice adventure I had this year, which led me to actively participate in TEDxMilano. It is definitely one of the most exciting stages I've been on so far.

My usual readers will probably think: "Hey man, you are a technical person, you should participate in DEF CON, Black Hat, NullCon, ShmooCon, Toorcon, and more technical conferences like those, where you have the opportunity to show reverse engineering techniques, new vulnerabilities, or new attack paths. I won't see you at a TEDx conference!"

Well, actually, I have participated in a lot of such conferences (just take a look at "Selected Publications" at the top of this page), but you know what? Cybersecurity is a hybrid world where technologies meet people, where the most sophisticated evasion techniques meet human irrationality, and where a simple "click" can make the difference between "levelUP" and "GameOver". So I believe that being able to communicate such a complex world to non-technical people is a great way to contribute to the security of our digital era. If you agree (and you know the Italian language), please have a look! I will appreciate it.




“As long as a human being is the one profiting from an attack, only a human being will be able to combat it.” This is how we can define Marco Ramilli’s essence, a computer engineer and an expert in hacking, penetration testing, and cyber security. Marco obtained a degree in Computer Engineering and, while working on a Ph.D. in Information Security, served the security division of the U.S. Government’s National Institute of Standards and Technology, where he conducted research on Malware Evasion and Penetration Testing techniques for the electronic voting system. In 2014 he founded Yoroi, a startup that has created one of the best cyber security defense centers he ever developed. This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

Hybrid Analysis Grows Up – Acquired by CrowdStrike

CrowdStrike acquired Payload Security, the company behind the automated malware analysis sandbox technology Hybrid Analysis, in November 2017. Jan Miller founded Payload Security approximately 3 years earlier. The interview I conducted with Jan in early 2015 captured his mindset at the onset of the journey that led to this milestone. I briefly spoke with Jan again, a few days after the acquisition. He reflected upon his progress over the three years of leading Payload Security so far and his plans for Hybrid Analysis as part of CrowdStrike.

Jan, why did you and your team decide to join CrowdStrike?

Developing a malware analysis product requires a constant stream of improvements to the technology, not only to keep up with the pace of malware authors’ attempts to evade automated analysis, but also to innovate and embrace the community. The team has accomplished a lot thus far, but joining CrowdStrike gives us the ability to access a lot more resources and grow the team to rapidly improve Hybrid Analysis in the competitive space that we live in. We will have the ability to bring more people onto the team and also enhance and grow the infrastructure and integrations behind the free Hybrid Analysis community platform.

What role did the free version of your product, available at hybrid-analysis.com, play in the company’s evolution?

A lot of people in the community have been using the free version of Hybrid Analysis to analyze their own malware samples, share them with friends, or look up existing analysis reports and extract intelligence. Today, the site has approximately 44,000 active users and around 1 million sessions per month. One of the reasons the site took off is the simplicity and quality of the reports, which focus on what matters and enable effective incident response.

The success of Hybrid Analysis was, to a large extent, due to the engagement from the community. The samples we have been receiving allowed us to constantly field-test the system against the latest malware, stay on top of the game, and embrace feedback from security professionals. This allowed us to keep improving, successfully, at a rapid pace in a competitive space.

What will happen to the free version of Hybrid Analysis? I saw on Twitter that your team pinky-promised to continue making it available for free to the community, but I was hoping you could comment further on this.

I’m personally committed to ensuring that the community platform will stay not only free, but grow even more useful and offer new capabilities shortly. Hybrid Analysis deserves to be the place for professionals to get a reasoned opinion about any binary they’ve encountered. We plan to open up the API, add more integrations and other free capabilities in the near future.

What stands out in your mind as you reflect upon your Hybrid Analysis journey so far? What’s motivating you to move forward?

Starting out without any noteworthy funding, co-founders, or advisors, in a saturated high-tech market that is extremely fast paced and full of money, it seemed impossible on paper to succeed. But the reality is: if you are offering a product or service that solves a real-world problem considerably better than the market leaders, you always have a chance. My hope is that people who are considering becoming entrepreneurs will be encouraged to pursue their ideas. Be prepared to work 80 hours a week; with the right technology, feedback from the community, amazing team members, and insightful advisors to lean on, you can make it happen.

In fact, it’s because of the value Hybrid Analysis has been adding to the community that I was able to attract the highly talented individuals that are currently on the team. It has always been important for me to make a difference, to contribute something and have a true impact on people’s lives. It all boils down to bringing more light than darkness into the world, as cheesy as that might sound.

Malware analysis sandbox aggregation: Welcome Tencent HABO!

VirusTotal is much more than just an antivirus aggregator; we run all sorts of open source/private/in-house tools to further characterize files, URLs, IP addresses and domains in order to highlight suspicious signals. Similarly, we execute a variety of backend processes to build relationships between the items that we store in the dataset, for instance, all the URLs from which we have downloaded a given piece of malware.

One of the pillars of the in-depth characterization of files and the relationship-building process has been our behavioural information setup. By running the executables uploaded to VirusTotal in virtual machines, we are often able to discover network infrastructure used by attackers (C&C domains, additional payload downloads, cloud config files, etc.), registry keys used to ensure persistence on infected machines, and other interesting indicators of compromise. Over time, we have developed automatic malware analysis setups for other operating systems such as Android or OS X.

Today we are excited to announce that, similar to the way we aggregate antivirus verdicts, we will aggregate malware analysis sandbox reports under a new project that we internally call "multisandbox". The first partner paving the way is Tencent, an existing antivirus partner that is integrating its Tencent HABO analysis system in order to contribute behavioral analysis reports. In their own words:

Tencent HABO was independently developed by Tencent Anti-Virus Laboratory. It can comprehensively analyze samples from both static information and dynamic behaviors, trigger and capture behaviors of the samples in the sandbox, and output the results in various formats.
One of the most exciting aspects of this integration is that Tencent's setup comprises analysis environments for Windows, Linux and Android. This means that it will also be the very first Linux ELF behavioral characterization engine. 

These are a couple of example reports illustrating the integration:


Whenever there is more than one sandbox report for a given file, you will see the pulsating animation in the analysis system selector drop-down.


Please note that sandbox partners are contributing both a summarized analysis and a detailed freestyle HTML report. On the far right of the analysis system selector bar you will see the sandbox's logo along with a link to the detailed HTML report. This is where partners can insert as much fine-grained information as wanted and can be as visually creative as possible, to emphasize what they deem important.


We hope you find this new project as exciting as we do. We already have more integrations in the pipeline and we are certain this will heavily contribute to identifying new threats and strengthening anti-malware defenses worldwide.

If you have a sandbox setup or develop dynamic malware analysis systems please contact us to join this effort.

Great article! I love the text under the link on t…

Great article! I love the text under the link on the Google image.
"by stock options grant date. Excitement on the river, sea or ocean common...

All that work and given away by foolish text instead of something that fits the site description.

I like the breakdown. Nice to see the list of checks that it makes for the environment. Looks like we can defeat them around the world by just installing perl on every system. If it detects perl, it removes itself.

Thanks again for another great breakdown on the actions of this sample.

The Honeynet Project will bring GSoC students to the annual workshop in Canberra

The Honeynet Project annual workshop is just a few days away: members and security folks from all over the world will gather in Canberra, Australia, November 15th-17th. Every year the Honeynet Project, with the support of Google, funds a group of students who were admitted to the Google Summer of Code program and successfully completed their project assignments. They get the chance to travel to the workshop and meet face to face with Honeynet members and seasoned experts in the security field.
