Daily Archives: May 17, 2018

The CTOvision Artificial Intelligence, Big Data and Analytics Weekly

The Weekly Artificial Intelligence, Big Data and Analytics Newsletter is a weekly review of hot topics on the theme of Big Data. This is our fastest growing list with over 1,500 readers receiving the newsletter every Wednesday.

To sign up for the Weekly AI, Big Data and Analytics Newsletter see: CTOvision Newsletter Signups

Our full array of newsletters includes:

  • The Monthly CTOvision.com Tech Review provides a recap of the most significant trends sweeping the technology community in the prior month, plus insights into coming events and activities.
  • The Daily CTOvision.com Update provides a summary of posts we publish on our blog. If we don't publish, it does not go out, and it is sent no more than once a day to 6,000 readers. All posts on the site are also shared with the over 14,500 CTOvision Twitter followers and over 12,000 of Bob Gourley's connections on LinkedIn.
  • The CTOvision Pro IT Report summarizes enterprise IT developments and concepts. It is transmitted to a select list of 700 CTOs and other tech professionals every Tuesday.
  • The Weekly Artificial Intelligence, Big Data and Analytics Newsletter is a weekly review of hot topics on the theme of Big Data. This is our fastest growing list with over 1,500 readers receiving the newsletter every Wednesday.
  • The Weekly Cyberwar and Cybersecurity Review summarizes enterprise IT security technologies and concepts and the issues you need to track regarding the high end threat actors. Over 6,000 readers receive this report every Thursday.
  • The Daily Threat Brief: Our version of the President's Daily Brief (PDB), focused on cyber threats and tips on being as secure as possible. Sent daily to a list of over 4,500 executives seeking insights into threats to business growth. Reports are also shared with over 10,000 Twitter followers of ThreatBrief.

For more and to sign up see: Crucial Point and CTOvision Newsletter Signups


SecMon State of the Union: Focus on Use Cases

Posted under: Research and Analysis

When we revisited the Security Monitoring Team of Rivals, it became obvious that the overlap between SIEM and security analytics has passed the point of no return. So with a Civil War brewing, our key goal is to determine what your strategic platform for security monitoring will be. This requires you to shut out the noise of fancy analytics and colorful visualizations, and focus on the problem you are trying to solve now, with an eye to how it will evolve in the future. That means getting back to use cases. The cases for security monitoring tend to fall into three major buckets:

  1. Security alerts
  2. Forensics and Response
  3. Compliance reporting

Let's go through each of these to make sure you have a clear handle on what success looks like today, and how each will change in the future. After we work through the use cases, we'll cover the pros and cons of how each combatant (SIEM vs. security analytics) addresses them. There isn't really any clean way to categorize the players, so let's just jump into the use cases.

Security Alerts

Traditional SIEM was based on looking for patterns you knew to be attacks. You couldn't detect things you didn't yet recognize as attacks, and keeping the rules current to keep pace with dynamic attacks was a challenge. So many customers didn't receive the value they needed. In response, a new generation of security analytics products appeared, applying advanced mathematical techniques to security data to identify and analyze anomalous activity, giving customers hope that they would be able to detect attacks not covered by their existing rules.

Today, to have a handle on success, any security monitoring platform needs the ability to detect and alert on the following attack vectors:

  • Commodity Malware: Basically these are known attacks, likely with a Metasploit module available to allow even the least sophisticated attackers to use them. Although not sexy, this kind of attack is still prevalent because adversaries don’t use advanced attacks unless they need to.
  • Advanced Attacks: By definition you haven't seen an advanced attack before, so you are very unlikely to have a rule in your security monitoring platform to find it.
  • User Behavior Analysis: Another way to pinpoint attacks is to look for strange user activity. At some point in an attack, a device will be compromised and that device will act in an anomalous way, which provides an opportunity to detect it.
  • Insider Threat Detection: The last use case we’ll describe overlaps with UBA because it’s about figuring out if you have a malicious insider stealing data or causing damage. The insider tends to be a user (thus the overlap with UBA). Yet this use case is less about malware (because the user is already within the perimeter) and more about profiling employee behavior and looking for signs of malicious intent, such as reconnaissance and exfiltration.
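To make the rules-vs-analytics distinction concrete, here is a minimal sketch in Python. The event format, indicator set, and z-score threshold are all made up for illustration, not from any particular product: a signature-style rule matches known-bad indicators, while a behavioral check of the kind UBA tools build per user flags activity that deviates from that user's own baseline.

```python
from statistics import mean, stdev

# Signature-style rule: flag any event matching a known-bad indicator.
# (Illustrative hash value, standing in for a real threat feed.)
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

def rule_alert(event):
    return event.get("file_hash") in KNOWN_BAD_HASHES

# Behavioral check: flag a user whose daily event count deviates
# sharply (z-score > 3) from their own historical baseline.
def anomaly_alert(history, today):
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > 3

print(rule_alert({"file_hash": "44d88612fea8a8f36de82e1278abb02f"}))  # True
print(anomaly_alert([10, 12, 11, 9, 10], 95))                         # True
print(anomaly_alert([10, 12, 11, 9, 10], 11))                         # False
```

The rule fires only on what it already knows; the baseline fires on anything unusual, including attacks nobody has written a rule for yet. That asymmetry is the whole argument for analytics.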

But the telemetry used to drive security monitoring tools today is much broader than in the past. The first generation of the technology – SIEM – was largely driven by log data, possibly augmented with network flows and vulnerability information. Now, given the disruption of cloud and mobility, a much broader set of data is needed. For instance, there are SaaS applications in your environment which you need to factor into your security monitoring. There are likely IoT devices as well, whether shop floor sensors or multi-function printers with operating systems that can be compromised. Those also need to be watched. And finally, mobile endpoints are full participants in the technology ecosystem nowadays, so gathering telemetry from those devices is an important aspect of monitoring as well.

So aside from the main attack vectors, the fact that corporate data lies both inside the perimeter and across a bunch of SaaS services and mobile devices makes it much harder to build a comprehensive security monitoring environment. We described this need for enterprise visibility in our Security Decision Support series.

Forensics and Response

The forensics and response use case comes into play after an attack, when the organization is trying to figure out what happened and assess damage. The key functions required for response tend to be sophisticated search and the ability to drill down into an attack quickly and efficiently. Skilled responders are very scarce, so they need to leverage technology where possible to streamline their efforts.

But given the scarcity of responders, a heavy dose of enrichment (adding threat intel to case files) and even potential attack remediation must be increasingly automated. So it’s not just about equipping the responders – it’s about helping scale their activity.
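As a sketch of what that enrichment automation might look like, here is a toy Python example. The alert fields, IP address, and local intel cache are all stand-ins for illustration, not any particular product's API; a real deployment would sync the cache from a threat intelligence feed.

```python
# Stand-in threat-intel cache, keyed by source IP.
INTEL_CACHE = {
    "203.0.113.7": {"tags": ["botnet-c2"], "last_seen": "2018-05-01"},
}

def enrich(alert, cache=INTEL_CACHE):
    """Attach intel context to an alert so a responder starts with context,
    rather than performing the lookup by hand."""
    intel = cache.get(alert.get("src_ip"), {})
    enriched = dict(alert)
    enriched["intel_tags"] = intel.get("tags", [])
    enriched["intel_last_seen"] = intel.get("last_seen")
    return enriched

print(enrich({"id": 1, "src_ip": "203.0.113.7"})["intel_tags"])  # ['botnet-c2']
```

The point isn't the lookup itself but where it happens: automatically, before a human ever opens the case file.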

Compliance Reporting

This use case is primarily focused on providing the information needed to make the auditor go away as quickly as possible, with minimal customization and tuning of reports. Every organization has to deal with a different set of compliance and regulatory hierarchies, as well as internal controls reporting, so success entails having the tool map specific controls to regulations and substantiate that the controls are actually in place and operational.
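That control-to-regulation mapping is conceptually straightforward. A toy sketch, with illustrative control names and requirement labels rather than a real compliance mapping:

```python
# Illustrative mapping of internal controls to the regulatory
# requirements they substantiate (labels are examples only).
CONTROL_MAP = {
    "log-retention-1y": ["PCI DSS 10.7"],
    "default-deny-firewall": ["PCI DSS 1.2.1"],
}

def compliance_report(controls_in_place):
    """Emit one (requirement, control, status) row per mapped requirement."""
    rows = []
    for control, requirements in sorted(CONTROL_MAP.items()):
        status = "PASS" if control in controls_in_place else "FAIL"
        for req in requirements:
            rows.append((req, control, status))
    return rows

for row in compliance_report({"log-retention-1y"}):
    print(row)
```

Once the mapping exists, the report is just a join between it and the evidence that each control is operational; the hard part is keeping the mapping and the evidence current.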

Seems pretty simple, right? It is until you have to spend two days in Excel cleaning up the stuff that came from your tool. You could pay an assessor to go through all your stuff and make sense of things, but that may not be the best use of your or their time – nor can you ensure they’ll reach the right conclusions regarding your controls.

As we look to the future, compliance reporting won't change much. But the data you need to feed into a platform to generate your substantiation will expand substantially. It's all about visibility, as mentioned above. As your organization embraces cloud computing and mobility, you will need to make sure you have logs and appropriate telemetry from the controls protecting those functions, so you can substantiate your security activity.

Assessing the Combatants

Given the backdrop of these use cases and what's needed for the future, we need to perform a general assessment of SIEM and security analytics. To be clear, this isn't an apples-to-apples comparison – there is already significant functional overlap between a SIEM and a security analytics product. The overlap will only grow until there is no functional difference between a SIEM and a security analytics tool. Basically it will just be security monitoring, regardless of what it's called.

Given this overlap, we still need some way to categorize the players. The underlying means of analyzing data provides a useful way to distinguish an incumbent from a new entrant: a system built on a rules engine is a SIEM, while a tool built on a (Big Data-centric) analytical platform is a new entrant. But this gets a bit murky, because no current tool uses only rules, and every analytics product has the option to use rules.

But nomenclature aside, if we go back to the attacks described above, here is how each class of security monitor handles attacks:

  • Commodity Malware: You know what these attacks are, so traditional rules-based alerting works well – as long as you keep the rules up to date. A bit counter-intuitively, new entrants can have trouble with these attacks because they don't always show up as anomalous behavior. To draw an analogy, this is why endpoint AV signatures are still useful: behavioral models can miss old attacks.
  • Advanced Attacks: The new entrants begin to shine for handling advanced attacks because they don’t need to look specifically for individual attacks like rules-based systems do. The behavioral and machine learning models underlying analytics engines do need to be updated periodically, but if you are worried about an advanced adversary a rules-based system definitely won’t suffice.
  • User Behavior Analysis: UBA requires integration with the corporate identity store so you can associate specific devices with users, but it leverages modern analytics to detect advanced attacks. This makes UBA difficult with a traditional SIEM.
  • Insider Threat Detection: The major difference between UBA and insider threat detection is specificity to the organization. UBA tends to look for generically anomalous behavior, while insider threat tools are tuned to the inner workings of a specific organization. This tends to involve integration with physical security and HR systems, because there are many triggers for recognizing malicious employee intent.

To net it out, for the security alerting use case, a rules-based system will be limited beyond commodity malware. Detecting advanced attacks and profiling users (either for a UBA or insider threat use case) requires higher-level analytics. So any tool you are considering from here on needs the ability to handle broader analytics.

Thinking about forensics and response, the maturity of incumbents tends to make their tools more adept at the response use case because they collect broader security data and have better search and drill-down experiences – they’ve been doing it longer. But the gap is closing rapidly as new entrants focus on reaching feature parity with incumbent SIEM vendors.

Finally, for compliance reporting, incumbents have been cranking out auditor reports for well over a decade and have all those bases covered, though this is another area where the gap between incumbents and new entrants is closing – compliance reporting is fairly mechanical, and once the control-to-mandate mapping is integrated into the product, the reports more or less pop out. No, it's not that simple, and customers still need to customize reports, but it isn't rocket surgery.

So where does this leave us as the Civil War continues to smolder? Right back where we started. The use cases continue to evolve, as do the tools. Clearly an organization worried about advanced attacks will favor a monitoring platform with an underlying analytics engine, while those who tend to prioritize response and compliance reporting may favor incumbents. But of course it’s not quite that simple.

Given the overlap between SIEM and security analytics for all the applicable use cases, a cursory look at use cases is insufficient to even narrow down your short list. We need to dig into the requirements of the various platforms – regardless of whether a tool started as SIEM or emerged as a security analytics offering. As platforms consolidate you need to look at a single set of capabilities moving forward. We’ll tackle that in our next post.

- Mike Rothman

Introducing Team Foundation Server decryption tool

During penetration tests we sometimes encounter servers running software that uses sensitive information as part of the underlying process, such as Microsoft's Team Foundation Server (TFS). TFS can be used for developing code, version control, and automatic deployment to target systems. This blog post provides two tools to decrypt sensitive information that is stored in the TFS database.

Decrypting TFS secrets

Within Team Foundation Server (TFS), it is possible to automate the build, testing, and deployment of new releases. With the use of variables it is possible to create a generic deployment process once and customize it per environment [1]. Some tasks need a set of credentials or other sensitive information, and TFS therefore supports encrypted variables. With an encrypted variable, the contents of the variable are encrypted in the database and not visible to the user of TFS.


However, with sufficient access rights to the database it is possible to decrypt the encrypted content. Sebastian Solnica wrote a blog post about this, which can be read at the following link: https://lowleveldesign.org/2017/07/04/decrypting-tfs-secret-variables/

Fox-IT wrote a PowerShell script that uses the information mentioned in the blogpost. While the blogpost mainly focused on the decryption technique, the PowerShell script is built with usability in mind. The script will query all needed values and display the decrypted values. An example can be seen in the following screenshot:


The script can be downloaded from Fox-IT’s Github repository: https://github.com/fox-it/Decrypt-TFSSecretVariables

It is also possible to use the script in Metasploit. Fox-IT wrote a post module that can be used through a meterpreter session. The result of the script can be seen in the screenshot below.


There is a pull request pending and hopefully the module will be part of the Metasploit Framework soon. The pull request can be found here: https://github.com/rapid7/metasploit-framework/pull/9930


[1] https://docs.microsoft.com/en-us/vsts/build-release/concepts/definitions/release/variables?view=vsts&tabs=batch
[2] https://lowleveldesign.org/2017/07/04/decrypting-tfs-secret-variables

I would like to clear some things regarding the user password.

I would like to clear some things regarding the user password.

If a Telegram Desktop user has a local passcode set (one that is asked for each time the app launches), then it will be hard to steal their session and gain access to the cloud chats and contacts.

The local files that are encrypted contain only the image cache and (most importantly) the keys for cloud API access. If no local passcode was set, then stealing those files gives you access. But if a local passcode was set, it was used to encrypt all local files, including the API keys.

Brute force is of course an option here, but it won't be easy. The final key used to encrypt data is derived from the passcode and is 256 bytes (2048 bits) long, so brute-forcing the key itself is a bad idea – it would take too long.

Brute-forcing a strong local passcode will take a long time as well, because the key (which you can test for validity) is derived using PKCS5_PBKDF2_HMAC_SHA1 with a random salt and 4,000 iterations (the documentation states that "RFC 2898 suggests an iteration count of at least 1000"), so each attempt on the password takes significant resources, even though everything is done locally.
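The derivation scheme described above can be sketched with Python's standard library. The algorithm and iteration count follow the description in the post; the 32-byte output length, salt size, and sample passcodes are illustrative only.

```python
import hashlib
import os

def derive_key(passcode: bytes, salt: bytes, iterations: int = 4000) -> bytes:
    # PBKDF2-HMAC-SHA1 with a random salt and 4000 iterations, as described.
    # Output length is illustrative, not Telegram's actual key size.
    return hashlib.pbkdf2_hmac("sha1", passcode, salt, iterations, dklen=32)

salt = os.urandom(32)

# Same passcode and salt always yield the same key; any other passcode
# yields a different key, and every guess costs 4000 HMAC-SHA1 rounds.
assert derive_key(b"hunter2", salt) == derive_key(b"hunter2", salt)
assert derive_key(b"hunter2", salt) != derive_key(b"wrong", salt)
```

This is why the iteration count matters: an attacker testing candidate passcodes pays the full 4,000-iteration cost per guess, which is negligible for one legitimate unlock but adds up quickly across a dictionary.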