Monthly Archives: August 2015

CVE-2015-5560 (Flash up to and Exploit Kits

Patched with a recent Flash version, CVE-2015-5560 is now being exploited by Angler EK.

Angler EK :
[Edit : 2015-09-01] Exploit identified by Anton Ivanov (Kaspersky) as CVE-2015-5560 [/edit]
The exploit was added on the 28th. It is not being sent to Flash
It uses the same Diffie-Hellman key exchange technique described by FireEye in their CVE-2015-2419 analysis, making a default Fiddler capture unreplayable.
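To see why a captured session can't simply be replayed, here is a toy sketch of a Diffie-Hellman exchange (small illustrative numbers and made-up names, not Angler's actual code): the session key depends on a private exponent that never crosses the wire, so a Fiddler capture of the public values alone is useless.

```ruby
# Toy Diffie-Hellman over a small prime (illustration only -- real
# implementations use large safe primes and authenticated parameters).
P = 4294967291  # 2**32 - 5, a small prime picked for the example
G = 5

def dh_public(priv)
  G.pow(priv, P)           # g^priv mod p -- safe to send in the clear
end

def dh_shared(priv, other_pub)
  other_pub.pow(priv, P)   # both sides derive the same secret
end

server_priv = rand(2...P - 1)   # never leaves the server
client_priv = rand(2...P - 1)   # never leaves the client

key_server = dh_shared(server_priv, dh_public(client_priv))
key_client = dh_shared(client_priv, dh_public(server_priv))
# key_server == key_client, but an eavesdropper who recorded only the
# public values (as a proxy capture does) cannot recompute it -- hence
# the capture cannot be replayed against the exploit server.
```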

Angler EK pushing Bedep to Win7 IE11 Flash - CVE-2015-5560

Sample in that pass : 9fbb043f63bb965a48582aa522cb1fd0
Fiddler sent to VT (password is malware)
Note: with help from G Data, a replayable Fiddler capture is available. No public share (you know how to get it).

Nuclear Pack :
An additional post was spotted on 2015-09-10.

Nuclear Pack additional post on 2015-09-10 showing that integration of CVE-2015-5560 was under way,
and it got a first payload the day after:

Nuclear Pack successfully exploiting Flash with CVE-2015-5560 (rip from Angler)
Out-of-topic payload: 91b76aaf6f7b93c667f685a86a7d68de (Smokebot; C&C hostname: simply1.effers .com)
Files : Fiddler here (Password is malware)

Read More :
Adobe Flash: Overflow in ID3 Tag Parsing - 2015-06-12 Google Security Research
Three bypasses and a fix for one of Flash's Vector.<*> mitigations - 2015-08-19 - Chris Evans - Google Project Zero
CVE-2015-2419 – Internet Explorer Double-Free in Angler EK  - 2015-08-10 - FireEye
Bedep’s DGA: Trading Foreign Exchange for Malware Domains - 2015-04-21 - Dennis Schwarz - Arbor ASERT
Post publication reading :
Attacking Diffie-Hellman protocol implementation in the Angler Exploit Kit - 2015-09-08 Kaspersky
Analysis of Adobe Flash Player ID3 Tag Parsing Integer Overflow Vulnerability (CVE-2015-5560) - 2016-01-12 - Nahuel Riva - CoreSecurity

Paul’s Security Weekly #432

Jack's Uplifting Rants, Stories of the Week - Episode 432 - August 27, 2015

In our first segment: No seriously, Jack was in rare form: Uplifting, sympathetic, offering help, and dare I say trying to be positive! After 45 minutes of this, we just wanted the old Jack back...


Jack gets into full rant mode in this segment, where we cover some more news about the epic Ashley Madison breach, a smart fridge that got hacked, and more!


Show Notes:

Security Weekly Web Site:

CIPL Comments on the Indonesian Draft Data Protection Regulation with New White Paper on Cross-Border Data Transfers Mechanism

On August 20, 2015, the Centre for Information Policy Leadership at Hunton & Williams (“CIPL”) filed comments on the Indonesian Draft Regulation of the Minister of Communication and Information (RPM) on the Protection of Personal Data in Electronic Systems. The comments were limited to the issue of cross-border data transfers and were submitted in the form of a new CIPL white paper entitled Cross-Border Data Transfer Mechanisms.

This white paper is directed at all policy makers and legislators who are drafting privacy laws that contain cross-border transfer restrictions for personal data. The paper argues that while an approach to cross-border data transfers that relies on “accountability” rather than transfer restrictions is both viable and preferable, an increasing number of countries are including cross-border transfer restrictions modeled on the EU example. Given this trend, privacy laws that contain cross-border data transfer restrictions should also include the full range of existing exceptions and derogations to such restrictions, as well as a comprehensive set of available cross-border transfer mechanisms to enable accountable global data flows. These mechanisms include:

  • Contracts: The law should allow cross-border transfers on the basis of contractual arrangements that stipulate appropriate data privacy and security controls to be implemented by the organizations.
  • Corporate Rules: The law should allow cross-border transfers based on binding corporate rules.
  • Cross-Border Rules: The law should allow for enforceable corporate cross-border privacy rules modeled on the APEC Cross-Border Privacy Rules.
  • Codes of Conduct, Certifications, Privacy Marks, Seals and Standards: The law should allow for the use of certified codes of conduct, certifications, privacy marks, and seals and standards as cross-border transfer mechanisms.
  • “Safe Harbors” and Self-Certification Arrangements: The law should allow the possibility of cross-border transfers based on negotiated safe harbor arrangements, including arrangements that rely on self-certification to a given privacy standard, coupled with enforcement.
  • Consent: The law should allow cross-border data transfers on the basis of data subject consent.
  • Adequacy and Whitelists: The law should allow adequacy rulings and “whitelists.”

Any derogations and exceptions to cross-border data transfer restrictions should be comprehensive in light of global practice.

For more information, read our previous blog entry on the Indonesia Draft Regulation.

German DPA Fines Two Companies for the Unlawful Transfer of Customer Data as Part of an Asset Deal

On July 30, 2015, the Bavarian Data Protection Authority (“DPA”) issued a press release stating that it imposed a significant fine on both the seller and purchaser in an asset deal for unlawfully transferring customer personal data as part of the deal.

In the press release, the DPA stated that customer data often have significant economic value to businesses, particularly with respect to delivering personalized advertising. If a company terminates its business, it may sell its valuable economic assets, including customer data, to another company as part of an asset deal. In addition, insolvency administrators may try to sell the customer data maintained by the business during the insolvency process.

According to the press release, the Bavarian DPA fined both the seller and the purchaser for unlawfully transferring email addresses of customers of an online shop. The exact fines were not announced, but the press release mentions that they were in the five-figure range.

The DPA also stated that transferring customer email addresses, phone numbers, credit card information and purchase history requires prior customer consent or, alternatively, customers must be given prior notice about the intent to transfer such personal data so they have an opportunity to object to the transfer.

Since the seller and the purchaser failed to obtain customer consent or give the customers an opportunity to object, the DPA found both companies in violation of German data protection law. The DPA also pointed out that both seller and purchaser are “data controllers” and thus have broader responsibilities than data processors under German data protection law.

In addition, the DPA stated that it will act similarly in future cases and will fine companies that sell customer data in a non-compliant manner during asset deals.

Third Circuit Upholds FTC’s Authority to Regulate Companies’ Data Security Practices

On August 24, 2015, the United States Court of Appeals for the Third Circuit issued its opinion in Federal Trade Commission v. Wyndham Worldwide Corporation (“Wyndham”), affirming a district court holding that the Federal Trade Commission has the authority to regulate companies’ data security practices.

As we previously reported, the case stems from Wyndham’s challenge to the FTC’s authority to bring a 2012 suit against Wyndham, in which the FTC alleged that the company’s failure to maintain reasonable security contributed to three separate data breaches involving hackers accessing sensitive consumer data. Wyndham challenged the FTC’s authority to bring charges against private companies’ data security, arguing that by adopting targeted security legislation such as the Gramm-Leach-Bliley Act and the Health Insurance Portability and Accountability Act of 1996, Congress had precluded the FTC’s jurisdiction over data security. Wyndham also argued that before bringing a Section 5 enforcement action, the FTC must publish “rules, regulations, or other guidelines” setting out the acceptable security standards.

In today’s decision, the Third Circuit’s three-judge panel upheld the U.S. District Court for the District of New Jersey’s April 2014 ruling that the unfairness prong of Section 5 of the FTC Act does empower the FTC to bring lawsuits against private companies for insufficient data security practices, and that it is not required to publish rules or regulations regarding what constitutes reasonable security standards.

In a statement released by the FTC, FTC Chairwoman Edith Ramirez said, “Today’s Third Circuit Court of Appeals decision reaffirms the FTC’s authority to hold companies accountable for failing to safeguard consumer data. It is not only appropriate, but critical, that the FTC has the ability to take action on behalf of consumers when companies fail to take reasonable steps to secure sensitive consumer information.”

Online Trust Alliance Releases Privacy and Data Security Framework for Internet of Things

On August 11, 2015, the Online Trust Alliance, a nonprofit group whose goal is to increase online trust and promote the vitality of the Internet, released a framework (the “Framework”) for best practices in privacy and data security for the Internet of Things. The Framework was developed by the Internet of Things Trustworthy Working Group, which the Online Trust Alliance created in January 2015 to address “the mounting concerns and collective impact of connected devices.”

The Framework focuses on two categories within the Internet of Things: (1) home automation and connected home products, such as smart appliances; and (2) wearable technologies, such as fitness trackers. The Framework lists 23 minimum requirements as a “proposed baseline for any self-regulatory and/or certification program” for the Internet of Things. These requirements include:

  • making privacy policies easily available to review prior to purchasing or downloading a product;
  • disclosing how long the consumer’s personal information will be retained;
  • encrypting or hashing personal information in storage and in motion;
  • developing and implementing a breach response and consumer safety notification plan, which should be reviewed at least semi-annually; and
  • creating controls and/or documentation that enable the consumer to set, revise and manage privacy and security preferences, including what types of information are transmitted via a specific device.

In addition to the minimum requirements, the Framework lists 12 other recommendations and considerations for companies in the Internet of Things space. These include:

  • disclosing whether personal information is being stored and accessed in the cloud;
  • providing a history of privacy notice changes that the customer may review; and
  • enabling the consumer to return a product without charge after reviewing the privacy practices that are presented during the initial product set up.

The Online Trust Alliance has requested public comments that it will incorporate into the formal release of the Framework. Comments may be submitted at the Online Trust Alliance’s website by September 14. The Framework comes at a time of increased scrutiny of this burgeoning area. In January, we reported on the Federal Trade Commission’s report on the Internet of Things.

What’s next for StopBadware in Tulsa

We asked Tyler Moore, StopBadware's research advisor and the boffin who's taking over our core operations, to expand on his plans for the organization in Tulsa and to throw in some 90s references. He obliged. 

Dr. Tyler Moore on the new version of StopBadware

Recently we announced that StopBadware is transferring operations to the University of Tulsa. In today's blog post I will fill in some more details on this exciting new chapter of the organization. Some things will change as a result, but our non-profit mission to make the web safer will remain.

First, let me tell you a bit about myself and my history with StopBadware, which I hope will go a long way to help solve the mystery of how StopBadware has ended up in Tulsa. (Hint: it's not because of Hanson. And I promise the circumstances are happier than when Chandler was transferred there after sleeping in a meeting on Friends.)

I first began interacting with StopBadware in 2008 while I was a postdoctoral fellow at Harvard's Center for Research on Computation and Society. I wanted to engage with StopBadware due to my research interests in cybercrime measurement. We collaborated on several projects, one of which culminated in a 2012 paper describing an experiment that demonstrated the impact of transmitting detailed notices in cleaning up websites distributing malware. The paper was co-authored by Marie Vasek, who is now my Ph.D student and Research Scientist at StopBadware.

Since 2013, StopBadware has been closely collaborating with my research team under Marie's supervision. The website testing intern has regularly been an undergraduate student I have recruited from my courses. Last year, I became StopBadware's research advisor, further formalizing my long-term involvement with the organization.

When StopBadware's board of directors decided earlier this year to move away from being a stand-alone 501c3 non-profit organization, I volunteered to bring StopBadware back to its roots in academia. StopBadware will become a program operating within the Security Economics Laboratory at the Tandy School of Computer Science at the University of Tulsa, where I cut my teeth as an undergraduate security researcher and where I recently joined the faculty.

This change in organization will bring several benefits. One is that it should greatly reduce operating costs, as I will be serving as Director pro bono, and we can share other overheads with an existing institution. Another is that we will be able to continue to serve as a true non-profit—something that in the eyes of staff and community is both unique and essential in this space.

The new StopBadware will concentrate on the core competencies that we offer. First, we will continue the testing and review program, in which anyone can request independent review of URLs blacklisted for malware by StopBadware's data providers. Second, we will continue the Data Sharing Program (DSP), in which StopBadware serves as a trusted broker for community-contributed feeds of security datasets. Third, StopBadware's research mission will be expanded. We plan to more extensively mine the data contributed to the DSP and other sources. Finally, we intend to greatly expand the publication of data related to web-based badware. Our aim is to provide even greater transparency into the fight against web-based malware, so that we might more accurately track progress, highlight accomplishments and encourage improvements on the part of the community.

We still need your help, in terms of contributing data, services and financial assistance. Donations will still be required in order for StopBadware to continue thriving in the years ahead. If you are interested in supporting StopBadware as we move onto the next chapter, please get in touch by emailing me at

- Tyler

Should one fret over the leaked Ashley Madison data?

Several news sites have reported that 15 GB of identity data stolen last month from Ashley Madison has been made available on the darknet. Three sites have since sprung up that allow interested parties to query the data to ascertain the identity of Ashley Madison users. The site allowed married people to have short extramarital affairs. While the morality of the services provided may be questionable, and is perhaps best left to the judgment of individuals, there is a serious risk of reputation damage even if the data is fake.
There are several reasons why it may be. Firstly, this is not the first leak to appear online; there have been several in the span of the last month. Then there is the question of the validity of the email addresses and other details, which were never verified. There is always a probability that a prominent person's or an associate's identity was used to create a profile. One analysis suggests that 90% of the users were male and that most of the female profiles were fake. If this is true, then users subscribed but may not have been able to use the site. Many users may have subscribed out of curiosity or for fun. Some articles suggest that, once subscribed, removing a personal profile from the site was not easy. Finally, there is a strong suspicion that some of this data may have been amalgamated from other breaches.

On the flip side, there are several reports of individuals verifying that they were users of the site and confirming that their email IDs appear in the released data.
Whatever the truth may be, I would like cybercitizens to know that, sordid as the affair seems, they should not let their personal lives be disrupted purely by data put out on the net that cannot be verified.

FTC Reaches Settlement with Thirteen Companies over Safe Harbor Misrepresentations

On August 17, 2015, the Federal Trade Commission announced proposed settlements with 13 companies over allegations that they misled consumers by falsely claiming to be Safe Harbor certified when their certifications had lapsed or they had never been certified at all.

Seven companies, including Golf Connect, LLC, Contract Logix, LLC and Forensics Consulting Solutions, LLC, allegedly claimed to have a current certification in one or both of the U.S.-EU or U.S.-Swiss Safe Harbor Framework when, in actuality, their certifications had lapsed.

Six companies, including Dale Jarrett Racing Adventure, SteriMed Medical Waste Solutions and California Skate-Line, are alleged to have claimed certification in one or both Safe Harbor programs despite never having applied for membership in either program.

Each of the proposed settlement agreements prohibits the companies “from misrepresenting the extent to which they participate in any privacy or data security program sponsored by the government or any other self-regulatory or standard-setting organization.”

Delaware Governor Signs Set of Online Privacy Bills

On August 7, 2015, Delaware Governor Jack Markell signed four bills into law concerning online privacy. The bills, drafted by the Delaware Attorney General, focus on protecting the privacy of website and mobile app users, children, students and crime victims.

1. The Delaware Online and Personal Privacy Protection Act

The Delaware Online and Personal Privacy Protection Act (S.B. 68) will require operators of commercial internet services such as websites and mobile apps to make a privacy policy conspicuously available to the extent they collect online personally identifiable information (“PII”) of Delaware residents. Pursuant to the bill, the website or mobile app must have a privacy policy in place that discloses, among other things, information regarding the service’s collection and disclosure of PII and its online tracking practices.

This bill also aims to protect children’s privacy by restricting online businesses from advertising or marketing specific types of age-restricted products (such as alcohol and firearms) and services (such as certain gambling activities) on websites and mobile apps directed to children. In addition, the bill regulates the privacy practices of online businesses that primarily provide services related to e-books, including audio books. The bill restricts book service providers from knowingly disclosing a user’s PII to third parties without written consent and requires them to post an annual transparency report on their website.

2. The Student Data Privacy Protection Act

The Student Data Privacy Protection Act (S.B. 79) is modeled on California’s Student Online Personal Information Privacy Act and restricts education technology providers from selling student data, using student data to engage in targeted advertising directed to students or their families, amassing a profile on students to be used for non-educational purposes or disclosing student data unless in accordance with a permissible purpose set forth in the bill.

3. The Employee/Applicant Protection for Social Media Act

Like similar laws in other states, the Employee/Applicant Protection for Social Media Act (H.B. 109) places restrictions on an employer requiring or requesting access to an employee’s or applicant’s personal social media account.

4. The Victim Online Privacy Act

The Victim Online Privacy Act (H.B. 102) provides privacy protection for witnesses and victims of crimes. It prohibits posting or publicly disclosing online contact information or images of crime victims, material witnesses or their families for the purpose of inciting someone to commit violence against them.

8 steps to prevent a stolen phone from ruining your digital life

Smartphones are lost because they are accidentally forgotten in public places or stolen. A phone today is a cybercitizen's gateway to their digital life. It allows the use of apps for services such as banking, social networking and taxi booking, and serves as storage for personal pictures and videos, email, instant messaging and telephony.
Most phones have an Internet finder program which helps locate phones connected to the Internet. The service works well if the phone is forgotten at a place likely to have a lost-and-found counter, such as an airport or restaurant, where the staff is unlikely to pocket it. More often, the key risk is that the battery runs out, effectively shutting down the phone. Even when a lost phone is picked up by a person wanting to return it, a study has shown that most people browse private data like contacts and pictures, understandably to locate the owner.
Most thieves quickly switch off the phone and remove the SIM card to disable the Internet finder applications. When a phone is stolen or lost, there are three risks that the owner faces.
Financial Loss
Typically, you lose the value of the phone plus the additional cost of any calls made from it, which you obviously have to pay for. While insurance may be available to recover part of the cost of the phone, the cellular provider needs to be alerted quickly to deactivate the number and prevent fraudulent calls. Ensuring that the phone is protected by a strong lock screen password will mitigate the risk of expensive calls.
Reputation Loss
Many personal applications, such as Facebook, Twitter, email and other social media accounts, remain logged in and can be accessed without a password, allowing personal information to be read or malicious comments to be written. Such comments may damage personal reputation or be defamatory, which may result in soured relationships or legal action. Here too, a strong lock screen password can help: if the thief is unable to crack the password, the simplest action is to format the phone, reload the operating system and sell it on the black market.
Privacy Loss
Privacy can be lost in two ways: by viewing data stored directly in the phone's memory or on memory cards, such as personal pictures; and by reading private posts and email or looking up the browsing history. Private data, such as sexting pictures of other individuals received and stored on the phone, may also compromise their privacy.
Four steps that cybercitizens should take to reduce the risks to themselves and the incentive a thief gets from a stolen phone:
1. Set a strong password and a short lock screen timeout. If your phone provides the option to erase data after several unsuccessful passcode attempts (typically 10), activate it. New phones disallow reformatting of the operating system without a password, rendering the phone worthless and reducing the incentive to steal it. A strong password or passcode has at least 8 characters, including some combination of letters, numbers and special characters.
2. Avoid using external memory cards unless they are encrypted.
3. Update the phone regularly, to ensure that vulnerabilities which can be exploited to unlock password-protected phones are patched.
4. Back up contacts and other data.
Four steps that cybercitizens should take when a phone has been stolen, or lost and returned:
1. Use the Internet finder app to locate the phone and erase its data.
2. Reset all passwords for apps and accounts, even if the phone has been returned.
3. If the phone is returned, reformat it and reload the operating system to remove any malware loaded surreptitiously. Malware can be used to spy, steal credentials and cause an even bigger financial loss.
4. Block your SIM card by calling your cellular provider.

Why DNS is awesome and why you should love it

It's no secret that I love DNS. It's an awesome protocol. It's easy to understand and easy to implement. It's also easy to get dangerously wrong, but that's a story from a few weeks ago. :)

I want to talk about an interesting implication of DNS's design decisions that benefits us as penetration testers. It's difficult to describe these decisions as good or bad; they're just what we have to work with.

What I DON'T want to talk about today is DNS poisoning or spoofing, or similar vulnerabilities. While cool, those attacks generally require the attacker to take advantage of poorly configured or vulnerable DNS servers.

Technically, I'm also releasing a tool I wrote a couple weeks ago: dnslogger.rb that replaces an old tool I wrote a million years ago.

Recursive? Authoritative? Wut?

As always, I'll start with some introduction to how DNS works. If you already know DNS, you can go ahead and skip to the next section.

DNS is recursive. That means that if you ask a server about a domain it doesn't know about (that is, a domain that isn't cached or a domain that the server isn't the authority for), it'll either pass it upstream to another DNS server (recursive) or tell you where to go for the answer (non-recursive). As always, we'll focus on recursive DNS servers - they're the fun ones!

If no interim DNS server has the entry cached, the request will eventually make it all the way to the authoritative server for the domain. For example, the authoritative server for * is - my server (and hopefully the server you're reading this on :) ). That is, any request that ends with - and that isn't cached - will eventually go to my server. See the next section for information on how to set up your own authoritative DNS server.

Let's look at a typical setup. You're on your home network. Your router's ip address is probably the usual, and is plugged into a cable modem. When you connect your laptop to your network, DHCP (aka, magic) happens, and your DNS server probably gets set to (unless you've manually configured it to, which you should). When your router connects to your cable modem, more DHCP (aka, more magic) happens, and its DNS server gets set to the ISP's DNS server.

When you do a lookup, like "dig", your computer sends a DNS request to saying "who is"? Obviously, your router has no idea - he's just a stupid Linksys or whatever - so he has to forward the request to the ISP's DNS server.

The ISP's DNS server gets the request, and it has no idea what to do with it either. It certainly doesn't know who "" is, so it's gonna forward the request to its DNS server, whatever that happens to be. Or it might tell the router where to look for a non-recursive query. Since at this point it's out of our hands, it doesn't really matter.

Eventually, some DNS server along the way is going to say "hey, why don't we just go to the source?", and through a process that leading scientists believe is magic (there's a lot of magic in DNS :) ), it will look up the authoritative server for, discover it's, and send the request there.

My server will see the request, and, assuming something is listening on UDP port 53, have the opportunity to respond.

The response can be any IP address for an A (IP) or AAAA (IPv6) request; a name for a CNAME (alias) or MX (mail) request; or any ol' text for a TXT request. It can also be NXDomain - "domain not found" - or various error messages (like "servfail").
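Those record types map directly onto classes in Ruby's stdlib Resolv library, which is handy for experimenting. Here's a small sketch (the name and address are placeholders, not real hosts) that builds a response carrying an A record and a TXT record and round-trips it through the wire format:

```ruby
require 'resolv'

# Build a DNS response with an A record and a TXT record.
msg = Resolv::DNS::Message.new(1234)
msg.qr = 1   # mark it as a response rather than a query
msg.add_question("example.test", Resolv::DNS::Resource::IN::A)
msg.add_answer("example.test", 60, Resolv::DNS::Resource::IN::A.new("203.0.113.7"))
msg.add_answer("example.test", 60, Resolv::DNS::Resource::IN::TXT.new("any ol' text"))

# encode produces the raw wire format; decode parses it back.
decoded = Resolv::DNS::Message.decode(msg.encode)
answers = []
decoded.each_answer { |name, ttl, data| answers << [name.to_s, ttl, data] }
```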

One of the cool things is that even if we return "domain not found", we still see that a request happened, even if the person doing the lookup sees that it failed! We'll see some examples of why that's cool shortly.

How do I get an authoritative server?

The sad part is, getting an authoritative server isn't free. You have to buy a domain, which is on the order of $10 / year, give or take.

Beyond that, it's just a configuration thing. I don't want to spend a ton of time talking about it here, so check out this guide, written by Irvin Zhan, for instructions on how to do it on Namecheap.

I personally did it on Godaddy. It took some time to figure out, though, so prepare for a headache! But trust me: it's worth it.

The set up

We'll use - my test domain - for the remainder of this. Obviously, if you want to do this yourself, you'll need to replace that with whatever domain you registered. We'll also use dnslogger.rb, which you'll get if you clone dnscat2's repository.

Getting dnslogger.rb to work is mostly easy, but permissions can be a problem. To listen on UDP/53, it has to run as root. It also needs the "rubydns" gem installed in a place where it can be found. That can be a little annoying, so I apologize if it's a pain. "rvmsudo" may help.

If anybody out there is familiar with how to properly package Ruby programs, I'd love to chat! I'm making this up as I go along :)
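If wrestling with the rubydns gem gets annoying, the core trick can be sketched with nothing but Ruby's stdlib. This is a simplified stand-in for dnslogger.rb, not its actual code: it logs whatever question arrives and answers NXDomain.

```ruby
require 'resolv'
require 'socket'

# Decode a raw DNS query, log the question, and build an NXDomain reply
# that echoes the transaction id and question section back.
def nxdomain_response(raw_query)
  query = Resolv::DNS::Message.decode(raw_query)
  reply = Resolv::DNS::Message.new(query.id)
  reply.qr = 1                                # this is a response
  reply.rcode = Resolv::DNS::RCode::NXDomain  # "no such domain"
  query.each_question do |name, typeclass|
    puts "Got a request for #{name} [type = #{typeclass.name.split('::').last}]"
    reply.add_question(name, typeclass)
  end
  reply.encode
end

# Serving it means binding UDP/53, which is why root is needed:
#   sock = UDPSocket.new
#   sock.bind("0.0.0.0", 53)
#   loop do
#     data, addr = sock.recvfrom(512)
#     sock.send(nxdomain_response(data), 0, addr[3], addr[1])
#   end
```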

What does DNS look like?

All right, let's mess around!

I'll start by having no DNS server running at all on - basically, the base state. From another host, if you try to ping it, you'll see this:

$ ping
Ping request could not find host Please check the name and try again.

Conclusion? It's down. If you were investigating an incident and you saw that message, you'd conclude that there's nothing there, right? Probably?

Let's fire up dnslogger.rb:

$ sudo ruby ./dnslogger.rb
dnslogger v1.0.0 is starting!

Starting dnslogger DNS server on

Then do the same ping (with a different domain, because caching can screw you up):

$ ping
Ping request could not find host Please check the name and try again.

It's the exact. Same. Response. The only difference is, on the DNS server, we see this:

$ sudo ruby ./dnslogger.rb
dnslogger v1.0.0 is starting!

Starting dnslogger DNS server on
Got a request for [type = A], responding with NXDomain

What's this? We saw the request! Even if the person doing the lookup thought it failed, it didn't: WE KNOW.

That's really cool, because it's a really, really stealthy way to find out if somebody is looking you up. If you do a reverse DNS lookup for, you'll see:

$ dig -x

And if you look up the forward record:

$ dig
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 57980
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

NXDOMAIN = "no such domain". Totally stealth!

Why is it so awesome?

Let's say you're testing for cross-site scripting on a site. Post <img src="" /> everywhere. If you later see a request like "" come in, then guess what? You found some stored XSS on their admin page!

Let's say you're looking for shell injection. Normally, you do something like "||ping -c5 localhost". If it takes 5 seconds, it's probably vulnerable to shell command injection [thanks albinowax!]. That's lame. Instead, do a query for "myquery||nslookup". If you see the query, it's definitely vulnerable. If you don't, it's almost certainly not.

Let's say you're looking for XXE. Normally, you'd stick something like "<!ENTITY xxe SYSTEM "file:///etc/passwd" >]><foo>&xxe;</foo>" into the XML. That works great - IF it returns the data. If it doesn't, you see nothing, and it probably failed. Probably. But if you change the "file:///" URL to "", you'll see the request in your DNS logs, and you can confirm it's vulnerable!

Let's say you're wondering if a system is executing a binary you're sending across the network. Create a binary that attempts to connect to You'll instantly know if anybody attempted to run it, and in their logs they'll see nothing more than a failed DNS lookup. As far as they know, nothing happened!
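All four of those checks get more useful when every probe carries a unique subdomain, so a hit in the DNS log points back at exactly one injection point. A tiny helper along those lines (the class name and the canary.example placeholder domain are mine, not part of dnslogger.rb):

```ruby
require 'securerandom'

# Mint unique canary hostnames under a domain you control, and remember
# which probe each one belongs to. Replace "canary.example" with your
# own registered domain.
class Canary
  def initialize(base = "canary.example")
    @base = base
    @probes = {}   # token => probe label
  end

  # Returns a hostname like "a1b2c3d4.xss-admin.canary.example".
  def mint(label)
    token = SecureRandom.hex(4)
    @probes[token] = label
    "#{token}.#{label}.#{@base}"
  end

  # Given a hostname seen in the DNS log, recover the originating probe.
  def lookup(hostname)
    @probes[hostname.split(".").first]
  end
end

canary = Canary.new
xss_host = canary.mint("xss-admin")
cmd_host = canary.mint("cmd-search")
# e.g. embed "<img src=\"http://#{xss_host}/\" />" in a form, or send
# "myquery||nslookup #{cmd_host}" to a suspect parameter.
```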

The coolest thing is, if you're responding with NXDomain, then as far as the client or IDS/IPS/Wireshark/etc. knows, the domain doesn't exist and the connection doesn't happen. Nothing even attempts to connect - it doesn't even send a SYN. How could it? It just looks at the domain and "NOPES" right outta there.

If some poor server admin has to figure out what's happening, what's s/he going to see? A request to a domain which, if they ping, doesn't exist. At that point, they give up and declare it a false positive. What else can they do, really?

There are so many applications. Looking for SQL injection? Use a command that does a DNS lookup (I don't know enough about SQL to do this). Looking for an RFI vuln? Try to include a file from your domain. Wondering if a company will try emailing you without risking getting an email (I'm sure I can come up with a scenario)? Give them "" as your email address. If I try to email that from gmail, it fails pretty much instantly:

Delivery to the following recipient failed permanently:

Technical details of permanent failure:
DNS Error: Address resolution of failed: Domain name not found

But I still see that they tried:

$ sudo ruby ./dnslogger.rb
dnslogger v1.0.0 is starting!

Starting dnslogger DNS server on
Got a request for [type = MX], responding with NXDomain
Got a request for [type = MX], responding with NXDomain
Got a request for [type = AAAA], responding with NXDomain
Got a request for [type = A], responding with NXDomain

I see the attempt, but neither gmail nor the original sender can tell that apart from a misspelled domain - because it's identical in every way!

(I'm mildly curious why it does a AAAA/A lookup - maybe somebody can look into that)

Returning addresses

dnslogger.rb can return more than just NXDomain - it can return actual addresses! If you start dnslogger.rb with a --A argument:

$ sudo ruby ./dnslogger.rb --A ""

Then it'll return that IP address for every A request for any domain:

$ ping

Pinging [] with 32 bytes of data:
Reply from bytes=32 time=85ms TTL=44
Reply from bytes=32 time=80ms TTL=44
Reply from bytes=32 time=73ms TTL=44
Reply from bytes=32 time=90ms TTL=44

Ping statistics for
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 73ms, Maximum = 90ms, Average = 82ms

If you do a lookup directly to the server, you can use any domain:

$ dig @
;; ANSWER SECTION:             86400   IN      A

In the past, I've found a DNS server that always returns the same thing to be useful for analyzing malware (also database software, which can often be considered the same thing). In particular, setting a system's DNS server to the IP of a dnslogger.rb instance, then returning for all A records and ::1 for all AAAA records, can be a great way to analyze malware without letting it connect outbound to any domains (it will, of course, still be able to connect outbound if it uses an IP address instead of a domain name):

$ sudo ruby ./dnslogger.rb --A "" --AAAA "::1"
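Wiring up a fixed answer takes only a few more bytes than an NXDomain reply. A sketch (again illustrative, not the actual dnslogger.rb) of answering every A query with one fixed IPv4 address:

```ruby
# Sketch (illustrative, not the real dnslogger.rb) of answering every
# A query with one fixed IPv4 address, the way --A is described above.
def fixed_a_response(packet, ip)
  id = packet.unpack1('n')
  header = [id, 0x8180, 1, 1, 0, 0].pack('n6')  # response, RD+RA, 1 answer
  question = packet[12..-1]                     # echo the question back
  answer = [0xC00C, 1, 1].pack('n3') +          # name pointer to offset 12, type A, class IN
           [86400].pack('N') +                  # TTL, matching the dig output above
           [4].pack('n') +                      # RDLENGTH: 4 bytes of IPv4
           ip.split('.').map(&:to_i).pack('C4') # RDATA: the fixed address
  header + question + answer
end
```

An AAAA answer (for --AAAA "::1") works the same way, with type 28 and 16 bytes of RDATA.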

What else can you do?

Well, I mean, if you have an authoritative DNS server, you can have a command-and-control channel over DNS. I'm not going to dwell on that, but I've written about it in the past :).


The entire point of this post is this: it's possible to tell if somebody is trying to connect to you (opening a TCP connection, sending an email, pinging you, etc.) without them knowing that you know.

And the coolest part of all this? It's totally invisible. As far as anybody can tell, the connection fails and that's all they know.

Isn't DNS awesome?

Privacy by Design Certification offered by Ryerson University

On May 25, 2015, the Privacy and Big Data Institute at Ryerson University in Canada announced that it is offering a Privacy by Design Certification. Privacy by Design is a “framework that seeks to proactively embed privacy into the design specifications of information technologies” to obtain the most secure data protection possible. Organizations that attain the certification will be permitted to post a “Certification Shield” “to demonstrate to consumers that they have withstood the scrutiny of a rigorous third party assessment, assuring the public that their product or service reflects the viewpoint of today’s privacy conscious consumer.”

The certification is founded on the seven privacy by design principles published by the Executive Director of Ryerson University’s Privacy and Big Data Institute more than a decade ago. When announcing the certification, the Privacy and Big Data Institute also released the accompanying Assessment Control Framework and an Application Form for organizations that want to become certified.

I lost money because my petrol pump was hacked by attendants!

The neighborhood petrol pump I occasionally use was in the news for allegedly tampering with its meter readings. Some of the staffers had hacked the circuitry to modify the pulser readings that convert flow volume into the digital readout. As a consequence, the bill value was inflated by 5%. Hacking is typically associated with software and remote Internet connections, but all sorts of meter readings can be tampered with to skim small sums of money or develop glitches that result in inflated bills.
The only way to tackle such misuse is with surprise calibration checks and stringent penalties. In the case of the above petrol pump, the ingenious system also had a switch to toggle back to normal values during a calibration inspection.

The police believe this particular fraud may be widespread, which simply demonstrates how easily the maker of the modified pulser has been able to sell his invention without being caught.

Visualizing eight years of independent reviews

StopBadware has been performing independent reviews of websites blacklisted by our data providers for more than eight years. As we've explained in the past, a manual review done by our staff is not always necessary: if a webmaster requests a StopBadware review of a site on Google's Safe Browsing blacklist, the first step in our review process is an automated request for Google to rescan the site in search of malicious code. If Google's automated systems don't find anything suspicious, that site will come off Google's blacklist without our ever having to touch it. When Google still finds malware, or when one of our other data providers is the blacklisting party, a member of our website testing team uses a variety of tools to scour the site for malicious code and other bad behavior.

As our home page proclaims in red, we've helped de-blacklist more than 171,000 websites since 2007. Before we shutter operations as an independent nonprofit next month, we want to give our community a better idea of what goes into that number. 

Since we started collaborating with Google, and later ThreatTrack Security and NSFocus, we've performed 53,167 manual reviews. We've also processed an additional 188,149 review requests that were resolved automatically thanks to our automated integration with Google. Those aren't all unique requests, so combining them doesn't yield an accurate number. Here's what all those review requests look like over time:

Why the decline? 

You'll undoubtedly notice that we received many more review requests early on than we do today. Better security awareness, wide availability of relatively low-cost security tools, and default use of things like Webmaster Tools all contribute to the decline we've experienced in review requests. We also have better ways of detecting and weeding out abusive requests than we used to. 

Unfortunately, something else that's contributed to the decline in review requests is malware distributors' widescale use of stealthier, more targeted methods like malvertising. When a resource is compromised only very briefly (e.g., through an infected ad network), even when blacklist operators are able to detect the infection and warn users away, the compromise is often resolved too quickly for StopBadware's Clearinghouse to reflect that the resource was ever blacklisted. Generally speaking, if something is blacklisted for fewer than six hours, we won't have a record of it in our Clearinghouse. On the one hand, this is good news, in that we want blacklists to operate as narrowly as possible to maximize user protection while minimizing penalty to site owners; on the other hand, this is bad news, in that malicious actors are able to effectively utilize powerful technologies to spread malware in ways that are difficult to detect and counter. 

What's not included in this data? 

What you don't see in this chart is the tens of thousands of URLs we've reviewed in bulk for web hosting providers, AS operators, and other network providers over the years. We've worked with everyone from dynamic DNS companies and bulk subdomain providers to small resellers and abuse departments at big companies to clean up malicious resources on their networks and help remove them from blacklists. The majority of this process is manual, and because it's initiated based on trust and human communication instead of by clicking a button, bulk review data isn't reflected in our public review data. 

StopBadware's review process will continue to operate normally during and after our operations transfer to our research team at the University of Tulsa. Thanks to our research scientist, Marie Vasek, for putting this data together!

Hacking SMART services in Cars, Homes, and Medical Devices – a cinch!

Businesses are reinventing themselves by transforming traditional services and service delivery into digital services. Digital services utilize smart products to provide enhanced service quality and additional features, and to collect data that can be used to improve performance. Smart products can be remotely controlled over Wi-Fi or cellular connections, and combine software, sensors that make dumb devices smart, cloud infrastructure and mobile apps.
Examples of digital products and services are network connected cars, home appliances, surveillance systems, wearables, medical devices, rifles and so on. Very recently ethical hackers exploited a software glitch that allowed them to take control of a Jeep Cherokee while on the road and drive it into a ditch. All this with the hapless driver at the wheel!

While the car hack made headlines and led to the recall of 1.4 million vehicles, it also signaled the beginning of an era where cyber-attacks or software glitches cause physical harm to cyber citizens, blurring the lines between safety and security. Cyber-attacks in the near future will do a lot more damage than destroying reputations, stealing money or spying on intimate moments people would prefer to keep private; they may maim or kill, in a targeted or random fashion, and that too in the privacy of one’s own home.
The severity of some of the exploits demonstrated by ethical hackers was downplayed because the attacker required physical access to the vehicle to execute the attack. I, for one, do not know what happens to my vehicle while it is serviced or valet parked, both ideal opportunities to fiddle with the electronic systems and even modify the firmware.

All smart devices will be connected and updatable over wireless networks. Wireless updates are ideal opportunities for hackers to obtain access to or control over these devices. However, digital products and services must have built-in defenses not only against over-the-air hacks but equally against risks from technicians, mechanics or others who have physical access to the smart infrastructure.
Startups with limited budgets may struggle to provide adequate security to their new incubations, allowing ample opportunity for maliciously minded individuals and cyber criminals to find ways to compromise the service. Investment in smart product security will be driven by liabilities around safety regulations, compliance and strict penal provisions.

Potao Express samples


2011- July 2015
  • Aka Sapotao and node69
  • Group - Sandworm / Quedagh APT
  • Vectors - USB, exe as doc, xls
  • Victims - RU, BY, AM, GE 
  • Victims - MMM group, UA gov
  • Has been serving modified versions of the encryption software (Win32/FakeTC), with a backdoor included, to selected targets.
  • Win32/FakeTC - data theft from encrypted drives
  • The Potao main DLL only takes care of its core functionality; the actual spying functions are implemented in the form of downloadable modules. The plugins are downloaded each time the malware starts, since they aren’t stored on the hard drive.
  • 1st: Full plugins, with an export function called Plug. Full plugins run continuously until the infected system is restarted.
  • 2nd: Light plugins, with an export function called Scan. Light plugins terminate immediately after returning a buffer with the information they harvested off the victim’s machine.
  • Some of the plugins were signed with a certificate issued to “Grandtorg”:
  • Traffic 
  • Strong encryption. The data sent is encapsulated using the XML-RPC protocol.
  • MethodName value 10a7d030-1a61-11e3-beea-001c42e2a08b is always present in Potao traffic.
  • After receiving the request, the C&C server generates an RSA-2048 public key and signs this generated key with another, static RSA-2048 private key.
  • In 2nd stage the malware generates a symmetric AES-256 key. This AES session key is encrypted with the newly received RSA-2048 public key and sent to the C&C server.
  • The actual data exchange after the key exchange is then encrypted with the AES-256 key using symmetric cryptography, which is faster.
  • The Potao malware sends an encrypted request to the server with computer ID, campaign ID, OS version, version of malware, computer name, current privileges, OS architecture (64 or 32bits) and also the name of the current process.
  • Potao USB - uses social engineering, exe in the root disguised as drive icon
  • Potao Anti RE -  uses the MurmurHash2 algorithm for computing the hashes of the API function names.
  • Potao Anti RE - encryption of strings
  • Russian TrueCrypt Win32/FakeTC - The malicious program code within the otherwise functional TrueCrypt software runs in its own thread. This thread, created at the end of the Mount function, enumerates files on the mounted encrypted drive, and if certain conditions are met, it connects to the C&C server, ready to execute commands from the attackers.
  • IOC
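The two-stage key exchange described in the bullets above can be sketched as follows. This is an illustrative reconstruction from the description only, not Potao's actual code; the digest choice (SHA-256) and cipher mode (CBC) are assumptions:

```ruby
require 'openssl'

# Illustrative reconstruction of the key exchange described above.
# Key names, digest and cipher mode are assumptions, not Potao's code.

static_key = OpenSSL::PKey::RSA.new(2048)   # attacker's long-term RSA-2048 key

# Stage 1: the C&C generates an ephemeral RSA-2048 key and signs its
# public half with the static private key, so the client can authenticate it.
ephemeral = OpenSSL::PKey::RSA.new(2048)
sig = static_key.sign(OpenSSL::Digest.new('SHA256'), ephemeral.public_key.to_der)

# Client side: verify the ephemeral key against the pinned static public key.
ok = static_key.public_key.verify(OpenSSL::Digest.new('SHA256'), sig,
                                  ephemeral.public_key.to_der)
raise 'ephemeral key not signed by static key' unless ok

# Stage 2: the client generates an AES-256 session key and sends it back
# wrapped under the ephemeral RSA public key.
session_key = OpenSSL::Random.random_bytes(32)
wrapped = ephemeral.public_key.public_encrypt(session_key)

# The C&C unwraps the session key; all further traffic is AES-256.
recovered = ephemeral.private_decrypt(wrapped)
cipher = OpenSSL::Cipher.new('AES-256-CBC')
cipher.encrypt
cipher.key = recovered
iv = cipher.random_iv
beacon = cipher.update('computer ID, campaign ID, OS version...') + cipher.final
```

Signing the ephemeral key prevents anyone without the static private key from impersonating the C&C, and switching to AES for the bulk data is the standard speed trade-off the report notes.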


CVE-2015-2419 (Internet Explorer) and Exploits Kits

As published by FireEye, Angler EK is now exploiting CVE-2015-2419, fixed with MS15-065.

Angler EK :

It seems they might have started to work on that exploit as early as 2015-07-24, when some instances briefly used code to gather the ScriptEngineVersion from redirected visitors:

Angler EK gathering ScriptEngineVersion data the fast way.
The first pass I made today showed a new POST call and successfully exploited a VM that used to be safe from Angler.

CVE-2015-2419 successfully exploiting IE11 in windows 7
(Here bedep grabbing Pony and TeslaCrypt then doing some AdFraud)

I spent (too much ;) ) time trying to decode that b value in the POST reply.
Here are some materials :

- The landing after first pass of decoding and with some comments :

The POST call is handled by String['prototype']['jjd']; ggg is sent in the POST data, as well as the ScriptEngineVersion (17728 in the shared pass).

- The l() function handling the post :
- The post data and reply after first pass of decoding :

Files : 2 Fiddlers (ScriptEngineVersion gathering and successful pass - use malware as password)

Thanks :
Horgh_RCE for his help

Magnitude :
( I am waiting for some strong confirmation on CVE-2015-2426 used as PrivEsc only here )

Magnitude successfully exploiting CVE-2015-2419 to push an elevated (CVE-2015-2426) Cryptowall on IE11 in Win7
As you can see, the CVE-2015-2419 implementation is a rip of Angler EK's (it even contains their XTea key, although the payload is in the clear)

Note : The CVE-2015-2426 seems to be used for privilege escalation only

Cryptowall dropped by Magnitude executed as NT Authority\system after CVE-2015-2426

The pass showing the privilege escalation has been associated with a Flash exploit as well.

Files : CVE-2015-2419 pass (password: malware)
CVE-2015-5122 pass featuring CVE-2015-2426 (password : malware)

Thanks :
Horgh_RCE , EKWatcher and Will Metcalf for their help

Nuclear Pack:

Nuclear Pack exploiting IE11 in Win7 with CVE-2015-2419 to push TeslaCrypt
Files :  Fiddler (Password is malware)

Neutrino :
CVE Identification by Timo Hirvonen

Neutrino successfully exploiting CVE-2015-2419 on IE11 in Windows 7
(Out of topic payload : c7692ccd9e9984e23003bef3097f7746  Betabot)

Files: Fiddler (Password is malware)


RIG :

RIG successfully exploiting CVE-2015-2419
(Out of topic payload : fe942226ea57054f1af01f2e78a2d306 Kelihos (kilo601))

Files : Fiddler (password is malware)

Hunter :
@hunter_exploit 2015-08-26

As spotted by Proofpoint Hunter EK has integrated CVE-2015-2419

Hunter Exploit Kit successfully exploiting CVE-2015-2419
Files : Fiddler (password is malware)

Kaixin :

Files: Fiddler here (password is malware)
( out of topic Payload : bb1fff88c3b86baa29176642dc5f278d firing PCRat/Gh0st ET rule 2016922 )

Sundown :
2016-07-06 - Thanks  Anton Ivanov (Kaspersky) for confirmation

Sundown successfully Exploiting CVE-2015-2419 - 2016-07-06
cmd into wscript into a Neutrino-ish named / RC4ed payload suggests this is a rip of the Neutrino implementation

( Out of topic payload: bcb80b5925ead246729ca423b7dfb635 is a Netwire Rat )

Files : Sundown_CVE-2015-2419_2016-07-06 (password is malware)

Read More :
Hunter Exploit Kit Targets Brazilian Banking Customers - 2015-08-27 - Proofpoint
CVE-2015-2419 – Internet Explorer Double-Free in Angler EK - 2015-08-10 - Sudeep Singh, Dan Caselden - FireEye
2015-08-10 - ANGLER EK FROM SENDS BEDEP - this pass, shared by Brad from Malware-Traffic-Analysis, includes the CVE-2015-2419
Generic bypass of next-gen intrusion / threat / breach detection systems - 2015-06-05 - Zoltan Balazs - Effitas
Post publication Reading :
Attacking Diffie-Hellman protocol implementation in the Angler Exploit Kit - 2015-09-08 Kaspersky

Defcon 23: Let’s End Clickjacking

So, here's my Defcon talk, ultimately about ending clickjacking by design.

TL;DR: The web is actually fantastic, and one of the cool things about it is the ability for mutually distrusting entities to share the same browser, or even the same web page. What’s not so cool is that embedded content has no idea what’s actually being presented to the user — Paypal could have a box that says “Want to spend $1000?” and somebody could shove an icon on top of it saying “$1.00” and nobody could tell, least of all Paypal.

I want to fix that, and all other clickjacking attacks. Generally the suggested solution involves pixel scraping, i.e. comparing what was supposed to be drawn with what actually was. But it's way too slow to do that generically. Browsers don't normally know what pixels are ultimately drawn; they just send a bunch of stuff to the GPU and say “you figure it out”.

But they do know what they send to the GPU. Web pages are like transparencies, one stacked over the next. So, I’ve made a little thing called IronFrame, that works sort of like Jenga: We take the layer from the bottom, and put it on top. Instead of auditing, we make it so the only thing that could be rendered, is what should be rendered. It works remarkably well, even just now. There’s a lot more work to do before web browsers can really use this, but let’s fix the web!

Oh, also, here’s a CPU monitor written in JavaScript that works cross domain.

Darknet, where child pornography is rampant

Child porn is rampant in what is known as the dark web or darknet: the part of the Internet that cannot be reached using a search engine like Google. It is accessed using a special browser (Tor), which is freely downloadable and works to ensure the anonymity of the user online. It achieves this by encrypting communication and bouncing it across a network of nodes before it reaches the intended site. The only information the destination site possesses is the IP address of the last node, which keeps the original user anonymous. The downside of the Tor network is its slow speed.

Coupling an anonymous network with an anonymous currency like Bitcoin allows illegal activity such as the buying and selling of drugs, child porn and counterfeits to flourish without the fear of tracking either information or financial flows. Cybercriminals, terrorists, drug peddlers and pedophiles, among others, use the darknet to further their business, as the darknet protects both their and their customers’ identities.

Criminal users on the darknet are savvy and sophisticated in covering their tracks and erasing the digital fingerprints they leave online. They conduct their business on secret, password-protected websites limited to trusted users (excluding undercover police), utilize sophisticated hard disk encryption (including some with multiple passwords, each opening a different volume), distribute storage across multiple computers to ensure that no single computer holds a complete image, and move sites frequently. These tactics, coupled with the volume of sites on the darknet, make it a formidable task for law enforcement to identify criminal rings and catch them.

Making the darknet safe requires detectives to impersonate criminals or their customers to infiltrate criminal rings. It is a tedious task with limitations in jurisdiction and prosecution. In the next few years this old-fashioned method will be supplemented with technology to map and analyze darknet sites, contents and activity to profile criminal behavior.

For governments wanting to crack down on child porn, as in India, the only option is to set up a team of specialized investigators to explore darknet activity originating from within the country and to partner with their counterparts from like-minded countries to nab criminals within their jurisdictions.

Neiman Marcus Seeks En Banc Review

On August 3, 2015, Neiman Marcus requested en banc review of the Seventh Circuit’s recent decision in Remijas v. Neiman Marcus Group, LLC, No. 14-3122. As we previously reported, the Seventh Circuit found that members of a putative class alleged sufficient facts to establish standing to sue Neiman Marcus following a 2013 data breach. During that breach, hackers gained access to customers’ credit and debit card information.

In its petition for rehearing en banc, Neiman Marcus argued that the “objectively reasonable likelihood” of future injury standard applied by the panel was rejected by Clapper v. Amnesty Int’l USA, 133 S. Ct. 1138 (2013), which requires potential future injuries to be “certainly impending.”

Neiman Marcus also contended that the panel relied on speculation and conjecture rather than concrete factual allegations to assess injury. It emphasized that the plaintiffs had been fully reimbursed for fraudulent charges and, according to Neiman Marcus, the cost of mitigating future harm does not satisfy standing unless that future harm is imminent, as stated under Clapper.

In the Neiman Marcus data breach, only credit card information was compromised. Neiman Marcus argued that the likelihood of identity theft is significantly lower when card information is the only information compromised. In contrast, the likelihood of identity theft may be higher when names, addresses, log-in information, and Social Security numbers are compromised.

As Neiman Marcus noted, there is now a circuit split on whether the risk of future fraud and identity theft, or their associated mitigation costs, confer standing. In a pre-Clapper decision, Reilly v. Ceridian Corp., 644 F.3d 38 (3d Cir. 2011), the Third Circuit ruled that the risk of future harm does not establish standing where hackers gained access to names, addresses, Social Security numbers, dates of birth, and bank account information.

According to The Practitioner’s Handbook for Appeals to the United States Court of Appeals for the Seventh Circuit at 161, “it is more likely to have a petition for writ of certiorari granted by the Supreme Court than to have a request for en banc consideration granted.”

Can child porn be blocked by banning websites?

The Indian government is trying to block child porn by banning websites, an ineffective strategy, primarily due to the difficulty of identifying child porn websites. Child porn is traded within closed rings of pedophiles using the dark Internet: sites on the Internet not accessible through search engines. Pornographic material is actively bought and sold between collectors, who form these rings using peer-to-peer software and encrypted communications. Some reports estimate that there are over 100,000 individuals who deal in pornography through secret chat rooms and other communication channels.
Child porn is broadly defined as the creation, distribution and collection of photographs, audio or video recordings of sexual activity involving a prepubescent person. The pornographic content may range in severity from clothed posing and nakedness to explicit sexual activity, assault and bestiality.
Children who are victims of child pornographers suffer physical pain, somatic symptoms and physiological distress. Many do not complain out of loyalty to the offender (who could be a relative) and a sense of shame.
One of the ways child porn is produced is through the malicious use of social networks and the Internet to groom innocent children into sharing explicit images of themselves, and then blackmail them into producing more content. The content is then sold to other collectors for a fee. With the widespread availability of webcams and the Internet, the remote pornographer has direct video access to a groomed child, within the once secure confines of the child's bedroom.
Reducing the amount of child porn on the Internet is a noble initiative and one that requires the co-operation of several stakeholders such as law enforcement, parents, victims, social groups, ISPs, search engines and the community. Catching and shutting down rings has to be a priority, and ISPs hosting dark sites need to quickly detect and shut down such child abuse sites. The catch rate of child pornographers is quite low, at around 1,000 a year, with no mechanism to prevent repeat offenses.
In India, going simply by the increased spate of media reports on physical child abuse in prominent schools, I would believe that physical child abuse is a larger problem than online pedophilia. All parents must be alert to the cues their child provides in order to quickly identify abuse.

Google Granted Permission to Appeal to the UK Supreme Court

On July 28, 2015, the UK Supreme Court announced its decision to grant permission in part for Google Inc. (“Google”) to appeal the England and Wales Court of Appeal’s decision in Google Inc. v Vidal-Hall and Others.

As we reported previously, the claimants in this case were users of Apple’s Safari browser who argued that during certain months in 2011 and 2012, Google collected information about their browsing habits via cookies placed on their devices without their consent and in breach of Google’s privacy policy. The Court of Appeal ruled on two important issues.

The first issue was whether or not there was a tort of “misuse of private information” under English law. The Court of Appeal upheld the lower court’s decision and affirmed that there is such a tort under English law, but that this is not a new cause of action. Rather, the Court of Appeal stated that its approach “simply gives the correct legal label to one that already exists.”

The second issue was whether damages under Section 13(2) of the Data Protection Act 1998 (the “Act”) can be awarded in circumstances in which the claimant has not suffered any financial harm. The claimants argued that they had suffered anxiety and distress, but did not allege that they suffered financial harm. The Court of Appeal held that, on a literal interpretation, Section 13(2) of the Act does not permit any damages in the absence of financial harm, but noted that it does not appear to be compatible with EU Data Protection Directive 95/46/EC (the “Directive”). The Directive permits claims for damages without financial harm. Building on the evolution of English case law in this area over the last decade, the Court of Appeal held that the claimants could recover damages from Google without showing financial harm, regardless of language to the contrary in Section 13(2) of the Act.

Subsequently, Google applied for permission to appeal to the Supreme Court on the following grounds:

  • whether the Court of Appeal was right to hold that the claimants’ claims for misuse of private information are tort claims for the purposes of the rules relating to service of the proceedings out of the jurisdiction;
  • whether the Court of Appeal was right to hold that Section 13(2) of the Act was incompatible with Article 23 of the Directive; and
  • whether the Court of Appeal was right to decline the application of Section 13(2) of the Act on the grounds that it conflicts with the rights guaranteed by Articles 7 and 8 of the EU Charter of Fundamental Rights.

The Supreme Court refused permission to appeal on the first ground on the basis that it does not raise an arguable point of law, but granted permission to appeal on all other grounds. It is anticipated that the Supreme Court will hear the appeal during 2016.

Shock News: Trusted Sites Serve Malware in Ads

Yes, I know. We shouldn't really be particularly surprised that a legitimate site -
even one the size of Yahoo - has ended up mistakenly serving some form of badware through their advertising networks. It’s not the first time. Yahoo hit the headlines for malware related problems in 2014, when an affiliate traffic pushing scheme targeted Yahoo users with malware served through adverts on the Yahoo website, and now it’s happened again. 

Ad revenue on the Internet is hard to live on at the best of times, and we can expect "lowest cost" behaviours, including, but not limited to, fairly rudimentary checks on the intentions of advertisers.

The obvious thing to do here is to bleat on about the efficacy of having a web filter in fighting some of those attacks - you've read that before, hey, you may have even read it before from me. Fill in this section on your own, as an exercise for the reader.

You probably also know how important HTTPS interception is - of course, this malware was served over HTTPS, wouldn't want any pesky insecure mixed content now, would we? Again, I’ve expounded at length on the subject. No HTTPS scanning = no security. Don't accept "blacklists" of sites that get MITM scanned: the delivery site won't be on that list, and your malware sails on through free and easy.

The thing I want to mention today is the other big secret of content filtering: some web filters only apply the full gamut of their filtering prowess to sites that are not already in their blocklists. This is wonderful for performance. It might even mean you only need a single web filter to provide for a huge organisation - but when a "trusted" site, one that's already "known" to the web filter, bypasses some of the content filtering in order to save a few CPU cycles, you may be getting a false economy.

Hunton’s Privacy Law Blog Listed Among the Top-Ranked Legal Blogs

Hunton & Williams is pleased to announce the firm’s Privacy & Information Security Law Blog has been ranked the #1 privacy and data security blog and the #4 overall legal blog by LexBlog’s 2015 Am Law 200 Blog Benchmark Report. Recently released, the report catalogues all blogs published by Am Law 200 law firms and calculates the rankings based on (1) the amount of traffic each blog generates, (2) their technology infrastructures and (3) the ability of each blog to incorporate responsive design for multiple devices. Hunton & Williams’ Global Privacy and Cybersecurity team is a leader in its field and has been ranked by Computerworld magazine, Chambers and Partners and The Legal 500 as a top law firm globally for privacy and data security.

CIPL to Host APEC Cross-Border Privacy Rules Workshop in the Philippines

On August 29, 2015, the Centre for Information Policy Leadership at Hunton & Williams (“CIPL”) will host a half-day workshop in Cebu, Philippines, on the APEC Cross-Border Privacy Rules (“CBPR”) and their role in enabling legal compliance and international data transfers. The CBPR are a privacy code of conduct developed by the 21 APEC member economies for cross-border data flows in the Asia-Pacific region.

The workshop is designed for information controllers and third party certification organizations (Accountability Agents) interested in the APEC CBPR, as well as for regulators in the Asia-Pacific region who are charged with enforcing the code of conduct. In addition, the workshop will introduce information processors based in APEC economies to the proposed APEC Privacy Recognition for Processors (“PRP”). The workshop also will cover the ongoing work on “dual certification” under both the CBPR and the EU’s Binding Corporate Rules.

The format of the workshop will be interactive and will focus on practical issues relevant to companies that are interested in seeking or providing CBPR certification (and PRP recognition when it becomes available).

The workshop will take place on the margins of the APEC Data Privacy Subgroup and Electronic Commerce Steering Group meetings during APEC’s Senior Officials Meeting, and will benefit from the participation of many CBPR and PRP experts from government and the private sector who will be present at these meetings.

The workshop will be accessible to both APEC delegates and non-APEC delegates.

For inquiries or to register for this workshop, please contact Markus Heyder at by August 10, 2015.