Monthly Archives: October 2014

Having Fun with Password Self-Reset Mechanisms

You know what makes me crazy? Security people who don't understand how crappy attempts to push security policy actually drive security (in the real world) lower. Sometimes, and this makes it a little bit less bad, it's not security people that are responsible but well-meaning developers, project managers, or others who simply don't understand.

The quintessential example of this phenomenon is the password self-service reset functionality built into many websites. It's almost 2015, and the other day I was forced to register for a website - I can't really tell you why they needed me to set up a username and password - but I couldn't do what I needed to without an unfortunate string of events that all but guaranteed I would be upset.

First is that nagging feeling that this site is going to get hacked, or already has been. You know the one. As a security professional (or often just someone with some sense) you pull up a website which just screams "We took zero security precautions because we know nothing about web development (while using the tag)" - and suddenly you realize you almost have to give this site some of your personal information. Lovely.

Then there's the feeling that when this site gets hacked there's very little you can do, because they're going to need at least some of your info. You get through registration and you can't continue without setting up those "password self-reset questions" so many sites are infamous for. Genius questions which no one could ever find out about you, like "Where did you go to high school?" or "What is your favorite color?" or "What is your favorite food?". Brilliant stuff like that. Sometimes they give you a choice of 5 different ones (all of which stink) and you have to pick 3. In this case I had to do exactly that, but the questions were infuriating. One asked for my mother's maiden name, another for my high school best friend, and another for the last 4 digits of my favorite credit card (seriously?!).

So I picked the ones that I knew were the least destructive (when spilled all over the Internets) and right before I clicked Next I thought of something. Why in the world am I giving them the real answers? I have a password manager which will remember these for me, and my password incidentally, so why not get creative?

So here's my advice to you - get creative!

Your favorite color? peanuts
Your first car? Orange
Your best friend in High School? Polar Bear

See this way when someone pillages the website's database of all that clear-text "security stuff" at least the data they steal won't be usable against you at some other website. Also, use per-website passwords. I have to be honest, at this point if you're not using a password manager with built-in password generator for even the most basic websites - you deserve what's comin' to you.
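If you'd rather not invent nonsense answers by hand, any language with a cryptographic random generator can do it for you - your password manager stores the gibberish alongside the password. A minimal Python sketch; the word list and character set below are arbitrary choices for illustration, not recommendations:

```python
import secrets
import string

# A tiny, arbitrary word list; in practice you'd use a much larger one
# (e.g. a diceware-style list). These words mean nothing - that's the point.
WORDS = ["peanuts", "orange", "polar", "bear", "granite", "umbrella",
         "walrus", "cobalt", "trombone", "glacier"]

def fake_answer(n_words: int = 2) -> str:
    """Random, meaningless 'security question' answer."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def site_password(length: int = 20) -> str:
    """Random per-site password (character set is an arbitrary example)."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print("Where did you go to high school? ->", fake_answer())
print("Password ->", site_password())
```

The key property: the "answer" has no relationship to you, so a breached answers database at one site tells an attacker nothing useful about you anywhere else.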

Good luck!

A Few Thoughts on Privacy in the Age of Social Media

Everyone already knows there are privacy issues related to social media and new technologies. Non-tech-oriented friends and family members often ask me questions about whether they should avoid Facebook messenger or flashlight apps. Or whether it's OK to use credit cards online in spite of recent breach headlines. The mainstream media writes articles about leaked personal photos and the Snappening. So, it's out there. We all know. We know there are bad people out there who will attempt to hack their way into our personal data. But, that's only a small part of the story.

For those who haven't quite realized it, there's no such thing as a free service. Businesses exist to generate returns on investment capital. Some have said about Social Media, "if you can't tell what the product is, it's probably you." To be fair, most of us are aware that Facebook and Twitter will monetize via advertising of some kind. And yes, it may be personalized based on what we like or retweet. But, I'm not sure we fully understand the extent to which this personal, potentially sensitive, information is being productized.

Here are a few examples of what I mean:

Advanced Profiling

I recently viewed a product marketing video targeted at communications service providers. It describes the massive adoption of mobile devices and broadband connections, suggesting that by next year there will be 7.7 billion mobile phones in use with 15 billion connections globally, and that "All of these systems produce an amazing amount of customer data" - to the tune of 40TB per day, only 3% of which is transformed into revenue. The rest isn't monetized. (Gasp!) The pitch is that by better profiling customers, telcos can improve their ability to monetize that data. The thing that struck me was the extent of the profiling.

As seen in the screen capture, the user profile presented extends beyond the telco services acquired or service usage patterns into the detailed information that flows through the system. The telco builds a very personal profile using information such as favorite sports teams, life events, contacts, location, favorite apps, etc. And we should assume that favorite sports team could easily be religious beliefs, political affiliations, or sexual interests.

IBM and Twitter

On October 29, IBM and Twitter announced a new relationship that enables enterprises to "incorporate Twitter data into their decision-making." In the announcement, Twitter describes itself as "an enormous public archive of human thought that captures the ideas, opinions and debates taking place around the world on almost any topic at any moment in time." And now all of those thoughts, ideas, and opinions are available for purchase through a partnership with IBM.

I'm not knocking Twitter or IBM. The technology behind these capabilities is fascinating and impressive. And perhaps Twitter users allow their data to be used in these ways by accepting the Terms of Use. But, it feels a lot more invasive to essentially provide any third party with a siphon into the massive data that is our Twitter accounts than it would be to, for example, insert a sponsored tweet into my feed that may be selected based on which accounts I follow or keywords I've tweeted.

Instagram Users and Facebook

I recently opened Facebook to see an updated list of People I may know. Most Facebook users are familiar with the feature. It can be an easy way to locate old friends or people who recently joined the network. But something was different. The list consisted heavily of people who I sort of recognize but have never known personally.

I realized that Facebook was trying to connect me with many of the people behind the accounts I follow on Instagram. Many of these people don't use their real names, talk about their work, or discuss personal family matters on Instagram. They're photographers sharing photos. Essentially, they're artists sharing their art with anyone who wants to take a look. And it feels like a safe way to share.

But now I'm looking at a profile of someone I knew previously only as "Ty_Chi the landscape photographer" and I can now see that he is actually Tyson Kendrick, retail manager from Chicago, father of three girls and a boy. Facebook is telling me more than Mr. Kendrick wanted to share. And I'm looking at Richard Thompson, who's a marketing specialist for one of the brands I follow. I guess Facebook knows the real people behind brand accounts too. It started feeling pretty creepy.

What does it all mean?

Monetization of social media goes way beyond targeted advertising. Businesses are reaching deep into any available data to make connections or discover insights that produce better returns. Service providers and social media platforms may share customer details with each other or with third parties to improve their own bottom lines. And the more creative they get, the more our sense of privacy erodes.

What I've outlined here extends only slightly beyond what I think most people expect. But we should collectively consider how far this will all go. If companies will make major financial decisions based on Twitter user activity, will there be well-funded campaigns to change user behavior on social media platforms? Will the free-flowing exchange of ideas and opinions become more heavily and intentionally influenced?

The sharing/exchanging of users' personal data is becoming institutionalized. It's not a corner case of hackers breaking in. It's a systemic business practice that will grow, evolve, and expand.

I have no recipe to avoid what's coming. I have no suggestions for users looking to hold on to the last threads of their privacy. I just think it's worth thinking critically about how our data may be used and what that may mean for us in years to come.

100 years ago (2)

On 27 October, 1914, Lieutenant Edmund Swetenham of the 2nd Battalion Durham Light Infantry was killed by a sniper in the trenches near Armentières. According to the battalion's War Diary, he had joined the battalion with the third reinforcement of 12 officers and 98 NCOs and men on the previous day, 26 October. He had been a regular officer in the regiment since April 1910 when he left Sandhurst.

APT28: A Window into Russia’s Cyber Espionage Operations?

The role of nation-state actors in cyber attacks was perhaps most widely revealed in February 2013 when Mandiant released the APT1 report, which detailed a professional cyber espionage group based in China. Today we release a new report: APT28: A Window Into Russia’s Cyber Espionage Operations?

This report focuses on a threat group that we have designated as APT28. While APT28’s malware is fairly well known in the cybersecurity community, our report details additional information exposing ongoing, focused operations that we believe indicate a government sponsor based in Moscow.

In contrast with the China-based threat actors that FireEye tracks, APT28 does not appear to conduct widespread intellectual property theft for economic gain. Instead, APT28 focuses on collecting intelligence that would be most useful to a government. Specifically, FireEye found that since at least 2007, APT28 has been targeting privileged information related to governments, militaries and security organizations that would likely benefit the Russian government.

In our report, we also describe several malware samples containing details that indicate that the developers are Russian language speakers operating during business hours that are consistent with the time zone of Russia’s major cities, including Moscow and St. Petersburg. FireEye analysts also found that APT28 has systematically evolved its malware since 2007, using flexible and lasting platforms indicative of plans for long-term use and sophisticated coding practices that suggest an interest in complicating reverse engineering efforts.

We assess that APT28 is most likely sponsored by the Russian government based on numerous factors summarized below:

Table for APT28

FireEye is also releasing indicators to help organizations detect APT28 activity. Those indicators can be downloaded at

As with the APT1 report, we recognize that no single entity completely understands the entire complex picture of intense cyber espionage over many years. Our goal by releasing this report is to offer an assessment that informs and educates the community about attacks originating from Russia. The complete report can be downloaded here: /content/dam/legacy/resources/pdfs/apt28.pdf.

The Other Side of Breach Hysteria

In a world where everyone is trying to sell you something, security is certainly no exception. But separating the hype from the truth can easily turn into a full time job if you're not careful.

With all the recent retail data breaches, it would appear as though the sky is falling in large chunks right on top of us. Every big-name retailer, and even some of the smaller ones, is being hacked, and their precious card data is being whisked away to be sold to miscreants and criminals.

Now enter the sales and marketing pitches. After every breach it would seem our mailboxes fill up with subject lines such as-
"Learn how not to be the next [breach victim] - read how our latest gizmo will keep you secure!"
I don't know about you, but the snake-oil pitch is starting to get old. While it's clear that the average buyer is getting the message about data breaches and hackers - I believe there are two other aspects of this which aren't talked about enough.

First there is the notion of "breach fatigue". If you read the news headlines you would have thought that everyone's bank accounts would be empty by now, and everyone in the United States would have been the victim of identity theft. But they haven't - or at least they haven't been impacted directly. This leads to the Chicken Little problem.

You see, many security professionals cried that security incidents did not receive enough attention. Then the media took notice, and sensationalized the heck out of incidents to an almost rock-star fervor. The issue here is that I believe people are starting to grow weary of the "Oh no! Hackers are going to steal everything I have!" talk. Every incident is the biggest there has ever been. Every incident is hackers pillaging and stealing countless credit card records and identities. The average person doesn't quite know what to make of this, so they have no choice but to mentally assume the worst. Then - over time - the worst never comes. Sure, some get impacted directly but there is this thing called zero fraud liability (in the case of card fraud) that means they are impacted - but barely enough to notice because their banks make it alright. More on this in a minute.

We as humans have a shocking ability to develop a tolerance to almost anything. Data breach hysteria is no exception. I've now seen and heard people around televisions (at airports, for example, where I happen to be rather frequently) say things like "Oh well, more hackers, I keep hearing about these hackers and it never seems to make a difference." Make no mistake, this is bad.

You see, the other side of the awareness hill, which we are rapidly approaching, is apathy. This is the kind of apathy that is difficult to recover from: we push through the first wave of apathy into awareness, then into hysteria, which leads to a much stronger version of apathy where, I believe, we will be stuck. So there we are, stuck.

If I'm honest, I'm sick and tired of all the hype surrounding data breaches. They happen every day of every week, and yet we keep acting like we're shocked that Retailer X or Company Y was breached. Why are we still even shocked? Many are starting to lose the ability to be shocked - even though the number of records breached and the scale of the intrusions are reaching absurd proportions.

The second point I'd like to make is around the notion of individual impact. Many people simply say "this still doesn't impact me" because of a wonderful thing like zero fraud liability. Those 3 words have single-handedly destroyed the common person's ability to care about their credit card being stolen. After you've had your card cloned, or stolen online, and had charges show up - you panic. Once you realize your bank has been kind enough to put the funds back, or roll back the fraudulent charges, you realize you have a safety net. Now these horrible, terrible, catastrophic breaches aren't so horrible, terrible and catastrophic. Now they're the bank's problem.

Every time someone has a case of credit card fraud the bank covers under zero fraud liability (and let's face it, most cards and banks have this today) - their level of apathy for these mega-breaches grows. I believe this is true. I also believe there is little we can do about it. Actually, I'm not sure if there is anything that needs to be done about it. Maybe things are just the way they're going to be.

There is a great phrase someone once used that I'm going to paraphrase and borrow here - things are as bad as the free market will support. If I may adapt this to security - the security of your organization is as good (or bad) as your business and your customers will support.

Think about that.

Security Lessons from Complex, Dynamic Environments

Security is hard.

Check that- security is relatively hard in static environments, but when you take on a dynamic company environment security becomes unpossible. I'm injecting a bit of humor here because you're going to need a chuckle before you read this.

Some of us in the security industry live in what's referred to as a static environment. Low rate of change (low entropy) means that you can implement a security control or measure and leave it there, knowing that it'll be just as effective today as tomorrow or the day after. Of course, this takes into account the rate at which effectiveness of security tools degrades, and understanding whether things were effective in the first place. It also means that you don't have to worry about things like a new system showing up on the network very often or a new route to the Internet. And when these do happen, you can be relatively sure something is wrong.

Early on in my career I worked for a technical recruiting firm. Computers were just a tool and companies having websites was a novelty. The ancient Novell NetWare 3.11 systems had not seen a reboot in literally half a decade but nothing was broken so everything just kept running and slowly accumulating inches of dust in the back room. When I worked there we modernized to NT 3.51 (don't laugh, I'm dating myself here) and built an IIS-based web page for external consumption. That place was a low entropy environment. We changed out server equipment never, and workstations every 5 years. If all of a sudden something new showed up in the 30 node network, I'd immediately suspect something was amiss. At the time, nothing that exciting ever happened.

Fast forward a few years and I'm working for a financial start-up. It's the early 2000s and this company is the polar opposite of a static company. We have at least 1 new server coming online a day, and typically 5-10 new IP addresses showing up that no one can identify. We get by because we have one thing going for us: the on-ramp to the Internet. We have a single T1 which connects us to the rest of the world. We drop in a firewall and an IDS (I think we used an early Snort version, maybe, plus a SonicWall firewall). When that changed - when our employees started to go mobile and thus VPN in - things got a little hairy.

Fast forward another few years and I'm working at one of the world's largest companies, on arguably one of the most complex networks mankind has ever seen. Forget trying to understand or know everything - we're struggling to keep track of the few things we DO know. Heck, we spent 4 weeks NMap'ing (and accidentally causing a minor crisis, oops) our own IP subnets to find all the NT4 systems when support finally - and seriously, for real this time - ran out.
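That "anything new is suspect" instinct from the low-entropy days boils down to a set difference between what a scan sees and what the asset inventory says should exist. A toy Python sketch - every address below is invented, and a real scan parser would be far more involved:

```python
# Known-asset inventory (hypothetical addresses for illustration only).
known_assets = {"10.0.0.5", "10.0.0.12", "10.0.0.40"}

def parse_scan(scan_output: str) -> set:
    """Pull responding IPs out of (hypothetical) scan output, one per line."""
    return {line.strip() for line in scan_output.splitlines() if line.strip()}

# Pretend this came from today's discovery scan.
todays_scan = parse_scan("10.0.0.5\n10.0.0.12\n10.0.0.99\n")

unknown = todays_scan - known_assets   # hosts answering that nobody accounted for
missing = known_assets - todays_scan   # inventoried hosts that didn't answer

print("Investigate:", sorted(unknown))   # unaccounted-for hosts
print("Gone quiet:", sorted(missing))    # inventory entries that didn't respond
```

In a 30-node recruiting-firm network, `unknown` being non-empty is an alarm bell. In a network with a new server a day, it's Tuesday - which is exactly why asset management has to keep pace with the business, not chase it.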

Now let's look at security in the context of this article (and reported breach) - Let me highlight a few key quotes for you-
"The event was complicated by the fact that the company had undergone corporate acquisitions, which introduced more network connections, and consequently a wider attack surface. The firm had more than 100 entry and exit points to the Internet."
You may chuckle at that, but I bet you have pretty close to this at your organization. Sure, maybe the ingress/egress points you control are few, and well protected, but it's the ones you don't know about which will hurt you. Therein lies the big problem - the disconnect between business risk and information security ("cyber") risk. If information security isn't a part of the fabric of your business, and a part of the core of the business decision-making process you're going to continue to fail big, or suffer by a thousand papercuts.

While not necessarily as sexy as that APT Defender Super Deluxe Edition v2.0 box your vendor is trying to sell you, network and system configuration management, change management and asset management are things you absolutely must get right, and must be involved in as a security professional for your enterprise. The alternative is you have total chaos wherein you're trying to plug each new issue as you find out about it, while the business has long forgotten about the project and has moved on. This sort of asynchronous approach is brutal in both human effort and capital expenditure.

Now let's focus on another interesting quote from the article. Everyone likes to offer advice to breach victims, as if they have any clue what they're saying. This one is a gem-
"Going forward, “rearchitecting the network is the best approach to ensure that the company has a consistent security posture across its wide enterprise," officials advised."
What sort of half-baked advice is that?! Those of you who have worked incidents in your careers, have you ever told someone that the best thing to do with your super-complex network is to totally rearchitect it? How quickly would you get thrown out of a 2nd story window if you did? While this advice sounds sane to the person who's saying it - and likely has never had to follow the advice - can you imagine being given the task of completely rearchitecting a large, complex network in-place? I've seen it done. Once. And it took super-human effort, an army of consultants, more outages than I'd care to admit, and it was still cobbled together in some places for "legacy support".

Anyway, somewhere in this was a point about how large, complex networks and dynamic environments are doomed to security failure unless security is elevated to the business level and becomes an executive priority. I recognize that not every company will be able to do this because it won't fit their operating and risk models - but if that's the case you have to prepare for the fallout. In the cases where risk models say security is a business-level issue you have a chance to "get it right"; this means you have to give a solid effort and align to business, and so on.

Security is hard, folks.

To Reform and Institutionalize Research for Public Safety (and Security)

On October 3rd, 2014, a petition appeared online titled "Unlocking public access to research on software safety through DMCA and CFAA reform". I encourage you to go read the text of the petition yourself.

While I believe that on the whole the CFAA - and, more urgently, the DMCA - need dramatic reform if not outright repeal, I'm just not sure I'm completely on board with where this idea is going. I've discussed my displeasure with the CFAA on a few of our recent podcasts (if you follow our Down the Security Rabbithole Podcast series), and I would likely throw a party if the DMCA were repealed tomorrow - but unlocking "research" broadly is dangerous.

There is no doubt in my mind that security research is critical in exposing safety and security issues in matters that affect the greater public good. However, let's not confuse legitimate research with thinly veiled extortion or a license to hack at will. We can all remember the incident Apple had where a hacker purportedly had exposed a flaw in their online forums, then to prove his point he exploited the vulnerability and extracted data of real users. All in the name of "research" right? I don't think so.

You see, what a recent conversation with Shawn Tuma has taught me is that under the CFAA we have one of these "I'll know it when I see it" conditions where a prosecuting attorney can choose to either go after someone, or look the other way if they believe they were acting in good faith and for the public good... or some such. This type of power makes me uncomfortable as it gives that prosecuting attorney way too much room. Room for what you ask? How about room to be swayed by a big corporation... I'm looking at you AT&T.

Let me lay out a scenario for you. Say you are a security professional interested in home automation and alarm systems. You purchase one, and begin to conduct research into the types of vulnerabilities one of these things is open to - since you'll be installing it in your home and all. You uncover some major design flaws, and maybe even a way to remotely disable the home alarm feature on thousands of units across the country. You want to notify the company, get them to fix the issue, and maybe get a little by-line credit for it. Only the company slaps a DMCA lawsuit on you for reverse engineering their product, and you're in hot water. Clearly they have more money and attorneys than you do. Your choices are few - drop the research or face criminal prosecution. Odds are you're not even getting a choice.

In that scenario - it's clear that reforms are needed. Crystal clear, in fact.

The issue is we need to protect legitimate research from prosecutorial malfeasance while still allowing for laws to protect intellectual property and a company's security. So you see, the issue isn't as simple as opening up research, but much more subtle and deliberate.

How do we limit the law and protect legitimate research, while allowing for the protections companies still deserve? I think the answer lies in how we define a researcher. I propose that we require researchers to declare their research and its intent, and draft ethical guidelines which can be agreed upon (and enforced on both ends) between the researcher and the organization being researched. There must be rules of engagement, and rules for "responsible and coordinated disclosure". The laws must be tweaked to shield researchers who have declared their intent and are following the rules of engagement from prosecution by a company which is simply trying to skirt responsibility for safety, privacy and security. Furthermore, there must be provisions for matters that affect the greater good - which companies simply cannot opt out of.

Now, if you ask me if I believe this will happen any time soon, that's another matter entirely. Big companies will use their lobbying power to make sure this type of reform never happens, because it simply doesn't serve their self-interest. Having seen first-hand the inner workings of a large enterprise technology company - I know exactly how much profit is valued over security (or anything else, really). Profit now, and maybe no one will notice the big gaping holes later. That's just how it is in real life. But when public safety comes into play I think we will see a few major incidents where we have loss of life directly attributed to security flaws before we see any sort of reform. Of course when we do have serious incidents, they'll simply go after the hackers and shed any responsibility. That's just how these things work.

So in closing - I think there is a lot of work to be done here. First we need to more closely define and create formal understanding of security research. Once we're comfortable with that, we need to refine the CFAA and maybe get rid of the DMCA - to legitimize security research into the areas that affect public safety, privacy and security.

ShellShock payload sample Linux.Bashlet

Someone kindly shared their sample of the ShellShock malware described by the MalwareMustDie group - you can read their analysis here:

File: fu4k_2485040231A35B7A465361FAF92A512D
Size: 152 bytes
MD5: 2485040231A35B7A465361FAF92A512D
SHA256: e74b2ed6b8b005d6c2eea4c761a2565cde9aab81d5005ed86f45ebf5089add81
File name: trzA114.tmp
Detection ratio: 22 / 55
Analysis date: 2014-10-02 05:12:29 UTC
Antivirus Result Update
Ad-Aware Linux.Backdoor.H 20141002
Avast ELF:Shellshock-A [Expl] 20141002
Avira Linux/Small.152.A 20141002
BitDefender Linux.Backdoor.H 20141002
DrWeb Linux.BackDoor.Shellshock.2 20141002
ESET-NOD32 Linux/Agent.AB 20141002
Emsisoft Linux.Backdoor.H (B) 20141002
F-Secure Linux.Backdoor.H 20141001
Fortinet Linux/Small.CU!tr 20141002
GData Linux.Backdoor.H 20141002
Ikarus Backdoor.Linux.Small 20141002
K7AntiVirus Trojan ( 0001140e1 ) 20141001
K7GW Trojan ( 0001140e1 ) 20141001
Kaspersky 20141001
MicroWorld-eScan Linux.Backdoor.H 20141002
Qihoo-360 Trojan.Generic 20141002
Sophos Linux/Bdoor-BGG 20141002
Symantec Linux.Bashlet 20141002
Tencent Win32.Trojan.Gen.Vdat 20141002
TrendMicro ELF_BASHLET.A 20141002
TrendMicro-HouseCall ELF_BASHLET.A 20141002
nProtect Linux.Backdoor.H 20141001
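If you obtain the sample, it's worth verifying it against the published hashes before doing anything with it. A minimal Python sketch - the file written below is a harmless stand-in, NOT the actual malware, so the comparison against the published hash comes back false:

```python
import hashlib
from pathlib import Path

# Published SHA256 of the sample (from the listing above).
EXPECTED_SHA256 = "e74b2ed6b8b005d6c2eea4c761a2565cde9aab81d5005ed86f45ebf5089add81"

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 so large samples needn't fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Demo with a harmless stand-in file in place of the real trzA114.tmp.
Path("sample.bin").write_bytes(b"\x7fELF placeholder, not the real sample")
digest = sha256_of("sample.bin")
print("matches published sample:", digest == EXPECTED_SHA256)
```

Prefer SHA-256 over MD5 for this check; MD5 collisions are cheap enough that matching MD5s alone no longer prove you have the same file.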