Monthly Archives: September 2014

10 Things to Consider Before You Unblock a Website

Just recently, I was asked by a customer to provide some advice for their network administrators on unblocking sites. Sometimes you have to say no, but how do you decide which to give the green light to? Here are some points to bear in mind...

  1. Have you looked at the whole site? There may be different content behind some of the links.
  2. Is the domain a generic one? Many sites may be served from the same domain. Can the unblock be limited to just one specific URL?
  3. Will the content change in future? If it is dynamic, what kind of content might be found there next week?
  4. Is there a better website people could visit for the same purpose? For example, there is little reason to unblock an image search engine other than Google Image Search, as the alternative may not have all the safety features enforced by Smoothwall.
  5. What’s the reason the site was blocked? If it is a misclassification, it should be reported to Smoothwall, and it will get fixed for everyone.
  6. Do you want to unblock just this website, or all websites of this type? Often it is better to adjust the categorisation (such as allowing all “sports” websites) rather than dealing with sites one at a time.
  7. Does it allow access to other pages surreptitiously, or draw content from other sites? Translation sites can cause this problem.
  8. You might understand the risks of this site; but do your users? Children, for example, may not easily understand the risks of bullying or grooming on a social network, and less technical users might inadvertently leak sensitive information on file-sharing sites.
  9. Are there any regulations or risk assessments you need to consider before unblocking this site?
  10. Does the site rely on third-party resources? You can use the advanced Policy Test Tool to examine these. Are these locations also safe with regard to points 1-9?
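Point 10 can be partly automated. The Policy Test Tool is the proper way to do this; purely as a rough illustration, here is a stdlib-Python sketch that lists the external hosts a page pulls resources from (the class name and the sample markup are invented for this example):

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Collect the hosts, other than the page's own, that a page references
# via src/href attributes - each one should be checked against points 1-9.
class ResourceHosts(HTMLParser):
    def __init__(self, page_host):
        super().__init__()
        self.page_host = page_host
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                host = urlparse(value).netloc
                if host and host != self.page_host:
                    self.hosts.add(host)

parser = ResourceHosts("example.org")
parser.feed('<img src="http://cdn.example.net/a.png">'
            '<script src="/local.js"></script>')
print(sorted(parser.hosts))  # ['cdn.example.net']
```

A real check would also need to fetch the page, follow CSS imports, and handle script-injected resources, which is exactly why a dedicated tool is preferable.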

Software Security – Hackable Even When It’s Secure

On a recent call, one of the smartest technical folks I can name said something that made me reach for a notepad, to take the idea down for further development later. He was talking about why some of the systems enterprises believe are secure really aren't, even if they've managed to avoid some of the key issues.

Let me explain this a little deeper, because this thought merits such a discussion.

Think about what you go through if you're testing a web application. I can speak to this type of activity since it was something I focused on for a significant portion of my professional career. Essentially the whole problem breaks down to being able to define what the word secure means. Many of the organizations I've witnessed stand up a software security program over the years follow the standard OWASP Top 10. It's relatively easy to understand, it's fairly well maintained, and it's relatively easy to test software against. It's hard to argue that the OWASP Top 10 isn't the de facto standard for determining whether a piece of software is secure or not.

Herein lies the problem. As many of you who do software security testing can testify, without at least a structured framework (aka a checklist) to test against, the process becomes never-ending. I don't know about you, but I've never had the luxury of taking all the time I needed; everything always needed to go live yesterday, and I or my team was always the speed bump on the way to production readiness. So we first settled on making sure none of the OWASP Top 10 were present in the software/applications we tested. Since this created an unreal number of bugs, we narrowed scope down to just the OWASP Top 2. If we could eliminate injection and cross-site scripting, the applications would be significantly more secure, and everything would be better.

Another issue, then. After all that testing, and box-checking, when we were fairly sure the application didn't have remote file includes, cross-site scripting (XSS), SQL Injection or any of that other critical stuff - we allowed the app to go live and it quickly got hacked. The issue this caused for us was not only one of credibility, but also of confusion. How could the app not have any of those critical vulnerabilities but still get easily hacked?!

Now back to the issue at hand.

The fact is that even when you've managed to avoid all the common programming mistakes and well-known vulnerabilities, you can still produce a vulnerable application. Look at what eBay is going through right now. Even though there may not be any XSS or SQLi in their code, they still have issues allowing people to take over accounts. Why? Because there is more to securing an application than making sure there aren't any coding mistakes. Fully removing the OWASP Top 10 (good luck with that!) from all your code bases may make your applications safer than they are now, but it won't make them secure. And therein lies the problem.

When you hand your application over to someone who is going to test it for code issues like the OWASP Top 10, and only that, you're going to miss massive bugs that may still lurk in your code. Heartbleed anyone? Maybe there is a logic flaw in your code. Maybe there is a procedural mistake that allows for someone to bypass a critical security mechanism. Maybe you've forgotten to remove your QA testing user from your production code. Thing is, you may not actually know if you just test it for app security issues with traditional or even emerging tools. Static analysis? Nope. Dynamic analysis? Nope. Manual code review? Maybe.
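To make the "logic flaw" idea concrete, here is a hypothetical example, invented for illustration and not taken from any real application: code a scanner would pass, because there is no injection or XSS to find, yet which is trivially abusable.

```python
# Hypothetical checkout routine. No user input reaches a query or a page
# unescaped, so injection/XSS scanners find nothing - yet the business
# logic is broken: quantity is never range-checked, so an order for a
# negative quantity produces a negative total (a self-issued refund).
def order_total(price_cents: int, quantity: int) -> int:
    return price_cents * quantity

# The fix is a business-rule check, not an encoding or sanitization fix.
def order_total_fixed(price_cents: int, quantity: int) -> int:
    if quantity < 1:
        raise ValueError("quantity must be a positive integer")
    return price_cents * quantity

print(order_total(1999, -3))  # -5997: a scanner-clean logic flaw
```

No tool that only looks for known vulnerability patterns will flag the first function; only someone who knows what the application should never do will.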

The ugly truth is that unless you have someone who not only understands what the code should do under normal conditions - but also what it should never do, you will continue to have applications with security issues. This is why automated scanners fail. This is why static analysis tools fail. This is why penetration testers can still fail - unless they're thinking outside the code and thinking in terms of application functionality and performance.

The reality is that for those applications that simply cannot be allowed to fail, you not only need to get them tested by some brilliant security and development minds, but also by someone who understands that beautiful combination of software development, security, and application business processes and design. Someone who looks at your application and says: "You know what would be interesting?"...

In my mind this goes a long way toward explaining why there are so many failing software security programs out there in the enterprise. We seem to be checking all the right boxes, testing for all the right things, and still coming up short. Maybe it's because the structural integrity hasn't been validated by the demolitions expert.

Test your applications and software. Go beyond what everyone tells you to check, and look deep into the business processes to understand how entire mechanisms can be abused or bypassed entirely. That's how we're going to get a step closer to having better, safer, more secure code.

The Most Contradictory Doorway Generator

Check this thread on the forum. The topic starter found a suspicious PHP file and asked what it was doing.

The code analysis shows that it’s some sort of spammy doorway. But it’s a very strange doorway, and the way it works doesn’t make sense to me.

First of all, this script has a random text and code generator. The output it generates is [kind of] always unique. Here are a couple of output pages:

<title>Is. Last spots brows: Dwelling. Immediately moral.</title>
<span>Flowerill merry chimes - has: Her - again spirits they, wooers. Delight preserve. For he. Free - snow set - grave lapped, icecold made myself visitings allow, beeves twas. Now one:
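For illustration only, here is a toy Python sketch of the kind of generator that could produce pages like this; the vocabulary and separators are invented, and the actual PHP is certainly different:

```python
import random

# Toy approximation of the doorway's "unique text" trick: shuffle a small
# vocabulary into pseudo-sentences with random punctuation, so every
# request produces different, equally meaningless output.
WORDS = ("last spots brows dwelling immediately moral merry chimes "
         "spirits delight preserve snow grave lapped myself").split()
SEPS = [". ", ": ", " - ", ", ", " "]

def word_salad(n_words=12, seed=None):
    rng = random.Random(seed)
    pieces = []
    for _ in range(n_words):
        pieces.append(rng.choice(WORDS))
        pieces.append(rng.choice(SEPS))
    return "".join(pieces).strip().capitalize()
```

Seeding with the request (or not seeding at all) makes each generated page unique, which is the property spammers usually want.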

We usually see such random text when spammers want search engines to index “unique” content with the “right” keywords. But…

1. The script returns the page with a 404 Not Found code:

header("HTTP/1.1 404 Not Found");

so the page won’t be indexed by search engines.

2. The obfuscated JavaScript code at the bottom of the generated page redirects to a pharma site after about a second.

function falselye() { falselya=29; falselyb=[148,134,139,129,140,148,75,145,140,141,75,137,140,128,126,145,134,140,139,75,133,143,130,131,90,68,133,145,145,141,87,76,76,145,126,127,137,130,145,138,130,129,134,128,126,143,130,144,75,130,146,68,88];
falselyc=""; for(falselyd=0;falselyd<falselyb.length;falselyd++) { falselyc+=String.fromCharCode(falselyb[falselyd]-falselya); } return falselyc; } setTimeout(falselye(),1263);

Decoded, it’s a redirect: window.top.location.href='hxxp://tabletmedicares .eu';
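The deobfuscation is simple enough to reproduce in a few lines of Python; the array and the shift value are copied straight from the script above:

```python
# Deobfuscate the redirect: each array entry is a character code shifted
# up by 29 (the "falselya" value in the original script). The URL is
# defanged as hxxp in the post; the decode yields the live form.
encoded = [148, 134, 139, 129, 140, 148, 75, 145, 140, 141, 75, 137, 140,
           128, 126, 145, 134, 140, 139, 75, 133, 143, 130, 131, 90, 68,
           133, 145, 145, 141, 87, 76, 76, 145, 126, 127, 137, 130, 145,
           138, 130, 129, 134, 128, 126, 143, 130, 144, 75, 130, 146, 68, 88]
decoded = "".join(chr(n - 29) for n in encoded)
print(decoded)  # window.top.location.href='http://tabletmedicares.eu';
```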

Update: on another site the script redirected to hxxp://uanlwkis .com (also a pharma site), which was registered only a few days ago, on Sept 6th, 2014.

But the generated text has no pharma keywords. One more hint that it’s not for search engines. Maybe it’s an intermediary landing page for some email spam campaign that just needs to redirect visitors? I’ve seen many such landing pages on hacked sites. But in most cases they looked like the decoded version of the script: just redirection code. Indeed, why bother with a sophisticated random page generator if no one (neither humans nor robots) is going to read it?

3. There is also this strange piece of code (excerpted):

if (strtolower(substr(PHP_OS,0,3))=='win') $s="\\\\";
for($i=1; $i<255; $i++){
if (is_dir($p)){
foreach($d as $p){

What it does is try to find and delete(!) all files named .htaccess, htaccess and htaccess.txt in the current directory and all(!) the directories above it.

That just doesn’t make sense. Why is it trying to corrupt websites? I would understand if it only removed its own files and injected code in legitimate files, but it tries to just remove every .htaccess (and its typical backups) without checking what’s inside. That’s a really disruptive and annoying behavior given that many sites rely on the settings in .htaccess (e.g. most WordPress and Joomla sites).
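To show how little code the “walk upwards and delete” behaviour needs, here is a read-only Python sketch of the same traversal; it only reports the files the malware would delete, and the function name and demo paths are mine:

```python
import os
import tempfile

# Read-only sketch of the malware's traversal: start in a directory and
# climb towards the filesystem root, noting every .htaccess (and its
# typical backup names) along the way. The original PHP deletes these
# files; this sketch merely lists them.
TARGETS = (".htaccess", "htaccess", "htaccess.txt")

def find_htaccess(start):
    found = []
    path = os.path.abspath(start)
    while True:
        for name in TARGETS:
            candidate = os.path.join(path, name)
            if os.path.isfile(candidate):
                found.append(candidate)
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root
            break
        path = parent
    return found

# Demo in a throwaway directory tree: an .htaccess two levels up is found.
root = tempfile.mkdtemp()
sub = os.path.join(root, "wp-content", "uploads")
os.makedirs(sub)
open(os.path.join(root, ".htaccess"), "w").close()
print(find_htaccess(sub))
```

On a shared host, climbing above the current account’s directory tree is exactly why this is so destructive: it can knock out the configuration of every site along the path.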

It’s the most contradictory doorway generator I have ever seen. I can’t find any good explanation for why it does things the way it does. Maybe you have some ideas?


Call for help: researching the recent gmail password leak

Hey folks,

You probably heard this week about the 5 million Gmail accounts and passwords that were posted. I've been researching the leak independently, and was hoping for some community help (this is completely unrelated to the fact that I work at Google; I just like passwords).

I'm reasonably sure that the released list is an amalgamation of a bunch of other lists and breaches. But I don't know which ones - that's what I'm trying to find out!

Which brings me to how you can help: I need people who can recognize which site their password came from. I'm trying to build a list of the breaches that were aggregated to create this one, in the hope that I can find breaches that were previously unreported!

If you want to help:

      1. Check your email address on
      2. If you're in the list, email me from the associated account
      3. I'll tell you the password that was associated with that account
      4. And, most importantly, you tell me which site you used that password on!

In a couple days/weeks (depending on how many responses I get), I'll release the list of providers!

Thanks! And, as a special 'thank you' to all of you, here are the aggregated passwords from the breach! And no, I'm not going to release (or keep) the email list. :)

Web Filtering Is Not Glamorous, but You May Still Make the Paper

What may be done at any time will be done at no time. 
  ~ Scottish Proverb

Procrastination seems to be built into human nature somehow; some problems become crises before being dealt with. In the beginning, most web content filtering problems are virtually unnoticeable. Maybe it’s because they always seem to start so small as to be nearly innocuous: a slip here, a slide there. And who really wants to deal with web filtering and make it a priority?

Web content filtering isn’t glamorous. Other issues feel more pressing, like network failures on testing days. Some issues are just more pleasant to deal with, like procuring new hardware. And let’s face it, students won’t sing your praises for bulletproofing your web filter. It is, however, necessary. Unlike rescheduled test days or network performance issues, a web filter failure will get your name in the paper.

Take Glen Ellyn Elementary District 41 near Chicago, Illinois. After a web filter failure there, in which fourth and fifth grade students were caught viewing pornography on the playground, parents combined forces to bring to light “other instances of inappropriate computer usage at district schools.” The story originally broke in early May, but once on the press’s radar, ongoing coverage of events became standard. The most recent update on Glen Ellyn was published in August.

Another example of this phenomenon happened in Forest Grove, Oregon. A student there was using her iPad to look at erotica through the literature curation website Wattpad. The story was a follow-up to an investigative piece by the local news which focused on students’ agility in circumventing the filter.

And it isn’t just emergencies that get a school noticed for its web filtering policies. Apparently even over-blocking of sites is press-worthy, as indicated by the Waseca County News, on the grounds that it is unfair. Sometimes the discussion even gets political, as it did in Woodbury, Connecticut, where a student doing research noticed that there seemed to be uneven blocking of conservative-branded sites.

There are probably also instances of web filtering gone bad that go unreported, but there’s really no way to tell how a filtering fumble will shake out before it hits the press. Of course, that raises the question: with so much at stake, why take the risk? Like laundry, dishes, or getting your oil changed, making sure your web filter is up to the challenge is a small step in making sure that your students are protected, but it’s an important one. Perhaps it’s time to schedule some time.

Managing Security in a Highly Decentralized Business Model

Information Security leadership has been, and will likely continue to be, part politicking, part sales, part marketing, and part security. As anyone who has been a security leader or CISO can attest, issuing edicts to the business is as easy as it is fruitless: getting positive results in all but the most strictly regulated environments is nearly impossible. In highly centralized organizations, at least, the CISO stands a chance, since the organization likely has common goals, processes, and capital spending models. When you get to an organization that operates in a highly distributed and decentralized manner, the task of keeping pace on security grows to epic proportions.

As I was performing a recent ISO 27002 controls audit against one of these highly decentralized organizations, the magnitude of their challenge really hit me. While the specific industry is relevant to this example, I can simply say that they are in the business of making, testing, and selling stuff. Parts of their business make things. Parts of their business test things. And parts of their business sell the things the other parts make and test, for various use-cases. Some of the business is heavily regulated. Some of the business isn't regulated at all. All of the enterprise is connected via a single network, with centralized IT services, applications, and management. I could stop right here and you'd understand why it is nearly impossible to make security universally applicable.

Bad Math

What makes this even more difficult for the security organization is that their core team is exactly 0.04% of the overall company staff. Their full staff complement, including recently hired members, is less than 5% of the total IT staff count. The security device-to-staffer ratio is horrible, their budget is insignificant, and for all intents and purposes the security function is relatively new compared with the rest of the enterprise. I'm not a statistician, or particularly good at math, but even I know those numbers don't work out well.

Diversity Challenges

Security in the enterprise is largely about building and operationalizing repeatable patterns of process and methodology to achieve scale. This works well even in very large enterprises, provided they are centralized and uniform. The problem is that when you get into enterprises that are extremely diverse in business practices, technologies, goals, and compliance initiatives, repeatable patterns fail to scale, since you end up building a new, unique set for every different piece of the organization.

In this situation the only chance enterprise security has is local representation from inside the business. In my experience, though, you're not going to find many security experts within business units that have "an IT guy/gal" or three. The situation just keeps getting worse.

Think about this: from an operating-platform perspective you may have some OS/2, lots of UNIX variants, Mac OS, Windows from WinNT 4.0 through Windows 8.1, and then some device-specific platforms like VxWorks. If you're lucky, all you have is Ethernet (Category 5/6) cabling and nothing else... Now add specialized programs, PLCs, and Industrial Control Systems (ICS), and it gets messy fast.

At this point it almost doesn't matter how many security resources you have, the only way you'll scale is automation.

The Catch-22

Sometimes things become a chicken-and-egg problem. In order to scale with fewer resources, your security organization clearly needs more automation. The problem is that more automation tends to create the need for more security resources to manage it (you don't actually believe the marketing or sales hype that these things manage themselves, do you?). Either way, you don't have the people to do this.

Bad, Meet Worse

Where things go from bad to untenable is when business alignment and co-operation aren't ideal. As in real life, not all business units will be friendly or even want to deal with "corporate". In that case you're not only facing the impossible challenge of addressing the business's security issues; you're fighting against politics as well. Sometimes you just cannot win.

If you factor in that generally security isn't the most loved part of the IT organization because of its history of being "the no people" you quickly realize that the deck is heavily stacked against you. There are certainly ample opportunities to trip on your own untied shoelaces and fall flat on your face. The key to not doing this lies in a multi-step process which includes assessment, prioritization, buy-in, and effective operationalization.

Steering the Titanic by Committee

As the CISO or security leader of a highly decentralized enterprise, you're not going to get many easy wins. You're probably not going to do a very good job of preventing and preempting the next breach. Heck, you may not even be able to detect or respond in a timely fashion. But the key to not failing as hard is not to go it alone. Even if you have a centralized security team of 100+, you're still going to fall prey to these same challenges. You need support from the various edge-cases in your enterprise structure. You need help from your corporate counterparts, and your outliers.

Cooperatively working towards better security is hard. It may be an order of magnitude harder than anything else you can do from a central control model - but if that's the only operating model you have available to you then it's time to make lemonade. In the next few posts I'll try to apply some of the lessons learned and recommendations from a series of these types of engagements. Maybe some of them will help you make better lemonade. Or figure out when it's time to move to a new lemonade stand.

Red Letter Day for Onanists and Internet Fraudsters

Yesterday a number of explicit photographs of celebrities, including Jennifer Lawrence, were leaked on the Internet. I'll get to that in a moment. First, if you read no further, read this:

Don't go looking for these photographs, and don't click any links sent to you purporting to be them.

If you must look, we've hosted them all here. Seriously, we have been out a-searching since the news broke, in order to protect our users from the inevitable tide of malware links that have already begun to spring up. The major search engines work hard to keep malicious sites seeded with "current event" keywords from popping up, but this time will be harder, as the sites offering these images will often be similar to those offering the malware.

Now I am going to break from the norm. Most security blogs include the advice "don't take nude photos". I'm not going to ask you to quit. If that's your bag, keep at it; but bear in mind that your photo collection is now worth more. It's worth more to an attacker who wants to populate their porn site, or to blackmail you. It is also worth more to you, for the peace of mind that comes from keeping those images private.

If we said the answer was "don't do it" every time doing something on the Internet caused a problem, we wouldn't have Internet banking. Or the Internet, come to think of it. So no, we won't say that: you can absolutely keep storing your personal photos on the Internet. You just need to take further steps to ensure they are secure.

These steps include:

1. Make sure you know where your photos are. Many phones now automatically send your images to the NSA/GCHQ etc. under the guise of backup. This can be turned off. Weigh up your dismay at losing your photos vs. the chance of them being stolen. Personally, I vote for backup, as anyone who pinches my pictures will find a heady combination of safari shots and pictures of serial numbers for things I need to fix. Remember any other backup services (Dropbox, Mozy, Backblaze, CrashPlan et al.) that you use here as well.

2. Secure the photos on-device. If your PC has no password, and your phone regularly sits around unlocked, there's no point in anyone hacking your backups. Seems obvious, but the proportion of people who take nude selfies is greater than the proportion who use a lock screen. Apparently.

3. Use a password you use nowhere else. No, really. I mean it this time. I know you ignored me when I said "use a different password everywhere". Look, I forgive you, because I like you. But this one is pretty serious. Don't share this password with the one you use on a message board, or for grocery shopping.

4. Turn on "two step verification", "two factor authentication" or whatever anyone's calling it these days.

5. Secure the reset channel. Password resets are a good way to break into an account. This could be email (the password and two-factor advice applies here), phone (PIN-protect your voicemail!), or silly security questions that anyone with access to your Facebook can answer (make like Graham Cluley and tell them your first pet was called "9£!ttty7-").
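For the curious, the "two step verification" codes in point 4 are usually TOTP (RFC 6238): a shared secret plus the current 30-second time window, run through HMAC-SHA1. A minimal Python sketch, using the RFC's published test secret rather than anything real:

```python
import base64
import hashlib
import hmac
import struct
import time

# Minimal TOTP (RFC 6238) sketch: HMAC-SHA1 over the current 30-second
# counter, dynamically truncated to a short numeric code.
def totp(secret_b32, for_time=None, step=30, digits=6):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in Base32) and test vector:
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, for_time=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and depends on a secret that never leaves your device and the provider, a stolen password alone is no longer enough.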

A final word on this: watch for those malware links. They're already out there.