Monthly Archives: June 2014

Episode #179: The Check is in the Mail

Tim mails one in:

Bob Meckle writes in:

I have recently come across a situation where it would be greatly beneficial to build a script to check expiration dates on certificates issued using a certain template, and send an email to our certificate staff letting them know which certificates will expire within the next 6 weeks. I am wondering if you guys have any tricks up your sleeve in regards to this situation?

Thanks for writing in, Bob. This is actually quite simple on Windows. One of my favorite features of PowerShell is that dir (an alias for Get-ChildItem) can be used on pretty much any hierarchy, including the certificate store. After we have a list of the certificates we simply filter by the expiration date. Here is how we can find expiring certificates:

PS C:\> Get-ChildItem -Recurse cert: | Where-Object { $_.NotAfter -le (Get-Date).AddDays(42) -and $_.NotAfter -ge (Get-Date) }

We start off by getting a recursive list of the certificates. Then the results are piped into the Where-Object cmdlet to filter for the certificates that expire between today and six weeks (42 days) from now (inclusive). Additional filtering can be added by simply modifying the Where-Object filter, per Bob's original request. We can shorten the command using aliases and shortened parameter names, as well as store the output to be emailed.

PS C:\> $tomail = ls -r cert: | ? { $_.NotAfter -le (Get-Date).AddDays(42) -and $_.NotAfter -ge (Get-Date) }

Emailing is quite simple too, using the Send-MailMessage cmdlet (available since PowerShell version 2). Its -Body parameter expects a string, so we pipe our results through Out-String. We can send the certificate information like this (specify your mail server with -SmtpServer):

PS C:\> Send-MailMessage -To me@blah.com -From cert@blah.com -Subject "Expiring Certs" -Body ($tomail | Out-String) -SmtpServer smtp.blah.com

Pretty darn simple if you ask me. Hal, what do you have up your sleeve?

Hal relies on his network:

Tim, you're getting soft with all these cool PowerShell features. Why, when I was your age... Oh, never mind! Kids these days! Hmph!

If I'm feeling curmudgeonly, it's because this is far from a cake-walk in Linux. Obviously, I can't check a Windows certificate store remotely from a Linux machine. So I thought I'd focus on checking remote web site certificates (which could be Windows, Linux, or anything else) from my Linux command line.

The trick for checking a certificate is fairly well documented on the web:

$ echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
openssl x509 -noout -dates

notBefore=Jun 4 08:58:29 2014 GMT
notAfter=Sep 2 00:00:00 2014 GMT

We use the OpenSSL built-in "s_client" to connect to the target web server and dump the certificate information. The "2>/dev/null" drops some extraneous standard error logging. The leading "echo" piped into the standard input makes sure that we close the connection right after receiving the certificate info; otherwise our command line will hang and never return to the shell prompt. We then use OpenSSL again to output the information we want from the downloaded certificate. Here we're just requesting the "-dates", while "-noout" stops the openssl command from displaying the certificate itself.

However, there is much more information you can parse out. Here's a useful report that displays the certificate issuer, certificate name, and fingerprint in addition to the dates:

$ echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
openssl x509 -noout -issuer -subject -fingerprint -dates

issuer= /C=US/O=Google Inc/CN=Google Internet Authority G2
subject= /C=US/ST=California/L=Mountain View/O=Google Inc/CN=www.google.com
SHA1 Fingerprint=F4:E1:5F:EB:F1:9F:37:1A:29:DA:3A:74:BF:05:C5:AD:0A:FB:B1:95
notBefore=Jun 4 08:58:29 2014 GMT
notAfter=Sep 2 00:00:00 2014 GMT

If you want a complete dump, just use "... | openssl x509 -text".

But let's see if we can answer Bob's question, at least for a single web server. Of course, that's going to involve some date arithmetic, and the shell is clumsy at that. First I need to pull off just the "notAfter" date from my output:

$ echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
openssl x509 -noout -dates | tail -1 | cut -f2 -d=

Sep 2 00:00:00 2014 GMT

You can convert a time stamp string like this into a "Unix epoch" date with the GNU date command:

$ date +%s -d 'Sep  2 00:00:00 2014 GMT'
1409616000

To do it all in one command, I just use "$(...)" for command output substitution:

$ date +%s -d "$(echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
openssl x509 -noout -dates | tail -1 | cut -f2 -d=)"

1409616000

I can calculate the difference between this epoch date and the current epoch date ("date +%s") with some shell arithmetic and more command output substitution:

$ echo $(( $(date +%s -d "$(echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
openssl x509 -noout -dates | tail -1 | cut -f2 -d=)")
- $(date +%s) ))

5452606

Phew! This is already getting pretty long-winded, but now I can check my result against Bob's 6 week threshold (that's 42 days or 3628800 seconds):

$ [[ $(( $(date +%s -d "$(echo | openssl s_client -connect www.google.com:443 2>/dev/null | 
openssl x509 -noout -dates | tail -1 | cut -f2 -d=)") - $(date +%s) )) \
-gt 3628800 ]] && echo GOOD || echo EXPIRING

GOOD

Clearly this is straying all too close to the borders of Scriptistan, but if you wanted to check several web sites, you could add an outer loop:

$ for d in google.com sans.org facebook.com twitter.com; do 
echo -ne $d\\t;
[[ $(( $(date +%s -d "$(echo | openssl s_client -connect www.$d:443 2>/dev/null |
openssl x509 -noout -dates | tail -1 | cut -f2 -d=)") - $(date +%s) )) \
-gt 3628800 ]] && echo GOOD || echo EXPIRING;
done

google.com GOOD
sans.org GOOD
facebook.com GOOD
twitter.com GOOD

See? It's all good, Bob!

Timing-safe memcmp and API parity

OpenBSD released a new API with a timing-safe bcmp and memcmp. I strongly agree with their strategy of encouraging developers to adopt “safe” APIs, even at a slight performance loss. The strlcpy/strlcat family of functions they pioneered has been immensely helpful against overflows.
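
For example, strlcpy() always NUL-terminates the destination and returns the length of the string it tried to create, which makes truncation trivially detectable. A minimal sketch (strlcpy is native on the BSDs; on Linux it comes from libbsd):

#include <stdio.h>
#include <string.h>   /* strlcpy is declared here on BSD; use <bsd/string.h> with libbsd */

void copy_name(char *dst, size_t dstsz, const char *src)
{
    /* strlcpy never writes more than dstsz bytes, including the NUL;
     * a return value >= dstsz means the source string was truncated. */
    if (strlcpy(dst, src, dstsz) >= dstsz)
        fprintf(stderr, "warning: name truncated\n");
}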

Data-independent timing routines are extremely hard to get right, and the farther you are from the hardware, the harder it is to avoid unintended leakage. If you're working in an embedded environment, your best bet is to write assembly and test it thoroughly on the target CPU under multiple scenarios (interrupts, power management throttling the clock, etc.). Moving up to C creates a lot of pitfalls, especially if you support multiple compilers and versions. Now you are subject to micro-architectural variance: caches, branch prediction, bus contention, and so on. And compilers have a lot of leeway to optimize away code whose behavior the programmer strictly intended.

While I think the timing-safe bcmp (straightforward comparison for equality) is useful, I’m more concerned with the new memcmp variant. It is more complicated and subject to compiler and CPU quirks (because of the additional ordering requirements), may confuse developers who really want bcmp, and could encourage unsafe designs.

If you ask a C developer to implement bytewise comparison, they’ll almost always choose memcmp(). (The “b” family of functions is a BSD-ism, found neither on Windows nor in current POSIX.) This means that developers using timingsafe_memcmp() will be incorporating unnecessary features simply by picking the familiar name. If compiler or CPU variation compromised this routine, that would introduce a vulnerability. John-Mark pointed out to me a few ways the current implementation could fail due to compiler optimizations. While the bcmp routine is simpler (an XOR-accumulate loop), it too could be invalidated by optimizations such as vectorization.
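
For reference, the XOR-accumulate approach looks roughly like this (a sketch modeled on the OpenBSD routine, not the verbatim source). Every byte is always touched, and the accumulator records only whether a difference exists, never where; it is also easy to imagine what an aggressive vectorizer might do to the loop.

#include <stddef.h>

/* Sketch of a timing-safe equality check: no data-dependent
 * branches and no early exit, unlike ordinary memcmp/bcmp. */
int timingsafe_bcmp_sketch(const void *b1, const void *b2, size_t n)
{
    const unsigned char *p1 = b1, *p2 = b2;
    unsigned char ret = 0;

    for (; n > 0; n--)
        ret |= *p1++ ^ *p2++;   /* accumulate differences, never branch on them */
    return ret != 0;            /* 0 if equal, 1 otherwise */
}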

The most important concern is whether this will encourage unsafe designs. I can’t come up with a crypto design that requires ordering of secret data that isn’t also a terrible idea. Sorting your AES keys? Why? Don’t do that. Database index lookups that reveal secret contents? Making the array comparison constant-time fixes nothing when the lookup itself leaks timing through reads of large blocks of RAM or disk. In any scenario that involves ordering secret data, there are much larger architectural issues to address than a comparison function.

Simple timing-independent comparison is an important primitive, but it should be used only when other measures are not available. If you’re concerned about HMAC timing leaks, you could instead hash or double-HMAC the data and compare the results with a variable-timing comparison routine. This takes a tiny bit longer but ensures any leaks are useless to an attacker. Such algorithmic changes are much safer than trying to set compiler and CPU behavior in concrete.
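
Here is a sketch of that idea using OpenSSL (link against libcrypto; the function name and key size are illustrative, not an established API): HMAC both inputs under a freshly generated random key, then compare the results with plain memcmp(). Any timing variation now leaks bits of HMAC outputs the attacker cannot predict, not bits of the secret.

#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>

/* Compare two MACs of length len without a constant-time primitive. */
int mac_equal(const unsigned char *a, const unsigned char *b, size_t len)
{
    unsigned char key[32], ha[EVP_MAX_MD_SIZE], hb[EVP_MAX_MD_SIZE];
    unsigned int la, lb;

    if (RAND_bytes(key, sizeof(key)) != 1)
        return -1;                              /* no randomness: refuse to compare */
    HMAC(EVP_sha256(), key, sizeof(key), a, len, ha, &la);
    HMAC(EVP_sha256(), key, sizeof(key), b, len, hb, &lb);
    return la == lb && memcmp(ha, hb, la) == 0; /* variable-time compare is now safe */
}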

The justification I’ve heard from Ted Unangst is “API parity”. His argument is that developers will not use the timing-safe routines if they don’t conform to the ordering behavior of memcmp. I don’t get this argument. Developers are more likely to be annoyed with the sudden performance loss of switching to timing-safe routines, especially for comparing large blocks of data. And, there’s more behavior that should intentionally be broken in a “secure memory compare” routine, such as two zero-length arrays returning success instead of an error.

Perhaps OpenBSD will reconsider offering this routine purely for the sake of API parity. There are too many drawbacks.


In Defense of JavaScript Crypto

Thai Duong wrote a great post outlining why he likes JavaScript crypto, although it’s not as strong a defense as you might guess from the title. While he makes some fair points about a few limited applications of JavaScript, his post is actually a great argument against those pushing web page JS crypto.

First, he starts off with a clever Unicode attack on JS AES by Bleichenbacher. It is a great way to illustrate how the language, with its hostility to bitwise operations and types, actively works against crypto implementers. Though Thai points out lots of different ways to work around these problems, I disagree that it’s clear sailing for developers once your crypto library deals with these issues. You’ll get to pick up where your low-level library left off.

Oh, and those of you who were looking for a defense of web page crypto for your latest app? Sorry, that’s still dumb. Google’s End-to-End will only be shipped as a browser extension.

The most ironic part of Thai’s post involves avoiding PCI audit by shipping JS to the browser to encrypt credit card numbers. Back in 2009, I gave a Google Tech Talk on a variety of crypto topics. Afterwards, a Google engineer came up to me and gave exactly Thai’s example as the perfect use case for JavaScript crypto. “We avoid the web server being in scope for PCI audits by shipping JS to the user,” he said. “This way we can strip off the SSL as soon as possible at the edge and avoid the cost of full encryption to the backend.”

He went on to describe a race-condition prone method of auditing Google’s own web servers, hashing the JS file served by each to look for compromised copies. When I pointed out this was trivial to bypass, he said it didn’t really matter because PCI is a charade anyway.

While he’s right that PCI is more about full employment for auditors & vendors than security, news about NSA tapping the Google backbone also shows why clever ways to avoid end-to-end encryption often create unintended weaknesses. I hope Google no longer underestimates their exposure to nation-state adversaries after the Snowden revelations, but this use-case for JS crypto apparently hasn’t died yet.


First Quarter 2014 Exposes 176 Million Records

Troubling Trend Of Larger, More Severe Data Breaches Continues

We are pleased to announce the release of the next installment of Risk Based Security’s Data Breach QuickView report. Analysis of data compromise activity for Q1 2014 shows that, while the number of incidents taking place remains comparable to Q1 2013, the number of records lost per incident is on the rise. The total number of records exposed in the first quarter of 2014 exceeded 176 million – roughly a 46% increase over the number of records lost in Q1 2013.

The report also highlights the continuing trend of targeting user names, e-mail addresses, and passwords. Although this type of information typically doesn’t hold the same value as Social Security or credit card numbers, it can be the key that opens the door to more valuable information. The continued focus on this type of data may indicate more complex, better-planned attacks involving third parties, both underway and on the horizon.

The Data Breach QuickView report was just released and is made possible through the partnership and combined resources of Risk Based Security and the Open Security Foundation. It is designed to provide an executive-level summary of the key findings from RBS’ analysis of Q1 2014’s data breach incidents. You can view the announcement and report here.

Google -> Doorway -> Google -> Spam

Just a few thoughts about an interesting behavior of a black-hat SEO doorway.

Typically, hackers create doorways on compromised sites to make search engines rank them for certain keywords; then, when searchers click on the links in search results, those doorways redirect them further to the site the hackers really promote. Sometimes that redirect may go through a TDS (traffic directing service), but the whole scheme remains pretty much the same:

Search results -> doorway -> beneficiary site

Today, while doing backlink research on one such pharma doorway, I encountered a different scheme: one with a loop.

The doorway had this URL structure:

http://www.hacked-site.com/blog/?prednisolone-without-prescription

When I checked it in Unmask Parasites or in Google cache, I saw spammy content with links to doorways on other hacked sites. Quite typical.

When I opened that page in a web browser, it redirected to a TDS:

hxxp://bh2r3gof .biz/sutra/in.cgi?5&from=90dc16aaf3e24ea68c94c3f784a37ff9&gdw=w-48&gdf=%2F90dc16aaf3e24ea68c94c3f784a37ff9-f08f149982bf04ffaa308aba00b2d569.txt&host=www.hacked-site.com&kw=prednisolone%20without%20prescription&HTTP_REFERER=https%3A%2F%2Fwww.google.com%2F%3Fsource%3Dnoref%26q%3Dprednisolone%2520without%2520prescription

Cloaking and conditional redirects to traffic directing services are also quite typical for doorways.

Then the TDS made another hop:

hxxp://bh2r3gof .biz/sutra/in.cgi?5&from=90dc16aaf3e24ea68c94c3f784a37ff9&seoref=8bB06MP5xFTR3TkqmNILbbW5mW30f%2B%2FMiQdbatwxiv5CUUTkQjEO75VtTs7IRqdVTmPfmX……..2FIXHlBdqV6iDd1ruYQMhqmYVCozdkTrAN76fOABicz

And then I ended up on … Google:

http://www.google.com/search?q=prednisolone+without+prescription

[Screenshot: Google search results for "prednisolone without prescription"]

And that is not very typical. But interesting.

On one hand, it looks like the spammers didn’t consider me target traffic (probably because of my IP) and redirected me back to the Google search results page for the same query I was presumed to have used when I found their site. To some searchers, this chain of redirects may look as if they clicked on a search result, Google thought for a moment, and then reloaded the same page; in other words, just a glitch.

On the other hand, it looks like a second level of search engine optimization, where the spammers fine-tune a search query that returns better doorways. I can’t help thinking this because, when I look at the Google results pages the TDS redirects me to, I realize that most of the results are doorways on hacked sites (yes, including those with fake star ratings).

[Screenshot: Google search results for "prednisolone australia"]

I know, it seems quite pointless. Why redirect people who already clicked on your doorway to a new set of search results? Even if those results contain some of your other doorways, there is no guarantee that the searcher won’t simply leave, or will click on your links rather than your competitors’. And you can’t control Google search results; the results page may well contain none of your doorways at all.

OK. Let’s try to think like the TDS owners. The TDS recognizes traffic that it doesn’t need for the pharma campaign. It may:

  1. redirect it to pharma landing pages anyway (which may decrease the quality, and thus the price, of the traffic)
  2. dispose of the traffic altogether (redirect it to a neutral third-party site, like Google)
  3. try to monetize the traffic anyway: redirect it to a scam site, porn site, malware site, or any other resource that knows how to take advantage of low-quality traffic.

<warning:unfounded speculation>

But what if the spammers also run a second campaign, one that targets the traffic that doesn’t fit their pharma campaign? They could simply redirect the traffic to the landing pages of that second campaign, but the traffic would not be targeted; people searched for different things. So maybe it is worth it to “re-target” the traffic: redirect the searchers to a Google results page for keywords relevant to the second campaign, one that contains links to their doorways.

Here’s the scenario:
People search for something on Google and click on a result. For some reason, Google reloads the page and shows them completely different results for a different query. Some searchers will definitely leave (don’t worry, this was unwanted traffic anyway), but some may become interested in what Google offers them and click on the results (after all, spammers usually promote something people need, and if it comes from Google it looks more legit). So, instead of lost or untargeted traffic, the spammers get targeted traffic from people who willingly clicked on search results. All they need to do is make sure their doorways dominate the relevant search queries (which shouldn’t be hard, since the TDS provides the search query itself and there is no need to rank for short generic queries).

</warning:unfounded speculation>

OK, enough speculation. That particular campaign was not using second-level Google optimization; it simply dumped unneeded traffic back on Google. When I opened the same doorways from IPs in different countries, the TDS redirected me to a random “Canadian Pharmacy” site from its pool of about a dozen sites.

Anyway, the point of this post is that despite Matt Cutts’ recent announcement of the second ranking update for “very spammy queries”, I still see that 50% or more of the top search results for pharma keywords point to doorways on hacked sites.

And as long as “very spammy queries” return “very spammy results”, there will be incentive for black hat SEOs to hack sites and create doorways there.


2 Weeks To Secure Your Networks… Starting…

Well, roughly 2 weeks ago. Apparently, there's a malware storm a-comin' - batten down the hatches, man the barricades, etc.

Yawn. Look, if you're not ready for this influx of malware, you're not ready to plug in your router. Surviving on the Internet during this coming malware bonanza is like surviving in a 'phone booth with 2 angry brown bears. If I said, hey, let's go with one angry brown bear instead, you wouldn't fancy your chances any better.

Ursine analogies aside, if we do get the predicted storm (and here I'm going to suggest we're looking at a level of likelihood similar to that of weather forecasting), keep doing what you're doing. It's always a good time to start doing what you're doing better, but making changes for this fairly generic incident that you're not willing to keep in place full-time is a second-rate scheme.

My advice: pick one thing you've been looking to improve about your IT security for a while, and use the press coverage to justify your budget spend - but don't show the bean counters this article.

Passwords – At it again?

The recent eBay hack got me thinking about passwords, for about the fifth time this year. After Heartbleed, I did a bit of an audit of the passwords I was using, and I hope you did too. I then moved house and had to change a bunch of address details, and in the process I found a few more places where I had passwords set up that I didn't know about. One of these places emailed me a reminder with the password in plain text. This means they are storing my password, on their server, in the clear. I'm not mean enough to name names, and indeed I have offered to help them fix it, and given a few pointers - I'm nice like that, you see!

There's a moral to this tale, however. I should be concerned that Company X's servers may be compromised, and my password released, because they stored it badly. If that was the case, I would want to change my password as soon as I heard of the breach, as an attacker would immediately be able to access my account. My best defence would probably be that my name's likely to be right in the middle of the list, and any attacker is probably working his way past Archibald Atkins up there at the top of the user list - I hope I can get to reset my creds before the bad guys get to "N"!

However, I hope that eBay are smarter (not that there's any direct evidence of this: they've been a bit evasive about how they stored our passwords). Despite that hope, I immediately changed my eBay password too. Why? Because even a hashed password can be cracked fairly easily these days, and the cracking gets easier every day.

Given a 6 character password (still accepted by many sites), hashed with MD5, it is possible to check every possible password in less than a minute on standard hardware.
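
A back-of-envelope check of that claim, with assumed (mid-2014, single-GPU) hashing rates:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double keyspace = pow(62, 6);   /* 6 chars from [a-zA-Z0-9]: ~5.7e10 candidates */
    double rate = 5e9;              /* assumed: ~5 billion MD5 hashes/sec on one GPU */
    printf("%.3g candidates at %.0g/sec: %.0f seconds\n",
           keyspace, rate, keyspace / rate);    /* prints roughly 11 seconds */
    return 0;
}

Even with the full printable-ASCII alphabet (95^6, about 7.4e11 candidates), you are looking at minutes, not hours.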

So: sites are still storing passwords in plaintext. For a while, MD5 was the go-to hash function; how many sites do you think are still using it? SHA-1? Not much better, apparently. A salt per password improves the odds, but is not unbeatable. While there's plenty a site can do "wrong" that gets your password brute-forced in no time, there's a bunch you can do wrong too, like picking a dictionary word, or something nice and short. Be aware that the bad guys are finding ways to crack passwords orders of magnitude faster, such as GPU rigs driven by CUDA or OpenCL.
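
For contrast, doing it better on the server side is not much code. A sketch assuming OpenSSL (the iteration count is illustrative, not a standard): a random salt per password plus a deliberately slow, iterated hash, so each guess costs an attacker tens of thousands of hash operations instead of one MD5.

#include <string.h>
#include <openssl/evp.h>
#include <openssl/rand.h>

/* Derive a 32-byte verifier from a password with a fresh per-password salt. */
int hash_password(const char *pw, unsigned char salt[16], unsigned char out[32])
{
    if (RAND_bytes(salt, 16) != 1)
        return -1;                  /* no randomness: refuse */
    /* PBKDF2-HMAC-SHA256; 100000 iterations is an illustrative cost factor */
    if (PKCS5_PBKDF2_HMAC(pw, strlen(pw), salt, 16,
                          100000, EVP_sha256(), 32, out) != 1)
        return -1;
    return 0;
}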

What can we do to protect ourselves against the disparity between the ability of wrong 'uns to crack passwords, and the slow uptake of more secure hashing?

You should never, ever re-use a password. I'm pretty sure I still do - probably on accounts I should have closed years ago - but tidying up your passwords is worse than changing your postal address! It's really difficult. You will need a password manager. I chose LastPass personally; some of my colleagues use Password Safe and keep the file in Dropbox. Pick the one that's right for you.

A password manager is essential to keep up with the large number of passwords you will need - however, I would advocate keeping your key passwords out of any manager - eggs, basket, and all that. So email, financial services, that sort of thing, probably should stay in your head!

Finally, if any of your sites offer two-factor authentication, please do take them up on the offer. That way you're less likely to suffer a breach while the organisation decides on the best way to tell you your password has gone walkies.

TL;DR - three things you need to remember about your passwords:


  • Two-factor where you can
  • A password manager for the many
  • Remember the few


What’s with the TrueCrypt warning?

TrueCrypt, the free open source full disk encryption program favoured by many security-savvy people, including apparently Edward Snowden, is no more. Its website now redirects to its SourceForge page, which starts with this message: "WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues. This page exists only to help migrate existing data […]"