Category Archives: SSL

Researcher: Identifying DNS-over-HTTPS Traffic Without Decryption

DNS-over-HTTPS (DoH) traffic can apparently be detected without even decrypting it, a security researcher has discovered.

The aim of the DoH protocol is to improve overall Internet security by sending DNS requests and receiving DNS responses over HTTP protected by TLS.

DoH seeks to counter both passive monitoring and active redirection attacks by encrypting DNS data and allowing for server authentication. Similar protections are offered by DNS over TLS (DoT).

DoH traffic can actually be identified by analyzing the traffic to and from a host, according to Johannes Ullrich, Dean of Research at the SANS Technology Institute.

For his project, the researcher used Firefox, since Mozilla makes it easy to enable DoH — the browser maker has been working with DoH since 2017 — and because the browser allows TLS master keys to be logged via the SSLKEYLOGFILE environment variable (Chrome also allows this).
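For reference, enabling that key log is just a matter of setting the environment variable before launching the browser. A minimal sketch of how one might do that (the key log path is a placeholder and not taken from the research; on macOS the Firefox binary may live inside Firefox.app rather than on the PATH):

```python
# Launch Firefox with TLS key logging enabled so a tool such as Wireshark
# could later decrypt the captured traffic if desired. Illustration only:
# the key log path below is a placeholder, not taken from the research.
import os
import subprocess

env = dict(os.environ)
env["SSLKEYLOGFILE"] = os.path.expanduser("~/doh-test/tls-keys.log")

# Assumes a "firefox" binary is on PATH; adjust for your platform.
subprocess.Popen(["firefox"], env=env)
```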

Firefox 71 on Mac was used for the experiment with Cloudflare as a resolver — Mozilla has also recently added NextDNS to its Trusted Recursive Resolver (TRR) program.

Although not definitive, particularly since only a few minutes of traffic were captured, the test showed that DoH traffic is actually easy to identify.

The researcher launched Firefox after running tcpdump, and navigated to a few dozen sites. The packet capture file was loaded into Wireshark 3.1.0, which fully supports DoH and HTTP/2 (Firefox requires HTTP/2 for DoH).

“I identified the DoH traffic using the simple display filter ‘dns and tls.’ The entire DoH traffic was confined to a single connection between my host and mozilla.cloudflare-dns.com (2606:4700::6810:f8f9),” the researcher notes.

In this particular case, traffic could be identified using the hostname, but one could run their own DoH server as well.

Further analysis showed that the traffic can also be identified by DoH payload size. DNS queries and replies are usually no larger than a few hundred bytes, whereas typical HTTPS connections tend to fill the entire maximum transmission unit (MTU), Ullrich explains.

“In short: if you see long-lasting TLS connections, with payloads that rarely exceed a kByte, you probably got a DoH connection,” the researcher notes.
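To make the heuristic concrete, a rough sketch of scanning a capture for such connections might look like the following. This is not the researcher's tooling (he worked in Wireshark); it assumes the scapy library, a local pcap file, and arbitrary placeholder thresholds:

```python
# Flag long-lived TLS connections (port 443) whose TCP payloads rarely
# exceed ~1 kByte, per the heuristic quoted above. Thresholds are arbitrary.
from collections import defaultdict
from scapy.all import rdpcap, IP, IPv6, TCP

flows = defaultdict(list)  # normalized (addr, port) pair -> list of payload sizes

for pkt in rdpcap("capture.pcap"):  # hypothetical capture file
    ip = IP if IP in pkt else IPv6 if IPv6 in pkt else None
    if ip is None or TCP not in pkt or 443 not in (pkt[TCP].sport, pkt[TCP].dport):
        continue
    size = len(bytes(pkt[TCP].payload))
    if size:
        key = tuple(sorted([(pkt[ip].src, pkt[TCP].sport), (pkt[ip].dst, pkt[TCP].dport)]))
        flows[key].append(size)

for key, sizes in flows.items():
    small = sum(1 for s in sizes if s <= 1024)
    if len(sizes) >= 50 and small / len(sizes) > 0.9:
        print(f"Possible DoH connection {key}: {len(sizes)} payloads, {small} under 1 kB")
```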

Some of the artifacts observed during the test may be specific to this particular implementation, but more definitive findings might come from additional testing, Ullrich also notes.

 


Still Why No HTTPS?


Back in July last year, Scott Helme and I shipped a little pet project that tracked the world's largest websites not implementing HTTPS by default. We called it Why No HTTPS? and it gave people a way to see the largest websites not taking transport layer security seriously. We also broke the list down on a country-by-country basis and it quickly became a means of highlighting security gaps and serving as a "list of shame". I've had many organisations reach out and ask to be removed once they'd done their TLS things properly so clearly, the site is driving the right behaviour. Today, we're happy to share the first update since November last year.

The Web is More Secure More of the Time

Let's start with the good news: since the first release of this little project, HTTPS adoption has steadily trended upwards:

[Chart: HTTPS adoption trending steadily upwards since the project's first release]

We've gone from 70% of all HTTP requests being over the secure scheme to 80% which is a pretty good effort in a relatively short amount of time. But, of course, it's the websites serving that remaining 20% of traffic that I want to focus on here. Let's begin with where we source the list of top sites from, and that's something we've changed for this release.

Bye Bye Alexa, Hello Tranco

When we launched the site, the list was based on the Alexa Top 1M. However, this list was becoming somewhat tricky to use reliably as Scott explained in October:

I used to use the Alexa Top 1 Million for this research but I've been having issues with the list. They tried to remove access at one point and while I managed to have it restored, there are other issues too. The accuracy of the data has been called into question and also the list itself has been having weird issues recently like not returning 1 million entries... Yep, that's right, the Alexa Top 1 Million list has been returning, in some cases, only ~650,000 entries recently, which is of course a problem.

Consequently, we've moved to the Tranco list for this release, which means there are some differences in the way sites are ranked and, as a result, some unexpected appearances. For example, the 21st largest site on the global list is googletagmanager.com. Now obviously this isn't a website in the sense that folks go there looking for useful content (many would argue quite the contrary), but based on the Tranco data it's one of the most traffic'd websites in the world so it's within scope of this project.

So that's our starting point in terms of identifying which sites we assess; let's move on to the methodology around how a site ultimately makes our list.

Methodology and False-Positives

A quick recap on our methodology first: Scott runs a service which indexes a whole bunch of security things on the world's top million websites each day. He publishes the results of that effort via his free crawler.ninja website (really Scott, .ninja?!) and I then roll the HTTP sites and HTTPS sites list into the Why No HTTPS? website. In that regard, it's quite simple. Except it's not really...

As I explained in this Q&A blog post last year, there are a whole bunch of reasons why a site that you see apparently doing things right might still be on our list. If you're going to chime in here with a bit of "But [blah].com loads over HTTPS by default for me", do please start by reading that blog post.

Read the post? Good! What we're left with pretty much boils down to an expectation that a site responds to an HTTP request over the insecure scheme with either a 301 or 302 (ideally the former so it's a permanent redirect) to a secure URL (multi-hop is also ok: a 301 to an HTTP address that then 301s to an HTTPS address is fine). If I make an insecure curl request from here in Australia, for example, and I get an HTTP 401 rather than a redirect, then the site goes onto the list. There has been some dissatisfaction over this methodology due to how much website behaviour can vary from location to location, so in this update we've added a means of getting a "free pass" that will automatically exclude a site from the list.
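For anyone wanting to approximate that check themselves, a rough Python sketch might look like the following. It's only an illustration of the logic described above, not the actual crawler, and example.com is obviously just a stand-in:

```python
# Follow an insecure request and report whether it ends up on an HTTPS URL
# via 301/302 redirects (multi-hop allowed). Anything else would put the
# site on the list under the methodology described above.
from urllib.parse import urljoin
import requests

def redirects_to_https(domain: str, max_hops: int = 5) -> bool:
    url = f"http://{domain}/"
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302):
            return False                       # e.g. a 200 or 401 keeps the site on the list
        url = urljoin(url, resp.headers.get("Location", ""))
        if url.startswith("https://"):
            return True
    return False

print(redirects_to_https("example.com"))       # hypothetical domain
```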

HSTS Preload Gives You an Immediate "Free Pass"

Preloaded HSTS is awesome (here's an old blog post that explains why). Once a site is pinned into the browser's static list of HSTS sites, insecure requests will always be upgraded and the 301 / 302 done by the website becomes redundant. Further, check out the requirements to be preloaded in the first place, in particular, this one:

Redirect from HTTP to HTTPS on the same host, if you are listening on port 80.

What this means is that if a site is in the preload list, we're comfortable excluding it from our list of shame. A great example of this is the domain I mentioned earlier - googletagmanager.com. When I curl that address insecurely, here's what happens:

[Screenshot: output of an insecure curl request to googletagmanager.com]

Arguably, this should keep the site in scope of being on our list but because it's been successfully preloaded and the browser simply won't allow an insecure request, it gets a free pass. Other notable "free pass" sites include hyatt.com (a curl for me just 301s to a www prefixed address served insecurely) and... haveibeenpwned.com:

[Screenshot: curl request to haveibeenpwned.com returning a Cloudflare anti-automation response]

Over many years I've carefully honed a bunch of Cloudflare firewall rules to identify non-browser traffic that doesn't adhere to expected norms. The response above serves a body containing anti-automation (CAPTCHA) over the same scheme the request was made to (a Cloudflare behaviour). You shouldn't ever get that response in an actual browser but if you did, the fact that HSTS has been preloaded for the domain for years means the request would automatically be upgraded hence HIBP is really a false positive.

This practice of giving HSTS preloaded sites a free pass is something we hope will drive more websites in this direction. The next time someone reaches out and claims their site is incorrectly categorised that's going to be my first response - preload your domain then the next update to the site will keep you excluded.
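If you want a quick sanity check before submitting, the sketch below simply inspects the Strict-Transport-Security header for the preload directive. That's a prerequisite rather than proof of inclusion; the authoritative check is the Chromium preload list itself (hstspreload.org will tell you):

```python
# Inspect a domain's HSTS header and whether it advertises the "preload"
# directive. This does not confirm actual inclusion in the browser preload
# list; it's just a first-pass check.
import requests

def hsts_header(domain: str) -> str:
    resp = requests.get(f"https://{domain}/", timeout=10)
    return resp.headers.get("Strict-Transport-Security", "")

header = hsts_header("haveibeenpwned.com")
print(header or "no HSTS header returned")
print("preload directive present:", "preload" in header.lower())
```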

Check the Diffs on GitHub

Lastly, if you'd like to see exactly what's changed in the data set, check out the public GitHub repository. You'll see all the input data and all the output data, the latter being precisely the files that drive the Why No HTTPS? website. I personally find it interesting to look at diffs on files such as the top50-au.json one as it gives me a really good sense of what's changed. I've ordered these files by domain name rather than rank to make things a little easier, but of course with ranks regularly changing anyway, plus the move from Alexa to Tranco, there's going to be a heap of changes from last time even if the HTTPS status hasn't changed. At the very least though, it makes it super easy to see which sites have now dropped off the list altogether.
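If you'd rather script it than eyeball the diffs, something like the following works for pulling out additions and removals between two snapshots of one of those files. The exact JSON structure is an assumption on my part here, so adjust the extraction to match the real files in the repo:

```python
# Compare two snapshots of one of the JSON files and report which domains
# dropped off or were added. Assumes each entry is a plain domain string or
# an object with a "Domain" key; tweak to match the actual file structure.
import json

def domains(path: str) -> set:
    with open(path) as f:
        data = json.load(f)
    return {entry if isinstance(entry, str) else entry.get("Domain", entry.get("domain"))
            for entry in data}

old = domains("top50-au.previous.json")   # hypothetical filenames
new = domains("top50-au.json")
print("Dropped off the list:", sorted(old - new))
print("New to the list:", sorted(new - old))
```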

Comments Below

There's always a bunch of feedback on these releases and people often find really interesting things in the data. Do chime in below, keeping in mind the earlier point about reading the Q&A blog post first. And, of course, please continue to use this site as leverage to move more organisations in the "secure by default" direction.

My Cloud WAF Service Provider Suffered a Data Breach…How Can I Protect Myself?

In the age of information, data is everything. Since the implementation of GDPR in the EU, businesses around the world have grown more “data conscious”; in turn, people, too, know that their data is valuable.

It’s also common knowledge at this point that data breaches are costly. For example, Equifax, the company behind one of the largest-ever data breaches, is expected to pay at least $650 million in settlement fees.

And that’s just the anticipated legal cost associated with the hack. The company is also spending hundreds of millions of dollars on upgrading its systems to avert any future incidents.

The cloud WAF arena is no stranger to data breaches. Having powerful threat detection capabilities behind your cloud WAF service provider, while important, is not the only thing to rely on for data breach prevention.

API security and secure SSL certificate management are just as important. 

So, what are some ways hackers can cause damage as it relates to cloud WAF customers? And how can you protect yourself if you are using a cloud WAF service?

The topics covered in this blog will answer the following:

  • What can hackers do with stolen emails?
  • What can hackers do with salted passwords?
  • What can hackers do with API keys?
  • What can hackers do with compromised SSL certificates?
  • What can I do to protect myself if I am using a cloud WAF?


► What can hackers do with stolen emails?

When you sign up for a cloud WAF service, your email is automatically stored in the WAF vendor’s database so long as you use their service. 

In the case of a data breach, if emails alone are compromised, then phishing emails and spam are probably your main concern. Phishing emails are so common that we sometimes forget how dangerous they are.

For example, if a hacker has access to your email address, there are many ways they can impersonate a legitimate entity (e.g. by purchasing a similar company domain) and send unsolicited emails to your inbox.

 

► What can hackers do with salted passwords?

Cloud WAF vendors that store passwords in their database without any hashing or salting are putting their customers at risk if there is a breach, and even more so if hackers already have email addresses. 

In this scenario, hackers can quickly take over your account or sell your login credentials online. But what if the WAF vendor hashed and salted the passwords? Hashing passwords can certainly protect against some intrusions.

In the event of a breach where passwords were hashed but not salted, a hacker only needs to find some input that produces the stored hash; the website will then validate it, because all the site does is compare the hash of whatever was submitted against the hash stored in the database.

This is where salting the hash can help defeat this particular attack, but it won’t guarantee protection against hash collision attacks (a type of attack on a cryptographic hash that tries to find two inputs that produce the same hash value).

In this scenario, a system with a weak hashing algorithm can let hackers into your account even though the password they submit is wrong, because two different inputs (the actual password and some other string of characters, for example) produce the same hash output.
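For illustration, here's roughly what salted hashing with a deliberately slow derivation function looks like using only the Python standard library. This is a generic sketch, not any particular vendor's scheme:

```python
# Minimal illustration of salted password hashing with a slow KDF (PBKDF2
# from the standard library). A sketch of the general idea only.
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = secrets.token_bytes(16)                      # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)       # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong password", salt, digest))                # False
```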

► What can hackers do with API keys?

Cloud WAF vendors that use or provide APIs to allow third-party access must pay extra attention to API security to protect their customers.

APIs are exposed to the internet and transfer data, and many cloud WAFs rely on them to implement load balancers, among other things.

If API keys are not transmitted over HTTPS or API requests are not authenticated, then there is a risk that hackers could take over developer accounts.

If a cloud WAF vendor exposes a public API that doesn't require registering for an authorized account to gain access, hackers can exploit this to send repeated API requests. Had registration been required, the API key could be tracked and flagged when it's used for too many suspicious requests.
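To make those basics concrete (always HTTPS, always authenticated, with a key that can be tracked and revoked), a client-side call might look something like this. The endpoint, header, and payload here are entirely hypothetical and not any real WAF vendor's API:

```python
# Call a WAF management API over HTTPS only, authenticating every request
# with a key tied to a registered account. Endpoint and schema are made up
# for illustration; never hard-code real keys in source.
import os
import requests

API_KEY = os.environ["WAF_API_KEY"]       # load the key from a secret store or env var

resp = requests.post(
    "https://waf.example.com/api/v1/allowlist",          # hypothetical HTTPS endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},       # authenticated request
    json={"ip": "203.0.113.10"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```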

Beyond securing API keys, developers must also secure their cloud credentials. If a hacker gains access to those, they could potentially take down servers, completely mess up DNS information, and more.

API security is not only a concern for developers but also for end users using APIs for their cloud WAF service as you’ll see in the next section. 

► What can hackers do with compromised SSL certificates?

Next, what happens if the SSL certificates that WAF customers provided end up in the hands of hackers?

Let’s assume the hacker has both the API keys and SSL certificates. In this scenario, hackers can affect the security of the incoming and outgoing traffic for customer websites.

With the API keys, hackers can whitelist their own websites in the cloud WAF's settings, allowing those websites to bypass detection. This allows them to attack sites freely.

Additionally, hackers could modify a customer website's traffic settings to divert traffic to their own sites for malicious purposes. And because the hackers also hold the SSL certificates, they can expose this traffic as well, putting you at risk of exploits and other vulnerabilities.

 

► What can I do to protect myself if I am using a cloud WAF?

First, understand that your data is never 100% safe. If a company claims that your data is 100% safe, then you should be wary. No company can guarantee that your data will always be safe with them. 

When there is a data breach, however, cloud WAF customers are strongly encouraged to change their passwords, enable 2FA, upload new SSL certificates, and reset their API keys. 

Only two of these are realistic preventive measures (changing your passwords frequently and using 2FA), but it’s unlikely that you, as a customer, will frequently upload new SSL certificates and change your API keys. 

Thus, we recommend that you ask your WAF vendors about the security of not just the WAF technology itself but also how they deal with API security and how they store SSL certificates for their customers.

If you’d like to chat with one of our security experts and see how our cloud WAF works, get in touch with us!


The post My Cloud WAF Service Provider Suffered a Data Breach…How Can I Protect Myself? appeared first on Cloudbric.