In this blog post we will take a closer look at some HSTS browser implementations, but first a few words about HSTS.
According to the current draft of HTTP Strict Transport Security:
- HSTS enables user agents, such as browsers, to discover web sites and interact with them in a secure way.
- HSTS enables web site owners to offer their web site only in a secure way.
- HSTS is concerned with three threat classes: passive network attackers, active network attackers, and imperfect web developers.
In plain English, HSTS servers instruct (HSTS-aware) browsers to connect to them only using HTTPS (HTTP over SSL/TLS).
The SSL/TLS session (I presume) should be authenticated (typically the server is authenticated by the client, to prevent active MITM attacks) and encrypted (to prevent passive eavesdropping attacks); note that SSL/TLS can be used without encryption or without authentication. I'm not particularly sure where in the HSTS draft these two conditions are mandated, and as described in this post, one HSTS browser implementation does not seem to account for them.
The server sends to the browser, within the HTTP response, a header field named "Strict-Transport-Security" in order to inform it that it is an HSTS server. Within this header a period of time is also specified, during which the browser shall access the server only in a secure fashion.
This header must be appended to HTTP responses sent over SSL/TLS only.
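To make the header format concrete, here is a minimal sketch (in Python; the function name and the policy representation are my own, not from the draft) of how a browser might parse the two pieces of information the header carries, the max-age period and the optional includeSubDomains flag:

```python
# Hypothetical sketch of parsing a Strict-Transport-Security header value.
# The function name and the dict layout are mine; the directive names
# (max-age, includeSubDomains) come from the HSTS draft.

def parse_sts_header(value):
    """Parse e.g. 'max-age=31536000; includeSubDomains' into a policy dict."""
    policy = {"max_age": None, "include_subdomains": False}
    for directive in value.split(";"):
        directive = directive.strip()
        if directive.lower().startswith("max-age="):
            policy["max_age"] = int(directive.split("=", 1)[1].strip('"'))
        elif directive.lower() == "includesubdomains":
            policy["include_subdomains"] = True
    return policy

print(parse_sts_header("max-age=31536000; includeSubDomains"))
```

A max-age of 0, per the draft, tells the browser to forget the host's cached HSTS state.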
For example, if the user types http://www.example.net/pagex/ within the browser's address bar, and the www.example.net server is marked as an STS server by the browser, this request will be automatically modified to https://www.example.net/pagex/ before being sent to the STS server.
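This rewrite can be sketched in a few lines (again an illustrative model of my own, not any browser's actual code; the STS_HOSTS set stands in for the browser's cached list of known STS servers):

```python
from urllib.parse import urlsplit, urlunsplit

# Illustrative sketch (names are mine) of the rewrite an HSTS-aware browser
# performs before the request ever leaves the machine: http -> https for
# any host previously marked as an STS server.
STS_HOSTS = {"www.example.net"}  # stand-in for the browser's STS cache

def upgrade_if_sts(url):
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in STS_HOSTS:
        return urlunsplit(("https",) + tuple(parts[1:]))
    return url

print(upgrade_if_sts("http://www.example.net/pagex/"))
# -> https://www.example.net/pagex/
print(upgrade_if_sts("http://other.example.org/"))
# -> http://other.example.org/ (unchanged, not a known STS host)
```

The point is that no plain-HTTP request is ever emitted for a known STS host, so there is nothing on the wire for a tool like sslstrip to tamper with (after the first contact, that is).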
Additional information on HSTS (so as not to repeat in this post too much of what has already been said):
There is a weakness in the way the initial contact is made by the browser with an STS server. Typically the user types http://www.example.net and is redirected to https://www.example.net with a 301 redirect. Because this initial contact is made over an insecure channel, it is vulnerable to various active MITM attacks, like the one described by Moxie Marlinspike with sslstrip (http://www.thoughtcrime.org/software/sslstrip/).
This type of initial contact over an insecure channel is described within the section 12.2(Bootstrap MITM (Man-In-The-Middle) vulnerability) of the HSTS draft.
Google Chrome currently attempts to mitigate this limitation with a preloaded STS sites list, https://sites.google.com/a/chromium.org/dev/sts.
Another possible future solution to this can be the use of DNSSEC to inform clients about STS sites.
Server side issues
It has been described (see http://coderrr.wordpress.com/2010/12/27/canonical-redirect-pitfalls-with-http-strict-transport-security-and-some-solutions/) that canonical redirects can affect the way the destination server is noted as an STS server.
The HSTS draft specifies an includeSubDomains flag which, if present, signals to the browser that the HSTS policy applies to this HSTS server as well as any subdomains of the server's FQDN.
But this does not address all the issues described in the above reference.
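Host matching under includeSubDomains can be sketched as follows (a toy model of my own; note that the flag only works downward, so a policy learned for www.example.net never covers the bare example.net, which is one part of the canonical-redirect pitfall):

```python
# Toy model (mine, not from the draft text) of HSTS host matching:
# a policy for policy_host with includeSubDomains also covers any
# host that is a subdomain of policy_host, but never its parent.

def policy_applies(policy_host, include_subdomains, request_host):
    if request_host == policy_host:
        return True
    return include_subdomains and request_host.endswith("." + policy_host)

print(policy_applies("example.net", True, "www.example.net"))   # True
print(policy_applies("example.net", False, "www.example.net"))  # False
print(policy_applies("www.example.net", True, "example.net"))   # False
```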
Client side issues
One aim of the HSTS on the client side is to terminate the connection without any user interaction if there are any errors with the underlying secure transport.
In plain English, although the draft itself might not be so explicit:
- instead of prompting the user about certificate errors (untrusted root CA, domain name mismatch, expired certificate), simply terminate the connection with an error message, without giving the user the possibility to continue.
- SSL/TLS errors result in connection termination.
- if the browser has enabled (by default, accidentally, etc.) some anonymous or null cipher suites, do not use them when connecting to a known HSTS server, otherwise active and passive network attacks are possible. As said, I'm not sure where exactly this is described in the HSTS draft.
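The intended client-side rule from the points above can be reduced to a tiny decision function (a toy model of my own, not any browser's actual logic):

```python
# Toy model (mine) of the HSTS client-side rule: for a known HSTS host,
# any problem with the underlying secure transport terminates the
# connection outright; for an ordinary host the browser may instead
# prompt the user and let them click through.

def handle_tls_problem(is_hsts_host, problem):
    # problem: e.g. "untrusted_root", "name_mismatch", "expired", "null_cipher"
    if is_hsts_host:
        return "terminate"    # no user interaction, no click-through
    return "prompt_user"      # regular sites may show a warning dialog

print(handle_tls_problem(True, "name_mismatch"))   # terminate
print(handle_tls_problem(False, "name_mismatch"))  # prompt_user
```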
Null cipher suites
For example, Firefox 4 Beta 10 can be configured to use null encryption cipher suites, and we will enable the TLS_RSA_WITH_NULL_MD5 one.
Let us assume that Firefox makes its first contact with an STS web site securely, meaning I will manually enter https://siteaddress in order to avoid any cached redirect questions.
The browser loads the requested page (and in the background performs some successful OCSP checks), and the domain name is marked as an HSTS server.
Close the browser.
Configure the server to only support the TLS_RSA_WITH_NULL_MD5 cipher suite, in order to simulate a misconfigured server that would pick such a cipher suite (perhaps after some troubleshooting work someone left the null cipher suite in place).
Open the browser and request, over plain HTTP, another web page from that site. Since the domain name is marked as an HSTS one, Firefox sends the request directly over HTTPS. The TLS connection succeeds and the browser sends its HTTP request over the session using the TLS_RSA_WITH_NULL_MD5 cipher suite. Firefox does not terminate the connection; visually, the user may only notice the absence of the SSL padlock. The session is subject to passive (eavesdropping) network attacks.
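For contrast, here is how a client can make sure null and anonymous suites are never even offered, using Python's ssl module and a standard OpenSSL cipher string (the eNULL/aNULL exclusion syntax is OpenSSL's; this is a sketch of the defensive idea, not of what any browser does internally):

```python
import ssl

# Sketch: build a client-side TLS context that excludes null-encryption
# (eNULL) and anonymous (aNULL) cipher suites, so a suite like
# TLS_RSA_WITH_NULL_MD5 can never be negotiated even if the server
# would prefer it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("DEFAULT:!eNULL:!aNULL")

# Verify: no enabled suite offers null encryption or null authentication.
names = [c["name"] for c in ctx.get_ciphers()]
assert not any("NULL" in n for n in names)
print(len(names), "cipher suites enabled, none null")
```

With such a context, a handshake against a server that supports only TLS_RSA_WITH_NULL_MD5 simply fails, which is exactly the termination behavior one would want from an HSTS-aware client.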
Certificate revocation status ambiguities
Apart from the possible insecure initial contact attempted by the browser, it must be noted that HSTS partially addresses the active network attacks threat class.
For example, there are some certificate revocation status ambiguities regarding HSTS (possibly due to infrastructure limitations).
Suppose an STS server's private key is compromised, or an attacker manages to obtain a valid certificate for the STS server's FQDN (maybe from another trusted CA), possibly a domain-validated certificate (after having compromised, say via XSS, an email account related to that domain), or by exploiting a weakness like the one used by Moxie Marlinspike for a PayPal certificate with a null character in the domain name.
Let us assume that this certificate will be revoked.
If the HSTS client-side implementation is not strict regarding certificate revocation status, then even if the certificate was revoked, an active MITM attack can still be mounted during a window of opportunity.
The proposed lockCA STS extension (http://lists.w3.org/Archives/Public/public-web-security/2009Dec/0185.html) can be used to prevent MITM attacks like the one with Moxie Marlinspike's PayPal certificate with a null character in the domain name, by associating an STS web site with a particular CA; this works even while the rogue certificate is not yet revoked, and holds in our particular example since the original PayPal certificate was an EV certificate from VeriSign while the null-character one was from another CA.
However, lockCA is likely not feasible at the corporate level, due to the presence of outbound HTTPS inspection web proxies, which perform active SSL/TLS MITM, creating on-the-fly certificates for the requested servers; these certificates are signed by the proxies with their own CA certificate (either root or subordinate), and this CA certificate is trusted by corporate clients.
Imagine a mobile user, working at home or on the road, interacting directly with the STS server, while at the office interacting with the STS server through the MITM HTTPS proxy (in case this server was not excluded from the HTTPS inspection).
To exemplify the certificate revocation weakness in practice, comparing the STS implementations of Firefox 4.0 Beta 10 and Chrome 9.0.597.94 (let's use Windows 7 as the underlying OS), it must be noted that "errors" within the underlying secure transport regarding the certificate revocation status are interpreted differently.
The differences start with a regular HTTPS connection. I can intercept and modify the OCSP requests and the CRL download attempts (only Chrome does CRL downloads, Firefox does not), and note that Firefox does not mind, while Chrome, on the other hand, shows a sort of broken SSL padlock.
Let us assume that both mentioned browsers encounter an STS web site in the same way (for simplicity, this site is not within Google's preloaded STS list), meaning I will manually enter https://siteaddress in order to avoid any cached redirect questions and to contact the STS server securely.
The browsers load the requested page (and in the background perform some successful OCSP checks), and the domain name is marked as an HSTS server.
Close the browsers.
Open the browsers and request, over plain HTTP, another web page from that site. Since the domain name is marked as an HSTS one, both browsers send the request directly over HTTPS. Apparently Chrome does some OCSP response caching, but Firefox (since we closed it and re-opened it) will perform another successful OCSP check.
Again the browsers load the requested page, and the time for which the domain name is marked as an HSTS server is updated.
Close the browsers.
Open the browsers and prepare to request, over plain HTTP, another web page from that site(since the domain name is marked as an HSTS one, both browsers will send the request directly over HTTPS).
This time we will MITM the HTTPS connection and present another certificate, valid in terms of CA trust and domain name match, but revoked (pretending we managed to convince another CA to issue us one for a domain we do not own, and they figured it out and revoked it).
Firefox will try to contact the OCSP responder, if any; if just a CRL is specified in the server's certificate, Firefox will not do any revocation checks and will load the web site (MITM success). If we alter the OCSP response (say, return a bogus response), Firefox does not mind, even though this is an STS server, and loads the web site (again, MITM success).
Chrome, on the other side, does not accept our bogus OCSP responses (it tries with both GET and POST) and falls back to a CRL download attempt if a link is present within the server's certificate (I am not sure, I did not verify, but I think Chrome uses Windows' Crypto API for these operations). If we alter the CRL response too, Chrome, compared with its behavior on a regular site, instead of showing the broken padlock terminates the connection with Error 205 (net::ERR_CERT_UNABLE_TO_CHECK_REVOCATION): Unknown error.
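The two behaviors observed above boil down to a soft-fail versus hard-fail policy when revocation status cannot be determined; a toy model of my own (not either browser's actual code):

```python
# Toy model (mine) of the two revocation-checking behaviors observed
# above for an STS site: a soft-fail client loads the page when the
# revocation status cannot be determined, a hard-fail client terminates.

def revocation_outcome(status, hard_fail):
    # status: "good", "revoked", or "unknown" (bogus/blocked OCSP and CRL)
    if status == "good":
        return "load"
    if status == "revoked":
        return "terminate"
    return "terminate" if hard_fail else "load"

print(revocation_outcome("unknown", hard_fail=False))  # load (Firefox-like)
print(revocation_outcome("unknown", hard_fail=True))   # terminate (Chrome-like)
```

The "unknown" branch is exactly where our altered OCSP/CRL responses land the browsers, which is why only the hard-fail behavior stops the MITM.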
Now, in defense of Firefox's behavior, one may note that this is not a big deal, since we can replay an OCSP response during a window of opportunity, or block the OCSP requests and make Chrome fall back to a CRL download attempt (if any) in order to have a larger replay window (a CRL typically lasts longer than an OCSP response).
Or, if we had stolen the original certificate's corresponding private key, then since Chrome does some OCSP response caching, it would not even have tried to obtain a fresher OCSP response.
In defense of Chrome's behavior, one may note that in order to prevent OCSP replay, nonces can be used to ensure the freshness of the OCSP response instead of relying on a purely time-based OCSP response (and caching can probably be avoided).
However, how many OCSP responders allow the use of nonces? It may not be feasible to force the STS client implementation to accept only nonced OCSP responses, since the use of nonces does not scale very well.
Also, not all CAs may issue certificates that can be checked with OCSP.
And the presence of outbound HTTPS inspection web proxies, which perform active SSL/TLS MITM, creating on-the-fly certificates for the requested servers while stripping the OCSP/CRL information from them (these proxies perform the certificate revocation status checks on their own; see http://www.carbonwind.net/blog/post/A-quick-look-at-the-on-the-fly-created-server-certificate-by-Forefront-TMG-2010-RTMe28099s-Outbound-HTTPS-Inspection.aspx for such a proxy implementation example), will make it infeasible to mandate strict revocation checking of the certificate of an STS server.
Still, Chrome's behavior, since it actually terminates the connection if the attempts to verify the server's certificate fail, is stronger than Firefox's, because it forces an attacker to do more.