Wednesday, March 25, 2009

Last week at CanSecWest, Alex Sotirov and Mike Zusman showed that Extended Validation (EV) certificates don't really add much to the security of a site served over SSL. To sum up a couple of their conclusions:
1. It's not hard to get a regular cert for an interesting domain.
2. An EV-certified site can load data from any other SSL-encrypted location, regardless of which cert that location presents.
3. Rogue cert + MITM + EV-site = arbitrary attack code execution on EV-site.
It seems to me that the problem is rooted in a slight but pervasive misunderstanding of what EV certs do: they provide a more rigorous check that the entity serving data over the EV-certified channel is actually who it claims to be. They do not currently prove to a site's visitor that the site has not been compromised.
Having said that, if an attacker can prey on the way the site serves content, it doesn't matter whether the EV entity is who it claims to be; the attacker can simply piggyback on its session, serving some cleverly crafted data with a rogue cert. This can be done by playing man-in-the-middle with third-party content embedded on the EV site, or by playing tricks with the browser's cache.
Easy fix: in order to display the green bar (EV badging), require everything on a web page to be served with the same EV cert. This is unattractive for many reasons, including ad syndication and distributed content serving -- both highly desirable uses that may cross the fully-qualified-domain border and thus require non-EV certs or multiple different EV certs. (EV certs are not allowed to use wildcard domain matching, so any difference in domain name means the cert won't match.)
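To make the tradeoff concrete, here's a minimal Python sketch (purely illustrative; no browser actually works from code like this) contrasting today's badging decision with the stricter rule above. The Resource record, its is_ev flag, and the bank.example/ads.example domains are assumptions made up for the example.

from dataclasses import dataclass

@dataclass
class Resource:
    domain: str   # fully-qualified domain the item was fetched from
    is_ev: bool   # was it served under an EV cert?

def green_bar_today(page: Resource, subresources: list) -> bool:
    # Roughly the current behavior: the badge reflects only the
    # top-level document's cert; scripts, frames, and images can
    # come from any valid SSL source without affecting it.
    return page.is_ev

def green_bar_strict(page: Resource, subresources: list) -> bool:
    # The "easy fix" above: every resource must be EV and must match
    # the page's domain (no wildcards means no cross-domain slack).
    return page.is_ev and all(
        r.is_ev and r.domain == page.domain for r in subresources
    )

# An EV page pulling a script from an ad network:
page = Resource("bank.example", True)
subs = [Resource("ads.example", False)]
print(green_bar_today(page, subs))   # True  -> green bar, script still runs
print(green_bar_strict(page, subs))  # False -> no badge (and no ads)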
A more desirable fix, in my opinion, would step back and look at the problem from a base level: figure out why EV breaks in these mixed-content and mixed-cert scenarios--then fix EV. The EV cert says "trust my subject at domain.com." What we really need is a way to say "trust this site."
More to come.
Cheers to Sotirov and Zusman for this excellent discovery and PoC man-in-the-middle script.
Tuesday, March 10, 2009
where should web policies be enforced?
I've been spending a lot of time thinking about context-based security decisions lately. These are decisions made on behalf of a website (or its users) in order to maintain data integrity and secrecy. Take, for instance, the issues of CSRF and Clickjacking; both of these attacks involve some sort of contextual manipulation. CSRF is an HTTP request generated by an untrustworthy source, and Clickjacking is data theft carried out by overlaying forms and other UI elements.
There are many approaches to stop these things... but on whose shoulders should the policy enforcement lie? Should a web browser be in charge of making sites safe, or should it just be an enabler that provides enough information to a server so it can make appropriate decisions?
CSRF can be stopped by the browser, but in a fairly convoluted way. It's tough for a web browser to discern which requests will cause a significant transaction on the web server. For example, the simple request

GET /profile/deleteme.do

could be enough to trash someone's account on a server. Even some POST requests don't cause internal state changes (imagine a currency conversion calculator or a POST-based language translator form). It seems a bit easier to stop CSRF on the server, where the application itself can decide whether or not to trust each request. This requires the browser, however, to provide some information about the request, such as where it came from (though the HTTP Referer header is not reliable) or how it came (for instance, whether it was triggered by an image tag).
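One way the server can make that trust decision today, without any new browser features, is a synchronizer token: mint a secret value tied to the user's session, embed it in every legitimate form, and refuse state-changing requests that don't echo it back. Here's a minimal sketch using only Python's standard library; the session_id value and the form plumbing around it are assumed, not part of any particular framework.

import hmac, hashlib, secrets

SECRET_KEY = secrets.token_bytes(32)  # per-deployment secret (assumed setup)

def csrf_token(session_id: str) -> str:
    # Derive a per-session token and embed it in every legitimate form.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def request_is_trusted(session_id: str, submitted_token: str) -> bool:
    # Honor a state-changing request only if it echoes the token minted
    # for this session. A forged cross-site request can't read our pages,
    # so it has no way to learn the token.
    return hmac.compare_digest(csrf_token(session_id), submitted_token)

# The forged GET /profile/deleteme.do above carries no token, so it fails
# the check even though the victim's cookies ride along with it.
sid = "abc123"  # hypothetical session identifier
print(request_is_trusted(sid, csrf_token(sid)))  # True  (legitimate form post)
print(request_is_trusted(sid, ""))               # False (forged request rejected)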
Clickjacking is more easily approached on the client side. One approach is to build an impenetrable fence around certain special frames, so that outer/parent frames can't layer anything on top of their children. This frame-firewall approach is not always attractive, so maybe there should be some mechanism that allows the server to say "hey, this is sensitive, don't overlay this content I'm serving." Then again, maybe it would be ideal to just tell the server a bit about where the content will be rendered, and let the server decide whether or not to serve the data.
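Worth noting: IE8's new X-Frame-Options response header is one concrete version of that "don't overlay this" signal; the server declares the policy, and the browser is trusted to honor it. Below is a minimal sketch of a server attaching the header, using only Python's standard library (the page body and port are placeholders).

from http.server import BaseHTTPRequestHandler, HTTPServer

class NoFramingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>Sensitive form goes here</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # Declare that this response must not be rendered inside a frame,
        # so a hostile page can't hide it under or over its own content.
        self.send_header("X-Frame-Options", "DENY")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Serve on localhost:8000 purely for demonstration.
    HTTPServer(("localhost", 8000), NoFramingHandler).serve_forever()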
But what Clickjacking and CSRF have in common is that they leverage the contextual blindness of many web applications -- the applications aren't aware of where requests come from or where their responses end up.
It seems clear that we can't rely on just a browser-side or server-side fix for these types of things; instead we need some sort of cooperation. The question remains, however: who should do the bulk of the enforcement? I'm currently leaning towards using the browser as a context-revealing tool and leaving enforcement and policy decisions up to the server, but there are many times when that's not enough to stop client-side attacks.
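As a thought experiment, here's what that division of labor could look like from the server's side: the browser reveals context, the application enforces policy. In the sketch below, Origin is a real header that has been proposed for exactly this purpose, while X-Fetch-Context and the example site are purely hypothetical stand-ins for whatever the browser might eventually reveal.

# The browser reveals context; the application enforces policy.
# "X-Fetch-Context" is a made-up header standing in for whatever the
# browser might eventually tell us about how a request was generated.
SENSITIVE_PATHS = {"/profile/deleteme.do", "/transfer"}

def allow_request(path: str, headers: dict) -> bool:
    if path not in SENSITIVE_PATHS:
        return True  # nothing at stake; just serve it
    origin = headers.get("Origin")               # where the request came from
    fetched_as = headers.get("X-Fetch-Context")  # e.g. "image", "form", "xhr"
    # Only honor sensitive requests that the browser says originated from
    # our own pages and were submitted as a form, not sourced by an <img>.
    return origin == "https://www.example.com" and fetched_as == "form"

print(allow_request("/profile/deleteme.do",
      {"Origin": "https://evil.example", "X-Fetch-Context": "image"}))    # False
print(allow_request("/profile/deleteme.do",
      {"Origin": "https://www.example.com", "X-Fetch-Context": "form"}))  # True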