I have been fighting with what I thought was a really slow window manager, so I switched to a lighter-weight one and it still took forever to draw things. After fiddling with stuff on and off for a few months, it turned out to be pretty simple: the radeon driver decided to use the CPU for too much of its own job.
I set a couple of flags (thanks to linportal), and everything is speedy again. So if you're fighting with a radeon driver that seems to be worthless (especially with multiple displays) try setting the "MigrationHeuristic" option to "greedy" in your xorg.conf's device section.
Option "MigrationHeuristic" "greedy"
Monday, December 14, 2009
Friday, November 20, 2009
update on HTTPS security
Version 2.0 of my Force-TLS add-on for Firefox was released by the AMO editors on Tuesday, and it incorporates a few important changes: it supports the Strict-Transport-Security header introduced by PayPal, and it has an improved UI that lets you add/remove sites from the forced list. For more information, see my Force-TLS web site.
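For reference, the header itself is a single line the server adds to its HTTPS responses; max-age is the number of seconds the browser should remember the rule (the value and the optional includeSubDomains token below are just an example):

Strict-Transport-Security: max-age=31536000; includeSubDomains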
On a similar topic, I've been working to actually implement Strict-Transport-Security in Firefox. The core functionality is in there, and if you want to play with some demo builds, grab a custom built Firefox and play. These builds don't yet enforce certificate integrity as the spec requires, but aside from that, they implement STS properly.
The built-in version performs an internal redirect to upgrade channels -- before any request hits the wire. This is an improvement over the way the HTTP protocol handler was hacked up by version 1 of Force-TLS, and it doesn't suffer from the subtle bugs that can pop up when a channel's URI is mutated through an nsIContentPolicy. I'm not sure add-ons can fully trigger the proper internal redirect: not all of the HTTP channel code is exposed to scripts, so an add-on would have to replicate some of the functions compiled into nsHttpChannel, opening up the possibility of obscure side effects if the add-on gets out of sync with the binary's version of those functions.
Edit: The newest version of NoScript does channel redirecting by setting up a replacement channel in a really clever way -- pretty much the same as my patch. It replicates some of the internal-only code in nsHttpChannel, though, so it would need to be updated in NoScript if for some reason we change that code in Firefox.
Thursday, November 12, 2009
OWASP AppSec DC '09
I'm at OWASP AppSec DC '09 this week. If you're there too, come find me and say hi!
Labels:
appsec dc,
conference,
owasp,
security
Monday, October 12, 2009
csp @ stanford security seminar
I'll be giving a talk at the October 13 Stanford Security Seminar. 4:30pm in Gates 4B. Show up if you're interested in CSP or want to heckle!
Labels:
content security policy,
csp,
mozilla,
security
Friday, October 02, 2009
CSP Preview!
Brandon Sterne and I released a preview of Firefox with Content Security Policy features built in. There are still a few bits of the specification that aren't ready yet (like HTTP redirection handling), but most of the core functionality should be there.
If you'd like to play around with this pre-release version of Firefox (very alpha, future release) that has CSP built in, download it here! You can test it out at Brandon's demo page.
In case you're not familiar with CSP, it's a content-restriction system that allows web sites to specify what types of content can be embedded on their pages and where it can be loaded from. It's very similar to something called HTTP Immigration Control that I was working on in grad school, so I'm very excited to be part of the design, specification and implementation -- hopefully a big step towards securing the web.
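If you want a feel for what a policy looks like, here's a rough example using the header syntax from around this preview (the exact directive names are still in flux as the spec evolves, and trusted.example.com is just a stand-in host):

X-Content-Security-Policy: allow 'self'; img-src *; script-src 'self' trusted.example.com

That policy says: by default, load resources only from the page's own origin; allow images from anywhere; and allow scripts only from the page's origin and trusted.example.com.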
Previously: Shutting Down XSS with Content Security Policy and CSP: With or Without Meta?
Update: The old download link expired. New one should have a much longer lifetime (here).
Labels:
content security policy,
csp,
firefox,
mozilla,
security
Thursday, September 17, 2009
notawesome
While discussing privacy and Firefox 3.5 with Chris a couple weeks ago, we stumbled upon the thought that people might want to be able to select which bookmarks show up when they're given automatic suggestions in Firefox 3's Awesome Bar. This discussion really started with a bit of public metrics and discussion in the blogosphere.
In mid August, Ken Kovash wrote about reasons users gave for not upgrading from Firefox 2 to Firefox 3.0. The number one reason was, surprisingly, the Awesome Bar. Without going into detail, the gist was that people didn't really want certain bookmarks to show up when they start typing URLs.
Perhaps the settings weren't obvious enough, but users can set the awesome bar to search only bookmarks, only history, both, or neither (Alex Faaborg discussed it in June, in fact).
Here's the use case: Bob bookmarks a couple porn sites, then during a public presentation, he starts typing "www" in the URL bar. His porn sites show up in the suggestion list, and everyone in the audience gasps.
The work-arounds for this I see are:
- Use a separate browser for "private" sites.
- Use a separate Firefox profile for browsing "private" sites.
- Use Private Browsing when browsing "private" sites (but then you can't bookmark the sites).
- Turn off bookmarks and/or history searching for awesome bar.
But maybe this isn't good enough for everyone. Some folks might want to just hide a couple of bookmarks from the awesome bar. We need a way to make certain bookmarks "not awesome" so they won't show up.
Enter bookmark tags... you can add tags to bookmarks to find them easily. Why not tag bookmarks with "notawesome", then somehow hide those from the awesome bar search?
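Conceptually, the filtering boils down to dropping any suggestion whose bookmark carries the notawesome tag. Here's a stripped-down sketch of the idea -- not the actual add-on code, which hooks into the Places autocomplete machinery; assume the tag list for each result has already been looked up:

// conceptual sketch: hide results tagged "notawesome"
function filterSuggestions(results) {
  return results.filter(function(result) {
    var tags = result.tags || [];   // hypothetical: tags attached to the result
    return tags.indexOf("notawesome") === -1;
  });
}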
On a whim, I hacked together a quick addon to do this: notawesome!
Lifehacker picked up on this (dunno how they found it buried in AMO), and apparently some folks find it useful.
To those 800 people using it already: thanks for trying it out and for your comments! I'll see if I can find some time to make it better. If anyone else wants to hack on it, let me know...
Monday, August 10, 2009
force tls
A while back, Collin Jackson and Adam Barth presented an idea called ForceHTTPS. The main idea was simple, yet powerful: give sites a way to say "in the future, ALWAYS load me via HTTPS". Why?
"Computers are increasingly mobile and, to serve them, more and more public spaces (cafes, airports, libraries, etc.) offer their customers WiFi access. When a web browser on such a network requests a resource, it is implicitly trusting the hotspot not to interfere with the communication. A malicious computer hooked up to the network could alter the traffic, however, and this can have some unpleasant consequences." [Mozilla Security Blog]
I like this force-security feature, and at the suggestion of a few other interested parties, I implemented a slightly different version of what Jackson and Barth had done. For now, I'm calling it ForceTLS, and the indicator to "always load via HTTPS" is an HTTP response header.
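To give a flavor of it, the site opts in by sending a response header over a secure connection; once the browser has seen it, future http:// loads for that host get upgraded to https://. Something along these lines -- the exact header name and parameters are documented on the Force-TLS site, so treat this as illustrative:

X-Force-TLS: max-age=2592000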
There's more details on my Force-TLS web site, but that's the gist of what it does. Some folks are working on a more detailed specification that hopefully will be published soon. For now, check out the add-on for Firefox, and let me know what you think!
Thursday, August 06, 2009
inheriting XPCOM across languages
I've been working on an Add-On for Firefox 3.* recently, and came across a situation where I wanted to do a little XPCOM component inheritance. Basically, there's an HTTPProtocolHandler in Firefox that is used in a variety of places, mainly in the creation of URIs and connections through HTTP. I wanted to modify the HTTP Protocol Handler so that it would get to "filter" each HTTP URI before a connection is created, and then maybe upgrade it to HTTPS if necessary (ForceTLS: see the AMO listing, my site, and the blog entry).
Anyhow, since there can be only one HTTP protocol handler, I have to somehow modify it, and since it's written in C++, I basically have to write my own from scratch to deploy it in an add-on.
But wait, there's got to be an easy way. Here's a thought: create a really basic component, capture a reference to the existing HTTP protocol handler, register the new one as the HTTP protocol handler, and for all method calls and property accesses on my handler, delegate back to the original protocol handler. In js-pseudocode:
// js-pseudocode: hypothetical hooks that would let my handler forward
// any unknown property access or method call to the old handler
myHandler.aPropertyAccessed = function(propName, context) {
  if (typeof this[propName] === 'undefined')
    return gOldHandler[propName];
  return this[propName];
};
myHandler.aFunctionCalled = function(fname, args) {
  if (typeof this[fname] === 'function')
    return this[fname].apply(this, args);
  return gOldHandler[fname].apply(gOldHandler, args);
};
But of course it's not that easy, because JavaScript has no general property-access or function-call hooks (like Python has). So instead I took to playing with prototypes, aided of course by my JS guru Ben:
// "@mozilla.org/network/protocol;1?name=http"
var kCID = "{4f47e42e-4d23-4dd3-bfda-eb29255e9ea3}";
var gOldHandler = Components.classesByID[kCID]
.getService(Ci.nsIHttpProtocolHandler);
function MyHandler() {}
MyHandler.prototype = {
//custom methods and overridden stuff here
}
MyHandler.prototype.__proto__ = gOldHandler;
But this didn't work because of XPCOM and QueryInterface: the JS object oldHandler may have other interfaces it supports, but the appropriate methods aren't in the JS instance. So I have to do something a bit more elaborate:
// Given two instances, copy in all properties from "super"
// and create forwarding methods for all functions.
function inheritCurrentInterface(self, super) {
  for (let prop in super) {
    if (typeof self[prop] === 'undefined') {
      if (typeof super[prop] === 'function') {
        (function(prop) {
          self[prop] = function() {
            return super[prop].apply(super, arguments);
          };
        })(prop);
      } else {
        self[prop] = super[prop];
      }
    }
  }
}

function MyHandler() {
  //grab initial methods (nsIHttpProtocolHandler)
  inheritCurrentInterface(this, gOldHandler);
}

MyHandler.prototype = {
  QueryInterface: function(aIID) {
    gOldHandler.QueryInterface(aIID);
    inheritCurrentInterface(this, gOldHandler);
    return this;
  },

  newURI: function(spec, originCharset, baseURI) {
    var uri = gOldHandler.newURI.apply(gOldHandler, arguments);
    //... do my stuff here ...
    return uri;
  }
};
Essentially, I have to import the functions and variables from the old HTTP protocol handler, and every time my instance (which is replacing the old protocol handler) is QI'ed, I have to QI the old one and re-import all its properties. This is because the old handler was also an nsIObserver and who knows what else.
I implemented my own newURI method by wrapping the one in the old handler and manipulating the URI that comes out of it. Because this is manually defined, it won't be shadowed by functions imported by the inheritCurrentInterface() calls.
The only lingering XPCOM question I've got is what to do with getInterface(). I think because of the inheritCurrentInterface() implementation, getInterface will get imported with the appropriate functionality when it's needed, but I'm not sure.
So I guess the next step is to try and figure out how to provide a JS library that makes this all a lot easier. I'd like some syntax like:
Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");

var MyService = XPCOMUtils.extendService(kCOMPONENT_CLASS_ID);
MyService.constructor = function(foo) {
  //do something with foo
};
MyService.prototype = {
  //override methods here
  componentMethod: function(a, b, c) {
    super.componentMethod(a, b, c);
  },
};
// then the rest of XPCOMUtils init stuff...
That syntax may not be workable, but something like it would be nice.
Labels:
firefox,
forcetls,
javascript,
tls,
xpcom
Monday, June 29, 2009
CSP: with or without meta?
We're working up a storm on Content Security Policy (CSP) here at Mozilla, and I've been spending a lot of time hacking out an implementation and talking with people about how CSP works. I keep coming back to sharp edges caused by allowing policies in <meta> tags. Not only does meta-tag support make the implementation of CSP more difficult, it also provides an additional attack surface.
What is CSP?
Quick summary of Content Security Policy: CSP lets web site authors specify a policy that locks down where the site may obtain resources as well as what types of resources may be requested. This policy is specified in an HTTP response header, or may also be specified in a meta tag. There's a great blog post by Brandon that talks about this stuff more in depth.
Enter the http-equivalent META tag.
Originally, the point of allowing policy definitions in meta tags was to gain greater flexibility and give an option to folks who can't modify HTTP headers due to web hosting restrictions. Later on, we started thinking that meta-tag-CSP would be a useful way to allow "tightening" or intersection of policies for specific segments of a web site.
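For the meta delivery path, the policy would ride along inside the document itself, mirroring the header form; roughly like this (the policy text is just an illustration):

<meta http-equiv="X-Content-Security-Policy" content="allow 'self'; img-src *">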
The only use case that comes to my mind is a shared web hosting service. Imagine the controllers of a hosting service want to forbid embedding of Flash content from EvilFlashHacker.com; at the same time their customers may want a more restrictive policy, but only one policy can be specified in HTTP. As a result the hosting company has three options:
- Let their customers override the policy (possibly removing the no-EvilFlashHacker.com rule)
- Disallow the ability for their customers to tighten the CSP
- Provide some way to allow policy tightening without the possibility of loosening.
The ability to specify policies in meta tags opens the door to option 3: policy tightening. Unfortunately, there are side effects to allowing policy specification in both HTTP headers and meta tags.
Side Effects.
Implementing CSP becomes quite a bit more complex with meta tags. First, the user agent has to figure out what to do when there are two conflicting policies, one in HTTP and one in meta. We solved this with an intersection algorithm that can only tighten an effective policy. Aside from conflicts, there's also the issue of parsing the meta tag out of the document appropriately before any resources subject to CSP are requested.
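To make "intersection" concrete, here's a simplified sketch for a single directive's source list -- the real algorithm also has to handle wildcards, schemes, ports, and the rest of the directives, so this is illustration only:

// effective sources = hosts allowed by BOTH the header policy and the meta policy
function intersectSources(headerSources, metaSources) {
  return headerSources.filter(function(host) {
    return metaSources.indexOf(host) !== -1;
  });
}

// intersectSources(["'self'", "cdn.example.com", "ads.example.com"],
//                  ["'self'", "cdn.example.com"])
//   ==> ["'self'", "cdn.example.com"]   -- the meta policy can only tighten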
Allowing policy specification in a meta tag also opens up another use for successful content injection attacks: injection of an unauthorized policy. Additionally, such a policy could be used for a limited CSRF attack on the site itself through a policy-uri or report-uri directive. Of course an unauthorized "allow none" policy can effectively DoS a site, too.
Content Separation.
In the haze of thinking about meta-tag-CSP uses, I lost track of the reason CSP was HTTP header-based in the first place: to separate the content from the transmission channel and the underlying policies that control what it is for and what it can do. There's a clear advantage to this separation: it is harder to attack. Adversaries now must be able to break into the protocol level, not just the application/content. HTTP headers are way more easily hardened on the server side than an HTML tag.
I want to eradicate meta-tag support for CSP, and all of the thorns that come with it -- policy intersection, document parsing complexity, HTML injection threats, etc -- because I don't think the relatively small gain from implementing it is worth the potential cost and risk. Is there a use (other than the hosting service use case above) that requires a meta tag CSP... and is worth the security risk and code complexity?
Labels:
content security policy,
csp,
meta tag,
mozilla,
security
Tuesday, April 21, 2009
roll your own EV
In working on a project recently, I found myself wanting to become an EV-SSL certificate authority (EV means Extended Validation). Lofty goals, yes, but really I just wanted to play with EV certificates and see if a couple of things were feasible. I'll post what happens as I figure it out.
Anyway, I needed to find a way to get a browser to accept a root CA that I created, and then get the browser to trust that root CA to issue EV certificates. This is harder than it sounds; regular SSL root certificates can be added easily to any browser, but the EV root certs can't. This is to protect users from accidental or malicious installation of EV root certs -- but unfortunately also protected me from easily doing it too.
Turns out, Firefox will let you "test" some CA certs as EV authorities, but you have to get your hands on a debugging build. Not only that, but unless you want to maintain a fresh CRL or OCSP server, you'll have to modify the source code. Sounds daunting, but it really isn't too bad. I've documented the whole process here, and I'll summarize in this blog post.
1. Create an EV-SSL Certificate Authority, and make an EV cert. This sounds fancy, but basically means: create a certificate authority, then issue a cert with a specific policy OID. The differences between regular CAs and EV CAs are minimal except in how the browser decides to classify them. In short, this should do the trick:
./CA.pl -newca
openssl req -config ./openssl.cnf -new -keyout newkey.pem \
    -out newreq.pem -days 30
openssl ca -config ./openssl.cnf -policy policy_anything \
    -out newcert.pem -infiles newreq.pem

Details here.
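One thing those commands don't show is the EV-specific part: the issued certificate needs a certificatePolicies extension carrying the policy OID you plan to treat as EV. In openssl.cnf that's something like the section below (the OID is a made-up example); point the ca command at it with -extensions v3_ev, or via the x509_extensions setting in the config:

[ v3_ev ]
basicConstraints = CA:FALSE
certificatePolicies = 1.3.6.1.4.1.99999.1.1.1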
2. Tame Firefox. This involves patching the Firefox source code to perform lazy freshness checks on certificates (and there's a patch for that here), and set it up to accept externally defined EV root authorities (you will list them in a text file). Then you must compile the source in debug mode to enable it. Details here.
3. Install your CA and go. You have to extract the base-64 encoded subject and serial number out of your CA certificate by installing this patch, compiling the NSS tools, and running the pp tool on your root certificate. Once you've got that data, put it, the EV policy OID of your choice, and the CA cert fingerprint in a file called "test_ev_roots.txt". That text file goes in your Firefox profile directory. Once that's set up, you run Firefox, install the root CA as a regular SSL trusted authority, and you're ready to go. Details here.
Summary. It's not impossible to install a root certificate and get Firefox to consider it an EV root, but it is surely difficult (and this is good). The instructions presented in this post are just a summary, and not intended to be the details, which can be found here.
Edit: I guess I should explain that EV means Extended Validation; basically, a more thorough check is performed by a certificate authority before issuing an EV certificate [EV on Wikipedia].
Labels:
certificates,
compiling,
EVSSL,
firefox
Wednesday, March 25, 2009
ev certs are not so ev
Last week at CanSecWest, Alex Sotirov and Mike Zusman showed how Extended Validation (EV) certificates don't really provide much additional help in securing a site with SSL. To sum up a couple of their conclusions:
1. It's not hard to get a regular cert for an interesting domain.
2. An EV-certified site can load data from any other SSL-encrypted locations, regardless of the cert.
3. Rogue cert + MITM + EV-site = arbitrary attack code execution on EV-site.
It seems to me that the problem is rooted in a slight but pervasive misunderstanding of what EV certs do: they provide a more rigorous check to ensure that the entity serving data through the EV certified channel is actually who they claim to be. They don't currently give proof to a site's visitor that the site has not been compromised.
Having said that, if an attacker can prey on the way the site serves content, it doesn't matter whether or not the EV entity is actually who they claim to be; an attacker can just piggyback on their session, serving some cleverly crafted data with a rogue cert. This can be done by playing man in the middle with third-party content embedded on the EV site, or by playing tricks with the browser's cache.
Easy fix: In order to display the green bar (EV badging), require all the stuff on a web page to be served with the same EV cert. This is not attractive for many reasons, including ad syndication and distributed content serving -- both highly desirable uses that may cross the fully-qualified domain border and thus require non-EV certs or multiple different EV certs. (EV certs are not allowed to have wildcard domain matching, so any difference in domain name will cause the cert to be invalid.)
A more desirable fix, in my opinion, will take a look at the problem from a base level and figure out why EV breaks in these mixed-content or mixed-cert scenarios--then fix EV. The EV cert says "trust my subject at domain.com." What we really need is a way to say "trust this site."
More to come.
Cheers to Sotirov and Zusman for this excellent discovery and PoC man-in-the-middle script.
Labels:
"subprime pki",
security,
spoofing,
ssl
Tuesday, March 10, 2009
where should web policies be enforced?
I've been spending a lot of time thinking about context-based security decisions lately. These are decisions made on behalf of a website (or its users) in order to maintain data integrity and secrecy. Take, for instance, CSRF and Clickjacking; both attacks involve some sort of contextual manipulation. CSRF is an HTTP request generated by an untrustworthy source, and Clickjacking is data theft by overlaying forms (and the like).
There are many approaches to stop these things... but on whose shoulders should the policy enforcement lie? Should a web browser be in charge of making sites safe, or should it just be an enabler that provides enough information to a server so it can make appropriate decisions?
CSRF can be stopped by the browser, but in a fairly convoluted way. It's tough for a web browser to discern which requests will cause a significant transaction on the web server. For example, the simple request
GET /profile/deleteme.do
could be enough to trash someone's account on a server. Even some POST requests don't cause internal state changes (imagine a currency conversion calculator or a POST-based language translator form). It seems a bit easier to stop CSRF on the server, where the application itself can decide whether or not to trust each request. This requires the browser, however, to provide some information about the request, such as where it came from (but the HTTP Referer header is not reliable) or how it came (such as whether it was from an image tag, etc).
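As a deliberately simplified sketch of that server-side approach, imagine the browser attached a little context to each request and the application checked it before doing anything destructive -- the header names here are hypothetical stand-ins, not a real proposal:

// hypothetical server-side check; header names are illustrative only
function shouldProcessRequest(request) {
  var origin = request.headers["X-Request-Origin"];   // where the request came from
  var how    = request.headers["X-Request-Source"];   // e.g. "form", "img", "script"
  return origin === "https://example.com" && how === "form";
}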
Clickjacking is more easily approached on the client side. One approach is to make an impenetrable fence around certain special frames where any outer/parent frames can't layer stuff on top of their children. This frame-firewall approach is not always attractive, so maybe there should be some mechanism that allows the server to say "hey, this is sensitive, don't overlay this content I'm serving." Then again, maybe it would be ideal to just tell the server a bit about where the content will be rendered, and let the server decide whether or not to serve data.
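One concrete instance of the "don't overlay this content" idea is a response header the server can attach to sensitive pages, like the frame-restricting header that was starting to appear in browsers around this time:

X-Frame-Options: DENY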
But what both Clickjacking and CSRF have in common is that they leverage the contextual blindness of many web applications -- they're not aware of where requests come from or where their responses end up.
It seems clear that we can't rely on just a browser-side or server-side fix for these types of things, and instead we need to have some sort of cooperation. The question remains, however, who should do the bulk of the enforcement. I'm currently leaning towards using the browser as a context-revealing tool and leaving enforcement and policy decisions up to the server, but there's many times when that's not enough to stop client-side attacks.
Labels:
clickjacking,
context,
csrf,
security
Wednesday, February 25, 2009
career move
I have been trying to keep this blog fairly technical, but since I haven't posted anything in a while and I've more or less changed my main focus, I figure it is relevant to post an update.
Recently I completed my Ph.D. and took a position at Mozilla Corporation. I'm going to be working on the security team there to protect the internets. Eventually I'll get back into the groove of posting relevant information to this blog (since I'll keep my focus in the security/computing realm) but it might take me a while to ramp up. In the meantime, thanks to all those nice folks at Mozilla who have been helpful with my move.