
Friday, December 03, 2021

Reckless Software Construction

Dear software developers: please please please stop making disposable software.

Demand for rapid development of software has radically changed how we develop, deploy, and maintain software over the past decade or so.  Storage is cheap.  Bandwidth is plentiful.  Agile and rapid iteration allow fast-to-market delivery.  All of this leads to fast, inefficient, messy, and reckless Rube Goldberg software construction.

I want to be clear: I am not trash-talking all software engineering via component composition (where devs take pre-built things and slap them together to make a new thing).  This is useful for many service-based products or web applications where the software runs on the software company's own hardware.  This strategy is over-used and is not the tool for every job.  I like some quality DevOps and do love a good web app, but the problem is when software is thrown together without care and diligence.

The equation is simple: moving fast and cutting corners leads to increased risk.

Agile dev + fast-to-market + prefab software = unnecessary risk

Stop it.  Be an engineer, not MacGyver.

"Our CISO will take care of this."

No.  Your CISO doesn't want to remediate your security problems after-the-fact.  Your CISO is there to help guide your development efforts down a safe path, NOT to compensate for your desire to "ship fast, fix later".  You should be working with your security team to avoid unnecessary risk and not call them in as a clean-up crew when you screw it up.  

Fast to market is great, but success will be short-lived if you are also fast-to-hacked.

Security teams are increasingly tasked with securing what some call a Software Supply Chain: the third party, often unknowing, contributors to a software product.  This chain of components (or frankly, pre-packaged #includes, libraries, or services) creates risk that needs to be at least monitored.  All of the third-party things your devs use to deploy your product will potentially fall victim to the next SolarWinds, hacked NPM package, or Kaseya.

Take Inventory

The provenance of software has become so out of control that even the White House is urging an inventory of all the stuff (a Software Bill of Materials, or SBOM) you use to make your things.  Cybersecurity risk is high not only because attackers know hacking your stuff is profitable, but also because your stuff is too complicated.  There, I said it: the software you created is unnecessarily complicated. This call for SBOM is a cry for help; you need to at the very least know what your stuff is made of.
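If you have literally no idea what your stuff is made of, even a trivial inventory beats nothing. Here's a minimal sketch (assuming a Python environment; real SBOM tooling like CycloneDX or SPDX generators goes much further) that enumerates every installed package and its version:

```python
# Minimal dependency inventory for a Python environment -- a starting
# point toward an SBOM, NOT a replacement for real SBOM tooling.
from importlib import metadata

def inventory():
    """Return {package name: version} for everything installed."""
    return {dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()}

if __name__ == "__main__":
    for name, version in sorted(inventory().items()):
        print(f"{name}=={version}")
```

This only covers direct runtime dependencies in one environment; a real SBOM also records transitive dependencies, hashes, and suppliers.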

Take Responsibility

This need for SBOM is a symptom of developers’ rush to pass the responsibility for fair and safe software onto others.  So you make a thing out of other peoples' things... it is YOUR responsibility to vet the components.  Your customers are relying on you to do this!  Requiring an SBOM helps someone else vet your product, but it's not foolproof and is not for end-users. Software developers must take responsibility for securing their work and not blindly build trash out of other peoples’ trash. SBOM is not a panacea; it is a stopgap until we regain control over the junk we build.

Take Ownership

Ultimately when your software is compromised, your customers will blame you.  Will you pass the blame to your third-party suppliers?  You should be building something you're proud of and something you UNDERSTAND.  The right tool for the job is not always the fanciest/newest tool.  You do NOT need to use docker containers to build a solitaire app on iOS. 

Take Action

The answer is not "more software."  The answer is not "hire a consultant."  The answer is not SBOM. The answer is not DevSecOps (though that can help).

The first thing you should do is seriously think about your engineering practices... like really think about it.  If you develop client-side software (apps), evaluate the components and tools you use to build your app.  Maybe throwing it together quickly got it out the door, but how much effort are you spending maintaining it?  Are you building the right thing?  Phone apps are bloated.  Why is my email app 270MB?  Why does this email app update EVERY DAMN WEEK?  These are questions I'm asking your developers, and you should be asking them too.  Doing your job well requires more than just tossing the product over the wall and going out for a beer.

Consider factoring out third-party components when you only use a small part of the component.  Do you really need jquery and d3.js and Angular? Do you need all those NPM packages?  An elegant, well-crafted architecture results in a more stable and secure product.  Take pride in your design and not just your speed. Focus. Keep it simple as you can; any amount of early investment to understand and reduce complexity of your software's provenance will minimize the chance that someone else's mistake ruins your product.

Sunday, February 26, 2012

Malware and Phishing Protection in Firefox

For a while, Firefox has included malware and phishing protection to keep our users safe on the web.  Recently, Gian-Carlo Pascutto made some significant improvements to our Firefox support for the feature, resulting in much more efficient operation and use of the Safe Browsing API for this protection.

Privacy in the Safe Browsing API

I want to take a little time to explain how this feature works and why I like it from a privacy perspective:  Firefox can check whether or not a web site is on the Safe Browsing blacklist without actually telling the API what the web site is called.

At a high level, using this API to find URLs on the "bad" list is like asking your friend to identify whether or not he likes things you show him through a dirty window.  Say you hold up an apple to the dirty window, and your friend on the other side sees a fuzzy image of what you're holding.  It looks round and red and pretty small, but he's not sure what it is.  Your friend looks at his list of things he doesn't like and says he likes everything like that except for plums and red tennis balls.  While he still does not know exactly what you're holding, you can know for sure he likes the apple.

More technically, this uses a hash function to turn web URLs into numbers.  Each number corresponds to exactly one URL.  For each site you visit, Firefox hashes the URL and sends the first part of the resulting number to the Safe Browsing API.  The API responds with any values on the list of bad URLs that start with the value it received.  When Firefox gets the list of "bad" site hash values that match the first part, it looks to see if the entire hash is in the list.  Based on whether or not it's in the provided list of bad stuff, Firefox can determine whether the URL is on the Safe Browsing blacklist or not.

Consider this hypothetical example of two sites and their (fake) hash values:

Site                       Hash Value
http://mozilla.com         1339
http://phishingsite.com    1350

When you visit http://mozilla.com, Firefox calculates the hash of the URL, which is 1339.  It then asks the Safe Browsing API what bad sites it knows about that start with "13".  It returns a list of numbers including "1350".  Firefox takes that list, notices that 1339 (http://mozilla.com) is not in the list, so the site must be okay. 

If you repeat the same procedure with http://phishingsite.com, the same prefix "13" is sent to the API, and the same list of bad sites (including 1350) is returned.  In this case, however,  the site's hash is "1350" so Firefox knows it's on the list of bad sites and gives you a warning.

For you techies and geeks out there: yeah, I'm glossing over a few protocol details, but the gist is that you don't need to tell Google exactly where you browse in return for the bad-stuff blocking. 
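For the curious, the prefix-matching idea sketches out in a few lines. This is a toy model, not the actual protocol (the real API canonicalizes URLs, uses short binary hash prefixes, and keeps a local database of prefixes); the example URLs are made up:

```python
import hashlib

def url_hash(url: str) -> str:
    # Stand-in for the real Safe Browsing canonicalization + hashing.
    return hashlib.sha256(url.encode()).hexdigest()

def prefix(h: str, n: int = 8) -> str:
    # The client only ever sends this short prefix over the wire.
    return h[:n]

# "Server" side: full hashes of known-bad URLs (hypothetical example).
BAD_FULL_HASHES = {url_hash("http://phishingsite.example/")}

def server_lookup(p: str):
    """Return every bad full hash starting with the prefix -- the server
    never learns which exact URL the client is checking."""
    return [h for h in BAD_FULL_HASHES if h.startswith(p)]

def is_blacklisted(url: str) -> bool:
    h = url_hash(url)
    candidates = server_lookup(prefix(h))
    return h in candidates  # the final full-hash comparison is client-side
```

Many URLs share any given prefix, so the server can't tell which one you asked about; only your browser, which knows the full hash, learns the verdict.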

Keeping the Safe Browsing Service Running Smoothly

Google hosts the Safe Browsing service on the same infrastructure as many of their other services.  They need to ensure that our users aren't blocked from accessing the malware and phishing blacklists, and that they invest in the right resources to keep the service operating well.  One of the mechanisms they use for this quality-of-service assurance is a cookie, so the first request Firefox makes to the Safe Browsing API results in the setting of a Google cookie.

I know that not everyone likes that cookie, but Google needs it to make sure their service is working well so I've been working with them to ensure that they can use it for quality of service metrics but not track you around the web.  The most straightforward way to do this is to split the Firefox cookie jar into two: one for the web and one for the Safe Browsing feature.  It's not there yet, but with a little engineering work, in a future version of Firefox that cookie will only be used for Safe Browsing, and not sent with every request to Google as you browse the web.

The cookie can be turned off entirely if you disable third party cookies in Firefox.  When you turn off third party cookies, even if the cookie has been previously set, your browser will not send the Google cookie -- unless you visit a Google website. You can also turn off malware and phishing protection, but I really don't recommend it.

Making "Safer Browsing"

While Firefox has been using Safe Browsing for a while, Google has started experimenting with a couple new features in Safe Browsing for additional malware and phishing filtering.  Both features are quite new, and it's not yet clear how effective they are or what fraction of my browsing history would be traded for the improvement.  Both involve sending whole URLs to Google, and departing from Firefox's current privacy-preserving approach requires evidence of a significant gain in protection.  When Google measures and shares how much gain their pilot deployment in Chrome achieves, we can take a deeper look and consider whether these new features are worth it.

For now, Firefox users are getting a lot of protection for very little in return and there does seem to be good reason for Google to use cookies with Safe Browsing.  We are always looking out for things we can do to give Firefox users both the best of privacy and security.

Thursday, March 24, 2011

Force-TLS compatible with Firefox 4!

I've updated the Force-TLS Firefox Add-On to work with the newest version of Firefox! Force-TLS version 3.0.0 should work in Firefox 3.0 and newer.

So what does this mean? Well, HTTP Strict-Transport-Security (HSTS) is implemented in Firefox 4, and that's a pretty similar technology to Force-TLS. In fact, it is nearly identical except there's no UI in Firefox 4. If you install Force-TLS, you'll get a UI and also get the built-in HSTS support that's implemented much more completely and efficiently than any add-on. A while ago, I blogged about an experimental add-on called STS-UI that adds a UI to HSTS; Force-TLS shows essentially the same user interface, but I wanted to keep both the Firefox 3.x back-end and the front-end for all versions of Firefox in a single add-on.

So what's new in version 3.0.0?
  • Smarter: The invisible bits of Force-TLS are restructured to use the custom HTTPS-upgrading and header-noticing bits for earlier Firefox versions but use the HSTS back-end built into Firefox 4 when it's available.
  • Better: A few bugs in the user interface were fixed.
  • Organized: I've moved the code into an open source repository.

I've got a list of enhancements queued up for the next version of Force-TLS, but not a whole lot of time to work on it. If you'd like to help make Force-TLS more awesome, send an email to forcetls@sidstamm.com


Friday, November 12, 2010

bah. blacksheep.

It's a deeply satisfying way to defeat an attacker: give him a taste of his own medicine, and the BlackSheep add-on attempts to do just that. This add-on listens for Firesheep's patterns on the network and warns a user that they are being watched. BlackSheep drops juicy bait for Firesheep (fake session cookies wrapped in legitimate-looking requests to sites) and waits for Firesheep to pick them up. When a Firesheep attacker tries to exploit these fake insecure sessions, BlackSheep picks up on it and alerts its user.

While there are some Firesheep users that will set off this alarm, the problem is a bit more serious. Many attackers have probably looked deeper into the fundamental flaw and also want to protect themselves from such an attack. These folks will be using a security-forcing add-on like Force-TLS, STS-UI on Firefox 4, HTTPS Everywhere or NoScript. All of the protected bad guy's connections to vulnerable sites will be secure, including the sessions that have been copied from other users on the network.

The upshot is that while BlackSheep may see the attacker connecting to vulnerable sites, it won't always know when the bait was taken. For many serious Firesheep-like attacks, BlackSheep will remain quiet.

There are many Firesheep users who are just playing around with the new, hot hacking tool, so BlackSheep will cry out some of the time; it can't, however, be relied on to detect most Firesheep attacks. While I like the idea of an alarm like BlackSheep, solving the underlying problem (and stopping the serious attackers who are more of a cause for concern) is a bit more complex and requires a different approach. Ultimately, we need to secure wifi networks, protect ourselves with HTTPS-forcing technologies, and ask sites to protect their users by using HTTPS for the whole session.

Thursday, August 26, 2010

HTTP Strict Transport Security has landed!

It was a year ago now that I first blogged about ForceTLS, and it's matured quite a bit since. I revised ForceTLS to be more robust, and began actually implementing it as HTTP-Strict-Transport-Security in Firefox. I'm excited to say that my patch has been reviewed and landed in mozilla-central.

What's that mean? Look for it in the next beta release of Firefox 4! If you can't wait, grab a nightly build, but when 4.0 is released, HTTP Strict-Transport-Security will be built-in and turned on by default.


Though the feature's core functionality is there, work on HSTS is not completely finished. There are still a few tweaks I'd like to make, mainly providing a decent UI so people can add/remove HSTS state for servers themselves -- but none of this is necessary to be specification compliant. As landed, HSTS is the behind-the-scenes implementation that listens to the HTTP Strict-Transport-Security header and follows those instructions.


In case you don't feel like trawling through the IETF Internet Draft specification but you want to figure out how it works, here's a quick summary:


  1. Over an HTTPS connection, the server provides the Strict-Transport-Security header indicating it wants to be an HSTS host. It looks something like this:
    Strict-Transport-Security: max-age=60000
    The header's presence indicates the server's desire to be an HSTS host, and the max-age states for how many seconds the browser should remember this.
  2. For an HSTS host (e.g., paypal.com), any subsequent requests assembled for an insecure connection to that host (http://paypal.com), will be rewritten to a secure request (https://paypal.com) before any network connection is opened.
  3. Optionally, the header can include a second includeSubdomains directive that tells the browser to additionally "upgrade" all subdomains of the serving host. That looks like this:
    Strict-Transport-Security: max-age=60000; includeSubdomains

If Firefox knows your host is an HSTS one, it will automatically establish a secure connection to your server without even trying an insecure one. This way, if I am surfing the 'net in my favorite cafe and a hacker is playing MITM with paypal.com (intercepting http requests for paypal.com and then forwarding them on to the real site), either I'll thwart the attacker by getting an encrypted connection to paypal.com immediately, or the attack will be detected by HSTS and the connection won't work at all.
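The mechanics of the steps above are simple enough to sketch. This is a toy model of an HSTS store and URL upgrade, not the Firefox implementation (it ignores the includeSubdomains directive and the requirement that the header only counts over HTTPS):

```python
import time
from urllib.parse import urlsplit, urlunsplit

# Toy HSTS store: host -> expiry timestamp (seconds since the epoch).
hsts_hosts = {}

def note_header(host: str, header: str):
    """Record an HSTS host from a Strict-Transport-Security header value,
    e.g. "max-age=60000"."""
    for part in header.split(";"):
        part = part.strip()
        if part.startswith("max-age="):
            hsts_hosts[host] = time.time() + int(part.split("=", 1)[1])

def upgrade(url: str) -> str:
    """Rewrite http:// to https:// for known, unexpired HSTS hosts --
    before any network connection is opened."""
    parts = urlsplit(url)
    if parts.scheme == "http" and hsts_hosts.get(parts.hostname, 0) > time.time():
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)
```

The key property is that the rewrite happens inside the browser, so an attacker on the network never even sees an insecure request to intercept.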

There are more details in the specification, like some further restrictions on cert errors and the like, but this is the general setup, and I believe a pretty nice feature to increase your site's security.


Also: Jeff and Andy at Paypal are working hard at standardizing this.



Tuesday, August 17, 2010

OWASP AppSec USA

Michael Coates writes:
Several Mozilla employees will be attending this year's OWASP AppSecUSA event held in Irvine, CA (Sept 7-10). OWASP conferences focus entirely on application security and are considered the premier event for this area of security.
I'm one of those mysterious Mozilla folk going to the meeting, and if you're there drop by and say hi. If you're not, you should consider coming! OWASP does wonderful things for web application security, and there are always fantastic talks to look forward to at AppSec USA.

This year, I'm hoping to spread the word about Content Security Policy a bit more; if you run a web site (or better yet, if you secure one) come find us at our booth or harass one of the Mozilla folk in the hallway and we'll be more than happy to help figure out how CSP can make your site more secure for your users.

Thursday, November 12, 2009

OWASP AppSec DC '09

I'm at OWASP AppSec DC '09 this week. If you're there too, come find me and say hi!

Monday, October 12, 2009

csp @ stanford security seminar

I'll be giving a talk at the October 13 Stanford Security Seminar. 4:30pm in Gates 4B. Show up if you're interested in CSP or want to heckle!

Friday, October 02, 2009

CSP Preview!

Brandon Sterne and I released a preview of Firefox with Content Security Policy features built in. There are still little bits of the specification that aren't yet ready (like HTTP redirection handling), but most of the core functionality should be there.

If you'd like to play around with this pre-release version of Firefox (very alpha, future release) that has CSP built in, download it here! You can test it out at Brandon's demo page.

In case you're not familiar with CSP, it's a content-restriction system that allows web sites to specify what other types of stuff can be embedded on their pages and where it can be loaded from. It's very similar to something called HTTP Immigration Control that I was working on in grad school, so I'm very excited to be part of the design, specification and implementation -- hopefully a big step towards securing the web.

Previously: Shutting Down XSS with Content Security Policy and CSP: With or Without Meta?

Update: The old download link expired. New one should have a much longer lifetime (here).

Monday, August 10, 2009

force tls

A while back, Collin Jackson and Adam Barth presented this idea called ForceHTTPS. The main idea was simple, yet powerful: allow sites a way to say "in the future, ALWAYS load me via HTTPS". Why?

"Computers are increasingly mobile and, to serve them, more and more public spaces (cafes, airports, libraries, etc.) offer their customers WiFi access. When a web browser on such a network requests a resource, it is implicitly trusting the hotspot not to interfere with the communication. A malicious computer hooked up to the network could alter the traffic, however, and this can have some unpleasant consequences." [Mozilla Security Blog]

I like this force-security feature, and by suggestion from a few other interested parties, I took to implementing a slightly different version from what Jackson and Barth had done in the past. For now, I'm calling it ForceTLS, and the indicator to "always load via HTTPS" is an HTTP response header.

There's more details on my Force-TLS web site, but that's the gist of what it does. Some folks are working on a more detailed specification that hopefully will be published soon. For now, check out the add-on for Firefox, and let me know what you think!

Monday, June 29, 2009

CSP: with or without meta?

We're working up a storm on Content Security Policy (CSP) here at Mozilla, and I've been spending a lot of time hacking out an implementation and talking with people about how CSP works. I keep coming back to sharp edges caused by allowing policies in <meta> tags. Not only does meta-tag support make implementation of CSP more difficult, but it actually also provides an additional attack surface.

What is CSP?
Quick summary of Content Security Policy: CSP lets web site authors specify a policy that locks down where the site may obtain resources as well as what types of resources may be requested. This policy is specified in an HTTP response header or may also be specified in a meta tag. There's a great blog post by Brandon that talks about this stuff more in depth.

Enter the http-equivalent META tag.
Originally, the point of allowing policy definitions in meta tags was to gain greater flexibility and give an option to folks who can't modify HTTP headers due to web hosting restrictions. Later on, we started thinking that meta-tag-CSP would be a useful way to allow "tightening" or intersection of policies for specific segments of a web site.

The only use case that comes to my mind is a shared web hosting service. Imagine the controllers of a hosting service want to forbid embedding of Flash content from EvilFlashHacker.com; at the same time their customers may want a more restrictive policy, but only one policy can be specified in HTTP. As a result the hosting company has three options:

  1. Let their customers override the policy (possibly removing the no-EvilFlashHacker.com rule)

  2. Disallow the ability for their customers to tighten the CSP

  3. Provide some way to allow policy tightening without the possibility of loosening.

An ability to specify policies in meta tags gives way for situation 3: policy tightening. Unfortunately there are side-effects to allowing policy specification in both HTTP headers and meta tags.

Side Effects.
Implementing CSP becomes quite a bit more complex with meta tags. First, the user agent has to figure out what to do when there are two conflicting policies, one in HTTP and one in meta. We solved this with an intersection algorithm that can only tighten an effective policy. Aside from conflicts, there's also the issue of parsing the meta tag out of the document appropriately before any resources subject to CSP are requested.
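To make the tightening idea concrete, here's a simplified sketch of directive-wise policy intersection (not the actual algorithm from the spec -- it ignores default-source fallback and models each directive as a plain set of allowed sources):

```python
# Toy intersection of two CSP-style policies, each a mapping from
# directive name to a set of allowed sources. Combining by intersection
# means the meta-tag policy can only tighten the header policy.
def intersect(header_policy, meta_policy):
    combined = {}
    for directive in set(header_policy) | set(meta_policy):
        a = header_policy.get(directive)
        b = meta_policy.get(directive)
        if a is None:
            combined[directive] = set(b)   # only meta constrains this
        elif b is None:
            combined[directive] = set(a)   # only header constrains this
        else:
            combined[directive] = a & b    # both: keep the overlap only
    return combined
```

Since a set intersection can never contain a source absent from the header policy, no meta tag can loosen what the server declared.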

Allowing policy specification in a meta tag also opens up another use for successful content injection attacks: injection of an unauthorized policy. Additionally, such a policy could be used for a limited CSRF attack on the site itself through a policy-uri or report-uri directive. Of course an unauthorized "allow none" policy can effectively DoS a site, too.

Content Separation.
In the haze of thinking about meta-tag-CSP uses, I lost track of the reason CSP was HTTP header-based in the first place: to separate the content from the transmission channel and the underlying policies that control what it is for and what it can do.  There's a clear advantage to this separation: it is harder to attack. Adversaries now must be able to break into the protocol level, not just the application/content. HTTP headers are way more easily hardened on the server-side than an HTML tag.

I want to eradicate meta-tag support for CSP, and all of the thorns that come with it -- policy intersection, document parsing complexity, HTML injection threats, etc -- because I don't think the relatively small gain from implementing it is worth the potential cost and risk. Is there a use (other than the hosting service use case above) that requires a meta tag CSP... and is worth the security risk and code complexity?

Wednesday, March 25, 2009

ev certs are not so ev

Last week at CanSecWest, Alex Sotirov and Mike Zusman showed how extended validation (EV) certificates don't really provide much additional help in securing a site with SSL. To sum up a couple of their conclusions:

1. It's not hard to get a regular cert for an interesting domain.
2. An EV-certified site can load data from any other SSL-encrypted locations, regardless of the cert.
3. Rogue cert + MITM + EV-site = arbitrary attack code execution on EV-site.

It seems to me that the problem is rooted in a slight but pervasive misunderstanding of what EV certs do: they provide a more rigorous check to ensure that the entity serving data through the EV certified channel is actually who they claim to be. They don't currently give proof to a site's visitor that the site has not been compromised.

Having said that, if an attacker can prey on the way the site serves content, it doesn't matter whether or not the EV entity is actually who they claim to be; an attacker can just piggyback on their session, serving some cleverly crafted data with a rogue cert. This can be done by playing man in the middle with third-party content embedded on the EV site, or by playing tricks with the browser's cache.

Easy fix: In order to display the green bar (EV badging), require all the stuff on a web page to be served with the same EV cert. This is not attractive for many reasons, including ad syndication and distributed content serving -- both highly desirable uses that may cross the fully-qualified domain border and thus require non-EV certs or multiple different EV certs. (EV certs are not allowed to have wildcard domain matching, so any difference in domain name will cause the cert to be invalid).

A more desirable fix, in my opinion, will take a look at the problem from a base level and figure out why EV breaks in these mixed-content or mixed-cert scenarios--then fix EV. The EV cert says "trust my subject at domain.com." What we really need is a way to say "trust this site."

More to come.

Cheers to Sotirov and Zusman for this excellent discovery and PoC man-in-the-middle script.

Tuesday, March 10, 2009

where should web policies be enforced?

I've been spending a lot of time thinking about context-based security decisions lately. These are decisions made on behalf of a website (or its users) in order to maintain data integrity and secrecy. Take for instance the issues of CSRF and Clickjacking; both of these attacks involve some sort of contextual manipulation. CSRF is an HTTP request generated by an untrustworthy source, and Clickjacking is data theft by overlaying forms (etc).

There are many approaches to stop these things... but on whose shoulders should the policy enforcement lie? Should a web browser be in charge of making sites safe, or should it just be an enabler that provides enough information to a server so it can make appropriate decisions?

CSRF can be stopped by the browser, but in a fairly convoluted way. It's tough for a web browser to discern which requests will cause a significant transaction on the web server. For example, the simple request

GET /profile/deleteme.do

could be enough to trash someone's account on a server. Even some POST requests don't cause internal state changes (imagine a currency conversion calculator or a POST-based language translator form). It seems a bit easier to stop CSRF on the server, where the application itself can decide whether or not to trust each request. This requires the browser, however, to provide some information about the request, such as where it came from (but the HTTP Referer header is not reliable) or how it came (such as whether it was from an image tag, etc).
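One common way a server can decide whether to trust a request, without relying on the browser at all, is the synchronizer-token defense. A minimal sketch (the names and key handling here are hypothetical, purely for illustration):

```python
import hashlib
import hmac
import secrets

# Server-side secret key (hypothetical; a real app persists and rotates it).
SECRET = secrets.token_bytes(32)

def csrf_token(session_id: str) -> str:
    """Derive a per-session token that the site embeds in its own forms."""
    return hmac.new(SECRET, session_id.encode(), hashlib.sha256).hexdigest()

def is_trusted(session_id: str, submitted_token: str) -> bool:
    """A cross-site attacker can make the victim's browser SEND a request,
    but cannot READ the token out of the page, so forgeries fail here."""
    return hmac.compare_digest(csrf_token(session_id), submitted_token)
```

This shifts the decision entirely to the application: a request is significant and trusted only if it carries a token that could only have come from the site's own pages.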

Clickjacking is more easily approached on the client side. One approach is to make an impenetrable fence around certain special frames where any outer/parent frames can't layer stuff on top of their children. This frame-firewall approach is not always attractive, so maybe there should be some mechanism that allows the server to say "hey, this is sensitive, don't overlay this content I'm serving." Then again, maybe it would be ideal to just tell the server a bit about where the content will be rendered, and let the server decide whether or not to serve data.

But what both Clickjacking and CSRF have in common is that they leverage contextual blindness that many web applications have -- they're not aware of where requests come from or where their responses end up.

It seems clear that we can't rely on just a browser-side or server-side fix for these types of things, and instead we need to have some sort of cooperation. The question remains, however, who should do the bulk of the enforcement. I'm currently leaning towards using the browser as a context-revealing tool and leaving enforcement and policy decisions up to the server, but there's many times when that's not enough to stop client-side attacks.

Friday, February 16, 2007

drive-by pharming

If you have not set a hard-to-guess password on your broadband router, do it now. There's a way attackers can compromise your router from the inside using simple JavaScript.

The basic idea is this: you visit a malicious website and it distracts you. While you're distracted (playing a game, reading news, etc), it runs JavaScript code to scan your internal network and identify the IP address of your router. Once discovered, the malicious script can send "reconfiguration requests" to the router to attempt setting the DNS server your network uses. If successful, all DNS queries can be directed through an attacker's server, thus Pharming you. For technical details, please see our tech report, but in brief this attack is not complex.

The solution: make your router's admin password hard to guess.

I recently developed this with Zulfikar Ramzan from Symantec, who forwarded to my advisor (Markus) an interesting Black Hat talk by Jeremiah Grossman. Markus in turn forwarded to me and that's when it struck me that we could similarly mount a pharming attack without playing man-in-the-middle - all it takes is a tweak of the router's DNS server setting, and a whole home network is pharmed. Coupled with the idea that roughly 50% of broadband routers still use the default password, this attack affects a whole lot of people.

Symantec PR picked up on what we did, and issued a press release today:
(Symantec Press Release)

Read More:
(Zully's Blog Post)
(Tech Report)


Select Media Coverage:
(Google aggregate)
(Info World -- IDG article)
(Forbes)
(Appscout)
(BroadbandReports.com -- amusing comments thread)
(Washington Post Blog)
(Red Herring)
(Computer World)

Update (16-Feb-07 9:30am ET): The story got picked up by Forbes and the Washington Post, and the Google News index on "Drive-by Pharming" is roughly fifty-something.

My Favorite Headlines:
Researchers highlight a router route to pharming
New Drive-By Attack Taking Over Home Routers
Broadband routers welcome drive-by hackers
Change Your Router Password NOW!

Update (16-Feb-07 10:30am ET): Slashdot picked it up.

Monday, February 12, 2007

bad security assumption

Good assumption:
My domain (DNS) name is not safe from forgery. Bad people might "hijack" it and use it to pretend they are me.


Bad assumption:
If my domain name is hijacked or spoofed, then I lose control of all the subdomains too. This means that if someone else pretends to be sidstamm.com then they will also take control over blog.sidstamm.com, mail.sidstamm.com and ohcrap.sidstamm.com.


Reality: DNS spoofing is done at the record level, and since each subdomain is a different record, an attacker might control one subdomain while you retain control over the rest.

Consequence of this: The same-origin policy enforced by most browsers says this: scripts served by one host cannot access, execute or change data served by another host. In this case b.a.com and a.com are considered different hosts.

There is one exception to the rule: a website may change its "document.domain" property to a suffix of what it currently might be. For example, a page served by b.a.com may set its domain to a.com AFTER it is served. In this case, b.a.com can play with a.com's data.
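The suffix rule is easy to state precisely. Here's a simplified sketch of the check a browser applies when a page sets document.domain (real browsers are stricter -- for instance, they refuse bare public suffixes like "com", which this toy version does not model):

```python
def may_set_document_domain(current: str, new: str) -> bool:
    """A page may only relax document.domain to itself or to a suffix of
    its current domain on a label boundary: b.a.com -> a.com is allowed,
    but a.com -> b.a.com (or evila.com -> a.com) is not."""
    return current == new or current.endswith("." + new)
```

So a page at b.a.com can opt into a.com's origin, which is exactly why controlling any one subdomain record is enough for the parasite-frame attack described below.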

Your data is no longer safe unless you control all of your subdomains. The case used to be simpler: a phisher or pharmer had to create a complete duplicate of a site to fool you. Now, he just creates a parasite frame and watches you interact with the real thing. Beware of visiting update-security.yourbank.com.

Scary.

(Link to Abe Fettig's explanation)
(Link to Same-Origin Policy info)