Showing posts with label mozilla. Show all posts

Monday, June 03, 2019

Data Transparency: Revisited

With this academic year behind me, I had some time to think and reflect on what has brought me where I am.  In 2012, I had an opportunity to participate in what served to energize me and mold what ended up being my short but intense career in web tech. 

In April 2012, the Wall Street Journal hosted a Data Transparency Weekend to bring together technologists, activists, journalists, and inventors from across the globe to work on the lack of transparency about how people are watched, profiled, and targeted online.  This NYC event connected me with allies and mentors who were all doing amazing work in online privacy: from the wicked smart reporting (and organizing) of Julia Angwin and Jennifer Valentino-DeVries, to folks like Dan Kaminsky, Danny Weitzner, Ed Felten, Alessandro Acquisti, Chris Hoofnagle, Peter Eckersley, and of course Ashkan Soltani, whose work has repeatedly inspired my own.  I cannot hope to name all the amazing folks who were there, and thinking back, it was incredible we all ended up in the same spot at the same time. To all of you who spent this time with me: thank you.

Since 2012, the level of conversation about online data and tracking has skyrocketed, but not much has changed about how I'm tracked and targeted online; if anything, it has intensified.

Our everyday lives are being invaded by what I consider multi-modal harassment: we are all barraged with unwanted solicitations, phone calls, text messages, emails, and display advertisements.  We're being force-fed product info for things some "annoying brother" thinks we want.  Some of us pay for TV and our programming still gets interrupted with ads.  The web is full of "free" sites, where you pay by allowing them to force-feed you ideas of other things you are supposed to want.  We end up spending money externally (on things we don't actively seek) instead of on those things we seek and intentionally use. To me, it feels like I'm always walking up the street to my favorite pub, but against the wind of a severe storm with a driving rain of advertisements in my face.

We also face a data collection problem: organizations like Amazon, Facebook, Google, and others are accumulating massive profiles of data on individuals.  They are often "innovative" (reckless) with the data once it is collected.  Secondary use is commonplace for "experimentation", and can lead to unanticipated violations of consumers' privacy.  Tools keep emerging that enable more collection and processing of data.  Facial recognition (FR) and machine learning (ML) are new shiny things that everyone wants, and while they do interesting things, the far-reaching impact, and indeed the degree of "correctness", of using these tools is not widely understood.  ML and FR can be used to make dumb decisions (like connecting porn stars to social media profiles, or the widespread tracking used to assign a "social credit score" in China).

How do we know who to trust with information about us when it's not obvious when they're collecting that data?  How can we even make choices about who *to* trust? This is outright information theft when someone observes and measures me for their own un-shared profit.  It's worse when there are no incentives to protect gathered data since it exposes the data subjects like me to unanticipated risk.

When Cyber becomes Physical

Our online presence is monitored and tracked in cyberspace using means that would not be tolerated in the physical world.  I'm not only concerned with the risk we're exposed to due to this collection, but as connected devices become so pervasive, tracking in physical space becomes much more feasible.  This crossing-over of collection from cyber- to physical- realms also brings with it all the risks of the online data free-for-all.

Most of this "innovation" in tracking and data warehousing is driven by marketing.  I used to ruthlessly argue that the right solution was a collaborative effort between marketing firms and consumers.  After having seen the rise and fall of Do Not Track, I no longer believe collaboration can happen.  I now realize that the incentives are all wrong: ad tech cares only about the bottom line and there is little cost in getting ad spreads in front of consumers.  This is wildly different from the physical world where space, audience, and construction costs pressure ad firms to be much more careful about who and how they target.

Where do we go from here?  

We need to solve two giant problems: advertisement inundation and reckless data collection. 

For years I've heard promises that we'll see better ads (and fewer of them!) if we allow firms to track us.  Neither of these has happened; I get crap calls and see crap ads online, and my eyes and ears are tired of it.  Consumers need more signal and less noise.  Disconnect, callblock, and adblockfast (all promising brainchildren of Brian Kennish) help attenuate noise.  While it's disappointing that we need stuff like this, noise attenuation should be a feature of *all* mainstream software, and not an add-on.  Consumers also need to get over the fear of directly paying for web sites and services like we happily do with phone apps.  For those of us who want free stuff and will tolerate ads (like with broadcast TV), a fairer marketing scheme is critical, but that requires some big changes like the ones Brave is trying out.

In the long run, we need to think more about the consumers of our technology and train responsible engineers and architects.  These are the people who *must* consider the societal impacts of their work beyond what is fastest, or generates the quickest dollar -- which includes being transparent and respectful with how we treat people's data.  If we are to involve consumers in the trade of their data, the first necessary step remains the same as it was in 2012: Data Transparency.  Let's start with that.

Monday, March 18, 2013

what ever happened to the second party?


I got into a terminology discussion with Brendan this week, and it turns out there's general confusion over these labels we give to businesses on the web: first party and third party.  This topic has been debated ad nauseam in the TPWG, but I want to share my thoughts on what it means in the context of cookies and the general browser/webpage point of view.

The Marx brothers have a take on this in A Night at the Opera when they get into a discussion of parties and contracts, and I think they're on to something, but on the web these party labels probably come from business-focused contractual engagements. So which party am I?  I'm not a party (though that sounds like fun).

In the case of cookies, the party labels are all about contractual arrangements to produce a good or service for you. You, the surfer, are not part of the contract, but you benefit from a system of first, second and third party businesses.

Here, the first party is the business you seek to engage.  The second party in question is a contractor doing business explicitly for the first party. For example, when you visit the grocery store, someone might help bag your groceries. Maybe they're a temp worker and are actually employed by a different company, but their sole job is to do what the grocery store asks, and they do their work in the store. In these cases there's a direct business agreement between first (business) and second (contractor) parties to produce one service or good. For all intents and purposes, the bagger seems like part of the store.
 
Second-party cookies don't make much sense in the online cookie context since to the web browser, there's no technical distinction between the first-party or second-party web software. The assumption here is that second parties operate within the "umbrella" of the first party, so the cookies are part of the first party offering.

Any third party players are peripheral to the transaction and may add value, but their primary purpose is something other than the sought-after good or service. These third parties are more like the flier guy who walks around the parking lot while you shop and puts discount fliers for his car dealership on everyone's windshields.  (Wow, zero down, $169 a month?)  He's not stocking shelves or bagging your groceries at the grocery store, but is still a peripheral part of the whole grocery shopping experience. Customers' expectations for the third party here are likely different than those for the temp worker.  (What's maybe not obvious is that if you go to his dealership, the flier may inform him what kind of groceries you bought, and tracking cookies can be even more invisible than these fliers -- but that's a blog post for a different day.)

So how's this work online?  The first party on this blog is me: blog.sidstamm.com.  There's a second party here too, the folks who made my blog framework software.  They maintain the software (I'm too lazy), and I use it to publish my blog, but it all comes through on this same domain name.  When you read this, the two of us are working together with the goal of bringing you my thoughts.  There also happen to be a "G+ Share" button and search bar on the site, but they're third party: controlled by some other entity, served over a different domain, and only showing up here to augment your experience beyond the blog you seek.

So don't panic: the second parties are still there!  We just don't use the term much because they're so tightly integrated with first parties, that they usually appear the same.
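If you're curious how this looks mechanically, here's a toy sketch of the party classification. (This is a simplification: real browsers compare registrable domains, eTLD+1, not raw hostnames, and the URLs are just examples.)

```python
from urllib.parse import urlparse

def classify_party(page_url: str, resource_url: str) -> str:
    """Toy classifier: a resource served from the page's own domain is
    treated as first party (second parties blend in here, as described
    above); anything served from another domain is third party."""
    page_host = urlparse(page_url).hostname
    resource_host = urlparse(resource_url).hostname
    return "first/second party" if page_host == resource_host else "third party"

# The blog's own stylesheet vs. an embedded share button:
print(classify_party("http://blog.sidstamm.com/post", "http://blog.sidstamm.com/style.css"))
# first/second party
print(classify_party("http://blog.sidstamm.com/post", "http://plus.google.com/share.js"))
# third party
```

Note how the second party never shows up in the code at all -- from the browser's vantage point it's indistinguishable from the first party, which is exactly the point above.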

Wednesday, March 06, 2013

Who uses the password manager?


Who uses the password manager, and why? My colleague Monica Chew tries to answer these questions and more by measuring password manager use. 

Check out her blog post.

Thursday, December 27, 2012

what is privacy?

Often, when I find myself in a conversation about Privacy, there's a lack of clarity around what exactly we're discussing.  It's widely assumed that people who are experts on privacy all speak the same language and have the same goals.

I'm not so sure this is true.

This came up in a discussion with Jishnu yesterday, and we needed a common starting place.  So I'd like to take a little time to lay out what I'm thinking when I talk about Privacy, especially since I'm mainly focused on empowering individuals with control over data sharing and not so much on keeping secrets.
Privacy is the ability for an individual to have transparency, choice, and control over information about themselves.
At the risk of sounding too cliché, I'm gonna use a pyramid to explain my thinking.  There are three parts to establishing privacy:

First, an organization's (or individual's) collection, sharing and use of data must be transparent.  This is crucial because choice and control cannot be realized without honesty and fairness.

Second, individuals must be provided choice.  This means data subjects (those people whose data is being collected, used or shared) must be able to understand what's going to happen with their data and have the ability to provide dissent or consent.

Third, when it's clear what's happening and individuals have an understanding about what they want, they must be given control over collection, sharing or use of the data in question.

This means control depends on choice which depends on transparency.  You cannot make decisions unless you're given the facts.  You cannot make your desires reality unless you've decided what you want.

For the engineers out there (like me), these dependencies can be modeled as follows:
[Transparency] = Awareness of Data Practices
[Choice] = [Transparency] + Individual's Wants
[Control] = [Choice] + Organizational Cooperation
Control is the goal, but it requires Transparency and Choice to work -- as well as some additional inputs.  Privacy is the whole thing: all three pieces acting together with support from both data controllers and data subjects to empower individuals with a say in how their data is used.
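If you want to play with the model, here's a toy Python rendering of those three formulas. (The function and parameter names are mine, purely illustrative.)

```python
def transparency(data_practices_disclosed: bool) -> bool:
    # Transparency = awareness of data practices
    return data_practices_disclosed

def choice(transparent: bool, user_has_preference: bool) -> bool:
    # Choice = transparency + the individual's wants
    return transparent and user_has_preference

def control(has_choice: bool, org_cooperates: bool) -> bool:
    # Control = choice + organizational cooperation
    return has_choice and org_cooperates

# Without transparency, no later layer can hold:
print(control(choice(transparency(False), True), True))   # False
# With all three inputs present, the individual has control:
print(control(choice(transparency(True), True), True))    # True
```

The boolean chain makes the dependency explicit: flip transparency off and everything downstream fails, no matter how cooperative the organization is.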

The privacy perception gap is a symptom of ineffective transparency and choice; it is the result of peoples' inability to really understand what's going on so they have no chance to establish positions about what is okay.  When transparency and choice are built into a system, the gap shrinks and people have most of what they need to regain control over their privacy.

What is privacy to you?

Monday, March 12, 2012

making DNT easier for web sites

Jos Boumans has done some analysis about the effect of turning on Do Not Track in your browser, and his findings show that sites in general are slow to show that they support the feature.
"As it stands, only 4 out of 482 measured top 500 sites are actively responding to the DNT header being sent." (Link)
As a user, it's hard to tell if sites are honoring my Do Not Track request, and as a site developer, it might be a daunting task to hack up my back-end code.  The W3C Tracking Protection Working Group is working on helping with transparency and implementations, but in the meantime Jos has released his mod_cookietrack Apache module to make it easier for site owners to track their users' clicks in a respectful way -- right now.
The Apache module, mod_cookietrack, does all sorts of stuff like mod_usertrack, but one thing it does better is honor DNT; if a server using this module sees "DNT: 1" in an HTTP request, it replaces the tracking cookie with one that says "DNT" -- something that's not unique to a visitor.

Apparently it was a lot of work to get DNT supported properly in mod_cookietrack -- a native Apache module that performs well and is safe on multiple threads -- so thanks, Jos, for your hard work so that more organizations can support DNT on their web sites.
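The core DNT behavior is simple to sketch. Here's a hypothetical Python rendering of the idea (the real module is native Apache code, and these names are mine, not the module's):

```python
import uuid

def tracking_cookie_value(headers: dict) -> str:
    """Sketch of the DNT-honoring behavior described above: when the
    request carries "DNT: 1", issue a fixed, non-unique cookie value
    instead of a per-visitor identifier."""
    if headers.get("DNT") == "1":
        return "DNT"          # same value for everyone: nothing to track
    return uuid.uuid4().hex   # unique per-visitor identifier

print(tracking_cookie_value({"DNT": "1"}))  # DNT
```

Visitors without the header still get a unique ID for click tracking; visitors with "DNT: 1" all look identical to the server's logs.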


Sunday, February 26, 2012

Malware and Phishing Protection in Firefox

For a while, Firefox has included malware and phishing protection to keep our users safe on the web.  Recently, Gian-Carlo Pascutto made some significant improvements to our Firefox support for the feature, resulting in much more efficient operation and use of the Safe Browsing API for this protection.

Privacy in the Safe Browsing API

I want to take a little time to explain how this feature works and why I like it from a privacy perspective:  Firefox can check whether or not a web site is on the Safe Browsing blacklist without actually telling the API what the web site is called.

At a high level, using this API to find URLs on the "bad" list is like asking your friend to identify whether or not he likes things you show him through a dirty window.  Say you hold up an apple to the dirty window, and your friend on the other side sees a fuzzy image of what you're holding.  It looks round and red and pretty small, but he's not sure what it is.  Your friend looks at his list of things he doesn't like and says he likes everything like that except for plums and red tennis balls.  While he still does not know exactly what you're holding, you can know for sure he likes the apple.

More technically, this uses a hash function to turn web URLs into numbers.  Each number corresponds to exactly one URL.  For each site you visit, Firefox hashes the URL and sends the first part of the resulting number to the Safe Browsing API.  The API responds with any values on the list of bad URLs that start with the value it received.  When Firefox gets the list of "bad" site hash values matching that prefix, it checks whether the site's entire hash is in the list.  Based on whether or not it's there, Firefox can determine whether the URL is on the Safe Browsing blacklist.

Consider this hypothetical example of two sites and their (fake) hash values:

Site                       Hash Value
http://mozilla.com         1339
http://phishingsite.com    1350

When you visit http://mozilla.com, Firefox calculates the hash of the URL, which is 1339.  It then asks the Safe Browsing API what bad sites it knows about that start with "13".  The API returns a list of numbers including "1350".  Firefox takes that list, notices that 1339 (http://mozilla.com) is not in the list, so the site must be okay. 

If you repeat the same procedure with http://phishingsite.com, the same prefix "13" is sent to the API, and the same list of bad sites (including 1350) is returned.  In this case, however,  the site's hash is "1350" so Firefox knows it's on the list of bad sites and gives you a warning.

For you techies and geeks out there: yeah, I'm glossing over a few protocol details, but the gist is that you don't need to tell Google exactly where you browse in return for the bad-stuff blocking. 
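For the techies: here's a toy Python sketch of that prefix-matching flow. (The real protocol uses SHA-256 over canonicalized URLs with 32-bit prefixes and more moving parts, like the local prefix list; the domain names, prefix length, and function names here are all made up for illustration.)

```python
import hashlib

def url_hash(url: str) -> str:
    # Stand-in for the protocol's hash of a canonicalized URL
    return hashlib.sha256(url.encode()).hexdigest()

def is_blacklisted(url: str, api_lookup) -> bool:
    full = url_hash(url)
    prefix = full[:8]                 # only a short prefix leaves the browser
    candidates = api_lookup(prefix)   # server: full bad hashes matching that prefix
    return full in candidates         # final check happens locally

# Fake server side for demonstration:
BAD = {url_hash("http://phishingsite.example")}
def fake_api(prefix):
    return [h for h in BAD if h.startswith(prefix)]

print(is_blacklisted("http://phishingsite.example", fake_api))  # True
print(is_blacklisted("http://mozilla.org", fake_api))           # False
```

The privacy property is visible in `is_blacklisted`: the server only ever sees `prefix`, which matches many possible URLs, so it learns little about where you actually browsed.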

Keeping the Safe Browsing Service Running Smoothly

Google hosts the Safe Browsing service on the same infrastructure as many of their other services, and they need to ensure that our users aren't blocked from accessing the malware and phishing blacklists as well as make sure they invest in the right resources to keep the service operating well.  One of the mechanisms they need for performing this quality-of-service assurance is a cookie, so the first request Firefox makes to the Safe Browsing API results in the setting of a Google cookie.

I know that not everyone likes that cookie, but Google needs it to make sure their service is working well so I've been working with them to ensure that they can use it for quality of service metrics but not track you around the web.  The most straightforward way to do this is to split the Firefox cookie jar into two: one for the web and one for the Safe Browsing feature.  It's not there yet, but with a little engineering work, in a future version of Firefox that cookie will only be used for Safe Browsing, and not sent with every request to Google as you browse the web.

The cookie can be turned off entirely if you disable third party cookies in Firefox.  When you turn off third party cookies, even if the cookie has been previously set, your browser will not send the Google cookie -- unless you visit a Google website. You can also turn off malware and phishing protection, but I really don't recommend it.

Making "Safer Browsing"

While Firefox has been using Safe Browsing for a while, Google has started experimenting with a couple of new features in Safe Browsing for additional malware and phishing filtering.  These features are quite new, and it's not yet clear how effective they are or what percentage of my browsing history will be traded for the improvement.  Both involve sending whole URLs to Google, and departing from Firefox's current privacy-preserving state requires evidence of a significant gain in protection. When Google measures and shares how much gain is seen in their pilot deployment in Chrome, we can take a deeper look and consider whether these new features are worth it.

For now, Firefox users are getting a lot of protection for very little in return and there does seem to be good reason for Google to use cookies with Safe Browsing.  We are always looking out for things we can do to give Firefox users both the best of privacy and security.

Monday, December 19, 2011

seat belts and airbags

As much as I like giving users choice and control, bombarding people with too many options makes using software painful.  This is why it is important to consider both the defaults and the flexibility of all the privacy-impacting features we roll out -- the airbags and seat belts of the software industry.  Not everyone who cares about privacy knows how to configure Firefox (or any software) to precisely suit their needs; those who both care about their privacy and know how to configure software that precisely are often Privacy Professionals, or they work in a related field.


A couple of weeks ago, I was inspired by some stuff Dr. John Guelke said to segment my thinking on privacy into two efforts: the privacy-feature seat belts and airbags.  He approaches privacy as something driven by social norms, whereas until recently, I mostly thought about it as a subjective choice about what I want with my identity and data.  In fact, both of these perspectives are important, and they must work together to create the most positive effect for the Web.  There are distinctly different reasons to provide certain safe defaults than there are to give users ultimate control: the airbags can help protect everyone†, and the seat belts†† will protect those who know to use them.

Choice and Control (Seat Belts).

It's crucial that people have all they need to maintain complete control over their experiences online, or the web becomes controlled solely by the businesses on it and not the people who live in it.  Increasingly, people are performing more of their everyday activities online, and they deserve to be as much a part of those activities as they would be in the real world.  This is why I care so much about giving people who want it control over each bit of how they see and interact with the web.  This is the reason Do Not Track was built into Firefox, and this is why software allows people to change how the browser handles cookies. These features empower users to control their experiences online.

Users enable and deploy these features on their own.  Firefox doesn't turn on Do Not Track by default, because it's a seat belt.  People choose if they want it or not.

Social Norms (Airbags).

There are expectations, consistently held by a society or group, about how people will behave.  These social norms dictate expected behavior and, though they do not strictly limit behavior, can be seen as social defaults.  These norms change and fluctuate with the society, but you could say they are precisely what any member of the society expects to happen.

The Web is a society of sorts, and people carry over their social norms from physical interactions with people to those interactions with web sites and corporate entities online.  Here is where the social norms very importantly dictate the defaults of how a web browser should work (and frankly, how web sites should work too).  People expect a site to remember small bits of information about their interactions, such as what is in their shopping cart, and this is why cookies are enabled by default, like an airbag.  People do not expect to disclose their precise location to web sites, and that is why Geolocation is not activated by default.



Directing Efforts.


There are two driving forces here that dictate the best paths forward for inventing and building privacy features into the web: social norms, and individual choice.   It's easier to listen to the cry or predict a need for individual choice; we can create any feature as if it were a seat belt -- features that users may or may not want to enable.  The harder direction is understanding and following social norms, or what people expect without request or action.  These are hard because they differ not only with time, but also across different groups of people.  Technologists like me can more easily understand our subculture's values and build those into our software.  We have to be careful, though, since society as a whole may not have the same values as our smaller group of software developers.  We as an industry need to focus on what benefits all as a sensible default, and that may be completely opposite of what we computer geeks think.

We need a better understanding of social norms and how they relate to people's data online.  That understanding can help map norms to the defaults we build into all the web-oriented software we make.  Everything else should then be an optional feature, like a seat belt.

Though you may not use all of Firefox's privacy features, I do recommend wearing your seat belt.  Really.  It could save your life.  :)


--- Footnotes:---

† = Okay, so the analogy breaks down since airbags aren't good for you unless your seat belt is engaged, but the gist is that you don't have to think about the airbags.

†† = And sure, "everyone" knows about seat belts, but pretend for this argument that they don't and the feature is more like those glass-breaking hammers that you can buy to free you from a submerged car; you can buy and use them, but they don't usually come with your car.

Wednesday, November 09, 2011

Firefox won't activate DNT by default

Firefox isn't gonna turn on DNT by default because then DNT won't work.
"As Do Not Track picks up steam and standardization is well underway in the W3C, people have begun asking, "If Do Not Track is so good for the web, why don't you turn it on by default?"
"Frankly, it becomes meaningless if we enable it by default for all our users. Do Not Track is intended to express an individual's choice, or preference, to not be tracked. It's important that the signal represents a choice made by the person behind the keyboard and not the software maker, because ultimately it's not Firefox being tracked, it's the user. "

(Link)
Sure, we could run a few engagement campaigns to inform people about the option, but we won't make that decision for our users.

Edit (9-Nov-2011 @ 11:24): 

There are three different signals to consider in broadcasting the user's preferences for tracking:

1. User says they accept tracking
2. User says they reject tracking
3. User hasn't chosen anything

We're defaulting to state 3: we don't know what the user wants, so we're not sending any signals to servers.  The signal being sent should be the user's choice, not ours, so we don't broadcast anything until they've chosen what to send.
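In code, the tri-state logic is tiny. Here's a sketch (function and parameter names are mine):

```python
def dnt_header(opt_out):
    """Tri-state DNT signal: only send a header when the user
    has actually made a choice."""
    if opt_out is True:      # state 2: user rejects tracking
        return {"DNT": "1"}
    if opt_out is False:     # state 1: user accepts tracking
        return {"DNT": "0"}
    return {}                # state 3: no choice made -- send nothing

print(dnt_header(None))   # {}
print(dnt_header(True))   # {'DNT': '1'}
```

The important part is the last branch: `None` (no choice) produces no header at all, rather than a default of "0" or "1" picked by the software maker.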

Wednesday, September 28, 2011

Measuring Progress

In the reasonably short time that I've been involved with Mozilla, we've made amazing changes to the web and our Firefox browser. We've seen the adoption of HTML5, open video, and a slew of other features. This means the web is even more complex, and by extension, so is Firefox.

Sometimes Firefox doesn't perform as well as it should, and it's hard for us to understand why.

Enter the Telemetry project. Our performance team, led by Taras Glek, developed a feature that lets us measure performance-related stuff as you use Firefox. Starting with the version of Firefox released today, you have the opportunity to opt-in to send us some of these statistics. They're not tied to you, and we will take a look at the data in aggregate to see if there are widespread problems in the various bits of Firefox's plumbing.

I posted a note about this over on The Mozilla Privacy Blog. As we deployed this feature, we worked hard to make sure that our users will have choice and control of the data they send us. This involves a few bits of critical thinking: first, we have to make sure you're not surprised by this.  Second, we make sure that we're only collecting what we need to make Firefox better. Third, our practices must be transparent (and not just open source -- we try to be clear about what we collect).  Fourth, we make sure that you know you're sending us this data and can make it stop if you want.

We wrote down how telemetry works for you to read (if you want) and how the feature lines up with our promises put forth in the Privacy Operating Principles that we've been working with for a while now. As we add new probes to telemetry to see where to improve Firefox, we'll be cataloging those as well, including risk analysis for stuff that's remotely private.  We'll never collect stuff like your address or credit card numbers through this system (that'd be weird), but we may want to know which of the add-ons you're using that are slowing down Firefox.

This risk analysis and privacy review are the things we plan to do with new Firefox features that involve your data; whether or not we collect anything, it's important that we live up to the operating principles we've put out, and Telemetry is an early example of how we plan to keep you in control.

Thursday, September 22, 2011

Careful... pixel-data access is pointy

Robert O'Callahan writes:
Some Web applications require the pixel data of Web pages to be exposed to Web applications [...] There are some pretty big security implications here. The biggest problem is cross-origin information leakage.
He's right on. There are a bunch of subtle risks in haphazardly implementing pixel-data access. The one near and dear to my heart is the risk of defeating what we shipped a while back to stop CSS- and JavaScript-based history sniffing. Draw links, read colors, defeat fix. Not good. We can't just lie to the content script attempting to access the rendered data -- once it's drawn, it's really hard to figure out what's a link and what isn't. So what do we do? Take a look at this and the other issues with implementing pixel-data access over on his blog. If you've got ideas, we're all ears.

Thursday, September 08, 2011

mozilla privacy blog

Hey, good news! Mozilla has a privacy blog where we will be blogging about all sorts of privacy stuff.

I'll continue to write about it here, but check it out for more reading. The latest post by Alex Fowler announces a field guide to DNT that discusses what to do when you receive the header, and what some other sites are already doing. He also talks about how many people have turned on DNT.

Check it out: Mozilla Privacy Blog

Thursday, July 14, 2011

on unifying site behavior and consent

Let's face it, the users of your ShinyNewWebSite(beta) will never know exactly how it works. Perhaps that's by design (look, it's magic!), perhaps that's simply because they're not computer programmers, but this is the reality.

So there's this problem: how do I get users to provide informed consent to use my shiny new data collection web site? I want to do some really cool stuff, but I want the users of the site to know what's happening and feel in control.

This is hard. I think there's a ton of value in data mining and personalization, and it's not reasonable to expect users to comprehend the entire process of how their data is collected and used. We do however need to empower users to manage trust for the organizations who collect and use their data, and one way to do this is to get them closer to understanding what happens.

Here's one way I've been thinking about this: on one end of a spectrum are the users; they have values and want to assert protection over some of their data. On the other end of the spectrum are the web sites; they produce value from the users' data and want to be honest and compliant with users' desires. Right now there's often a huge gap between what users want and what sites actually do with their data. We need to shrink this gap.


I've talked about this gap from a user's perspective before (the privacy perception gap) and ultimately this gap leads to shock and discomfort. In Firefox 4, we deployed DNT as one feature to help shrink the gap from the user's informed-consent side.


Anything we can do to make users' preferences and privacy choices obvious shrinks the gap from the user side, but we should work from the site's side as well, and hope the efforts meet somewhere in the middle. What else can we do to help bring site behavior into the user's mental model of what's going on?


We need something new to improve upon privacy policies. We need something more objective than self-explanation. We need something empirical that can be measured, digested and shown to users. We need technology that makes it easier for people to peer into the opaque bits of the web and see what data is collected and how it's used. While it's not realistic to expect a silver bullet that makes all users instantly understand how sites work, we should still try hard; let's throw all the ideas that we have out on the table and approach this gap with as many tools as we have to try and shrink it.

Monday, June 20, 2011

Markus Jakobsson: why we must ask "why" in designing secure systems

On Wednesday (June 22 @ 12pm PDT), Markus Jakobsson will talk about some of the security research he's been working on. Join us to hear some stories and learn how and why to build in security from the ground up! Details below. This will be streamed to the world on air mozilla, and hosted at the Mozilla HQ in Mountain View.

22-June-2011 EDIT: The video is available here.

Where: Mozilla HQ (10-forward) and Air Mozilla (marketing site)
Speaker: Dr. Markus Jakobsson
Subject: "Why we must ask 'why' in designing secure systems"

Summary: Computer security has a tradition of responding to the symptoms of problems without taking the time to ask what the sources of the problems are. Markus will argue that this approach has made the user authentication experience frustrating and vulnerable; enabled phishing; and created a tremendous market for malware. Markus will give examples of some well-known approaches that were designed without a thorough understanding of the underlying problems and limitations, and how they could be redesigned and improved. In particular, he will cover web and app spoofing; mobile passwords; and bullet-proof detection of malware.

Join Us!

Thursday, May 26, 2011

managing your relationship with sites

This post is co-written by Margaret Leibovic and Sid Stamm. This article is cross-posted on Margaret's blog.

As the web becomes more and more complex (and AWESOME), it's important that you can manage your relationship with the variety of sites out there. Sure, Firefox 4 has a Page Info dialog that lets you control what a web page is allowed to do, including whether you want to let it store data on your computer, access your location information, open pop-up windows, and on and on. However, this dialog only lets you manage your relationship with the one page you're currently visiting, not the entire set of sites you visit on the web.

We think it's important to be able to manage your whole relationship with web sites in an intuitive way, and that's why we're excited to show you what we've started working on: a site-based permissions interface.


This feature is still experimental, but you can give it a shot. In the future, we'll be putting some polish on the UI, adding more controls like "always access securely" (HSTS), and hopefully giving you a better view of what a site knows about you. We also want to integrate this permissions manager with the site identity block in the location bar for quick and easy access.

Try it out! Grab a Firefox nightly build and try out the feature by typing about:permissions into the location bar.

(Credit: thanks to Jennifer Boriss, Mehdi Mulani and Margaret for all the hard work on this project.)

Friday, May 20, 2011

Do Not Track -- Now on Firefox Mobile!

Since we first announced our implementation of the Do Not Track HTTP header, we've seen an amazing amount of support from trade groups, and even other browser makers. To build on our view that you should have control of how you're tracked not only on desktop computers but also on your mobile devices, we're excited to announce that the latest beta of Firefox for Android also includes this feature.

You can enable Do Not Track in the latest beta of Firefox for Android through an easy-to-find switch in the preferences (see image to the right), and websites will see exactly the same signal that Do Not Track-enabled desktop browsers send. Every time Firefox loads a web page, image, or advertisement, it includes a "DNT: 1" signal that tells the entire web you don't want to be tracked.
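On the receiving end, honoring this signal starts with simply reading the header. Here's a minimal sketch, assuming a Python WSGI setting (the function name `tracking_opt_out` is mine, purely for illustration, not part of any spec):

```python
def tracking_opt_out(environ):
    # WSGI exposes the "DNT" request header as environ["HTTP_DNT"].
    # A value of "1" means the user has asked not to be tracked.
    return environ.get("HTTP_DNT") == "1"

# A request carrying "DNT: 1" signals the opt-out; its absence does not.
print(tracking_opt_out({"HTTP_DNT": "1"}))  # True
print(tracking_opt_out({}))                 # False
```

What a site does once it sees the signal, of course, is the hard part; the header only carries the preference.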

The web on your phone should be the same web as on your desktop, so to provide this consistency we've put the exact same Do Not Track feature in both the desktop and mobile versions of Firefox.

Try it out today! Grab the latest beta of Firefox for Android and turn on the feature. If you visit my blog from Firefox (mobile or desktop) with Do Not Track turned on, the widget below will glow green just for you.

Sunday, May 15, 2011

Clearing Flash cookies using Firefox

Back in March, we shipped Firefox 4 with a feature that sends a signal to plugins like Flash and Silverlight when you clear your cookies. Adobe has announced that starting with Flash Player version 10.3, they'll be listening to the signal! This is exciting, because clearing your Flash cookies is as easy as clearing regular cookies in this latest version of Flash.

Here's when Firefox 4 tells Flash Player version 10.3 to delete LSOs (Flash cookies):
  • When you clear all your cookies in Firefox using "clear recent history" [how-to link]
  • When you choose "forget about this site" in your library (history) window [how-to link]
  • When you quit Firefox, if you have Firefox configured to clear your cookies automatically upon exit [how-to link]

Chrome and Internet Explorer are also supporting this behavior, so this is fantastic news for everyone's privacy on the web!

More reading for techies:

Thursday, March 24, 2011

Force-TLS compatible with Firefox 4!

I've updated the Force-TLS Firefox Add-On to work with the newest version of Firefox! Force-TLS version 3.0.0 should work in Firefox 3.0 and newer.

So what does this mean? Well, HTTP Strict-Transport-Security (HSTS) is implemented in Firefox 4, and it's a pretty similar technology to Force-TLS. In fact, the two are nearly identical, except there's no UI for HSTS in Firefox 4. If you install Force-TLS, you'll get a UI plus the built-in HSTS support, which is implemented much more completely and efficiently than any add-on could manage. A while ago, I blogged about an experimental add-on called STS-UI that adds a UI to HSTS; Force-TLS shows essentially the same user interface, but I wanted to keep both the back-end for Firefox 3.x and the front-end for all versions of Firefox in a single add-on.
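For the curious, the HSTS mechanism itself is just an HTTP response header: a site served over HTTPS tells the browser to upgrade all future requests to that host. Here's a small Python sketch that builds the header value (the helper name `hsts_header` is mine, for illustration only):

```python
def hsts_header(max_age_seconds, include_subdomains=False):
    # Build the Strict-Transport-Security response header a site sends
    # (over HTTPS only) to tell browsers to use TLS for future visits.
    value = "max-age=%d" % max_age_seconds
    if include_subdomains:
        value += "; includeSubDomains"
    return ("Strict-Transport-Security", value)

# A one-year policy that also covers subdomains:
print(hsts_header(31536000, include_subdomains=True))
```

A browser that supports HSTS remembers the policy for `max-age` seconds and silently rewrites `http://` requests to that host into `https://`.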

So what's new in version 3.0.0?
  • Smarter: The invisible bits of Force-TLS are restructured to use the custom HTTPS-upgrading and header-noticing bits for earlier Firefox versions but use the HSTS back-end built into Firefox 4 when it's available.
  • Better: A few bugs in the user interface were fixed.
  • Organized: I've moved the code into an open source repository.

I've got a list of enhancements queued up for the next version of Force-TLS, but not a whole lot of time to work on it. If you'd like to help make Force-TLS more awesome, send an email to forcetls@sidstamm.com.

Previously:

Wednesday, March 09, 2011

Do-Not-Track Standardization has Begun

Thanks to a lot of hard work by Jonathan Mayer and Arvind Narayanan (the donottrack.us guys at Stanford), we've submitted a draft specification to the IETF for review. We've proposed a specification that not only outlines what the DNT HTTP header should look like, but also how servers can honor a user's choice for privacy.

This draft is just the beginning: there will be much debate, but we want you to be part of it.

More:

Monday, February 07, 2011

Get your DNT header for older versions of Firefox!

When we recently announced our intent to add a do not track header to Firefox, we focused on how it will probably be available in a future version -- Firefox 4.0. But what about people who would prefer to use previous versions of Firefox? How can you get the HTTP header into version 3.6, or even earlier versions?

Though we recommend using our latest and greatest product, there's an add-on you can install to add the "DNT: 1" header to older versions: Universal Behavioral Advertising Opt-Out (a.k.a. UBAO). The name is a mouthful, but its operation is simple: installing this add-on is like ticking the checkbox in new versions of Firefox to send a "DNT: 1" HTTP header with all requests your browser sends out.

There are other add-ons that send the header! AdBlock Plus and NoScript send the header too, but if you don't want the extra features that come along with those add-ons, UBAO is for you.

Monday, January 31, 2011

Try out the "Do Not Track" HTTP header

Last week, I blogged about some of the work we're doing at Mozilla to help people better control how they're tracked as they browse the web. The basic idea was to give people a universal "opt out" of tracking for behavioral advertising. A Firefox user will be able to check a box in the preferences dialog, and then an HTTP header will be sent with all HTTP requests so servers know the user wants to opt out.

Well, I'm excited to report that we've landed the first iteration of this feature into Firefox nightly builds (the pre-beta builds that are rough around the edges)! If you'd like to try out the feature, grab a nightly build; I must warn you though, these nightlies are not as stable as the beta releases.

In the build, to enable the feature, open the preferences pane and select the advanced tab. Tick the box that says "Tell sites I do not want to be tracked" and start browsing.




Every connection your browser makes to download content will send a signal that says "don't track me." Literally, it looks like this to servers:
DNT: 1
Note: this is different from the initial experiment that used "X-Do-Not-Track" and my original post last week that said "Tracking-Preference: do-not-track"; it's both shorter and very precise. The researchers at donottrack.us are also recommending this syntax.
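If you want to see the same header on the wire without digging through the browser, here's a quick Python sketch that attaches it to a request by hand (Firefox does this automatically when the preference is on; `example.com` stands in for any site):

```python
import urllib.request

# Mimic what a DNT-enabled browser does: add "DNT: 1" to a request.
req = urllib.request.Request("http://example.com/", headers={"DNT": "1"})

# urllib stores header names capitalized, so look it up as "Dnt".
print(req.get_header("Dnt"))  # prints: 1
```

Any server-side script can then read the `DNT` request header and adjust its behavior accordingly.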

I encourage you to try out the test builds, or if you'd like to wait for a more stable version, wait for an upcoming beta release with the feature in it. We do not anticipate that sites are looking for the signal yet, so you probably won't notice a difference as you browse the web. I'm hoping to have a demo site available shortly that will give you an example of what types of changes you might see using this feature -- and when I do, I'll post a link here.