Wednesday, February 24, 2016

Keep the Back Door Locked

Sure, I want to stop bad guys, but requiring Apple to make their phones vulnerable is not the right approach.  The current public discourse on the Apple vs. FBI "open the phone" case is really a conflated mix of two issues: (1) the FBI wants help to crack open a known criminal's phone and (2) whether or not Apple should be required to create law enforcement backdoors into their products.  Let's separate the two issues.

(1) Should the FBI be given access to Farook's iPhone contents?  

I think most people agree the FBI should have the data.  Bill Gates made a statement on these issues on Tuesday morning, and made his position pretty clear: "Apple has access to the information, they're just refusing to provide the access, and the courts will tell them whether to provide the access or not." If Apple does indeed have access to the information, the right way forward is for the FBI to seek a court order requiring Apple to release the information.  This isn't new.  In fact, the FBI has a court order in hand.

Does Apple really have access to the data on Farook's iPhone?  Is it able to comply with the court order?  Tim Cook's messaging indicates they do not, and Apple is pushing back, saying that they will not comply with the part of the court order that goes beyond this simple data turnover: the part that says "give us a tool to help hack the phone quickly."   This is where the discourse gets concerning: this tool could be considered a backdoor.  It's not as egregious as "give us a master key", but it certainly bypasses the iPhone owner's security mechanism in a way not intended by the manufacturer.

(2) Should Apple create a tool for the FBI that enables easy hacking of Farook's phone?  

If you read the court order carefully, the court asks Apple to provide a tool that will only work on the specific subject device -- not all iPhones.  The specific ask reads:
"Apple shall assist in enabling the search of a cellular telephone, [make, model, serial number, IMEI] on the Verizon Network, (the "SUBJECT DEVICE") pursuant to a warrant of this court by providing reasonable technical assistance to assist law enforcement agents in obtaining access to the data on the SUBJECT DEVICE."
This reads like a natural extension of "hand over the contents of this phone."   It sounds quite reasonable, much like ordering a building superintendent to unlock a specific criminal's apartment for a search.  This doesn't immediately seem different from the first issue (give us access to Farook's data).

But it is.

If you keep reading, the court orders Apple to provide the FBI with a tool to override some of the security features in the phone.  Ordinarily, Apple would not have a fast way to "unlock the apartment." They have provided people with secure phones that keep data private from everyone, including from Apple.   But in this case the court is ordering Apple to do the FBI's job: engineer something new to reverse their phone's security.  This is like asking the door lock manufacturer to make you a lock-picking machine for the apartment's lock.  Doesn't the FBI usually just pick the lock or kick in the door?  The courts don't compel the lock maker to make a lock-picking machine to do it.
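
To put numbers on why those security features matter, here's a back-of-the-envelope sketch in Python.  The ~80ms per-guess cost is the key-derivation time Apple has publicly described for iOS; the rest is mine, so treat it as an illustration, not a measurement.

    # Rough brute-force timing for a numeric iPhone passcode, assuming
    # the ~80ms-per-attempt key derivation described in Apple's iOS
    # security documentation.  Illustrative numbers, not a measurement.

    GUESS_COST_SECONDS = 0.08  # assumed hardware cost per passcode attempt

    def worst_case_hours(digits: int) -> float:
        """Time to try every possible numeric passcode of this length."""
        attempts = 10 ** digits
        return attempts * GUESS_COST_SECONDS / 3600

    for digits in (4, 6):
        print(f"{digits}-digit passcode: ~{worst_case_hours(digits):.1f} hours, worst case")

    # 4 digits: ~0.2 hours (about 13 minutes); 6 digits: ~22 hours.
    # With the escalating retry delays and the ten-guess auto-erase
    # intact, the same search is effectively impossible -- and those are
    # exactly the protections the order asks Apple to strip away.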

There's urgency here to get everyone to pitch in to stop terrorism, and I understand this concern. Irrational bad guys are really scary.   But this order is not routine!  It is an ask to do something very abnormal to aid law enforcement.  Assume it's a good idea: we all want to help the FBI unlock the phone, and so Apple makes the tool.  Now what?  Can such a tool be constructed so it cannot be used on other iPhones?  In my opinion, and in Apple's, it cannot.  If the tool is not truly limited to this individual device, its existence threatens the security of all iPhone users.  If it fell into the wrong hands, it could be used by criminals or even the terrorists the FBI is trying to stop.

Where does this lead?

This neutralizes any benefits from encryption, and not just on iPhones.  For a moment, let's assume this tool can be safely created to work against only one device.  The requests wouldn't stop at Apple's compliance with a single phone.  The court order could lead to companies being required to defeat their own customers' security any time law enforcement requests it.  This is a very dangerous precedent.  Nick Weaver's analysis is frightening: imagine if device manufacturers had to do "the dirty work" of hacking into their own products at any time.  Currently, law enforcement must do the often substantial work to break a device, but if they can just get a court order and require someone else to put in the effort, that removes any incentive to investigate carefully before pursuing a subject's data.

While the order itself does not create a technological backdoor, it creates one through legal precedent. Apple is right to appeal and ask the courts to think a bit harder about this order. Encryption is the only thing that provides any sort of confidentiality on the wild web, and we should not throw it away to decrypt one phone.  I'm not sure where it is, but we need to draw the line somewhere between "never help the FBI catch terrorists" and "make it trivial to defeat your customers' security" -- a balance where law enforcement officers' hands are not tied and encryption still works for the good guys.

Sunday, January 31, 2016

shake it up

Much has happened on the web in the last two and a half years, and of course I've been too wrapped up in it to say anything here.

It's time to change that.

A little over a year ago I returned to my roots.  I've always had my sights set on teaching, and it's fantastic to be back in a place so dedicated to education.  We need to alter the Web's course and the best place for me to contribute to this goal is by preparing our future software designers and entrepreneurs to lead the charge.

I'll admit that I got a bit tired of trying to change the Web.  It's exhausting working on an initiative that has the whole force of online marketing against you.  Skeptics and those who rely on the opacity of data trading alike are a powerful force.

But I haven't stopped caring.  Admittedly I backed off, but some (with more stamina than I) haven't.  On January 20, Andreas Gal posted his thoughts with a very optimistic headline: "Brendan is back to save the web".  He does a great job of making a point that I've been trying to articulate for years: the economic incentives online are stuck and we need a new player to emerge with new incentives and a fresh look at how to make the Web an economy again instead of a giant data mine.  Andreas makes a clear case that all the current web browsers cost money to produce, but nobody pays for them directly; instead they are indirectly kept aloft by whatever makes the Web go round.

Right now that's almost exclusively advertisements.

Somehow the web has found itself in an advertising monoculture, where advertising is frequently aggravating and at best unnecessary bloat in an ecosystem that should not be bogged down by distractions from generative content.  The web should be a place vibrant with commerce and innovation: clear of distractions and rich with creativity.  People should not be sold on what they want; they should instead be able to make what they want.

But the question remains: how do we get the web from where it is to where it should be?

We need economic incentives that encourage Web sites without this bloat.  We need a Web that is a generative "makers" platform.  The Web should be an ecosystem where businesses get rewarded for their content, not for their willingness to plaster solicitations all over their digital presence.  This is what Brendan wants to do.

Brave is his attempt to steer the web in the right direction.  His vision is to make a web browser that is a true user agent again, and not a self-serving or web-serving agent.  People should be molding the web instead of the web molding its people.

I agree with Brendan that the web should not be an ad-blocking fight; it should be a place for novel and generative things.  But we can't just turn our backs on ads.  I'm intrigued by Brave's new approach and excited to see where Brendan and his team take us.

Thursday, November 21, 2013

facebook privacy in a graphic

One reason I deleted my Facebook account was what I perceived to be their shampoo-instruction-style erosion of privacy: they seemed to be changing things, reacting to public outrage, rolling back a little bit, then repeating.  Slowly, they appeared to be drawing in users and strong-arming them into letting go of some control over their personal data by providing an ultimatum: "keep on top of our policy changes or leave".  I understand they need to make money, but surely there's a fairer way than filing down people's control to extract more personal info.

Credit: Matt McKeon

Browsing around today, I stumbled across Matt McKeon's infographic showing the evolution of Facebook's privacy policies and Kurt Opsahl's related timeline of changes.  The data only goes through 2010 (perhaps their M.O. has changed since then), but it's a striking graphic and worth a look.  It would be fascinating if construction of such an infographic timeline were automated and it could be deployed for other sites out there.

Monday, March 18, 2013

what ever happened to the second party?

I got into a terminology discussion with Brendan this week, and it turns out there's general confusion over these labels we give to businesses on the web: first party and third party.  This topic has been debated ad nauseam in the TPWG, but I want to share my thoughts on what it means in the context of cookies and the general browser/webpage point of view.

The Marx Brothers have a take on this in A Night at the Opera, when they get into a discussion of parties and contracts, and I think they're on to something; but on the web these party labels probably come from business-focused contractual engagements. So which party am I?  I'm not a party (though that sounds like fun).

In the case of cookies, the party labels are all about contractual arrangements to produce a good or service for you. You, the surfer, are not part of the contract, but you benefit from a system of first, second and third party businesses.

Here, the first party is the business you seek to engage.  The second party in question is a contractor doing business explicitly for the first party. For example, when you visit the grocery store, someone might help bag your groceries. Maybe they're a temp worker and are actually employed by a different company, but their sole job is to do what the grocery store asks, and they do their work in the store. In these cases there's a direct business agreement between first (business) and second (contractor) parties to produce one service or good. For all intents and purposes, the bagger seems like part of the store.
Second-party cookies don't make much sense in the online cookie context, since to the web browser there's no technical distinction between first-party and second-party web software. The assumption here is that second parties operate within the "umbrella" of the first party, so the cookies are part of the first-party offering.

Any third-party players are peripheral to the transaction; they may add value, but their primary purpose is something other than the sought-after good or service. These third parties are more like the flier guy who walks around the parking lot while you shop and puts discount fliers for his car dealership on everyone's windshields.  (Wow, zero down, $169 a month?)  He's not stocking shelves or bagging your groceries at the grocery store, but he is still a peripheral part of the whole grocery shopping experience. Customers' expectations for the third party here are likely different than those for the temp worker.  (What's maybe not obvious is that if you go to his dealership, the flier may inform him what kind of groceries you bought -- and tracking cookies can be even more invisible than these fliers -- but that's a blog post for a different day.)

So how's this work online?  The first party on this blog is me.  There's a second party here too: the folks who made my blog framework software.  They maintain the software (I'm too lazy), and I use it to publish my blog, but it all comes through on this same domain name.  When you read this, the two of us are working together with the goal of bringing you my thoughts.  There also happen to be a "G+ Share" button and search bar on the site, but they're third party: controlled by some other entity, served over a different domain, and only showing up here to augment your experience beyond the blog you seek.
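
To make that concrete, here's a minimal sketch in Python of the only test the browser can actually perform: comparing the domain serving a resource against the domain of the page you're visiting.  The helper and the example URLs are mine, and real browsers compare registrable domains (eTLD+1) using the Public Suffix List rather than this naive hostname check.

    from urllib.parse import urlparse

    def is_third_party(page_url: str, resource_url: str) -> bool:
        """Naive first-vs-third-party test: compare hostnames only."""
        return urlparse(page_url).hostname != urlparse(resource_url).hostname

    # The blog and its (second-party) framework serve from one domain,
    # so the browser sees a single first party:
    print(is_third_party("http://blog.example.com/post",
                         "http://blog.example.com/framework.js"))       # False

    # The share button is served from somewhere else entirely:
    print(is_third_party("http://blog.example.com/post",
                         "http://social.example.net/share-button.js"))  # True

Notice that "second party" appears nowhere in the code: the framework folks are indistinguishable from me because we share a domain.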

So don't panic: the second parties are still there!  We just don't use the term much because they're so tightly integrated with first parties that they usually appear to be the same.

Wednesday, March 06, 2013

Who uses the password manager?

Who uses the password manager, and why? My colleague Monica Chew tries to answer these questions and more by measuring password manager use. 

Check out her blog post.

Thursday, December 27, 2012

what is privacy?

Often when I find myself in a conversation about Privacy, there's a lack of clarity around what exactly we're discussing.  It's widely assumed that people who are experts on privacy all speak the same language and have the same goals.

I'm not so sure this is true.

This came up in a discussion with Jishnu yesterday, and we needed a common starting place.  So I'd like to take a little time to lay out what I'm thinking when I talk about Privacy, especially since I'm mainly focused on empowering individuals with control over data sharing and not so much on keeping secrets.
Privacy is the ability for an individual to have transparency, choice, and control over information about themselves.
At the risk of sounding too cliché, I'm gonna use a pyramid to explain my thinking.  There are three parts to establishing privacy:

First, an organization's (or individual's) collection, sharing and use of data must be transparent.  This is crucial because choice and control cannot be realized without honesty and fairness.

Second, individuals must be provided choice.  This means data subjects (those people whose data is being collected, used or shared) must be able to understand what's going to happen with their data and have the ability to provide dissent or consent.

Third, when it's clear what's happening and individuals have an understanding about what they want, they must be given control over collection, sharing or use of the data in question.

This means control depends on choice, which depends on transparency.  You cannot make decisions unless you're given the facts.  You cannot make your desires reality unless you've decided what you want.

For the engineers out there (like me), these dependencies can be modeled as such:
[Transparency] = Awareness of Data Practices
[Choice] = [Transparency] + Individual's Wants
[Control] = [Choice] + Organizational Cooperation
Control is the goal, but it requires Transparency and Choice to work -- as well as some additional inputs.  Privacy is the whole thing: all three pieces acting together with support from both data controllers and data subjects to empower individuals with a say in how their data is used.
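
As a toy illustration of that dependency chain, here's a minimal sketch in Python (the class and field names are mine): each level can only be constructed from the one below it, so Control is literally unreachable without Choice, and Choice without Transparency.

    # Toy model of the pyramid: constructor signatures enforce the
    # dependencies, so there is no way to build Control without first
    # building Choice, and no Choice without Transparency.

    class Transparency:
        def __init__(self, disclosed_practices: list[str]):
            self.practices = disclosed_practices  # what the org admits to doing

    class Choice:
        def __init__(self, transparency: Transparency, wants: list[str]):
            # decisions: disclosed practices the individual does not want
            self.opt_outs = [p for p in transparency.practices if p not in wants]

    class Control:
        def __init__(self, choice: Choice, org_cooperates: bool):
            # control is only effective if the organization honors it
            self.honored = choice.opt_outs if org_cooperates else []

    t = Transparency(["sell browsing history", "store email address"])
    c = Choice(t, wants=["store email address"])
    print(Control(c, org_cooperates=True).honored)  # ['sell browsing history']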

The privacy perception gap is a symptom of ineffective transparency and choice; it is the result of people's inability to really understand what's going on, so they have no chance to establish positions about what is okay.  When transparency and choice are built into a system, the gap shrinks and people have most of what they need to regain control over their privacy.

What is privacy to you?

Thursday, October 11, 2012

ownership and transparency in social media

Les writes:
"You don’t own the spaces you inhabit on Facebook. You’re enjoying a party at someone’s house, and you barely know the guy. In fact, your content is the currency that pays for the booze (ie. the privilege of using their servers). That’s why it’s free-as-in-beer: You’ve given them what you post, instead of money. That’s valuable stuff, if they can ever quite figure out how to sell it."  [link]
It's not completely fair to expect FB users to realize that the data about them that they so generously contribute to FB no longer belongs to them.  My hypothesis is that many people feel that no matter who has facts about you and prints them, they're still *yours*.  After all, companies have trademarks; can't things about me be mine and reserved for me?

On a small scale, the monetization of facts about me is not surprising; I give an interview to a magazine, they print it, it gets syndicated, no surprise.  On a large scale (lots of data collected, frequently) I think people lose track of with whom they are communicating and get immersed in the task at hand.  Is it my FB friends, or is it FB, who is helpfully telling my friends things?  This system is flexible, crazy, complex, shiny and distracting!  Can I use it to video chat with my friends?  That's neat.  Oh, geez, I forgot FB is in the middle of all this communication...

People who sign up for FB are not signing up to contribute their life to this stranger throwing a party.  They sign up assuming it is a tool they can use to communicate with their friends; it is a machine they've "bought" (for free, heh) to help them communicate.  Nobody reads the terms of service.  Nobody reads the privacy policy.  People accept them because other people have, and then read only what their friends write.  Many are in denial or do not realize that what they contribute to the site is just that: a contribution.

I think there is shared responsibility here; consumers should be a little bit wary--but this isn't their area of expertise.  As such, the site operator also has a duty to be more forthcoming with what's going on.  My communications tool is supposed to be a communications tool.  If you market it as a "free communications tool that sells my data," I am better informed than if it's just marketed as a "communications tool."

Tuesday, May 22, 2012

Adding Privacy to Apps Permissions

I've been thinking about app permission models, especially as we're working on B2G and need a way for users to safely and thoughtfully manage the apps on their device.  Most permission models strive to do precisely one thing: allow apps to ask for consent to use features.

The problem I have with "allow/deny" consent to use features is that there's no clear usage intention attached to the access; a mirror app that asks for access to your camera probably doesn't need to store data it gets from the sensor, but it could go so far as to store video (and perhaps send it off to some remote server to spy on you).

If apps can explain their usage intentions, consumers of the apps have more context and can make better decisions about the permissions they grant.  While the software probably can't make sure the usage intentions are actually followed, this commitment to customers puts the app developers on the hook for doing the right thing.
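
As a sketch of what that could look like, here's a Python mock-up of a permission prompt that carries the developer's declared intention alongside the allow/deny question.  The function and field names are entirely hypothetical -- this is not any actual B2G permission API.

    # Hypothetical sketch: a permission request that surfaces the app's
    # declared usage intention, so the prompt can show *why* access is
    # wanted, not just *what* is wanted.  Names are invented.

    def prompt_user(app_name: str, permission: str, intention: str) -> bool:
        print(f'"{app_name}" requests access to: {permission}')
        print(f"Declared intention: {intention}")
        return input("Allow? [y/N] ").strip().lower() == "y"

    granted = prompt_user(
        app_name="Mirror",
        permission="camera",
        intention="Show a live preview only; no recording, no uploads.",
    )

    # The platform can't verify the stated intention, but recording and
    # displaying it puts the developer on the hook for honoring it.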

Head on over to the discussion in dev-webapps where I've posted my thoughts, and let us know what you think.

Edit (23-May-2012 / 9:33 PDT): Google Groups (the public archive) did not pick up my original post to the group.  If you're not subscribed via NNTP or the dev-webapps mailing list, you can see my original post in the quoted text of the first reply by Paul.