Tuesday, September 06, 2022


This past weekend, we lost one of the world's most noble technologists, Peter Eckersley.  Peter and I regularly collaborated when I was working on HTTPS and privacy at Mozilla, and he worked enthusiastically and tirelessly to hold everyone, including himself, to a higher standard.  He was instrumental in many of the EFF's projects like Privacy Badger, Panopticlick, and HTTPS Everywhere, all while nudging the Mozilla contributors to keep working on privacy and HTTPS; but possibly his biggest contributions were his work on Certbot and Let's Encrypt.

He didn't do this alone.  He was a leader, a technical idealist who captured people who often disagreed with him and channeled their energy to make the web safer.

When you use a web browser, the odds are good that Peter's influence has helped secure the bits you're transferring from a server and protect you from surveillance.  If you believe information should be free, your data is yours, and everyone should have secure messaging, please consider how you can help empower the everyday person to resist surveillance online.  He fought the good fight but met an adversary that took him away from his work far too early.  He was 43.  I will miss you, pde.

Friday, December 03, 2021

Reckless Software Construction

Dear software developers: please please please stop making disposable software.

Demand for rapid development of software has radically changed how we develop, deploy, and maintain software over the past decade or so.  Storage is cheap.  Bandwidth is plentiful.  Agile and rapid iteration allow fast-to-market delivery.  All of this leads to fast, inefficient, messy, and reckless Rube Goldberg software construction.

I want to be clear: I am not trash-talking all software engineering via component composition (where devs take pre-built things and slap them together to make a new thing).  This is useful for many service-based products or web applications where the software runs on the software company's own hardware.  This strategy is over-used and is not the tool for every job.  I like some quality DevOps and do love a good web app, but the problem is when software is thrown together without care and diligence.

The equation is simple: moving fast and cutting corners leads to increased risk.

Agile dev + fast-to-market + prefab software = unnecessary risk

Stop it.  Be an engineer, not MacGyver.

"Our CISO will take care of this."

No.  Your CISO doesn't want to remediate your security problems after-the-fact.  Your CISO is there to help guide your development efforts down a safe path, NOT to compensate for your desire to "ship fast, fix later".  You should be working with your security team to avoid unnecessary risk and not call them in as a clean-up crew when you screw it up.  

Fast to market is great, but success will be short-lived if you are also fast-to-hacked.

Security teams are increasingly tasked with securing what some call a Software Supply Chain: the third-party, often unknowing, contributors to a software product.  This chain of components (or frankly, pre-packaged #includes, libraries, or services) creates risk that needs to be at least monitored.  All of the third-party things your devs use to deploy your product will potentially fall victim to the next SolarWinds, hacked NPM package, or Kaseya.

Take Inventory

The provenance of software has become so out of control that even the White House is urging an inventory of all the stuff (a Software Bill of Materials, or SBOM) you use to make your things.  Cybersecurity risk is high not only because attackers know hacking your stuff is profitable, but also because your stuff is too complicated.  There, I said it: the software you created is unnecessarily complicated. This call for SBOM is a cry for help; you need to at the very least know what your stuff is made of.
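Knowing what your stuff is made of can start very small.  As a sketch only (a real SBOM would use a standard format such as SPDX or CycloneDX, and would cover far more than one language's packages), here's what the first step of taking inventory might look like for a Python deployment, using only the standard library:

```python
# Minimal dependency-inventory sketch: list the installed Python
# distributions a deployment actually ships with. This is NOT a real
# SBOM (no standard format, no transitive build tooling, no hashes);
# it only illustrates the "know what your stuff is made of" step.
from importlib import metadata


def inventory():
    """Return a sorted list of (name, version) pairs for installed packages."""
    return sorted(
        (dist.metadata["Name"] or "UNKNOWN", dist.version)
        for dist in metadata.distributions()
    )


if __name__ == "__main__":
    for name, version in inventory():
        print(f"{name}=={version}")
```

Even a crude listing like this tends to be eye-opening: the number of third-party components in a "simple" product is usually far larger than anyone on the team would have guessed.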

Take Responsibility

This need for SBOM is a symptom of developers' rush to pass the responsibility for fair and safe software onto others.  So you make a thing out of other people's things... it is YOUR responsibility to vet the components.  Your customers are relying on you to do this!  Requiring an SBOM helps someone else vet your product, but it's not foolproof and is not for end users.  Software developers must take responsibility for securing their work and not blindly build trash out of other people's trash.  SBOM is not a panacea; it is a stopgap until we regain control over the junk we build.

Take Ownership

Ultimately when your software is compromised, your customers will blame you.  Will you pass the blame to your third-party suppliers?  You should be building something you're proud of and something you UNDERSTAND.  The right tool for the job is not always the fanciest/newest tool.  You do NOT need to use docker containers to build a solitaire app on iOS. 

Take Action

The answer is not "more software."  The answer is not "hire a consultant."  The answer is not SBOM. The answer is not DevSecOps (though that can help).

The first thing you should do is seriously think about your engineering practices... like really think about it.  If you develop client-side software (apps), evaluate the components and tools you use to build your app.  Maybe throwing it together quickly got it out the door, but how much effort are you spending maintaining it?  Are you building the right thing?  Phone apps are bloated.  Why is my email app 270MB?  Why does this email app update EVERY DAMN WEEK?  These are questions I'm asking your developers, and you should be asking them too.  Doing your job well requires more than just tossing the product over the wall and going out for a beer.

Consider factoring out third-party components when you only use a small part of the component.  Do you really need jQuery and d3.js and Angular?  Do you need all those NPM packages?  An elegant, well-crafted architecture results in a more stable and secure product.  Take pride in your design and not just your speed.  Focus.  Keep it as simple as you can; any amount of early investment in understanding and reducing the complexity of your software's provenance will minimize the chance that someone else's mistake ruins your product.

Monday, June 03, 2019

Data Transparency: Revisited

With this academic year behind me, I had some time to think and reflect on what has brought me where I am.  In 2012, I had an opportunity to participate in what served to energize me and mold what ended up being my short but intense career in web tech. 

In April 2012, the Wall Street Journal hosted a Data Transparency Weekend to bring together technologists, activists, journalists, and inventors from across the globe to work on the lack of transparency about how people are watched, profiled, and targeted online.  This NYC event connected me with allies and mentors who all were doing amazing work in online privacy.  From the wicked smart reporting (and organizing) of Julia Angwin and Jennifer Valentino-DeVries, to folks like Dan Kaminsky, Danny Weitzner, Ed Felten, Alessandro Acquisti, Chris Hoofnagle, Peter Eckersley, and of course Ashkan Soltani, whose work has repeatedly inspired my own.  I cannot hope to name all the amazing folks who were there, and thinking back it was incredible we all ended up in the same spot at the same time.  To all of you who spent this time with me: thank you.

Since 2012, the level of conversation about online data and tracking has skyrocketed, but not much has changed about how I'm tracked and targeted online; if anything, it has intensified.

Our everyday lives are being invaded by what I consider multi-modal harassment: we are all barraged with unwanted solicitations, phone calls, text messages, emails, and display advertisements.  We're being force-fed product info for things some "annoying brother" thinks we want.  Some of us pay for TV and our programming still gets interrupted with ads.  The web is full of "free" sites, where you pay by allowing them to force-feed you ideas of other things you are supposed to want.  We end up spending money externally (on things we don't actively seek) instead of on the things we seek and intentionally use.  To me, it feels like I'm always walking up the street to my favorite pub, but against the wind of a severe storm with driving rain of advertisements at my face.

We also face a data collection problem: organizations like Amazon, Facebook, Google, and others are accumulating massive profiles of data on individuals.  They are often "innovative" (reckless) with the data once it is collected.  Secondary use is commonplace for "experimentation", and can lead to unanticipated violations of consumers' privacy.  Tools keep emerging that enable more collection and processing of data.  Facial recognition (FR) and machine learning (ML) are new shiny things that everyone wants, and while they do interesting things, the far-reaching impact and, in fact, the degree of "correctness" of these tools is not widely understood.  ML and FR can be used to make dumb decisions (like connecting porn stars to social media profiles or widespread tracking used for assigning a "social credit score" in China).

How do we know who to trust with information about us when it's not obvious when they're collecting that data?  How can we even make choices about who *to* trust? This is outright information theft when someone observes and measures me for their own un-shared profit.  It's worse when there are no incentives to protect gathered data since it exposes the data subjects like me to unanticipated risk.

When Cyber becomes Physical

Our online presence is monitored and tracked in cyberspace using means that would not be tolerated in the physical world.  I'm not only concerned with the risk we're exposed to due to this collection, but as connected devices become so pervasive, tracking in physical space becomes much more feasible.  This crossing-over of collection from cyber- to physical- realms also brings with it all the risks of the online data free-for-all.

Most of this "innovation" in tracking and data warehousing is driven by marketing.  I used to ruthlessly argue that the right solution was a collaborative effort between marketing firms and consumers.  After having seen the rise and fall of Do Not Track, I no longer believe collaboration can happen.  I now realize that the incentives are all wrong: ad tech cares only about the bottom line and there is little cost in getting ad spreads in front of consumers.  This is wildly different from the physical world where space, audience, and construction costs pressure ad firms to be much more careful about who and how they target.

Where do we go from here?  

We need to solve two giant problems: advertisement inundation and reckless data collection. 

For years I've heard promises that we'll see better ads (and fewer of them!) if we allow firms to track us.  Neither of these has happened; I get crap calls and see crap ads online, and my eyes and ears are tired of it.  Consumers need more signal and less noise.  Disconnect, callblock, and adblockfast (all promising brainchildren of Brian Kennish) help attenuate noise.  While it's disappointing that we need stuff like this, noise attenuation should be a feature of *all* mainstream software, not an add-on.  Consumers also need to get over the fear of directly paying for web sites and services like we happily do with phone apps.  For those of us who want free stuff and will tolerate ads (like with broadcast TV), a fairer marketing scheme is critical, but that requires some big changes like the ones Brave is trying out.

In the long run, we need to think more about the consumers of our technology and train responsible engineers and architects.  These are the people who *must* consider the societal impacts of their work beyond what is fastest or generates the quickest dollar, which includes being transparent and respectful with how we treat people's data.  If we are to involve consumers in the trade of their data, the first necessary step remains the same as it was in 2012: Data Transparency.  Let's start with that.

Wednesday, February 24, 2016

Keep the Back Door Locked

Sure, I want to stop bad guys, but requiring Apple to make their phones vulnerable is not the right approach.  The current public discourse on the Apple vs. FBI "open the phone" case is really a conflated mix of two issues: (1) the FBI wants help to crack open a known criminal's phone and (2) whether or not Apple should be required to create law enforcement back-doors into their products.  Let's separate the two issues.

(1) Should the FBI be given access to Farook's iPhone contents?  

I think most people agree the FBI should have the data.  Bill Gates made a statement on these issues on Tuesday morning, and made his position pretty clear: "Apple has access to the information, they're just refusing to provide the access, and the courts will tell them whether to provide the access or not." If Apple does indeed have access to the information, the right way forward is for the FBI to seek the court's order requiring Apple to release the information.  This isn't new.  In fact, the FBI have a court order in hand.

Does Apple really have access to the data on Farook's iPhone?  Is it able to comply with the court order?  Tim Cook's messaging indicates they do not, and Apple is pushing back saying that they will not comply with the part of the court order that goes beyond this simple data turnover: the part that says "give the FBI a tool to help us hack the phone quickly."   This is where the discourse gets concerning; this tool could be considered a backdoor.  It's not as egregious as "give us a master key", but it is certainly bypassing the iPhone's owner's security mechanism in a way not intended by the manufacturer.

(2) Should Apple create a tool for the FBI that enables easy hacking of Farook's phone?  

If you read carefully into the court order, the court asks Apple to provide a tool that will only work on the specific subject device -- not all iPhones.  The specific ask reads:
"Apple shall assist in enabling the search of a cellular telephone, [make, model, serial number, IMEI] on the Verizon Network, (the "SUBJECT DEVICE") pursuant to a warrant of this court by providing reasonable technical assistance to assist law enforcement agents in obtaining access to the data on the SUBJECT DEVICE."
This reads like a natural extension of "hand over the contents of this phone."   It sounds quite reasonable, much like ordering a building superintendent to unlock a specific criminal's apartment for a search.  This doesn't immediately seem different from the first issue (give us access to Farook's data).

But it is.

If you keep reading, the court orders Apple to provide the FBI with a tool to override some of the security features in the phone.  Ordinarily, Apple would not have a fast way to "unlock the apartment." They have provided people with secure phones that keep data private from everyone, including from Apple.   But in this case the court is ordering Apple to do the FBI's job: engineer something new to reverse their phone's security.  This is like asking the door lock manufacturer to make you a lock-picking machine for the apartment's lock.  Doesn't the FBI usually just pick the lock or kick in the door?  The courts don't compel the lock maker to make a lock-picking machine to do it.

There's urgency here to get everyone to pitch in to stop terrorism, and I understand this concern. Irrational bad guys are really scary.   But this order is not routine! It is an ask to do something very abnormal to aid law enforcement.  Assume it's a good idea: we all want to help the FBI unlock the phone, and so Apple makes the tool.  Now what?  Can such a tool be constructed so it cannot be used on other iPhones?  In my opinion, and in Apple's, it cannot.  The existence of this tool threatens the security of all iPhone users when it is not limited to this individual device. If the tool fell into the wrong hands, it may be used by criminals or even the terrorists the FBI is trying to stop.  

Where does this lead?

This neutralizes any benefits from encryption, and not just on iPhones.  For a moment, let's assume this tool can be safely created to work against only one device.  The requests wouldn't stop at Apple's compliance with a single phone.  The court order could lead to companies being required to defeat their own customers' security any time law enforcement requests it.  This is a very dangerous precedent.  Nick Weaver's analysis is frightening: imagine if device manufacturers had to do "the dirty work" of hacking into their own products at any time.  Currently, law enforcement must do the often substantial work to break a device, but if they can just get a court order and require someone else to put in the effort, that removes any incentive to investigate carefully before pursuing a subject's data.

While the order itself does not create a technological backdoor, it creates one through legal precedent.  Apple is right to appeal and ask the courts to think a bit harder about this order.  Encryption is the only thing that provides any sort of confidentiality on the wild web, and we should not throw it away to decrypt one phone.  I'm not sure where it is, but we need to draw the line somewhere between "never help the FBI catch terrorists" and "make it trivial to defeat your customers' security" -- a balance where law enforcement officers' hands are not tied and encryption still works for the good guys.

Sunday, January 31, 2016

shake it up

Much has happened on the web in the last two and a half years, and of course I've been too wrapped up in it to say anything here.

It's time to change that.

A little over a year ago I returned to my roots.  I've always had my sights set on teaching, and it's fantastic to be back in a place so dedicated to education.  We need to alter the Web's course and the best place for me to contribute to this goal is by preparing our future software designers and entrepreneurs to lead the charge.

I'll admit that I got a bit tired of trying to change the Web.  It's exhausting working on an initiative that has the whole force of online marketing against you.  Skeptics and those who rely on the opacity of data trading alike are a powerful force.

But I haven't stopped caring.  Admittedly I backed off, but some (with more stamina than I) haven't.  On January 20, Andreas Gal posted his thoughts with a very optimistic headline: "Brendan is back to save the web".  He does a great job of making a point that I've been trying to articulate for years: the economic incentives online are stuck and we need a new player to emerge with new incentives and a fresh look at how to make the Web an economy again instead of a giant data mine.  Andreas makes a clear case that all the current web browsers cost money to produce, but nobody pays for them directly; instead they are indirectly kept aloft by whatever makes the Web go round.

Right now that's almost exclusively advertisements.

Somehow the web has found itself in an advertising monoculture where advertising is frequently aggravating and at best an unnecessary bloat in an ecosystem that should not be bogged down by distractions from generative content.  The web should be a place vibrant with commerce and innovation: clear of distractions and rich with creativity.  People should not be sold on what they want, they should instead be able to make what they want.

But the question remains: how do we get the web from where it is to where it should be?

We need economic incentives that encourage Web sites without this bloat.  We need content that is a generative "makers" platform.  The Web should be an ecosystem where businesses get rewarded for their content and not the willingness to plaster solicitations all over their digital presence.  This is what Brendan wants to do.

Brave is his attempt to steer the web in the right direction.  His vision is to make a web browser that is a true user agent again, and not a self-serving or web-serving agent.  People should be molding the web instead of the web molding its people.

I agree with Brendan that the web should not be an ad-blocking fight, it should be a place for novel and generative things, but we can't just turn our backs on ads.  I'm intrigued by Brave's new approach and excited to see where Brendan and his team take us.

Thursday, November 21, 2013

facebook privacy in a graphic

One reason I deleted my Facebook account was what I perceived to be their shampoo-instruction-style erosion of privacy.  They seemed to be changing things, reacting to public outrage, rolling back a little bit, then repeating.  Slowly, they appeared to be drawing in users and strong-arming them into letting go of some control over their personal data by providing an ultimatum: "keep on top of our policy changes or leave".  I understand they need to make money, but surely there's a fairer way than filing down people's control to extract more personal info.

Credit: Matt McKeon http://mattmckeon.com/facebook-privacy/

Browsing around today, I stumbled across Matt McKeon's infographic showing the evolution of Facebook's privacy policies and Kurt Opsahl's related timeline of changes.  The data only goes through 2010 (perhaps their M.O. has changed since then), but it's a striking graphic and worth a look.  It would be fascinating if construction of such an infographic timeline were automated and it could be deployed for other sites out there.

Monday, March 18, 2013

what ever happened to the second party?

I got into a terminology discussion with Brendan this week, and it turns out there's general confusion over these labels we give to businesses on the web: first party and third party.  This topic has been debated ad nauseam in the TPWG, but I want to share my thoughts on what it means in the context of cookies and the general browser/webpage point of view.

The Marx Brothers have a take on this in A Night at the Opera when they get into a discussion of parties and contracts, and I think they're on to something, but on the web these party labels probably come from business-focused contractual engagements.  So which party am I?  I'm not a party (though that sounds like fun).

In the case of cookies, the party labels are all about contractual arrangements to produce a good or service for you. You, the surfer, are not part of the contract, but you benefit from a system of first, second and third party businesses.

Here, the first party is the business you seek to engage.  The second party in question is a contractor doing business explicitly for the first party. For example, when you visit the grocery store, someone might help bag your groceries. Maybe they're a temp worker and are actually employed by a different company, but their sole job is to do what the grocery store asks, and they do their work in the store. In these cases there's a direct business agreement between first (business) and second (contractor) parties to produce one service or good. For all intents and purposes, the bagger seems like part of the store.

Second-party cookies don't make much sense in the online cookie context since to the web browser, there's no technical distinction between the first-party or second-party web software. The assumption here is that second parties operate within the "umbrella" of the first party, so the cookies are part of the first party offering.

Any third party players are peripheral to the transaction and may add value, but their primary purpose is something other than the sought-after good or service. These third parties are more like the flier guy who walks around the parking lot while you shop and puts discount fliers for his car dealership on everyone's windshields.  (Wow, zero down, $169 a month?)  He's not stocking shelves or bagging your groceries at the grocery store, but is still a peripheral part of the whole grocery shopping experience. Customers' expectations for the third party here are likely different from those for the temp worker.  (What may not be obvious is that if you go to his dealership, the flier may inform him what kind of groceries you bought, and tracking cookies can be even more invisible than these fliers -- but that's a blog post for a different day.)

So how's this work online?  The first party on this blog is me: blog.sidstamm.com.  There's a second party here too, the folks who made my blog framework software.  They maintain the software (I'm too lazy), and I use it to publish my blog, but it all comes through on this same domain name.  When you read this, the two of us are working together with the goal of bringing you my thoughts.  There also happen to be a "G+ Share" button and search bar on the site, but they're third party; controlled by some other entity, served over a different domain, and only showing up here to augment your experience beyond the blog you seek.
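From the browser's point of view, the distinction above mostly comes down to comparing the site you're visiting with the host a cookie (or resource) belongs to.  As a toy sketch only: real browsers consult the Public Suffix List to find the registrable domain, and the last-two-labels heuristic below breaks on suffixes like "co.uk", but it illustrates the comparison:

```python
# Toy first- vs third-party classifier for the party labels above.
# Real browsers use the Public Suffix List to compute the registrable
# domain; taking the last two DNS labels is a simplification that
# fails on multi-label suffixes (e.g. "co.uk"). Sketch only.
from urllib.parse import urlparse


def registrable_domain(host: str) -> str:
    """Naively take the last two DNS labels as the 'site'."""
    labels = host.lower().rstrip(".").split(".")
    return ".".join(labels[-2:])


def cookie_party(page_url: str, cookie_host: str) -> str:
    """Classify a cookie's host relative to the page being visited."""
    page_host = urlparse(page_url).hostname or ""
    if registrable_domain(page_host) == registrable_domain(cookie_host):
        return "first-party"  # second parties on the same domain land here too
    return "third-party"


print(cookie_party("https://blog.sidstamm.com/post", "blog.sidstamm.com"))
# first-party
print(cookie_party("https://blog.sidstamm.com/post", "plus.google.com"))
# third-party
```

Note that the second party never shows up in this check at all: because it serves through the first party's domain, the browser literally cannot tell it apart from the first party, which is exactly why the label gets so little use online.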

So don't panic: the second parties are still there!  We just don't use the term much because they're so tightly integrated with first parties that they usually appear to be the same.

Wednesday, March 06, 2013

Who uses the password manager?

Who uses the password manager, and why? My colleague Monica Chew tries to answer these questions and more by measuring password manager use. 

Check out her blog post.