Privacy is an Inside Job

The Searls Wanigan, 1949
Ordinary people wearing and enjoying the world’s original privacy technology: clothing and shelter. (I’m the one on top. Still had hair then.)

Start here: clothing and shelter are privacy technologies. We use them to create secluded spaces for ourselves. Spaces we control.

Our ancestors have been wearing clothing for at least 170,000 years and building shelters for at least half a million years. So we’ve had some time to work out what privacy means. Yes, it differs among cultures and settings, but on the whole it is well understood and not very controversial.

On the Internet we’ve had about 21 years*. That’s not enough time to catch up with the physical world, but hey: it’s still early.

It helps to remember that nature doesn’t come with privacy. In the physical world we have to make our own. Same goes for the networked world. And, since most of us don’t yet have clothing and shelter in the networked world, we’re naked there.

So, since others exploit our exposure — and we don’t like it — privacy on the Internet is very controversial. Evidence: searching for “privacy” brings up 4,670,000,000 results. Most of the top results are for groups active in the privacy cause, and for well-linked writings on the topic. But most of the billions of results below that are privacy policies written by company lawyers and published because doing so is pro forma.

Most of those companies reserve the right to change their policies whenever they wish, by the way, meaning they’re meaningless.

For real privacy, we can’t depend on anybody else’s policies, public or private. We can’t wait for Privacy as a Service. We can’t wait for our abusers to get the clues and start respecting personal spaces we’ve hardly begun to mark out (even though they ought to be obvious). And we can’t wait for the world’s regulators to start smacking our abusers around (which, while satisfying, won’t solve the problem).

We need to work with the knitters and builders already on the case in the networked world, and recruit more to help out. Their job is to turn privacy into technologies we wear, inhabit, and choose, and that we use to signal to others what’s okay and what’s not.

The EFF has been all over this for years. So have many developers on the VRM list. (Those are the ones I pay the most attention to. Weigh in with others and I’ll add them here.)

The most widely used personal privacy technology today is ad and tracking blocking. More than 200 million of us now employ blockers in our browsers. The tools are many and different, but basically they all block ads and/or tracking at our digital doorstep. In sum this amounts to the largest boycott in human history.
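To make the “digital doorstep” idea concrete, here is a minimal sketch of the decision a blocker makes for each outgoing request. The blocklist entries and function names are illustrative assumptions, not taken from any particular tool or filter list.

```typescript
// A minimal sketch of doorstep filtering, as ad and tracking blockers do it conceptually.
// The blocklist here is illustrative; real tools ship curated lists with thousands of rules.

const blocklist: string[] = [
  "tracker.example.com",
  "ads.example.net",
];

// Decide at the "doorstep" whether an outgoing request should be allowed.
function allowRequest(requestUrl: string): boolean {
  const host = new URL(requestUrl).hostname;
  return !blocklist.some(
    (blocked) => host === blocked || host.endsWith("." + blocked)
  );
}

// The page itself loads; the third-party beacon does not.
console.log(allowRequest("https://news.example.org/story"));        // true
console.log(allowRequest("https://tracker.example.com/pixel.gif")); // false
```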

But there’s still no house behind the doorstep, and we’re still standing there naked, even if we’ve kept others from planting tracking beacons on us.

One of the forms privacy takes in the physical world is the mutual understanding we call manners, which are agreements about how to respect each other’s intentions.

Here at Customer Commons, we’ve been working on terms we can assert, to signal those intentions. Here’s a working draft of what they look like now:

[Image: user-submitted terms, first draft]

That’s at the Consent and Information Sharing Working Group (CISWG). Another allied effort is Consent Receipt.
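For a rough sense of what a machine-readable version of such terms might carry, here is a sketch of a consent-receipt-like record. The field names are illustrative assumptions for this post, not the fields of the Consent Receipt specification.

```typescript
// Illustrative shape for a receipt recording what an individual agreed to, and on what terms.
// Field names are assumptions for this sketch, not a published specification.

interface ConsentReceipt {
  receiptId: string;        // unique id the individual keeps as proof of the grant
  issuedAt: string;         // ISO 8601 timestamp
  individual: string;       // who granted consent (a pseudonymous id is fine)
  dataController: string;   // who received the data
  purposes: string[];       // what the data may be used for
  sharingAllowed: boolean;  // may it be passed to third parties?
  expires?: string;         // optional end date for the grant
}

const receipt: ConsentReceipt = {
  receiptId: "cr-0001",
  issuedAt: "2016-01-15T10:30:00Z",
  individual: "user-7f3a",
  dataController: "retailer.example.com",
  purposes: ["order fulfilment", "warranty service"],
  sharingAllowed: false,
  expires: "2017-01-15T00:00:00Z",
};
```

Held on the individual’s side, a record like this is the beginning of “clothing”: a small, portable assertion of what was agreed, usable both as a signal and as evidence.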

If you’re working on privacy in any way — whether you’re a geek hacking code, a policy maker, an academic, a marketer trying to do the right thing, or a journalist working the privacy beat — remember this: privacy is personal first, before anything else. If you’re not working on getting people clothing and shelter of their own, you’re not helping where help is needed most.

It’s time to civilize the Net. And that’s an inside job.

__________________

*If we start from the dawn of ISPs, graphical browsers, email and the first commercial activity, which began after the NSFNET backbone was decommissioned on 30 April 1995.


New Rules for Privacy Regulations

The Wall Street Journal has an informative conversation with Lawrence Lessig: Technology Will Create New Models for Privacy Regulation. Two points underlie the shift toward new models. First, the servers holding vast user databases are increasingly (and very cheaply) breached. Second, the value of the information in those databases is shifting toward something more aligned with VRM: use of the data, on a need-to-know basis. Lessig notes:

The average cost per user of a data breach is now $240 … think of businesses looking at that cost and saying “What if I can find a way to not hold that data, but the value of that data?” When we do that, our concept of privacy will be different. Our concept so far is that we should give people control over copies of data. In the future, we will not worry about copies of data, but using data. The paradigm of required use will develop once we have really simple ways to hold data. If I were king, I would say it’s too early. Let’s muddle through the next few years. The costs are costly, but the current model of privacy will not make sense going forward.
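A crude way to picture the shift Lessig describes is a service that asks the individual’s own store a narrow question at the moment of use and keeps nothing. The PersonalStore interface below is an illustrative assumption, not an existing API.

```typescript
// Sketch of "use the data, don't hold the data".
// PersonalStore stands in for a data source the individual controls; its shape is assumed.

interface PersonalStore {
  // Answers a narrow, purpose-bound question; the underlying record never leaves the store.
  ask(question: string, purpose: string): Promise<boolean | null>;
}

// Need-to-know: the service learns "over 18: yes or no", not the birth date,
// and holds nothing worth breaching afterwards.
async function checkOfAge(store: PersonalStore): Promise<boolean> {
  const answer = await store.ask("isOver18", "age-verification");
  return answer === true; // null means the individual declined to answer
}
```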

The challenge, notes Lessig, is “a corrupt Congress” that is more interested in surveillance than markets and doing business. Perhaps that isn’t a problem, according to an Associated Press poll (which has no bias, of course!):

According to the new poll, 56 percent of Americans favor and 28 percent oppose the ability of the government to conduct surveillance on Internet communications without needing to get a warrant. That includes such surveillance on U.S. citizens. Majorities both of Republicans (67 percent) and Democrats (55 percent) favor government surveillance of Americans’ Internet activities to watch for suspicious activity that might be connected to terrorism. Independents are more divided, with 40 percent in favor and 35 percent opposed. Only a third of Americans under 30, but nearly two-thirds 30 and older, support warrantless surveillance.

Right. After all, who needs business?

Electronic Health Records and Patient-Centric Design

CIO’s story Why Electronic Health Records aren’t more usable offers an interesting perspective on the current (improved?) state of affairs in medical care records. From the article:

The American Medical Association in 2014 issued an eight-point framework for improving EHR usability. According to this framework, EHRs should:

  • enhance physicians’ ability to provide high-quality patient care
  • support team-based care
  • promote care coordination
  • offer product modularity and configurability
  • reduce cognitive workload
  • promote data liquidity
  • facilitate digital and mobile patient engagement
  • expedite user input into product design and post-implementation feedback.

Nevertheless, it does not appear that EHR vendors are placing more emphasis on UCD. The Office of the National Coordinator for Health IT requires developers to perform usability tests as part of a certification process that makes their EHRs eligible for the government’s EHR incentive program. Yet a recent study found that, of 41 EHR vendors that released public reports, fewer than half used an industry-standard UCD process. Only nine developers tested their products with at least 15 participants who had clinical backgrounds, such as physicians.

Note that this situation is not due to a lack of user-centric efforts to make medical records more useful. Indeed there are several efforts underway, including HealthAuth, Kantara’s Healthcare ID Assurance Working Group, Patient Privacy Rights, the HEART working group’s efforts with OAuth and UMA, and more. As the article notes, there are regulatory complications as well as crazy-complicated workflow requirements imposed by the software designers/vendors. We need a shift in focus here.
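As one hedged illustration of what patient-centric access could look like in code, here is a sketch of a patient issuing a scoped, time-limited grant that a clinic’s system must present before reading part of a record. It is a loose analogy to the delegation that UMA-style approaches enable, not the HEART, OAuth, or UMA specifications themselves; all names here are assumptions.

```typescript
// Illustrative sketch of patient-controlled, scoped access to a health record.
// Structure and names are assumptions for this example, not any standard's API.

interface AccessGrant {
  patientId: string;
  grantedTo: string;       // e.g. a clinic's identifier
  scope: string[];         // which parts of the record may be read
  expiresAt: number;       // epoch milliseconds
}

// The patient (or software acting for the patient) issues the grant.
function issueGrant(patientId: string, grantedTo: string, scope: string[], ttlMs: number): AccessGrant {
  return { patientId, grantedTo, scope, expiresAt: Date.now() + ttlMs };
}

// The record holder checks the grant before releasing anything.
function mayRead(grant: AccessGrant, requester: string, section: string): boolean {
  return (
    grant.grantedTo === requester &&
    grant.scope.includes(section) &&
    Date.now() < grant.expiresAt
  );
}

// Example: allergies and medications are readable for a week; nothing else was granted.
const grant = issueGrant("patient-42", "clinic.example.org", ["allergies", "medications"], 7 * 24 * 3600 * 1000);
console.log(mayRead(grant, "clinic.example.org", "allergies"));   // true
console.log(mayRead(grant, "clinic.example.org", "lab-results")); // false
```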

Mozilla has a cute video on the open, privacy-protecting web

Check it out here at The Web We Want: An Open Letter:

And the note Mozilla posted with the video:

Our right to a free and open Internet has been under threat lately. The NSA — btw, that stands for the National Security Agency, which has the fancy responsibility of analyzing and acting upon security data — has gotten into the habit of spying on Americans with no justification (including 12 spies who were using NSA tools to spy on their significant others). No, I’m not kidding.

The FCC — btw, that stands for the Federal Communications Commission, which is supposed to regulate and protect our communications channels — just made it easier for big companies to control the speed at which you are allowed to access particular websites. For example, your Internet company (e.g., Comcast or Verizon) could turn Internet access into a tiered pay system. So instead of being like a public utility, where everyone gets the same amount of water or electricity, Verizon could give Netflix faster access for a fee, while the smaller start-up that wants to compete and can’t afford it would get slower access.

The Internet has become one of the most important resources in our lives. It’s a shared resource that all of us take part in. Government spying on it and corporate interference in it are probably not things we want for the future. So Mozilla had some children voice concern for their own future. Because it’s important. What kind of web do you want?

On Bringing Manners to Markets

Privacy in the physical world has been well understood and fairly non-controversial for thousands of years. We get it, for example, with clothing, doors, curtains and window shades. These each provide privacy by design, because they control visibility and access to our private places and spaces.

The virtual world, however, is very young, dating roughly back to 1995, when the first graphical browsers and ISPs came along. Thus, on the scale of civilization’s evolution, the Net is not only brand new, but in its infancy (the stage in life when it’s okay to go naked and pee and crap all over the place). On the Net today, manners are almost completely absent. We see this, in a strange and mundane way, in corporate and government obsessions with gathering Big Data from consumers and citizens, mostly without their knowledge or conscious permission.

Companies today are moving budget to the Chief Marketing Officer (a title that didn’t exist a decade ago), so she or he can hire IBM, SAP or some other BigCo to paint million-point portraits of people, with a palette of pixels harvested by surveillance, all so they can throw better marketing guesswork at them.

This isn’t new in marketing. It’s just an old practice (data-fed junk mail) that has fattened on Big Data and Big Fantasy. As a result we’re all drowning in guesswork, most of which is off the mark, no matter how well-understood we might be by the Big Data mills of the world.

Normally we would look to government to help us comprehend, guide and control infrastructures on which we utterly depend (e.g., electricity, gas, water, sewage treatment, roads and bridges). But no one entity, including government, can begin to comprehend, much less monitor and regulate, the wild and wooly thing the Net has become (even at its lower layers), especially when so much of what we do with it happens inside giant black or near-black boxes (Google, Facebook, Twitter, et al.). And, thanks to Edward Snowden, we now know that the U.S. government itself — via the NSA and who knows what else — is doing the same thing, and also muscling private sector companies to cooperate with it.

But that’s a problem endemic to what Gore Vidal called the “national security state”, and plain old market forces won’t have much influence on it. Democratic and political ones will, but they’re not on the table here.

At Customer Commons, our table is the marketplace, and our role in it as customers. Whatever else we do, it can’t hurt to recognize and expose practices that are just plain rude.

We are not fish and advertising is not food

This is how the Internet looks to the online advertising business today:

[Image: too many fish in the bowl]

This is how they approach it:

[Image: feeding the fish]

And this is the result:

[Image: the mess left after the feeding]

Advertising is a huge source of the “data pollution” Fred Wilson talked about at LeWeb a few weeks ago. (See here, starting at about 23 minutes in.)

What’s wrong with this view, and this approach, is the architectural assumption that:

  1. We are consumers and nothing more. Fish in a bowl.
  2. The Net — and the Web especially — is a container.
  3. Advertisers have a right to target us in that container. And to track us so we can be targeted.
  4. Negative externalities, such as data pollution, don’t matter.
  5. This can all be rationalized as an economic necessity.

Yet here is what remains true, regardless of the prevailing assumptions of the marketing world:

  1. We are not fish. Rather, as Cluetrain put it (in 1999!), “we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.”
  2. The Net was designed as a wide open space where all the intelligence that matters is at its ends, and each of us sits (stands, walks, drives) at one.
  3. Even if advertisers have a legal right to target us, their manners are terrible and doomed for correction.
  4. Negative externalities matter. A lot. As Fred said in his talk, we eventually dealt with the pollution caused by industry, and we’ll deal with it in the virtual world as well.
  5. The larger economic necessity is for a well-functioning marketplace. We’ll get that online once free customers prove more valuable than captive ones.

The key is to replicate online the experience of operating as a free and independent customer in the physical world.

For example, when you go into a store, your default state is anonymity. Unless you are already known by name to the people at the store, you are nameless by default. This is a civic grace. There is no need to know everybody by name, and to do so might actually slow things down and make the world strange and creepy. (Ask anybody who has lived in a surveillance state, such as East Germany before it fell, what it is like to be followed, or to know you might be followed, all the time.) We haven’t yet invented ways to be anonymous online, or to control our anonymity. But that’s a challenge, isn’t it? Meaning it is also a market opportunity.

We’ve lived in a fishbowl long enough. Time to get human. I guarantee there’s a lot more money coming from human beings than from fish whose only utterances are clicks.

Data Privacy Legal Hack-A-thon

Customer Commons is supporting the Data Privacy Legal Hackathon, and board member Mary Hodder is hosting the Bay Area event. There are also NYC and London locations. Please join us if you are interested:

Data Privacy Legal Hackathon 2014

This is an unprecedented year for documenting our loss of privacy, and never before have we so needed to stand up and team up to do something about it. In honour of Privacy Day, the Legal Hackers are leading the charge, inspiring a two-day international Data Privacy Legal Hackathon. This is no ordinary event. Instead of talking about creating privacy tools in theory, the Data Privacy Legal Hackathon is about action! A call to action for tech & legal innovators who want to make a difference!

We are happy to announce a Data Privacy Legal Hackathon and invite the Kantara Community to get involved and participate. We are not only hosting a Pre-Hackathon Project to create a Legal Map of consent laws across jurisdictions; the CISWG will also be posting a project for the Consent Receipt Scenario that is documented on the ISWG wiki.

The intention is to hack Open Notice with a Common Legal Map to create consent receipts that enable ‘customisers’ to control personal information. If you would like to get involved in the hackathon, show your support, or help build the consent receipt infrastructure, please get in touch right away with Mark (dot) Lizar (at) gmail (dot) com or Hodder (at) gmail (dot) com, or join the group pages linked below.

Across three locations on February 8th & 9th, 2014, get your Eventbrite Tickets Here:

* New York City * London, UK * San Francisco *

http://legalhackers.org/privacyhack2014/

This two-day event aims to mix the tech and legal scenes with people and companies that want to champion personal data privacy, connecting entrepreneurs, developers, product makers, legal scholars, lawyers, and investors.

Each location will host a two-day “judged” hacking competition with a prize awarding finale, followed by an after-party to celebrate the event.

The Main Themes of the Hackathon Are:

  • Crossing the Pond Hack
  • Do Not Track Hack
  • Surveillance & Anti-Surveillance
  • Transparency Hacks
  • Privacy Policy Hack
  • Revenge Porn Hack

Prizes will be awarded:

  • 1st Prize:  $1,000
  • 2nd Prize:  $500
  • 3rd Prize: $250

There are pre-hackathon projects and activities. Join the Hackerleague to participate in these efforts and list your hack.

Sponsorship Is Available & Needed

Any organization or company seeking to show active support for data privacy and privacy technologies is invited to get involved.

  • Sponsor: prizes, food and event costs by becoming a Platinum, Gold or Silver Sponsor
  • Participate: at the event by leading or joining a hack project
  • Mentor: projects or topics that arise for teams, and share your expertise.


Contact NYC sponsorship: Phil Weiss email or @philwdjjd

Contact Bay Area sponsorship: Mary Hodder – Hodder (at) gmail (dot) com – Phone: 510 701 1975

Contact London sponsorship: Mark Lizar – Mark (dot) Lizar (at)gmail (dot) com – Phone: +44 02081237426 – @smarthart

Omie Update (version 0.2)

We’re overdue for an update on the Omie Project, so here goes.

To re-cap:

  • We at Customer Commons believe there is room for, and a need for, a device that sits firmly on the side of the individual in their role as a customer or potential customer.
  • That can and will mean many things and iterations over time, but for now we’re focusing on getting a simple prototype up and running, using existing, freely available components that don’t lock us into any specific avenues downstream.
  • Our role is to demonstrate the art of the possible, catalyse the development project, and help define what it means to ‘sit firmly on the side of the customer’.

We’ve been working away behind the scenes and now have a working prototype (Omie 0.2). But before getting into that, we should cover off the main questions that have come up around Omie since we first kicked off the project.

What defines an Omie?

At this stage we don’t propose to have a tight definition, as the project could evolve in many directions; so our high-level definition is that an Omie is ‘any physical device that Customer Commons licenses to use the name, and which therefore conforms to the customer-side requirements of Customer Commons’.

Version 1.0 will be a ‘Customer Commons Omie’ branded white label Android tablet with specific modifications to the OS, an onboard Personal Cloud with related sync options, and a series of VRM/ Customer-related apps that leverage that Personal Cloud.

All components, wherever possible, will be open source and built either on open specs and standards or on new ones we create. Our intention is not that Customer Commons becomes a hardware manufacturer and retailer; we see our role as being to catalyse a market in devices that enable people in their role of ‘customer’, and to generate the win-wins that we believe this will produce. Anyone can then build an Omie, to the open specs and trust mechanisms.

What kind of apps can this first version run?

We see version 1 having 8 to 10 in-built apps that tackle different aspects of being a customer. The defining feature of all of these apps is that they all use the same Personal Cloud to underpin their data requirements rather than create their own internal database.
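To make that design choice concrete, here is a minimal sketch, assuming a simple PersonalCloud interface (an assumption for this post, not a finished spec): two different apps read and write the same owner-controlled store instead of keeping private databases.

```typescript
// Sketch of Omie-style apps sharing one Personal Cloud rather than separate internal databases.
// The PersonalCloud interface and the in-memory store are assumptions for illustration.

interface PersonalCloud {
  get(key: string): unknown;
  put(key: string, value: unknown): void;
}

// A trivial local store for the sketch; a real one would live on the device and sync outward.
class LocalCloud implements PersonalCloud {
  private store = new Map<string, unknown>();
  get(key: string) { return this.store.get(key); }
  put(key: string, value: unknown) { this.store.set(key, value); }
}

// The "My Suppliers" app writes to the shared store...
function addSupplier(cloud: PersonalCloud, name: string): void {
  const suppliers = (cloud.get("suppliers") as string[]) ?? [];
  cloud.put("suppliers", [...suppliers, name]);
}

// ...and the "My Transactions" app reads and extends the same data, with no second copy.
function recordPurchase(cloud: PersonalCloud, supplier: string, amount: number): void {
  const known = (cloud.get("suppliers") as string[]) ?? [];
  if (!known.includes(supplier)) addSupplier(cloud, supplier);
  const txns = (cloud.get("transactions") as object[]) ?? [];
  cloud.put("transactions", [...txns, { supplier, amount, at: new Date().toISOString() }]);
}
```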

Beyond those initial apps, we have a long list of apps whose primary characteristic is that they could only run on a device over which the owner had full and transparent control.

We also envisage an Omie owner being able to load up any other technically compatible app to the device, subject to health warnings being presented around any areas that could breach the customer-side nature of the device.

How will this interact with my personal cloud?

As noted above, we will have one non-branded Personal Cloud in place to enable the prototyping work (on device and ‘in the cloud’). But we want to work with existing or new Personal Cloud providers who wish to engage with the project, so that an Omie owner can sync their data to those providers’ branded Personal Clouds.

Where are we now with development?

We now have a version 0.2 prototype; some pics and details are below. We intend, at some point, to run a Kickstarter or similar campaign to raise the funds required to bring a version 1.0 to market. As the project largely uses off-the-shelf components, we estimate the amount required at around $300k. Meantime, the core team will keep nudging things forward.

How can I get involved?

We are aiming for a more public development path from version 0.3. We’re hoping to get the Omie web site up and running in the next few weeks, and will post details there.

Alternatively, if you want to speed things along, please donate to Customer Commons.

VERSION 0.2

Below are a few pics from our 0.2 prototype.

Home Screen – Showing a secure OS and a working, local Personal Cloud syncing to ‘the cloud’ for many and varied wider uses. This one shows the VRM-related apps; there is another set of apps underway around Quantified Self.

Omie 0.2 Home Screen

My Suppliers – Just as a CRM system begins with a list of customers, a VRM device will encompass a list of ‘my suppliers’ (and ‘my stuff’).

Omie 0.2 My Suppliers

My Transactions – Another critical component, building my transaction history on my side.

Omie 0.2 Transactions

Intent Casting/ Stroller for Twins – Building out Doc’s classic use case: a real-time, locally expressed intention to buy, made available as a standard stream of permissioned data. Right now there are about 50 online sellers ‘listening’ for these intent casts, able to respond, and doing business; and 3 CRM systems.

Omie 0.2 Intent Casting
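For a feel of what a permissioned intent cast might carry, here is a sketch. The fields and the notification mechanism are assumptions for illustration, not a published intentcasting protocol.

```typescript
// Illustrative intent cast: a buyer's locally expressed intention, published on the buyer's terms.
// Field names and the notify mechanism are assumptions, not a defined standard.

interface IntentCast {
  want: string;              // what the buyer is looking for
  maxPrice: number;          // buyer-set ceiling, in local currency
  withinKm: number;          // how far away a seller may be
  expiresAt: string;         // the intent disappears after this time
  contactAllowed: "offers-only" | "offers-and-questions";
}

const strollerCast: IntentCast = {
  want: "double stroller for twins",
  maxPrice: 400,
  withinKm: 25,
  expiresAt: "2016-02-01T00:00:00Z",
  contactAllowed: "offers-only",
};

// Only sellers the buyer has permissioned receive the cast; everyone else sees nothing.
function publish(cast: IntentCast, permissionedSellers: Array<(c: IntentCast) => void>): void {
  for (const notify of permissionedSellers) notify(cast);
}
```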

So what have we learned in the build of version 0.2?

Firstly, that it feels really good to have a highly functional, local place for storing and using rich, deep personal information that is not dependent on anyone else or any service provider, and every part of which is substitutable.

Secondly, that without minimising the technical steps involved, the project is more about data management than anything else, and that we need to encourage a ‘race to the top’ in which the organisations customers deal with make it easy to move data back and forth between the parties. Right now many organisations are stuck in a negative and defensive mind-set around receiving volunteered information from individuals, and very few are returning data to customers in modern, re-usable formats through automated means.

Lastly, that the types of apps that emerge in this very different personal data ecosystem are genuinely new functions not enabled by the current ecosystem, and not just substitutes for those there already. For example, the ‘smart shopping cart’, in which a customer takes their requirements and preferences with them around the web, is perfectly feasible when the device genuinely lives on the side of the customer.