Privacy is personal. Let’s start there.

The GDPR won’t give us privacy. Nor will ePrivacy or any other regulation. We also won’t get it from the businesses those regulations are aimed at.

Because privacy is personal. If it wasn’t, we wouldn’t have invented clothing and shelter, or social norms for signaling to each other what’s okay and what’s not okay.

On the Internet we have none of those. We’re still as naked as we were in Eden.

But let’s get some perspective here:  we invented clothing and shelter long before we invented history, and most of us didn’t get online until long after Internet service providers and graphical browsers showed up in 1994.

In these early years, it has been easier and more lucrative for business to exploit our exposed selves than it has been for technology makers to sew (and sell) us the virtual equivalents of animal skins and woven fabrics.

True, we do have the primitive shields called ad blockers and tracking protectors. And, when shields are all you’ve got, they can get mighty popular. That’s why 1.7 billion people on Earth were already blocking ads online by early 2017.† This made ad blocking the largest boycott in human history. (Note: some ad blockers also block tracking, but the most popular ad blocker is in the business of selling passage for tracking to companies whose advertising is found “acceptable” on grounds other than tracking.)

In case you think this happened just because most ads are “intrusive” or poorly targeted, consider the simple fact that ad blocking has been around since 2004, yet didn’t hockey-stick until the advertising business turned into direct response marketing, hellbent on collecting personal data and targeting ads at eyeballs.††

This happened in the late ’00s, with the rise of social media platforms and programmatic “adtech.” Euphemized by its perpetrators as “interactive,” “interest-based,” “behavioral” and “personalized,” adtech was, simply put, tracking-based advertising. Or, as I explain at the last link, direct response marketing in the guise of advertising.

The first sign that people didn’t like tracking was Do Not Track, an idea hatched by Chris Soghoian, Sid Stamm, and Dan Kaminsky, and named after the FTC’s popular Do Not Call Registry. Since browsers get copies of Web pages by requesting them (no, we don’t really “visit” those pages—and this distinction is critical), the idea behind Do Not Track was to put the request not to be tracked in the header of a browser’s request. (The header is how a browser asks to see a Web page, and then guides the data exchanges that follow.)
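To make concrete how small that signal is, here is a minimal sketch in Python (the URL is a placeholder): a Do Not Track-enabled browser simply adds one extra line, DNT: 1, to the headers it sends when requesting a page.

```python
# Minimal sketch of the Do Not Track signal. The URL is a placeholder;
# a DNT-enabled browser adds the same one-line header to every request.
import requests

response = requests.get(
    "https://publisher.example/article",   # placeholder address
    headers={"DNT": "1"},                  # "1" means: please do not track me
)
print(response.status_code)
```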

Do Not Track was first implemented in 2009 by Sid Stamm, then a privacy engineer at Mozilla, as an option in the company’s Firefox browser. After that, the other major browser makers implemented Do Not Track in different ways at different times, culminating in Mozilla’s decision to block third party cookies in Firefox, starting in February 2013.

Before we get to what happened next, bear in mind that Do Not Track was never anything more than a polite request to have one’s privacy respected. It imposed no requirements on site owners. In other words, it was a social signal asking site owners and their third party partners to respect the simple fact that browsers are personal spaces, and that publishers’ and advertisers’ rights end at a browser’s front door.

The “interactive” ad industry and its dependents in publishing responded to that brave move by stomping on Mozilla like Godzilla on Bambi:

In this 2014 post I reported on the specifics of how that went down:

Google and Facebook both said in early 2013 that they would simply ignore Do Not Track requests, which killed it right there. But death for Do Not Track was not severe enough for the Interactive Advertising Bureau (IAB), which waged asymmetric PR warfare on Mozilla (the only browser maker not run by an industrial giant with a stake in the advertising business), even running red-herring shit like this on its client publishers’ websites:

As if Mozilla was out to harm “your small business,” or that any small business actually gave a shit.

And it worked.

In early 2013, Mozilla caved to pressure from the IAB.

Two things followed.

First, as soon as it was clear that Do Not Track was a fail, ad blocking took off. You can see that in this Google Trends graph†††, published in Ad Blockers and the Next Chapter of the Internet (5 November 2015 in Harvard Business Review):

Next, searches for “how to block ads” rose right in step with searches for retargeting, which is the most obvious evidence that advertising is following you around:

You can see that correlation in this Google Trends graph in Don Marti’s Ad Blocking: Why Now, published by DCN (the online publishers’ trade association) on 9 July 2015:

Measures of how nearly all of us continue to hate tracking were posted by Dr. Johnny Ryan (@johnnyryan) in PageFair last September. In that post, he reports on a PageFair “survey of 300+ publishers, adtech, brands, and various others, on whether users will consent to tracking under the GDPR and the ePrivacy Regulation.” Bear in mind that the people surveyed were industry insiders: people you would expect to exaggerate on behalf of continued tracking.

Here’s one result:

Johnny adds, “Only a very small proportion (3%) believe that the average user will consent to ‘web-wide’ tracking for the purposes of advertising (tracking by any party, anywhere on the web).”

He goes on to add, “However, almost a third believe that users will consent if forced to do so by ‘tracking walls’, that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR, the rules of which are already formalised and will apply in law from late May 2018.”

Which means that the general plan by the “interactive” advertising business is to put up those walls anyway, on the assumption that people will think they won’t get to a site’s content without consenting to tracking. We can read that in the subtext of IAB Europe‘s Transparency and Consent Framework, a work-in-progress you can follow here on GitHub, and read unpacked in more detail at AdvertisingConsent.eu.

So, to sum all this up, what we have so far for privacy online is: 1) popular but woefully inadequate ad blocking and tracking protection add-ons in our browsers; 2) a massively interesting regulation called the GDPR…

… and 3) plans by privacy violators to obey the letter of that regulation while continuing to violate its spirit.

So how do we fix this on the personal side? Meaning, what might we have for clothing and shelter, now that regulators and failed regulatory captors are duking it out in media that continue to think all the solutions to our problems will come from technologies and social signals other than our own?

Glad you asked. The answers will come in our next three posts here. We expect those answers to arrive in the world and have real effects—for everyone except those hellbent on tracking us—before the 25 May GDPR deadline for compliance.


† From Beyond ad blocking—the biggest boycott in human history: “According to PageFair’s 2017 Adblock Report, at least 615 million devices now block ads. That’s larger than the human population of North America. According to GlobalWebIndex, 37% of all mobile users, worldwide, were blocking ads by January of last year, and another 42% would like to. With more than 4.6 billion mobile phone users in the world, that means 1.7 billion people are blocking ads already—a sum exceeding the population of the Western Hemisphere.”

†† It was plain old non-tracking-based advertising that not only sponsored publishing and other ad-supported media, but burned into people’s heads nearly every brand you can name. After a $trillion or more has been spent chasing eyeballs, not one brand known to the world has been made by it. For lots more on all this, read everything you can by Bob Hoffman (@AdContrarian) and Don Marti (@dmarti).

††† Among the differences between the graph above and the current one—both generated by the same Google Trends search—are readings above zero in the latter for Do Not Track prior to 2007. While there are results in a search for “Do Not Track” in the 2004-2006 time frame, they don’t refer to the browser header approach later branded and popularized as Do Not Track.

Also, in case you’re reading this footnote, the family at the top is my father‘s. He’s the one on the left. The location was Niagara Falls and the year was 1916. Here’s the original. I flipped it horizontally so the caption would look best in the photo.

 

How customers help companies comply with the GDPR

That’s what we’re starting this Thursday (26 April) at GDPR Hack Day at MIT.

The GDPR‘s “sunrise day” — when the EU can start laying fines on companies for violations of it — is May 25th. We want to be ready for that: with a cookie of our own baking that will get us past the “gauntlet walls” of consent requirements that are already appearing on the world’s commercial websites—especially the ad-supported ones.

The reason is this:

Which you can also see in a search for GDPR.

Most of the results in that search are about what companies can do (or actually what companies can do for companies, since most results are for companies doing SEO to sell their GDPR prep services).

We propose a simpler approach: do what the user wants. That’s why the EU created the GDPR in the first place. Only in our case, we can start solving in code what regulation alone can’t do:

  1. Un-complicate things (for example, relieving sites of the need to put up a wall of permissions, some of which are sure to obtain grudging “consent” to the same awful data harvesting practices that caused the GDPR in the first place).
  2. Give people a good way to start signaling their intentions to websites—especially business-friendly ones
  3. Give advertisers a safe way to keep doing what they are doing, without unwelcome tracking
  4. Open countless new markets by giving individuals better ways of signaling what they want from business, starting with good manners (which went out the window when all the tracking and profiling started)

What we propose is a friendly way to turn off third party tracking at all the websites where a browser encounters requests for permission to track, starting with a cookie that will tell the site, in effect, that first party tracking for site purposes is okay, but third party tracking is not.

If all works according to plan, that cookie will persist from site to site, getting the browser past many gauntlet walls. It will also give all those sites and their techies a clear signal of intention from the user’s side. (All this is subject to revision and improvement as we hack this thing out.)
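Here is a rough sketch of the idea, with invented names throughout (the cookie name “cc-consent”, its values, and the Flask route are illustrations, not the cookie we’re actually hacking out): the site reads one cookie from the visitor and decides whether to load any third party trackers at all.

```python
# Hypothetical sketch only: the cookie name and its values are invented
# for illustration; the real cookie is still being worked out.
from flask import Flask, request, render_template_string

app = Flask(__name__)

PAGE = """<html><body>
  <p>Article content here.</p>
  {% if allow_third_party %}
    <script src="https://tracker.example/t.js"></script>
  {% endif %}
</body></html>"""

@app.route("/article")
def article():
    # "first-party-only" would mean: tracking for this site's own purposes
    # is fine, but no third party tracking and no consent wall.
    consent = request.cookies.get("cc-consent", "first-party-only")
    return render_template_string(PAGE, allow_third_party=(consent == "all"))
```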

This photo of the whiteboard at our GDPR session at IIW on April 5th shows how wide ranging and open our thinking was at the time:

Photos from the session start here. Click on your keyboard’s right (>) arrow to move through them. Session notes are on the IIW wiki here.

Here is the whiteboard in outline form:

Possible Delivery Paths

Carrots

  • Verifiable credential to signal intent
  • Ads.txt replaced by a more secure system + faster page serving
  • For publishers:
    • Ad blocking decreases
    • Subscriptions increase
    • Sponsorship becomes more attractive
  • For advertisers:
    • Branding—the real kind, where pubs are sponsored directly—can come back
    • Clearly stated permissions from “data subjects” for “data processors” and “data controllers” (those are GDPR labels)
    • Will permit direct ads (programmatic placement is okay; just not based on surveillance)
    • Puts direct intentcasting from data subject (users) on the table, replacing adtech’s spying and guesswork with actual customer-driven leads and perhaps eventually a shopping cart customers take from site to site
    • Liability reduction or elimination
    • Risk management
    • SSI (self-sovereign identity) / VC (verified credential) approach —> makes demonstration of compliance automatable (for publishers and ad creative)
    • Can produce a consent receipt that works for both sides
    • Complying with a visitor’s cookie is a lot easier than hiring expensive lawyers and consultants to write gauntlet walls that violate the spirit of the GDPR while obtaining grudging compliance from users with the letter of it

Sticks

  • The GDPR, with ePrivacy right behind it, and big fines that are sure to come down
  • A privacy manager or privacy dashboard on the user’s side, with real scale across multiple sites, is inevitable. This will help bring one into the world, and sites should be ready for it.
  • Since ample research (University of Pennsylvania Annenberg School, PageFair) has made clear that most users do not want to be tracked, browser makers will be siding eventually, inevitably, with those users by amplifying tracking protections. The work we’re doing here will help guide that work—for all browser makers and add-on developers

Participating organizations (some onboard, some partially through individuals)

Sources

Additions and corrections to all the above are welcome.

So is space somewhere in Cambridge or Boston to continue discussions and hacking on Friday, April 27th.

The Only Way Customers Come First

— is by proffering terms of their own.

That’s what will happen when sites and services click “accept” to your terms, rather than the reverse. When that happens, you are what lawyers call the first party. Sites and services that agree to your terms are second parties.

As a first party, you get scale across all the sites and services that agree to your terms, just like today each of those sites and services gets scale across thousands or millions of second-class netizens called “users”:

This is the exact reverse of what we’ve had in mass markets ever since industry won the industrial revolution. But we can get that scale now, because we have the Internet, which was designed to support it. (Details here and here.)

And now is the time, for two reasons:

  1. We can make our leadership pay off for sites and services; and
  2. Agreeing with us can make sites and services compliant with tough new privacy laws.

First example: #NoStalking:

With #NoStalking, we proffer a term that says—

This does a bunch of good things for advertising supported sites:

  1. It relieves them of the need to track us like animals everywhere we go, and harvest personal data we’d rather not give anybody without our permission.
  2. Because of #1, it gives them compliance with the EU’s General Data Protection Regulation (aka GDPR), which will start fining companies “up to 10,000,000 EUR or up to 2% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater (Article 83, Paragraph 4),” or “a fine up to 20,000,000 EUR or up to 4% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater (Article 83, Paragraph 5 & 6).” (There’s a small sketch of how those caps work after this list.)
  3. It provides simple and straightforward “brand safety” directly from human beings, rather than relying on an industry granfalloon to do the same.
  4. It lets good publishers sell advertising to brands that want to sponsor journalism rather than chase eyeballs to the cheapest, shittiest sites.
  5. It provides a valuable economic signal from demand to supply in the open marketplace—one that can be enlarged to include other signals, such as our next term…

#intentcasting:

This is where individuals present themselves to the marketplace as qualified leads, but on their own terms.

#nostalking and #intentcasting are the first terms to be published at Customer Commons. Both have the potential to generate fresh and healthy economic activity, one in publishing and the other in retailing.

Every new first party term  has the potential to reform whole markets for the good of everyone, simply by creating better ways for demand to signal, engage and improve supply. In doing that, first party terms will also make good on the promise of the Internet in the first place. After two decades of failing to do that, it’s about time.

We’ll be working on exactly these terms at VRM Day next Monday, and at IIW for the following three days, all at the Computer History Museum in Silicon Valley. Sign up at those links. Help us change the world.

 

 

Time for THEM to agree to OUR terms

We can do for everybody what Creative Commons does for artists: give them terms they can offer—terms that can be read and agreed to by lawyers, ordinary folks, and their machines. And then we can watch “free market” come to mean what it says, and not just “your choice of captor.”

Try to guess how many times, in the course of your life in the digital world, you have “agreed” to terms like these:

URsoScrewed

Hundreds? Thousands? (Feels like) millions?

Look at the number of login/password combinations remembered by your browser. That’ll be a fraction of the true total.

Now think about what might happen if we could turn these things around. How about if sites and services could agree to our terms and conditions, and our privacy policies?

We’d have real agreements, and real relationships, freely established, between parties of equal power who both have an interest in each other’s success.

We’d have genuine (or at least better) trust, and better signaling of intentions between both parties. We’d have better exchanges of information and better control over what gets done with that information. And the information would be better too, because we wouldn’t have to lie or hide to protect our identities or our data.

We’d finally have the only basis on which the Seven Laws of Identity, issued by Kim Cameron in 2005, would actually work. Check ’em out:

laws

Think about it. None of those work unless individuals are in charge of themselves and their relationships in the digital world. And they can’t as long as only one side is in charge. What we have instead are opposites: limited control and coerced consent, maximum disclosure for unconstrained use, unjustified parties, misdirected identity, silo’d operators and technologies, inhuman integration, and inconsistent experiences across contexts of all kinds. (I’ll add links for all of those later when I have time.)

Can we fix this problem, eleven years after Kim came down from the mountain (well, Canada) with those laws?

No, we can’t. Not without leverage.

The sad fact is that we’ve been at a disadvantage since geeks based the Web on an architecture called “client-server.” I’ve been told that term was chosen because “slave-master” didn’t sound so good. Personally, I prefer calf-cow:

calf-cow

As long as we’re the calves coming to the cows for the milk of “content” (plus unwanted cookies), we’re not equals.

But once we become independent, and can assert enough power to piss off the cows that most want to take advantage of us, the story changes.

Good news: we are independent now, and controlling our own lives online is pissing off the right cows.

We’re gaining that independence through ad and tracking blockers. There are also a lot of us now. And a lot more jumping on the bandwagon.

According to PageFair and Adobe, the number of people running ad blockers alone passed 200 million last May, with annual growth rates of 41% in the world, 48% in the U.S. and 82% in the U.K.

Of course the “interactive” ad industry (the one that likes to track you) considers this a problem only they can solve. And, naturally, the disconnect between their urge to track and spam us, and our decision to stop all of it, is being called a “war.”

But it doesn’t have to be.

Out in the offline world, we were never at war with advertising. Sure, there’s too much of it, and a lot of it we don’t like. But we also know we wouldn’t have sports broadcasts (or sports talk radio) without it. We know how much advertising contributes to the value of the magazines and newspapers we read. (Which is worth more: a thick or a thin Vogue, Sports Illustrated, Bride’s or New York Times?) And to some degree we actually value what old fashioned Mad Men type advertising brings to the market’s table.

On the other hand, we have always been at war with the interactive form of advertising we call junk mail. Look up unwanted+mail, click on “images,” and you’ll get something like this:

unwantedmail

What’s happened online is that the advertising business has turned into the “interactive”  junk message business. Only now you can’t tell the difference between an ad that’s there for everybody and one that’s aimed by crosshairs at your eyeballs.

The difference between real advertising and tracking-based junk messages is the same as that between wheat and chaff.

Today’s ad and tracking blockers are primitive prophylactics: ways to protect our eyeballs from advertising and tracking. But how about if we turn these into instruments of agreement? We could agree to allow the kind of ads that pay the publisher and aren’t aimed at us by tracking.

Here at Customer Commons we’ve been working on those kinds of terms for the last several years. Helping us have been law school students and teachers, geeks and ordinary folks. The last time we published a straw man version of those terms, they looked like this:

UserSubmittedTerms1stDraft

What those say (in the green circles) is “You (the second party) alone can use data you get from me, for as long as you want, just for your site or app, and will obey the Do Not Track request from my browser.”

This can be read easily by lawyers, ordinary folks and machines on both sides, just the way the graphic at the top of this post, borrowed from Creative Commons (our model for this), describes.
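As a guess at what the machine-readable layer of such a term might look like (the field names below are invented for illustration, not Customer Commons’ actual vocabulary), one short record could carry the whole agreement for people, lawyers and software alike:

```python
# Hypothetical rendering only; field names are invented for illustration.
import json

no_stalking_term = {
    "term": "#NoStalking",
    "first_party": "the visitor proffering the term",
    "second_party": "the site or app that clicks 'accept'",
    "data_use": {
        "who": "second party only",        # no handing data to third parties
        "duration": "unlimited",
        "scope": "this site or app only",
    },
    "honor_do_not_track": True,
}

print(json.dumps(no_stalking_term, indent=2))
```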

We’re also not alone.

Joining us in this effort are the Identity Ecosystem Working Group, the Personal Data Ecosystem Consortium, the Consent and Information Sharing Working Group (which is working on a Consent Receipt to give agreements a way to be recorded by both parties), Mozilla and others on the ProjectVRM Development Work list.

Many people from those groups (including Kim Cameron himself) will be at IIW, the Internet Identity Workshop, at the Computer History Museum in Silicon Valley, in the last week of next month, April 26-28. It’s an unconference. No panels, no keynotes, no plenaries. It’s all breakouts, on topics chosen by participants.

The day before, at the same location, will be VRM Day. The main topic there will be terms, and how we plan to get working versions of them in the next three days at IIW.

This is a huge opportunity. I am sure we have enough code, and enough work done on standards and the rest of it, to put up exactly the terms we can offer and publishers online can accept, and that will start to end the war (that really isn’t) between publishers and their readers.

Once we have those terms in place, others can follow, opening up to much better signaling between supply and demand, because both sides are equals.

So this is an open invitation to everybody already working in this space, especially browser makers (and not just Mozilla) and the ad and tracking blockers. IIW is a perfect place to show what we’ve got, to work together, and to move things forward.

Let’s do it.

 

Giving Customers Scale

scale-leverage

Customers need scale.

Scale is leverage. A way to get lift.

Big business gets scale by aggregating resources, production methods, delivery services — and, especially, customers: you, me and billions of others without whom business would not exist.

Big business is heavy by nature. That’s why we use mass as an adjective for much of what big business does: mass manufacturing, mass distribution, mass retailing, mass marketing, and mass approaches to everything, including legal agreements.

For personal perspective on this, consider how you can’t operate your mobile phone until you click “accept” to a 55-screen list of terms and conditions you’ll never read because there’s no point to it. Privacy policies are just as bad. Few offer binding commitments and nearly all are lengthy and complicated. According to a Carnegie-Mellon study, it would take 76 work days per year just to read all the privacy policies encountered by the average person. The Atlantic says this yields an “opportunity cost” of $781 billion per year, exceeding the GNP of Florida.

We accept this kind of thing because we don’t know any other way to get along with big business, and big business doesn’t know any other way to get along with us. And we’ve had this status quo ever since industry won the Industrial Revolution.

In 1943 — perhaps the apex of the Industrial Age — law professor Friedrich Kessler called these non-agreements “contracts of adhesion,” meaning the submissive party was required to adhere to the terms of the contract while the dominant party could change whatever they liked. On one side, glue. On the other, Velcro. Kessler said contracts of adhesion were pro forma because there was no way a big business could have different contracts with thousands or millions of customers. What we lost, Kessler said, was freedom of contract, because it didn’t scale.

So, for a century and a half, in economic sectors from retail to health care, we have had dominant companies controlling captive markets, often enabled by captured regulators as well. This way of economic life is so deeply embedded that most of us believe, in effect, that “free market” means “your choice of captor.” Stockholm syndrome has become the norm, not the exception.

Thus it is also no surprise that marketing, the part of business that’s supposed to “relate” to customers, calls us “targets” and “assets” they “acquire,” “control,” “manage,” “lock in” and “own” as if we are slaves or cattle. This is also why, even though big business can’t live without us, our personal influence on it is mostly limited to cash, coerced loyalty and pavlovian responses to coupons, discounts and other marketing stimuli.

Small businesses are in the same boat. As customers, we can relate personally, face to face, with the local cleaner or baker or nail salon. Yet, like their customers, most small businesses are also at the mercy of giant banks, credit agencies, business management software suppliers and other big business services. Many more are also crushed by big companies that use big compute power and the Internet to eliminate intermediaries in the supply chain.

It gets worse. In Foreign Policy today, Parag Khanna reports on twenty-five companies that “are more powerful than many countries.” In addition to the usual suspects (Walmart, ExxonMobil, Apple, Nestlé, Maersk) he also lists newcomers such as Uber, which is not only obsoleting the taxi business, but also the government agencies that regulate it.

It also gets more creepy, since the big craze in big business for the last few years has been harvesting “behavioral” data. While they say they’re doing it to “deliver” us a “better experience” or whatever, their main purpose is to manipulate each of us for their own gain. Here’s how Shoshana Zuboff unpacks that in Secrets of Surveillance Capitalism:

Among the many interviews I’ve conducted over the past three years, the Chief Data Scientist of a much-admired Silicon Valley company that develops applications to improve students’ learning told me, “The goal of everything we do is to change people’s actual behavior at scale. When people use our app, we can capture their behaviors, identify good and bad behaviors, and develop ways to reward the good and punish the bad. We can test how actionable our cues are for them and how profitable for us”…

We’ve entered virgin territory here. The assault on behavioral data is so sweeping that it can no longer be circumscribed by the concept of privacy and its contests.  This is a different kind of challenge now, one that threatens the existential and political canon of the modern liberal order defined by principles of self-determination that have been centuries, even millennia, in the making. I am thinking of matters that include, but are not limited to, the sanctity of the individual and the ideals of social equality; the development of identity, autonomy, and moral reasoning; the integrity of contract, the freedom that accrues to the making and fulfilling of promises; norms and rules of collective agreement; the functions of market democracy; the political integrity of societies; and the future of democratic sovereignty.

And that might be the short list. And an early one too.

Think about what happens when the “Internet of Things” (aka IoT) comes to populate our private selves and spaces. The marketing fantasy for IoT is people’s things reporting everything they do, so they can be studied and manipulated like laboratory mice.

Our tacit agreement to be mice in the corporate mazes amounts to a new social contract in which nobody has much of a clue about what the consequences will be. One that’s easy to imagine is personalized pricing based on intimate knowledge gained from behavioral tracking through the connected things in our lives. In the new world where our things narc on us to black boxes we can’t see or understand, our bargaining power falls to zero. So does our rank in the economic caste system.

But hope is not lost.

With the Internet, scale for individuals is thinkable, because the Internet was also designed from the start to give every node on the network the ability to connect with every other node, and to reduce the functional distance between all of them as close to zero as possible. Same with cost. As I put it in The Giant Zero,

On the Net you can have a live voice conversation with anybody anywhere, at no cost or close enough. There is no “long distance.”

On the Net you can exchange email with anybody anywhere, instantly. No postage required.

On the Net anybody can broadcast to the whole world. You don’t need to be a “station” to do it. There is no “range” or “coverage.” You don’t need antennas, beyond the unseen circuits in wireless devices.

In a 2002 interview Peter Drucker said, “In the Industrial Age, only industry was in a position to raise capital, manufacture, ship and communicate at scale, across the world. Individuals did not have that power. Now, with the Internet, they do.”*

The potential for this is summarized by the “one clue” atop The Cluetrain Manifesto, published online in April 1999 and in book form in January 2000:

Cluetrain's "one clue"

What happens when our reach is outward from our own data, kept in our own spaces, which we alone control? For other examples of what could happen, consider the personal computer, the Internet and mobile computing and communications. In each case, individuals could do far more with those things than centralized corporate or government systems ever could. It also helps to remember that big business and big government at first fought—or just didn’t understand—how much individuals could do with computing, networking and mobile communications.

Free, independent and fully human beings should be also good for business, because they are boundless sources of intelligence, invention, genuine (rather than coerced or “managed”) loyalty and useful feedback—to an infinitely greater degree than they were before the Net came along.

In The Intention Economy: When Customers Take Charge (Harvard Business Review Press, 2012), I describe the end state that will emerge when customers get scale with business:

Rather than guessing what might get the attention of consumers—or what might “drive” them like cattle—vendors will respond to actual intentions of customers. Once customers’ expressions of intent become abundant and clear, the range of economic interplay between supply and demand will widen, and its sum will increase… This new economy will outperform the Attention Economy that has shaped marketing and sales since the dawn of advertising. Customer intentions, well-expressed and understood, will improve marketing and sales, because both will work with better information, and both will be spared the cost and effort wasted on guesses about what customers might want, and flooding media with messages that miss their marks.

The Intention Economy reported on development work fostered by ProjectVRM, which I launched at the Berkman Center for Internet and Society in 2006. Since then the list of VRM developments has grown to many dozens, around the world.

VRM stands for Vendor Relationship Management. It was conceived originally as the customer-side counterpart of Customer Relationship Management, a $23 billion business (Gartner, 2014) that has from the start been carrying the full burden of relationship management on its own. (Here’s a nice piece about VRM, published today in CMO.)

There are concentrations of VRM development in Europe and Australia, where privacy laws are strong. This is not coincidental. Supportive policy helps. But it is essential for individuals to have means of their own for creating the online equivalent of clothing and shelter, which are the original privacy technologies in the physical world—and are still utterly lacking in the virtual one, mostly because it’s still early.

VRM development has been growing gradually and organically over the past nine years, but today there are three things happening that should accelerate development and adoption in the near term:

  1. The rise of ad, tracking and content blocking, which is now well past 200 million people. This gives individuals two new advantages: a) The ability to control what is allowed into their personal spaces within browsers and apps; and b) Potential leverage in the marketplace — the opportunity to deal as equals for the first time.
  2. Apple’s fight with the FBI, on behalf of its own customers. This too is unprecedented, and brings forward the first major corporate player to take the side of individuals in their fight for privacy and agency in the marketplace. Mozilla and the EFF are also standout players in the fight for personal freedom from surveillance, and for individual equality in dealings with business.
  3. A growing realization within CRM that VRM is a necessity for customers, and for many kinds of positive new growth opportunities. (See the Capgemini videos here.)

To take full advantage of these opportunities, VRM development is necessary but insufficient. To give customers scale, we also need an organization that does what VRM developers alone cannot: develop terms of engagement that customers can assert in their dealings with companies; certify compliance with VRM standards; hold events that customers lead and do not merely attend; prototype products (e.g. Omie) that have low commercial value but high market leverage; and bring millions of members to the table when we need to bargain with giants in business — among other things that our members will decide.

That’s why we started Customer Commons, and why we need to ramp it up now. In the next post, we’ll explain how. In the meantime we welcome your thoughts.


* Drucker said roughly this in a 2001 interview published in Business 2.0 that is no longer on the Web. So I’m going from memory here.

Privacy is an Inside Job

The Searls Wanigan, 1949
Ordinary people wearing and enjoying the world’s original privacy technology: clothing and shelter. (I’m the one on top. Still had hair then.)

Start here: clothing and shelter are privacy technologies. We use them to create secluded spaces for ourselves. Spaces we control.

Our ancestors have been wearing clothing for at least 170,000 years and building shelters for at least half a million years. So we’ve had some time to work out what privacy means. Yes, it differs among cultures and settings, but on the whole it is well understood and not very controversial.

On the Internet we’ve had about 21 years*. That’s not enough time to catch up with the physical world, but hey: it’s still early.

It helps to remember that nature in the physical world doesn’t come with privacy. We have to make our own. Same goes for the networked world. And, since most of us don’t yet have clothing and shelter in the networked world, we’re naked there.

So, since others exploit our exposure — and we don’t like it — privacy on the Internet is very controversial. Evidence: searching for “privacy” brings up 4,670,000,000 results. Most of the top results are for groups active in the privacy cause, and for well-linked writings on the topic. But most of the billions of results below that are privacy policies uttered in print by lawyers for companies and published because that’s pro forma.

Most of those companies reserve the right to change their policies whenever they wish, by the way, meaning they’re meaningless.

For real privacy, we can’t depend on anybody else’s policies, public or private. We can’t wait for Privacy as a Service. We can’t wait for our abusers to get the clues and start respecting personal spaces we’ve hardly begun to mark out (even though they ought to be obvious). And we can’t wait for the world’s regulators to start smacking our abusers around (which, while satisfying, won’t solve the problem).

We need to work with the knitters and builders already on the case in the networked world, and recruit more to help out. Their job is to turn privacy policies into technologies we wear, we inhabit, we choose, and we use to signal what’s okay and not okay to others.

The EFF has been all over this for years. So have many developers on the VRM list. (Those are ones I pay the most attention to. Weigh in with others and I’ll add them here.)

The most widely used personal privacy technology today is ad and tracking blocking. More than 200 million of us now employ those on our browsers. The tools are many and different, but basically they all block ads and/or tracking at our digital doorstep. In sum this amounts to the largest boycott in human history.

But there’s still no house behind the doorstep, and we’re still standing there naked, even if we’ve kept others from planting tracking beacons on us.

One of the forms privacy takes in the physical world is the mutual understanding we call manners, which are agreements about how to respect each others’ intentions.

Here at Customer Commons, we’ve been working on terms we can assert, to signal those intentions. Here’s a working draft of what they look like now:

UserSubmittedTerms1stDraft

That’s at the Consent and Information Sharing Working Group. Another allied effort is Consent Receipt.

If you’re working on privacy in any way — whether you’re a geek hacking code, a policy maker, an academic, a marketer trying to do the right thing, or a journalist working the privacy beat — remember this: Privacy is personal first. Before anything else. If you’re not working on getting people clothing and shelter of their own, you’re not helping where it’s needed.

It’s time to civilize the Net. And that’s an inside job.

__________________

*If we start from the dawn of ISPs, graphical browsers, email and the first commercial activity, which began after the NSFnet went down on 30 April 1995.

 

 

 

Electronic Health Records and Patient-Centric Design

CIO’s story Why Electronic Health Records aren’t more usable offers an interesting perspective on the current (improved?) state of affairs in medical care records. From the article:

The American Medical Association in 2014 issued an eight-point framework for improving EHR usability. According to this framework, EHRs should:

  • enhance physicians’ ability to provide high-quality patient care
  • support team-based care
  • promote care coordination
  • offer product modularity and configurability
  • reduce cognitive workload
  • promote data liquidity
  • facilitate digital and mobile patient engagement
  • expedite user input into product design and post-implementation feedback.

Nevertheless, it does not appear that EHR vendors are placing more emphasis on UCD. The Office of the National Coordinator for Health IT requires developers to perform usability tests as part of a certification process that makes their EHRs eligible for the government’s EHR incentive program. Yet a recent study found that, of 41 EHR vendors that released public reports, fewer than half used an industry-standard UCD process. Only nine developers tested their products with at least 15 participants who had clinical backgrounds, such as physicians.

Note that this situation is not due to a lack of user-centric efforts to make medical records more useful. Indeed there are several efforts underway, including HealthAuth, Kantara’s Healthcare ID Assurance Working Group, Patient Privacy Rights, HEART working efforts with OAuth and UMA, and more. As the article noted, there are regulatory complications as well as crazy-complicated workflow requirements imposed by the software designers/vendors. We need a shift in focus here.

Volvo’s In-Car Delivery Service

In Volvo launches in-car package delivery service in Gothenburg, Volvo’s new service “lets you have your Christmas shopping delivered directly to your car.” An intriguing idea that saves on parking hassles, like those of the people waiting and idling around the favored spots.

With just days to go before Black Friday and Cyber Monday – the busiest online shopping days of the Christmas season – Sweden’s Volvo Cars has unveiled a brand new way to take some of the hassle out of Christmas shopping.

The premium car maker has launched the world’s first commercially available in-car delivery service by teaming up with PostNord, the Nordic region’s leading communication and logistics supplier, Lekmer.com, the leading Nordic online toy and baby goods store, and Mat.se, a Swedish online grocery retailer, to have Christmas toys, gifts, food and drinks delivered to its cars. …

The Volvo In-car Delivery works by means of a digital key, which is used to gain one-time access to your vehicle. Owners simply order the goods online, receive a notification that the goods have been delivered and then just drive home with them.

Alas, not available everywhere. Yet.

Omie Update (version 0.2)

We’re overdue an update on the Omie Project, so here goes.

To re-cap:

  • We at Customer Commons believe there is room/need for a device that sits firmly on the side of the individual when it comes to their role as a customer or potential customer.
  • That can and will mean many things and iterations over time, but for now we’re focusing on getting a simple prototype up and running using existing freely available components that don’t lock us in to any specific avenues downstream.
  • Our role is to demonstrate the art of the possible, catalyse the development project, and act to define what it means to ‘sit firmly on the side of the customer’.
We’ve been working away behind the scenes, and now have a working prototype (Omie 0.2). But before getting into that, we should cover off the main questions that have come up around Omie since we first kicked off the project.

What defines an Omie?

At this stage we don’t propose to have a tight definition as the project could evolve in many directions; so our high level definition is that an Omie is ‘any physical device that Customer Commons licenses to use the name, and which therefore conforms to the “customer side” requirements of Customer Commons’.

Version 1.0 will be a ‘Customer Commons Omie’ branded white label Android tablet with specific modifications to the OS, an onboard Personal Cloud with related sync options, and a series of VRM/ Customer-related apps that leverage that Personal Cloud.

All components, wherever possible, will be open source and either built on open specs/standards or defined by new ones we create. Our intention is not that Customer Commons becomes a hardware manufacturer and retailer; we see our role as being to catalyse a market in devices that enable people in their role of ‘customer’, and generate the win-wins that we believe this will produce. Anyone can then build an Omie, to the open specs and trust mechanisms.

What kind of apps can this first version run?

We see version 1 having 8 to 10 in-built apps that tackle different aspects of being a customer. The defining feature of all of these apps is that they all use the same Personal Cloud to underpin their data requirements rather than create their own internal database.
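Here is a rough sketch of that design choice, with invented class and field names (this is not the Omie codebase): two different apps read and write the same local store instead of keeping private databases of their own.

```python
# Sketch of the shared Personal Cloud idea; names are invented for
# illustration and are not the Omie codebase.
import json
from pathlib import Path

class PersonalCloud:
    """One local store that every app on the device reads and writes."""
    def __init__(self, path: str = "~/omie/personal-cloud.json"):
        self.path = Path(path).expanduser()

    def read(self) -> dict:
        return json.loads(self.path.read_text()) if self.path.exists() else {}

    def write(self, data: dict) -> None:
        self.path.parent.mkdir(parents=True, exist_ok=True)
        self.path.write_text(json.dumps(data, indent=2))

# Two different apps, one store: no app-private silos.
cloud = PersonalCloud()
data = cloud.read()
data.setdefault("suppliers", []).append({"name": "Local Grocer"})    # "My Suppliers" app
data.setdefault("transactions", []).append({"total_usd": 42.50})     # "My Transactions" app
cloud.write(data)
```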

Beyond those initial apps, we have a long list of apps whose primary characteristic is that they could only run on a device over which the owner had full and transparent control.

We also envisage an Omie owner being able to load up any other technically compatible app to the device, subject to health warnings being presented around any areas that could breach the customer-side nature of the device.

How will this interact with my personal cloud?

As noted above, we will have one non-branded Personal Cloud in place to enable the prototyping work (on device and ‘in the cloud’), but we wish to work with existing or new Personal Cloud providers wishing to engage with the project to enable an Omie owner to sync their data to their branded Personal Clouds.

Where are we now with development?

We now have a version 0.2 prototype; some pics and details are below. We intend, at some point, to run a Kickstarter or similar campaign to raise the funds required to bring a version 1.0 to market. As the project largely uses off-the-shelf components, we see the amount required being around $300k. Meantime, the core team will keep nudging things forward.

How can I get involved?

We are aiming for a more public development path from version 0.3. We’re hoping to get the Omie web site up and running in the next few weeks, and will post details there.

Alternatively, if you want to speed things along, please donate to Customer Commons.

VERSION 0.2

Below are a few pics from our 0.2 prototype.

Home Screen – Showing a secure OS and a working, local Personal Cloud syncing to ‘the cloud’ for many and varied wider uses. This one shows the VRM-related apps; there is another set of apps underway around Quantified Self.

Omie 0.2 Home Screen

My Suppliers – Just as a CRM system begins with a list of customers, a VRM device will encompass a list of ‘my suppliers’ (and ‘my stuff’).

Omie 0.2 My Suppliers

My Transactions – Another critical component, building my transaction history on my side.

Omie 0.2 Transactions

Intent Casting/ Stroller for Twins – Building out Doc’s classic use case, real time, locally expressed intention to buy made available as a standard stream of permissioned data. Right now there are about 50 online sellers ‘listening’ for these intent casts, able to respond, and doing business; and 3 CRM systems.

Omie 0.2 Intent Casting
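To give a feel for what that stream might carry, here is a hypothetical intentcast message (the field names are invented for illustration and are not a published standard):

```python
# Hypothetical intentcast; field names are invented for illustration.
import datetime
import json

intentcast = {
    "intent": "buy",
    "item": "stroller for twins",
    "max_price": {"amount": 400, "currency": "USD"},
    "fulfillment": "pickup within 20 km or free delivery",
    "valid_until": (datetime.date.today() + datetime.timedelta(days=14)).isoformat(),
    "permissions": {
        "sellers_may_respond": True,
        "sellers_may_retain_data": False,   # one lead, no profiling
    },
}

print(json.dumps(intentcast, indent=2))
```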

So what have we learned in the build of version 0.2?

Firstly, that it feels really good to have a highly functional, local place for storing and using rich, deep personal information that is not dependent on anyone else or any service provider, and has no parts of it that are not substitutable.

Secondly, that without minimising the technical steps to take, the project is more about data management than anything else, and that we need to encourage a ‘race to the top’ in which the organisations customers deal with make it easy for them to move data backwards and forwards between the parties. Right now many organisations are stuck in a negative and defensive mind-set around receiving volunteered information from individuals, and very few are returning data to customers in modern, re-usable formats through automated means.

Lastly, that the types of apps that emerge in this very different personal data eco-system are genuinely new functions not enabled by the current eco-system, and not just substitutes for those there already. For example, the ‘smart shopping cart’ in which a customer takes their requirements and preferences with them around the web is perfectly feasible when the device genuinely lives on the side of the customer.