Change of Address (√)

Way back in 2006 or so, in the first Project VRM meetings, our canonical use case was ‘change of address’; that is to say, we wanted individuals to have the ability to update their address in one place and have that flow to multiple suppliers.

That seemed easy enough, so we thought at the time; all that’s needed is:

– a data store controlled by the individual

– a user interface

– an API that allowed organisations to connect

We did not note it at the time, but there probably should have been a fourth item, around ‘standardised data sharing terms’, so that organisations would not get tied in the legal knots of signing many different contracts to cover themselves.

So, 12 or so years later, that proved not to be quite so easy… I think our most flawed assumption was that organisations would see this as a good thing and be willing to get involved.

No matter, the reason for my post is to flag that individual-driven change of address can now be done at Internet scale, albeit we still need to crack the adoption issue. There are also a number of downstream use cases, e.g. where the address change must be verified.

Here’s a visual of how change of address works in the JLINC environment; the same principles could apply in other environments. The critical dependency is that both parties (individual and organisation) have their own data-sets that they voluntarily connect to each other.

Beyond the fact that this plumbing now demonstrably works at scale, I think the most interesting thing to emerge from the JLINC deployment is the Standard Information Sharing Agreement. The requirement here is to have an agreement that works for both parties; here is the initial one built for JLINC. The expectation is that these will evolve over time and likely become life-aspect or sector specific (e.g. health); but critically they will not mimic the current model, in which each organisation invents its own. The secondary function that I believe makes this scale is the ability to record every single data exchange that takes place, should either or both parties need to refer to it downstream.
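As a thought experiment, that recording function can be sketched as a tamper-evident log entry that both parties keep a copy of. The field names and the agreement identifier below are hypothetical illustrations, not JLINC's actual schema:

```python
import hashlib
import json
from datetime import datetime, timezone

def record_exchange(agreement_id, sender, receiver, payload):
    """Build a tamper-evident record of one data exchange.

    Both parties keep a copy; the digest lets either side prove
    later exactly what was shared and under which agreement.
    """
    body = {
        "agreement": agreement_id,  # which standard agreement governed the exchange
        "from": sender,
        "to": receiver,
        "data": payload,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Canonical JSON (sorted keys, no extra spaces) so both
    # parties compute the same digest over the same bytes.
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    body["digest"] = hashlib.sha256(canonical.encode()).hexdigest()
    return body

record = record_exchange(
    "sisa-v1",  # hypothetical identifier for a Standard Information Sharing Agreement
    "alice.example", "acme.example",
    {"field": "postal_address", "value": "1 New Street, Newtown"},
)
print(record["digest"])
```

If either party later disputes what was shared, recomputing the digest over the stored fields shows whether the record was altered.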

So, we can now tick the box around ‘change of address’, at least as working plumbing. The better news still is that the same plumbing and approach works for any type of data, or any data flow (so organisations sending data to Alice too). At least it should not take another 12 years to make that next use case work, which incidentally was ‘Intentcasting’; i.e. an individual being able to articulate what they are in the market for without losing control over that data.

Privacy is personal. Let’s start there.

The GDPR won’t give us privacy. Nor will ePrivacy or any other regulation. We also won’t get it from the businesses those regulations are aimed at.

Because privacy is personal. If it wasn’t, we wouldn’t have invented clothing and shelter, or social norms for signaling to each other what’s okay and what’s not okay.

On the Internet we have none of those. We’re still as naked as we were in Eden.

But let’s get some perspective here:  we invented clothing and shelter long before we invented history, and most of us didn’t get online until long after Internet service providers and graphical browsers showed up in 1994.

In these early years, it has been easier and more lucrative for business to exploit our exposed selves than it has been for technology makers to sew (and sell) us the virtual equivalents of animal skins and woven fabrics.

True, we do have the primitive shields called ad blockers and tracking protectors. And, when shields are all you’ve got, they can get mighty popular. That’s why 1.7 billion people on Earth were already blocking ads online by early 2017.† This made ad blocking the largest boycott in human history. (Note: some ad blockers also block tracking, but the most popular ad blocker is in the business of selling passage for tracking to companies whose advertising is found “acceptable” on grounds other than tracking.)

In case you think this happened just because most ads are “intrusive” or poorly targeted, consider the simple fact that ad blocking has been around since 2004, yet didn’t hockey-stick until the advertising business turned into direct response marketing, hellbent on collecting personal data and targeting ads at eyeballs.††

This happened in the late ’00s, with the rise of social media platforms and programmatic “adtech.” Euphemized by its perpetrators as “interactive,” “interest-based,” “behavioral” and “personalized,” adtech was, simply put, tracking-based advertising. Or, as I explain at the last link, direct response marketing in the guise of advertising.

The first sign that people didn’t like tracking was Do Not Track, an idea hatched by Chris Soghoian, Sid Stamm, and Dan Kaminsky, and named after the FTC’s popular Do Not Call Registry. Since browsers get copies of Web pages by requesting them (no, we don’t really “visit” those pages—and this distinction is critical), the idea behind Do Not Track was to put the request not to be tracked in the header of a browser. (The header is how a browser asks to see a Web page, and then guides the data exchanges that follow.)
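The mechanics are that simple. Here is a sketch of what the browser's request looks like with the Do Not Track field added; the client name and page are illustrative, though the `DNT` header itself is the real one browsers sent:

```python
# Headers a browser sends along with its request for a page.
headers = {
    "Host": "example.com",
    "User-Agent": "ExampleBrowser/1.0",  # illustrative client name
    "DNT": "1",                          # Do Not Track: "1" means "please do not track me"
}

# The raw request as the server receives it; DNT rides along as one line.
request = (
    "GET /index.html HTTP/1.1\r\n"
    + "".join(f"{k}: {v}\r\n" for k, v in headers.items())
    + "\r\n"
)
print(request)
```

Nothing in that line obliges the server or its third parties to honor it, which is the crux of what follows.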

Do Not Track was first implemented in 2009 by Sid Stamm, then a privacy engineer at Mozilla, as an option in the company’s Firefox browser. After that, the other major browser makers implemented Do Not Track in different ways at different times, culminating in Mozilla’s decision to block third party cookies in Firefox, starting in February 2013.

Before we get to what happened next, bear in mind that Do Not Track was never anything more than a polite request to have one’s privacy respected. It imposed no requirements on site owners. In other words, it was a social signal asking site owners and their third party partners to respect the simple fact that browsers are personal spaces, and that publishers and advertisers’ rights end at a browser’s front door.

The “interactive” ad industry and its dependents in publishing responded to that brave move by stomping on Mozilla like Godzilla on Bambi:

In this 2014 post I reported on the specifics of how that went down:

Google and Facebook both said in early 2013 that they would simply ignore Do Not Track requests, which killed it right there. But death for Do Not Track was not severe enough for the Interactive Advertising Bureau (IAB), which waged asymmetric PR warfare on Mozilla (the only browser maker not run by an industrial giant with a stake in the advertising business), even running red-herring shit like this on its client publishers’ websites:

As if Mozilla was out to harm “your small business,” or that any small business actually gave a shit.

And it worked.

In early 2013, Mozilla caved to pressure from the IAB.

Two things followed.

First, as soon as it was clear that Do Not Track was a fail, ad blocking took off. You can see that in this Google Trends graph†††, published in Ad Blockers and the Next Chapter of the Internet (5 November 2015 in Harvard Business Review):

Next, searches for “how to block ads” rose right in step with searches for retargeting, which is the most obvious evidence that advertising is following you around:

You can see that correlation in this Google Trends graph in Don Marti’s Ad Blocking: Why Now, published by DCN (the online publishers’ trade association) on 9 July 2015:

Measures of how nearly all of us continue to hate tracking were posted by Dr. Johnny Ryan (@johnnyryan) in PageFair last September. In that post, he reports on a PageFair “survey of 300+ publishers, adtech, brands, and various others, on whether users will consent to tracking under the GDPR and the ePrivacy Regulation.” Bear in mind that the people surveyed were industry insiders: people you would expect to exaggerate on behalf of continued tracking.

Here’s one result:

Johnny adds, “Only a very small proportion (3%) believe that the average user will consent to ‘web-wide’ tracking for the purposes of advertising (tracking by any party, anywhere on the web).”

He goes on to add, “However, almost a third believe that users will consent if forced to do so by ‘tracking walls’, that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR, the rules of which are already formalised and will apply in law from late May 2018.”

Which means that the general plan of the “interactive” advertising business is to put up those walls anyway, on the assumption that people will think they can’t get to a site’s content without consenting to tracking. We can read that in the subtext of IAB Europe‘s Transparency and Consent Framework, a work in progress you can follow here on GitHub, and read unpacked in more detail at AdvertisingConsent.eu.

So, to sum all this up, so far online what we have for privacy are: 1) popular but woefully inadequate ad blocking and tracking protection add-ons in our browsers; 2) a massively interesting regulation called the GDPR…

… and 3) plans by privacy violators to obey the letter of that regulation while continuing to violate its spirit.

So how do we fix this on the personal side? Meaning, what might we have for clothing and shelter, now that regulators and failed regulatory captors are duking it out in media that continue to think all the solutions to our problems will come from technologies and social signals other than our own?

Glad you asked. The answers will come in our next three posts here. We expect those answers to arrive in the world and have real effects—for everyone except those hellbent on tracking us—before the 25 May GDPR deadline for compliance.


† From Beyond ad blocking—the biggest boycott in human history: “According to PageFair’s 2017 Adblock Report, at least 615 million devices now block ads. That’s larger than the human population of North America. According to GlobalWebIndex, 37% of all mobile users, worldwide, were blocking ads by January of last year, and another 42% would like to. With more than 4.6 billion mobile phone users in the world, that means 1.7 billion people are blocking ads already—a sum exceeding the population of the Western Hemisphere.”

†† It was plain old non-tracking-based advertising that not only sponsored publishing and other ad-supported media, but burned into people’s heads nearly every brand you can name. After a $trillion or more has been spent chasing eyeballs, not one brand known to the world has been made by it. For lots more on all this, read everything you can by Bob Hoffman (@AdContrarian) and Don Marti (@dmarti).

††† Among the differences between the graph above and the current one—both generated by the same Google Trends search—are readings above zero in the latter for Do Not Track prior to 2007. While there are results in a search for “Do Not Track” in the 2004-2006 time frame, they don’t refer to the browser header approach later branded and popularized as Do Not Track.

Also, in case you’re reading this footnote, the family at the top is my father‘s. He’s the one on the left. The location was Niagara Falls and the year was 1916. Here’s the original. I flipped it horizontally so the caption would look best in the photo.

 

Latest draft of the No Stalking for Advertising Term V.2

UX and INTERFACE

Revised DRAFT of a singular, comprehensive term:

 
Draft Icon for inclusion in MVCR and other uses:

USER TERMS: Human language and {{ legal language }} below.

PREAMBLE:  The user-submitted term shown here creates an opportunity for individuals to share their single term with entities about how they wish to be treated. This effort is meant to describe human-, legal- and machine-readable versions of a comprehensive term, along with additional information for agents who might implement this term for individuals, as well as for entities who might see, accept or refuse it.  {{ Information is defined as personal information provided by the individual about themselves. Data + Meaning = Information. The observer creates meaning (or observer is “informed by” the data), and then can be assigned duties. Information not collected from a person does not by definition constitute personal data. }}

TERMS AGREEMENT:  {{ Information can only be shared with those parties who first agree to abide by these terms.  Any sharing of information with a party that has not first agreed to these terms is a violation of these terms. }}

SHARE: describes the terms for sharing information with entities by individuals.

Choice: 2nd

1st-2nd Party:   My information shared and what I do will be kept between me and the entity.

{{Information shared by an individual (the “1st party”) and their activities are not permitted to be shared by the 2nd party with any other parties.}}

DURATION: describes the terms for retaining information by entities about individuals. {{ Add language referring to laws or contracts, defining 3rd party jurisdiction, to limit this from abuse. }}

QUESTION: should this be just for the session? or for as long as the person still has a relationship and agrees to sharing?

Choice: Session

Session:  My information shared or about what I do will only be kept for the session, unless required by law or contractual obligation.

{{ Information about an individual must be destroyed by the 2nd party immediately after the completion of the transaction for which it was collected or otherwise generated, unless otherwise required by law or contract obligation. }} [NOTE: What about records for audit?  What about hashed storage, e.g., in blockchain or other ledger system?]

OR ?

Choice: Infinity

Unlimited until further notice:  My information will be kept as long as I continue to choose this term, unless required by law or contractual obligation. If I change to another lesser term, my new term will be followed.

{{ Information about an individual can be retained indefinitely by the 2nd party, unless and until the 1st party notifies the 2nd party they have made an alternate selection for duration. }}

PURPOSE: describes the purpose for use of an individual’s information, whether provided by them or about actions they take

Choice: Site / App Use

Site and App Use:  My information will be used for providing and / or enhancing the site or service, but not other purposes without my permission.

{{ Information about an individual may be used beyond the transaction for which it was collected or generated, but only with respect to the operation [or further development?] of the site or app over which such original transaction occurred and not for any other secondary uses by the 2nd party or other parties. }}

TRACKING

Choice: Tracking

Tracking: I will allow myself to be tracked by 3rd parties.

{{ Tracking of the individual and their activities by any 3rd parties is authorized. }}
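For the machine-readable leg of these terms, a serialization along these lines might work; the schema, keys and identifiers below are a sketch for illustration, not a standard:

```python
import json

# One individual's choices under the draft term above, as an agent
# on either side might read them. All names here are hypothetical.
term = {
    "term": "no-stalking-v2",      # hypothetical identifier for this draft term
    "share": "2nd-party-only",     # information stays between me and the entity
    "duration": "session",         # destroyed after the transaction, unless law or contract requires retention
    "purpose": "site-or-app-use",  # no secondary uses without my permission
    "tracking": "3rd-party-ok",    # or "none"; the draft above shows both choices
}

wire = json.dumps(term, sort_keys=True)
assert json.loads(wire) == term    # round-trips cleanly between parties
print(wire)
```

A server could parse this in one line and compare it against the terms it is willing to accept, which is all the "agreement" step requires mechanically.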

Time for THEM to agree to OUR terms

We can do for everybody what Creative Commons does for artists: give them terms they can offer—terms that can be read and agreed to by lawyers, ordinary folks, and their machines. And then we can watch “free market” come to mean what it says, and not just “your choice of captor.”

Try to guess how many times, in the course of your life in the digital world, you have “agreed” to terms like these:


Hundreds? Thousands? (Feels like) millions?

Look at the number of login/password combinations remembered by your browser. That’ll be a fraction of the true total.

Now think about what might happen if we could turn these things around. How about if sites and services could agree to our terms and conditions, and our privacy policies?

We’d have real agreements, and real relationships, freely established, between parties of equal power who both have an interest in each other’s success.

We’d have genuine (or at least better) trust, and better signaling of intentions between both parties. We’d have better exchanges of information and better control over what gets done with that information. And the information would be better too, because we wouldn’t have to lie or hide to protect our identities or our data.

We’d finally have the only basis on which the Seven Laws of Identity, issued by Kim Cameron in 2005, would actually work. Check ’em out:

– User Control and Consent

– Minimal Disclosure for a Constrained Use

– Justifiable Parties

– Directed Identity

– Pluralism of Operators and Technologies

– Human Integration

– Consistent Experience Across Contexts

Think about it. None of those work unless individuals are in charge of themselves and their relationships in the digital world. And they can’t as long as only one side is in charge. What we have instead are opposites: limited control and coerced consent, maximum disclosure for unconstrained use, unjustified parties, misdirected identity, silo’d operators and technologies, inhuman integration, and inconsistent experiences across contexts of all kinds. (I’ll add links for all of those later when I have time.)

Can we fix this problem, eleven years after Kim came down from the mountain (well, Canada) with those laws?

No, we can’t. Not without leverage.

The sad fact is that we’ve been at a disadvantage since geeks based the Web on an architecture called “client-server.” I’ve been told that term was chosen because “slave-master” didn’t sound so good. Personally, I prefer calf-cow:


As long as we’re the calves coming to the cows for the milk of “content” (plus unwanted cookies), we’re not equals.

But once we become independent, and can assert enough power to piss off the cows that most want to take advantage of us, the story changes.

Good news: we are independent now, and controlling our own lives online is pissing off the right cows.

We’re gaining that independence through ad and tracking blockers. There are also a lot of us now. And a lot more jumping on the bandwagon.

According to PageFair and Adobe, the number of people running ad blockers alone passed 200 million last May, with annual growth rates of 41% worldwide, 48% in the U.S. and 82% in the U.K.

Of course the “interactive” ad industry (the one that likes to track you) considers this a problem only they can solve. And, naturally, the disconnect between their urge to track and spam us, and our decision to stop all of it, is being called a “war.”

But it doesn’t have to be.

Out in the offline world, we were never at war with advertising. Sure, there’s too much of it, and a lot of it we don’t like. But we also know we wouldn’t have sports broadcasts (or sports talk radio) without it. We know how much advertising contributes to the value of the magazines and newspapers we read. (Which is worth more: a thick or a thin Vogue, Sports Illustrated, Bride’s or New York Times?) And to some degree we actually value what old fashioned Mad Men type advertising brings to the market’s table.

On the other hand, we have always been at war with the interactive form of advertising we call junk mail. Look up unwanted+mail, click on “images,” and you’ll get something like this:


What’s happened online is that the advertising business has turned into the “interactive”  junk message business. Only now you can’t tell the difference between an ad that’s there for everybody and one that’s aimed by crosshairs at your eyeballs.

The difference between real advertising and tracking-based junk messages is the same as that between wheat and chaff.

Today’s ad and tracking blockers are primitive prophylactics: ways to protect our eyeballs from advertising and tracking. But how about if we turn these into instruments of agreement? We could agree to allow the kinds of ads that pay the publisher and aren’t aimed at us by tracking.

Here at Customer Commons we’ve been working on those kinds of terms for the last several years. Helping us have been law school students and teachers, geeks and ordinary folks. The last time we published a straw-man version of those terms, they looked like this:

UserSubmittedTerms1stDraft

What those say (in the green circles) is “You (the second party) alone can use data you get from me, for as long as you want, just for your site or app, and will obey the Do Not Track request from my browser.”

This can be read easily by lawyers, ordinary folks and machines on both sides, just the way the graphic at the top of this post, borrowed from Creative Commons (our model for this), describes.

We’re also not alone.

Joining us in this effort are the Identity Ecosystem Working Group, the Personal Data Ecosystem Consortium, the Consent and Information Sharing Working Group (which is working on a Consent Receipt to give agreements a way to be recorded by both parties), Mozilla and others on the ProjectVRM Development Work list.

Many people from those groups (including Kim Cameron himself) will be at IIW, the Internet Identity Workshop, at the Computer History Museum in Silicon Valley, in the last week of next month, April 26-28. It’s an unconference. No panels, no keynotes, no plenaries. It’s all breakouts, on topics chosen by participants.

The day before, at the same location, will be VRM Day. The main topic there will be terms, and how we plan to get working versions of them in the next three days at IIW.

This is a huge opportunity. I am sure we have enough code, and enough work done on standards and the rest of it, to put up exactly the terms we can offer and publishers online can accept, and to start ending the war (that really isn’t one) between publishers and their readers.

Once we have those terms in place, others can follow, opening up to much better signaling between supply and demand, because both sides are equals.

So this is an open invitation to everybody already working in this space, especially browser makers (and not just Mozilla) and the ad and tracking blockers. IIW is a perfect place to show what we’ve got, to work together, and to move things forward.

Let’s do it.

 

Giving Customers Scale


Customers need scale.

Scale is leverage. A way to get lift.

Big business gets scale by aggregating resources, production methods, delivery services — and, especially, customers: you, me and billions of others without whom business would not exist.

Big business is heavy by nature. That’s why we use mass as an adjective for much of what big business does: mass manufacturing, mass distribution, mass retailing, mass marketing, and mass approaches to everything, including legal agreements.

For personal perspective on this, consider how you can’t operate your mobile phone until you click “accept” to a 55-screen list of terms and conditions you’ll never read because there’s no point to it. Privacy policies are just as bad. Few offer binding commitments and nearly all are lengthy and complicated. According to a Carnegie-Mellon study, it would take 76 work days per year just to read all the privacy policies encountered by the average person. The Atlantic says this yields an “opportunity cost” of $781 billion per year, exceeding the GNP of Florida.

We accept this kind of thing because we don’t know any other way to get along with big business, and big business doesn’t know any other way to get along with us. And we’ve had this status quo ever since industry won the Industrial Revolution.

In 1943 — perhaps the apex of the Industrial Age — law professor Friedrich Kessler called these non-agreements “contracts of adhesion,” meaning the submissive party was required to adhere to the terms of the contract while the dominant party could change whatever they liked. On one side, glue. On the other, Velcro. Kessler said contracts of adhesion were pro forma because there was no way a big business could have different contracts with thousands or millions of customers. What we lost, Kessler said, was freedom of contract, because it didn’t scale.

So, for a century and a half, in economic sectors from retail to health care, we have had dominant companies controlling captive markets, often enabled by captured regulators as well. This way of economic life is so deeply embedded that most of us believe, in effect, that “free market” means “your choice of captor.” Stockholm syndrome has become the norm, not the exception.

Thus it is also no surprise that marketing, the part of business that’s supposed to “relate” to customers, calls us “targets” and “assets” they “acquire,” “control,” “manage,” “lock in” and “own” as if we are slaves or cattle. This is also why, even though big business can’t live without us, our personal influence on it is mostly limited to cash, coerced loyalty and pavlovian responses to coupons, discounts and other marketing stimuli.

Small businesses are in the same boat. As customers, we can relate personally, face to face, with the local cleaner or baker or nail salon. Yet, like their customers, most small businesses are also at the mercy of giant banks, credit agencies, business management software suppliers and other big-business services. Many more are also crushed by big companies that use big compute power and the Internet to eliminate intermediaries in the supply chain.

It gets worse. In Foreign Policy today, Parag Khanna reports on twenty-five companies that “are more powerful than many countries.” In addition to the usual suspects (Walmart, ExxonMobil, Apple, Nestlé, Maersk) he also lists newcomers such as Uber, which is not only obsoleting the taxi business, but also the government agencies that regulate it.

It also gets more creepy, since the big craze in big business for the last few years has been harvesting “behavioral” data. While they say they’re doing it to “deliver” us a “better experience” or whatever, their main purpose is to manipulate each of us for their own gain. Here’s how Shoshana Zuboff unpacks that in Secrets of Surveillance Capitalism:

Among the many interviews I’ve conducted over the past three years, the Chief Data Scientist of a much-admired Silicon Valley company that develops applications to improve students’ learning told me, “The goal of everything we do is to change people’s actual behavior at scale. When people use our app, we can capture their behaviors, identify good and bad behaviors, and develop ways to reward the good and punish the bad. We can test how actionable our cues are for them and how profitable for us”…

We’ve entered virgin territory here. The assault on behavioral data is so sweeping that it can no longer be circumscribed by the concept of privacy and its contests.  This is a different kind of challenge now, one that threatens the existential and political canon of the modern liberal order defined by principles of self-determination that have been centuries, even millennia, in the making. I am thinking of matters that include, but are not limited to, the sanctity of the individual and the ideals of social equality; the development of identity, autonomy, and moral reasoning; the integrity of contract, the freedom that accrues to the making and fulfilling of promises; norms and rules of collective agreement; the functions of market democracy; the political integrity of societies; and the future of democratic sovereignty.

And that might be the short list. And an early one too.

Think about what happens when the “Internet of Things” (aka IoT) comes to populate our private selves and spaces. The marketing fantasy for IoT is people’s things reporting everything they do, so they can be studied and manipulated like laboratory mice.

Our tacit agreement to be mice in the corporate mazes amounts to a new social contract in which nobody has much of a clue about what the consequences will be. One that’s easy to imagine is personalized pricing based on intimate knowledge gained from behavioral tracking through the connected things in our lives. In the new world where our things narc on us to black boxes we can’t see or understand, our bargaining power falls to zero. So does our rank in the economic caste system.

But hope is not lost.

With the Internet, scale for individuals is thinkable, because the Internet was also designed from the start to give every node on the network the ability to connect with every other node, and to reduce the functional distance between all of them as close to zero as possible. Same with cost. As I put it in The Giant Zero,

On the Net you can have a live voice conversation with anybody anywhere, at no cost or close enough. There is no “long distance.”

On the Net you can exchange email with anybody anywhere, instantly. No postage required.

On the Net anybody can broadcast to the whole world. You don’t need to be a “station” to do it. There is no “range” or “coverage.” You don’t need antennas, beyond the unseen circuits in wireless devices.

In a 2002 interview Peter Drucker said, “In the Industrial Age, only industry was in a position to raise capital, manufacture, ship and communicate at scale, across the world. Individuals did not have that power. Now, with the Internet, they do.”*

The potential for this is summarized by the “one clue” atop The Cluetrain Manifesto, published online in April 1999 and in book form in January 2000:

we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.

What happens when our reach is outward from our own data, kept in our own spaces, which we alone control? For other examples of what could happen, consider the personal computer, the Internet and mobile computing and communications. In each case, individuals could do far more with those things than centralized corporate or government systems ever could. It also helps to remember that big business and big government at first fought—or just didn’t understand—how much individuals could do with computing, networking and mobile communications.

Free, independent and fully human beings should also be good for business, because they are boundless sources of intelligence, invention, genuine (rather than coerced or “managed”) loyalty and useful feedback—to an infinitely greater degree than they were before the Net came along.

In The Intention Economy: When Customers Take Charge (Harvard Business Review Press, 2012), I describe the end state that will emerge when customers get scale with business:

Rather than guessing what might get the attention of consumers—or what might “drive” them like cattle—vendors will respond to actual intentions of customers. Once customers’ expressions of intent become abundant and clear, the range of economic interplay between supply and demand will widen, and its sum will increase… This new economy will outperform the Attention Economy that has shaped marketing and sales since the dawn of advertising. Customer intentions, well-expressed and understood, will improve marketing and sales, because both will work with better information, and both will be spared the cost and effort wasted on guesses about what customers might want, and flooding media with messages that miss their marks.

The Intention Economy reported on development work fostered by ProjectVRM, which I launched at the Berkman Center for Internet and Society in 2006. Since then the list of VRM developments has grown to many dozens, around the world.

VRM stands for Vendor Relationship Management. It was conceived originally as the customer-side counterpart of Customer Relationship Management, a $23 billion business (Gartner, 2014) that has from the start been carrying the full burden of relationship management on its own. (Here’s a nice piece about VRM, published today in CMO.)

There are concentrations of VRM development in Europe and Australia, where privacy laws are strong. This is not coincidental. Supportive policy helps. But it is essential for individuals to have means of their own for creating the online equivalent of clothing and shelter, which are the original privacy technologies in the physical world—and are still utterly lacking in the virtual one, mostly because it’s still early.

VRM development has been growing gradually and organically over the past nine years, but today there are three things happening that should accelerate development and adoption in the near term:

  1. The rise of ad, tracking and content blocking, which is now well past 200 million people. This gives individuals two new advantages: a) The ability to control what is allowed into their personal spaces within browsers and apps; and b) Potential leverage in the marketplace — the opportunity to deal as equals for the first time.
  2. Apple’s fight with the FBI, on behalf of its own customers. This too is unprecedented, and brings forward the first major corporate player to take the side of individuals in their fight for privacy and agency in the marketplace. Mozilla and the EFF are also standout players in the fight for personal freedom from surveillance, and for individual equality in dealings with business.
  3. A growing realization within CRM that VRM is a necessity for customers, and for many kinds of positive new growth opportunities. (See the Capgemini videos here.)

To take full advantage of these opportunities, VRM development is necessary but not sufficient. To give customers scale, we also need an organization that does what VRM developers alone cannot: develop terms of engagement that customers can assert in their dealings with companies; certify compliance with VRM standards; hold events that customers lead and do not merely attend; prototype products (e.g. Omie) that have low commercial value but high market leverage; and bring millions of members to the table when we need to bargain with giants in business — among other things that our members will decide.

That’s why we started Customer Commons, and why we need to ramp it up now. In the next post, we’ll explain how. In the meantime we welcome your thoughts.


* Drucker said roughly this in a 2001 interview published in Business 2.0 that is no longer on the Web. So I’m going from memory here.

Privacy is an Inside Job

The Searls Wanigan, 1949
Ordinary people wearing and enjoying the world’s original privacy technology: clothing and shelter. (I’m the one on top. Still had hair then.)

Start here: clothing and shelter are privacy technologies. We use them to create secluded spaces for ourselves. Spaces we control.

Our ancestors have been wearing clothing for at least 170,000 years and building shelters for at least half a million years. So we’ve had some time to work out what privacy means. Yes, it differs among cultures and settings, but on the whole it is well understood and not very controversial.

On the Internet we’ve had about 21 years*. That’s not enough time to catch up with the physical world, but hey: it’s still early.

It helps to remember that nature in the physical world doesn’t come with privacy. We have to make our own. Same goes for the networked world. And, since most of us don’t yet have clothing and shelter in the networked world, we’re naked there.

So, since others exploit our exposure — and we don’t like it — privacy on the Internet is very controversial. Evidence: searching for “privacy” brings up 4,670,000,000 results. Most of the top results are for groups active in the privacy cause, and for well-linked writings on the topic. But most of the billions of results below that are privacy policies uttered in print by lawyers for companies and published because that’s pro forma.

Most of those companies reserve the right to change their policies whenever they wish, by the way, meaning they’re meaningless.

For real privacy, we can’t depend on anybody else’s policies, public or private. We can’t wait for Privacy as a Service. We can’t wait for our abusers to get the clues and start respecting personal spaces we’ve hardly begun to mark out (even though they ought to be obvious). And we can’t wait for the world’s regulators to start smacking our abusers around (which, while satisfying, won’t solve the problem).

We need to work with the knitters and builders already on the case in the networked world, and recruit more to help out. Their job is to turn privacy policies into technologies we wear, we inhabit, we choose, and we use to signal what’s okay and not okay to others.

The EFF has been all over this for years. So have many developers on the VRM list. (Those are ones I pay the most attention to. Weigh in with others and I’ll add them here.)

The most widely used personal privacy technology today is ad and tracking blocking. More than 200 million of us now employ those tools on our browsers. The tools are many and different, but basically they all block ads and/or tracking at our digital doorstep. In sum this amounts to the largest boycott in human history.

But there’s still no house behind the doorstep, and we’re still standing there naked, even if we’ve kept others from planting tracking beacons on us.

One of the forms privacy takes in the physical world is the mutual understanding we call manners, which are agreements about how to respect each other’s intentions.

Here at Customer Commons, we’ve been working on terms we can assert, to signal those intentions. Here’s a working draft of what they look like now:

UserSubmittedTerms1stDraft

That’s at the Consent and Information Working Group. Another allied effort is Consent Receipt.

If you’re working on privacy in any way — whether you’re a geek hacking code, a policy maker, an academic, a marketer trying to do the right thing, or a journalist working the privacy beat — remember this: Privacy is personal first. Before anything else. If you’re not working on getting people clothing and shelter of their own, you’re not helping where it’s needed.

It’s time to civilize the Net. And that’s an inside job.

__________________

*If we start from the dawn of ISPs, graphical browsers, email and the first commercial activity, which began after the NSFnet went down on 30 April 1995.

Terms: What are They and Why Should You Care?

User Terms Draft 2 Icons

Terms are choices you make to ask that your data and activities be treated a certain way. Customer Commons is developing terms with Kantara and the Consent and Information Sharing Working Group so that we have a standardized set of terms, which can be used commonly through browsers, apps and other interfaces.

It is our intention that Terms will come in Human, Legal and Engineering forms, so that people can read them, they can be legally binding, and APIs and code can convey and negotiate your chosen terms. The idea isn’t that you would constantly be choosing these things, but rather that your agent would take your choices and negotiate for you. We also envision being able to copy the terms of someone you trust, if you don’t understand what these choices will mean for you.

Terms may also be created to fit various contexts, such as how to handle your health data, or what to do about data you share for a purchase versus data you share for social activity. Those will come after the initial set is developed. What you see in the picture above are draft icons. We intend to develop prettier versions with a designer, and to work with engineers on sample or open source code both for choosing terms and for responding to term requests from individuals.
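To make the Engineering form of this idea concrete, here is a minimal sketch of how machine-readable terms might be represented and checked by an agent. All of the field names and the matching logic below are illustrative assumptions, not the actual CISWG draft terms:

```python
from dataclasses import dataclass

# Hypothetical machine-readable form of user-asserted terms.
# Field names are illustrative, not drawn from the CISWG draft.
@dataclass(frozen=True)
class Terms:
    purposes: frozenset        # e.g. {"site-operation", "personalization"}
    third_party_sharing: bool  # may data be shared with third parties?
    retention_days: int        # maximum retention the user allows

def acceptable(user: Terms, site: Terms) -> bool:
    """Return True if the site's requested terms fit within the user's."""
    return (site.purposes <= user.purposes
            and (user.third_party_sharing or not site.third_party_sharing)
            and site.retention_days <= user.retention_days)

user = Terms(frozenset({"site-operation"}), False, 30)
site = Terms(frozenset({"site-operation", "ad-targeting"}), True, 365)
print(acceptable(user, site))  # the site asks for more than the user allows
```

An agent could run a comparison like this automatically each time a site presents its requested terms, only asking the person to intervene when the requests fall outside their standing choices.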

If you are interested in helping with this project, you can join the CISWG UX group at Kantara by getting on the mailing list, signing the IP agreement (so that all contributions can be used in the project) and joining our calls. We hope to see you there. Or comment here with questions!

New Rules for Privacy Regulations

The Wall Street Journal has an informative conversation with Lawrence Lessig: Technology Will Create New Models for Privacy Regulation. Two points underlie the change toward new models: the servers holding vast user databases are increasingly (and very cheaply) breached, and the value of the information in those databases is shifting to something more aligned with VRM: use of the data, on a need-to-know basis. Lessig notes:

The average cost per user of a data breach is now $240 … think of businesses looking at that cost and saying “What if I can find a way to not hold that data, but the value of that data?” When we do that, our concept of privacy will be different. Our concept so far is that we should give people control over copies of data. In the future, we will not worry about copies of data, but using data. The paradigm of required use will develop once we have really simple ways to hold data. If I were king, I would say it’s too early. Let’s muddle through the next few years. The costs are costly, but the current model of privacy will not make sense going forward.

The challenge, notes Lessig, is “a corrupt Congress” that is more interested in surveillance than markets and doing business. Perhaps that isn’t a problem, according to an Associated Press poll (which has no bias, of course!):

According to the new poll, 56 percent of Americans favor and 28 percent oppose the ability of the government to conduct surveillance on Internet communications without needing to get a warrant. That includes such surveillance on U.S. citizens. Majorities both of Republicans (67 percent) and Democrats (55 percent) favor government surveillance of Americans’ Internet activities to watch for suspicious activity that might be connected to terrorism. Independents are more divided, with 40 percent in favor and 35 percent opposed. Only a third of Americans under 30, but nearly two-thirds 30 and older, support warrantless surveillance.

Right. After all, who needs business?

Volvo’s In-Car Delivery Service

In Volvo launches in-car package delivery service in Gothenburg, Volvo’s new service “lets you have your Christmas shopping delivered directly to your car.” Intriguing idea, and one that saves on parking hassles, like waiting or idling around the favored spots.

With just days to go before Black Friday and Cyber Monday – the busiest online shopping days of the Christmas season – Sweden’s Volvo Cars has unveiled a brand new way to take some of the hassle out Christmas shopping.

The premium car maker has launched the world’s first commercially available in-car delivery service by teaming up with PostNord, the Nordic region’s leading communication and logistics supplier, Lekmer.com, the leading Nordic online toy and baby goods store, and Mat.se, a Swedish online grocery retailer, to have Christmas toys, gifts, food and drinks delivered to its cars. …

The Volvo In-car Delivery works by means of a digital key, which is used to gain one-time access to your vehicle. Owners simply order the goods online, receive a notification that the goods have been delivered and then just drive home with them.

Alas, not available everywhere. Yet.