Privacy

Choosing Your Terms

AI prompt (with Microsoft Image Creator): “A person chooses ‘NoStalking’ from a collection of privacy-providing terms on the Customer Commons website”

Customer Commons was designed to be for personal privacy terms what Creative Commons is for personal copyright licenses. So far we have one privacy term here, called NoStalking. It’s an agreement a person chooses when they want another party not to track them away from that party’s site or service, while still allowing ads to be displayed. Since it’s a contract, think of it as a Do Not Track agreement rather than as just a preference signal (which is all Do Not Track ever was—and why it failed).

The IEEE’s P7012 Working Group (with four Customer Commons board members on it) has been working for the past few years on a standard for making terms such as NoStalking readable by machines, and not just by ordinary folk and lawyers.
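
To make “readable by machines” concrete, here is a minimal sketch, in TypeScript, of the shape a term like NoStalking might take. Every field name below is our own invention for illustration; the actual encoding is exactly what the P7012 working group is working out.

```typescript
// A guess at the shape of a machine-readable personal term such as
// NoStalking. Field names are illustrative only; P7012 will define
// the real schema.

interface PersonalTerm {
  id: string;                // e.g. "CC-NoStalking-1.0" (hypothetical ID)
  proffers: "first-party";   // the individual proffers; the site accepts
  allows: string[];          // purposes the individual permits
  forbids: string[];         // purposes expressly not allowed
  humanReadable: string;     // the plain-English version for ordinary folk
}

const noStalking: PersonalTerm = {
  id: "CC-NoStalking-1.0",
  proffers: "first-party",
  allows: ["ads-not-based-on-tracking"],
  forbids: ["tracking-away-from-this-site"],
  humanReadable: "Just give me ads not based on tracking me.",
};

console.log(JSON.stringify(noStalking, null, 2));
```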

The questions in front of the working group right now are:

  1. How the individual chooses a term, or set of them.
  2. How both the individual (the first party) and the site or service (the second party) might keep a record of all the terms for which they have agreements signed by their machines, so that compliance can be monitored and disputes can rely on auditable data.
  3. How the standard can apply to both simple scenarios such as NoStalking and more complex ones that, for example, might involve negotiation and movement toward a purchase at the end of what marketers call a customer journey, or the completion of that journey in a state of relationship. Also how to end such a relationship, and to record that as well.

At this stage of the Internet’s history, our primary ways of interacting with sites and services are through browsers and apps on our computers and mobile devices. Since both are built on the client-server (aka slave-master or calf-cow) model, neither browsers nor apps provide ways to address the questions above. They are all built to make you agree to others’ terms, and to leave recording those agreements entirely the responsibility of those other parties.

So we need an independent instrument that can work within or alongside browsers and apps. On the Creative Commons model, we’re calling this instrument a chooser. However, unlike the Creative Commons chooser, this one will not sit on a website. It will be an instrument of the person’s own. How it will work matters less at this stage than outlining or wire-framing what it will do.

Here are the basic rules on which we are basing our approach to completing the standard:

  1. The individual is a self-sovereign, independent actor in the ecosystem.
  2. Organisations are present in this ecosystem as voluntary providers of products and services.
  3. The individual provides no more data than is required for service.
  4. All personal data is deleted at the termination of the agreement, unless expressly overridden by national regulations.
  5. Any purposes not overtly mentioned as allowed are not allowed.
  6. Service provision will always require an identifier; this method assumes the individual can bring their own, potentially supported by a software agent and related services.
  7. Agreements are signed before any data exchange.
  8. The precise data required for each purpose is out of band for agreement design and selection.
  9. Agreements are invoked at precisely the most relevant time: when an individual (in this case, the first party) is ready to engage any site or service (the second party) that is digital itself or has a digital route to a completed engagement. This point is important because it is precisely the same time as the second party normally invokes its own terms, and can update them in compliance with the first party’s requirements. This is the window of opportunity in which agents representing both parties can come to a set of acceptable terms. Note that there can be plenty of terms that favor the individual’s privacy requirements that are also good for the other side. NoStalking is a good example, because it says (in plain English) “Just give me ads not based on tracking me.” (In a way, Google’s new Privacy Sandbox complies with this.)
  10. To be clear: the chooser is what handles that back-and-forth negotiation to a solution acceptable to both parties before it hands off to agreement signing. (A sketch of that loop follows this list.)
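
Here, purely as a sketch, is what that negotiation loop might look like in code. None of the names below come from the draft standard; they were invented to trace the flow the rules above describe: proffer, counter, agree, sign, record.

```typescript
// Hypothetical sketch of the chooser's negotiation loop. The types and
// function names are invented for illustration; P7012 defines the real
// protocol.

type Verdict = { accepted: boolean; counterTermIds?: string[] };

interface SecondParty {
  consider(termIds: string[]): Promise<Verdict>;
}

interface Recorder {
  // Rule 2 above: both parties keep auditable records of what was signed.
  record(agreement: { termIds: string[]; signedAt: Date }): Promise<void>;
}

async function negotiate(
  proffered: string[],      // terms the individual chose, e.g. ["CC-NoStalking-1.0"]
  acceptable: Set<string>,  // fallback terms the individual would settle for
  site: SecondParty,
  records: Recorder,
): Promise<boolean> {
  let offer = proffered;
  for (let round = 0; round < 3; round++) {  // keep the back-and-forth bounded
    const verdict = await site.consider(offer);
    if (verdict.accepted) {
      // Rule 7 above: agreements are signed before any data exchange.
      await records.record({ termIds: offer, signedAt: new Date() });
      return true;
    }
    const counter = (verdict.counterTermIds ?? []).filter((t) => acceptable.has(t));
    if (counter.length === 0) return false;  // no acceptable overlap: walk away
    offer = counter;
  }
  return false;
}
```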

More to follow.

 


Beyond E-commerce

Phil Windley explains e-commerce 1.0 in a single slide that says this:

One reason this happened is that client-server, aka calf-cow (illustrated in Thinking outside the browser), has been the default format for all relationships on the Web, and cookies were required to maintain those relationships. Which really aren’t relationships at all. Here’s why:

  1. The calves in these relationships have no easy way even to find (much less to understand or create) the cookies in their browsers’ jars. (See the sketch after this list.)
  2. The calves have no real identity of their own, but instead have as many different identities as there are websites that know (via cookies) their visiting browsers. This gives them no independence, much less a place to stand like Archimedes, with a lever on the world. The browser may be a great tool, but it’s neither that place to stand, nor a sufficient lever.
  3. All the “agreements” the calves have with the websites’ cows, whose terms the calves have “accepted” with one click, or adjusted with some number of additional clicks, leave no readable record on the calves’ side. This severely limits their capacity to argue or dispute, which are requirements for a true relationship.
  4. There exists no independent way individuals can signal their intentions—such as interest in a purchase, conditions for engagement, or the need to be left alone (which is how Brandeis and Warren define privacy). As a calf, the browser can’t do that.
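
Point 1 is easy to verify. A script on any page can list that site’s cookies, but all it gets back is opaque name=value pairs, with nothing about who set each one, why, or under what terms. A minimal browser-console sketch:

```typescript
// Paste into a browser console on any site. document.cookie yields only
// opaque name=value pairs: no issuer, no purpose, no terms. HttpOnly and
// third-party cookies don't even show up here, which strengthens the point.

const jar = document.cookie.split(";").map((pair) => {
  const [name, ...rest] = pair.trim().split("=");
  return { name, value: rest.join("=") };
});

console.table(jar); // e.g. { name: "_ga", value: "GA1.2.123456789..." }
```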

In other words, the best we can do in e-commerce 1.0 is what the calf-cow system allows. And that’s to depend utterly on the operators of websites—and especially of giant retailers (led by Amazon) and intermediaries (primarily Google and Facebook).

Nearly all signaling between demand and supply remains trapped inside these silos and walled gardens. We search inside their systems, we are notified of product and service availability inside their systems, we make agreements inside their systems (to terms and conditions they provide and require), our privacy is dependent on their systems, and product and service delivery is handled either inside their systems or through allied and dependent systems.

Credit where due: an enormous amount of good has come out of these systems. But a far larger amount of good is MLOTT—money left on the table—because there is a boundless sum and variety of demand and supply that still cannot easily signal their interests, intentions, or presence to each other in the digital world.

Putting that money on the table is the job of e-commerce 2.0—or whatever else we call it.

[Later… We have a suggestion.]


Cross-posted at the ProjectVRM blog, here.


Just in case you feel safe with Twitter

twitter bird with crosshairs

Just got a press release by email from David Rosen (@firstpersonpol) of the Public Citizen press office. The headline says “Historic Grindr Fine Shows Need for FTC Enforcement Action.” The same release is also a post in the news section of the Public Citizen website. This is it:

WASHINGTON, D.C. – The Norwegian Data Protection Agency today fined Grindr $11.7 million following a Jan. 2020 report that the dating app systematically violates users’ privacy. Public Citizen asked the Federal Trade Commission (FTC) and state attorneys general to investigate Grindr and other popular dating apps, but the agency has yet to take action. Burcu Kilic, digital rights program director for Public Citizen, released the following statement:

“Fining Grindr for systematic privacy violations is a historic decision under Europe’s GDPR (General Data Protection Regulation), and a strong signal to the AdTech ecosystem that business-as-usual is over. The question now is when the FTC will take similar action and bring U.S. regulatory enforcement in line with those in the rest of the world.

“Every day, millions of Americans share their most intimate personal details on apps like Grindr, upload personal photos, and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. We need to regulate them now, before it’s too late.”

The first link goes to Grindr is fined $11.7 million under European privacy law, by Natasha Singer (@NatashaNYT) and Aaron Krolik. (This @AaronKrolik? If so, hi. If not, sorry. This is a blog. I can edit it.) The second link goes to a Public Citizen post titled Popular Dating, Health Apps Violate Privacy.

In the emailed press release, the text is the same, but the links are not. The first is this:

https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4

The second is this:

https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4

Why are they not simple and direct URLs? And who is salsalabs.org?

You won’t find anything at that link, or by running a whois on it. But I do see there is a salsalabs.com, which has “SmartEngagement Technology” that “combines CRM and nonprofit engagement software with embedded best practices, machine learning, and world-class education and support.” Since Public Citizen is a nonprofit, I suppose it’s getting some “smart engagement” of some kind with these links. PrivacyBadger tells me Salsalabs.com has 14 potential trackers, including static.ads.twitter.com.

My point here is that we, as clickers on those links, have at best a suspicion about what’s going on: perhaps that the link is being used to tell Public Citizen that we’ve clicked on the link… and likely also to help target us with messages of some sort. But we really don’t know.

And, speaking of not knowing, Natasha and Aaron’s New York Times story begins with this:

The Norwegian Data Protection Authority said on Monday that it would fine Grindr, the world’s most popular gay dating app, 100 million Norwegian kroner, or about $11.7 million, for illegally disclosing private details about its users to advertising companies.

The agency said the app had transmitted users’ precise locations, user-tracking codes and the app’s name to at least five advertising companies, essentially tagging individuals as L.G.B.T.Q. without obtaining their explicit consent, in violation of European data protection law. Grindr shared users’ private details with, among other companies, MoPub, Twitter’s mobile advertising platform, which may in turn share data with more than 100 partners, according to the agency’s ruling.

Before this, I had never heard of MoPub. In fact, I had always assumed that Twitter’s privacy policy either limited or forbade the company from leaking personal information to advertisers or other entities. Here’s how its Private Information Policy Overview begins:

You may not publish or post other people’s private information without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.

Sharing someone’s private information online without their permission, sometimes called doxxing, is a breach of their privacy and of the Twitter Rules. Sharing private information can pose serious safety and security risks for those affected and can lead to physical, emotional, and financial hardship.

On the MoPub site, however, it says this:

MoPub, a Twitter company, provides monetization solutions for mobile app publishers and developers around the globe.

Our flexible network mediation solution, leading mobile programmatic exchange, and years of expertise in mobile app advertising mean publishers trust us to help them maximize their ad revenue and control their user experience.

The Norwegian DPA apparently finds a conflict between the former and the latter—or at least in the way the latter was used by Grindr (since they didn’t fine Twitter).

To be fair, Grindr and Twitter may not agree with the Norwegian DPA. Regardless of their opinion, however, by this point in history we should have no faith that any company will protect our privacy online. Violating personal privacy is just too easy to do, to rationalize, and to make money at.

To start truly facing this problem, we need to start with a simple fact: If your privacy is in the hands of others alone, you don’t have any. Getting promises from others not to stare at your naked self isn’t the same as clothing. Getting promises not to walk into your house or look in your windows is not the same as having locks and curtains.

In the absence of personal clothing and shelter online, or working ways to signal intentions about one’s privacy, the hands of others alone is all we’ve got. And it doesn’t work. Nor do privacy laws, especially when enforcement is still so rare and scattered.

Really, to potential violators like Grindr and Twitter/MoPub, enforcement actions like this one by the Norwegian DPA are at most a little discouraging. The effect on our experience of exposure is still nil. We are exposed everywhere, all the time, and we know it. At best we just hope nothing bad happens.

The only way to fix this problem is with the digital equivalent of clothing, locks, curtains, ways to signal what’s okay and what’s not—and to get firm agreements from others about how our privacy will be respected.

At Customer Commons, we’re starting with signaling, specifically with first party terms that you and I can proffer and sites and services can accept.

The first is called P2B1, aka #NoStalking. It says “Just give me ads not based on tracking me.” It’s a term any browser (or other tool) can proffer and any site or service can accept—and any privacy-respecting website or service should welcome.

Making this kind of agreement work is also being addressed by the IEEE’s P7012, a working group on machine-readable personal privacy terms.

Now we’re looking for sites and services willing to accept those terms. How about it, Twitter, New York Times, Grindr and Public Citizen? Or anybody.

DM us at @CustomerCommons and we’ll get going on it.

 


The business problems only customers can solve

Customer Commons was created because there are many business and market problems that can only be solved from the customers’ side, under the customer’s control, and at scale, with #customertech.

In the absence of solutions that customers control, both customers and businesses are forced to use business-side-only solutions that limit customer power to what can be done within each business’s silo, or to await regulatory help, usually crafted by captive regulators who can’t even imagine full customer agency.

Here are some examples of vast dysfunctions that customers face today (and which hurt business and markets as well), in the absence of personal agency and scale:

  • Needing to “consent” to terms that can run more than 10,000 words long, and are different for every website and service provider
  • Dealing with privacy policies that can also run more than 10,000 words long, which are different for every website and service provider, and that the site or service can change whenever they want, and in practice don’t even need to obey
  • Dealing with personal identity systems that are different for every website or service provider
  • Dealing with subscription systems that are different for every website and service provider requiring them
  • Dealing with customer service and tech support systems that are different for every website or service provider
  • Dealing with login and password requirements that are as different, and numerous, as there are websites and service providers
  • Dealing with crippled services and/or higher prices for customers who aren’t “members” of a “loyalty” program, which involves high cognitive and operational overhead for customer and seller alike—and (again) work differently for every website and service provider
  • Dealing with an “Internet of Things” that’s really just an Amazon of things, an Apple of things, and a Google of things.

And here are some examples of solutions customers can bring to business and markets:

  • Standardized terms that customers can proffer as first parties, and all the world’s sites and services can agree to, in ways where both parties have records of agreements
  • Privacy policies of customers’ own, which are easy for every website and service provider to see and respect 
  • Self-sovereign methods for customers to present only the identity credentials required to do business, relieving many websites and service providers of the need to maintain their own separate databases of personal identity data
  • Standard ways to initiate, change and terminate customers’ subscriptions—and to keep records of those subscriptions—greatly simplifying the way subscriptions are done, across all websites and service providers
  • Standard ways for customers to call for and engage customer service and tech support systems that work the same way across all of them
  • Standard ways for customers to relate, without logins and passwords, and to do that with every website and service provider
  • Standard ways to express loyalty that will work across every website, retailer and service provider
  • Standard ways for customers to “intentcast” an interest in buying, securely and safely, at scale, across whole categories of products and services
  • Standard ways for customers’ belongings to operate, safely and securely, in a true Internet of Things
  • Standardized dashboards on which customers can see their own commercially valuable data, control how it is used, and see who has shared it, how, and under what permissions, across all the entities the customer deals with

There are already many solutions in the works for most of the above. Our work at Customer Commons is to help all of those—and many more—come into the world.

 


Going #Faceless

Facial recognition by entities other than people and their pets has gotten out of control.

Thanks to ubiquitous surveillance systems, including the ones in our own phones, we can no longer assume we are anonymous in public places or private in private ones. This became especially clear a few weeks ago when Kashmir Hill (@kashhill) reported in the New York Times that a company called Clearview.ai “invented a tool that could end your ability to walk down the street anonymously, and provided it to hundreds of law enforcement agencies, ranging from local cops in Florida to the F.B.I. and the Department of Homeland Security.”

If your face has ever appeared anywhere online, it’s a sure bet that you are not faceless to any of those systems. Clearview, Kashmir says, has “a database of more than three billion images” from “Facebook, YouTube, Venmo and millions of other websites” and “goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”

Among law enforcement communities, only New Jersey’s has started to back off on using Clearview.

And Clearview is just one company. Laws will also take years to catch up with developments in facial recognition, or to get ahead of them, if they ever can. And let’s face it: government interests are highly conflicted here. Intelligence and law enforcement agencies’ need to know all they can is at extreme odds with our need, as human beings, to assume we enjoy at least some freedom from being known by God-knows-what, everywhere we go.

Personal privacy is the heart of civilized life, and beats strongest in democratic societies. It’s not up for “debate” between companies and governments, or political factions. Loss of privacy is a problem that affects each of us, and requires action by each of us as well.

A generation ago, when the Internet was still new to us, four guys (I was one of them) nailed a document called The Cluetrain Manifesto to a door on the Web. It said,

We are not seats or eyeballs or end users or consumers. We are human beings and our reach exceeds your grasp. Deal with it.

Since then their grasp has exceeded our reach. And now they’ve gone too far, grabbing even our faces, everywhere we go.

Enough.

Now it’s time for our reach to exceed their grasp.

Now it’s time, finally, to make them  deal with it.

We need to do that as individuals, and as a society.

Here’s a three-part plan for that.

First, use the image above, or one like it, as your personal avatar, including your Facebook, Twitter or Whatever profile picture. Here’s one that’s favicon size:

 

Second, sign the Get Out Of My Face (#GOOMF) petition, here. (With enough of us on it, this will work.)

Here at Customer Commons, we have some good ideas, but there are certainly others among the billions of us whose privacy is at stake.

Third, let’s discuss this, using the hashtag #faceless. Do that wherever you like.

Here’s a rule to guide both discussion and development:

No complaining. No blaming.

That stuff goes nowhere and wastes energy. Instead we need useful and constructive ideas toward what we can do—each of us, alone and together—to secure, protect and signal our privacy needs and intentions in the world, in ways others can recognize and respect.

We have those in the natural world. We don’t yet in the digital one. So let’s invent them.

 

 


Why we’re not endorsing Contract for the Web

Contract for the Web—not signing

The Contract for the Web is a new thing that wants people to endorse it.

While there is much to like in it, what we see under Principle 5 (of 9) is a deal-breaker:

Respect and protect people’s privacy and personal data to build online trust.
So people are in control of their lives online, empowered with clear and meaningful choices around their data and privacy:

  1. By giving people control over their privacy and data rights, with clear and meaningful choices to control processes involving their privacy and data, including:
    • Providing clear explanations of processes affecting users’ data and privacy and their purpose.
    • Providing control panels where users can manage their data and privacy options in a quick and easily accessible place for each user account.
    • Providing personal data portability, through machine-readable and reusable formats, and interoperable standards — affecting personal data provided by the user, either directly or collected through observing the users’ interaction with the service or device.

Note which party is “giving” and “providing” here. It’s not the individual.

By this principle, individuals should have no more control over their lives online than what website operators and governments “give” or “provide” them, with as many “control panels” as there are websites and “user accounts.” This is the hell we are in now, which metaphorically works like this:

It also leaves unaddressed two simple needs we have each had since the Web came into our lives late in the last millennium:

  1. Our own damn controls, that work globally, at scale, across all the websites of the world; and
  2. Our own damn terms and conditions that websites can agree to.

At Customer Commons we encourage #1 (as has ProjectVRM, since 2006), and are working on #2.

If you want to read the thinking behind this position, a good place to start is the Privacy Manifesto draft at ProjectVRM, which is open to steady improvement. (A slightly older but more readable copy is here at Medium.)

We also recommend Klint Finley‘s What’s a Digital Bill of Rights Without Enforcement? in Wired. He makes the essential point in the title. It’s one I also made in Without Enforcement, GDPR is a Fail, in July 2018.

A key point here is that companies and governments are not the only players. As we say in Customers as a Third Force, each of us—individually and collectively—can and should be players too.

We’ll reach out to Tim Berners-Lee and others involved in drafting this “contract” to encourage full respect for the independent agency of individuals.


Let’s make May 25th Privmas Day

25 May is when the GDPR—the General Data Protection Regulation—went into effect. Finally, our need for privacy online has legal backing strong enough to shake the foundations of surveillance capitalism, and maybe even drop it to the ground—with our help.

This calls for a celebration. In fact, many of them. Every year.

So let’s call 25 May Privmas Day. Hashtag: #Privmas.

And, to celebrate our inaugural Privmas let’s make a movement out of blocking third party cookies, since most of the spying on us starts there. Let’s call it #NoMore3rds.

Turning off third party cookies is easy. Here’s our guide, for six different browsers.

There is much more we can do. But let’s start with #NoMore3rds, and give us all something to celebrate.

 


Privacy is personal. Let’s start there.

The GDPR won’t give us privacy. Nor will ePrivacy or any other regulation. We also won’t get it from the businesses those regulations are aimed at.

Because privacy is personal. If it wasn’t, we wouldn’t have invented clothing and shelter, or social norms for signaling to each other what’s okay and what’s not okay.

On the Internet we have none of those. We’re still as naked as we were in Eden.

But let’s get some perspective here: we invented clothing and shelter long before we invented history, and most of us didn’t get online until long after Internet service providers and graphical browsers showed up in 1994.

In these early years, it has been easier and more lucrative for business to exploit our exposed selves than it has been for technology makers to sew (and sell) us the virtual equivalents of animal skins and woven fabrics.

True, we do have the primitive shields called ad blockers and tracking protectors. And, when shields are all you’ve got, they can get mighty popular. That’s why 1.7 billion people on Earth were already blocking ads online by early 2017.† This made ad blocking the largest boycott in human history. (Note: some ad blockers also block tracking, but the most popular ad blocker is in the business of selling passage for tracking to companies whose advertising is found “acceptable” on grounds other than tracking.)

In case you think this happened just because most ads are “intrusive” or poorly targeted, consider the simple fact that ad blocking has been around since 2004, yet didn’t hockey-stick until the advertising business turned into direct response marketing, hellbent on collecting personal data and targeting ads at eyeballs.††

This happened in the late ’00s, with the rise of social media platforms and programmatic “adtech.” Euphemized by its perpetrators as “interactive,” “interest-based,” “behavioral” and “personalized,” adtech was, simply put, tracking-based advertising. Or, as I explain at the last link, direct response marketing in the guise of advertising.

The first sign that people didn’t like tracking was Do Not Track, an idea hatched by Chris Soghoian, Sid Stamm, and Dan Kaminsky, and named after the FTC’s popular Do Not Call Registry. Since browsers get copies of Web pages by requesting them (no, we don’t really “visit” those pages—and this distinction is critical), the idea behind Do Not Track was to put the request not to be tracked in the header of a browser’s page requests. (The header is how a browser asks to see a Web page, and then guides the data exchanges that follow.)
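
The signal itself was tiny: a browser with the setting turned on added one line, DNT: 1, to its request headers. Honoring it on the receiving end took no more than a check like the minimal Node sketch below; the header is real, but what a site did with it was always voluntary.

```typescript
// Minimal sketch: reading the real Do Not Track header server-side.
// Honoring it was always voluntary, which is the point made above.

import * as http from "http";

http
  .createServer((req, res) => {
    const dnt = req.headers["dnt"]; // "1" means: please don't track me
    if (dnt === "1") {
      res.end("Page served without third-party trackers.");
    } else {
      res.end("Page served as usual.");
    }
  })
  .listen(8080);
```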

Do Not Track was first implemented in 2009 by Sid Stamm, then a privacy engineer at Mozilla, as an option in the company’s Firefox browser. After that, the other major browser makers implemented Do Not Track in different ways at different times, culminating in Mozilla’s decision to block third party cookies in Firefox, starting in February 2013.

Before we get to what happened next, bear in mind that Do Not Track was never anything more than a polite request to have one’s privacy respected. It imposed no requirements on site owners. In other words, it was a social signal asking site owners and their third party partners to respect the simple fact that browsers are personal spaces, and that publishers and advertisers’ rights end at a browser’s front door.

The “interactive” ad industry and its dependents in publishing responded to that brave move by stomping on Mozilla like Godzilla on Bambi:

In this 2014 post I reported on the specifics of how that went down:

Google and Facebook both said in early 2013 that they would simply ignore Do Not Track requests, which killed it right there. But death for Do Not Track was not severe enough for the Interactive Advertising Bureau (IAB), which waged asymmetric PR warfare on Mozilla (the only browser maker not run by an industrial giant with a stake in the advertising business), even running red-herring shit like this on its client publishers’ websites:

As if Mozilla was out to harm “your small business,” or that any small business actually gave a shit.

And it worked.

In early 2013, Mozilla caved to pressure from the IAB.

Two things followed.

First, as soon as it was clear that Do Not Track was a fail, ad blocking took off. You can see that in this Google Trends graph†††, published in Ad Blockers and the Next Chapter of the Internet (5 November 2015 in Harvard Business Review):

Next, searches for “how to block ads” rose right in step with searches for retargeting, which is the most obvious evidence that advertising is following you around:

You can see that correlation in this Google Trends graph in Don Marti’s Ad Blocking: Why Now, published by DCN (the online publishers’ trade association) on 9 July 2015:

Measures of how nearly all of us continue to hate tracking were posted by Dr. Johnny Ryan (@johnnyryan) in PageFair last September. In that post, he reports on a PageFair “survey of 300+ publishers, adtech, brands, and various others, on whether users will consent to tracking under the GDPR and the ePrivacy Regulation.” Bear in mind that the people surveyed were industry insiders: people you would expect to exaggerate on behalf of continued tracking.

Here’s one result:

Johnny adds, “Only a very small proportion (3%) believe that the average user will consent to ‘web-wide’ tracking for the purposes of advertising (tracking by any party, anywhere on the web).”

He goes on to add, “However, almost a third believe that users will consent if forced to do so by ‘tracking walls’, that deny access to a website unless a visitor agrees to be tracked. Tracking walls, however, are prohibited under Article 7 of the GDPR, the rules of which are already formalised and will apply in law from late May 2018.”

Which means that the general plan by the “interactive” advertising business is to put up those walls anyway, on the assumption that people will think they won’t get to a site’s content without consenting to tracking. We can read that in the subtext of IAB Europe’s Transparency and Consent Framework, a work in progress you can follow here on GitHub, and see unpacked in more detail at AdvertisingConsent.eu.

So, to sum all this up, what we have for privacy online so far are: 1) popular but woefully inadequate ad blocking and tracking protection add-ons in our browsers; 2) a massively interesting regulation called the GDPR…

… and 3) plans by privacy violators to obey the letter of that regulation while continuing to violate its spirit.

So how do we fix this on the personal side? Meaning, what might we have for clothing and shelter, now that regulators and failed regulatory captors are duking it out in media that continue to think all the solutions to our problems will come from technologies and social signals other than our own?

Glad you asked. The answers will come in our next three posts here. We expect those answers to arrive in the world and have real effects—for everyone except those hellbent on tracking us—before the 25 May GDPR deadline for compliance.


† From Beyond ad blocking—the biggest boycott in human history: “According to PageFair’s 2017 Adblock Report, at least 615 million devices now block ads. That’s larger than the human population of North America. According to GlobalWebIndex, 37% of all mobile users, worldwide, were blocking ads by January of last year, and another 42% would like to. With more than 4.6 billion mobile phone users in the world, that means 1.7 billion people are blocking ads already—a sum exceeding the population of the Western Hemisphere.”

†† It was plain old non-tracking-based advertising that not only sponsored publishing and other ad-supported media, but burned into people’s heads nearly every brand you can name. After a $trillion or more has been spent chasing eyeballs, not one brand known to the world has been made by tracking-based advertising. For lots more on all this, read everything you can by Bob Hoffman (@AdContrarian) and Don Marti (@dmarti).

††† Among the differences between the graph above and the current one—both generated by the same Google Trends search—are readings above zero in the latter for Do Not Track prior to 2007. While there are results in a search for “Do Not Track” in the 2004-2006 time frame, they don’t refer to the browser header approach later branded and popularized as Do Not Track.

Also, in case you’re reading this footnote, the family at the top is my father‘s. He’s the one on the left. The location was Niagara Falls and the year was 1916. Here’s the original. I flipped it horizontally so the caption would look best in the photo.

 


How customers help companies comply with the GDPR

That’s what we’re starting this Thursday (26 April) at GDPR Hack Day at MIT.

The GDPR‘s “sunrise day” — when the EU can start laying fines on companies for violations of it — is May 25th. We want to be ready for that: with a cookie of our own baking that will get us past the “gauntlet walls” of consent requirements that are already appearing on the world’s commercial websites—especially the ad-supported ones.

The reason is this:

Which you can also see in a search for GDPR.

Most of the results in that search are about what companies can do (or actually what companies can do for companies, since most results are for companies doing SEO to sell their GDPR prep services).

We propose a simpler approach: do what the user wants. That’s why the EU created the GDPR in the first place. Only in our case, we can start solving in code what regulation alone can’t do:

  1. Un-complicate things (for example, relieving sites of the need to put up a wall of permissions, some of which are sure to obtain grudging “consent” to the same awful data harvesting practices that caused the GDPR in the first place).
  2. Give people a good way to start signaling their intentions to websites—especially business-friendly ones
  3. Give advertisers a safe way to keep doing what they are doing, without unwelcome tracking
  4. Open countless new markets by giving individuals better ways of signaling what they want from business, starting with good manners (which went out the window when all the tracking and profiling started)

What we propose is a friendly way to turn off third party tracking at all the websites where a browser encounters requests for permission to track, starting with a cookie that will tell the site, in effect, first party tracking for site purposes is okay, but third party tracking is not.

If all works according to plan, that cookie will persist from site to site, getting the browser past many gauntlet walls. It will also give all those sites and their techies a clear signal of intention from the user’s side. (All this is subject to revision and improvement as we hack this thing out.)
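
To make the idea concrete, here is a rough sketch of the check a cooperating site might run. The cookie name and value below are placeholders we made up for illustration, not a format anyone has settled on.

```typescript
// Rough sketch of a site honoring the proposed signal. The cookie name
// ("cc-consent") and value ("1p-only") are made-up placeholders.

import * as http from "http";

function firstPartyOnly(cookieHeader: string | undefined): boolean {
  return (cookieHeader ?? "")
    .split(";")
    .map((c) => c.trim())
    .includes("cc-consent=1p-only");
}

http
  .createServer((req, res) => {
    if (firstPartyOnly(req.headers["cookie"])) {
      // Signal received: first-party tracking for site purposes is fine,
      // third-party trackers stay off the page, no gauntlet wall needed.
      res.end("Welcome. First-party only, as you asked.");
    } else {
      res.end("No signal; the site falls back to its consent wall.");
    }
  })
  .listen(8080);
```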

This photo of the whiteboard at our GDPR session at IIW on April 5th shows how wide-ranging and open our thinking was at the time:

Photos from the session start here. Click on your keyboard’s right (>) arrow to move through them. Session notes are on the IIW wiki here.

Here is the whiteboard in outline form:

Possible Delivery Paths

Carrots

  • Verifiable credential to signal intent
  • Ads.txt replaced by a more secure system + faster page serving
  • For publishers:
    • Ad blocking decreases
    • Subscriptions increase
    • Sponsorship becomes more attractive
  • For advertisers:
    • Branding—the real kind, where pubs are sponsored directly—can come back
    • Clearly stated permissions from “data subjects” for “data processors” and “data controllers” (those are GDPR labels)
    • Will permit direct ads (programmatic placement is okay; just not based on surveillance)
    • Puts direct intentcasting from data subject (users) on the table, replacing adtech’s spying and guesswork with actual customer-driven leads and perhaps eventually a shopping cart customers take from site to site
    • Liability reduction or elimination
    • Risk management
    • SSI (self-sovereign identity) / VC (verified credential) approach —> makes demonstration of compliance automateable (for publishers and ad creative)
    • Can produce a consent receipt that works for both sides
    • Complying with a visitor’s cookie is a lot easier than hiring expensive lawyers and consultants to write gauntlet walls that violate the spirit of the GDPR while obtaining grudging compliance from users with the letter of it

Sticks

  • The GDPR, with ePrivacy right behind it, and big fines that are sure to come down
  • A privacy manager or privacy dashboard on the user’s side, with real scale across multiple sites, is inevitable. This will help bring one into the world, and sites should be ready for it.
  • Since ample research (the University of Pennsylvania’s Annenberg School, PageFair) has made clear that most users do not want to be tracked, browser makers will be siding eventually, inevitably, with those users by amplifying tracking protections. The work we’re doing here will help guide that work—for all browser makers and add-on developers

Participating organizations (some onboard, some partially through individuals)

Sources

Additions and corrections to all the above are welcome.

So is space somewhere in Cambridge or Boston to continue discussions and hacking on Friday, April 27th.


Hey publishers, let’s get past mistaking tracking protection for ad blocking

Here’s what the Washington Post tells me when I go to one of its pieces (such as this one):

Here’s the problem: the Post says I’m blocking ads when I’m just protecting myself from tracking.

In fact I welcome ads. By that I mean real ads. Not messages that look like real ads, but are direct marketing messages aimed by tracking. Let’s call them fake ads.

Here’s one way to spot them:

When you see one of those in the corner of an ad, it means the ad is “interest-based,” which is a euphemism for based on tracking you.

If you click on that icon, you get an explanation of what the ad is doing there (though no specifics about the tracking itself, or where trackers sniffed your digital exhaust across the Web), plus a way to “choose” what kind of ads you see or don’t. Here’s how the AdChoices site puts it:

Here are just some of the many ways this is fulla shit:

  1. It’s not your AdChoices Icon. It’s the Digital Advertising Alliance‘s. They are not you. They are a cabal of “leading national advertising and marketing trade groups.” And they don’t work for you. Nor does their icon.
  2. The most “control” you take when you click on that icon is over a subset of advertising systems that might be different with every AdChoices icon you click. It might be Google‘s, Experian‘s, DataXu/Evidon‘s, Amazon‘s or any one of thousands of other ad placement systems, each with their own opt-out rosters, none of which you can track, audit, or make accountable to you in the least.
  3. What’s behind the AdChoices icon is what you find behind every fig leaf. And it has the hots for your data.
  4. Next to the wheat of real advertising (which we’ve had since forever, has never tracked you, and carries straightforward brand messages for populations rather than individuals), “relevant” advertising is pure chaff. I explain the difference in Separating Advertising’s Wheat and Chaff.
  5. The benefits of relevant advertising are mostly monetary ones going to intermediaries rather than to advertisers, publishers or human beings. As Bob Hoffman puts it to publishers, “adtech middlemen are scraping 60-70% of your media dollars (WFA and The Guardian).”
  6. After perhaps a $trillion has been spent on “relevant” advertising, not one brand name (meaning one known by the world) has been created by it, nor has a known brand even been sustained. On the contrary, many brands have hurt themselves by annoying the shit out of people, creeping them out with unexpected or unwanted “relevance,” or both. So it’s no surprise that Procter & Gamble cut $100 million out of its digital advertising budget, and all they missed was the trouble it caused.

Again, I have no trouble with real advertising, meaning the wheat kind, which isn’t based on tracking me. In fact I like it, because it tends to add value to the publications I read, and I know it sponsors those publications, rather than using those publications just for chasing readers’ eyeballs to wherever they might be found, meaning the publisher-sponsoring value of a “relevant” ad based on tracking is less than zero. I also know real ads aren’t vectors for fraud and malware.

That’s why I run tracking protection, in this case with Privacy Badger, which tells me the Washington Post has 49 potential trackers trained to sniff my digital ass. I don’t want them there. I am also sure the Post’s subscribers and editorial staff don’t want them there either.

So how do we fix that?

You can track movement toward the answer in these reports:

  1. Helping publishers and advertisers move past the ad blockade 
  2. How #adblocking matures from #noads to #safeads
  3. How NoStalking is a good deal for publishers
  4. What if businesses agreed to customers’ terms and conditions?
  5. How true advertising can save journalism from drowning in a sea of content
  6. How to plug the publishing revenue drain

Right now Customer Commons is working on NoStalking, which simply says this:

Obeying that request has three benefits:

  1. It puts both publishers and advertisers in compliance with the General Data Protection Regulation (GDPR), a European privacy law that forbids personal tracking without express personal permission, has global reach (it applies to European citizens using U.S. services) and large fangs that will come out in May of next year. I explain more about that one here.
  2. Ads not based on tracking—real ads—are far more valuable to publishers than the fake “relevant” kind. First, they actually sponsor the publication. Second, they carry no cognitive overhead for either the publisher or the reader. Both know exactly what an ad is for and what it’s doing there. Third, they can be sold and published the old-fashioned ways that publishers abandoned when they jobbed out income production to revenue-sucking intermediaries. It ain’t that hard to go back.
  3. Real ads are more valuable to advertisers because they carry clear economic and creative signals. Don Marti explains how at DCN.

So here’s a request to the Washington Post and to every other digital publisher out there: talk to us. Let’s fix this thing together. Sooner the better. Thanks.

