Terms

Choosing Your Terms

AI prompt (with Microsoft Image Creator): “A person chooses ‘NoStalking’ from a collection of privacy-providing terms on the Customer Commons website”

Customer Commons was designed to be for personal privacy terms what Creative Commons is for personal copyright licenses. So far we have one privacy term here, called NoStalking. It’s an agreement a person chooses when they want another party not to track them away from that party’s site or service, while still allowing ads to be displayed. Since it’s a contract, think of it as a Do Not Track agreement rather than as just a preference signal (which is all Do Not Track ever was—and why it failed).

The IEEE’s P7012 Working Group (with four Customer Commons board members on it) has been working for the past few years on a standard for making terms such as NoStalking readable by machines, and not just by ordinary folk and lawyers.

The questions in front of the working group right now are:

  1. How the individual chooses a term, or set of them.
  2. How both the individual (the first party) and the site or service (the second party) might keep a record of all the terms for which they have agreements signed by their machines, so that compliance can be monitored and disputes resolved with auditable data.
  3. How the standard can apply to both simple scenarios such as NoStalking and more complex ones that, for example, might involve negotiation and movement toward a purchase at the end of what marketers call a customer journey, or the completion of that journey in a state of relationship. Also how to end such a relationship, and to record that as well.

At this stage of the Internet’s history, our primary ways of interacting with sites and services are through browsers and apps on our computers and mobile devices. Since both are built on the client-server (aka slave-master or calf-cow) model, neither browsers nor apps provide ways to address the questions above. They are all built to make you agree to others’ terms, and to leave recording those agreements entirely the responsibility of those other parties.

So we need an independent instrument that can work within or alongside browsers and apps. On the Creative Commons model, we’re calling this instrument a chooser. However, unlike the Creative Commons chooser, this one will not sit on a website. It will be an instrument of the person’s own. How it will work matters less at this stage than outlining or wire-framing what it will do.

Here are some basic rules around which we are basing our approach to completing the standard:

  1. The individual is a self-sovereign and an independent actor in the ecosystem.
  2. Organisations are present in this ecosystem as voluntary providers of products and services.
  3. The individual provides no more data than is required for service.
  4. All personal data is deleted at the termination of the agreement, unless expressly overridden by national regulations.
  5. Any purposes not overtly mentioned as allowed are not allowed.
  6. Service provision will always require an identifier; this method assumes the individual can bring their own, potentially supported by a software agent and related services.
  7. Agreements are signed before any data exchange.
  8. Precise data required for each purpose is out of band for the agreement design and selection.
  9. Agreements are invoked at precisely the most relevant time: when an individual (the first party) is ready to engage any site or service (the second party) that is digital itself or has a digital route to a completed engagement. This point is important because it is precisely the same time at which the second party normally invokes its own terms, and can update those terms to comply with the first party’s requirements. This is the window of opportunity in which agents representing both parties can come to a set of acceptable terms. Note that there can be plenty of terms that favor the individual’s privacy requirements and are also good for the other side. NoStalking is a good example, because it says (in plain English) “Just give me ads not based on tracking me.” (In a way, Google’s new Privacy Sandbox complies with this.)
  10. To be clear: the chooser is what handles that back-and-forth negotiation to a solution acceptable to both parties, before handing off to agreement signing.
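To make rules 7 and 9 a bit more concrete, here is one hedged sketch of what a machine-readable term and a signed agreement record might look like. P7012 has not settled on a format; every field name below is illustrative, not part of any standard.

```python
import hashlib
import json
import time

# Hypothetical machine-readable encoding of the NoStalking term.
# Field names are invented for illustration only.
NO_STALKING = {
    "term_id": "CC-NoStalking-1.0",
    "proffered_by": "first-party",
    "allows": ["ads-not-based-on-tracking"],
    "prohibits": ["tracking-away-from-site", "third-party-data-sharing"],
}

def sign_agreement(term: dict, first_party: str, second_party: str) -> dict:
    """Make a record of acceptance that both parties can keep and compare."""
    record = {
        "term": term,
        "first_party": first_party,
        "second_party": second_party,
        "agreed_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["digest"] = hashlib.sha256(payload).hexdigest()
    return record
```

The idea is that each party stores its own copy of the record; matching digests give both sides the auditable trail that question 2 above asks for.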

More to follow.

 


Just in case you feel safe with Twitter

twitter bird with crosshairs

Just got a press release by email from David Rosen (@firstpersonpol) of the Public Citizen press office. The headline says “Historic Grindr Fine Shows Need for FTC Enforcement Action.” The same release is also a post in the news section of the Public Citizen website. This is it:

WASHINGTON, D.C. – The Norwegian Data Protection Agency today fined Grindr $11.7 million following a Jan. 2020 report that the dating app systematically violates users’ privacy. Public Citizen asked the Federal Trade Commission (FTC) and state attorneys general to investigate Grindr and other popular dating apps, but the agency has yet to take action. Burcu Kilic, digital rights program director for Public Citizen, released the following statement:

“Fining Grindr for systematic privacy violations is a historic decision under Europe’s GDPR (General Data Protection Regulation), and a strong signal to the AdTech ecosystem that business-as-usual is over. The question now is when the FTC will take similar action and bring U.S. regulatory enforcement in line with those in the rest of the world.

“Every day, millions of Americans share their most intimate personal details on apps like Grindr, upload personal photos, and reveal their sexual and religious identities. But these apps and online services spy on people, collect vast amounts of personal data and share it with third parties without people’s knowledge. We need to regulate them now, before it’s too late.”

The first link goes to Grindr is fined $11.7 million under European privacy law, by Natasha Singer (@NatashaNYT) and Aaron Krolik. (This @AaronKrolik? If so, hi. If not, sorry. This is a blog. I can edit it.) The second link goes to a Public Citizen post titled Popular Dating, Health Apps Violate Privacy.

In the emailed press release, the text is the same, but the links are not. The first is this:

https://default.salsalabs.org/T72ca980d-0c9b-45da-88fb-d8c1cf8716ac/25218e76-a235-4500-bc2b-d0f337c722d4

The second is this:

https://default.salsalabs.org/Tc66c3800-58c1-4083-bdd1-8e730c1c4221/25218e76-a235-4500-bc2b-d0f337c722d4

Why are they not simple and direct URLs? And who is salsalabs.org?

You won’t find anything at that link, or by running a whois on it. But I do see there is a salsalabs.com, which has “SmartEngagement Technology” that “combines CRM and nonprofit engagement software with embedded best practices, machine learning, and world-class education and support.” Since Public Citizen is a nonprofit, I suppose it’s getting some “smart engagement” of some kind with these links. Privacy Badger tells me salsalabs.com has 14 potential trackers, including static.ads.twitter.com.

My point here is that we, as clickers on those links, have at best a suspicion about what’s going on: perhaps that the link is being used to tell Public Citizen that we’ve clicked on the link… and likely also to help target us with messages of some sort. But we really don’t know.

And, speaking of not knowing, Natasha and Aaron’s New York Times story begins with this:

The Norwegian Data Protection Authority said on Monday that it would fine Grindr, the world’s most popular gay dating app, 100 million Norwegian kroner, or about $11.7 million, for illegally disclosing private details about its users to advertising companies.

The agency said the app had transmitted users’ precise locations, user-tracking codes and the app’s name to at least five advertising companies, essentially tagging individuals as L.G.B.T.Q. without obtaining their explicit consent, in violation of European data protection law. Grindr shared users’ private details with, among other companies, MoPub, Twitter’s mobile advertising platform, which may in turn share data with more than 100 partners, according to the agency’s ruling.

Before this, I had never heard of MoPub. In fact, I had always assumed that Twitter’s privacy policy either limited or prohibited the company from leaking personal information to advertisers or other entities. Here’s how its Private Information Policy Overview begins:

You may not publish or post other people’s private information without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.

Sharing someone’s private information online without their permission, sometimes called doxxing, is a breach of their privacy and of the Twitter Rules. Sharing private information can pose serious safety and security risks for those affected and can lead to physical, emotional, and financial hardship.

On the MoPub site, however, it says this:

MoPub, a Twitter company, provides monetization solutions for mobile app publishers and developers around the globe.

Our flexible network mediation solution, leading mobile programmatic exchange, and years of expertise in mobile app advertising mean publishers trust us to help them maximize their ad revenue and control their user experience.

The Norwegian DPA apparently finds a conflict between the former and the latter—or at least in the way the latter was used by Grindr (since the agency fined Grindr, not Twitter).

To be fair, Grindr and Twitter may not agree with the Norwegian DPA. Regardless of their opinion, however, by this point in history we should have no faith that any company will protect our privacy online. Violating personal privacy is just too easy to do, to rationalize, and to make money at.

To start truly facing this problem, we need to start with a simple fact: If your privacy is in the hands of others alone, you don’t have any. Getting promises from others not to stare at your naked self isn’t the same as clothing. Getting promises not to walk into your house or look in your windows is not the same as having locks and curtains.

In the absence of personal clothing and shelter online, or working ways to signal intentions about one’s privacy, the hands of others alone is all we’ve got. And it doesn’t work. Nor do privacy laws, especially when enforcement is still so rare and scattered.

Really, to potential violators like Grindr and Twitter/MoPub, enforcement actions like this one by the Norwegian DPA are at most a little discouraging. The effect on our experience of exposure is still nil. We are exposed everywhere, all the time, and we know it. At best we just hope nothing bad happens.

The only way to fix this problem is with the digital equivalent of clothing, locks, curtains, ways to signal what’s okay and what’s not—and to get firm agreements from others about how our privacy will be respected.

At Customer Commons, we’re starting with signaling, specifically with first party terms that you and I can proffer and sites and services can accept.

The first is called P2B1, aka #NoStalking. It says “Just give me ads not based on tracking me.” It’s a term any browser (or other tool) can proffer and any site or service can accept—and any privacy-respecting website or service should welcome.
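To show what “proffer and accept” might look like in code, here is a minimal sketch using invented HTTP header names—nothing like “CC-Terms” is standardized; both header names and the term identifier are placeholders:

```python
# A browser (or extension) proffers the term; a cooperating site echoes
# acceptance back. Header names here are hypothetical, for illustration.

def proffer_headers(term_id: str = "P2B1-NoStalking") -> dict:
    """Headers a browser or other tool might attach to every request."""
    return {"CC-Terms": term_id}

def site_accepted(response_headers: dict,
                  term_id: str = "P2B1-NoStalking") -> bool:
    """True if the site signaled acceptance of the proffered term."""
    return response_headers.get("CC-Terms-Accepted") == term_id
```

Either way the mechanics end up working, the shape is the same: the first party leads with a term, and the second party’s acceptance is recorded rather than assumed.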

Making this kind of agreement work is also being addressed by the IEEE’s P7012 working group on machine-readable personal privacy terms.

Now we’re looking for sites and services willing to accept those terms. How about it, Twitter, New York Times, Grindr and Public Citizen? Or anybody.

DM us at @CustomerCommons and we’ll get going on it.

 


The business problems only customers can solve

Customer Commons was created because there are many business and market problems that can only be solved from the customers’ side, under the customer’s control, and at scale, with #customertech.

In the absence of solutions that customers control, both customers and businesses are forced to use business-side-only solutions that limit customer power to what can be done within each business’s silo, or to await regulatory help, usually crafted by captive regulators who can’t even imagine full customer agency.

Here are some examples of vast dysfunctions that customers face today (and which hurt business and markets as well), in the absence of personal agency and scale:

  • Needing to “consent” to terms that can run more than 10,000 words long, and are different for every website and service provider
  • Dealing with privacy policies that can also run more than 10,000 words long, which are different for every website and service provider, and that the site or service can change whenever they want, and in practice don’t even need to obey
  • Dealing with personal identity systems that are different for every website or service provider
  • Dealing with subscription systems that are different for every website and service provider requiring them
  • Dealing with customer service and tech support systems that are different for every website or service provider
  • Dealing with login and password requirements that are as different, and numerous, as there are websites and service providers
  • Dealing with crippled services and/or higher prices for customers who aren’t “members” of a “loyalty” program, which involves high cognitive and operational overhead for customer and seller alike—and (again) work differently for every website and service provider
  • Dealing with an “Internet of Things” that’s really just an Amazon of Things, an Apple of Things, and a Google of Things.

And here are some examples of solutions customers can bring to business and markets:

  • Standardized terms that customers can proffer as first parties, and all the world’s sites and services can agree to, in ways where both parties have records of agreements
  • Privacy policies of customers’ own, which are easy for every website and service provider to see and respect 
  • Self-sovereign methods for customers to present only the identity credentials required to do business, relieving many websites and service providers of the need to maintain their own separate databases of personal identity data
  • Standard ways to initiate, change and terminate customers’ subscriptions—and to keep records of those subscriptions—greatly simplifying the way subscriptions are done, across all websites and service providers
  • Standard ways for customers to call for and engage customer service and tech support systems that work the same way across all of them
  • Standard ways for customers to relate, without logins and passwords, and to do that with every website and service provider
  • Standard ways to express loyalty that will work across every website, retailer and service provider
  • Standard ways for customers to “intentcast” an interest in buying, securely and safely, at scale, across whole categories of products and services
  • Standard ways for customers’ belongings to operate, safely and securely, in a true Internet of Things
  • Standardized dashboards on which customers can see their own commercially valuable data, control how it is used, and see who has shared it, how, and under what permissions, across all the entities the customer deals with

There are already many solutions in the works for most of the above. Our work at Customer Commons is to help all of those—and many more—come into the world.

 


Why we’re not endorsing Contract for the Web

Contract for the Web—not signing

The Contract for the Web is a new thing that wants people to endorse it.

While there is much to like in it, what we see under Principle 5 (of 9) is a deal-breaker:

Respect and protect people’s privacy and personal data to build online trust.
So people are in control of their lives online, empowered with clear and meaningful choices around their data and privacy:

  1. By giving people control over their privacy and data rights, with clear and meaningful choices to control processes involving their privacy and data, including:
     • Providing clear explanations of processes affecting users’ data and privacy and their purpose.
     • Providing control panels where users can manage their data and privacy options in a quick and easily accessible place for each user account.
     • Providing personal data portability, through machine-readable and reusable formats, and interoperable standards — affecting personal data provided by the user, either directly or collected through observing the users’ interaction with the service or device.

Note which party is “giving” and “providing” here. It’s not the individual.

By this principle, individuals should have no more control over their lives online than what website operators and governments “give” or “provide” them, with as many “control panels” as there are websites and “user accounts.” This is the hell we are in now, which metaphorically works like this:

It also leaves unaddressed two simple needs we have each had since the Web came into our lives late in the last millennium:

  1. Our own damn controls, that work globally, at scale, across all the websites of the world; and
  2. Our own damn terms and conditions that websites can agree to.

At Customer Commons we encourage #1 (as has ProjectVRM, since 2006), and are working on #2.

If you want to read the thinking behind this position, a good place to start is the Privacy Manifesto draft at ProjectVRM, which is open to steady improvement. (A slightly older but more readable copy is here at Medium.)

We also recommend Klint Finley’s What’s a Digital Bill of Rights Without Enforcement? in Wired. He makes the essential point in the title. It’s one I also made in Without Enforcement, GDPR is a Fail, in July 2018.

A key point here is that companies and governments are not the only players. As we say in Customers as a Third Force, each of us—individually and collectively—can and should be players too.

We’ll reach out to Tim Berners-Lee and others involved in drafting this “contract” to encourage full respect for the independent agency of individuals.


Change of Address (√)

Way back in 2006 or so, in the first Project VRM meetings, our canonical use case was ‘change of address’; that is to say, we wanted individuals to have the ability to update their address in one place and have that flow to multiple suppliers.

That seemed easy enough, so we thought at the time; all that’s needed is:

– a data store controlled by the individual

– a user interface

– an API that allowed organisations to connect

We did not note it at the time, but there probably should have been a fourth requirement, for ‘standardised data sharing terms’, so that organisations would not get tied in legal knots signing many different contracts to cover themselves, as they would otherwise need to do.

So, 12 or so years later, that proved not to be quite so easy… I think our most flawed assumption was that organisations would see this as a good thing and be willing to get involved.

No matter; the reason for my post is to flag that individual-driven change of address can now be done at Internet scale, though we still need to crack the adoption issue. There are also a number of downstream use cases, e.g. where the address change must be verified.

Here’s a visual of how change of address works in the JLINC environment; the same principles could apply in other environments. The critical dependency is that both parties (individual and organisation) have their own data-sets that they voluntarily connect to each other.

Beyond the fact that this plumbing now demonstrably works at scale, I think the most interesting thing to emerge from the JLINC deployment is the Standard Information Sharing Agreement. The requirement here is an agreement that works for both parties; here is the initial one built for JLINC. The expectation is that these will evolve over time and likely become life-aspect- or sector-specific (e.g. health); but critically they will not mimic the current model, in which each organisation invents its own. The secondary function that I believe makes this scale is the ability to record every single data exchange that takes place, should either or both parties need to refer to it downstream.
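That record-every-exchange function can be sketched as a hash-chained log, where each entry commits to the one before it, so neither party can quietly rewrite history. This is a simplified illustration of the idea, not JLINC’s actual protocol:

```python
import hashlib
import json

class ExchangeLog:
    """A tamper-evident record of data exchanges between two parties."""

    def __init__(self):
        self.entries = []

    def record(self, sender: str, receiver: str, fields: dict) -> str:
        """Append one exchange, chained to the previous entry's hash."""
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {"sender": sender, "receiver": receiver,
                "fields": fields, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        body["hash"] = digest
        self.entries.append(body)
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any altered entry breaks it."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: e[k] for k in ("sender", "receiver", "fields", "prev")}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

If both parties keep a copy of such a log, a dispute downstream becomes a comparison of records rather than a contest of memories.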

So, we can now tick the box around ‘change of address’, at least as working plumbing. The better news still is that the same plumbing and approach works for any type of data, or any data flow (so organisations sending data to Alice too). At least it should not take another 12 years to make that next use case work, which incidentally was ‘Intentcasting’; i.e. an individual being able to articulate what they are in the market for without losing control over that data.


How customers help companies comply with the GDPR

That’s what we’re starting this Thursday (26 April) at GDPR Hack Day at MIT.

The GDPR‘s “sunrise day” — when the EU can start laying fines on companies for violations of it — is May 25th. We want to be ready for that: with a cookie of our own baking that will get us past the “gauntlet walls” of consent requirements that are already appearing on the world’s commercial websites—especially the ad-supported ones.

The reason is this:

Which you can also see in a search for GDPR.

Most of the results in that search are about what companies can do (or actually what companies can do for companies, since most results are for companies doing SEO to sell their GDPR prep services).

We propose a simpler approach: do what the user wants. That’s why the EU created the GDPR in the first place. Only in our case, we can start solving in code what regulation alone can’t do:

  1. Un-complicate things (for example, relieving sites of the need to put up a wall of permissions, some of which are sure to obtain grudging “consent” to the same awful data harvesting practices that caused the GDPR in the first place).
  2. Give people a good way to start signaling their intentions to websites—especially business-friendly ones
  3. Give advertisers a safe way to keep doing what they are doing, without unwelcome tracking
  4. Open countless new markets by giving individuals better ways of signaling what they want from business, starting with good manners (which went out the window when all the tracking and profiling started)

What we propose is a friendly way to turn off third-party tracking at all the websites where a browser encounters requests for permission to track, starting with a cookie that will tell the site, in effect, that first-party tracking for site purposes is okay, but third-party tracking is not.

If all works according to plan, that cookie will persist from site to site, getting the browser past many gauntlet walls. It will also give all those sites and their techies a clear signal of intention from the user’s side. (All this is subject to revision and improvement as we hack this thing out.)
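As a rough sketch of both halves of that signal—the cookie a browser would carry and the check a cooperating site would run—here is one possibility, with placeholder cookie name and value, since none of this is yet a spec:

```python
from http.cookies import SimpleCookie

# Placeholder names; the real cookie would be whatever we hack out.
CONSENT_NAME = "cc-tracking-consent"
CONSENT_VALUE = "first-party-only"

def browser_cookie_header() -> str:
    """What the user's browser (or an extension) would send with each request."""
    return f"{CONSENT_NAME}={CONSENT_VALUE}"

def third_party_tracking_allowed(cookie_header: str) -> bool:
    """Server-side check: honor the signal when it is present."""
    jar = SimpleCookie()
    jar.load(cookie_header)
    morsel = jar.get(CONSENT_NAME)
    return morsel is None or morsel.value != CONSENT_VALUE
```

The point is not this particular code; it is that one persistent signal from the user’s side can replace a thousand different consent walls on the sites’ side.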

This photo of the whiteboard at our GDPR session at IIW on April 5th shows how wide ranging and open our thinking was at the time:

Photos from the session start here. Click on your keyboard’s right (>) arrow to move through them. Session notes are on the IIW wiki here.

Here is the whiteboard in outline form:

Possible Delivery Paths

Carrots

  • Verifiable credential to signal intent
  • Ads.txt replaced by a more secure system + faster page serving
  • For publishers:
    • Ad blocking decreases
    • Subscriptions increase
    • Sponsorship becomes more attractive
  • For advertisers:
    • Branding—the real kind, where pubs are sponsored directly—can come back
    • Clearly stated permissions from “data subjects” for “data processors” and “data controllers” (those are GDPR labels)
    • Will permit direct ads (programmatic placement is okay; just not based on surveillance)
    • Puts direct intentcasting from data subject (users) on the table, replacing adtech’s spying and guesswork with actual customer-driven leads and perhaps eventually a shopping cart customers take from site to site
    • Liability reduction or elimination
    • Risk management
    • SSI (self-sovereign identity) / VC (verified credential) approach —> makes demonstration of compliance automateable (for publishers and ad creative)
    • Can produce a consent receipt that works for both sides
    • Complying with a visitor’s cookie is a lot easier than hiring expensive lawyers and consultants to write gauntlet walls that violate the spirit of the GDPR while obtaining grudging compliance from users with the letter of it

Sticks

  • The GDPR, with ePrivacy right behind it, and big fines that are sure to come down
  • A privacy manager or privacy dashboard on the user’s side, with real scale across multiple sites, is inevitable. This will help bring one into the world, and sites should be ready for it.
  • Since ample research (from the University of Pennsylvania’s Annenberg School and PageFair) has made clear that most users do not want to be tracked, browser makers will eventually, inevitably, side with those users by amplifying tracking protections. The work we’re doing here will help guide that work—for all browser makers and add-on developers

Participating organizations (some onboard, some partially through individuals)

Sources

Additions and corrections to all the above are welcome.

So is space somewhere in Cambridge or Boston to continue discussions and hackings on Friday, April 27th.


The Only Way Customers Come First

— is by proffering terms of their own.

That’s what will happen when sites and services click “accept” to your terms, rather than the reverse.

The role you play here is what lawyers call the first party. Sites and services that agree to your terms are second parties.

As a first party, you get scale across all the sites and services that agree to your terms:

This is the exact reverse of what we’ve had in mass markets ever since industry won the industrial revolution. But we can get that scale now, because we have the Internet, which was designed to support it. (Details here and here.)

And now is the time, for two reasons:

  1. We can make our leadership pay off for sites and services; and
  2. Agreeing with us can make sites and services compliant with tough new privacy laws.

Our first example is P2B1(beta), which might best be called #NoProfiling:

With #NoProfiling, we proffer a term that says—

This does a bunch of good things for advertising supported sites:

  1. It relieves them of the need to track us like animals everywhere we go, and harvest personal data we’d rather not give anybody without our permission.
  2. Because of #1, it gives them compliance with the EU’s General Data Protection Regulation (aka GDPR), which allows fines of “up to 10,000,000 EUR or up to 2% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater (Article 83, Paragraph 4),” or “a fine up to 20,000,000 EUR or up to 4% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater (Article 83, Paragraph 5 & 6).”
  3. It provides simple and straightforward “brand safety” directly from human beings, rather than relying on an industry granfalloon to do the same.
  4. It lets good publishers sell advertising to brands that want to sponsor journalism rather than chase eyeballs to the cheapest, shittiest sites.
  5. It provides a valuable economic signal from demand to supply in the open marketplace.

We’ll have other terms. As with #NoProfiling, those will also align incentives.

 

 

