That’s what will happen when sites and services click “accept” to your terms, rather than the reverse. When that happens, you are what lawyers call the first party. Sites and services that agree to your terms are second parties.
As a first party, you get scale across all the sites and services that agree to your terms, just as today each of those sites and services gets scale across thousands or millions of second-class netizens called “users”:
This is the exact reverse of what we’ve had in mass markets ever since industry won the industrial revolution. But we can get that scale now, because we have the Internet, which was designed to support it. (Details here and here.)
And now is the time, for two reasons:
We can make our leadership pay off for sites and services; and
Agreeing with us can make sites and services compliant with tough new privacy laws.
This does a bunch of good things for advertising supported sites:
It relieves them of the need to track us like animals everywhere we go, and harvest personal data we’d rather not give anybody without our permission.
Because of #1, it gives them compliance with the EU’s General Data Protection Regulation (aka GDPR), which will start fining companies “up to 10,000,000 EUR or up to 2% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater (Article 83, Paragraph 4),” or “a fine up to 20,000,000 EUR or up to 4% of the annual worldwide turnover of the preceding financial year in case of an enterprise, whichever is greater (Article 83, Paragraph 5 & 6).”
It provides simple and straightforward “brand safety” directly from human beings, rather than relying on an industry granfalloon to do the same.
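To make the stakes in #2 concrete, here is a minimal sketch of the Article 83 fine caps quoted above. The function name and interface are ours, purely for illustration:

```python
# Illustrative sketch of the GDPR Article 83 fine caps quoted above:
# the greater of a fixed amount or a percentage of annual worldwide turnover.
# Article 83(4): up to 10M EUR or 2%; Article 83(5) & (6): up to 20M EUR or 4%.
def gdpr_fine_cap(annual_turnover_eur: float, severe: bool = True) -> float:
    """Return the upper bound of a GDPR administrative fine in euros."""
    fixed, pct = (20_000_000, 0.04) if severe else (10_000_000, 0.02)
    return max(fixed, pct * annual_turnover_eur)

# For a company with 1 billion EUR turnover, the severe-tier cap is
# 4% of turnover, since that exceeds the 20M EUR floor.
print(gdpr_fine_cap(1_000_000_000))  # 40000000.0
```

For small companies the fixed floor dominates; for large ones the percentage does, which is why the regulation bites hardest on the biggest data harvesters.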
#nostalking and #intentcasting are the first terms to be published at Customer Commons. Both have the potential to generate fresh and healthy economic activity, one in publishing and the other in retailing.
Every new first party term has the potential to reform whole markets for the good of everyone, simply by creating better ways for demand to signal, engage and improve supply. In doing that, first party terms will also make good on the promise of the Internet in the first place. After two decades of failing to do that, it’s about time.
We’ll be working on exactly these terms at VRM Day next Monday, and at IIW for the following three days, all at the Computer History Museum in Silicon Valley. Sign up at those links. Help us change the world.
Try to guess how many times, in the course of your life in the digital world, you have “agreed” to terms like these:
Hundreds? Thousands? (Feels like) millions?
Look at the number of login/password combinations remembered by your browser. That’ll be a fraction of the true total.
Now think about what might happen if we could turn these things around. How about if sites and services could agree to our terms and conditions, and our privacy policies?
We’d have real agreements, and real relationships, freely established, between parties of equal power who both have an interest in each other’s success.
We’d have genuine (or at least better) trust, and better signaling of intentions between both parties. We’d have better exchanges of information and better control over what gets done with that information. And the information would be better too, because we wouldn’t have to lie or hide to protect our identities or our data.
Think about it. None of those work unless individuals are in charge of themselves and their relationships in the digital world. And they can’t as long as only one side is in charge. What we have instead are opposites: limited control and coerced consent, maximum disclosure for unconstrained use, unjustified parties, misdirected identity, silo’d operators and technologies, inhuman integration, and inconsistent experiences across contexts of all kinds. (I’ll add links for all of those later when I have time.)
Can we fix this problem, eleven years after Kim came down from the mountain (well, Canada) with those laws?
No, we can’t. Not without leverage.
The sad fact is that we’ve been at a disadvantage since geeks based the Web on an architecture called “client-server.” I’ve been told that term was chosen because “slave-master” didn’t sound so good. Personally, I prefer calf-cow:
As long as we’re the calves coming to the cows for the milk of “content” (plus unwanted cookies), we’re not equals.
But once we become independent, and can assert enough power to piss off the cows that most want to take advantage of us, the story changes.
Good news: we are independent now, and controlling our own lives online is pissing off the right cows.
We’re gaining that independence through ad and tracking blockers. There are also a lot of us now. And a lot more jumping on the bandwagon.
According to PageFair and Adobe, the number of people running ad blockers alone passed 200 million last May, with annual growth rates of 41% worldwide, 48% in the U.S. and 82% in the U.K.
Of course the “interactive” ad industry (the one that likes to track you) considers this a problem only they can solve. And, naturally, the disconnect between their urge to track and spam us, and our decision to stop all of it, is being called a “war.”
But it doesn’t have to be.
Out in the offline world, we were never at war with advertising. Sure, there’s too much of it, and a lot of it we don’t like. But we also know we wouldn’t have sports broadcasts (or sports talk radio) without it. We know how much advertising contributes to the value of the magazines and newspapers we read. (Which is worth more: a thick or a thin Vogue, Sports Illustrated, Bride’s or New York Times?) And to some degree we actually value what old fashioned Mad Men type advertising brings to the market’s table.
On the other hand, we have always been at war with the interactive form of advertising we call junk mail. Look up unwanted+mail, click on “images,” and you’ll get something like this:
What’s happened online is that the advertising business has turned into the “interactive” junk message business. Only now you can’t tell the difference between an ad that’s there for everybody and one that’s aimed by crosshairs at your eyeballs.
Today’s ad and tracking blockers are primitive prophylactics: ways to protect our eyeballs from advertising and tracking. But what if we turned these into instruments of agreement? We could agree to allow the kinds of ads that pay the publisher and aren’t aimed at us by tracking.
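To make “instruments of agreement” concrete, here is a minimal sketch of a blocker rule that admits first-party and non-tracking requests while refusing known trackers. The domain names and the short list below are illustrative assumptions; a real blocker would rely on a maintained list such as EasyPrivacy:

```python
from urllib.parse import urlparse

# Illustrative list of tracking domains; a real blocker would use a
# maintained blocklist (e.g. EasyPrivacy), not a hand-written set.
TRACKER_DOMAINS = {"tracker.example", "ads-metrics.example"}

def allow_request(page_url: str, request_url: str) -> bool:
    """Allow ads that pay the publisher (first-party and untracked
    third-party requests); block requests to known tracking domains."""
    req_host = urlparse(request_url).hostname or ""
    # Block the tracker domain itself and any of its subdomains.
    if any(req_host == d or req_host.endswith("." + d)
           for d in TRACKER_DOMAINS):
        return False
    return True

# A publisher's own ad is allowed; a tracking pixel is not.
print(allow_request("https://publisher.example/story",
                    "https://publisher.example/ad.png"))      # True
print(allow_request("https://publisher.example/story",
                    "https://tracker.example/pixel.gif"))     # False
```

The point of the sketch is the policy shape, not the list: the same mechanism that blocks everything today could instead enforce terms both sides have agreed to.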
Here at Customer Commons we’ve been working on those kinds of terms for the last several years. Helping us have been law school students and teachers, geeks and ordinary folks. The last time we published a straw-man version of those terms, they looked like this:
What those say (in the green circles) is “You (the second party) alone can use data you get from me, for as long as you want, just for your site or app, and will obey the Do Not Track request from my browser.”
This can be read easily by lawyers, ordinary folks and machines on both sides, just the way the graphic at the top of this post, borrowed from Creative Commons (our model for this), describes.
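Here is a sketch of what “readable by machines” might look like. The field names and matching logic below are our illustrative assumptions, not a published Customer Commons schema:

```python
# A hypothetical machine-readable encoding of a first-party term.
# Field names are illustrative assumptions, not a published schema.
nostalking_term = {
    "term": "#NoStalking",
    "conditions": {
        "data_use": "second-party-only",  # data stays with the site/app
        "duration": "unlimited",
        "honor_do_not_track": True,       # obey the browser's DNT request
    },
}

def accepts(term: dict, site_policy: dict) -> bool:
    """Return True if a site's declared policy satisfies every condition
    of the individual's first-party term."""
    return all(site_policy.get(key) == value
               for key, value in term["conditions"].items())

# A site declaring a matching policy has "agreed" to the term.
site_policy = {
    "data_use": "second-party-only",
    "duration": "unlimited",
    "honor_do_not_track": True,
}
print(accepts(nostalking_term, site_policy))  # True
```

The same structure a machine checks here is what a lawyer reads as contract conditions and an ordinary person reads as the plain-language summary in the green circles.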
Many people from those groups (including Kim Cameron himself) will be at IIW, the Internet Identity Workshop, at the Computer History Museum in Silicon Valley, on the last week of next month, April 26-28. It’s an unconference. No panels, no keynotes, no plenaries. It’s all breakouts, on topics chosen by participants.
The day before, at the same location, will be VRM Day. The main topic there will be terms, and how we plan to get working versions of them in the next three days at IIW.
This is a huge opportunity. I am sure we have enough code, and enough work done on standards and the rest of it, to put up exactly the terms we can offer and publishers online can accept, and to start ending the war (that really isn’t one) between publishers and their readers.
Once we have those terms in place, others can follow, opening up to much better signaling between supply and demand, because both sides are equals.
So this is an open invitation to everybody already working in this space, especially browser makers (and not just Mozilla) and the makers of ad and tracking blockers. IIW is a perfect place to show what we’ve got, to work together, and to move things forward.
As a new digital age unfolds brands have a make-or-break strategic opportunity to place their customer relationships on a powerful new footing.
The opportunity: to work with customers to create new ‘Me2B’ services that empower them with data and help them use this data to meet previously unmet needs, such as making better decisions and organising and managing their lives better.
Brands that enable these new relationships and services are sustaining and deepening customer trust, growing revenue streams and profits, differentiating themselves in crowded markets, and positioning themselves strategically at the forefront of the digital economy.
Personal Information Economy 2015: Growth Through Trust
The rise of Me2B commerce
Event Venue: Kings Place, 90 York Way, London, N1 9AG Event Date: Tuesday, December 8th 2015 from 09:00 to 19:00 (GMT) More information here.
Join us for a joint PDEC and Customer Commons salon dinner April 6th, Monday night, 6-9pm in Mountain View. This is the night before IIW, at the end of VRM Day, where we will have an opportunity to talk about Banking, Credit and Personal Data with LaVonne Reimer. Sign up at Eventbrite for the Salon Dinner.
About LaVonne: She is a lawyer-turned-entrepreneur with over 15 years’ experience deploying technologies in markets with data privacy and regulatory sensitivities. Most recently, she engaged an expert user community to streamline ethical data-sharing practices in the commercial credit ecosystem.
The PDEC / Customer Commons Salon dinner is 6-9pm at Fu Lam Mum in Mountain View.
NOTE: Those who want to arrive earlier than 6pm to socialize are welcome to do so; we have a no-host bar at Fu Lam Mum. For those coming at 6pm, dinner starts about 6:30pm, and for those coming just for the discussion, that starts about 7:30pm. Discussion-only guests are welcome earlier for socializing too.
Thanks to everyone who attended the Customer Commons Salon last night. It was a nice night to socialize and talk. Doc Searls gave us a quick report on Omie, the Customer Commons project that will first be built for Android and, later we hope, for other platforms. Omie is meant to make the device yours, instead of leaving you captive to all those taking your data and experience.
We had a great night at MINGs in Palo Alto, and want to thank them for the delicious food and accommodations!
We look forward to our next salon, the Monday night before IIW, as always!
Customer Commons is supporting the event, and board member Mary Hodder is hosting the Bay Area location. There are also NYC and London locations. Please join us if you are interested:
This has been an unprecedented year for documenting our loss of privacy. Never before have we so needed to stand up and team up to do something about it. In honour of Privacy Day, the Legal Hackers are leading the charge, inspiring a two-day international Data Privacy Legal Hackathon. This is no ordinary event: instead of talking about creating privacy tools in theory, it is a call to action for tech & legal innovators who want to make a difference!
We are happy to announce a Data Privacy Legal Hackathon and invite the Kantara Community to get involved and participate. We are not only hosting a Pre-Hackathon Project to create a Legal Map of consent laws across jurisdictions; the CISWG will also be posting a project for the Consent Receipt Scenario that is posted on the ISWG wiki.
The intention is to hack Open Notice with a Common Legal Map to create consent receipts that enable ‘customisers’ to control personal information. If you would like to get involved in the hackathon, show your support, or help build the consent receipt infrastructure, please get involved right away: you can get in touch with Mark (dot) Lizar (at)gmail (dot) com, Hodder (at) gmail (dot) com, or join the group pages linked below.
Across three locations on February 8th & 9th, 2014, get your Eventbrite Tickets Here:
This two-day event aims to mix the tech and legal scenes with people and companies that want to champion personal data privacy, connecting entrepreneurs, developers, product makers, legal scholars, lawyers, and investors.
Each location will host a two-day “judged” hacking competition with a prize awarding finale, followed by an after-party to celebrate the event.
NOTE: The venue is now at Stanford University, in conjunction with the United Nations Association Film Festival, and will be followed by a panel discussion on the “Future of Online Privacy.” Cullen will be there as well.
Our data supporting the paper is here: Addendum Q&A and shortly we’ll upload a .xls of the data for those who want to do a deep dive into the results.
We all know that many people hide or submit incorrect data, click away from sites or refuse to install an app on a phone. We’ve all mostly done it. But how many? How much is this happening?
We’re at IIW today and, of course, the age-old dilemma is playing out in sessions, where one guy in the room says, “People will click through anything; they don’t care about privacy,” and the next guy says, “People are angry and frustrated and they don’t like what’s happening.” But what’s real? What’s right?
We conducted this survey to get a baseline about what people do now as they engage in strategies to create privacy for themselves, to try to control their personal data.
The amazing thing is: 92% hide, lie, or refuse to install or click, at least some of the time. We surveyed 1704 people and had an astonishing 95% completion rate for this survey. We also had 35% of these people writing comments in the “comment more” boxes below the multiple-choice answers. Also astonishingly high.
People expressed anger, cynicism and frustration. And they said overwhelmingly that the sites and services that ask for data DON’T NEED it, unless they have to get something shipped from a seller. But people don’t believe the sites. There is distrust. The services have failed to convince the people they want as users that the data collection is necessary, and those people are mad.
We know the numbers are high, and that it’s likely due to many not having a way to give feedback on this topic. So when we offered the survey, people did vent.
But we think it also indicates the need for qualitative and quantitative research on what is true now for people online. We want more nuanced information about what people believe, and about how we might fix this problem. Many sites look only at user logs to figure out what is happening on a site or with an app, and therefore miss this problem and the user feelings behind it. We want to see this studied much more seriously, so that people stop making conflicting statements at conferences, so that developers stop claiming users don’t care, and so that business models are developed that think differently than the current default, where sites and services just take personal data. We want to get beyond the dispute over whether people care, to real solutions that involve customers and individuals in ways that respect them and their desires when they interact with companies.
A large percentage of individuals employ artful dodges to avoid giving out requested personal information online when they believe at least some of that information is not required. These dodges include hiding personal details, intentionally submitting incorrect data, clicking away from sites or refusing to install phone applications. This suggests most people do not want to reveal more than they have to when all they want is to download apps, watch videos, shop or participate in social networking.
Keywords: privacy, personal data, control, invasion, convergence
Customer Commons’ purpose in conducting this research is to understand more fully the ways in which people manage their online identities and personal information. This survey, the first of a planned series of research efforts, explores self-reported behavior around disclosure of personal information to sites and services requesting that information online. We believe the results of this survey offer a useful starting point for a deeper conversation about the behaviors and concerns of individuals seeking to protect their privacy. Subsequent research will explore how people feel and behave toward online tracking.
This research is also intended to inform the development of software tools that give individuals ways to monitor and control the flow and use of personal data.
For this research project, Customer Commons in late 2012 surveyed a randomized group of 1704 individuals within the United States (1689 finished the survey, or 95%). Respondents were geographically distributed, aged 18 and up (see the appendix for specifics), and obtained through SurveyMonkey.com. The margin of error was 2.5%.
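The reported margin of error is consistent with the standard worst-case formula for a proportion at 95% confidence. This short sketch (ours, for illustration) shows the calculation for the completed sample:

```python
import math

# Worst-case margin of error for a survey proportion, using the normal
# approximation: MOE = z * sqrt(p * (1 - p) / n), with z = 1.96 for
# 95% confidence and p = 0.5 (the value that maximizes the error).
def margin_of_error(n: int, z: float = 1.96, p: float = 0.5) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# For the 1689 completed responses, the worst-case margin of error
# is roughly 2.4%, in line with the 2.5% reported in the paper.
print(round(margin_of_error(1689) * 100, 1))  # 2.4
```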
Respondents gave checkbox answers to questions and in some cases added remarks in a text box. (Survey questions and answers are in an addendum to this paper.)
Protecting personal data
This survey focused on the methods people use to restrict disclosure of requested personal information. Those methods include withholding, obscuring or falsifying the requested information.
Only 8.45% of respondents reported that they always accurately disclose personal information that is requested of them. The remaining 91.55% reported that they are less than fully disclosing. If they decide the site doesn’t need personal information such as names, birthdates, phone numbers, or zip codes, they leave blank answers, submit intentionally incorrect information, click away from the site, or, in the case of mobile applications, decline to install.
Most people withhold at least some personal data. Specifically,
75.7% of respondents avoid giving their mobile numbers
74.8% avoid “social” login shortcuts such as those provided by Facebook or Twitter
73.4% avoid giving sites or services access to a friend or contact list
58.3% don’t provide a primary email address
49.3% don’t provide a real identity
The concept of trust was raised in 22% of the written responses explaining why people hide their information. Some examples include:
“I cannot trust a random website”
“I do not want spam and do not want to expose others to spam. I also don’t know how that information could be used or if the people running the site are trustworthy.”
“If I know why info is needed then I might provide, otherwise no way”
“I felt the need to cover my I.D. a little bit — like age and gender. And I still withhold my social security #.”
“If I feel they don’t need it to provide a service to me they don’t get it even if I have to enter in fake info”
“Worries on identity theft and general privacy.”
“i would never give out my friends or and familys (sic) info ever”
Many respondents said sites and services request more data than required. Others suggested that providing requested information would result in an increased risk to their security. More results:
When the 71% of respondents who reported withholding information were asked why, they said they didn’t believe the sites needed the information. Specifically,
68% reported they either didn’t know the site well when they withheld their data or didn’t trust the site.
45% of those who felt they knew the site or service well still withheld information.
Respondents lied about various line items as a strategy to protect their privacy. For example, 34.2% intentionally provided an incorrect phone number, and 13.8% provided incorrect employment information. Here are some reasons they gave:
“I didn’t want them to have all my information, or feel it was necessary.”
“I have obscured various information so that I would not have further contact with a vendor who won’t leave me alone”
“Faking it is the best to avoid unwanted contact”
“Sometimes you just want to use a service without them knowing every thing about you.”
“I don’t like websites to have very much information on me. I regularly give out spam email addresses, bad birthday dates, and bad location information.”
“Registering for many mundane website often requires some pretty detailed personal info. I generally fudge this. None of their business”
“Because information is so easily found and transferred on the internet I do provide false info quite often to protect my identity.”
Even those who had never submitted incorrect information made statements such as:
“Have never made up info – just ignored requests :-)”
“i just don’t use that website”
“I have an email address that is purly (sic) for junk mail. I use this email address for websites that request my email address and then I go into that email and delete all email monthly.”
“I have never given incorrect information, but I have thought about it.”
“I don’t lie, but I omit as I feel appropriate.”
Going with the flow
Correcting already obscured or falsified information appears to be too much of a chore. Specifically,
Over 50% have rarely or never corrected data they submitted incorrectly
30% correct their data “sometimes.” Of that 30%,
55% said a purchase required correct information
56% had a growing feeling of comfort with the site or service
46% cited the ability to realize new benefits from the site with corrected information
30% said they noticed others’ incorrect data at Facebook or other social sites, or in phone applications. Of that 30%,
80% of this group assumed that the data was falsified as a way to protect privacy
40% believed the incorrect data was there to mislead marketers
12% believed secretive associates were trying to mislead them
13% believed services always needed correct personal information
75% believed the services needed it only sometimes
12% said it was never needed.
Respondents also had opinions on whether other users of these services need or expect correct personal data about each other: 27% said always, 23% said sometimes, and 48% said never.
The results of this survey support the hypothesis that people limit, refuse to give or obfuscate personal information in an attempt to create a measure of privacy online.
On July 30, 2010, in the first article in its “What They Know” series, The Wall Street Journal reported, “One of the fastest-growing businesses on the Internet … is the business of spying on Internet users. The Journal conducted a comprehensive study that assesses and analyzes the broad array of cookies and other surveillance technology that companies are deploying on Internet users. It reveals that the tracking of consumers has grown both far more pervasive and far more intrusive than is realized by all but a handful of people in the vanguard of the industry.”[i]
Adds Doc Searls, in The Intention Economy, “Tracking and ‘personalizing’—the current frontier of online advertising—probe the limits of tolerance. While harvesting mountains of data about individuals and signaling nothing obvious about their methods, tracking and personalizing together ditch one of the few noble virtues to which advertising at its best aspires: respect for the prospect’s privacy and integrity, which has long included a default assumption of anonymity.”[ii]
This survey showed one result of this system. Respondents expressed a general lack of trust in their relationships with online businesses. Many feelings ran strong. Here are some of the comments:
“Scary world out there, and I am a bit angry about the fact that all these website ‘track me’ as if that is OK, and then they sell MY data, obviously making money in the process. How is that OK or even legal? Don’t I control MY information? Apparently not…”
“So if I think it might be ‘harmful’ to give out info, I don’t do it.”
“I want cookies outlawed 🙁”
“My ex-husband was abusive and has stalked me. I don’t need to let the greedy sellers of my personal information draw him a map to my front door.”
“While I doubt I have any real protection of privacy, I have a desire to try to send a message that I want my right to protection of privacy. I regret how much we as a society have lost to the powers of marketing.”
“I don’t trust the security procedures of most companies. Security costs money, which cuts into profits, thus most companies have limited incentive to protect PII from cyber criminals.”
“The web is far less secure than commonly known.”
“Just as I have disconnected my land line because of a flood of unwanted calls, I refuse to give online/ access information for the same reason.”
These survey responses show people resort to withholding data or submitting false data to avoid feeling exposed online. When deciding whether to share personal information, the majority of respondents doubt that sites or services need to collect more than a minimum of obviously necessary personal data.
When people withhold personal data, it is to create a sense of privacy and control of their personal lives.
People are afraid or distrustful of sites, services and phone apps that request their personal data. They withhold or falsify information because they do not believe the sites need their data, and because they do not want to disclose information that might lead to spamming or other intrusions. Moreover, the techniques that people employ to preserve their sense of privacy online are largely improvised, informed by fear, and based on their subjective evaluation of entities that solicit personal information.
For the sake of privacy, people contribute to and tolerate the presence of incorrect personal data online, and attempt to correct it only when they see the clear upsides of accuracy. And, despite the failure of businesses and other organizations to convince users of the need to provide personal details beyond an email address, most users remain comfortable disclosing additional personal data only with those they know and trust.
Research Funding Grant
This research project was funded with a grant from CommerceNet, a not-for-profit research institute working to fulfill the potential of the Internet since 1993.
Customer Commons is a not-for-profit working to restore the balance of power, respect and trust between individuals and the organizations that serve them, especially in the online world. We stand with the individual and therefore do not take contributions from commercial entities.