Electronic Health Records and Patient-Centric Design

CIO’s story, “Why Electronic Health Records aren’t more usable,” offers an interesting perspective on the current (improved?) state of affairs in medical care records. From the article:

The American Medical Association in 2014 issued an eight-point framework for improving EHR usability. According to this framework, EHRs should:

  • enhance physicians’ ability to provide high-quality patient care
  • support team-based care
  • promote care coordination
  • offer product modularity and configurability
  • reduce cognitive workload
  • promote data liquidity
  • facilitate digital and mobile patient engagement
  • expedite user input into product design and post-implementation feedback.

Nevertheless, it does not appear that EHR vendors are placing more emphasis on UCD. The Office of the National Coordinator for Health IT requires developers to perform usability tests as part of a certification process that makes their EHRs eligible for the government’s EHR incentive program. Yet a recent study found that, of 41 EHR vendors that released public reports, fewer than half used an industry-standard UCD process. Only nine developers tested their products with at least 15 participants who had clinical backgrounds, such as physicians.

Note that this situation is not due to a lack of user-centric efforts to make medical records more useful. Indeed there are several efforts underway, including HealthAuth, Kantara’s Healthcare ID Assurance Working Group, Patient Privacy Rights, the HEART working group’s efforts with OAuth and UMA, and more. As the article notes, there are regulatory complications as well as crazy-complicated workflow requirements imposed by the software designers and vendors. We need a shift in focus here.

Mozilla has a cute video on the open, privacy-protecting web

Check it out at The Web We Want: An Open Letter.

And the note Mozilla posted with the video:

Our right to a free and open Internet has been under threat lately. The NSA — btw, that stands for the National Security Agency, which has the fancy responsibility of analyzing and acting upon security data — has gotten into the habit of spying on Americans with no justification (including 12 spies who were using NSA tools to spy on their significant others). No, I’m not kidding.

The FCC — btw, that stands for the Federal Communications Commission, which is supposed to regulate and protect our communications channels — just made it easier for big companies to control the speed at which you are allowed to access particular websites. For example, your Internet company (e.g., Comcast or Verizon) could turn the Internet into a tiered pay system. So instead of being like a public utility, where everyone gets the same amount of water or electricity, Verizon could give Netflix faster access for a fee, while the smaller start-up that wants to compete but couldn’t afford it would get slower access.

The Internet has become one of the most important resources in our lives. It’s a shared resource that all of us take part in. Government spying on it and corporate interference in it are probably not things we want for the future. So Mozilla had some children voice concern for their own future. Because it’s important. What kind of web do you want?

On Bringing Manners to Markets

Privacy in the physical world has been well understood and fairly non-controversial for thousands of years. We get it, for example, with clothing, doors, curtains and window shades. These each provide privacy by design, because they control visibility and access to our private places and spaces.

The virtual world, however, is very young, dating roughly back to 1995, when the first graphical browsers and ISPs came along. Thus, on the scale of civilization’s evolution, the Net is not only brand new, but in its infancy (the stage in life when it’s okay to go naked and pee and crap all over the place.) On the Net today, manners are almost completely absent. We see this, in a strange and mundane way, in corporate and government obsessions with gathering Big Data from consumers and citizens, mostly without their knowledge or conscious permission.

Companies today are moving budget to the Chief Marketing Officer (a title that didn’t exist a decade ago), so she or he can hire IBM, or SAP or some other BigCo to paint million-point portraits of people, with a palette of pixels harvested by surveillance, all so they can throw better marketing guesswork at them.

This isn’t new in marketing. It’s just an old practice (data-fed junk mail) that has fattened on Big Data and Big Fantasy. As a result we’re all drowning in guesswork, most of which is off the mark, no matter how well-understood we might be by the Big Data mills of the world.

Normally we would look to government to help us comprehend, guide and control the infrastructures on which we utterly depend (e.g., electricity, gas, water, sewage treatment, roads and bridges). But no one entity, including government, can begin to comprehend, much less monitor and regulate, the wild and woolly thing the Net has become (even at its lower layers), especially when so much of what we do with it happens inside giant black or near-black boxes (Google, Facebook, Twitter, et al.). And, thanks to Edward Snowden, we now know that the U.S. government itself — via the NSA and who knows what else — is doing the same thing, and also muscling private sector companies to cooperate with it.

But that’s a problem endemic to what Gore Vidal called the “national security state”, and plain old market forces won’t have much influence on it. Democratic and political ones will, but they’re not on the table here.

At Customer Commons, our table is the marketplace, and our role in it as customers. Whatever else we do, it can’t hurt to recognize and expose practices that are just plain rude.

We are not fish and advertising is not food

This is how the Internet looks to the online advertising business today:

[Image: 2manyfish]

This is how they approach it:

[Image: fishfeeding]

And this is the result:

[Image: fishfeeding_mess]

Advertising is a huge source of the “data pollution” Fred Wilson talked about at LeWeb a few weeks ago. (See here, starting at about 23 minutes in.)

What’s wrong with this view, and this approach, is the architectural assumption that:

  1. We are consumers and nothing more. Fish in a bowl.
  2. The Net — and the Web especially — is a container.
  3. Advertisers have a right to target us in that container. And to track us so we can be targeted.
  4. Negative externalities, such as data pollution, don’t matter.
  5. This can all be rationalized as an economic necessity.

Yet here is what remains true, regardless of the prevailing assumptions of the marketing world:

  1. We are not fish. Rather, as Cluetrain put it (in 1999!), “we are not seats or eyeballs or end users or consumers. we are human beings and our reach exceeds your grasp. deal with it.”
  2. The Net was designed as a wide open space where all the intelligence that matters is at its ends, and each of us sits (stands, walks, drives) at one.
  3. Even if advertisers have a legal right to target us, their manners are terrible and doomed for correction.
  4. Negative externalities matter. A lot. As Fred said in his talk, we eventually dealt with the pollution caused by industry, and we’ll deal with it in the virtual world as well.
  5. The larger economic necessity is for a well-functioning marketplace. We’ll get that online once free customers prove more valuable than captive ones.

The key is to replicate online the experience of operating as a free and independent customer in the physical world.

For example, when you go into a store, your default state is anonymity. Unless you are already known by name to the people at the store, you are nameless by default. This is a civic grace. There is no need to know everybody by name, and to do so might actually slow things down and make the world strange and creepy. (Ask anybody who has lived in a surveillance state, such as East Germany before it fell, what it is like to be followed, or to know you might be followed, all the time.) We haven’t yet invented ways to be anonymous online, or to control one’s anonymity. But that’s a challenge, isn’t it? Meaning it is also a market opportunity.

We’ve lived in a fishbowl long enough. Time to get human. I guarantee there’s a lot more money coming from human beings than from fish whose only utterances are clicks.

Data Privacy Legal Hack-A-thon

Customer Commons is supporting this event, and board member Mary Hodder is hosting the Bay Area location. Additionally, there are NYC and London locations. Please join us if you are interested:

Data Privacy Legal Hackathon 2014

This is an unprecedented year for documenting our loss of privacy. Never before have we so needed to stand up and team up to do something about it. In honour of Privacy Day, the Legal Hackers are leading the charge, inspiring a two-day international Data Privacy Legal Hackathon. This is no ordinary event. Instead of talking about creating privacy tools in theory, the Data Privacy Legal Hackathon is about action! A call to action for tech & legal innovators who want to make a difference!

We are happy to announce the Data Privacy Legal Hackathon and invite the Kantara Community to get involved and participate. We are not only hosting a Pre-Hackathon Project to create a Legal Map of consent laws across jurisdictions; the CISWG will also post a project for the Consent Receipt Scenario described on the ISWG wiki.

The intention is to hack Open Notice with a Common Legal Map to create consent receipts that enable ‘customisers’ to control personal information. If you would like to get involved in the hackathon, show your support, or help build the consent receipt infrastructure, please get involved right away — you can get in touch with Mark (dot) Lizar (at) gmail (dot) com, Hodder (at) gmail (dot) com, or join the group pages linked below.
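For readers wondering what a consent receipt might actually contain, here is a minimal sketch in Python. The field names and structure are our own illustrative assumptions, not the working group’s specification:

```python
import json
import uuid
from datetime import datetime, timezone

def make_consent_receipt(data_subject, data_controller, jurisdiction,
                         purposes, data_categories):
    """Build a minimal consent-receipt record the individual can keep.

    Field names here are illustrative, not the working group's spec.
    """
    return {
        "receipt_id": str(uuid.uuid4()),       # unique reference for this consent event
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "data_subject": data_subject,          # who is consenting
        "data_controller": data_controller,    # who is collecting the data
        "jurisdiction": jurisdiction,          # ties into the Common Legal Map idea
        "purposes": purposes,                  # why the data is being collected
        "data_categories": data_categories,    # what kinds of data are covered
    }

receipt = make_consent_receipt(
    data_subject="alice@example.com",
    data_controller="Example Retailer Ltd.",
    jurisdiction="UK",
    purposes=["order fulfilment"],
    data_categories=["name", "shipping address"],
)
print(json.dumps(receipt, indent=2))
```

The point is that the individual, and not just the site, keeps a portable record of what was consented to, with whom, and under which jurisdiction’s rules.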

Across three locations on February 8th & 9th, 2014, get your Eventbrite Tickets Here:

* New York City * London, UK * San Francisco *

http://legalhackers.org/privacyhack2014/

This two-day event aims to mix the tech and legal scenes with people and companies that want to champion personal data privacy, connecting entrepreneurs, developers, product makers, legal scholars, lawyers, and investors.

Each location will host a two-day “judged” hacking competition with a prize awarding finale, followed by an after-party to celebrate the event.

The Main Themes of the Hackathon Are:

  • Crossing the Pond Hack
  • Do Not Track Hack
  • Surveillance & Anti-Surveillance
  • Transparency Hacks
  • Privacy Policy Hack
  • Revenge Porn Hack

Prizes will be awarded:

  • 1st Prize: $1,000
  • 2nd Prize: $500
  • 3rd Prize: $250

There are pre-hackathon projects and activities. Join the Hackerleague to participate in these efforts and list your hack:

Sponsorship Is Available & Needed

Any organization or company seeking to show active support for data privacy and privacy technologies is invited to get involved.

  • Sponsor: prizes, food and event costs by becoming a Platinum, Gold or Silver Sponsor
  • Participate: at the event by leading or joining a hack project
  • Mentor: projects or topics that arise for teams, and share your expertise.

 

Contact NYC sponsorship: Phil Weiss email or @philwdjjd

Contact Bay Area sponsorship: Mary Hodder – Hodder (at) gmail (dot) com – Phone: 510 701 1975

Contact London sponsorship: Mark Lizar – Mark (dot) Lizar (at)gmail (dot) com – Phone: +44 02081237426 – @smarthart

Omie Update (version 0.2)

We’re overdue for an update on the Omie Project, so here goes.

To re-cap:

  • We at Customer Commons believe there is room/need for a device that sits firmly on the side of the individual when it comes to their role as a customer or potential customer.
  • That can and will mean many things and iterations over time, but for now we’re focusing on getting a simple prototype up and running using existing, freely available components that don’t lock us in to any specific avenues downstream.
  • Our role is to demonstrate the art of the possible, catalyse the development project, and act to define what it means to ‘sit firmly on the side of the customer’.

For now, we’ve been working away behind the scenes, and now have a working prototype (Omie 0.2). But before getting into that, we should cover off the main questions that have come up around Omie since we first kicked off the project.

What defines an Omie?

At this stage we don’t propose a tight definition, as the project could evolve in many directions; our high-level definition is that an Omie is ‘any physical device that Customer Commons licenses to use the name, and which therefore conforms to the “customer side” requirements of Customer Commons’.

Version 1.0 will be a ‘Customer Commons Omie’ branded white label Android tablet with specific modifications to the OS, an onboard Personal Cloud with related sync options, and a series of VRM/ Customer-related apps that leverage that Personal Cloud.

All components will, wherever possible, be open source, built on open specs and standards — with new ones created where needed. Our intention is not that Customer Commons become a hardware manufacturer and retailer; we see our role as catalysing a market in devices that enable people in their role of ‘customer’, generating the win-wins we believe this will produce. Anyone can then build an Omie, to the open specs and trust mechanisms.

What kind of apps can this first version run?

We see version 1 having 8 to 10 in-built apps that tackle different aspects of being a customer. The defining feature of all of these apps is that they all use the same Personal Cloud to underpin their data requirements rather than create their own internal database.
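As a rough illustration of that shared-cloud design (the names and APIs here are hypothetical, not the actual Omie interfaces), two apps reading and writing one owner-controlled store might look like this:

```python
class PersonalCloud:
    """A single owner-controlled data store shared by every app on the device.

    Hypothetical sketch: the real Omie uses an onboard Personal Cloud with
    sync options, whose API is not documented here.
    """
    def __init__(self):
        self._data = {}

    def put(self, namespace, key, value):
        self._data.setdefault(namespace, {})[key] = value

    def get(self, namespace, key=None):
        ns = self._data.get(namespace, {})
        return ns if key is None else ns.get(key)

# Two apps share the same cloud instead of keeping private databases.
cloud = PersonalCloud()

# A "My Suppliers" style app writes supplier records...
cloud.put("suppliers", "acme", {"name": "Acme Corp", "category": "hardware"})

# ...and a "My Transactions" style app links back to the same record.
cloud.put("transactions", "tx-001",
          {"supplier": "acme", "amount": 49.95, "currency": "USD"})

tx = cloud.get("transactions", "tx-001")
supplier = cloud.get("suppliers", tx["supplier"])
print(supplier["name"])  # the transactions app reuses the suppliers app's data
```

Because both apps work against one store, the owner can inspect, export, or substitute the whole thing, rather than chasing data locked inside each app.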

Beyond those initial apps, we have a long list of apps whose primary characteristic is that they could only run on a device over which the owner had full and transparent control.

We also envisage an Omie owner being able to load up any other technically compatible app to the device, subject to health warnings being presented around any areas that could breach the customer-side nature of the device.

How will this interact with my personal cloud?

As noted above, we will have one non-branded Personal Cloud in place to enable the prototyping work (on device and ‘in the cloud’), but we wish to work with existing or new Personal Cloud providers wishing to engage with the project to enable an Omie owner to sync their data to their branded Personal Clouds.

Where are we now with development?

We now have a version 0.2 prototype; some pics and details are below. We intend, at some point, to run a Kickstarter or similar campaign to raise the funds required to bring a version 1.0 to market. As the project largely uses off-the-shelf components, we see the amount required being around $300k. Meantime, the core team will keep nudging things forward.

How can I get involved?

We are aiming for a more public development path from version 0.3. We’re hoping to get the Omie web site up and running in the next few weeks, and will post details there.

Alternatively, if you want to speed things along, please donate to Customer Commons.

VERSION 0.2

Below are a few pics from our 0.2 prototype.

Home Screen – Showing a secure OS and a working, local Personal Cloud syncing to ‘the cloud’ for many and varied wider uses. This one shows the VRM-related apps; there is another set of apps underway around Quantified Self.

Omie 0.2 Home Screen

My Suppliers – Just as a CRM system begins with a list of customers, a VRM device will encompass a list of ‘my suppliers’ (and ‘my stuff’).

Omie 0.2 My Suppliers

My Transactions – Another critical component, building my transaction history on my side.

Omie 0.2 Transactions

Intent Casting/ Stroller for Twins – Building out Doc’s classic use case: real-time, locally expressed intention to buy, made available as a standard stream of permissioned data. Right now there are about 50 online sellers ‘listening’ for these intent casts, able to respond, and doing business; and 3 CRM systems.

Omie 0.2 Intent Casting
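To make the idea concrete, here is a hypothetical sketch of what an intentcast message and a seller-side match check could look like. The field names and endpoint are our own illustration, not a published standard:

```python
# A hypothetical intentcast: the buyer's requirement, expressed on their
# side and shared only with sellers they have permissioned.
intentcast = {
    "intent": "buy",
    "item": "stroller for twins",             # Doc's classic use case
    "max_price": {"amount": 400, "currency": "USD"},
    "needed_by": "2014-03-01",
    "share_with": ["permissioned-sellers"],   # not broadcast to trackers
    "reply_to": "omie://inbox/intentcasts",   # hypothetical device endpoint
}

def matches(offer, cast):
    """A listening seller checks whether its offer fits the cast."""
    return (offer["item"] == cast["item"]
            and offer["price"] <= cast["max_price"]["amount"])

offer = {"seller": "ExampleStore", "item": "stroller for twins", "price": 379}
print(matches(offer, intentcast))  # True: this seller can respond
```

The control flow is the reverse of advertising: the customer emits structured demand, and sellers respond, rather than sellers guessing from surveillance data.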

So what have we learned in the build of version 0.2?

Firstly, that it feels really good to have a highly functional, local place for storing and using rich, deep personal information that is not dependent on anyone else or any service provider, and has no parts that are not substitutable.

Secondly, that without minimising the technical steps ahead, the project is more about data management than anything else, and that we need to encourage a ‘race to the top’ in which the organisations customers deal with make it easy for those customers to move data back and forth between the parties. Right now many organisations are stuck in a negative and defensive mind-set around receiving volunteered information from individuals, and very few return data to customers in modern, re-usable formats through automated means.

Lastly, that the types of apps that emerge in this very different personal data eco-system are genuinely new functions not enabled by the current one, not just substitutes for those already there. For example, the ‘smart shopping cart’, in which a customer takes their requirements and preferences with them around the web, becomes perfectly feasible when the device genuinely lives on the side of the customer.

Surf safely with Web Pal

It’s time to draw the line on surveillance.

Today nearly every commercial website infects our browsers with tracking files that report our activities back to parties we may not know or trust.

So we’re providing a way to draw that line:  Web Pal — a browser extension that blocks tracking and advertising*, eliminating the browser slowdowns caused by both.

Download the Web Pal here, from the Chrome Web Store
And click on the donate button to support our work.

Web Pal was developed for Customer Commons by Emmett Global, which provides privacy solutions to nonprofits. It combines Adblock Plus and Tampermonkey — two open source code bases — in one simple install that requires no additional work or maintenance. It also gives you a Customer Commons start page, which carries updates of news about surveillance and other topics of interest to Customer Commons members.

Here’s a video explaining the Web Pal:

We offer the Web Pal on Chrome. This gives you one safe browser with maximized protection, and the opportunity both to try out other protection systems on other browsers and to compare performance. Here is a list of those systems, from ProjectVRM at Harvard’s Berkman Center for Internet and Society:

Abine † DoNotTrackMe, DeleteMe, MaskMe, PrivacyWatch: privacy-protecting browser extensions and services
AdBlock Plus – ad and tracking blocking
Emmett † “An easy to install browser plugin that protects your privacy online”
Collusion – Firefox add-on for viewing third parties tracking your movements
Disconnect.me † browser extensions to stop unwanted tracking, control data sharing
Ghostery † browser extension for tracking and controlling the trackers
Privacyfix † “One dashboard for your Facebook®, LinkedIn®, and Google® privacy. Blocks over 1200 trackers.”
PrivacyScore † browser extensions and services for users and site builders keeping track of trackers
Privowny † “Your personal data coach. Protect your identity/privacy. Track what the Internet knows about you.”

Note that these are maintained on a wiki and subject to change. In fact, we invite Customer Commons members to participate in ProjectVRM, and help drive development of these and other tools.

And, of course, we welcome feedback and suggestions for improving the Web Pal. And we encourage everybody to support development of all tools and services that make customers liberated, powerful and respected in the open marketplace.


* What Adblock Plus calls acceptable ads are passed through by default, but you can change it to block all ads. Just go to Chrome’s Windows menu and click down through Extensions / Emmett Web Pal / Options / Adblock Plus / Filter List. Then uncheck “Allow some non-intrusive advertising”.

Customer Commons Research: 92% of People Engage in Some Strategy to Hide Personal Data

We launched our first research paper today:  Lying and Hiding in the Name of Privacy (PDF here) by Mary Hodder and Elizabeth Churchill.

Our data supporting the paper is here: Addendum Q&A. Shortly we’ll upload an .xls of the data for those who want to do a deep dive into the results.

We all know that many people hide or submit incorrect data, click away from sites or refuse to install an app on a phone. We’ve all mostly done it.  But how many?  How much is this happening?

We’re at IIW today and, of course, the age-old dilemma is playing out in sessions where one guy in the room says, “People will click through anything; they don’t care about privacy,” and the next guy says, “People are angry and frustrated and they don’t like what’s happening.” But what’s real? What’s right?

We conducted this survey to get a baseline about what people do now as they engage in strategies to create privacy for themselves, to try to control their personal data.

The amazing thing is: 92% hide, lie, or refuse to install or click, at least some of the time. We surveyed 1,704 people, and had an astonishing 95% completion rate for this survey. We also had 35% of these people writing comments in the “comment more” boxes at the bottom of the multiple-choice answers. Also astonishingly high.

People expressed anger, cynicism and frustration. And they said overwhelmingly that the sites and services that ask for data DON’T NEED it, unless they have to get something shipped from a seller. But people don’t believe the sites. There is distrust. The services have failed to convince the people they want as users that collecting this data is necessary, and the people who use the services are mad.

We know the numbers are high, and that it’s likely due to many not having a way to give feedback on this topic. So when we offered the survey, people did vent.

But we think it also indicates the need for qualitative and quantitative research on what is true now for people online. We want more nuanced information about what people believe, and about how we might fix this problem. Many sites look only at user logs to figure out what is happening on a site or with an app, and therefore miss this problem and the user feelings behind it. We want to see this studied much more seriously, so that people stop making conflicting statements at conferences, so that developers stop assuming users don’t care, and so that business models emerge that think differently from today’s, where sites and services just take personal data. We want to get beyond the dispute over whether people care, to real solutions that involve customers and individuals in ways that respect them and their desires when they interact with companies.

 

 

 

Lying and Hiding in the Name of Privacy

Authors: Mary Hodder and Elizabeth Churchill

Creative Commons licensed: BY-NC-ND

©Customer Commons, 2013

Contact: Mary Hodder, hodder@gmail.com

Abstract

A large percentage of individuals employ artful dodges to avoid giving out requested personal information online when they believe at least some of that information is not required. These dodges include hiding personal details, intentionally submitting incorrect data, clicking away from sites or refusing to install phone applications. This suggests most people do not want to reveal more than they have to when all they want is to download apps, watch videos, shop or participate in social networking.

Keywords:  privacy, personal data, control, invasion, convergence

Download a PDF of the paper here.

 

Survey

Customer Commons’ purpose in conducting this research is to understand more fully the ways in which people manage their online identities and personal information. This survey, the first of a planned series of research efforts, explores self-reported behavior around disclosure of personal information to sites and services requesting that information online. We believe the results of this survey offer a useful starting point for a deeper conversation about the behaviors and concerns of individuals seeking to protect their privacy. Subsequent research will explore how people feel and behave toward online tracking.

This research is also intended to inform the development of software tools that give individuals ways to monitor and control the flow and use of personal data.

For this research project, Customer Commons in late 2012 surveyed a randomized group of 1,704 individuals within the United States (1,689 finished the survey, or 95%). Respondents were geographically distributed, aged 18 and up (see the appendix for specifics), and obtained through SurveyMonkey.com. The margin of error was 2.5%.
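For readers checking the arithmetic, the reported margin of error follows from the standard formula for a simple random sample at 95% confidence, z·√(p(1−p)/n):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample.

    p = 0.5 maximizes p*(1-p), giving the conservative bound pollsters
    usually quote; z = 1.96 corresponds to 95% confidence.
    """
    return z * math.sqrt(p * (1 - p) / n)

# 1,689 completed responses, as reported above.
moe = margin_of_error(1689)
print(f"{moe:.1%}")  # about 2.4%, in line with the reported 2.5%
```

Note this is the conservative worst-case bound; individual questions with lopsided response splits have smaller margins.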

Respondents gave checkbox answers to questions and in some cases added remarks in a text box. (Survey questions and answers are in an addendum to this paper.)

Key Findings

Protecting personal data

This survey focused on the methods people use to restrict disclosure of requested personal information. Those methods include withholding, obscuring or falsifying the requested information.

Only 8.45% of respondents reported that they always accurately disclose personal information that is requested of them. The remaining 91.55% reported that they are less than fully disclosing. If they decide the site doesn’t need personal information such as names, birthdates, phone numbers, or zip codes, they leave blank answers, submit intentionally incorrect information, click away from the site, or — in the case of mobile applications — decline to install.

Most people withhold at least some personal data. Specifically,

  • 75.7% of respondents avoid giving their mobile numbers
  • 74.8% avoid “social” login shortcuts such as those provided by Facebook or Twitter
  • 73.4% avoid giving sites or services access to a friend or contact list
  • 58.3% don’t provide a primary email address
  • 49.3% don’t provide a real identity

The concept of trust was raised in 22% of the written responses explaining why people hide their information. Some examples include:

  • “I cannot trust a random website”
  • “I do not want spam and do not want to expose others to spam. I also don’t know how that information could be used or if the people running the site are trustworthy.”
  • “If I know why info is needed then I might provide, otherwise no way”
  • “I felt the need to cover my I.D. a little bit — like age and gender.  And I still withhold my social security #.”
  • “If I feel they don’t need it to provide a service to me they don’t get it even if I have to enter in fake info”
  • “Worries on identity theft and general privacy.”
  • “i would never give out my friends or and familys (sic) info ever”

Many respondents said sites and services request more data than required. Others suggested that providing requested information would result in an increased risk to their security. More results:

  • When the 71% of respondents who reported withholding information were asked why, they said they didn’t believe the sites needed the information. Specifically,
    • 68% reported they either didn’t know the site well when they withheld their data or didn’t trust the site.
    • 45% of those who felt they knew the site or service well still withheld information.

Respondents lied about various line items as a strategy to protect their privacy. For example, 34.2% intentionally provided an incorrect phone number, and 13.8% provided incorrect employment information. Here are some reasons they gave:

  • “I didn’t want them to have all my information, or feel it was necessary.”
  • “I have obscured various information so that I would not have further contact with a vendor who won’t leave me alone”
  • “Faking it is the best to avoid unwanted contact”
  • “Sometimes you just want to use a service without them knowing every thing about you.”
  • “I don’t like websites to have very much information on me. I regularly give out spam email addresses, bad birthday dates, and bad location information.”
  • “Registering for many mundane website often requires some pretty detailed personal info. I generally fudge this. None of their business”
  • “Because information is so easily found and transferred on the internet I do provide false info quite often to protect my identity.”

Even those who had never submitted incorrect information made statements such as:

  • “Have never made up info – just ignored requests :-)”
  • “i just don’t use that website”
  • “I have an email address that is purly (sic) for junk mail. I use this email address for websites that request my email address and then I go into that email and delete all email monthly.”
  • “I have never given incorrect information, but I have thought about it.”
  • “I don’t lie, but I omit as I feel appropriate.”

 

Going with the flow

Correcting already obscured or falsified information appears to be too much of a chore. Specifically,

  • Over 50% have rarely or never corrected data they submitted incorrectly
  • 30% correct their data “sometimes.” Of that 30%,
    • 55% said a purchase required correct information
    • 56% had a growing feeling of comfort with the site or service
    • 46% cited the ability to realize new benefits from the site with corrected information
    • 30% said they noticed others’ incorrect data at Facebook or other social sites, or in phone applications, and —
      • 80% of this group assumed that the data was falsified as a way to protect privacy
      • 40% believed the incorrect data was there to mislead marketers
      • 12% believed secretive associates were trying to mislead them
    • 13% believed services always needed correct personal information
    • 75% believed the services needed it only sometimes
    • 12% said it was never needed.

As for other users: 27% of respondents believed that other users of these services always need or expect correct personal data about each other, 23% said it is sometimes needed, and 48% said it is never needed.

 

Privacy online

The results of this survey support the hypothesis that people limit, refuse to give or obfuscate personal information in an attempt to create a measure of privacy online.

On July 30, 2010, in the first article in its “What They Know” series, The Wall Street Journal reported, “One of the fastest-growing businesses on the Internet … is the business of spying on Internet users. The Journal conducted a comprehensive study that assesses and analyzes the broad array of cookies and other surveillance technology that companies are deploying on Internet users. It reveals that the tracking of consumers has grown both far more pervasive and far more intrusive than is realized by all but a handful of people in the vanguard of the industry.”[i]

Adds Doc Searls, in The Intention Economy, “Tracking and ‘personalizing’—the current frontier of online advertising—probe the limits of tolerance. While harvesting mountains of data about individuals and signaling nothing obvious about their methods, tracking and personalizing together ditch one of the few noble virtues to which advertising at its best aspires: respect for the prospect’s privacy and integrity, which has long included a default assumption of anonymity.”[ii]

This survey showed one result of this system. Respondents expressed a general lack of trust in their relationships with online businesses. Many feelings ran strong. Here are some of the comments:

  • “Scary world out there, and I am a bit angry about the fact that all these website ‘track me’ as if that is OK, and then they sell MY data, obviously making money in the process.  How is that OK or even legal?  Don’t I control MY information?  Apparently not…”
  • “So if I think it might be ‘harmful’ to give out info, I don’t do it.”
  • “I want cookies outlawed 🙁
  • “My ex-husband was abusive and has stalked me. I don’t need to let the greedy sellers of my personal information draw him a map to my front door.”
  • “While I doubt I have any real protection of privacy, I have a desire to try to send a message that I want my right to protection of privacy. I regret how much we as a society have lost to the powers of marketing.”
  • “I don’t trust the security procedures of most companies. Security costs money, which cuts into profits, thus most companies have limited incentive to protect PII from cyber criminals.”
  • “The web is far less secure than commonly known.”
  • “Just as I have disconnected my land line because of a flood of unwanted calls, I refuse to give online/ access information for the same reason.”

These survey responses show people resort to withholding data or submitting false data to avoid feeling exposed online. When deciding whether to share personal information, the majority of respondents doubt that sites or services need to collect more than a minimum of obviously necessary personal data.

Conclusion

When people withhold personal data, it is to create a sense of privacy and control of their personal lives.

People are afraid or distrustful of sites, services and phone apps that request their personal data. They withhold or falsify information because they do not believe the sites need their data, and because they do not want to disclose information that might lead to spamming or other intrusions. Moreover, the techniques that people employ to preserve their sense of privacy online are largely improvised, informed by fear, and based on their subjective evaluation of entities that solicit personal information.

For the sake of privacy, people contribute to and tolerate the presence of incorrect personal data online, and attempt to correct it only when they see the clear upsides of accuracy. And, despite the failure of businesses and other organizations to convince users of the need to provide personal details beyond an email address, most users remain comfortable disclosing additional personal data only with those they know and trust.

Research Funding Grant

This research project was funded with a grant from CommerceNet, a not-for-profit research institute working to fulfill the potential of the Internet since 1993.

Customer Commons

Customer Commons is a not-for-profit working to restore the balance of power, respect and trust between individuals and the organizations that serve them, especially in the online world. We stand with the individual and therefore do not take contributions from commercial entities.

ADDENDUM:  Questions and Answers

Click here to see the complete questions, answers and written answers offered by people to provide additional information.


[i] Julia Angwin, “The Web’s New Gold Mine: Your Secrets,” The Wall Street Journal, July 30, 2010. http://online.wsj.com/article/SB10001424052748703940904575395073512989404.html

[ii] Doc Searls, The Intention Economy: When Customers Take Charge (Cambridge, Massachusetts: Harvard Business Review Press, 2012), p. 28.