Monday, August 19, 2013

Internet Privacy: A Free Market Solution



One of the thorniest current issues is the way companies are abusing personal privacy on the internet. There’s an arms race underway to turn our personal data into profit, and much of the current internet investment and innovation seems to be in this space.

This analysis asserts that -- from a purely economic perspective -- the current approach to internet privacy is driven by asymmetric, and in many ways perverse, economic incentives. In this post I will propose one solution that attempts to properly align those incentives, and restore market sanity -- for vendors and users alike.

The Problem

We reveal our most personal information every day on the internet; sometimes it's intentional but more often it's not. With even casual web surfing, every click and post and purchase reveals a little more about who we are, who we know, and what we believe.

The current economic incentives are driving companies to aggressively transform their most unique asset -- unprecedented access to our personal information -- into revenue. The conventional wisdom in the marketing field is that more and better information will make it easier for marketers and advertisers to separate us from our money.

Thus, every day companies are inventing new ways to capture your personal data and analyze your activity on the internet. For example, every day companies:
  • log (and often share) every click and action they can associate with you
  • track you almost everywhere you browse via “tracking cookies”
  • correlate data about you from myriad online and offline sources
  • capture geolocation data to record physical movement over time
  • apply sophisticated proprietary software to build a psychosocial profile of you
There’s a widespread revulsion against this pattern of increasing privacy assault. Unfortunately, this moral incentive to respect privacy is overwhelmed by the economic incentives not to.

The net result has been a kind of helpless resignation, as people are continuously told that this is the price for our shiny internet toys. As far back as 1999, Scott McNealy, then-CEO of Sun Microsystems, said, “You have zero privacy anyway. Get over it.” Following the 2001 terrorist attacks, Oracle CEO Larry Ellison, whose company later acquired Sun, said: "The privacy you're concerned about is largely an illusion. All you have to give up is your illusions, not any of your privacy."

Of course, Sun and Oracle were/are enterprise-focused companies and thus have little fear of alienating individual users of their technology. Companies like Facebook and Google, regarded as the premier data-harvesting front ends for personal information, make lots of noise about respecting user privacy, all the while acting in the completely opposite way.

The Economic Root Cause

But why does this situation exist at all? Clearly the companies believe it’s the most profitable approach, and there’s some evidence to back them up. So there are clear perceived economic incentives to abuse privacy, even in the face of the widespread feeling that what they are doing is “creepy.” And that’s why the industry will fight any effort to rein them in.

But unlike moral incentives, economic incentives don’t judge. So the issue isn’t just the existence of economic incentives that favor privacy abuse, it’s the absence of incentives to respect user privacy. Simply put, companies pay no economic penalty for abusing privacy, so they act rationally in doing so.

Based on the intense interest in the topic, however, it appears quite possible that respect for user privacy could be every bit as powerful a competitive differentiator as a larger network or better features. In reality, companies compete every day on all sorts of characteristics -- but it seems nobody competes on privacy.

What’s missing is a mechanism for companies to easily compete on the basis of their privacy policies. But what would that mechanism look like? And why hasn’t it happened yet?

How We Got Here

It’s important to understand the three major drivers that underlie the internet privacy problem. You really need to fix all three; it’s unlikely that solving one or two would do the trick.

They are:
  • What is motivating companies to abuse consumer privacy?
  • What mechanism enables these privacy abuses to exist?
  • Why is it so hard to correct the problem?

Motivation

As we have already discussed, the easy question to answer is the first one -- if people hate it so much, why is it happening? The answer is money. (But then, it’s always money.) And the history is quite important here.

The major internet companies, and their prospective competitors, are for-profit enterprises. There’s certainly nothing wrong with that; the whole goal of capitalism is to create wealth, in turn generating jobs, taxes, and market value.

But a successful business model on the internet has proved to be elusive for all but a handful of companies. Going back to the “dot-com” boom and crash of the early 2000s, it’s been, “Get the eyeballs now and we’ll figure out how to make money on them later.”

The monetization strategy always comes down to one thing -- advertising. There’s a certain comfort with the model that brought us previous no- or low-cost content, ranging from newspapers to radio and TV networks. But the translation to the internet isn’t as natural as it seems, for a number of reasons.

Most obviously, there are infinitely more content sources, due to the low cost of entry in building a website. This is the “long tail” at work, as many websites appeal to very narrow audiences.

The other major difference is the interactive nature of the internet itself. Computers do things with unprecedented speed and flexibility, and marketers take full advantage of that.

Traditionally, advertising has been matched to content at, or before, creation time. It's based solely on context -- that is, ads are matched to the content they sponsor. In perhaps the simplest example, sports content tends to attract ads for beer and cars, because of its predominantly male audience. Much of the mainstream internet still follows the same general model, where marketers try to find content that would appeal to specific audiences, and target ads to those audiences.

But the dynamic web can offer much more granularity, matching ads to content in real time, according to a virtually limitless number of variables. And that requires sophisticated tools -- something that long-tail entities are unlikely to be able to build on their own.

Most websites don’t have the scale or capacity to also sell and host their own ads. This led to the creation of ad networks, which make small payments to websites in return for the right to display ads. The actual advertisement may never even have been seen by the content creator, and the choice of ad is completely up to the ad network. (This led to the “punch the monkey” variety of terrible multimedia ad, which often has no connection at all to the website itself.)

The breakthrough was Google’s development of AdWords, which enabled the company to match ads to any search term. The results page included a clearly marked set of text-only ads related to the search term. Unlike the traditional “spray and pray” model of advertising, the ads showed only when a user, through search, indicated an interest in a particular topic. They weren’t obtrusive and were often actually helpful, so they felt like a fair trade for the services received. And the advertisers, who bid against each other via automated auction for specific search terms, were charged only when a user actually clicked on the ad. This was a real and quantifiable return on advertising investment, and it quickly turned Google into a billion-dollar company.

AdSense was Google’s next innovation. This allowed the text ads to be placed on any web page, matching terms to the page contents, which Google already had in its database to support its search business. This benefited millions of websites, which no longer had to find advertisers or subscribe to ad networks, and could be reasonably certain that the ads would be relevant and relatively unobtrusive to their audiences. Again, it felt like a fair deal to advertisers and users alike.

But with targeted internet advertising proving it could command high prices, the race was on to innovate further and faster. The next frontier was to optimize not just for the context of the ad, but for the individual user. Ad networks turned to the common “cookie,” a small piece of data developed to maintain state during and across user sessions. By tying the cookie to the ad network rather than the hosting site, the networks gained the ability to follow users around the internet wherever that network is used (and many websites use multiple ad networks). This allows marketers to track user behavior across sites, and it increases exponentially the volume of information available about each user.
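To make that mechanism concrete, here is a minimal sketch in Python of how a single ad-network cookie correlates one user’s visits across unrelated sites. The site names and function shapes are invented for illustration; real ad networks are far more sophisticated.

```python
# Minimal sketch of third-party cookie tracking.
# The ad network stores one profile per cookie ID, not per site.
ad_network_profiles = {}

def serve_ad(cookie_id, hosting_site, page_topic):
    """Called each time a page embedding the ad network loads."""
    # The same cookie ID arrives no matter which site hosts the ad,
    # so the network can append every visit to one central profile.
    profile = ad_network_profiles.setdefault(cookie_id, [])
    profile.append((hosting_site, page_topic))
    return f"ad targeted using {len(profile)} observed visits"

# One user (cookie "abc123") browsing three unrelated sites:
serve_ad("abc123", "news.example", "politics")
serve_ad("abc123", "shop.example", "running shoes")
serve_ad("abc123", "forum.example", "diabetes support")

# The network now holds a cross-site behavioral profile,
# even though no single site ever saw the whole picture.
print(ad_network_profiles["abc123"])
```

The key design point is that the profile is keyed on the network’s cookie, not on any one website, which is exactly what lets a few large networks assemble a picture no individual site could.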

And then social networks happened. Rather than needing to interpret users’ tendencies from their clicks and actions, users began telling websites explicitly... and in amazing detail. This accelerated the arms race exponentially, as Facebook, Twitter, and LinkedIn collected enough data to make marketers drool. Google+ is a direct response to this gap.

So the driver for this privacy abuse is the belief that hyper-targeting is the key to money. Or more specifically, the fear of being out-hyper-targeted by the competition, and thus being less valuable in a world of personalized advertising. That in turn is driving the abuse innovations, such as correlating user information across multiple online and offline data sets, tracking movement via mobile devices, and especially the use of high-powered analytic software to squeeze every last bit of actionable information out of that user data.

Mechanism

That brings us to the second question: What mechanism enables these privacy abuses? If it’s such a hot-button issue, with even non-privacy-obsessed people finding it objectionable, how could the problem become so big, so fast?

The answer to that is the one-sided contracts we all sign each time we interact with a website. And that’s what we do every time we click on a new “Terms of Use” -- usually without reading it. If we don’t sign the contract, we don’t get to use the companies’ services. So we hold our noses and click through.

Beneath it all, we understand those contracts -- and the privacy policies they contain -- aren’t written by our lawyers to protect us. Rather, they’re written by lawyers working on behalf of the companies, and whose economic incentives (billings) dictate that they’re structured for the companies’ benefit. Absent our own lawyers working for our benefit, the outcome is fixed.

But if you remember your seventh grade health class, any relationship where the power is held by one party exclusively is at high risk of becoming abusive. Internet privacy is no different. With all the power, and no economic incentives to respect your privacy, the internet companies are acting rationally in doing everything they can to sell your privacy to the highest bidder.

Inertia

The last question is related but different: Why is it so hard to correct the internet privacy problem? The answer lies in the sheer number and variety of contracts we sign.

Here’s an exercise. Try to think of all the Terms of Use you’ve signed over the years. Could you estimate a number? Could you remember them all? Do you have any idea what you signed up for, or which companies have better or worse privacy policies? And how many have changed since you signed them? (It’s common practice for continued use to signify acceptance of any changes in terms over time.)

And even if we all made the effort to read and understand these contracts, there’s no way we could negotiate terms for all of them. Besides, there’s little incentive for the companies to negotiate with individual users; it’s economically sane to just write one contract and say, “accept our terms or don’t use our service.”

For most people, the natural response is to just throw up their hands and say it’s hopeless. You may not like the privacy abuse, but you like rich internet applications and interacting with your friends, so you put up with it.

It’s what Scott Adams, creator of the Dilbert comic, calls a “confusopoly.” When competitors in a market don’t want to compete (and despite what many people think, companies hate to compete because it drives down profitability), they typically try to make it so hard to understand their terms that people stop trying to. Consider wireless carrier plans or insurance policies, and you’ll see what he means.

Sure, some will try to fight back through technological means such as ad/cookie/script blockers, anonymized surfing, or “do not track” settings in their browsers (which are still optional for companies to comply with). But that’s hard, and the vast majority of non-technical users couldn’t do that effectively. Others may push for legal protections from their governments, but those are subject to intense lobbying efforts by the affected companies, as well as inconsistencies across jurisdictions.

So if technical and legal approaches won’t stop the internet privacy problem, what will? Well, in a very real sense the free market created the problem. Therefore nothing except the free market can change the equation.

A free market solution

To recap: Economic incentives are very powerful, at least for for-profit companies. The free market assumes that all participants are rational actors, and will exercise their best efforts to maximize profits.

As we’ve discussed, this is exactly what internet companies are doing when they choose to abuse our privacy. The incentives are clear: companies that demonstrate the ability to deliver deeply personalized marketing command the highest prices and profits. And the growth of these companies, in revenue and particularly in market value, is a clear indicator that this behavior is smart.

But just because a situation exists doesn’t guarantee that it will persist. In fact, the principles of free markets dictate that imbalanced situations like the internet privacy problem will be fixed, as competitors innovate ways to better satisfy market demands. Even artificial barriers like one-sided contracts and “confusopolies” are destined to fall as markets equilibrate toward a state that best satisfies the most consumers.

Thus what’s needed is an economic incentive which rewards companies that respect user privacy. Currently, companies are unable to compete on this attribute even if they wanted to. Not that they want to, since doing so would logically lower the prices and profits they can command.


Therefore, to align the economic incentives with our personal privacy preferences, a mechanism must be put in place to enable companies to compete in that area. That mechanism must counter the perverse economic incentives currently in place, and just as importantly, neutralize the legal framework which enabled the situation to happen and persist.

The Internet Privacy Registry

To this end, I propose the creation of the Internet Privacy Registry -- a simple, free website where we can all declare our own personal privacy policies. These preferences will then be made available to internet companies via web services and application programming interfaces (APIs), so that they can compete on the basis of respecting your privacy.

Created for the benefit of the public, and from the perspective of the end user, this service will enable individuals to record their preferences in a variety of privacy-related areas. It will be designed for ease of use, introducing a common set of privacy variables, with clearly defined meanings and requirements. It will cover all of the areas currently subject to abuse, from tracking and clickstream harvesting, to data retention and sharing, to profiling and analytics. And it will be flexible enough to accommodate new variables as marketers continue to innovate new ways to monetize user privacy.

To be credible and comprehensive, it will be constructed with the inputs of experts in the field of privacy, and hosted by a privacy-focused organization such as the Electronic Frontier Foundation (EFF) or like-minded entity. It could even happen as a Kickstarter project. The main point is, it needs to be regarded as an independent and unencumbered project, not another “PR tactic” as we constantly see from the internet companies, whose words are pro-privacy while their actions prove the opposite.

What are some of the straightforward settings that may be offered?
  • Do not track me anywhere outside your site
  • Do not combine information from other sources in your profiling of me
  • Do not serve me personalized ads, beyond the context of the page/site
  • Do not track or use my geographic location
  • Do not retain my information for more than x days
  • Do not share my information with third parties
Of course, it’s easy to imagine hundreds of individual variables, which would be unmanageable for typical web users. Therefore it will also feature a handful of “privacy profiles,” ranging from complete privacy lock-down to, well, what is happening today. Additionally, third party profiles can be applied using simple text or XML files, for example custom profiles created by the EFF or American Civil Liberties Union (ACLU).
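What might one of those shareable third-party profiles look like in practice? Here is a hedged sketch: a hypothetical XML profile (the element names, setting keys, and the “eff-strict” profile are all invented for illustration; a real schema would come from the registry’s privacy experts) plus the few lines of Python a registry client would need to read it.

```python
# A hypothetical third-party privacy profile expressed as XML,
# and a minimal loader. All names here are illustrative only.
import xml.etree.ElementTree as ET

PROFILE_XML = """
<privacy-profile name="eff-strict">
  <setting key="track-offsite" value="deny"/>
  <setting key="combine-external-sources" value="deny"/>
  <setting key="personalized-ads" value="context-only"/>
  <setting key="geolocation" value="deny"/>
  <setting key="retention-days" value="30"/>
  <setting key="share-third-party" value="deny"/>
</privacy-profile>
"""

def load_profile(xml_text):
    """Parse a profile file into a plain settings dictionary."""
    root = ET.fromstring(xml_text)
    return {s.get("key"): s.get("value") for s in root.findall("setting")}

settings = load_profile(PROFILE_XML)
print(settings["retention-days"])  # "30"
```

Because the format is just a flat list of named settings, a user could apply an expert-maintained profile with one click, then override individual settings if desired.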

Importantly, websites can simply declare compliance with one of these policies, so that they can easily compete on the privacy component without explicitly integrating with the Privacy Registry APIs -- or declare compliance as an interim step while that engineering work takes place.

In fact, none of this requires any real technical invention. Cloud services make this a fairly simple deployment. The back-end database is little more than a user directory keyed on email address, with privacy attributes associated. The APIs will be similarly simple to deploy, for those companies that choose to support full user customization rather than just declaring support for a particular general privacy policy (and full support has advantages, as we’ll cover shortly).
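The back end described above can be sketched in a few lines. This is a toy in-memory version under stated assumptions: the field names, the email key, and the two-call API shape are all hypothetical, standing in for what would really be a hosted service behind web APIs.

```python
# Toy in-memory version of the registry back end: a user directory
# keyed on email address, with privacy attributes attached.
registry = {}  # email -> dict of privacy attributes

def set_policy(email, **attributes):
    """User-facing call: record or update personal privacy settings."""
    registry.setdefault(email, {}).update(attributes)

def get_policy(email):
    """Company-facing API: fetch a user's declared preferences.
    Returns None when the user has not registered, in which case a
    participating site falls back to its declared general profile."""
    return registry.get(email)

set_policy("alice@example.com",
           track_offsite=False,
           share_third_party=False,
           retention_days=90)

policy = get_policy("alice@example.com")
print(policy["retention_days"])  # 90
```

The simplicity is the point: the hard work lies in defining the privacy variables and their legal meaning, not in the plumbing.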

The real innovation needed is in the design of the privacy variables. For it to be effective, the Internet Privacy Registry must be built with the same skill and care as the privacy policies of the internet companies. Experts in the field of privacy, as well as legal review, will be needed to prevent loopholes where companies can claim compliance, while still finding ways to abuse privacy.

How it changes economic incentives

So... How does this impact the three main factors driving internet privacy abuse? Let’s examine them.

Money -- The Internet Privacy Registry gives privacy-oriented companies a clear and powerful way to compete. By introducing a mechanism to measure compliance with user preferences, it acts as a brake on the runaway privacy problem, providing an economic counterweight which simply does not exist today.

Obviously companies are under no obligation to accept users’ personal privacy preferences, and their own terms of use will still apply. But those who don’t participate -- either through explicit support for individual user preferences, or through support of one of the general profiles -- will be in competition with those that do. This is a simple, but very real, economic incentive to counter the opposing incentives existing today.

Contracts -- If the goal is to restore the balance of power between internet companies and users, the one-sided contracts must end. Having a set of terms that is dictated from the user’s perspective achieves that.

If a user declares, “Do not track me outside of your site,” or, “Do not serve me personalized ads,” those aren’t subject to fudging. Just as users are currently stuck with words prepared by company lawyers in Terms of Use, any company that claims to support the Internet Privacy Registry agrees to the terms listed there.

In practice, this will help the companies as well as the users. Even privacy-oriented companies must devise terms of use and privacy policies, and each of these basically has to be crafted from scratch. This leads to inconsistency, ambiguity, and suspicion. Since users reasonably assume that they are being abused by many, if not all of these contracts, having clear, objective terms helps both parties.

Manageability -- Having a single place to control your personal privacy policy achieves two things. First, it ends the kind of continual tweaking that is endemic to the internet privacy policies we agree to, which is the complaint many people have whenever their preferred internet sites change policies. Just as importantly, it gives users a way to easily adjust settings when they choose to. For example, if you have chosen settings that prove to be too restrictive for the types of sites you want to use, you can tune them appropriately.

And that leads us to the next part of the conversation.

Why would this work?

The first question will be, why would companies agree to honor anyone’s personal privacy policies? After all, they currently have their own carefully crafted contracts, which align with their business strategies in ways that the Internet Privacy Registry certainly would not.

The answer, again, is free markets and competition.

Take Facebook as an example. As the undisputed king of social networks, Facebook has the largest active network of people, with the self-reinforcing dynamic that people use it because their friends do too. Because of that massive population, Facebook commands similarly massive revenue from the same marketers who want every last bit of personal information in the quest for better targeting. That, in turn, drives investment in user experience and infrastructure and also, unfortunately, the kind of privacy practices people find so objectionable.

But what if a competitor emerged, one with perhaps less evolved UI and infrastructure, but a commitment to honor personal privacy as defined by the Internet Privacy Registry? Many people, this writer included, would modify their behavior by moving to the new network, and encourage their friends to do the same. Over time, this would make Facebook less attractive to marketers, and threaten its revenue and growth.

In short, by enabling companies to compete on privacy, it forces them to do so.

Imagine a scenario where millions of users invest the few minutes it would take to register and declare a personal privacy policy. That’s a strong market message that privacy matters to people, and indicates clearly that they are willing to “vote with their feet.”

But there are other, more subtle advantages to having an independent arbiter of user privacy, even for the companies that are fueling the arms race in this area.

Let’s optimistically assume that most companies are run by people who have the same privacy concerns most of us do. They may not want to participate in the privacy abuse arms race, but if the market is rewarding competitors who do, then everyone is forced into the same behavior, in the interest of investor/shareholder value.

Put another way, strong economic incentive exists to behave badly, especially in the absence of a counter-incentive to respect user privacy. The Internet Privacy Registry changes the calculus because it mitigates the bad incentives, while creating good incentives.

A related point, which we touched on earlier, involves the technical work needed to support fully personalized privacy policies.

Say a company elects to participate by declaring support for one of the mid-range general policies. Perhaps that’s in the range of, “Don’t track me outside your site, don’t combine outside data sources in your profile of me, and don’t share my information with others.” That still leaves clickstream tracking and personalization within a domain, with suggestions and ads based on previous behaviors. That’s basically how Amazon and Google built their businesses, before the creepy stuff started happening.

That simple step goes a long way toward establishing a company as someone willing to meet reasonable expectations of user privacy. But it’s a missed opportunity, economically speaking, because it treats all users the same. It’s still less restrictive than some users want, but, as importantly, it’s less restrictive than others may choose.

If a user purposely chooses a highly restrictive privacy policy, that user is unlikely to use a site with even a moderate policy. One may argue that this is not a profitable user, but it is a user, and especially on social sites, losing that user is a net loss to the network.

If a user chooses a permissive privacy policy, then that’s an “opt in” user and, according to the economic incentives prevalent today, that’s a valuable user. It’s easy to see sites competing for that user in other ways, for example with extra features or benefits.

The point is that it’s unlikely that users will all choose the most restrictive policies, or even the same policies, based on normal bell curve distributions. Having the ability to truly customize to every user’s preferences isn’t just a great competitive position, it’s also an economically wise one.

Conclusion


From a practical perspective, expecting moral outrage to stop internet privacy abuse is akin to a gazelle expecting its moral outrage to stop a lion from killing and eating it. Absent sufficient counter-incentives, we can expect the privacy abuse arms race to continue, or likely accelerate.


It may well be that the Internet Privacy Registry isn't the only approach to restoring balance, or even the best one. What is clear, however, is that legislative and technical approaches will continue to fall short, because both are outweighed by the economic incentives of large, sophisticated, well-capitalized players. Only by addressing the economic vector will we have any hope of enjoying an internet that treats our privacy in the ways that we expect and deserve.
