
Privacy


Why privacy matters

Over the next 10 years, the intelligent use of data will become one of the biggest competitive advantages a company can have. At the same time, the loss of customer data is one of the biggest risks to a modern business. As consumers become more aware of security issues, and of the value of their data, privacy is moving away from its former locus—primarily in operations—towards the heart of a business’s relationship with its customers. Privacy is increasingly important to a company’s wider reputation.

The capability of new digital technologies has led to the collection, storage and trading of personal data on an unprecedented scale, largely imperceptible to consumers. The potential value of these data to companies has transformed privacy into a transaction—one from which consumers do not benefit equally. Currently, the transfer of data between consumers and companies is a one-sided handshake in which consumers have little power and less control.

  • 200: the number of agencies in the UK that are authorized to access your personal data
  • 50,000: the number of Internet police monitoring Internet use in China
  • $77.26 million: the amount the American company Acxiom makes from selling data every year
  • 1,800 billion: the number of gigabytes of data produced by humans in 2011
  • 68: the percentage of people globally who agree ‘I am concerned about people or companies misusing my personal data’
  • 608: the number of hours it takes to read the privacy policies on the top 75 websites on the Internet

However, this is changing. Privacy is entering a time of flux, and social norms and legal systems are trying to catch up with the changes that digital technology has brought about. In this report we examine how recent privacy controversies are beginning to alert consumers to both the dangers of data misuse and the value of their personal data.

We will explore how ideas of privacy have changed and the influence of new technologies on its evolution. We chart three phases of privacy and examine what the next era of privacy might look like in a world of increasingly smart devices and ever-more expansive forms of data collection.

The invention of privacy

Taking a long-range historical view, privacy is a relatively recent concept. It goes back only about 400 years. The first layer of privacy was controlled through architecture (the ‘bricks and walls’ layer). The second was controlled through legislation and laws (the ‘paper’ layer). And the third—and newest— layer is privacy controlled through digital technology (the ‘code’ layer). It may seem odd to talk about bricks in the digital age, but the insight from that era, when the idea of privacy was constructed, has important resonances today.

The ‘bricks and walls’ layer of privacy

Ideas about privacy have evolved in response to technology

The notion of privacy first emerged in Britain during the 16th century as a result of changes in the design of the home. Before then, the majority of homes had consisted of a single shared space where family and household members slept, cooked, ate and worked together. This changed between 1570 and 1640 following the invention of the chimney, which meant that homes no longer needed to be constructed around an open fire; instead houses could be divided into smaller rooms separated by walls. The result was a construction boom in Britain that became known as the ‘Great Rebuilding.’ People began adding floors, walls and stairs into their homes, creating smaller, separate rooms, each with its own special purpose. The home became divided into public rooms such as the living room and private rooms such as the bedroom.

Privacy: what you need to know

At the heart of the problem of online privacy are companies’ attitudes towards data: at present the exchange of data between people and organizations is unbalanced and resembles a one-sided handshake.

Privacy as we know it is a relatively modern concept that was ‘invented’ around 400 years ago. Every time privacy has developed another layer of meaning, it has been in response to the introduction of a new technology into people’s lives–from the chimney, to the camera and printing press, to the Internet. The future of privacy will be characterized by the analysis of patterns of consumers’ behavioural data, with consumers seeking to protect themselves from the assumptions companies have made about their personal data patterns.

Privacy controversies most often happen because consumers are surprised when their personal data is used unexpectedly; companies need a ‘no-surprises’ approach to privacy if they want to avoid privacy pitfalls. In order to ensure there are no surprises around privacy we need to move the debate away from privacy as a permanent and immovable ‘right’, and recognize it for what it is in the digital age: a mediating factor in a constantly shifting and iterative exchange between people and organizations. Until companies stop viewing personal data–or ‘bits of people’–as a commodity to be farmed, and rather realize its role in an exchange which should be beneficial to both parties, they will fail to handle it with the care and respect consumers expect.

As long as this ‘one-sided handshake’ continues, companies and organizations that manage personal data not only risk a collapse in consumer trust, they also stand to miss out on huge potential sources of value to their business and society as a whole, from new unexpected uses of personal data co-created by their customers.

Two American lawyers invented the idea of the ‘right to be let alone’ after Eastman Kodak marketed the portable camera

These changes in home design happened rapidly, as demonstrated by a survey of household inventories 1 from the East Midlands in that period: in the 16th century the average number of rooms in a house was 2½; by the 17th century this had increased to 6½. It is unclear whether the desire for privacy led to the architectural reconfiguration of homes or the other way around. What is certain, though, is that the Great Rebuilding filtered concepts of privacy down to the broad population, with the creation of personal living spaces giving people a sense of individual privacy that had previously been experienced only by the rich.

That the home as a private space now seems normal and unremarkable exposes one assumption often made about privacy—that it has always been valued. Far from it: privacy is a relatively modern invention that was enabled by changes in our material conditions and surroundings. But as time elapsed, bricks and walls were no longer enough to protect personal privacy.

The ‘paper’ layer of privacy

This next layer of privacy was expressed and controlled through law. In 1890, the eminent American lawyers Samuel D. Warren and Louis Brandeis published an article in the Harvard Law Review which gave birth to the recognition of privacy as an individual right to be protected by law. Just as the chimney had led to the notion of privacy through the physical architecture of the home, it was the invention of the camera that motivated the campaign for the ‘right to privacy.’

The invention of the inexpensive and portable ‘snap camera’ by Eastman Kodak a few years before had created a new threat to privacy; individuals could now be photographed at home, at work or at play and the results published in newspapers for all to see. The motivation for the lawyers 2 to write their article apparently stemmed from their irritation at photos of Warren’s dinner parties appearing in a Boston high-society gossip magazine.

In their article, Warren and Brandeis noted that the common law in America was primarily about the protection of the physical person and individual property. However, they went on to argue that the threat to privacy from recent technological inventions and the new forms of publicity associated with the development of the modern press meant that the common law needed to be extended to protect privacy. This eventually came to be known as ‘the right to be let alone.’

Competing worldviews on privacy

As the layers of privacy have thickened over time, legal institutions across the world have adopted different ways of keeping up with the shifting privacy landscape. Political history, cultural sensitivity and economic prosperity all play a part in how privacy is written into constitutions and law.

[Sidebar infographics: ‘Privacy: from bricks to pattern’ and ‘Privacy: up in smoke?’]

Since then, privacy has repeatedly been invoked by the US Supreme Court as a constitutional right. There have been similar developments around the world over the course of the 19th and 20th centuries. Both the 1948 UN Declaration of Human Rights and the 1950 European Convention on Human Rights included articles specifically about personal privacy.

The ‘code’ layer of privacy

The third and most recent layer of privacy as code is intimately associated with online digital technologies. Technologies ranging from cookies on websites to GPS sensors on smartphones mean new types of data about people’s behaviors can be easily collected on a mass scale. As a result of the Web, information that used to be forgettable simply because it was harder to find is now more easily circulated, stored and retrieved. In short, the Internet’s memory is a lot better than the human memory: the Internet never forgets.

To describe the current phase of privacy as that of ‘code’ points to what makes digital technologies unique—namely that they are underpinned by binary code that converts real-world information into discrete data formed of 0s and 1s. As Cory Doctorow has recently described in MIT’s Technology Review 3, there are currently only two ways to browse the Web. Either you turn off cookies (the bits of code used to track online behavior) and live with the fact that many websites won’t work, or you turn on all cookies and accept that this gives companies wholesale permission to extract the data collected by the cookies. Your privacy settings are, in other words, either on or off. This goes against the way people actually want to control their privacy, which is ‘on’ in some contexts (e.g., using a social network while at work) and ‘off’ in other contexts (e.g., using a social network while at home).
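To make the point concrete, here is a minimal sketch in Python. All names and categories are invented for illustration (this is not any browser’s real API); it simply contrasts today’s single on/off switch with the context-dependent control described above:

```python
# A minimal sketch (all names invented) of the gap described above: today's
# cookie controls are a single global switch, while people's actual
# preferences depend on the context they are in.

# The 'code' layer today: one global flag.
cookies_enabled = True  # off: many sites break; on: wholesale tracking

# What people actually want: a decision keyed on the situation.
CONTEXT_POLICY = {
    ("social_network", "at_work"): "private",  # privacy 'on'
    ("social_network", "at_home"): "open",     # privacy 'off'
}

def privacy_setting(service: str, situation: str) -> str:
    """Look up the preferred privacy posture for this context; default to private."""
    return CONTEXT_POLICY.get((service, situation), "private")

print(privacy_setting("social_network", "at_work"))  # private
print(privacy_setting("social_network", "at_home"))  # open
```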

In the digital age our privacy settings are either ‘on’ or ‘off’; but people assess their privacy according to the context they are in when they are online

There are signs that a more contextually sensitive approach to privacy is emerging. For example, the Circles feature of Google+ provides a more layered way of controlling who gets to see what on a social network. However, the code layer is still in the process of developing. Emerging technologies such as the Internet of Things (where objects such as fridges, cars and light bulbs are embedded with computational capabilities) will push the code layer of privacy further; privacy will no longer be about only the data collected and stored on devices like mobile phones and computers but also about the data collected by seemingly innocuous household objects.

Avoiding today’s privacy pitfalls: no surprises

As we have seen, each additional layer of privacy has been a response to the invention of a new technology, such as the chimney, the camera and, most recently, digital technologies. It is not fanciful to suggest that privacy is created by technology, which protects people’s privacy in some situations while threatening it in others. Saying that technology creates privacy goes against the grain of the oft-heard mantra of technologists that “privacy is dead”: Mark Zuckerberg famously pronounced in 2010 that privacy was “no longer a social norm.”

In fact, what is happening is far more nuanced: our ideas about privacy are simply continuing to evolve in light of the new social technologies. To explore what this means for building an approach to privacy that fits with current consumer attitudes towards it, we analyze here the new phenomenon of the ‘creep factor’ that is fast becoming the headline in today’s media stories about privacy.

The creep factor is the name given by journalists to the feeling consumers 4 report after experiencing an infringement on their privacy. It has recently been said that the creep factor is caused by people feeling as though they are being spied upon. We argue instead that this is about context: that what lies at the heart of the creep factor is actually the surprise consumers feel when personal information shared in one context pops up unexpectedly in a different context (see panel).

Our ideas about privacy are simply evolving in light of the new social technologies

Helen Nissenbaum, Professor of Media, Culture and Communication, and Computer Science at New York University, describes 5 the tendency of technology companies to move the information they hold about people between contexts in a relatively unconstrained way, with little regard for the context in which that information was first shared or the social norms that govern what is considered appropriate behavior in that context. This explains the recent outrage sparked by the ‘Girls Around Me’ app, which took information from one social situation (people openly sharing their location and personal profile within their social networks) and made it available in a different context (an app used by strangers on their mobile phones). All the information was publicly available already, and the designers did nothing illegal in accessing it, but the shift in context made the app seem creepy.

The implication is that, depending on the context, there are many sets of social norms to be understood, not just one. In order to avoid the creep factor, companies must make sure that they minimize the surprise caused by the unexpected leakage of personal data from one social context to another.

In other words, companies need a no-surprises approach to privacy if they want to avoid practices that might strike consumers as creepy.
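As a thought experiment, the no-surprises rule can be expressed in a few lines of Python. This is a toy model with invented names, not a production design: data remembers the context in which it was shared, and any flow into a different context is flagged rather than silently allowed.

```python
# Toy model of the 'no-surprises' rule (all names invented): a datum carries
# the social context of its original disclosure, and reuse in any other
# context requires explicit consent instead of being silently allowed.
from typing import NamedTuple

class Datum(NamedTuple):
    value: str
    shared_in: str  # the context of the original disclosure

def flow_allowed(datum: Datum, target_context: str) -> bool:
    """Permit reuse only within the original context."""
    return datum.shared_in == target_context

checkin = Datum("at Cafe X with friends", shared_in="foursquare_friends")
print(flow_allowed(checkin, "foursquare_friends"))    # True: no surprise
print(flow_allowed(checkin, "strangers_nearby_app"))  # False: the creep factor
```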

Privacy controversy 1: Girls Around Me app (2012)

What is it?
An app that allowed users to find nearby women who had ‘checked in’ on a social network, using publicly available data from Foursquare. According to the app, it “scans your surroundings and helps you find out where girls or guys are hanging out.” The app was downloaded over 70,000 times from iTunes before being withdrawn by its Russian developer i-Free following widespread public controversy.

Why was it controversial?
The app was quickly labeled the ‘creepy girl-stalking app’ by the media 6 following reports that Girls Around Me allowed users to stalk women and find out information about them without their knowledge. The popular blog Cult of Mac described the app as a ‘wake-up call for privacy.’ The developer of the app, however, claimed that the reaction was a ‘serious misunderstanding’ because the app did not actually reveal any personal data that the women had not already made publicly available.

What does this tell us about privacy?
The description of the Girls Around Me app as creepy, even though it only aggregated publicly available data, suggests that the creepiness stemmed from the unexpected re-use of geo-location data. Women who had shared their location in the context of Foursquare—for the purposes of letting their friends know where they were or of receiving discounts from local companies—had not expected that the information would then be made available in a completely different context, such as a mobile app used by strangers.

Privacy controversy 2: Facebook’s introduction of the News Feed feature (2006)

What is it?
In 2006 Facebook launched a feature called News Feed which meant that when users logged in, the first thing they saw was an aggregated list of all their friends’ activities—who had become friends with whom, who had been tagged in a photo, who had posted a new status update and who was now single or in a relationship—displayed in reverse chronological order. The feature initially outraged users, and over 70,000 people joined groups such as Students Against Facebook News Feed.

Why was it controversial?
The introduction of the News Feed feature and the subsequent mishandling of users’ concerns was famously described by danah boyd as “Facebook’s privacy trainwreck.” Facebook appeared to be completely surprised by the reaction to the new feature, as the aggregated information was already public, and argued that it made information sharing more efficient. While this was true, the News Feed feature made this information much more visible than it had been; previously it was only possible to see it by actively going onto someone’s profile. boyd claimed that in making the information more visible, the feature fundamentally changed the social dynamic of Facebook—and this is what had upset users.

What does this tell us about privacy?
boyd writes that “information is not private because no one knows it; it is private because the knowing is limited and controlled.” 7 By converging the activities from multiple Facebook profiles into a single News Feed, the feature collapsed multiple social contexts. Because people behave differently in different social contexts, this collapse made people feel that they had lost control over the context in which other people saw their posts, not over the posts themselves. The fact that the News Feed feature has since become an unremarkable aspect of Facebook would seem to suggest that social norms have changed. It is more likely, however, that people have simply adapted to the new environment of Facebook and become more aware that their activities are more visible.

The next layer of privacy: ‘pattern’ privacy

We expect that the next layer of privacy will be characterized by the behavioral patterns of consumers. The sheer growth of so-called ‘Big Data’ is leading to a greater focus on both data analysis to identify patterns and statistical modeling of those patterns to predict future behaviors. The need for this kind of analysis is set to grow rapidly; last year the McKinsey Global Institute projected that, in order to keep up with the increase in data, the United States will need between 140,000 and 190,000 more workers with ‘deep analytical’ expertise.

But what does this mean for privacy? As companies increasingly use large datasets to profile consumers and predict their future behaviors, consumers may find themselves on the wrong side of the assumptions that companies have made about them on the basis of their personal data. Take, for example, developments in the credit industry. In 2009, The New York Times reported 8 that the retailer Canadian Tire had conducted a vast analysis of almost every piece of information it had collected from its shoppers’ in-store credit card transactions. The company found that people who had bought premium birdseed and protective pads for furniture were much more likely to pay their credit card bills on time, but that people who bought chrome-skull car accessories were highly likely to miss their payment schedule. Further analysis revealed that these purchase-history data led to more accurate predictions than the analytics traditionally used by the credit card industry to forecast cardholder risk.
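For readers who want to see the mechanics, the sketch below shows the general shape of this kind of pattern analysis in Python. The data and feature names are invented for illustration; this is not Canadian Tire’s actual model, just a minimal example of scoring payment risk from purchase history.

```python
# Illustrative only: synthetic shoppers, invented features. Shows how a
# purchase-history pattern can be turned into a payment-risk score.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per shopper: [bought_birdseed, bought_felt_pads, bought_chrome_skull]
X = np.array([
    [1, 1, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],  # 'careful' purchase patterns
    [0, 0, 1], [0, 0, 1], [0, 1, 1], [0, 0, 0],  # 'risky' purchase patterns
])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 1 = missed a credit card payment

model = LogisticRegression().fit(X, y)

# Score a new shopper who once bought a chrome-skull accessory as a gift:
# the model cannot tell a gift from a lifestyle choice.
new_shopper = np.array([[0, 1, 1]])
risk = model.predict_proba(new_shopper)[0, 1]
print(f"Predicted probability of missing a payment: {risk:.2f}")
```

The last two lines illustrate the problem discussed below: the model sees only the pattern, never the intent behind the purchase.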

Consumers will want to know how their data is analyzed

We can begin to imagine the future evolution of business models–particularly in the insurance industry–based upon making predictive inferences about people and offering tailored products and services based upon those predictions. As a consumer, though, this becomes problematic if you bought a chrome-skull car accessory at some point in your past and are now invisibly penalized for doing so because of assumptions that companies have made about you on the basis of your purchasing history. Since these assumptions are embedded into the very algorithms and models that companies design to analyze data, it becomes difficult for the consumer to construct and negotiate an alternative, and perhaps more accurate, picture of who they really are.

In this scenario, it is likely that consumers will become protective of their purchasing histories and distrustful of companies that seek to use such data to predict their future behaviors, particularly if companies get these predictions wrong, or if their algorithms are exposed to public view. One can imagine, for example, the redress sought by the consumer who bought a chrome-skull car accessory as a gift for a friend. The implication is that the next layer of privacy will be about more than the kind of personal data companies have access to and how it is stored. What will matter more to consumers is how their personal data is analyzed and how companies make inferences about them as individuals.

Privacy controversy 3: Target supermarket’s targeted advertising

What is it?
The US retail giant Target made headlines recently following an investigation by The New York Times into the way it has been leveraging the ‘Big Data’ it holds about consumers—specifically credit card data and purchase history—to build detailed profiles of the people who shop in its stores.

Why was it controversial?
Retailers have been using data to profile shoppers for years. What made Target’s use of data controversial 9, however, was that it was modeling changes in shoppers’ recent purchases to predict whether female customers had become pregnant. In one instance, this modeling worked so well that Target predicted a teenage girl was pregnant before her father knew.


What does this tell us about privacy?

One of the most interesting revelations to have emerged is that Target’s marketing department had to devise ways of making its targeted advertising for baby products look random. For example, Target carefully placed coupons for lawn mowers next to coupons for diapers in order to make it less obvious that potentially pregnant women were being targeted. This approach proved more effective because it avoided giving shoppers the feeling of being spied upon, a feeling that had been found to reduce use of the targeted coupons. This tells us that the creep factor associated with targeted advertising can be directly detrimental to sales.

Planning for privacy

So how do you assess how privacy fits into your business environment and how it will change?

After all, privacy is a complex construct, influenced by many factors, and it can be difficult to future-proof business plans so they keep up with evolving technological developments and consumer expectations. Planning for change around privacy can be made easier by looking at the work of the American Internet theorist Lawrence Lessig. His work has been influential because it sets out a way of understanding what happens when the status quo is disrupted. Given the significant parallels between how digital technologies disrupted the status quo of copyright and what is now happening with privacy, we can use Lessig’s work as a guide through the complex and uncertain landscape of online privacy.

Lessig sets out a framework that describes four factors influencing a person’s behavior at any one time: laws, social norms, the market and architecture. Each of these acts to constrain—or, alternatively, enable—a person to behave in a particular way. So, for example, laws shape behavior through the threat of punishment, social norms constrain behavior through stigmas imposed by communities, markets constrain through price, and architecture constrains through physical design.

By looking at the interaction of all of these factors, we get a full picture of how privacy will change in the future and consequently how we can plan for it. The following is a model developed by The Futures Company, based on Lessig’s work, to help you plan your thinking about privacy and avoid potential pitfalls.

Eight questions to track changes in the privacy landscape

Architecture

1. What are the emerging technologies that will present the next challenge to consumers’ privacy?

2. How will consumers use new technologies to control who can access their data and for what purpose?

Social norms

3. What are the dominant social norms in the contexts in which consumers share their personal data with your business?

4. How can your business ensure that there are no surprises around data sharing across different social contexts?

Law

5. What new legislation is going to impact how personal data can be stored, tracked or shared?

6. How will the regulation of privacy in one country impact upon the development of legislation in other countries?

Market

7. How would your business be affected by growing consumer mistrust of commercial uses of personal data?

8. What other models for new growth are there that do not depend upon advertising for revenue?

The evolving value of privacy

One way to ensure there are no surprises around privacy is to see it not as a right, but rather as an exchange between people and organizations that is bound by the same principles of trust that facilitate effective social and business relationships. This alternative to the ‘privacy as a right’ approach positions privacy as a social construct to be explicitly negotiated so that it is appropriate to the social context within which the exchange takes place.

Privacy is an exchange which needs to be negotiated between people and business

In many ways, privacy is already an exchange between people and businesses 10, and this is increasing. For example, according to an often-cited quote about Facebook by the media theorist Douglas Rushkoff, “we are not the customers of Facebook, we are the product.” By using Facebook, people give the company consent to use and sell their personal data in exchange for access to a free social networking service. This is fine if consumers accept the trade-off associated with the monetization of their personal data (e.g., Facebook selling information about what people have ‘liked’ to advertisers), but this reciprocal exchange is delicately balanced. If consumers start feeling as if they are being taken advantage of—that the exchange is unfair—then their attitudes will almost certainly harden.

The direction in which attitudes towards privacy will evolve is uncertain. There is plenty of evidence suggesting that the current consumer state of mind about privacy is confused. A survey in the US conducted in 2011 11 shows that 70 percent of people do not trust Facebook with their private information, while 60 percent have actually changed their privacy settings.

62 percent of people are concerned about their online privacy, yet only half said they’d deleted their cookies

Similarly, a recent report published 12 by the Internet Advertising Bureau and ValueClick reveals that 62 percent of people are concerned about online privacy, but only half said they had deleted their cookies within the past six months. This points to a gap between what consumers say and what they do when it comes to privacy: they say their privacy is important to them, but only some take the necessary steps to protect it online.

One reason for this gap is that many consumers are poorly educated about how their personal data is collected by companies and are unsure about what it is actually used for. Investigation into the recent implementation of the EU Cookie Law has highlighted how misinformed consumers in Europe currently are. For example, 81 percent of people 12 who delete cookies do not distinguish between the ‘first-party’ cookies that give a website its basic functionality (e.g., remembering what items the consumer has placed in their shopping basket) and the ‘third-party’ cookies that advertisers place on websites to track users’ browsing.

This demonstrates a basic lack of understanding of how web browsing works. At the same time, 14 percent 12 said they thought the data used to show them relevant ads included information that could identify them personally, while 43 percent were not sure if this meant their identity was known.
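The first-party/third-party distinction mentioned above is, in fact, mechanical, as this short Python sketch shows. The domain names are invented for illustration, and real browsers also consult the public suffix list before matching:

```python
# Naive illustration of the first-party vs third-party distinction: a cookie
# is 'third-party' when its domain is not the site the user is visiting.
# Domain names are invented; real browsers also use the public suffix list.

def classify_cookie(page_host: str, cookie_domain: str) -> str:
    """Label a cookie relative to the page reading or setting it."""
    cookie_domain = cookie_domain.lstrip(".")
    if page_host == cookie_domain or page_host.endswith("." + cookie_domain):
        return "first-party"   # e.g., the shopping-basket cookie
    return "third-party"       # e.g., an advertiser's tracking cookie

print(classify_cookie("shop.example.com", ".example.com"))     # first-party
print(classify_cookie("shop.example.com", "ads.tracker.com"))  # third-party
```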

With such widespread perplexity around the issue of data tracking, and the inability of governments and their agencies to implement prompt and effective solutions, control over online privacy has shifted from individuals and public institutions to the companies who collect data. This shift in control away from individuals towards businesses is set against the backdrop of a growing ‘Big Data’ economy and the development of business models scrambling to collect and monetize consumers’ behavioral data. Indeed, we are witnessing a new gold rush as companies chase the promise of business agility, new insights and areas for growth that Big Data may bring.

How much is your personal data worth?

The first signs of consumers awakening to the value 13 of their data are here. A recent article in The Atlantic reported that your data is worth somewhere between $4.39 (Facebook’s ARPU) and $20 (Google’s ARPU). The first web-based companies are appearing that help users monetize their own data.
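ARPU (average revenue per user) is simply annual revenue divided by the size of the user base. The inputs in the sketch below are illustrative placeholders, not the figures behind The Atlantic’s estimates, but they show the order of magnitude involved:

```python
# ARPU = annual revenue / number of users. Inputs are illustrative
# placeholders, chosen only to show the order of magnitude involved.
def arpu(annual_revenue_usd: float, users: float) -> float:
    return annual_revenue_usd / users

# A service earning $3.7bn a year from 845 million users:
print(f"${arpu(3.7e9, 845e6):.2f} per user per year")  # $4.38
```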

Personal

Personal is a new web and mobile service offering 14 individuals a secure data vault in which to store all their data. Personal then metes the data out to advertisers in exchange for discounts for the individual. Data paired with purchase intent brings greater efficiency for advertisers, while consumers feel they benefit from sharing more about themselves in a setting where they control exactly what is divulged.

The rapid development of technological devices and software 15 has proved hard for consumers and legal systems alike to keep up with, while fuelling businesses’ pursuit of larger and richer stores of data. This data rush means that, according to IBM, we now create 2.5 quintillion bytes of data, the equivalent of 625 million DVDs, every day—and this number is only growing. Arguably, the biggest change in technological infrastructure in the next four to five years will be the 4G network, set to be rolled out in 2016 (for more on this see The Futures Company’s Future Perspective Technology 2020). The effect will be to encourage many more people to use location-based services to interact with nearby friends and to receive tailored and targeted offers from companies in particular localities. In short, the potential will soar for consumers to share far more data with both the people and the objects around them, whether they want to or not.
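As a quick sanity check on IBM’s comparison, a couple of lines of Python confirm that the two numbers are consistent with the capacity of a single-layer DVD:

```python
# 2.5 quintillion bytes spread across 625 million DVDs implies ~4 GB per
# disc, close to a single-layer DVD's 4.7 GB capacity.
bytes_per_day = 2.5e18
dvds_per_day = 625e6
print(f"{bytes_per_day / dvds_per_day / 1e9:.1f} GB per DVD")  # 4.0
```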

At present, companies view the collection of large quantities of data as a transaction, but it is not an equal one. Consumers are giving up information without knowing the contractual terms of the agreement, whom the information will be passed on to, how much of it is made anonymous, or what it is worth.

Companies bind consumers into a long yet unenforceable contract sealed by a one-sided handshake

The outcome is a market in which companies bind their customers into a long yet unenforceable contract sealed by a one-sided handshake. We are already seeing the first signs that consumers are awakening to this unequal transaction. Some are questioning the value of their data and why they should be giving it away, in many instances for free. In a world where ethical business practices and transparency are increasingly valued, it will be the businesses that create and share standards for the use of consumer data that build trust and ensure their data supply is not cut off or corrupted by consumers in the future.

In the scramble to turn data into money, several businesses have found themselves at the center of controversies highlighting breaches of privacy (some of which have already been explored in this report). One of the earliest, and most infamous, controversies was the Sony BMG rootkit scandal of 2005. The addition of rootkit software to millions of music CDs, designed to monitor how many copies of the music owners made, caused an uproar. The initial response from Thomas Hesse, then president of Sony BMG’s global digital business 16, became infamous as one of the worst misreadings of the consumer mindset: “Most people don’t even know what a rootkit is, so why should they care about it?” Sony damaged its reputation further last year when it announced that millions of customers’ personal data had been stolen from its PlayStation Network.

Sony’s inability to pinpoint the exact type of data that was stolen enraged customers even more and opened its data security policies to public scrutiny. These two privacy failures, one in collecting data and one in securing it, are permanent blemishes on Sony’s reputation. The more recent blunder raised questions over why a proper risk assessment had not been carried out and appropriate contingency plans put in place.

Data is ‘bits of people’ and aspects of their identity; companies that understand this will be winners

Sony should not be singled out, though. In recent years many large data leaks have been reported, from stolen email addresses on sites such as LinkedIn to governments’ loss of records.

All the controversies, while different in circumstance, are united in consequence: a loss of consumer trust and long-term reputational damage.

At the heart of the problem of online privacy are companies’ attitudes towards data. Take, for example, the phrase ‘data is the new oil’ 17, increasingly heard in business blogs and at conferences such as SXSW. It positions consumers’ personal data as a commodity from which value can be extracted. But it also indicates why corporate thinking about data is misaligned, for you are unlikely to hear consumers talking about their personal data in the same way. And this is where the problem lies: the way companies think and talk about personal data is vastly different from the way most consumers do.

Consumers, in short, don’t think of their data as data; instead, their personal data is actually bits of people and bits of their identity—their name, their birthday, what they like, what they ate for dinner yesterday and so on.

The ramifications of the loss of personal data for an individual can be devastating. Identity fraud is at its highest-ever level and so, for many, a stolen piece of data represents a stolen piece of themselves. Until companies stop viewing data as a commodity and start seeing it more humanely as ‘bits of people,’ they will fail to handle it with the care and respect consumers expect.

Acting on privacy

1. Think beyond information to people

Avoiding privacy pitfalls in the future means changing your mindset about data now. Start thinking about the personal data your company collects, stores or has access to as ‘bits of people’—not just as abstracted information. Switching to a mindset like this will naturally encourage a more responsible and sensitive approach to personal data usage.

2. Context is everything

Most privacy controversies occur because of consumer surprise at finding out that their personal data have leaked from one social context to a different, and unexpected, context. In the future, the surprise will increasingly come from consumers finding out that companies have made erroneous (or just plain ridiculous) inferences about them as a person on the basis of their personal data. Companies are already starting to make greater claims about their ability to predict consumers’ future shopping, social networking or lifestyle behaviors through the extended use of algorithms and modeling techniques. In doing so, companies will need to be aware of the risks that come with this ‘presumption of pre-consumption.’ What we mean by this is that, just like in the film Minority Report, where the ‘pre-crime’ detection system was used to arrest people before they had committed a crime, companies which claim to predict the pre-consumption intentions of consumers will face greater scrutiny than they do now. If companies make inaccurate predictions about consumers, the products or services they provide will simply be irrelevant. If, however, companies make accurate predictions (as Target did in the case of the pregnant girl), they run the risk of taking the creep factor to a whole new level.

Companies which try to predict the intentions of consumers will face the most scrutiny

3. Be ahead of the transparency wave

The companies that are first to be transparent about what personal data they collect about consumers, and how they use it, will stand to gain the most consumer trust. It is probable that privacy will develop in the market in much the same way that sustainability has, where companies that started earlier, such as The Body Shop, took on and retained a positive aura. The big technology companies are already beginning to do this. For example, Google—a company that has much to gain from securing the trust of its users—launched its Good To Know campaign earlier this year in a public effort to educate people about their online personal information security.

4. Re-balance the one-sided handshake

It is safe to say that many of the potential uses of personal data have not yet been fully explored; we really are at the frontier of finding ways to extract value from the data already available. However, there is a real risk that this innovation will be stifled if public attitudes harden as a result of the current imbalance between consumers and companies over control of data. Companies therefore need to start thinking about how to redress that balance. Possible ways include:

  • Giving consumers the right to view, correct and delete the personal data stored about them (a minimal sketch of such an interface follows this list)
  • Opening up access to old data, in a similar way to the public records system in the UK, and giving consumers data analysis and visualization tools that allow them to find interesting patterns and stories about themselves
  • Helping consumers see how good you are at profiling them, and giving them the opportunity to help you profile them better.
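The sketch below shows, in Python, the minimal shape of the first idea. Everything here is hypothetical (an in-memory store standing in for a real system of record), but it makes the three consumer rights concrete:

```python
# Hypothetical in-memory sketch of view / correct / delete rights over the
# personal data a company holds; a real system would sit over its databases.
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    records: dict = field(default_factory=dict)  # user_id -> {field: value}

    def view(self, user_id: str) -> dict:
        """Show the consumer everything held about them."""
        return dict(self.records.get(user_id, {}))

    def correct(self, user_id: str, key: str, value) -> None:
        """Let the consumer fix an inaccurate entry."""
        self.records.setdefault(user_id, {})[key] = value

    def delete(self, user_id: str) -> None:
        """Honour a deletion request by removing the whole record."""
        self.records.pop(user_id, None)

store = PersonalDataStore({"u1": {"bought_chrome_skull": "lifestyle choice"}})
store.correct("u1", "bought_chrome_skull", "gift for a friend")  # redress
print(store.view("u1"))
store.delete("u1")
```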

5. Know where your advice comes from

Much of what is said or written about privacy is driven by people with an agenda, whether media or technology companies that would rather understate consumer concerns so as not to call attention to the vulnerabilities of how their businesses operate, or privacy experts in academia who stress the areas of tension that correspond most closely to their own ideological standpoint or the focus of their next research grant. You have to make up your own mind about the future of privacy and how it will affect your organization and category, and this requires independently monitoring and making sense of trends across Lessig’s four categories: architecture (technology), law (regulation), social norms and the market. Within each of these there will be technological, legal and consumer ‘experts’ who can keep you informed of possible future developments. Listen to their advice, but don’t count it as gospel—privacy is too contested an issue for you to risk hearing only one side of the argument.

Questions we can help you answer


What are the major trends that are going to affect the future of privacy?

Using the Lessig privacy planning framework introduced earlier in this report, we can scan for trends and future developments in technology, regulation, social norms and business. We do this by building on our proprietary knowledge base of macro drivers and trends, supplemented by a horizon scan of trends specific to your category or organizational operating environment.

What are the shocks and pitfalls of which I need to be aware?

Scanning for emerging issues and wildcards across the four Lessig privacy planning categories of technology, regulation, social norms and business/market. We can then use a series of workshop exercises to test your organizational response against these shocks, and advise on what you should do to make your organization more resilient and responsive.

How are my consumers’ attitudes to privacy going to change in the future?

Future-facing qualitative and quantitative research with leading-edge users or consumers to identify pain points and emerging issues related to privacy. This can be used to anticipate shifts in consumer attitudes and behavior before they present a significant threat. Equally, though, this kind of research is also likely to identify new sources of value or innovation opportunities related to the use of your customers’ data, moving you one step closer towards redressing the one-sided handshake.

What can I do now to be better prepared for how privacy is going to change my organization’s operating landscape?

Developing a ‘map’ of the future of your category or operating landscape, showing the major contextual issues and uncertainties related to privacy that will shape the future over a given time horizon. We can then use this map of the future operating landscape to identify innovation opportunities and plan your organizational response to likely future changes now.

“Never click on Amazon Recommends. What Amazon have done is remember everything you’ve ever bought, every page you’ve ever looked at, ... and come up with an accurate mathematical picture of your very soul. Never click on that unless you’re very sure within yourself as a person of who you are.”

 

Chris Addison, British standup comedian, Live, 2011

Privacy was written by Anita Beveridge, Chloe Cook and Andy Stubbings. Design by Tania Conrad.

 

