When Shoshana Zuboff returns my call 15 minutes late, it’s because her previous call with an organization in Israel dropped halfway through and it took them a while to reconnect. Such is the peril of functioning in quarantine, even as tech companies exert more power than ever.
Rather than having time over the summer to reflect and plan her next book as she intended, Zuboff has been very busy with people wanting to speak with her and do virtual events. It’s part of the reason that for the last four months we’ve been trying to schedule a call, only to have the date repeatedly pushed back.
Birds are chirping in the background as we speak over the phone, part of the ambience of Zuboff’s home in the country. She says she’s lucky to be there, given the challenges her friends face balancing COVID-19 and living in cities. The birds beat the dystopian jingle of ice cream trucks as they rove New York City, looking for customers amid a pandemic.
“Pandemic life just takes so much time,” she says. “Between figuring out how to get groceries and everything else, it is just so painstaking.”
Zuboff is the author of “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power,” and the Charles Edward Wilson Professor Emerita at Harvard Business School. Zuboff says the book (which is 660 pages long) “synthesizes years of research and thinking in order to reveal a world in which technology users are neither customers, employees, nor products. Instead they are the raw material for new procedures of manufacturing and sales that define an entirely new economic order: a surveillance economy.”
Zuboff and I speak about the framework of surveillance capitalism. But I’m keen to hear her views on the roiling protests in the U.S., and President Donald Trump’s executive order on Section 230, a law that affords social media companies immunity from content liability, with which the president has taken issue. It feels like a good time to think about the context the internet gives to these events, and who controls it.
This conversation has been edited for length and clarity.
Describe surveillance capitalism and what that means for people who might not be familiar with it.
Surveillance capitalism was invented at Google between 2000 and 2001 as a response to the financial emergency during the dot-com bust. They were the smartest guys with the best search engine and the swankiest venture capital investors. But even they came under the gun with their investors threatening to withdraw. At that time they decided they had to find a fast track to monetization, and it was going to have to be through advertising, which they’d rejected previously.
They discovered that the leftover behavioral data on their servers, called data exhaust, was actually full of rich predictive signals. Those predictive signals were just lying around unused, beyond what was needed for product or service improvement. I call this data behavioral “surplus.” By training their already highly sophisticated analytical capabilities on these surplus flows and pulling out those predictive signals, they discovered that they could predict what kind of ad somebody was likely to click on and whether they would click through to the website. That became what we now know as the “click-through rate.”
The click-through rate is a computational product that predicts a fragment of human behavior. It turned out that there was a very substantial market of business customers who wanted to know what people will do, who wanted behavioral predictions of customer and user behavior.
So advertisers and their clients surrendered the traditional relationship between a product and its ad, where a company decides where to place its ads based on alignment with its brand values. Even the first years of online advertising maintained that continuity. But Google made them an offer they couldn’t refuse, and they agreed to it after quite a bit of debate and conflict. They agreed to buy the product without asking to see what was inside Google’s black box, and to let the machines decide where the ads go.
How does this model expand to enmesh almost all of the internet?
This is not just an accident that happened at Google. This is an economic logic that was so successful at Google that within just a few years, it became the default model throughout the tech sector and then spread through the normal economy and has become the dominant economic logic in our time.
Between 2001, when this logic first started being systematically applied, and 2004, when Google went public (the first time we got to see any of their numbers) their revenue increased by 3,590%. That exponential increase represents what I call the surveillance dividend. At that point, they had cracked the code and many companies found a path to monetization. Now everybody from your TV manufacturer to Ford Motor Company started to say “to heck with the product, we want the data.” Everyone in every sector is chasing the surveillance dividend.
There is a story about the top young folks at Google sitting around in an office in 2001, trying to answer the question: “What is Google?” And nobody had a cogent way to answer that question. Larry Page ultimately began to share his thinking, and what he said was that if Google had a business, it would be personal information. People are going to produce so much data. There will be cheap cameras and sensors everywhere. There will be so much data about people’s lives that all of human experience will be searchable and indexable. He had the vision that personal information was the game. Surveillance capitalism is an economic logic founded on the unilateral, secret theft of private experience as a limitless source of free raw material, and that free raw material becomes the zero-cost asset [meaning that, after set-up costs, it is free to produce]. It can be translated into behavioral data. That behavioral data is now claimed as proprietary and gathered into new, complex supply chain ecosystems.
Everything feeds the supply chain. Not only what you do online, but everything on your phone, all the apps on your phone, and, as Page predicted, all the cameras and sensors are gathering data. All of this behavioral data is now claimed as proprietary and flows into complex ecosystems before being conveyed to surveillance capitalism’s computational factories, called artificial intelligence. The [output] is computational products that predict human behavior, sold in markets just like we have markets for pork belly futures or oil futures.
What does this mean for people’s daily life?
Human futures markets have competitive dynamics. What the actors and the sellers in these markets are competing on is certainty. They’re selling certainty to their customers and the best predictions win. We had some insight into these factory hubs a couple years ago with a leaked Facebook document in 2018. The document revealed that in Facebook’s AI hub, trillions of data points are ingested every day and 6 million predictions of behavior are produced every second. So this is the kind of scale that we’re talking about. When we think about the competition in these prediction markets, and you kind of deconstruct that competition, you begin to see the economic imperatives at work here very clearly.
The first one is scale. For AI to be effective in producing predictions, it needs a lot of data. The second one is scope. In addition to volume, you need variety. That involves getting people off their desktop, off their laptop, and out into the world and getting them moving around their house, in their cars, through their cities. Give them a little computer, they can take it in their pocket and it will tell us everything they’re doing. We’ll call it a phone. Those are economies of scope.
The final discovery was that the very best predictive data comes from digitally intervening in people’s behavior and learning how to tune and herd their behavior in the direction that maximizes the strength of their predictions and therefore maximizes customer outcomes. This became a new zone of experimentation. The extraction scale is huge, but conceptually straightforward. The scope is huge but has required a lot of invention. Facebook, for example, is now working on how to translate brainwaves into language.
How do we actually modify behavior in the direction that optimizes revenue flows? This is not as straightforward. This is a new zone of experimentation, and so the companies went to work experimenting with it. Things like Facebook’s massive-scale contagion experiments, and things like Google’s Pokemon Go, the augmented reality game that experimented with how to herd people through their cities, towns, and villages to the establishments that were paying Niantic Labs (which made Pokemon Go and was spun off of Google) for guaranteed footfall. This is exactly the same structure as the online ad markets, where advertisers pay for click-through rates; now you have a real-world establishment paying for guaranteed footfall.
This is what data scientists call the shift from monitoring to actuation. That’s when you actually have enough knowledge about a machine system to be able to control it remotely and automate it. You can change the parameters or do whatever you need to do remotely, because monitoring has given you so much information about the system. This is the arc that surveillance capitalism is traveling: not only to know everything and use it for prediction, but to actuate human behavior, social behavior, and individual behavior, driving behavior in the direction that is optimal for revenue.
We see this in psychologically based microtargeting. We see this in the real-time use of rewards and punishments, delivered through your phone. We see this through the importation of gamification in order to point people in the direction that satisfies commercial outcomes. Pokemon Go was an example of that. The point is that when people think about these issues, they just think about targeted ads. They think this is just about advertising. It no longer is. This is about your insurance company rewarding and punishing you in real time for the amount of pressure that your foot places on the gas pedal. In real time, it can raise or lower your premiums based on your immediate behavior.
So what’s the end game in this scenario? You reference Sidewalk Labs’ previous experimentation with Toronto as a “smart city” that exchanges data for all sorts of privileges. What does that look like?
Such an experiment replaces decisions that citizens make about how they want to live together, which are the building blocks of every democracy. The citizen has no role other than just to be part of this larger system. And these companies say if you agree to give us all your data and make your life completely accessible to us in every way, then you will be eligible for all these cool new services.
If you choose privacy and anonymity though, you will be excluded from the service offerings. You won’t be able to take advantage of the new transit systems or the new security systems or the food delivery systems. These are the real-time rewards and punishments in action. Google spoke about using data to construct reputation scores. People and businesses that behave within the algorithmic parameters get higher reputation scores and that privileges them when it comes to bank loans or other kinds of services. People who violate the algorithmic parameters are punished because they’re excluded from these kinds of relationships and services, and they can’t advance their lives because they’re excluded.
This is a vision of a future: a private corporation with unaccountable power. It’s a future where we don’t have the great democratization of information that we expected in the digital century, but just the opposite. We revert to a feudal pattern with these huge concentrations of knowledge and this new kind of power.
This power is not soldiers coming to your house in the middle of the night and whisking you away to the gulag. This is not violence and terror and murder. This is power that operates remotely through the milieu of digital instrumentation. For anyone who thinks that such systems are only the subject of “Black Mirror” episodes, go and read the history of the 20th century, where it took the entire Western alliance to beat back another kind of totalizing power that wanted total control over individuals and society: totalitarianism. This is different because it tends to come bearing a cappuccino rather than a gun.
How might Trump’s executive order attacking Section 230, which absolves companies from civil liability for online content, impact this, if at all?
Disinformation is a routine consequence of the economic logic that we have just discussed. It’s a consequence of the imperatives of economies of scale and economies of scope. All systems have been engineered right from the start to maximize supply chain flows. In the euphemistic language of the surveillance capitalists, it is engagement. There is no room in this economic logic to judge the quality of supply. It doesn’t matter. Scale matters. Scope matters. Actuation that allows us to increase the accuracy of prediction matters. That’s all.
This is what I call radical indifference. We don’t care if you’re happy or sad. We just care that we can get the data. We don’t care if you have cancer, if you’re getting married or if you’re planning a terrorist attack; we just care that we get the data. Radical indifference is about maximizing flows of data, not because these are evil people, but because this is the compulsion of this economic logic. Until we interrupt and outlaw that economic logic, we will have disinformation.
The nature of the human being is if you’re driving down a road, and there’s a car accident, you’re gonna stop and look. If you’re driving down the road, and there’s a beautiful willow tree, you’re gonna keep driving. It turns out that violent, contentious, hateful, rabble-rousing and mendacious content gets people to stop and look. That’s the car wreck.
Because the systems are engineered to maximize supply, and because people stop and look at car wrecks, it enables armies of bots and trolls. That’s Mr. Trump.
Section 230 had no way of anticipating surveillance capitalism. There’s no incentive to take down bad stuff and massive incentives to keep the supply chains full. It turns out that the internet is not a bulletin board, as the creators of Section 230 envisioned. The internet is more like the bloodstream of the global body politic. Thanks to the economic imperatives of surveillance capitalism, the people who own and operate the internet are incentivized to allow anybody to put any kind of poison into the bloodstream without an antidote. That’s where we are today.
So does Section 230 need scrutiny? Yes, but it needs scrutiny as part of a larger discussion about legislative frameworks, regulatory paradigms, charters of rights or the institutions that we need to make the internet compatible with democracy.
This is the third decade of the digital century. We have to figure this out. Mr. Trump is coming along and shining attention on Section 230, which one might think is a good thing, but now here we have the second whiplash: Mr. Trump is fighting for the right to put poison at will into the global bloodstream. He’s fighting for the right to lie. He’s fighting for the right to put counterfactual information into the body politic.
We need to construct a rule of law compatible with democracy that addresses these core questions of surveillance capitalism and who owns and operates the internet. We need to do it so that we make the internet safe for truth. Not safe for lies. There are areas where there’s opinion, but there are areas where there are facts. Right now we have a global bloodstream in which there is no institutional operation that comes under democratic protection and democratic oversight. This has made our democracies untenable.