Senator Josh Hawley’s just-published The Tyranny of Big Tech (Regnery, 2021) raises important issues. Hawley asks, for example, Do Facebook, Twitter, and YouTube censor views that their managers do not like? It seems clear that the answer is yes. Many people who have tried to post Facebook comments that criticize the “official” line on covid-19 have had their posts removed and have been sentenced to “Facebook Jail.” YouTube removed a popular video by Tom Woods arguing that lockdowns and masks are ineffective. What, if anything, should be done about this? Hawley notes also that these media giants often rely on government aid to enhance their power. He embeds his discussion in a larger argument about independence and self-government in the American tradition, which I think is mistaken but nevertheless of considerable interest. I hope to address some of the book’s good points elsewhere. What I’d like to talk about in this week’s article is a common fallacy that Hawley falls into.
He starts from an indisputable fact: people spend a great deal of time on Facebook and YouTube. He adds a more disputable premise, one I’m not going to question, namely that people are spending “too much” time on these media and that other uses of their time are “better.” He goes on to suggest that people are addicted to using these media, and this is what I’d like to challenge.
“Addiction” suggests that people find it difficult to stop a particular sort of behavior. In some cases there may be pains from withdrawal, as habitual smokers who attempt to “kick the habit” report. The addict, it is claimed, doesn’t really have free choice over whether to continue. Maybe in some sense he could quit (and in fact many people do give up smoking), but it takes extraordinary “willpower” to do this.
Hawley applies the addiction model to the media giants in this way. He says that their “algorithms”—a key word in the book—use the vast amounts of information collected by tracking consumers’ internet use to predict what people are likely to purchase. On Facebook, these items appear on consumers’ pages, and—sure enough—people buy them. On Google, the advertisements you see when you type in a search are based on the same algorithms. What better proof of addiction do we need?
Hawley says this:
Google developed a formula, a series of mathematical algorithms, for predicting which consumers would click on which ads, which consumers would make purchases, and what they would buy…. With its massive and ever-growing store of user information, the company could direct tailored ads to individuals that its data machine suggested would have a high probability of leading to a purchase. Which in turn led to a profit. (p. 65)
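To make the picture concrete, here is a minimal sketch of the kind of model a passage like this describes: a logistic scorer that turns tracked behavior into a purchase probability. This is an illustration only; the feature names, weights, and numbers are hypothetical, not anything from Google or from the book.

```python
# A minimal sketch (not Google's actual system) of the kind of
# click/purchase prediction Hawley describes: a logistic model that
# maps tracked user behavior to a purchase probability.
# All features, weights, and data below are hypothetical.

import math

def purchase_probability(features, weights, bias):
    """Logistic model: P(purchase) = 1 / (1 + e^-(w.x + b))."""
    score = bias + sum(weights[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-score))

# Hypothetical per-feature weights, as if learned from past behavior.
weights = {
    "visited_product_page": 2.1,    # strong signal of interest
    "searched_related_term": 1.3,
    "clicked_similar_ad": 0.8,
    "days_since_last_visit": -0.2,  # interest decays over time
}
bias = -3.0  # base rate: most ad impressions do not convert

# One hypothetical user's tracked activity.
user = {
    "visited_product_page": 1,
    "searched_related_term": 1,
    "clicked_similar_ad": 0,
    "days_since_last_visit": 2,
}

p = purchase_probability(user, weights, bias)
print(f"Predicted purchase probability: {p:.2f}")
```

Note what such a model does and doesn’t do: it outputs a probability used to decide whether an ad is worth showing. Nothing in it compels a purchase; a high score is still only a forecast.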
This isn’t a case of “addicting America.” Unlike genuine cases of addiction, there is no evidence that if you don’t buy the products the algorithm predicts you will, you will experience any untoward effects. You won’t have “withdrawal” symptoms; all that has happened is that the algorithm predicted incorrectly.
But maybe this objection demands too much of Hawley. Even if failing to buy the products produces nothing akin to withdrawal symptoms, isn’t the algorithm’s high rate of predictive success enough to show that something bad is happening?
No, it isn’t. This response wrongly assumes that a successful prediction about what you will do takes the matter out of your control. The thought is this: if I predicted with very high probability that you would do something, and you did it, how could it have been up to you whether to do it?
We can see that something is wrong with this argument by looking at an example. I predict that when you read this article, you won’t respond by smashing your computer screen, even if you don’t like the article. Does it follow that you had no choice in the matter? If you had intended to smash the screen, would you have found your hand blocked? No—you didn’t smash the screen because you didn’t want to. What can’t be true is the conjunction: I correctly predict that you will smash the screen, and you don’t smash the screen. If you smash the screen, the prediction was true; if you don’t, it was false. But it doesn’t follow that if you smashed the screen, you had to smash it, i.e., that the prediction had to turn out true. It’s just that it did turn out true. Hawley is making a big deal of the tautology that a successful prediction isn’t false.
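The underlying slip is a familiar scope confusion, which can be put compactly in modal notation (my gloss, not Hawley’s):

$$\Box\bigl(\text{the prediction of } S \text{ is correct} \rightarrow S\bigr) \quad \text{does not entail} \quad \bigl(\text{the prediction of } S \text{ is correct} \rightarrow \Box S\bigr).$$

It is necessary that if the prediction that you will do S is correct, then you do S; it does not follow that if the prediction is correct, then you do S necessarily. The first places the necessity on the conditional as a whole, the second on the act itself, and only the second would threaten free choice.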
If Hawley has correctly described the algorithms, they enable advertisers to offer you products that you want. Why is this a problem? Here it is important not to be sidetracked by another issue Hawley raises, one that has more substance. Do social media companies have the right to subject users to surveillance in order to gather the data on which the algorithms are based? Have consumers consented to this? There is a reasonable case that they haven’t, but this is a separate question, and it does nothing to rescue the claim that successful predictions about what you will buy take away your freedom.
There is another problem in what Hawley says. He says that “Big Tech’s business model is based principally on data collection and advertising, which means devising ways to manipulate individuals to change their behavior” (p. 5). What actually happens, according to Hawley’s own account, is that the advertisers offer consumers products they have good reason to think they want. This is not “manipulating” people, which suggests that the advertisements induce desires in people that they didn’t already have. To the contrary, the algorithms are based on data about already existing consumer preferences. The algorithms don’t create the data on which they are based. “Change their behavior” is also misleading. If you buy a product, your behavior has changed: you weren’t buying it before you bought it. But just as he did in the prediction case, Hawley has turned a tautology—buying something is a change—into something that sounds sinister. I hope his readers aren’t fooled by his attempt to manipulate them.