The Social Dilemma and the naming/knowing dichotomy
If there were a nut graf for the Netflix documentary The Social Dilemma, it would come near the beginning, when Justin Rosenstein observes that there is no single succinct problem with tech and social media. Not long after, Tristan Harris agrees: “There’s a problem happening in the tech industry and it doesn’t have a name…Is this normal or have we fallen under a spell?”
At its core, the movie is about the naming of a phenomenon. A story Richard Feynman once told about his father helps to make the point:
He had taught me to notice things and one day when I was playing with what we call an express wagon, which is a little wagon which has a railing around it for children to play with that they can pull around. It had a ball in it – I remember this – it had a ball in it, and I pulled the wagon and I noticed something about the way the ball moved, so I went to my father and I said, “Say, Pop, I noticed something: When I pull the wagon the ball rolls to the back of the wagon, and when I’m pulling it along and I suddenly stop, the ball rolls to the front of the wagon.” and I says, “why is that?” And he said, “That nobody knows,” he said. “The general principle is that things that are moving try to keep on moving and things that are standing still tend to stand still unless you push on them hard.” And he says, “This tendency is called inertia but nobody knows why it’s true.” Now that’s a deep understanding – he doesn’t give me a name, he knew the difference between knowing the name of something and knowing something, which I learnt very early.
To borrow Feynman’s language, The Social Dilemma aims to name a thing, with that thing being the harm from social media. But it is less clear that we know anything about those harms as the credits roll.
Take, for example, Shoshana Zuboff, who advances the theory from her book The Age of Surveillance Capitalism. There Zuboff argued that users create behavioral surpluses, which social media companies craft into prediction products that are then sold on behavioral futures markets. In my previous review of Zuboff’s book, a deeper dive into the subject, I noted what a large category error this conception was:
By way of background, futures markets are typically contrasted to spot markets. Futures markets deal with products or commodities that are delivered in the future and spot markets deal with products that are delivered immediately. The ad markets underlying Google and Facebook are best understood as spot markets, where advertisers bid on clicks and views at the time that they occur. Advertisers might buy a large number of ads all at once that are placed over time, but ad inventory and placement doesn’t magically turn a spot market into a futures market.
If you aren’t convinced these markets are spot markets, consider what it means that “many companies are eager to lay bets on our future behavior.” If I make a bet about the future price of corn in a futures market and it doesn’t work out, then I pay up. I don’t say this lightly. I did very poorly in my graduate financial economics final because I didn’t properly price corn futures. That brings me to my snarky question for Zuboff: What’s the Iron Condor for surveillance capitalism?
The act of naming matters, especially since Zuboff is taking a concept from another discipline. Still, it is a huge category error to call these markets *futures markets* and to compare them to pork futures as Zuboff does. They are very clearly spot markets. Regardless, her work, like the work of the movie, is to give a name to a thing, however misapplied it may be.
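The spot/futures distinction above can be made concrete with a toy payoff calculation. The sketch below is my own illustration (the prices and quantities are hypothetical, not drawn from Zuboff or any ad platform): in a spot transaction you pay the current price and the trade is done, while a futures position settles against where the price actually ends up, and a bet that "doesn't work out" means paying up.

```python
# Toy sketch of the economic difference between a spot purchase and a
# futures position. All numbers here are hypothetical illustrations.

def spot_cost(price_now: float, quantity: float) -> float:
    """In a spot market you pay today's price and the exchange settles now.
    Ad auctions resemble this: the advertiser pays for the click or
    impression at the moment it occurs."""
    return price_now * quantity

def futures_pnl(entry_price: float, settle_price: float, quantity: float) -> float:
    """In a futures market you commit today to a price for future delivery.
    Your profit or loss is the gap between that price and where the
    market actually settles -- if the bet goes wrong, you pay up."""
    return (settle_price - entry_price) * quantity

# Spot: 1,000 clicks bought at $2.50 each, paid for as they happen.
print(round(spot_cost(price_now=2.50, quantity=1000), 2))

# Futures: long 1,000 bushels of corn at $5.00; the market settles at
# $4.60, so the position loses money and the holder must pay.
print(round(futures_pnl(entry_price=5.00, settle_price=4.60, quantity=1000), 2))
```

The point of the sketch is the second function: a futures market creates a symmetric obligation that settles later, which is exactly the feature ad markets lack.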
The naming/knowing dichotomy was also picked up by Antonio Garcia-Martinez. After hearing Zuboff say that “This is a world of certainty,” he quipped, “Then why am I, crusty ad tech veteran, building probabilistic models all day?” Garcia-Martinez then touched on the issue more directly:
her titular catchphrase “surveillance capitalism” has become a sort of reverse shibboleth: if anyone uses it, you’re absolutely certain they’re outsiders to this world and have no idea what they’re talking about. In a way, it’s a time-saving convenience.
The entire movie has this reverse shibboleth problem. It is too on the nose. The structure of The Social Dilemma is a hybrid, combining interviews with tech critics and former tech executives alongside a story about a fictional family trying to cope with tech problems. Each of the family members embodies a tech failing, in what seems to be a close reading of Wittgenstein’s family resemblance concept, that “things which could be thought to be connected by one essential common feature may in fact be connected by a series of overlapping similarities, where no one feature is common to all of the things.” At one point, the movie cuts to a shot of Mad Men actor Vincent Kartheiser playing the personification of a villainous AI algorithm. The scene is cartoonish as Kartheiser and his henchmen toy with one of the main characters in what seems to be a spaceship.
The scene rightly garnered ridicule online, but to lay audiences, the personification of an algorithm is powerful imagery. People think algorithms have an interior life like the one shown on screen. People don’t see Facebook or Twitter as tools, or, in the traditional language of philosophy, as objects. Instead, they often consider these sites subjects of their own. Facebook and Twitter seem to be alive. [For a good treatment of social media as a tool, see Niall Docherty’s “More than tools: who is responsible for the social dilemma?”]
As I explained elsewhere, this tendency should be called techno-animism:
Animism often conjures up the regressive image of a “primitive religion” where “trees, mountains, rivers and other natural formations possess an animating power or spirit.” But animism should be understood as a knowledge system, a way of relating to the world, which stands in contrast to our normal modes of knowing with a subject and an object.
Botanists might collect a specimen to categorize it, sort it, and place it within a larger system of knowledge. Here, the botanists are the subject and the trees are the objects. But that isn’t the only way of understanding. Animists “talk with trees” to understand the tree’s relationship in their world. Instead of subjects and objects, trees are understood in animism as subjects of their own. As anthropologist Nurit Bird-David explained it, against the common Western understanding of “I think, therefore I am” stands the animist, who might say “I relate, therefore I am” and “I know as I relate.”
Humans have a tendency to assign agency to inanimate objects. The study of animacy perception, as it is called, stems from work by psychologists Fritz Heider and Marianne Simmel. In 1944, they created short videos, only about 2 minutes long, showing two triangles and a circle moving towards and away from each other in what seemed like a story. Participants watching the video later ascribed a rich interior life to the triangles and the circle, attributing intentions, emotions, and personality to the simple shapes. The findings have since been replicated across a number of different domains.
But animacy perception doesn’t just apply to simple shapes; it constantly pops up in our modern world. Robot dog owners give their techno-pups funerals. Some 80 percent of Roombas are named, according to the company. Corporations, as well, are granted agency.
Social media users have a tendency to anthropomorphize the sites, as well. As a user, it is difficult to understand how content is moderated, how stories, pictures, videos, and ads are ordered, and just how much platform operators know about their product. As a result, users “make sense of content moderation processes by drawing connections between related phenomena, developing non-authoritative conceptions of why and how their content was removed,” according to work from researcher Sarah Myers West. Users tend to believe platforms are “powerful, perceptive, and ultimately unknowable.”
The real kicker from animacy perception research is that the tendency seems to disappear when people are given control over the objects and thus understand how control is wielded. The same is true of social media. Knowing how algorithms work dispels many of the myths that the sites are all perceptive and unknowable. To me, that should have been the goal of the movie, but it would have turned out far duller.
While there are other criticisms to be made of the documentary, there is something important in the name. Indeed, for a movie titled The Social Dilemma, it is odd that the social dilemma is never really explained. What is presented instead are problems that arise from tech use. But a dilemma only comes about when there are two hard choices to make. To me, that is the yawning gap in the movie: what is the equally difficult alternative that stands in contrast to social media harm?
To end this unorganized essay, let me leave you with a poem from Czeslaw Milosz that my friend Joseph sent me on this issue of naming:
And so it befell me that after so many attempts at naming the world, I am able only to repeat, harping on one string, the highest, the unique avowal beyond which no power can attain: I am, she is. Shout, blow the trumpets, make thousands-strong marches, leap, rend your clothing, repeating only: is!