Far worse is how the technology may be acting as a vast optimized engine of social degradation. That's the argument of a recent book by former tech engineer Jaron Lanier, known for his early work on virtual reality. In the current business model, Google, Twitter and Facebook offer free services and use them to gather immense quantities of user data. The companies' algorithms then use that data to help advertisers feed users optimized stimuli to modify their behavior – encouraging them to buy stuff, for example. Of course, lots of the free services are great, as are the things the ads often help people find. We're so used to this model that, aside from sporadic privacy concerns, we see it as almost natural.
But Lanier's insightful point is that this model may also be a natural route to disaster, for a disconcertingly simple reason. Facebook, for example, makes money by helping advertisers target messages – including lies and conspiracies – to the people most likely to be persuaded. The algorithms looking for the best ways to engage users have no conscience, and will simply exploit anything that works. Lanier believes the algorithms have learned that we're more energized if we're made to feel negative emotions, such as hatred, suspicion or rage.
As a result, the technology “is biased not to the left or the right,” as he puts it, “but downward,” toward an explosive amplification of negativity in human affairs. In learning how best to manipulate people, tech algorithms may inadvertently be causing mass violence and progressive social degradation.
Lanier doesn't support this argument with hard data, but plenty of other research makes the hypothesis sound all too plausible. For example, studies of how different kinds of emotions affect the engagement of online viewers find that messages designed to stir negative emotions such as fear or anger tend to work better. A United Nations report concluded that the spread of rumors on Facebook and other social media
was crucial in sparking genocidal violence against the Rohingya in Myanmar. Such messaging also appears to have played a significant role in driving the recent outbreak of anti-refugee feeling in Germany.
The link appears to be quite general, as another recent study suggests. European researchers looked at all 3,335 anti-refugee attacks in Germany over a two-year period, seeking correlations between their locations and other variables such as local wealth, support for far-right politics, number of refugees and so on. The most significant explanatory factor turned out to be local Facebook use: in the data, a rise in per-person Facebook use of one standard deviation above the national average was associated with a 50 percent increase in the number of attacks on refugees.
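To see what that statistic means in practice: "one standard deviation above the national average" refers to a standardized (z-scored) measure of usage, and the 50 percent figure is a multiplicative effect on expected attack counts. Here is a minimal sketch of how such a coefficient is read, using entirely made-up numbers rather than the study's actual data:

```python
# Illustrative sketch only: all numbers here are hypothetical,
# chosen to show how a standardized coefficient is interpreted.

MEAN_USE = 0.30   # hypothetical national-average per-person Facebook use
STD_USE = 0.08    # hypothetical standard deviation across localities


def z_score(town_use):
    """Standardize a town's per-person Facebook use against the national average."""
    return (town_use - MEAN_USE) / STD_USE


def expected_attacks(baseline, town_use):
    """Multiplicative reading of the finding: each +1 SD of usage
    multiplies the expected number of attacks by 1.5 (a 50% increase)."""
    return baseline * 1.5 ** z_score(town_use)


# A town at exactly average usage keeps the baseline count;
# a town one standard deviation above it sees 50% more.
print(expected_attacks(10.0, 0.30))  # average usage -> 10.0
print(expected_attacks(10.0, 0.38))  # one SD above -> 15.0
```

The point of the standardization is that the comparison is relative to the spread across towns, not to any absolute level of Facebook use.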
At least in part, these may be the tragic human consequences of mechanical algorithms relentlessly acting to exploit a truth they've discovered – that paranoid messaging taps into deep human emotions and instincts, and therefore tends to get the most attention.
What can be done? There's no reason the advertising-based model needs to remain dominant, especially if we realize the immense damage it's causing. An alternative would be to give up our free services – Gmail, Facebook, Twitter – and pay for them directly. If social media companies made money from their users, instead of from third parties aiming to prey on those users, they would be more likely to serve users' needs. Making that happen will take concerted pressure from governments and from users alike, since the companies profit so handsomely from the current setup, despite the toll it takes on the rest of the world. But many computer scientists, such as those at the Center for Humane Technology, have recognized the problem and think it can be fixed.
Get rid of the advertising model, Lanier notes, and anyone will still be completely free to pay to see poisonous propaganda. It's just that no one will be able to pay in secret to have poison directed at someone else. That would make a big difference.