Why Social Media Might Be Impossible to Fix

In the early 2000s, when Facebook was still a dorm-room project and Twitter hadn’t even been conceived, many believed that social media would create a global coffeehouse. A digital common room. A place where anyone could share ideas and build communities across borders. Two decades later, that dream feels almost unrecognizable. Instead of open conversation, we see a handful of loud accounts shaping the tone of public debate.
It would be easy to blame the algorithms, as Yuval Noah Harari did in Nexus: those hidden formulas designed to keep us scrolling. It’s easy to assume that if feeds returned to a simple chronological order, or if “likes” and follower counts disappeared, the problem would fade. But according to a new study I came across two days ago, that belief may be naïve. The dysfunction might not come from design tweaks but from something far deeper, something structural in how online networks grow and operate.
Researchers Maik Larooij and Petter Törnberg wanted to test whether social media could be improved. Instead of looking at existing platforms, they built one from scratch, using artificial intelligence personas to behave like human users. These digital characters could create accounts, follow one another, share posts, and react to what they saw. Then the researchers tested a series of reforms that social scientists had suggested over the years.
The interventions ranged from rearranging feeds to hiding user biographies. Some changes seemed simple, like switching entirely to time-based posts. Others were more inventive, such as “bridging algorithms” designed to highlight content that encourages mutual understanding rather than outrage.
None of them worked.
In fact, in several cases the interventions made things worse. Chronological feeds did reduce attention inequality, meaning fewer celebrities and influencers dominated the conversation, but they also made divisive content more visible. Bridging algorithms did reduce partisan echo chambers, yet they concentrated attention in even fewer hands. Efforts to expose users to a wider range of political views did not change polarization at all.
The outcome was frustrating but also illuminating. The dysfunctions appeared on their own, without any malicious algorithmic nudging. As soon as people, or AI personas standing in for people, started connecting, the same familiar patterns emerged: echo chambers and an outsized megaphone for extreme voices.
To understand why, it helps to think about how networks grow. In a village market, everyone has roughly equal standing. Conversations may cluster, but no one person automatically commands global attention. Online, the mechanics are different. Attention attracts attention. A post with more reactions and more visibility becomes even more likely to be shared. This creates what mathematicians call a power law distribution: a very small number of nodes dominate the entire system.
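The “attention attracts attention” dynamic can be sketched in a few lines of code. Below is a minimal preferential-attachment simulation (in the spirit of the Barabási–Albert model, not a reconstruction of the researchers’ platform): each new user follows an existing account with probability proportional to the followers that account already has. The parameters and numbers are illustrative assumptions, chosen only to show the power-law concentration the paragraph describes.

```python
import random


def preferential_attachment(n_users: int, seed: int = 42) -> list[int]:
    """Simulate 'attention attracts attention': each arriving user follows
    one existing account, chosen with probability proportional to that
    account's current follower count (plus one, so newcomers can be picked)."""
    rng = random.Random(seed)
    followers = [0, 0]  # start with two accounts
    for _ in range(n_users):
        weights = [f + 1 for f in followers]  # rich get richer
        target = rng.choices(range(len(followers)), weights=weights)[0]
        followers[target] += 1
        followers.append(0)  # the newcomer joins the network too
    return followers


counts = sorted(preferential_attachment(10_000), reverse=True)
share = sum(counts[:10]) / sum(counts)
print(f"top 10 of {len(counts)} accounts hold {share:.0%} of all follows")
```

Even though every account plays by the same rules, a handful of early winners end up with a wildly disproportionate share of follows, while a uniform network would give the top ten accounts only about 0.1 percent.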
That’s why one influencer with a million followers can sway political debate more than a thousand local voices combined. It’s why fringe conspiracy theories that would once have stayed in a bar corner can suddenly flood timelines worldwide. The very structure of the network rewards concentration and amplification, and no small policy tweak seems to undo that.
Even those who stay away from social media feel its ripple effects. Newspapers adapt their headlines to maximize clicks from Facebook shares. Politicians tailor speeches to generate viral soundbites on TikTok. Activist groups organize campaigns that live or die depending on Twitter’s currents. Social media no longer mirrors society; it reshapes it.
That feedback loop makes reform difficult. Imagine a classroom where one student shouts louder than the rest. The teacher may try rearranging seats, turning down the lights, or banning snacks, but the dynamic remains. The problem lies not in the small rules but in the structure of the room itself.
When the Internet first spread into homes, many believed it would revive the spirit of the Enlightenment salons or the 18th-century coffeehouses where pamphlets and debates flourished. People imagined an age of digital democracy. Instead, we have inherited platforms where performative outrage drives visibility and where small groups wield enormous influence.
It’s important to remember that this was not inevitable. Early experiments like ICQ or early chat rooms allowed strangers to connect in ways that were often clumsy but surprisingly equal. You could stumble into a conversation with someone across the world without celebrity hierarchies defining the exchange. Those experiments faded as platforms grew, chasing scale and revenue. What replaced them was a system that rewarded whatever kept attention longest, whether that meant humor or rage.
The recent study explains why efforts to “fix” social media often disappoint. Eliminating follower counts may reduce vanity metrics, but users still sense influence through other signals. Switching to time-based feeds may sound fair, yet the most sensational posts still rise because they spread faster. Even when platforms deliberately boost diverse viewpoints, the strongest personalities dominate, muting quieter voices.
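Why a chronological feed doesn’t rescue fairness can be shown with a deterministic toy model, entirely of my own construction: two posts enter the feed at the same moment, but circulating copies of each grow at a post-specific reshare rate (the 5% and 15% rates below are assumptions for illustration). Since a time-ordered feed shows whatever is in circulation, visibility simply tracks copy counts.

```python
def chronological_feed_share(n_rounds: int = 30) -> dict[str, float]:
    """Toy model: in a strictly chronological feed, a post's visibility is
    proportional to how many copies of it are circulating. Each round,
    copies grow by a post-specific reshare rate."""
    reshare_rate = {"calm": 0.05, "sensational": 0.15}  # assumed rates
    copies = {post: 1.0 for post in reshare_rate}
    for _ in range(n_rounds):
        for post in copies:
            copies[post] *= 1 + reshare_rate[post]
    total = sum(copies.values())
    return {post: c / total for post, c in copies.items()}


shares = chronological_feed_share()
print({post: round(s, 2) for post, s in shares.items()})
```

After thirty rounds the faster-spreading post occupies roughly 94 percent of the circulating copies. No ranking algorithm put a thumb on the scale; compound growth alone did it.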
It is a bit like pruning branches on a tree when the roots themselves are invasive. The design of online networks naturally produces inequality and polarization. Without rethinking that foundation, most reforms are cosmetic.
Yet the researchers do not end on pure pessimism. Törnberg argues that social media may be reaching a breaking point, not because of internal reform but because of external pressure. The rise of AI-generated content, designed to maximize clicks and outrage, threatens to overwhelm the system. Already, bots are filling feeds with synthetic articles, manipulated images, doctored videos, and fabricated conversations. The sheer flood of content may erode trust so completely that current models of social media cannot survive.
That future is uncertain. People may retreat into private, smaller-scale groups, as many young users already do on WhatsApp or Discord. They may return to curated brands that act as filters. Or new structures may emerge that look nothing like the platforms we know.
The story of social media is less about any single company than about how humans interact when scaled to a planetary level. We are social creatures, drawn to signals of influence, prone to emotional reactions, and eager for connection. When those traits meet the mathematics of network growth, certain outcomes repeat themselves: amplification of extremes, dominance by elites, and separation into echo chambers.
This does not mean conversation is doomed forever. Offline, communities continue to find ways to compromise and exchange ideas without collapsing into chaos. The challenge is that scaling those conversations into billions of interactions has effects we still barely understand.
Two decades after the first optimism, we are learning that the problem lies not in a single algorithm or in one company’s greed. It lies in the architecture of the platforms themselves. And unless those foundations change, the dream of a global coffeehouse will remain out of reach.