Breaking the Loop: What the Amsterdam Study Teaches Us About Social Networks and AI

By Casey Draper

Social networks were once hailed as tools to revitalize democracy: places where citizens could connect, deliberate, and build new forms of civic engagement. That hope has faded in recent years. Platforms today are associated more with polarization, disinformation, and the corrosion of trust than with democratic renewal.

A new study from the University of Amsterdam should sharpen our understanding of why. The research team did something ingenious: they stripped social media down to its most basic elements, building a bare-bones network populated only by AI agents who could post, repost, and follow. They left out recommendation algorithms and personalized feeds, and excluded the human “messiness” we usually blame for dysfunction (Larooij & Törnberg, 2025).
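To make the setup concrete, here is a minimal sketch of such a bare-bones network: agents can only post, repost, and follow, and they read a plain reverse-chronological feed. Everything below is an illustrative assumption on my part, not the authors' code; in particular, the study uses LLM-driven agents, for which simple random choices stand in here.

```python
import random

class Agent:
    def __init__(self, agent_id: int):
        self.id = agent_id
        self.following: set[int] = set()

class BareBonesNetwork:
    """A network reduced to posting, reposting, and following.
    No recommendation algorithm: agents see only the newest posts."""

    def __init__(self, n_agents: int):
        self.agents = [Agent(i) for i in range(n_agents)]
        # each post is (author_id, reposted_author_id or None for originals)
        self.posts: list[tuple[int, int | None]] = []

    def step(self) -> None:
        agent = random.choice(self.agents)
        visible = self.posts[-20:]  # a plain reverse-chronological window
        if visible and random.random() < 0.5:
            author, _ = random.choice(visible)
            self.posts.append((agent.id, author))  # repost...
            agent.following.add(author)            # ...and follow the author
        else:
            self.posts.append((agent.id, None))    # write an original post

net = BareBonesNetwork(n_agents=100)
for _ in range(10_000):
    net.step()
```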

Even in this stripped-down environment, the same failures we see in real social networks reappeared. The agents clustered into echo chambers, a small elite captured almost all of the attention, and the most extreme voices reliably rose to the top. These were not just unfortunate byproducts of algorithmic feeds but structural outcomes of the way engagement drives visibility.
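How unequal is "almost all of the attention"? One standard way to quantify it is a Gini coefficient over per-agent repost counts, where 0 means attention is spread evenly and 1 means a single account takes everything. A sketch, applied to the toy network above; the choice of metric is mine, not necessarily the paper's:

```python
from collections import Counter

def gini(values: list[int]) -> float:
    """Gini coefficient: 0 = attention shared evenly, 1 = one agent takes all."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# count how often each agent's posts were reposted in the toy network above
reposts = Counter(author for _, author in net.posts if author is not None)
counts = [reposts.get(agent.id, 0) for agent in net.agents]
print(f"attention Gini: {gini(counts):.2f}")
```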

The researchers then tested six fixes often proposed as cures: chronological feeds, hiding metrics, boosting diverse viewpoints, promoting empathetic content, muting dominant voices, and removing bios. None solved the problem. Chronological feeds reduced inequality but gave extremists more reach. Bridging content softened polarization but made inequality worse. The rest barely moved the needle.
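Seen from inside a simulation, most of these fixes amount to swapping the feed-ranking rule while leaving the underlying loop intact. Here is a sketch of how two of them might be expressed; the `Post` record and the scoring weights are my own illustrative framing, not the paper's implementation:

```python
from dataclasses import dataclass, replace
from datetime import datetime

@dataclass(frozen=True)
class Post:
    author: str
    text: str
    likes: int
    reposts: int
    created_at: datetime

def engagement_feed(posts: list[Post]) -> list[Post]:
    # Baseline: rank purely by engagement, the loop the study identifies.
    return sorted(posts, key=lambda p: p.likes + 2 * p.reposts, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # Intervention: newest first, ignoring engagement entirely.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def hidden_metrics_feed(posts: list[Post]) -> list[Post]:
    # Intervention: same ranking, but strip the counts readers react to.
    return [replace(p, likes=0, reposts=0) for p in engagement_feed(posts)]
```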

The takeaway is stark. Dysfunction in social networks is not an accident of a few bad design choices. It is baked into the loop that governs attention online: a post provokes a reaction, gets reposted, attracts followers, and that visibility snowballs. This feedback loop rewards outrage, affirmation, and extremes while sidelining nuance. Cosmetic tweaks cannot patch over that reality.
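That snowball can be reproduced in a few lines. If each new follow lands on an account with probability proportional to its existing followers, because followers drive visibility and visibility attracts followers, early leads compound into lasting dominance. A toy illustration with made-up numbers:

```python
import random

N_ACCOUNTS = 100
followers = [1] * N_ACCOUNTS  # everyone starts with one follower

for _ in range(50_000):
    # visibility is proportional to follower count, so each new follow
    # goes to an account with probability proportional to its followers
    winner = random.choices(range(N_ACCOUNTS), weights=followers)[0]
    followers[winner] += 1

top_decile = sum(sorted(followers)[-N_ACCOUNTS // 10:]) / sum(followers)
print(f"share of all followers held by the top 10% of accounts: {top_decile:.0%}")
```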

For Europe, the lesson is urgent. The EU has positioned itself as a leader in digital governance with the Digital Services Act (DSA) and the AI Act, but these frameworks risk fighting the last war if they treat dysfunction as a problem of content moderation or algorithmic transparency alone. The Amsterdam study shows that the root problem is architectural. Unless we rethink how visibility, amplification, and following are wired into platforms, we will continue to reproduce the same corrosive outcomes.

And this challenge does not stop with human users. As autonomous AI agents begin to populate digital spaces, running customer service bots, coordinating logistics, and even shaping decisions in financial systems, the risks compound. Unlike humans, agents have no ethics, no social norms, and no centuries of law and community practice shaping their behavior. Left unchecked, they will follow the same destructive spirals, only at machine speed.

This is why guardrails matter. Just as Europe once built food safety and environmental protections into its Single Market, it now needs to embed societal resilience into its digital systems. The risk is not only polarization online but also systemic fragility across sectors that depend on trustworthy AI.

What might this look like in practice? At least three priorities stand out.

First, Europe must invest in standards for digital provenance and traceability. Just as we require labeling on food products, every post, bot, and interaction should carry verifiable information about origin and authenticity. Without provenance, trust cannot scale.
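Concretely, provenance means each piece of content carries a verifiable cryptographic signature, in the spirit of emerging standards such as C2PA. A minimal sketch using Ed25519 signatures from the Python `cryptography` package; the post format and author identifier are invented for illustration:

```python
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The platform (or the user's own agent) signs each post at creation time.
signing_key = Ed25519PrivateKey.generate()
public_key = signing_key.public_key()

post = {"author": "did:example:alice", "text": "Hello", "ts": "2025-01-01T00:00:00Z"}
payload = json.dumps(post, sort_keys=True).encode()
signature = signing_key.sign(payload)

# Anyone holding the public key can check origin and integrity.
try:
    public_key.verify(signature, payload)
    print("post is authentic")
except InvalidSignature:
    print("post was forged or altered")
```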

Second, regulators must look beyond algorithms to the architecture of amplification itself. If visibility and following are governed by engagement loops, then reform must target those loops directly: experiment with mechanisms that slow down virality, diversify attention, or break the “winner-takes-all” dynamic of online influence.
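As one hedged illustration of what such a mechanism could look like, the sketch below scores posts sublinearly in engagement, so runaway posts gain proportionally less reach, and caps how many feed slots a single author can occupy. The exponent and the cap are arbitrary placeholders, not recommendations:

```python
def dampened_visibility(engagement: int, exponent: float = 0.5) -> float:
    # Sublinear scoring: doubling engagement no longer doubles reach,
    # slowing the snowball described earlier.
    return engagement ** exponent

def diversified_feed(posts: list[dict], per_author_cap: int = 2) -> list[dict]:
    # Break "winner-takes-all" by limiting feed slots per author.
    slots_used: dict[str, int] = {}
    feed = []
    ranked = sorted(posts, key=lambda p: dampened_visibility(p["engagement"]),
                    reverse=True)
    for post in ranked:
        if slots_used.get(post["author"], 0) < per_author_cap:
            feed.append(post)
            slots_used[post["author"]] = slots_used.get(post["author"], 0) + 1
    return feed
```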

Third, Europe should extend its thinking about systemic risk from finance and energy to the digital domain. Social networks are now critical infrastructure for democracy; AI agents will soon be critical infrastructure for the economy. Treating their governance as a matter of resilience, complete with stress tests, circuit breakers, and fail-safes, is essential.
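The circuit-breaker analogy translates fairly directly from finance: when content (or an autonomous agent's output) spreads faster than a stress-tested threshold, amplification pauses for review instead of continuing at machine speed. A minimal sketch; the threshold and window are invented for illustration:

```python
import time
from collections import deque

class ViralityCircuitBreaker:
    """Pause amplification when spread rate exceeds a tested threshold."""

    def __init__(self, max_shares_per_minute: int = 500):
        self.max_rate = max_shares_per_minute
        self.share_times: deque[float] = deque()

    def record_share(self, now: float | None = None) -> bool:
        """Log one share; return False if amplification should pause."""
        now = time.time() if now is None else now
        self.share_times.append(now)
        # keep only the last 60 seconds of share events
        while self.share_times and self.share_times[0] < now - 60:
            self.share_times.popleft()
        return len(self.share_times) <= self.max_rate

breaker = ViralityCircuitBreaker(max_shares_per_minute=500)
if not breaker.record_share():
    print("tripped: hold further amplification for human review")
```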

The Amsterdam study is not the final word. Generative simulations carry limitations, and the researchers are clear that more work is needed. But the signal is unmistakable. If dysfunction emerges even in a minimal system, then it is the design of social networks themselves, not just algorithms or bad actors, that corrodes trust.

For years, Europe has been striving to build a society that is both open and safe, and its willingness to regulate the digital world shows it means it. That tradition of embedding guardrails into the fabric of European democracy must now extend further into digital life. Because unless we break the loop, the same outcomes will repeat: echo chambers, inequality, and amplified extremes, leaving a democracy weakened by the very systems that were meant to strengthen it.

References

European Parliament & Council of the European Union. (2022). Regulation (EU) 2022/2065 on a Single Market for Digital Services (Digital Services Act). Official Journal of the European Union, L 277/1. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32022R2065

European Parliament & Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L, 2024/1689. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689

Larooij, M., & Törnberg, P. (2025). Can we fix social media? Testing prosocial interventions using generative social simulation. arXiv preprint arXiv:2508.03385. https://arxiv.org/abs/2508.03385