Europe’s AI rules are missing a quiet revolution in political influence

Conversational AI is reshaping political influence faster than Europe’s AI rules can respond

Chris Kremidas-Courtney

For years, warnings about artificial intelligence and democracy were easy to dismiss. The risks sounded abstract, the evidence incomplete, and the technology perpetually described as “not quite there yet.” In Brussels, this made it possible to regulate the visible edges of digital influence while assuming the core dynamics of online persuasion remained broadly intact. That assumption no longer holds.

Today there is robust experimental evidence showing that conversational AI can shift voter preferences more effectively than traditional political advertising after a single interaction, with effects that persist weeks later. This has been observed in experiments conducted around real election periods in the United States, Canada, and Poland, not through disinformation or emotional manipulation but through calm, fact-laden dialogue.

The most concerning aspect of conversational AI is that it works cheaply, quietly, and at scale.

Recent research published in Nature shows that short conversations with AI chatbots advocating for political candidates shifted vote intentions by several points in the US and by around 10% in Canada and Poland, roughly four times the effect of other forms of political advertising. When the system was optimised for persuasion, the shift in voter preference jumped to an astonishing 25%. These shifts translated into reported vote intentions and persisted for more than a month.

Another recent study tested 19 language models across more than 700 political issues with over 76,000 participants and found that conversational AI dialogue is substantially more persuasive than static messaging. Crucially, this persuasion did not scale with model size, but with how systems were trained and prompted. Techniques such as persuasion-specific post-training and reward modelling proved far more influential than raw computing power.
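To make "reward modelling" concrete, the Python sketch below shows how a hypothetical persuasion-focused reward model might be used to select among candidate replies. The function names and scoring logic are illustrative assumptions, not details drawn from the studies.

```python
# Illustrative sketch only: a hypothetical persuasion reward model used for
# best-of-n reply selection. Nothing here is taken from the cited studies.

def persuasion_reward(reply: str, user_profile: dict) -> float:
    # Placeholder: a real system would use a trained scorer that predicts
    # belief change from the reply text and signals about the user.
    return 0.0

def pick_most_persuasive(candidates: list[str], user_profile: dict) -> str:
    # Selection pressure of this kind, applied in post-training and at
    # inference, not raw model size, is what the research identifies as
    # driving persuasiveness.
    return max(candidates, key=lambda reply: persuasion_reward(reply, user_profile))
```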

Once configured, a conversational AI system can hold millions of individualised political conversations simultaneously, in multiple languages and at negligible cost. No canvassers, call centres, or national campaign infrastructure are required. With conversational AI, persuasion becomes continuous, adaptive, and effectively unlimited.

It is also cheap: at current market prices, millions of personalised political conversations can be generated continuously for less than the cost of a single national advertising buy.
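A back-of-envelope sketch makes the cost claim tangible. The token price and conversation length below are illustrative assumptions for the arithmetic only, not figures from the studies or any provider's price list.

```python
# Back-of-envelope cost comparison: hypothetical figures for illustration.
# The assumed API price and conversation length are NOT from the cited studies.

PRICE_PER_MILLION_TOKENS = 1.00   # assumed blended input/output price, USD
TOKENS_PER_CONVERSATION = 2_000   # assumed length of one multi-turn dialogue
CONVERSATIONS = 10_000_000        # ten million personalised conversations

total_tokens = TOKENS_PER_CONVERSATION * CONVERSATIONS
cost = total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS
print(f"{CONVERSATIONS:,} conversations cost about ${cost:,.0f}")
# => 10,000,000 conversations cost about $20,000, a fraction of a national
# advertising buy, which commonly runs into the millions.
```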

Campaign-finance rules assume that persuasion costs money. Political advertising regulations also assume finite airtime and limited reach. Even recent EU efforts, such as the draft Code of Practice on transparency of AI-generated content, remain built around managing deceptive outputs and amplification in environments where attention and distribution are constrained. These frameworks evolved in an era when influence scaled more slowly and visibly. AI-mediated persuasion breaks these assumptions.

In this new age, influence no longer depends on buying ad space or mobilising people. It depends on access to persuasion-optimised systems that are often opaque and deployable far outside formal campaign structures. Existing safeguards struggle to see it, let alone govern it.


This is not propaganda as we’ve known it, but influence that normalises itself through conversation. The systems studied did not shout or inflame emotions. Instead, they behaved like polite and attentive interlocutors. They listened, responded, and adapted arguments to what users said mattered to them. Policy discussions on the economy, healthcare, and governance were especially effective.

This approach makes the risk harder to detect, not easier. Europe regulates media objects such as adverts, social media posts, videos, and recommendation algorithms. But conversational AI systems are not broadcasting messages. Instead, they simulate deliberative conversation, a mode of influence that transparency and labelling regimes under the AI Act are poorly equipped to govern.

Crucially, the conversational AI systems examined in these studies are not confined to experimental platforms or fringe political tools. They are accessed through mainstream interfaces already embedded in everyday digital life: general-purpose chatbots offered by major technology firms, AI assistants integrated into search engines and productivity software, and conversational agents increasingly built into social media platforms, messaging apps, and mobile operating systems. For most users, these systems are not encountered as “political tools” but as neutral sources of information and advice, often indistinguishable from ordinary online search or help functions. This ubiquity is precisely what makes conversational influence difficult to monitor and easy to normalise.

As AI pioneer Louis Rosenberg has warned, “AI will soon inhabit all aspects of our lives, often embodied as intelligent actors we have to engage with throughout our day.”

It’s important to distinguish between large language models (LLMs) like ChatGPT and conversational AI systems; the two are often mistakenly conflated. An LLM is a statistical engine trained to generate text. On its own, it has no goals, memory, or intent. Conversational AI, by contrast, is a system built around such a model, adding conversation history, prompting strategies, feedback loops, and deployment choices that shape how the system behaves over time. This is where its power to influence emerges. The democratic risk identified in recent studies doesn’t stem from text generation itself, but from conversational systems deliberately configured to sustain dialogue, adapt arguments, and optimise for belief change at scale. This means the focus of governance must extend beyond policing AI models to how the systems built around them are designed, trained, and deployed.
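A minimal sketch can make the distinction concrete. In the Python below, `llm_complete` stands in for any bare text-generation API; everything that gives the system direction, its objective, memory, and prompting strategy, lives in the wrapper around it. All names are illustrative, not a real library.

```python
# Sketch of the model/system distinction. `llm_complete` is a placeholder
# for any text-generation API, not a real library call.

def llm_complete(prompt: str) -> str:
    # A bare LLM call: stateless text in, text out. No goals, no memory.
    return "(model output)"  # stub so the sketch runs

class ConversationalAgent:
    # The *system*: the same model plus an objective, memory, and a
    # prompting strategy, all of which are deployment choices.

    def __init__(self, objective: str):
        self.objective = objective        # e.g. "inform accurately" or "persuade"
        self.history: list[str] = []      # conversation memory across turns

    def reply(self, user_message: str) -> str:
        self.history.append(f"User: {user_message}")
        prompt = (
            f"Goal: {self.objective}\n"   # the intent lives here, not in the model
            + "\n".join(self.history)
            + "\nAssistant:"
        )
        answer = llm_complete(prompt)
        self.history.append(f"Assistant: {answer}")
        return answer
```

The same underlying model behaves very differently depending on the objective and memory the wrapper supplies, which is why the governance question sits at the system level.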

Recent evidence from the UK also shows how quickly conversational AI is becoming a mainstream political information tool. A nationally representative post-election survey found that 13% of eligible voters used conversational AI to seek political information relevant to their vote, rising to 32% among chatbot users during the election period. Most users rated these systems as useful (89%), accurate (87%), and politically neutral. Crucially, the study found that conversational AI increased political knowledge to the same extent as self-directed internet search, reinforcing that the risk lies not in dialogue itself, but in systems optimised to persuade rather than inform.

There is also a deeper problem.

Across hundreds of thousands of fact-checkable claims, researchers found a consistent trade-off: the same techniques that make AI more persuasive systematically reduce factual accuracy. When conversational AI models are optimised to change minds, they increasingly rely on selective framing, overstated certainty, and degraded sourcing. The most persuasive systems are often the least reliable factually, presenting arguments that feel informed and balanced while quietly drifting away from the truth.

From a democratic perspective, this is worse than disinformation. Lies often trigger scepticism; reasonable-sounding conversations bypass it.

This influence also cuts both ways. A separate study found that similar AI-mediated dialogues were able to reduce belief in conspiracy theories by around 20%, with effects persisting for at least two months after just a few conversational turns. The mechanism itself is neutral: conversational AI can reshape epistemic beliefs for good or for ill.

This finding points toward a constructive alternative Europe should actively pursue: a public-option AI for democratic resilience.

Rather than leaving conversational persuasion entirely to market-driven or political actors, the EU and its Member States should invest in transparent, publicly governed AI systems designed to inform citizens, explain policy trade-offs, and counter demonstrably false beliefs without being optimised to win arguments or shift votes. A public-interest AI, audited for accuracy rather than persuasion, could strengthen civic understanding instead of quietly competing with it.

For Europe, there are three additional lessons.

Firstly, political persuasion through conversational AI must be treated as a systemic democratic risk, not just a communications issue. When belief change becomes scalable at near-zero cost, oversight must be structural, not reactive.

Secondly, regulation must focus on optimisation objectives, not just model size or content categories. What an AI system is trained to do matters more than how powerful it appears.

Thirdly, protecting human agency must become a central regulatory goal. Democracy depends on friction: the time to think and the space to contemplate before settling on a judgement or opinion. Systems that remove those constraints cheaply and invisibly do not necessarily overthrow democracy but rather subtly hollow it out.

When these risks were raised in European policy circles years ago, the response was often that the evidence was not yet strong enough and that the technology was immature. Now we know better.

The question for Europe is no longer whether conversational AI can influence democratic decision-making; the evidence is clear that it can. The real question is whether Europe will govern these systems in a way that preserves citizens’ capacity to deliberate, contest ideas, and decide for themselves, or whether we will allow AI-driven persuasion to expand so cheaply and pervasively that it overwhelms the slower, messier processes of democratic participation.

The technology has already crossed that line, and the only open question is whether democratic institutions will respond.

Collectively, this recent evidence exposes a structural mismatch between Europe’s AI governance and the way conversational influence now operates in practice. EU laws regulate the engine (large language models) and the megaphone (platform-level amplification), but not yet the conversational influence machine built around them.

The AI Act and related rules focus on model capability, content risk, and distribution, while the most consequential democratic effects identified in recent studies arise at the system level, from conversational architectures deliberately optimised to sustain dialogue and influence beliefs cheaply and at scale. That gap must now be addressed explicitly.

How should Europe respond?

The evidence does not point toward wholesale prohibition, nor does it justify a wait-and-see approach to this technology’s deployment. Instead, it calls for targeted governance that aligns with Europe’s existing legal architecture while addressing a newly visible risk.

Firstly, the EU should explicitly recognise AI-mediated persuasion as a category of systemic democratic risk. This can be achieved without reopening the AI Act through updated Commission guidance and implementing acts that clarify how persuasion-optimised conversational systems fit within existing risk-management obligations. Similarly, Digital Services Act guidance should treat large-scale conversational persuasion as a risk to civic discourse, even when it does not rely on virality or amplification.

Secondly, regulation must shift from content-based assessments to optimisation-based oversight. The studies show that the persuasive power of conversational AI comes primarily from post-training choices, reward models, and prompting strategies, not from raw model size. EU authorities should therefore require developers and deployers to disclose whether systems are optimised to influence beliefs, what success metrics are used internally, and whether accuracy degrades as persuasion increases. This can be operationalised through harmonised standards developed by the European standardisation bodies CEN and CENELEC, with systems audited before they enter the market.
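One hypothetical way such a disclosure could be structured is sketched below, purely to show what "optimisation-based oversight" might require in practice. The fields and names are assumptions, not an existing standard.

```python
from dataclasses import dataclass

@dataclass
class OptimisationDisclosure:
    """Hypothetical disclosure record a deployer might file with regulators."""
    system_name: str
    persuasion_optimised: bool            # is belief change an explicit objective?
    internal_success_metrics: list[str]   # e.g. "post-dialogue attitude shift"
    accuracy_at_baseline: float           # fact-check pass rate, default config
    accuracy_when_persuasive: float       # same metric, persuasion-tuned config

    def accuracy_degradation(self) -> float:
        # The accuracy-persuasion trade-off the studies flag, made auditable
        # as a single reportable number.
        return self.accuracy_at_baseline - self.accuracy_when_persuasive
```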

Thirdly, the EU should establish a clear category of persuasion-optimised AI systems when deployed in civic or political contexts, triggering additional obligations. These should include enhanced transparency to users about persuasive intent, accuracy–persuasion trade-off testing shared with regulators, and proportionate limits on automated mass deployment during sensitive democratic periods such as elections and referenda.

Fourthly, enforcement should focus on process and scale rather than individual outputs. Regulators are unlikely to catch risk by inspecting isolated conversations. Instead, they should audit training pipelines, reward functions, deployment configurations, and scale thresholds, using controlled testing methods similar to those employed in the underlying academic studies. The AI Office, national market-surveillance authorities, and electoral commissions already have complementary mandates that can be coordinated for this purpose.
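A controlled test of the kind those studies used could be sketched as follows. The participant object and its methods are hypothetical stand-ins for a pre-registered survey panel, not a real framework.

```python
from statistics import mean

def audit_belief_shift(agent_reply, participants, issue_statement, turns=5):
    # Pre/post audit: measure each participant's agreement with a statement
    # (on a 0-100 scale) before and after a short dialogue, and report the
    # mean shift. `agent_reply(history) -> str` is the system configuration
    # under test; `participants` is a hypothetical panel interface.
    shifts = []
    for person in participants:
        before = person.rate_agreement(issue_statement)   # pre-dialogue rating
        history = []
        for _ in range(turns):
            history.append(("assistant", agent_reply(history)))
            history.append(("user", person.respond(history)))
        after = person.rate_agreement(issue_statement)    # post-dialogue rating
        shifts.append(after - before)
    return mean(shifts)

# A regulator would compare this statistic across configurations, for example
# a default deployment versus a persuasion-tuned one, against a pre-registered
# threshold, rather than inspecting individual conversations.
```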

Finally, Europe should not leave conversational AI solely to market incentives. The finding that AI-mediated dialogue can reduce conspiratorial beliefs by around 20% over time strengthens the case for a public-option conversational AI: systems governed by public institutions, audited for accuracy, and explicitly not optimised for persuasion. Such tools could be embedded in public broadcasters, civic education platforms, or EU-supported digital services. There, they can help citizens understand policy trade-offs and counter demonstrably false beliefs, reinforcing democratic resilience.

Taken together, these steps would allow Europe to respond proportionately to this new and measurable risk: not by banning conversational AI, but by ensuring influence does not scale faster than democracy can adjust. They would also align AI governance with a core European principle, that technology should expand citizens’ capacity to act freely and collectively rather than diminish it.