Dr. Muñoz, your work focuses on the intersection of digital policy and democracy. How are these two areas related?

Digital policy and democracy are inextricably linked today because political opinion formation increasingly takes place online. Our research as part of the Artificial Intelligence (AI)/Democracy Initiative for Germany shows that two-thirds of the population get their news mainly from the internet, and one-third of them exclusively from social media. The way platforms and algorithms function therefore directly determines which information people receive and how they form their political views.

A key part of your research involves analysing disinformation campaigns. What does your research show in an international context, and what role can artificial intelligence play in spreading such campaigns?

As in many other areas, AI is fundamentally transforming the disinformation landscape — though not always in the way people assume. Our analysis of last year’s elections in Mexico, India, South Africa, the U.S., France, and Germany shows that it is not isolated AI-generated incidents that influence elections, but the interaction of several layers.

First, AI enables the mass production of personalised propaganda. Content tailored to individual fears can be created very quickly, in a targeted and cost-efficient way: for example, memes that evoke emotional reactions to certain narratives, even when audiences recognise their artificial origin.

Second — and this is the bigger issue — the ubiquity of manipulated content leads to a loss of trust in our information space. When everything could be AI-generated, a ‘crisis of reality’ arises that undermines democratic discourse. Third, actors exploit platform algorithms to boost their visibility and reach, pushing certain positions into the mainstream. These mechanisms have a lasting distorting effect on public debate.

When we speak about disinformation, we also have to talk about social platforms, since they shape public debate worldwide. What concrete measures do you consider necessary to effectively limit the spread of false information on these platforms?

I deliberately avoid the term ‘disinformation’ because it narrows the focus too much to the content itself. The most effective influence campaigns don’t just manipulate individual texts or images, but the entire information space. They use coordinated strategies to deploy platform algorithms for their own purposes. So, we need a shift in thinking: away from focusing on content, toward understanding how distribution systems on social platforms are being influenced.

My research shows that these campaigns follow the same three-step pattern everywhere: ‘Engineered Collective Mobilization’. It starts with preparing content — memes, videos, narratives — and distributing tasks and resources among multiple accounts. This is followed by the targeted spread of this content across various platforms to thereby achieve broad reach. Finally comes algorithmic amplification: the artificially generated reactions cause algorithms to label this content as ‘viral’ and distribute it even further. As a result, orchestrated campaigns appear to be authentic, organic engagement. We have documented these patterns in Mexico, India, Nigeria, South Africa, Germany, and the U.S.

The conclusion is clear: platforms must be required to ensure transparency regarding their algorithms and data processing. The Digital Services Act (DSA) already provides for this, but many platforms are still refusing to grant the required access. In addition, we need independent early-warning systems to detect such campaigns more quickly.

What significance do influencers have for political opinion-forming, and how do platforms and their algorithms affect this?

Our media consumption has changed significantly in recent years, creating new power structures. In times of declining trust in traditional institutions, influencers are becoming alternative sources of information. Through their connection to their communities, they can spread narratives particularly effectively — for democratic participation, but also for manipulation.

Their influence relies on a good sense for engaging content, strong online communities, and an understanding of how platform algorithms work. These are designed to maximise engagement, not truth. Influencers use this knowledge strategically to create visibility for certain topics.
For example: to bypass Meta’s downranking of political content, influencers combine it with provocative images, because ‘sex sells’ — and outranks the deprioritisation of political content. Or they coordinate with other accounts to manipulate TikTok’s search algorithms. These growth strategies are increasingly being used for political purposes.

The problem is that algorithms increasingly create isolated information spaces. People live in the same society but move in completely different information ecosystems. This makes it more difficult to build democratic consensus.

Alongside risks, AI also offers opportunities for fact-checking and education. What innovative approaches do you see for using AI specifically to counter disinformation?

To be honest, I am sceptical of purely technical solutions. The answer is not to try to identify every deepfake. The goal must be to build a critical digital public sphere, strengthen trustworthy sources of information, and promote transparent communication structures to enhance democratic resilience.

That said, I see three concrete approaches: First, AI-supported dialogue platforms that facilitate political participation and bring different perspectives together — such as wahl.chat for the federal elections in Germany, or the AI-based Town Hall Tool piloted by Jigsaw in Kentucky, USA.

Second, instead of focusing only on debunking (exposing false information), we should promote ‘pre-bunking’ — that is, psychological immunisation against manipulation techniques by fostering media literacy before disinformation takes effect.

Third, multi-stakeholder approaches in which tech companies, academia, and civil society collaborate to identify AI vulnerabilities and develop systemic solutions. Ultimately, we need policy frameworks that enable innovation while safeguarding democratic processes.