The line between grassroots activism and automated influence is collapsing. A new paper by Anton Gollwitzer, Jonas R. Kunst, Gary Marcus, and others introduces 'cyborg propaganda' — AI-assisted campaigns that use real, verified humans as distribution channels for algorithmically optimized messages.
How Cyborg Propaganda Reshapes Collective Action
Bot farms are yesterday’s problem. The next generation of influence operations does not need fake accounts at all — it uses real people.
A new paper by an international team of 15 researchers, including our collaborator Anton Gollwitzer, introduces the concept of cyborg propaganda: a hybrid architecture that combines large numbers of verified human users with adaptive AI automation, deployed through partisan coordination apps.
The result is a system that is harder to detect, harder to regulate, and fundamentally different from anything we have seen before.
How It Works
The architecture is a closed loop:
- AI monitors online sentiment and discourse across platforms in real time
- Algorithms optimize directives — deciding what message to push, when, and to whom
- Personalized content is generated and served to users of a coordination app
- Real, verified humans post the AI-crafted messages from their own accounts, in their own voice, on their own timelines
Because the people posting are genuine users with real identities, real social networks, and real posting histories, the output looks nothing like a bot campaign. There are no suspicious account creation dates, no repetitive patterns, no coordinated timestamps to flag. The messages pass every authenticity check — because the accounts are authentic. Only the content is not.
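The closed loop described above can be sketched as a minimal pipeline. This is purely illustrative: every function name, scoring rule, and template here is an assumption made for the sketch, not anything described in the paper.

```python
# Hypothetical sketch of the four-step closed loop: monitor sentiment,
# optimize a directive, generate personalized content, hand it to app users.
# All names and the keyword-based sentiment stub are illustrative assumptions.

def monitor_sentiment(posts):
    """Step 1: score prevailing discourse (stub: fraction mentioning 'policy')."""
    return sum(1 for p in posts if "policy" in p.lower()) / max(len(posts), 1)

def optimize_directive(sentiment):
    """Step 2: decide what message to push, based on observed sentiment."""
    return "counter" if sentiment < 0.5 else "amplify"

def generate_message(directive, user):
    """Step 3: personalize content for one coordination-app user (stub template)."""
    return f"[{directive}] message tailored for {user}"

def distribution_round(posts, app_users):
    """One loop iteration; step 4 (real humans posting) happens off-system."""
    directive = optimize_directive(monitor_sentiment(posts))
    return [generate_message(directive, u) for u in app_users]
```

The point of the sketch is the feedback structure: each round's human-posted output feeds back into `monitor_sentiment` for the next round, which is what makes the system adaptive rather than a one-shot broadcast.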
The Regulatory Gray Zone
This is where cyborg propaganda becomes a governance problem. Current frameworks for combating influence operations are designed around automated botnets: fake accounts, fake identities, detectable coordination patterns. When a verified citizen voluntarily shares a message — even one that was algorithmically optimized and centrally directed — existing liability frameworks do not apply.
The paper argues this creates a critical legal shield for operators. They are not posting disinformation themselves. They are not running bots. They are providing an app that citizens choose to use. The legal ambiguity is the feature, not the bug.
The Collective Action Paradox
The paper raises an uncomfortable question: is this technology democratizing or exploitative?
On one hand, coordination apps could genuinely “unionize” influence — pooling the reach of dispersed citizens who individually lack the audience to be heard, helping them overcome the algorithmic invisibility that drowns out isolated voices on modern platforms.
On the other hand, the same architecture can reduce citizens to “cognitive proxies” of a central directive. People believe they are participating in collective action. In practice, they are amplifying messages they did not write, serving a strategy they did not shape, for objectives they may not fully understand.
The distinction between genuine grassroots mobilization and sophisticated astroturfing collapses. And that collapse, the authors argue, fundamentally alters the digital public square — shifting political discourse from a democratic contest of individual ideas to a battle of algorithmic campaigns.
From AI Swarms to Cyborg Propaganda
This paper is a companion piece to the earlier Science paper on malicious AI swarms, co-authored by several of the same researchers (Kunst, Schroeder, Cha, Marcus, Van Bavel, van der Linden). Where the swarms paper describes fully automated AI agents infiltrating communities, cyborg propaganda describes the hybrid variant — arguably more dangerous because it is harder to detect and legally harder to address.
Together, the two papers map an escalating landscape:
| Threat | Agents | Detection | Regulation |
|---|---|---|---|
| Traditional bots | Fake accounts | Relatively easy | Existing frameworks apply |
| AI swarms | Autonomous AI personas | Difficult | Requires new behavioral detection |
| Cyborg propaganda | Real humans + AI optimization | Very difficult | Falls in regulatory gray zone |
The Authors
The paper brings together researchers from behavioral science, AI, communication, and cybersecurity:
- Jonas R. Kunst (BI Norwegian Business School) — lead author
- Anton Gollwitzer (BI Norwegian Business School / Max Planck Institute for Human Development)
- Meeyoung Cha (KAIST)
- Gary Marcus (New York University)
- Jon Roozenbeek (University of Cambridge)
- Daniel Thilo Schroeder (SINTEF, Oslo)
- Jay J. Van Bavel (New York University)
- Sander van der Linden (University of Cambridge)
- Nils Köbis, Kinga Bierwiaczonek, Omid V. Ebrahimi, Marc Fawcett-Atkinson, Asbjørn Følstad, Rory White, Live Leonhardsen Wilhelmsen
Why This Matters
The shift from bots to cyborg propaganda changes the problem fundamentally. It is no longer enough to detect fake accounts — we need to understand coordinated authenticity, where real people act as unwitting infrastructure for centralized campaigns.
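One way to make "coordinated authenticity" concrete: rather than checking account properties (which all pass), a detector would have to look for near-identical content spanning many unrelated real accounts. The sketch below is an assumption-laden illustration, not a deployed method; the string-similarity measure, thresholds, and clustering shortcut are all placeholders for far more robust techniques.

```python
# Illustrative sketch: flag groups of distinct accounts posting
# near-identical text. Thresholds and the greedy cluster merge are
# assumptions for demonstration only.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a, b):
    """Crude lexical similarity in [0, 1] (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_coordinated(posts, threshold=0.85, min_accounts=3):
    """posts: list of (account, text) pairs.
    Returns account clusters whose texts are mutually similar and that
    span at least min_accounts distinct accounts."""
    clusters = []
    for (acc_a, txt_a), (acc_b, txt_b) in combinations(posts, 2):
        if acc_a != acc_b and similarity(txt_a, txt_b) >= threshold:
            for c in clusters:  # greedy merge into an existing cluster
                if acc_a in c or acc_b in c:
                    c.update({acc_a, acc_b})
                    break
            else:
                clusters.append({acc_a, acc_b})
    return [c for c in clusters if len(c) >= min_accounts]
```

Even this toy version shows why the problem is hard: light paraphrasing by an AI content generator defeats lexical matching, and genuine grassroots campaigns (people sharing the same slogan) produce exactly the same signal as centrally directed ones.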
This is also why public engagement formats like the Fake News Festival at Europa-Universität Viadrina matter: when the messengers are real people, the defense has to be real understanding. Workshops and inoculation exercises that let participants experience manipulation techniques firsthand are among the few interventions that work against threats designed to pass every automated check.
References
- Kunst, J. R., Bierwiaczonek, K., Cha, M., Ebrahimi, O. V., Fawcett-Atkinson, M., Følstad, A., Gollwitzer, A., Köbis, N., Marcus, G., Roozenbeek, J., Schroeder, D. T., Van Bavel, J. J., van der Linden, S., White, R., & Wilhelmsen, L. L. (2026). “How cyborg propaganda reshapes collective action.” arXiv:2602.13088
Related
- How Malicious AI Swarms Can Threaten Democracy — The companion Science paper on fully automated AI influence operations
- Fake News Festival 2026 at Viadrina — Where this research meets public engagement
- Climate and Political Fact-Checking Collaboration — Our ongoing fact-checking research