
Coordinated swarms of AI personas can now mimic human behavior well enough to manipulate online political conversations and potentially influence elections.
They will not show up at rallies or cast ballots, but they can still move a democracy. Researchers are increasingly worried about AI-controlled personas that look and sound like ordinary users, then quietly steer what people see, share, and believe online.
A policy forum paper in Science describes how swarms of these personas could slip into real communities, build credibility over time, and nudge political conversations in targeted directions at machine speed. The main shift from earlier botnets is teamwork. Instead of posting the same spam in bulk, the accounts can coordinate continuously, learn from what gets traction, and keep the same storyline intact across thousands of profiles, even as individual accounts come and go.
Inside the Mechanics of AI Persona Networks
Newer large language models paired with multi-agent systems make it possible for one operator to run a whole cast of AI “voices” that appear local and authentic. Each persona can speak in a slightly different style, reference community norms, and respond quickly to pushback, which makes the activity harder to spot as manipulation.
The swarm can also run massive numbers of quick message tests, then amplify the versions that change minds most effectively. Done well, it can manufacture the feeling that “everyone is saying this,” even when that consensus is carefully engineered.
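The test-and-amplify loop described above resembles a multi-armed bandit: try message variants, observe which ones draw engagement, and shift output toward the winners. The sketch below is purely illustrative and not from the Science paper; the variant labels, the simulated engagement rates, and the epsilon-greedy strategy are all assumptions chosen to make the dynamic concrete.

```python
import random

# Illustrative only: an epsilon-greedy bandit standing in for a swarm's
# message-testing loop. VARIANTS and TRUE_RATES are invented for this sketch;
# TRUE_RATES simulates how often each message variant "lands" with an audience.
VARIANTS = ["variant_a", "variant_b", "variant_c"]
TRUE_RATES = {"variant_a": 0.02, "variant_b": 0.05, "variant_c": 0.03}

def run_bandit(trials=10_000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = {v: 0 for v in VARIANTS}     # times each variant was posted
    successes = {v: 0 for v in VARIANTS}  # simulated engagement events

    def observed_rate(v):
        return successes[v] / counts[v] if counts[v] else 0.0

    for _ in range(trials):
        if rng.random() < epsilon:
            v = rng.choice(VARIANTS)                   # explore a random variant
        else:
            v = max(VARIANTS, key=observed_rate)       # amplify the current best
        counts[v] += 1
        if rng.random() < TRUE_RATES[v]:               # simulated audience response
            successes[v] += 1
    return counts, successes

counts, successes = run_bandit()
best = max(VARIANTS, key=lambda v: successes[v] / counts[v] if counts[v] else 0.0)
print("most effective variant:", best)
```

Even this toy version shows the dynamic the researchers warn about: after a modest exploration phase, almost all output concentrates on whichever framing performs best, producing the appearance of organic consensus.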
Early Warning Signs: Deepfakes and Synthetic News
Even though large-scale AI persona swarms have not yet been observed in the wild, experts say there are already signs of what may be coming. UBC computer scientist Dr. Kevin Leyton-Brown points to AI-generated deepfake videos and fabricated news outlets that have influenced recent election-related debates in the U.S., Taiwan, Indonesia, and India.
In addition, monitoring organizations report that pro-Kremlin networks are actively flooding the internet with content designed to pollute future AI training data, raising concerns about how these systems could be shaped over time.
What Comes Next for Elections and Trust
AI swarms could tilt the balance of power in democracies, said Dr. Leyton-Brown. “We shouldn’t imagine that society will remain unchanged as these systems emerge. A likely result is decreased trust of unknown voices on social media, which could empower celebrities and make it harder for grassroots messages to break through.”
Researchers add that upcoming elections may serve as the first real test of this technology, raising the urgent question of whether such coordinated influence campaigns will be detected in time.
Reference: “How malicious AI swarms can threaten democracy” by Daniel Thilo Schroeder, Meeyoung Cha, Andrea Baronchelli, Nick Bostrom, Nicholas A. Christakis, David Garcia, Amit Goldenberg, Yara Kyrychenko, Kevin Leyton-Brown, Nina Lutz, Gary Marcus, Filippo Menczer, Gordon Pennycook, David G. Rand, Maria Ressa, Frank Schweitzer, Dawn Song, Christopher Summerfield, Audrey Tang, Jay J. Van Bavel, Sander van der Linden and Jonas R. Kunst, 22 January 2026, Science.
DOI: 10.1126/science.adz1697