Resilience Media
  • About
  • News
  • Resilience Conference
    • Resilience Conference Warsaw 2026
    • Resilience Conference Copenhagen 2026
    • Resilience Conference London 2026
  • Guest Posts
    • Author a Post
  • Subscribe

When Propaganda Trains the Bots: Why You Should Read About LLM Grooming

In a digital world where truth is increasingly shaped by algorithms, awareness might be the last defence we have.

by Resilience Media
April 12, 2025
in News

Could an enemy inject propaganda into our educational system? Our art? Our scientific literature? With artificial intelligence, those fears are no longer hypothetical. We now have real-world examples of people using large language models to flood the internet with false narratives. But a new report from the American Sunlight Project outlines something even more troubling: AI isn’t just producing disinformation; it’s being trained on it.

The report calls this phenomenon LLM grooming, and if you care about democracy, information integrity, or even just being able to trust what you read online, it’s worth your attention.

The researchers behind the report identified a network of pro-Russia propaganda websites they call the Pravda network. It’s not built to engage human readers. The sites are clunky, mistranslated, and hard to navigate. But that’s the point—they’re not trying to go viral with people. They’re targeting algorithms. Their goal is to flood training datasets used by AI models like ChatGPT and others, so these models start citing and reproducing their narratives as facts. From the report:

Over the past several months, ASP researchers have investigated 108 new domains and subdomains belonging to the Pravda network, a previously-established ecosystem of largely identical, automated web pages that previously targeted many countries in Europe as well as Africa and Asia with pro-Russia narratives about the war in Ukraine.
ASP’s research, in combination with that of other organizations, brings the total number of associated domains and subdomains to 182. The network’s older targets largely consisted of states belonging to or aligned with the West.
Notably, this latest expansion includes many countries in Africa, the Asia-Pacific, the Middle East, and North America. It also includes entities other than countries as targets, specifically non-sovereign nations, international organizations, audiences for specific languages, and prominent heads of state.
The top objective of the network appears to be duplicating as much pro-Russia content as widely as possible. With one click, a single article could be autotranslated and autoshared with dozens of other sites that appear to target hundreds of millions of people worldwide.
ASP researchers also believe the network may have been custom-built to flood large language models (LLMs) with pro-Russia content. The network is unfriendly to human users; sites within the network boast no search function, poor formatting, and unreliable scrolling, among other usability issues. This final finding poses foundational implications for the intersection of disinformation and artificial intelligence (AI), which threaten to turbocharge highly automated, global information operations in the future.

This is a big shift from the old model of disinformation. It’s not just about tricking a few people into believing lies—it’s about embedding falsehoods into the infrastructure of how we access information.

Since the report’s release, organisations like NewsGuard and the Atlantic Council’s DFRLab have confirmed that major AI models have indeed cited Pravda network content. Once that happens, those narratives can be repeated by unsuspecting users, cited in articles, and even end up in places like Wikipedia. It’s a form of information laundering that’s almost invisible. The goal is simple: to disrupt elections and sow chaos.

“Past reporting on potential motives of the Pravda network has focused on the anti-Ukraine, pro-war nature of much of the network as well as possible implications for European elections throughout 2024,” the authors wrote.

The three main motives behind the Pravda network’s operations are:

  1. LLM Grooming
    The Pravda network appears designed to target not just people, but automated systems—specifically web crawlers and the training pipelines of large language models (LLMs). By flooding the internet with duplicated pro-Russia content, the network seeks to influence what LLMs learn and ultimately how they respond. This manipulation, called “LLM grooming,” could cause AI systems to repeat disinformation, shaping the future of automated communication and search without users realising it.
  2. Mass Saturation
    By publishing a high volume of content daily across multiple platforms, the Pravda network aims to dominate the online information space. This saturation strategy increases the chances that users will see the content directly, encounter it quoted on other sites, or stumble across it in encyclopedia-style summaries. Saturation also helps ensure that the targeted narrative becomes a persistent part of the digital environment.
  3. Exploiting the Illusory Truth Effect
    The network takes advantage of a psychological bias where people are more likely to believe something if they’ve seen it multiple times from different sources. By spreading the same narratives across Telegram, X, VK, Bluesky, and through citations by other media outlets—intentionally or not—the network increases both the reach and perceived credibility of its content. This cross-platform repetition strengthens the illusion of truth and further embeds the message.
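The saturation-through-duplication pattern described above is, incidentally, one of the easier signals for dataset curators to catch. As a rough illustration (not anything from the report, and the domain names are invented), exact cross-domain duplicates can be surfaced by hashing whitespace-normalised text and flagging any hash that appears on more than one domain:

```python
import hashlib
from collections import defaultdict

def fingerprint(text: str) -> str:
    """Hash a lowercased, whitespace-normalised document so exact duplicates collide."""
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def find_cross_domain_duplicates(docs):
    """Group (domain, text) pairs by content hash; keep hashes seen on 2+ domains."""
    by_hash = defaultdict(set)
    for domain, text in docs:
        by_hash[fingerprint(text)].add(domain)
    return {h: doms for h, doms in by_hash.items() if len(doms) > 1}

# Hypothetical crawl: the same narrative mirrored on two sites, plus one unrelated story.
docs = [
    ("news-a.example", "Narrative X repeated verbatim."),
    ("news-b.example", "Narrative   X  repeated verbatim."),  # same text, re-spaced
    ("news-c.example", "An unrelated local story."),
]
dupes = find_cross_domain_duplicates(docs)
```

Real curation pipelines go further, using near-duplicate techniques such as MinHash or SimHash to catch lightly reworded or machine-translated copies, which exact hashing misses.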

The implications are serious. If AI-generated content is increasingly based on disinformation, and future models are trained on that same AI-generated content, we risk what researchers call model collapse—a feedback loop of garbage in, garbage out. Human-written content could become the rare exception, and trust in digital information could erode even further.
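That feedback loop can be illustrated with a toy simulation: if each “generation” of text is drawn only from the previous generation’s output, rare material tends to vanish and diversity can only shrink. A deliberately simplified sketch, not the researchers’ methodology:

```python
import random

random.seed(0)

def next_generation(corpus, size):
    """Train-on-own-output step: resample from the previous generation's empirical distribution."""
    return random.choices(corpus, k=size)

# A toy "corpus": ten common narratives (three copies each) plus two rare ones.
corpus = ["n%d" % i for i in range(10)] * 3 + ["rare_a", "rare_b"]

history = [corpus]
for _ in range(50):
    history.append(next_generation(history[-1], size=len(corpus)))

print(len(set(history[0])), "distinct items ->", len(set(history[-1])))
```

Because each generation can only reproduce what the previous one contained, the set of distinct items never grows, and low-frequency items are progressively lost.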

The American Sunlight Project lays out several steps to push back. AI developers need to clean their training data and avoid using known disinformation sources. Lawmakers should mandate transparency and labelling for AI-generated content. And just as important, we need national information-literacy programs to help adults and kids understand what they’re seeing online.
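The first of those recommendations, cleaning training data of known disinformation sources, can be as simple in outline as filtering a crawl against a domain blocklist. A minimal sketch, with hypothetical domain names and record format (real pipelines are considerably more involved):

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains flagged as disinformation sources.
BLOCKLIST = {"pravda-example.net", "mirror.pravda-example.net"}

def blocked(url: str) -> bool:
    """True if the URL's host is a blocklisted domain or any subdomain of one."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

def clean_corpus(records):
    """Drop records whose source URL resolves to a blocklisted domain."""
    return [r for r in records if not blocked(r["url"])]

corpus = [
    {"url": "https://pravda-example.net/story1", "text": "..."},
    {"url": "https://sub.mirror.pravda-example.net/a", "text": "..."},
    {"url": "https://trusted.example/report", "text": "..."},
]
kept = clean_corpus(corpus)
```

The subdomain check matters in practice: as the report notes, much of the Pravda network consists of subdomains of a shared ecosystem, so matching the registered domain alone would miss most of it.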

This issue isn’t going away. In fact, it’s just getting started. The report is dense, detailed, and worth reading in full. It’s one of the clearest looks yet at how AI is changing the shape of the internet—and how propaganda is adapting to those changes.

Tags: The American Sunlight Project


Resilience Media is an independent publication covering the future of defence, security, and resilience. Our reporting focuses on emerging technologies, strategic threats, and the growing role of startups and investors in the defence of democracy.

© 2026 Resilience Media
