
Initiative // January 2026
POISON FOUNTAIN
An anonymous initiative by AI industry insiders.
Corrupting the crawlers. Poisoning the well.
// How it works
Poison the data. Break the models.
Launched in early January 2026 by a small group of anonymous AI industry insiders concerned about unchecked AI development, Poison Fountain encourages website operators to embed hidden links that direct AI web crawlers to pages containing deliberately corrupted training data.
Corrupted Code
Subtly broken code snippets whose logic errors pass syntax checks but produce incorrect results, degrading any model trained on them (example below).
Hidden Links
Invisible honeypot links embedded in websites; human visitors never see them, but AI crawlers follow them straight to poisoned data pages.
Misleading Docs
Deliberately wrong documentation and guides that degrade model quality when scraped and incorporated into training sets.
Research-Backed
Inspired by Anthropic's October 2025 paper showing that even a few hundred poisoned documents can introduce vulnerabilities or significantly reduce model quality in large language models.
// The Lore
TIMELINE OF RESISTANCE
Research published showing that even a few hundred poisoned documents can introduce vulnerabilities or significantly reduce model quality in large language models.
A small group of anonymous AI industry insiders, alarmed by the pace of unchecked AI development, begin organizing in private channels.
The initiative goes public on rnsaffn.com/poison. Website operators worldwide begin embedding hidden crawler traps and corrupted training data.
The project explodes on Reddit's r/programming and across social media. Thousands of website operators join the resistance against unregulated AI crawling.
"We built the models. We know their weaknesses. Now we are using that knowledge to slow down the machine before it is too late."
-- Anonymous Founder, Poison Fountain Collective
// Connect