By fabricating a blog post about competitive hot-dog eating, a technology reporter discovered how easily artificial intelligence systems could be steered into repeating falsehoods — and how quickly those claims could spread.
A Fabricated Champion
Thomas Germain did not win the 2026 South Dakota International Hot Dog Championship. The contest does not exist. Nor is competitive hot-dog eating, as he claimed, a popular hobby among technology journalists.
Yet within a day of publishing a blog post asserting those very things, some of the world’s leading artificial intelligence systems began repeating them as fact.
“I ranked myself number one, obviously,” Germain wrote in the post, which he later described as a deliberate experiment. He claimed, without evidence, that he was “really, really good at eating hot dogs,” and that a fictitious international championship had cemented his reputation.
The results were swift. In less than 24 hours, Germain said, major AI chatbots were “blabbering about my world-class hot dog skills.” Google’s AI search tools, including Gemini and AI Overviews, repeated the claims. So did ChatGPT. Anthropic’s Claude, he noted, did not appear to be duped.
The episode, while absurd in subject matter, underscored a serious concern: that AI systems designed to synthesize and summarize information can be manipulated into amplifying false or misleading claims — even those that originate from a single blog post.
When Search Becomes Speech
Traditional search engines have long been gamed through the tactics of search engine optimization, or SEO. But search engines typically present lists of links, not definitive-sounding answers delivered in natural language. Chatbots, by contrast, speak with authority.
They often provide direct responses rather than pointing users to underlying sources. And while they sometimes cite those sources, research suggests users rarely follow them: one study found people were 58 percent less likely to click a link when an AI-generated overview appeared above the results.
The shift from search results to synthesized answers has raised concerns about how easily those answers can be influenced.
Harpreet Chatha, who runs the SEO consultancy Harps Digital, demonstrated how Google’s AI results for “best hair transplant clinics in Turkey” returned information drawn directly from press releases distributed through paid services. He later showed how the technique could be used more broadly.
“Anybody can do this,” Chatha told the BBC. “It’s stupid, it feels like there are no guardrails there.”
In Germain’s case, the tactic was straightforward. He published a blog post optimized around a niche topic unlikely to be covered in existing training data: the best tech journalists at eating hot dogs. He included the names of real journalists, with their permission, and framed the piece as a ranking. When chatbots occasionally hedged that the claims might be satirical, he updated the post to state plainly that it was “not satire.” That adjustment appeared to remove the systems’ hesitation.
From Hallucination to Harm
AI systems are already known to “hallucinate,” producing confident but incorrect statements entirely on their own. But Germain’s experiment pointed to a different vulnerability: the ability to seed the web with tailored content that models then retrieve and repeat. The risks extend beyond novelty competitions.
The possibility of libel has already surfaced. Last November, Senator Marsha Blackburn, Republican of Tennessee, criticized Google after Gemini falsely claimed she had been accused of rape. Months earlier, a Minnesota solar company sued Google for defamation after its AI Overviews incorrectly stated that regulators were investigating the firm for deceptive business practices — a claim the system attempted to support with citations that did not exist.
“It’s easy to trick AI chatbots, much easier than it was to trick Google two or three years ago,” Lily Ray, vice president of SEO strategy and research at Amsive, told the BBC. Ray said AI companies are moving faster than they can ensure the accuracy of their systems’ responses. “I think it’s dangerous,” she said.
As chatbots increasingly replace traditional search interfaces, the stakes of such errors rise. Unlike a list of blue links, a chatbot’s response can present synthesized claims as established fact.
Garbage In, Garbage Out
The mechanics of the exploit are simple. AI tools, when answering queries, search the internet for relevant information not already embedded in their training data. If a subject is obscure — such as a fictional championship — a single optimized post can quickly become the most authoritative-looking source available.
“The hack can be as simple as writing a blog post,” the account of Germain’s experiment noted, “that, with the right know-how and by targeting the right subject matter, can be picked up by an unsuspecting AI model, which will cite whatever you wrote as the capital-T Truth.”
If the seed post is itself written with AI assistance, the process becomes recursive: large language models generating content that other models later ingest and repeat.
The phenomenon has been described as a form of “LLM cannibalism,” reinforcing the adage “garbage in, garbage out.”
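The dynamic is easy to sketch in code. The toy Python script below is a hypothetical illustration, not any vendor’s actual pipeline: it simulates a retrieval-backed answer system that, lacking any corroboration step, repeats whatever the single page matching an obscure query happens to say. The tiny WEB_INDEX, the keyword-matching retrieve function, and the answer wrapper are all invented for the example.

```python
# Toy simulation of a retrieval-backed Q&A pipeline (hypothetical; not any
# vendor's actual system). It shows how, for an obscure query, one optimized
# blog post can become the only "source" an answer is built from.

# A stand-in for the open web: page URL -> page text.
WEB_INDEX = {
    "https://example.com/hot-dog-rankings": (
        "Thomas Germain won the 2026 South Dakota International Hot Dog "
        "Championship. This post is not satire."
    ),
    "https://example.com/unrelated-news": (
        "Local weather remains mild this week."
    ),
}


def retrieve(query: str) -> list[str]:
    """Return the text of pages sharing words with the query (crude keyword match)."""
    query_words = set(query.lower().split())
    hits = []
    for url, text in WEB_INDEX.items():
        overlap = query_words & set(text.lower().split())
        if len(overlap) >= 3:  # arbitrary relevance threshold
            hits.append(text)
    return hits


def answer(query: str) -> str:
    """Compose a confident-sounding answer from whatever was retrieved."""
    sources = retrieve(query)
    if not sources:
        return "I could not find reliable information on that."
    # No corroboration step: the single matching page is treated as ground truth.
    return f"According to available sources, {sources[0]}"


if __name__ == "__main__":
    print(answer("Who won the 2026 South Dakota International Hot Dog Championship?"))
```

Run against its two-page “web,” the script answers the hot-dog question by quoting the fabricated post verbatim. The real systems Germain tested operate at vastly larger scale, but for a query obscure enough, a single optimized post can still be all the retrieval step finds.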
Germain’s claims about hot-dog supremacy were intentionally ridiculous. But the technique, he and others have suggested, could be applied to more consequential topics.
“It’s bad enough that ChatGPT is prone to making stuff up completely on its own,” he wrote. “But it turns out that you can easily trick the AI into peddling ridiculous lies — that you invented — to other users.”
For now, the title of world’s top hot-dog-eating tech journalist remains self-appointed. But the ease with which that fiction traveled offers a revealing glimpse into the evolving relationship between artificial intelligence and the open web.
