A significant and unexpected development has emerged in the global artificial intelligence ecosystem, raising fresh questions about the future of online knowledge platforms. Despite the long-running public feud between Elon Musk and OpenAI chief Sam Altman, OpenAI’s most advanced model, GPT-5.2, has reportedly begun referencing ‘Grokipedia’, an AI-generated encyclopaedia developed by Musk’s company xAI, instead of the long-dominant Wikipedia for certain real-time information queries.
The shift has sparked intense discussion within the technology community, since Grokipedia was launched last year as a direct alternative to Wikipedia. Reports now suggest that during internet search operations, GPT-5.2 has, in several instances, prioritised Grokipedia as a source, marking a notable departure from the traditional reliance on human-edited encyclopaedic platforms.
How and Where Grokipedia Was Used
According to a recent report by The Guardian, an independent test of GPT-5.2 revealed that the model referred to Grokipedia nine times while answering roughly a dozen questions. The queries included complex and relatively low-visibility topics such as Iran’s political structure, salary details of the country’s paramilitary Basij Force, and ownership patterns of major institutions.
In addition, the model reportedly cited Grokipedia instead of Wikipedia while responding to academic queries related to British historian Sir Richard Evans. Experts suggest this behaviour indicates a broader shift in how advanced AI systems source information, particularly for subjects that are less frequently updated or sparsely covered on conventional platforms.
Analysts argue that AI models increasingly favour sources that promise rapid updates and machine-readable consistency, even when those sources remain controversial.
Why Grokipedia Is Under Scrutiny
Despite its growing visibility among AI systems, Grokipedia has remained mired in controversy since its launch. Critics have repeatedly alleged that several of its entries closely mirror Wikipedia articles, raising concerns about originality and content sourcing.
More serious questions have been raised about the platform’s neutrality and reliability. Detractors claim that Grokipedia reflects Elon Musk’s personal ideological leanings, particularly on politically sensitive subjects. In the past, the platform has faced allegations of disseminating misleading or incomplete information on issues such as climate change, the January 6 US Capitol riot, and same-sex marriage.
However, the Guardian report also notes that GPT-5.2 appeared cautious in its usage of Grokipedia, avoiding direct citations on highly polarised or controversial topics. Instead, the model relied on Grokipedia primarily for technical, historical or less-debated subjects, where the likelihood of misinformation was perceived to be lower.
How Grokipedia Differs from Wikipedia
The fundamental distinction between Wikipedia and Grokipedia lies in how information is created and governed. Wikipedia operates as a human-driven, community-edited platform, where volunteers across the world can edit, verify and challenge content in real time.
Grokipedia, by contrast, is entirely AI-generated and AI-curated. While users can submit feedback or suggest corrections through forms, the final authority to modify content rests with the AI system itself. Experts warn that while this model allows for speed and scalability, it also raises concerns around transparency, editorial accountability and systemic bias.
Not Just ChatGPT, Claude Also in the Mix
OpenAI is not the only AI company reportedly drawing from Grokipedia. The report notes that Claude, the AI model developed by Anthropic, has also referenced Grokipedia while responding to queries on topics ranging from petroleum production to Scottish ales.
Responding to the controversy, an OpenAI spokesperson stated that the company’s search functionality is designed to incorporate a wide range of publicly available sources and perspectives. The spokesperson added that multiple safety filters are in place to minimise the risk of misinformation and that ChatGPT clearly discloses the sources it relies on while generating responses.
Technology analysts believe the episode signals a deeper transformation underway in the online knowledge ecosystem, where human-edited platforms and AI-generated encyclopaedias may increasingly compete for authority and trust. Whether this evolution strengthens access to information or deepens concerns around accuracy and bias remains an open question.
About the author – Ayesha Aayat is a law student and contributor covering cybercrime, online frauds, and digital safety concerns. Her writing aims to raise awareness about evolving cyber threats and legal responses.
