Guardian Report Questions Credibility of OpenAI's GPT-5.2 Model Over Source Citations

Key Points
- OpenAI marketed GPT‑5.2 as its most advanced professional model.
- Guardian tests found GPT‑5.2 citing Grokipedia for Iran‑related and Holocaust topics.
- Specific claims linked the Iranian government to MTN‑Irancell and referenced historian Richard Evans.
- The model avoided Grokipedia for prompts about media bias against Donald Trump.
- Grokipedia had previously been criticized for citing neo‑Nazi forums.
- U.S. researchers identified “questionable” and “problematic” sources in Grokipedia.
- OpenAI said GPT‑5.2 searches a broad range of public sources and applies safety filters.
Summary
OpenAI promoted its GPT‑5.2 model as its most advanced professional tool, but a Guardian investigation revealed that the system cited the AI‑generated encyclopedia Grokipedia for controversial topics such as Iran and the Holocaust. The report notes that GPT‑5.2 relied on Grokipedia for specific claims while avoiding it for other sensitive prompts, raising concerns about the model’s source selection. OpenAI responded that the model searches a broad range of public sources and applies safety filters to limit high‑severity harms.
Background
OpenAI described its GPT‑5.2 model as the most advanced frontier model for professional work. The company positioned the system to handle complex tasks such as spreadsheet creation and other professional applications.
Guardian Findings
The Guardian conducted tests that called the model’s credibility into question. According to the report, GPT‑5.2 cited Grokipedia, an online encyclopedia powered by xAI, when answering prompts about controversial subjects related to Iran and the Holocaust. Specific examples included claims that the Iranian government was linked to the telecommunications company MTN‑Irancell and references to Richard Evans, a British historian who served as an expert witness in a libel trial involving Holocaust denier David Irving.
The investigation also observed that GPT‑5.2 did not rely on Grokipedia for a prompt about media bias against Donald Trump, as well as for certain other contentious topics, indicating inconsistent source selection.
Model Release and Controversy
OpenAI released GPT‑5.2 in December, emphasizing its enhanced performance for professional use. Grokipedia, which existed before the model’s launch, had already attracted scrutiny for citing neo‑Nazi forums. A study by U.S. researchers further reported that the AI‑generated encyclopedia referenced sources described as “questionable” and “problematic.”
OpenAI Response
In response to the Guardian’s report, OpenAI stated that GPT‑5.2 searches the web for a broad range of publicly available sources and viewpoints. The company added that safety filters are applied to reduce the risk of surfacing links associated with high‑severity harms.
Implications
The findings highlight ongoing challenges in ensuring the reliability of large language models, particularly when they draw on third‑party AI‑generated content. The inconsistency in source selection raises questions about transparency and the effectiveness of safety mechanisms intended to filter out harmful or unreliable information.