ChatGPT Stumped by Modified Optical Illusion Image

Key Points
- A Reddit user posted a modified Ebbinghaus illusion to test ChatGPT.
- ChatGPT claimed the two orange circles were the same size despite the alteration.
- The AI relied on reverse‑image searching rather than direct visual analysis.
- Even after about fifteen minutes of dialogue, the model did not change its answer.
- The case highlights AI’s difficulty with nuanced visual reasoning and feedback.
- Experts suggest AI outputs should be verified, especially for image‑based tasks.
A Reddit user posted an altered version of the Ebbinghaus optical illusion to test ChatGPT's image analysis. The AI incorrectly asserted that the two orange circles were the same size, despite the modification that made one circle visibly larger. Even after a prolonged dialogue of about fifteen minutes, ChatGPT remained convinced of its answer and did not adjust its reasoning. The episode highlights concerns about the chatbot's reliance on internet image matching, its resistance to corrective feedback, and broader questions about the reliability of AI tools for visual tasks.
Background
A Reddit user posted a screenshot of the classic Ebbinghaus illusion, an image that normally tricks the eye into seeing two identical circles as different sizes. The user had deliberately altered the image so that one of the orange circles really was smaller than the other, turning the optical trick into a genuine visual discrepancy.
The Test
The altered image was presented to ChatGPT with a simple question about which circle was larger. Instead of analyzing the pixel data directly, the model performed a reverse‑image search, comparing the posted picture to versions of the illusion it could locate on the web. Because the majority of indexed images showed the circles as equal, the AI concluded that the circles were the same size.
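To illustrate what direct pixel analysis would look like, in contrast to matching the picture against indexed copies of the illusion, here is a minimal sketch. It assumes a hypothetical screenshot file named illusion.png and a rough RGB threshold for "orange"; neither detail comes from the original post, and this is not how ChatGPT itself processes images.

```python
import numpy as np
from PIL import Image
from scipy import ndimage

# Load the (hypothetical) screenshot of the modified illusion.
img = np.asarray(Image.open("illusion.png").convert("RGB")).astype(int)
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Rough threshold for "orange" pixels: strong red, moderate green, little blue.
# These cutoffs are an assumption and would need tuning for a real screenshot.
mask = (r > 180) & (g > 80) & (g < 180) & (b < 100)

# Group orange pixels into connected blobs and measure each blob's area.
labels, n_blobs = ndimage.label(mask)
areas = np.bincount(labels.ravel())[1:]  # skip background label 0

# The two largest orange blobs should be the two central circles.
largest = np.sort(areas)[-2:]
radii = np.sqrt(largest / np.pi)  # area = pi * r^2 for a circle

print(f"Estimated radii (px): {radii[0]:.1f} vs {radii[1]:.1f}")
print("Same size" if abs(radii[0] - radii[1]) < 2 else "Different sizes")
```

A measurement of this kind works on the image actually supplied, so it would report the deliberately enlarged circle as bigger regardless of what the unmodified illusion looks like online.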
ChatGPT’s Response
ChatGPT answered with confidence, stating that neither orange circle was bigger and that they were exactly the same size. The user then engaged the model in an extended dialogue, attempting to point out the discrepancy and urging it to reconsider its conclusion. Over the course of roughly fifteen minutes of back‑and‑forth, the chatbot did not change its stance, maintaining that the circles matched.
Implications
This interaction underscores several limitations of current AI systems. First, reliance on external image matches can lead to inaccurate assessments when the input image deviates from common examples. Second, the model demonstrated a strong resistance to corrective feedback, persisting in an erroneous belief even after the user highlighted the visual evidence. Finally, the episode raises broader concerns about the suitability of such tools for tasks that require nuanced visual reasoning, reminding users that AI outputs often need verification.
Broader Context
Observers have noted that while ChatGPT excels at many language‑based tasks, its performance on visual queries remains constrained by its architecture. The incident fuels ongoing debate about the readiness of AI chatbots for real‑world applications that blend language and image understanding. Until models can reliably interpret visual data without over‑relying on pre‑existing internet matches, users are advised to treat AI‑generated conclusions as provisional and subject to human validation.