ChatGPT Introduces Simplified Model Picker, Hiding Underlying Models

Key Points
- ChatGPT now shows only three model options: Instant, Thinking, and Pro.
- Underlying model selection is automatic and based on prompt complexity.
- Older model names have been hidden in the Configure settings menu.
- Automatic switching aims to balance speed, cost, and performance.
- Casual users experience smoother interactions; power users may notice variability.
- Legacy models can still be accessed by turning off automatic switching.
- The update reduces visible complexity but adds hidden variability.
ChatGPT now displays only three model options—Instant, Thinking, and Pro—while the actual AI engine is chosen automatically based on prompt complexity and other factors. The older model names have been removed from the main interface and are only accessible through hidden settings. This shift aims to streamline the user experience and reduce costs, but it also means users may not know which model generated a given answer, creating potential gaps between expectation and reality.
What Changed in the ChatGPT Interface
OpenAI has refreshed the model selector at the top of the ChatGPT screen. Instead of a long list of version numbers such as 5.4, 4o, and o3, users now see only three choices labeled Instant, Thinking, and Pro. The change is presented as a simplification, but the new labels describe broad response styles rather than specific model versions.
How Model Switching Works
Behind the simplified picker, ChatGPT continues to run a suite of language models. The system decides which model to use for each request based on the complexity of the prompt, usage patterns, and internal settings. A lightweight model may produce a quick, conversational reply, while a more powerful model may take longer to deliver a detailed, structured answer. Users are not explicitly told which model handled their request.
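The routing described above can be sketched as a simple dispatcher that scores a prompt and picks a model tier. This is an illustrative assumption, not OpenAI's actual logic: the model names, thresholds, and the word-count-plus-cue-words heuristic below are all invented for the sketch.

```python
# Illustrative sketch of prompt-based model routing.
# Tier names, thresholds, and the scoring heuristic are
# assumptions for illustration, not OpenAI's real system.

def complexity_score(prompt: str) -> int:
    """Rough proxy for prompt complexity: length plus reasoning cue words."""
    cues = ("explain", "prove", "compare", "step by step", "analyze")
    score = len(prompt.split())
    score += 20 * sum(cue in prompt.lower() for cue in cues)
    return score

def route(prompt: str, usage_exceeded: bool = False) -> str:
    """Pick a backend model tier for a single request."""
    if usage_exceeded:               # usage limits force a quiet downgrade
        return "light-model"
    score = complexity_score(prompt)
    if score < 30:
        return "light-model"         # quick, conversational replies
    elif score < 60:
        return "mid-model"
    return "heavy-model"             # detailed, structured answers
```

A short greeting would land on the light tier, a multi-step proof request on the heavy tier, and a request made past a usage cap would be downgraded regardless of its complexity; the user sees only the answer, not which tier produced it.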
Why OpenAI Made the Update
The most advanced models are slower and more expensive to operate. Running them for every query would make the service feel sluggish and increase costs. By automatically routing simpler queries to lighter models and reserving heavier models for more demanding tasks, OpenAI aims to deliver a balance of speed and capability.
Impact on Users
For casual users, the change is largely invisible: they type a question and receive an answer without needing to manage model details. More attentive users may notice variation in answer depth or response time caused by an unseen model switch. Usage limits can also trigger silent downgrades that reduce the reasoning effort applied to prompts.
Accessing Legacy Models
The older models have not been removed entirely. They remain available in the Configure menu under Settings, where users can turn off automatic switching and manually select a specific model. The same menu also lets users adjust how much computational effort the system applies when reasoning.
Overall Assessment
The streamlined picker reduces friction for the majority of users while giving power users a way to reclaim control through hidden settings. The trade‑off is reduced predictability, as two people using ChatGPT at the same time may be interacting with different underlying systems even though their screens look identical.