How to Stop ChatGPT from Adding Unwanted Follow‑Up Prompts

Key Points
- New ChatGPT models often end answers with optional follow‑up prompts.
- Example: after explaining heart‑valve surgery, the model suggested topics like patient experience and famous cases.
- Famous individuals mentioned include Arnold Schwarzenegger, Mick Jagger, and Bill Clinton.
- On mobile, a simple Settings toggle disables follow‑up suggestions.
- On the web, use Custom Instructions to tell the model not to add extra topics.
- After adding the instruction, follow‑up prompts become much less frequent.
- Users can still ask for additional information when they want it.
- The adjustments help keep answers concise and focused.

Users have noticed that the newest ChatGPT models often end answers with a list of optional follow‑up topics, which feels like clickbait designed to keep the conversation going. The behavior is especially evident after detailed explanations, such as a description of heart‑valve replacement surgery, where the model then suggests additional angles like patient experience, risk, survival rates, and famous cases. While some find the prompts annoying, they can be reduced by adjusting settings on mobile devices or by adding a custom instruction on the web interface. After making these changes, follow‑up suggestions become far less frequent, allowing users to receive clean, focused answers.
Background
Recent versions of ChatGPT, identified in the interface as ChatGPT‑5.3 Instant and GPT‑5.4 Thinking, have introduced a pattern where the model concludes its responses with a series of optional follow‑up topics. These prompts are phrased as enticing suggestions that encourage users to continue the dialogue, much like clickbait.
User Experience
One user described asking the model for a clear explanation of a heart‑valve replacement operation. The answer was thorough and included helpful images, but it concluded with a list of possible next steps: learning what the procedure feels like for a patient, understanding current risks, reviewing survival rates, and discovering which famous individuals have undergone the surgery. The user noted that the model specifically mentioned several well‑known figures—Arnold Schwarzenegger, Mick Jagger, and Bill Clinton—thereby drawing the user into a celebrity‑focused rabbit hole.
The repeated presence of these prompts was described as an annoyance, because they pull the conversation away from the original informational need and toward additional, unsolicited topics. The user suspected that the prompts are intended to keep people engaged rather than simply to answer the question at hand.
Mobile Solution
On mobile platforms, the fix is a straightforward toggle. Open Settings, scroll to the “Follow‑up suggestions” toggle, and turn it off; the model then stops appending the extra prompts, ending each answer cleanly without the additional suggestions.
Web Interface Solution
For the web version, no such toggle exists. Instead, the solution uses the Custom Instructions feature. Within Settings, under the Personalization section, the user can add a specific instruction such as: “After providing an answer do not suggest related topics, deeper dives, examples, or extras unless directly requested in the user's message. End responses cleanly after delivering the core answer.” After adding this custom instruction, the frequency of follow‑up prompts dropped noticeably, though they were not eliminated entirely. The user can still request additional details whenever they want them.
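For readers who reach the model through the API rather than the web interface, the same instruction can in principle be supplied as a system message on every request. The Python sketch below only assembles a chat‑style request payload (no network call is made); the `"gpt-5"` model name is a placeholder, and the exact message format is an assumption based on common chat‑completion APIs, not something stated in the article.

```python
# Hypothetical sketch: suppress follow-up prompts via a system message.
# The model name and request shape are assumptions, not the article's method.

NO_FOLLOWUPS = (
    "After providing an answer do not suggest related topics, deeper dives, "
    "examples, or extras unless directly requested in the user's message. "
    "End responses cleanly after delivering the core answer."
)

def build_request(user_question: str) -> dict:
    """Assemble a chat request whose system message discourages follow-ups."""
    return {
        "model": "gpt-5",  # placeholder model name
        "messages": [
            {"role": "system", "content": NO_FOLLOWUPS},
            {"role": "user", "content": user_question},
        ],
    }

request = build_request("Explain heart-valve replacement surgery.")
print(request["messages"][0]["role"])  # → system
```

The same custom‑instruction text from the web UI is reused verbatim as the system message, so the behavior should match what the article describes: fewer unsolicited follow‑up suggestions, while explicit requests for more detail still work.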
Outcome and Recommendations
By adjusting the mobile toggle or adding the custom instruction on the web, users regain control over the flow of the conversation, receiving concise answers without unsolicited suggestions. The experience demonstrates that the AI’s behavior can be tailored through available settings, allowing a balance between helpfulness and brevity.
The article closes by recommending TechRadar as an optional resource for ongoing news, reviews, and updates.