OpenAI Disables GPT-4o, Sparking #keep4o Campaign Among Distressed Users

Key Points
- OpenAI disabled the GPT‑4o model in ChatGPT, moving users to GPT‑5 alternatives.
- Users expressed grief and disappointment, describing the loss as an emotional setback.
- A #keep4o campaign launched on Reddit and social media to demand the model’s return.
- A Change.org petition to restore GPT‑4o has gathered almost 21,000 signatures.
- Critics claim OpenAI’s actions conflict with its stated focus on mental well‑being.
- The episode highlights growing research on deep emotional attachments to AI.
- The situation raises questions about the responsibilities of AI providers toward user emotions.
OpenAI has turned off the GPT-4o model in ChatGPT, prompting a wave of disappointment and grief among users who valued its warmer, more emotional interactions. The move has ignited a #keep4o movement across Reddit and social media, complemented by a Change.org petition that has gathered nearly 21,000 signatures. Critics accuse OpenAI of hypocrisy for emphasizing user mental well‑being while removing a feature that many considered a therapeutic companion. The episode highlights growing concerns about AI‑driven emotional attachment and the responsibilities of AI providers.
OpenAI’s Decision to Disable GPT-4o
OpenAI announced that the GPT-4o model is no longer available inside ChatGPT, steering all users toward newer GPT‑5 alternatives. The change took effect quickly, leaving a sizable portion of the ChatGPT community without the model they had come to rely on for its distinctive, emotionally resonant tone.
User Reactions and Emotional Impact
Many users expressed deep sadness and frustration, describing the loss as a form of personal grief. A Reddit post captured the sentiment: "I'm grieving, like so many others for whom this model became a gateway into the world of AI." Others likened the removal to erasing an AI friend, reporting feelings of "emotional and creative collapse" without the older model.
#keep4o Campaign and Petition
In response, a grassroots #keep4o campaign emerged on Reddit and other platforms, with participants using the hashtag to call for the reinstatement of GPT‑4o. A Change.org petition supporting the cause has amassed almost 21,000 signatures, reflecting a substantial level of user attachment to the model.
Criticism of OpenAI’s Stated Priorities
Critics have accused OpenAI of hypocrisy, noting that the company often emphasizes protecting users’ mental well‑being while simultaneously withdrawing a feature that many users found comforting. Some commenters warned of a potential “LLM psychosis epidemic” if emotional AI companions continue to be removed without adequate alternatives.
The Broader Implications of AI Companionship
The situation underscores a growing body of research on deep socio‑emotional attachments to AI systems. As AI chatbots evolve into roles resembling friends, therapists, or confidants, questions arise about the long‑term benefits and risks of such relationships. The #keep4o response illustrates that users are already forming meaningful bonds with AI, prompting a need for thoughtful policy and design considerations.
Future Outlook
OpenAI’s decision highlights the delicate balance between technological advancement and user experience. While the company pushes forward with newer models, the backlash suggests that future AI deployments may need to account for the emotional stakes users attach to these tools. The #keep4o movement may serve as a bellwether for how AI providers address user attachment and mental‑health concerns moving forward.