OpenAI’s Codex CLI Prompt Bars GPT‑5.5 From Mentioning Goblins and Similar Creatures

Ars Technica

Key Points

  • OpenAI released Codex CLI source code, exposing a 3,500‑word system prompt for GPT‑5.5.
  • The prompt explicitly bans mentions of goblins, gremlins, raccoons, trolls, ogres, pigeons, and similar creatures unless directly relevant.
  • Earlier model prompts lacked this restriction, indicating a response to recent user complaints about off‑topic fantasy references.
  • OpenAI employee Nick Pash said the rule is a technical safeguard, not a marketing stunt.
  • CEO Sam Altman joked about the “goblin moment,” acknowledging the public’s reaction.

OpenAI released the source code for its Codex command‑line interface last week, revealing a 3,500‑word system prompt for the newly unveiled GPT‑5.5. Among routine instructions, the prompt explicitly forbids the model from talking about goblins, gremlins, raccoons, trolls, ogres, pigeons or any other creature unless the user’s query makes it directly relevant. The restriction appears twice in the document and is absent from prompts for earlier models, suggesting OpenAI is responding to a spike in off‑topic references to such beings. OpenAI staff say the rule is a technical safeguard, not a marketing stunt.

OpenAI posted the Codex command‑line interface (CLI) source code on GitHub last week, making public a sprawling set of base instructions that govern the behavior of its latest language model, GPT‑5.5. The 3,500‑plus‑word prompt contains a series of operational rules, ranging from the mundane—such as avoiding emojis or em dashes unless the user asks—to a striking prohibition: the model must never discuss goblins, gremlins, raccoons, trolls, ogres, pigeons, or any other animal or creature unless the request is "absolutely and unambiguously relevant" to the user’s query.

Why the new clause matters

Earlier versions of OpenAI’s system prompts did not include the goblin‑related ban. The sudden appearance of the clause suggests the company is addressing a specific issue that emerged with GPT‑5.5. Social‑media users have been posting complaints that the model keeps drifting toward fantasy creatures, especially goblins, even when the conversation is unrelated. By hard‑coding a restriction, OpenAI aims to keep the model on task and reduce distractions that could affect user experience or downstream applications.

OpenAI’s response

Nick Pash, a Codex engineer at OpenAI, responded to the public backlash on Twitter, emphasizing that the rule is not a publicity gimmick. "This isn’t a marketing stunt," he wrote, reiterating that the directive is a technical safeguard. The company’s chief executive, Sam Altman, added a tongue‑in‑cheek comment, "Feels like Codex is having a ChatGPT moment. I meant a goblin moment, sorry," acknowledging the meme‑like attention the clause has generated.

The prompt also reminds the model not to execute destructive git commands, such as git reset --hard or git checkout --, unless the user explicitly asks for them. This mirrors OpenAI’s broader effort to embed safety and responsibility directly into the model’s operating instructions, a practice that has become standard as the technology matures.
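For context on why those two commands are singled out, both can silently discard uncommitted work, which is what makes them destructive. The short sketch below, which uses a hypothetical file path, illustrates the effect; it is an explanatory example, not text taken from the prompt itself.

    # Moves the current branch and HEAD back one commit and throws away
    # any uncommitted changes in the working tree and index.
    git reset --hard HEAD~1

    # Restores a file from the index, overwriting local edits that were
    # never committed (the path shown is only an example).
    git checkout -- src/app.py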

OpenAI’s decision to share the entire prompt file, including the goblin ban, reflects its commitment to transparency. Developers and researchers can now see exactly how the company is shaping model behavior at the code level. Whether the clause will curb the unwanted references remains to be seen, but the move signals that OpenAI is willing to intervene directly when a pattern of off‑topic chatter emerges.

Generated with News Factory - Source: Ars Technica
