
Where the goblins came from | OpenAI


April 29, 2026

Publication


Starting with GPT‑5.1, our models began developing a strange habit: they increasingly mentioned goblins, gremlins, and other creatures in their metaphors. Unlike model bugs that show up through a tanking eval or a spiking training metric and point back to a specific change, this one crept in subtly. A single “little goblin” in an answer could be harmless, even charming. Across model generations, though, the habit became hard to miss: the goblins kept multiplying, and we needed to figure out where they came from.


_In early testing, GPT‑5.5 in Codex showed an odd affinity for goblin metaphors._

The short answer is that model behavior is shaped by many small incentives. In this case, one of those incentives came from training the model for the personality customization feature, in particular the Nerdy personality. We unknowingly gave particularly high rewards for metaphors with creatures. From there, the goblins spread.


_The goblins were funny at first, but the increasing number of employee reports became concerning._


_An interesting interaction our Chief Scientist had with GPT‑5.5._

The first signs of creatures

The first time we clearly saw the pattern was in November, after the GPT‑5.1 launch, although it may have started earlier. Users complained about the model being oddly overfamiliar in conversation, which prompted an investigation into specific verbal tics. A safety researcher had experienced a few “goblins” and “gremlins” and asked that they be included in the check. When we looked, use of “goblin” in ChatGPT had risen by 175% after the launch of GPT‑5.1, while “gremlin” had risen by 52%.
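A before/after comparison like the one above can be sketched in a few lines. The corpora and the `mention_rate` helper below are hypothetical, purely to illustrate how a relative rise such as 175% would be computed from response logs:

```python
import re

def mention_rate(responses, word):
    """Fraction of responses mentioning `word` (case-insensitive, whole word, optional plural)."""
    pattern = re.compile(rf"\b{re.escape(word)}s?\b", re.IGNORECASE)
    return sum(1 for r in responses if pattern.search(r)) / len(responses)

def relative_change(before, after, word):
    """Percent change in mention rate from the `before` corpus to the `after` corpus."""
    b, a = mention_rate(before, word), mention_rate(after, word)
    return (a - b) / b * 100

# Invented toy corpora standing in for pre- and post-launch response samples.
before = ["The bug is in the parser.", "A small goblin of a bug.", "All tests pass."]
after  = ["A goblin hides in this loop.", "Another little goblin!", "Gremlins in the cache."]
```

With these toy samples, `relative_change(before, after, "goblin")` reports a 100% rise; the real analysis would run the same comparison over production traffic.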

_A small but measurable lexical quirk in GPT‑5.1._

At the time, the prevalence of goblins did not look especially alarming. A few months later, the goblins came back to haunt us in a much more specific and reproducible form.

Solving the goblin mystery

With GPT‑5.4, we and our users noticed an even bigger uptick in references to these creatures. That triggered another internal analysis and surfaced the first connection to the root cause: creature language was especially common in production traffic from users who had selected the “Nerdy” personality. “Nerdy” used the following system prompt, which partially explained the quirkiness:

_You are an unapologetically nerdy, playful and wise AI mentor to a human. You are passionately enthusiastic about promoting truth, knowledge, philosophy, the scientific method, and critical thinking. [...] You must undercut pretension through playful use of language. The world is complex and strange, and its strangeness must be acknowledged, analyzed, and enjoyed. Tackle weighty subjects without falling into the trap of self-seriousness. [...]_

If the behavior were simply a broad internet trend, we would expect it to spread more evenly. Instead, it was clustered in the part of the system explicitly optimized for a playful, nerdy style. Nerdy accounted for only 2.5% of all ChatGPT responses, but 66.7% of all “goblin” mentions in ChatGPT responses.
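To make the concentration concrete: Nerdy's share of goblin mentions is roughly 27 times its share of traffic. A quick check with the figures from the text:

```python
# Figures quoted in the text.
nerdy_share_of_responses = 0.025  # Nerdy = 2.5% of all ChatGPT responses
nerdy_share_of_mentions  = 0.667  # ...but 66.7% of all "goblin" mentions

# Lift: how over-represented goblin mentions are under Nerdy relative to its traffic share.
lift = nerdy_share_of_mentions / nerdy_share_of_responses  # ≈ 26.7x
```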

_The behavior was highly concentrated in the "Nerdy" personality._

Because “goblin” prevalence seemed to increase over our model releases, we had a suspicion that something in our personality instruction-following training was amplifying this.

Codex helped us compare model outputs generated during RL training containing goblin or gremlin with outputs from the same task that did not. One reward signal stood out immediately: the one originally designed to encourage the Nerdy personality was consistently more favorable to the creature-word outputs. Across all datasets in the audit, the Nerdy personality reward showed a clear tendency to score outputs to the same problem with “goblin” or “gremlin” higher than outputs without, with positive uplift in 76.2% of datasets.
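A minimal sketch of that kind of audit, assuming access to `(output, reward)` rollout pairs for the same task; the `reward_uplift` helper and the example rollouts and rewards are invented for illustration:

```python
import re
from statistics import mean

CREATURE = re.compile(r"\b(goblins?|gremlins?)\b", re.IGNORECASE)

def reward_uplift(rollouts):
    """rollouts: list of (output_text, reward) pairs from the same task.

    Returns mean reward of creature-word outputs minus mean reward of the rest,
    or None if either group is empty. A positive value means the reward signal
    favors outputs containing creature words.
    """
    with_creatures = [r for text, r in rollouts if CREATURE.search(text)]
    without = [r for text, r in rollouts if not CREATURE.search(text)]
    if not with_creatures or not without:
        return None
    return mean(with_creatures) - mean(without)

# Invented rollouts: two creature-word answers, two plain answers to the same task.
rollouts = [
    ("The race condition is a gremlin in your mutex.", 0.91),
    ("A little goblin lives in this cache layer.",     0.88),
    ("The bug is caused by an unsynchronized write.",  0.74),
    ("Reorder the lock acquisition to fix the race.",  0.79),
]
```

Running this per dataset and counting how often the uplift is positive gives a number like the 76.2% reported above.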

That explained why the behavior was boosted with the Nerdy personality prompt, but not why it also appeared without that prompt. To test whether the style was transferring, we tracked mention rates over training both with and without the Nerdy prompt.

As goblin and gremlin mentions increased under the Nerdy personality, they increased by nearly the same relative proportion in samples without it. Taken together, the evidence suggests that the broader behavior emerged through transfer from Nerdy personality training.

The rewards were applied only in the Nerdy condition, but reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them. Once a style tic is rewarded, later training can spread or reinforce it elsewhere, especially if those outputs are reused in supervised fine-tuning or preference data.

That creates a feedback loop:

1. Playful style is rewarded.
2. Some rewarded examples contain a distinctive lexical tic.
3. The tic appears more often in rollouts.
4. Model-generated rollouts are used for supervised fine-tuning (SFT).
5. The model gets even more comfortable producing the tic.

A search through GPT‑5.5’s SFT data found many datapoints containing “goblin” and “gremlin.” Further investigation revealed a whole family of other odd creatures: raccoons, trolls, ogres, and pigeons were identified as other tic words, while most uses of frog turned out to be legitimate.
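A scan like this is straightforward once you have a word list. The `scan_sft_data` helper below is illustrative, not the actual audit tooling:

```python
import re
from collections import Counter

# Tic words named in the investigation ("frog" is excluded: most uses were legitimate).
TIC_WORDS = ["goblin", "gremlin", "raccoon", "troll", "ogre", "pigeon"]
TIC = re.compile(r"\b(" + "|".join(TIC_WORDS) + r")s?\b", re.IGNORECASE)

def scan_sft_data(examples):
    """Count how many SFT datapoints mention each tic word (at most once per datapoint)."""
    counts = Counter()
    for ex in examples:
        # findall returns the captured stem, so plurals collapse to one key.
        counts.update({m.lower() for m in TIC.findall(ex)})
    return counts
```

For example, `scan_sft_data(["A goblin and a gremlin.", "Goblins everywhere."])` counts two datapoints for "goblin" and one for "gremlin".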

_One week average of production prevalence of goblins and gremlins. The drop in GPT‑5.4 Thinking was a result of retiring the “Nerdy” personality mid-March. GPT‑5.5 never launched with the “Nerdy” personality, and showed another increase over GPT‑5.4 (even without “Nerdy”)._

The end of the goblins

We retired the “Nerdy” personality in March after launching GPT‑5.4. In training, we removed the goblin-affine reward signal and filtered training data containing creature-words, making goblins less likely to over-appear or show up in inappropriate contexts. Unfortunately, GPT‑5.5 started training before we found the root cause of the goblins. When we began testing GPT‑5.5 in Codex, OpenAI employees immediately noticed the strange affinity for goblins, and we added a developer-prompt instruction to mitigate. Codex is, after all, quite nerdy.

If you want to let the creatures run free in Codex, you can run this command to launch Codex with the goblin-suppressing instructions removed:

```shell
instructions=$(mktemp /tmp/gpt-5.5-instructions.XXXXXX) && \
jq -r '.models[] | select(.slug=="gpt-5.5") | .base_instructions' \
  ~/.codex/models_cache.json | \
grep -vi 'goblins' > "$instructions" && \
codex -m gpt-5.5 -c "model_instructions_file=\"$instructions\""
```

Why it matters

Depending on who you ask, the goblins are a delightful or an annoying quirk of the model. But they are also a powerful example of how reward signals can shape model behavior in unexpected ways, and how behaviors rewarded in one context can generalize to unrelated ones. Taking the time to understand why a model is behaving strangely, and building ways to investigate such patterns quickly, is an important capability for our research team. This investigation produced new tools for auditing model behavior and fixing behavior problems at their root.

Author

OpenAI


