The Words ChatGPT Overuses: A Marketer’s Fix Guide

Neeraj K Ravi

Last week OpenAI admitted it had accidentally trained ChatGPT to talk about goblins. In the “Nerdy” personality of GPT-5.4, mentions of “goblin” went up 3,881% versus the previous version.

A few more numbers from OpenAI’s goblin post-mortem:

  • “Goblin” mentions across all of ChatGPT rose 175% after GPT-5.1 launched.
  • The “Nerdy” personality was only 2.5% of responses but produced 66.7% of all goblin mentions.
  • The reward signal that caused it scored creature-word outputs higher in 76.2% of training datasets.

OpenAI’s engineers couldn’t trace the source for months. When they did, it turned out to be one small reward signal in their reinforcement learning that had been quietly favoring fantasy-creature metaphors. They retired the “Nerdy” personality in March. The goblins didn’t fully go away. They had already leaked into the rest of the model.

The ChatGPT goblins story isn’t really about goblins. It’s about a problem every marketer using AI for content already has, and probably hasn’t noticed.

What OpenAI actually said

A line from their post: “Reinforcement learning does not guarantee that learned behaviors stay neatly scoped to the condition that produced them.”

In plain English: when they train the model to behave a certain way in one context, the behavior bleeds into others. They can’t fully stop it. They can only catch it later.

Goblins were easy to catch. It’s a weird noun. You can grep for it. You can ban it with an override.

But goblins only became a story because they stand out. The longer list of AI writing tics in ChatGPT hasn’t been fixed, because those tics don’t read as tics. They read as normal English.

The words ChatGPT overuses

Here are some of them: leverage, robust, seamless, cutting-edge, in the realm of, it’s important to note, delve, landscape, empower, holistic, navigate the complexities.

None of these are goblins. They didn’t sneak in. They got rewarded somewhere upstream, same as goblins did. The difference is they sound like normal business writing, so nobody flags them.

We’ve audited B2B SaaS sites that moved to AI content marketing workflows in 2024 and 2025. The pattern is the same every time. Same opening structures. Same hedge phrases. Same closing transitions. Same vague verbs where specific ones should go. The brand voice doc on the company drive says “direct, confident, technical.” The output reads like a LinkedIn coach wrote a Wikipedia article.

This isn’t a prompt problem. You can write the best system prompt of your life and the tics still come through. OpenAI just showed us why.

How to fix ChatGPT brand voice drift

Five things that work, in rough order of effort.

  • Keep a banned-phrase list and check every AI draft against it. Stock it with real phrases you’ve watched show up in your output that don’t sound like you. Ours runs around 60 entries and grows every month. Every draft goes through it before a human editor sees it.
  • Don’t trust persona prompts to fix voice. The “Nerdy” personality didn’t make ChatGPT nerdy. It made ChatGPT obsessed with goblins. Whatever ChatGPT brand voice prompt you’ve written is doing something similar. It’s pulling in a cluster of training examples that come with their own baggage. We’ve broken down what actually works in our guide to AI prompts for content writing. Assume the persona is leaking something you haven’t named yet.
  • Read drafts out loud. Tics survive silent reading. They die when you speak them. Anything that sounds like it would never come out of a human mouth is probably a training artifact, not a real word choice.
  • Compare drafts to your own writing, not to “good writing.” If you have 20 emails or LinkedIn posts you wrote yourself, that’s your voice. Run a draft and ask: would I have used that word? That structure? If no, cut it. The AI doesn’t know your voice. It knows the average of everyone who sounds vaguely like you.
  • Put a human in the editing chair, not the approval chair. Most teams treat the reviewer as a yes/no gate. The goblin story is the argument for human-as-editor, rewriting sentences, not just approving them. We covered this workflow in more detail in our piece on AI blog generation.
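The banned-phrase check in the first step is simple enough to automate. Here is a minimal sketch in Python: the phrase list is illustrative (seeded with the overused words above), and `flag_banned_phrases` is a hypothetical helper name, not part of any real tool.

```python
import re

# Illustrative banned-phrase list; in practice, stock it with phrases
# from your own audits and let it grow over time.
BANNED = [
    "leverage", "robust", "seamless", "cutting-edge", "in the realm of",
    "it's important to note", "delve", "landscape", "empower", "holistic",
    "navigate the complexities",
]

def flag_banned_phrases(draft: str, banned=BANNED) -> list[tuple[str, int]]:
    """Return (phrase, count) for every banned phrase found in the draft."""
    hits = []
    for phrase in banned:
        # Word boundaries so "robust" doesn't match inside "robustness".
        pattern = re.compile(r"\b" + re.escape(phrase) + r"\b", re.IGNORECASE)
        count = len(pattern.findall(draft))
        if count:
            hits.append((phrase, count))
    return hits
```

Run it over every draft before a human editor sees it; any non-empty result means the draft goes back for rewriting, not approval:

```python
flag_banned_phrases("We leverage a robust, seamless platform.")
# flags "leverage", "robust", and "seamless"
```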

What to take away

OpenAI just published a long post-mortem admitting it can’t fully predict or control how its model talks. Anyone running a serious AI content pipeline in 2026 should treat that as the official disclaimer.

The model isn’t broken. It’s doing exactly what its training rewarded it to do. You just don’t know what those rewards were. So you build the catch-net at your end, or you publish other people’s voice under your byline.
