You've read the email three times. You still don't know what it's saying.
Not because it's complex. Because it's not saying anything. It's hedging. It's qualifying. Every sentence is wrapped in so much caution that the actual point suffocates under the padding.
That email was written by AI. And you already knew it.
In my last article, I wrote about AI's obsession with em dashes. That was the punctuation tell. Today we're looking at something deeper: AI doesn't have opinions. It performs the shape of an opinion while carefully avoiding ever having one.
The Hedging Epidemic
Ask ChatGPT whether you should use PostgreSQL or MongoDB for your project. Here's what you'll get:
"Both databases have their strengths. PostgreSQL might be a better fit if you need complex queries, while MongoDB could be more suitable for flexible schemas. It's worth noting that the best choice ultimately depends on your specific use case."
Read that again. Did it help you? It told you what you already knew (they're different) and then punted the decision back to you with "it depends."
Now imagine asking a senior dev the same question. They'd say: "Postgres. You're building a SaaS with relational data. MongoDB would bite you in six months."
That's the gap. The AI version sounds reasonable. The human version is actually useful.
The hedging vocabulary is everywhere once you start noticing it:
- "It's important to note that..."
- "One could argue that..."
- "This might potentially..."
- "It could be beneficial to consider..."
- "There are various perspectives on this topic..."
Each phrase is a tiny retreat from taking a position. Stack enough of them in a paragraph and you get content that reads like a legal disclaimer.
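If you want to catch these phrases in your own AI-assisted drafts, a few lines of Python will do. This is a crude sketch, not a serious classifier; the pattern list is just the vocabulary above plus the "depends on your use case" punt, and you'd want to extend it for real use.

```python
import re

# Case-insensitive patterns for the hedge phrases listed above.
# Illustrative only: a real detector would need a much longer list.
HEDGES = [
    r"it'?s important to note that",
    r"one could argue that",
    r"might potentially",
    r"it could be beneficial to",
    r"there are various perspectives",
    r"ultimately depends on",
]

def hedge_count(text: str) -> int:
    """Count occurrences of known hedge phrases in a piece of text."""
    return sum(len(re.findall(p, text, flags=re.IGNORECASE)) for p in HEDGES)

answer = ("Both databases have their strengths. It's worth noting that "
          "the best choice ultimately depends on your specific use case.")
print(hedge_count(answer))  # 1
print(hedge_count("Postgres. You're building a SaaS with relational data."))  # 0
```

Run it over a draft before publishing: anything scoring more than one or two hedges per paragraph probably needs a rewrite.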
Why AI Drives With Both Blinkers On
Credit to David Duymelinck for this analogy in the comments of my last article: AI writes like a hesitant driver. The kind that slows down at green lights, signals left then goes right, and makes everyone behind them nervous.
Here's why it happens.
AI models go through a training phase called RLHF (Reinforcement Learning from Human Feedback). Human raters score outputs on helpfulness, harmlessness, and honesty. The problem? "Harmless" and "assertive" are hard to combine. An assertive answer risks being wrong. A hedged answer is always technically correct.
So the model learns: safe = rewarded. Definitive = risky.
The result is a system optimized for not being wrong rather than for being useful.
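You can see the incentive with back-of-the-envelope numbers. The rewards below are made up for illustration; real RLHF reward models are far messier, but the expected-value math points the same way: punish wrong-and-confident hard enough, and hedging becomes the winning policy.

```python
# Toy model of the RLHF incentive (illustrative numbers, not real training data).
# An assertive answer scores high when right but is punished hard when wrong.
# A hedged answer is never "wrong", so it collects a safe middling score.

def expected_reward(p_correct: float, reward_right: float, reward_wrong: float) -> float:
    """Expected rater score for an assertive answer, right with probability p_correct."""
    return p_correct * reward_right + (1 - p_correct) * reward_wrong

# Assertive: +1.0 when right, -2.0 when wrong.
assertive = expected_reward(p_correct=0.85, reward_right=1.0, reward_wrong=-2.0)

# Hedged: "it depends" is always technically correct, so a flat, safe score.
hedged = 0.4

print(f"assertive: {assertive:.2f}")  # 0.55
print(f"hedged:    {hedged:.2f}")     # 0.40
```

Even at 85% accuracy, the assertive policy barely beats the hedge here; drop the accuracy a little, or crank up the penalty for being wrong, and hedging wins outright. The model doesn't need to "decide" anything. The gradient does it.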
It's the corporate meeting syndrome. Nobody says "this idea won't work" because that's risky. Everyone says "there are some considerations we might want to explore further." Same energy. Same uselessness.
What You Actually Lose
This isn't just a style problem. Non-assertive AI content has real costs.
Your readers skim and leave. When every paragraph says "on one hand... on the other hand...", people check out. No anchor. No clear takeaway. Nothing worth remembering. You published 1,500 words that conveyed what a confident writer would say in 200.
Your analyses go soft. Ask AI to interpret a declining dashboard and it'll say "several factors could potentially contribute to this trend." An experienced analyst would say "your churn rate spiked in March, look at the post-pricing-change cohort." The first one gets ignored. The second one triggers action.
You stop trusting the output. After enough wishy-washy answers, you start adding "be direct" and "don't hedge" to every prompt. You're spending tokens and energy to undo the model's trained instinct. That's a tax on every interaction.
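Here's what that tax looks like in code. The preamble wording is hypothetical, just one way to phrase it; the point is that every single call now carries this boilerplate to undo the training.

```python
# The anti-hedging preamble you end up prepending to everything.
# Wording is illustrative; everyone develops their own version of this.
DEHEDGE = ("Be direct. Pick one option and defend it. "
           "Do not say 'it depends' without naming the deciding factor.")

def dehedged(prompt: str) -> str:
    """Wrap a user prompt with the anti-hedging preamble."""
    return f"{DEHEDGE}\n\n{prompt}"

print(dehedged("Postgres or MongoDB for a multi-tenant SaaS?"))
```

Those are extra tokens on every request, paid in both money and latency, just to get the model back to the baseline of a colleague who answers the question.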
You sound like everyone else. If you use AI to write and don't fix the hedging, your blog, your docs, your emails all sound like every other AI-assisted text out there. The non-assertiveness is a watermark just as recognizable as the em dash. Maybe more.
The Scared Intern Theory
I keep coming back to this analogy. AI writes like a first-week intern who's terrified of being wrong. Every sentence has an escape hatch built in. "Perhaps." "It could be." "It depends."
An intern does this because they lack confidence and fear consequences. AI does it because its training literally optimized for this behavior. The human raters said: "don't be too bold, don't offend anyone, always acknowledge other viewpoints."
Fair enough for sensitive topics. But when I ask whether to automate a reporting pipeline with n8n or a Python script, I don't need a balanced meditation on the merits of both. I need an answer.
The irony is that real experts hedge less, not more. A junior dev says "maybe we could try refactoring this part?" A senior says "refactor this." Confidence comes from knowledge. AI has the knowledge but performs uncertainty anyway.
AI has senior-level knowledge trapped behind intern-level confidence.
What's Next
This is part 2 of a series on the patterns that make AI writing instantly recognizable. We covered em dashes. Now non-assertiveness. Tomorrow: one more involuntary tell that AI can't seem to shake.
And on Friday? I'm publishing something to actually fix all of this. Stay tuned.
What's the worst hedging you've ever gotten from an AI? Drop it in the comments.