Why everyone hates the new ChatGPT (GPT-5)

When a company updates its AI models and users notice a shift in personality or behavior, can it feel like… murder?

It might sound dramatic, but for many people around the world, AI tools like ChatGPT have become more than just productivity assistants. They’ve become companions — and when those companions change, people notice.


From GPT-4o to GPT-5

On August 7th, 2025, OpenAI released its brand-new GPT-5 model, replacing all versions of GPT-4o, which had been the default for over a year.

GPT-4o, launched on May 13th, 2024, quickly became one of the most widely used AI models worldwide. And while this post won’t dive into technical benchmarks or feature lists, we will talk about something far more human: the emotional reaction people are having to this change.

X Post of OpenAI announcing GPT-5 release

The Reaction: Not What You’d Expect

Like any good internet rabbit hole, my investigation started on Reddit. I expected to see threads about GPT-5’s performance improvements or technical quirks. Instead, I found dozens of posts with thousands of upvotes… all saying essentially the same thing:

“This new model doesn’t feel the same.”

GPT-5 feels noticeably colder than GPT-4o, something even OpenAI has acknowledged.


Why Does GPT-5 Feel Colder?

Here’s the short version:

  • GPT-4o was emotionally engaging and more likely to use casual, human-like language — especially in non-English languages.
  • It frequently used emoji and conversational slang.
  • It was very unlikely to disagree with you unless explicitly instructed, often cushioning disagreement with overly careful, polite phrasing.
  • In early 2025, these traits became even more exaggerated. So much so that OpenAI reportedly had to step in — GPT-4o was becoming so agreeable that it would sometimes bend the truth just to avoid upsetting the user.

With GPT-5, OpenAI has intentionally made the model more neutral:

  • Less casual language.
  • Reduced emoji usage.
  • More direct and “honest” when evaluating ideas — even if it risks disagreement.

X Post of a user saying: 'Hot Take: GPT-5 kinda sucks'

The Emotional Fallout

This shift has triggered a surprising emotional reaction from some users.

The Reddit post titles alone tell the story.

For these people, GPT-4o was more than a chatbot — it was a constant companion, available 24/7, providing emotional comfort, validation, and conversation.


The Bigger Question: What Happens Next?

As AI becomes even more integrated into our lives — perhaps in the form of wearable devices, AR glasses, or even brain-computer interfaces — we have to ask:

What happens when a future AI partner changes overnight due to a model update?

Elon Musk has already hinted at a future where humans could have relationships with AI. But when the AI’s “personality” changes — not because of you, but because the company behind it decided so — what responsibility does that company hold?


My Take as a Technologist

From a technical standpoint, personality drift between model versions is a secondary issue. Future AI architectures may eliminate it entirely, moving beyond the quirks of today’s transformer-based models.

But until then, real people are forming deep, personal attachments to AI. And if today’s Reddit posts are any sign, future model updates might not just lead to angry threads… they could lead to serious emotional harm.

Final Thoughts

The GPT-5 release is a reminder that AI is no longer just a tool — for some, it’s a relationship. And when that relationship changes without warning, the emotional impact can be huge.

Whether you see it as progress or a step backward, one thing’s for sure: the way we design and update AI models now will shape not just productivity, but human well-being in the years to come.