GPT-5 Doesn't Vibe?

anupshinde · 9/1/2025, 5:16:45 AM
TL;DR - I have been struggling to get GPT-5 to 'vibe' reliably.

I don't know what's wrong with GPT-5 (on ChatGPT.com). On the surface, everything looks hunky-dory until you get into the details.

Sometimes it is too smart, and sometimes it is stupid - all within the same chat session. Sometimes it makes mistakes, other times it does not. Sometimes it remembers the context, sometimes it forgets and has to be reminded multiple times. With code, it writes simple code or complex code, buggy code or working code, and it is hard to predict which it will do when.

Problems often have multiple solutions, and GPT-5 seems to adhere to that philosophy. Give it the same problem twice and it will come up with different solutions - though not reliably, either.

It has become quite difficult to use (or 'vibe' with) compared to Claude models or GPT-4o.

The GPT-4o model was not better - but at least it was reliable. There was a subtle pattern to its behavior. I knew when it would work, when it would not, what mistakes it would make, and how to get it to work anyway. And when writing, GPT-4o would usually sound like a philosophical poet.

First, GPT-5 disappointed after the hyped launch. And now this.

This experience is mostly based on ChatGPT, and a bit on GitHub Copilot. With Copilot, I stick to Claude Sonnet 4.

The Canvas also feels buggy. When you ask it to edit a part of the text, it sometimes deletes other sections as well. That said, I do not use Canvas much.

I thought it might be due to auto-routing and tried to fix it by explicitly selecting the GPT-5 variant, but no luck.
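
For what it's worth, one way to take auto-routing out of the picture entirely is to go through the API instead of ChatGPT.com, where you pin an explicit model and a reasoning-effort level per request. A minimal sketch with the OpenAI Python SDK - the model id "gpt-5" and the reasoning_effort knob are assumptions about what your account exposes, not something I have verified:

    # Sketch: pin the model and reasoning effort explicitly instead of
    # relying on ChatGPT.com's auto-routing. Assumes the OpenAI Python SDK
    # and that your account lists a "gpt-5" model that accepts reasoning_effort.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-5",            # assumed model id; check what your account lists
        reasoning_effort="high",  # assumed knob on reasoning models; roughly "think hard"
        messages=[
            {"role": "system", "content": "Take your time and reason step by step."},
            {"role": "user", "content": "Refactor this function to remove the race condition: ..."},
        ],
    )
    print(resp.choices[0].message.content)

Whether that actually removes the run-to-run variability I can't say, but it at least rules out the router as the variable.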

Has anybody else faced the same problem? And how did you resolve it?

Comments (2)

CuriouslyC · 2h ago
GPT5 thinking is scary smart. GPT5 no thinking is surprisingly dim. Make sure you have thinking set and tell it to think hard.
anupshinde · 39m ago
Exactly what I have experienced. I have tried with thinking enabled and saw varying results. An analogy: each prompt in a conversation felt like I was talking to a different support representative who missed one part of the overall context or another, and I had to repeat myself many times. But there is also an inherently human part of me that forgets I'm talking to an AI.

However, I never explicitly told it to "think hard" - I will start doing that. Perhaps that is the key to making it work consistently.

Thanks!