GPT-5 Doesn't Vibe?
I don't know what's wrong with GPT-5 (on ChatGPT.com). On the surface, everything looks hunky-dory until you get into the details.
Sometimes it is too smart, and sometimes it is plain stupid - all within the same chat session. Sometimes it makes mistakes, other times it does not. Sometimes it remembers the context, sometimes it forgets and has to be reminded multiple times. With code, it sometimes writes simple code and sometimes convoluted code, sometimes buggy and sometimes working, and it is hard to predict which you will get.
Problems often have multiple solutions, and GPT-5 seems to take that philosophy to heart. Give it the same problem twice and it will often come up with two different solutions - though not reliably every time.
It has become quite difficult to use it (or 'vibe' with it) - compared to Claude models or GPT-4o.
The GPT-4o model was not better - but at least it was predictable. There was a subtle pattern to its behavior: I knew when it would work, when it would not, what mistakes it would make, and how to nudge it into working. And when writing, GPT-4o would usually sound like a philosophical poet.
First, GPT-5 disappointed after the hyped launch. And now this.
This experience is mostly based on ChatGPT, and a bit on GitHub Copilot. With Copilot, I stick to Claude Sonnet 4.
The Canvas also feels buggy. When you ask it to edit one part of the text, it sometimes deletes other sections as well. That said, I do not use Canvas much.
I thought it might be due to auto-routing, and tried to fix it by explicitly selecting the GPT-5 model variant - no luck.
Has anybody else faced the same problem? And how did you resolve it?
One thing I have not tried: explicitly telling it to "think hard." I will start doing that - maybe that is the key to getting it to work consistently.
Thanks!