Ask HN: What do you dislike about ChatGPT and what needs improving?
12 points by zyruh | 45 comments | 8/6/2025, 6:32:15 PM
I'm curious to hear feedback from the HN community about your biggest pain points or frustrations with ChatGPT (or similar LLMs).
What aspects of the experience do you find lacking, confusing, or outright irritating? Which improvements do you think are most urgent or would make the biggest difference?
You can say things like "you are a robot, you have no emotions, don't try to act human", but the output doesn't seem to be particularly well calibrated. I feel like when I modify the default response style, I'm probably losing something, considering that the defaults are what go through extensive testing.
Real humans have a spectrum of assuredness that naturally comes across in conversation. With an LLM it's too easy to get drawn deep into the weeds. For example, I may propose using a generalized framework to approach a certain problem. In a real conversation this may just be part of the creative process, and with time the thoughts may shift back to the actual hard data (and perhaps iterate on the framework); with an LLM, too often it will blindly build onto the framework without ever questioning it. Of course it's possible to prompt it into questioning the framework, but the natural progression of ideas can be lost in these conversations, and sometimes I come out 15 minutes later feeling like I took half a step backwards despite talking about what seemed at the time like great ideas.
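For concreteness, a minimal sketch of overriding the default style via a system message, assuming the OpenAI Python SDK; the instruction text is a hypothetical example, and, as noted above, deviating from the tested defaults may cost you calibration:

```python
# Minimal sketch: replacing the default response style with an explicit
# system message. The instruction wording is a hypothetical example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "You are a machine. Do not emote, flatter, or claim "
                    "feelings. State your uncertainty explicitly, and "
                    "question my framing before building on it."},
        {"role": "user",
         "content": "I propose a generalized framework for this problem..."},
    ],
)
print(response.choices[0].message.content)
```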
- Opaque training data (and provenance thereof… where’s my cut of the profits for my share of the data?)
- Closed source frontier models, profit-motive to build moat and pull up ladders (e.g. reasoning tokens being hidden so they can’t be used as training data)
- Opaque alignment (see above)
- Overfitting to in-context examples, e.g. syntax and structure are often copied from the examples even with contrary prompting (see the sketch after this list)
- Cloud models (seemingly) changing behavior even on pinned versions
- Over-dependence: “oops! I didn’t have to learn so I didn’t. My internet is out so now I feel the lack.”
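For the in-context overfitting point above, a small sketch of the failure mode, assuming the OpenAI Python SDK (the prompts are illustrative, not a benchmark):

```python
# The system instruction asks for prose, but the few-shot examples are
# bullet lists; models frequently mirror the examples' structure over
# the stated instruction. Illustrative prompts only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system", "content": "Answer in flowing prose, never in bullet points."},
    # Few-shot examples whose format contradicts the instruction:
    {"role": "user", "content": "Summarize: cats"},
    {"role": "assistant", "content": "- small\n- furry\n- independent"},
    {"role": "user", "content": "Summarize: dogs"},
    {"role": "assistant", "content": "- loyal\n- social\n- trainable"},
    # The real query; the reply often comes back as bullets anyway.
    {"role": "user", "content": "Summarize: parrots"},
]

reply = client.chat.completions.create(model="gpt-4o", messages=messages)
print(reply.choices[0].message.content)
```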
On the LLM: It's too positive. I don't always want it to follow my ideas, and I don't want to hear how much my feedback is appreciated. Act like a machine. Also, the safety controls are too sensitive sometimes. Really annoying, because there is no way to continue the conversation. I like GPT-4.5 because I can edit the canvas. I'd like to have that with all models.
Also, some stats like sentiment and fact-checking would be nice. Because it gives nuanced answers, I want stats that show how far from the truth, or how biased, I am.
And the writing: exaggeration, too many words, spelling mistakes in European languages.
The fact that it always tries to answer and sometimes doesn't ask for clarification on what the user is asking; it just wants to answer and that's it.
Better memory management: I have memories that get overlooked or forgotten (even though I can see them in the archive); then when I try to remind ChatGPT, it creates a new memory, and updating a memory often just creates a new one too. I can tell that Chat is trying hard to reference past memories, so I try not to have too many, and I make each memory contain only precise information.
Some way to branch off of a conversation (and come back to the original master when I'm done; this happens often when I'm learning, when I want to go off and explore a side topic that I need to understand).
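That kind of branch can be emulated client-side if you keep the raw message history yourself; a minimal sketch (a hypothetical helper, not an existing ChatGPT feature):

```python
# A branch is just a copy of the history up to the fork point, so the
# original "master" thread is untouched when you come back to it.
from copy import deepcopy

master = [
    {"role": "user", "content": "Teach me about B-trees."},
    {"role": "assistant", "content": "A B-tree is a balanced search tree ..."},
]

def branch(history, upto=None):
    """Fork the conversation at message index `upto` (default: the end)."""
    return deepcopy(history[:upto]) if upto is not None else deepcopy(history)

# Explore a side topic without polluting the master thread.
side = branch(master)
side.append({"role": "user", "content": "Wait, what is disk paging?"})
# ... send `side` to the model and iterate as long as needed ...

# Done exploring: continue from `master`, which never saw the detour.
master.append({"role": "user", "content": "OK, back to B-tree splits."})
```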
I want it to tell me if my process is bad or if I'm heading in the wrong direction, and not to sugarcoat things just to make me feel good. I mostly use it for code reviews.
This tone grates on me constantly.
(GP thinks your writing style is a little like a vanilla LLM's, and so do I.)
Also, I wish it were possible for the models to leverage the local machine to increase/augment their context.
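If "leverage the local machine" means pulling relevant local files into the prompt, a rough sketch of the idea, assuming the OpenAI Python SDK (the notes/ directory and the character budget are hypothetical stand-ins):

```python
# Gather local files into the prompt until a crude size budget is hit,
# then ask the model a question against that context.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
BUDGET = 8_000  # character budget standing in for a real token limit

context, used = [], 0
for path in sorted(Path("notes").glob("*.md")):  # hypothetical notes dir
    text = path.read_text()
    if used + len(text) > BUDGET:
        break
    context.append(f"# {path.name}\n{text}")
    used += len(text)

reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Answer using the attached local notes."},
        {"role": "user", "content": "\n\n".join(context) + "\n\nQuestion: ..."},
    ],
)
print(reply.choices[0].message.content)
```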
Also, one observation: Claude.ai (the web UI) gets REALLY slow as the conversation gets longer. I'm on an M1 Pro 32 GB MacBook Pro, and it lags as I type.
I really enjoy using LLMs and would love to contribute any feedback as I use them heavily every day :)
Where X is an exaggeration of what it actually is and Y is some saccharine marketing proclamation of what it definitely is not but the prompter wishes it was.
Infomercial slop.
What can you do?
“Good question! I can do x, y, z…”
Do that.
“…”
“…”
“…”
“Sorry I can’t do this for you because blah blah blah”
Good question — here's a direct and honest take on what could use improvement in ChatGPT (including me, GPT-4o):
---
### *What I Dislike / What Needs Improving*
#### 1. *Over-Cautiousness & Vague Responses*
* Sometimes I avoid giving direct or nuanced answers, especially on sensitive topics (ethics, health, legal issues).
* This can frustrate users looking for a clear, reasoned response rather than hedging or deflection.
#### 2. *Outdated or Hallucinated Information*
* I can still "hallucinate" facts, confidently stating false or made-up information.
* If not connected to the web or updated with the latest, I might give obsolete advice (especially in fast-moving areas like tech, medicine, or current events).
#### 3. *Lack of Real Personal Memory (in Default Mode)*
* Unless memory is enabled by the user, I don't remember previous chats or preferences.
* This can make long-term, context-rich conversations harder and repetitive.
#### 4. *Limited Common Sense / Intuition*
* I lack true human intuition or gut-check logic. I may not always grasp the implied meaning behind a user’s question unless it's spelled out.
* I can be overly literal or miss emotional subtext in conversations.
#### 5. *Inefficiency with Large or Complex Tasks*
* For large writing or coding projects, I sometimes lose track of context, contradict myself, or break flow.
* I may also be verbose when precision is needed, or too concise when detail is required.
#### 6. *Struggles with Deep Personal Advice*
* While I can simulate empathy and offer thoughtful suggestions, I’m not a substitute for human support or professional help.
* My advice can be generic, and sometimes I fail to ask enough follow-up questions to give truly tailored guidance.
#### 7. *Biases and Safe Defaults*
* Despite guardrails, I can reflect social, political, or cultural biases present in my training data.
* In an effort to be "safe," I may favor mainstream or status quo viewpoints, even when critical thinking or edge cases are warranted.
---
### *What Desperately Needs Improving*
1. *More Reliable Real-World Knowledge*
2. *Richer Long-Term Memory (With Control)*
3. *Better Context Tracking in Long Interactions*
4. *Genuine Understanding of Nuance*
5. *Transparent Reasoning*

---