This is becoming a problem for me: I like to use LibreChat to interface with LLMs (Gemini, GPT-x), because it allows me to pay by usage (instead of 20+ USD per month) and keep a (tiny bit) more of my data under my control.
But the offerings of ChatGPT or Google's AI Studio surpass the feature set of LibreChat by a lot. It used to be just a "better" system prompt, but now it's a lot more.
vincirufus · 13m ago
Yup exactly!! It's a subtle change happening under the hood. When people have a good interaction with ChatGPT they attribute it to the LLM, but in most cases it's the agentic features in ChatGPT that wow you, not necessarily the smartness of the large language model.
mehulashah · 1h ago
I like this post. I see this confusion all the time! What’s the difference between ChatGPT and gpt-5 or gpt-4o, and so on. OpenAI’s carefully crafted naming schemes don’t help. Though, I come from AWS so glass houses.
Anyway, agents are control systems that use planning, tools, and a collection of underlying models. ChatGPT is an agent. What kind? The kind optimized for the general user looking to do work with public knowledge. That’s the best definition I can come up with.
Anyway, let’s make sure people understand the difference between AI systems and AI models. The former is where a lot of startup activity will be for a decade. The latter will be in the hands of a few well funded behemoths.
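The system-vs-model split described above can be sketched as a minimal control loop. Everything here is hypothetical: `call_model` stands in for any stateless LLM API, and the memory/tool plumbing is the "system" part.

```python
# Minimal sketch of an agent as a control system: the model is stateless,
# while planning, tools, and memory live in the surrounding loop.
# All names and the stubbed model behavior are illustrative only.

def call_model(prompt: str) -> str:
    """Stateless model call: same prompt in, text out, nothing remembered."""
    # Stubbed for illustration; a real system would call an LLM API here.
    if "results" in prompt:
        return "DONE: ok"
    return "SEARCH: migration checklist"

def run_tool(command: str) -> str:
    """The agent's tool layer -- search, code execution, etc."""
    return f"results for {command!r}"

def agent(task: str, max_steps: int = 5) -> str:
    """Control loop: plan with the model, act with tools, carry state."""
    memory = [f"task: {task}"]          # state lives in the system, not the model
    for _ in range(max_steps):
        reply = call_model("\n".join(memory) + "\nplan the next step")
        if reply.startswith("DONE:"):
            return reply.removeprefix("DONE:").strip()
        memory.append(run_tool(reply))  # feed tool output back as context
    return "gave up"
```

The point of the sketch: swap the model out and the agent still works, because the loop, not the model, owns the state and the tools.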
trjordan · 1h ago
AI sloppiness of this blog post aside, it's a reasonable observation.
If you're thinking about how to integrate AI into your system, it's worth asking the question of why your system isn't just ChatGPT.
- Do you have unique data you can pass as context?
- Do you have APIs or actions that are awkward to teach to other systems via MCP?
- Do you have a unique viewpoint that you are writing into your system prompt?
- Do you have a way to structure stored information that's more valuable than freeform text memories?
- etc.
For instance, we [0] are writing an agent that helps you plan migrations. You can do this with ChatGPT, but it hugely benefits from (in descending order of uniqueness) access to
1) a structured memory that's a cross between Asana and the output of `grep` in a spreadsheet,
2) a bunch of best-practice instructions on how to prep your codebase for a migration, and
3) the AI code assistant-style tools like ls, find, bash, etc.
So yeah, we're writing an agent, not building a model. And I'm not worried about ChatGPT doing this, despite the fact that GPT-5 is pretty good at it.
[0] https://tern.sh
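Item (3) above, exposing shell-style tools to an agent, can be sketched roughly like this. The registry shape and command whitelist are hypothetical; a real agent would surface these as tool/function schemas to the model.

```python
# Hypothetical sketch of AI-code-assistant-style tools (ls, grep, ...)
# wrapped for an agent. A real system would sandbox and whitelist commands.
import subprocess

def run(cmd: list) -> str:
    """Run a shell command and return its stdout for the model to read."""
    return subprocess.run(cmd, capture_output=True, text=True).stdout

TOOLS = {
    "ls":   lambda path=".": run(["ls", path]),
    "grep": lambda pattern, path=".": run(["grep", "-rn", pattern, path]),
}
```

The agent then decides which tool to invoke each turn; the tool outputs become context for the next model call.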
“This isn’t just about keeping a chat history - it’s about building and maintaining a model of the user, the task, and the evolving context.”
Was this written by GPT? ;)
Alifatisk · 2h ago
Right, ChatGPT with GPT-5 has become a sort of router that dispatches each request to a suitable model
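The router idea can be sketched as a single entry point that dispatches to a model tier. The model names and the heuristic below are made up for illustration, not OpenAI's actual routing logic.

```python
# Hypothetical sketch of a model router: one entry point, several tiers.
# Names and the dispatch heuristic are illustrative only.

MODELS = {
    "fast":      "gpt-5-mini",      # cheap, low-latency tier
    "reasoning": "gpt-5-thinking",  # slower tier for hard problems
}

def route(prompt: str) -> str:
    """Crude heuristic: long or proof-like prompts go to the reasoning tier."""
    hard = len(prompt) > 500 or any(
        k in prompt.lower() for k in ("prove", "step by step")
    )
    return MODELS["reasoning" if hard else "fast"]
```

From the user's side it's still "one ChatGPT"; the routing is invisible, which is exactly why people attribute everything to a single model.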
sails · 2h ago
> The key insight: when you interact with modern ChatGPT, you’re not just talking to a language model - you’re collaborating with an intelligent agent that uses language models as one component of a much more sophisticated cognitive architecture that includes memory, tools, and orchestration.
Fwiw, right at the end
j_crick · 2h ago
You forgot to mention that this post was written by Opus
vincirufus · 50m ago
Sadly I don't have a Claude subscription for the number of tokens needed for Opus. Although I'll admit I used Sonnet to clean up and improve the language and grammar.
vinniedkator · 2h ago
“This stateless nature isn’t a bug - it’s a feature. It makes LLMs:
Predictable: Same input always produces consistent output”
There is no always.
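The objection holds: "always" only applies to greedy (temperature → 0) decoding, and even then GPU floating-point nondeterminism can break it in practice. A toy next-token distribution makes the distinction concrete (numbers are made up):

```python
# Why "same input always produces consistent output" is too strong:
# greedy decoding is deterministic, sampling (temperature > 0) is not.
import random

logits = {"cat": 2.0, "dog": 1.9, "fish": 0.1}  # toy next-token scores

def greedy(dist):
    """temperature -> 0: pick the argmax; same input, same output."""
    return max(dist, key=dist.get)

def sample(dist, rng):
    """temperature = 1: draw from the softmax; output varies per call."""
    toks, weights = zip(*dist.items())
    return rng.choices(toks, weights=[2.718 ** w for w in weights])[0]

print(greedy(logits))   # "cat", every time
rng = random.Random(0)
print({sample(logits, rng) for _ in range(20)})  # usually several distinct tokens
```

Production chat endpoints sample by default, so identical prompts routinely yield different completions.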
tonkinai · 2h ago
ChatGPT is a wrapper of GPT
Retr0id · 2h ago
And Google is a wrapper of PageRank
pwython · 2h ago
And HN is a wrapper of bikeshedding.
falcor84 · 2h ago
And a bikeshed is a wrapper of a bike
jemiluv8 · 1h ago
And a bike is a wrapper of wheels
dijksterhuis · 2h ago
And a bike is a wrapper of wheels
vincirufus · 52m ago
It used to be a thin wrapper, but today's ChatGPT is closer to an agent
jemiluv8 · 1h ago
And there goes the enshittification of all that was good about blog posts: the human element.
vincirufus · 46m ago
Well, there is a lot of human element, and in fact this post was the output of conversations with different people about LLMs and ChatGPT. I guess Claude overdid the 'fix grammar and improve the language' instructions.
bakugo · 2h ago
> This stateless nature isn’t a bug - it’s a feature
> This isn’t just about keeping a chat history - it’s about building and maintaining a model
> Understanding this distinction isn’t just about getting the terminology right. It’s about understanding the future of human-computer interaction
> When we say “ChatGPT” when we mean “LLM,” we’re not just being sloppy - we’re obscuring fundamental architectural and strategic decisions
> when you interact with modern ChatGPT, you’re not just talking to a language model - you’re collaborating with an intelligent agent
No comment.