16 dontlike2chat 0 8/10/2025, 8:39:54 AM


Molitor5901 · 39m ago
I am becoming increasingly frustrated with ChatGPT, even though I should be a prototypical user. I analyze and help develop U.S. public policy. Cgpt is great for pulling in vast amounts of data, especially surface-level data, very quickly; I can then churn through what it has found, ask questions, etc. in seconds.

However, it is when you give something to Cgpt that it seems to fail. The first problem is that it has a terrible time with cross-memory contamination. New Chat should mean a new session; instead, at times it acts like a continuation of all previous sessions. Ergo, if I give it a paper to summarize, it will pull in data from previous sessions and insert it as if it belonged. It will also pull in data (words) from the internet and cite them erroneously.

Some of that can be rectified with the most explicit and logic-driven questions, but that sort of defeats the purpose of treating it "like a research assistant," as it is often described.
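
For what it's worth, going through the API instead of the web UI sidesteps the contamination issue entirely, because each request is stateless and only sees what you send in it. A minimal sketch, assuming the OpenAI Python SDK, an OPENAI_API_KEY in the environment, and placeholder file and model names:

```python
# Minimal sketch: a stateless request that can only see the text passed in.
# Assumptions: OpenAI Python SDK (pip install openai), OPENAI_API_KEY set,
# and "paper.txt" / "gpt-4o" are placeholders, not tested values.
from openai import OpenAI

client = OpenAI()

with open("paper.txt", "r", encoding="utf-8") as f:
    paper = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Summarize only the text provided. Do not add outside information."},
        {"role": "user", "content": paper},
    ],
)
print(response.choices[0].message.content)
```

No prior chats or "memory" can leak in, because the request itself contains everything the model sees.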

Second, it cannot process and return large documents. For example: a sixty-page plain-text file of a conference speaker transcript. Asking it to summarize the conference does not work, and if you ask it why, it will tell you it can't hold large documents in memory, or it will go into a loop, or it will return nothing. Another limitation it reports is that it can only return so much to the web interface; that one I can understand.

Third is how files are handled. Am I the only one who tries to download a file and gets "failed to get status for /mnt/data..."? Even with small files, it either fails, returns a partially correct (or incorrect) document, hands back the same document you gave it initially, or produces a mess...

Those have been the biggest problems. I recognize that as a user I need to adapt in a few ways: be very explicit and logic-driven in my requests; chunk documents into the smallest reasonable number of parts (roughly the approach sketched below); and understand that its ability to recall can be limited, even when given explicit instructions, so each query should restate a healthy amount of the previous results.
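
For the long-transcript case, the chunking workaround looks roughly like this. A minimal sketch, again assuming the OpenAI Python SDK and an API key; the file name, model name, and chunk size are placeholders, not tested values:

```python
# Chunked "summarize the summaries" sketch for a long transcript.
# Assumptions: OpenAI Python SDK, OPENAI_API_KEY set, and "transcript.txt",
# "gpt-4o", and CHUNK_CHARS are placeholders to tune for the real model.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"
CHUNK_CHARS = 15000  # rough character budget per chunk

with open("transcript.txt", "r", encoding="utf-8") as f:
    text = f.read()

chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]

# First pass: summarize each chunk on its own.
partials = []
for n, chunk in enumerate(chunks, 1):
    r = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Summarize only the text provided."},
            {"role": "user", "content": f"Part {n} of {len(chunks)}:\n\n{chunk}"},
        ],
    )
    partials.append(r.choices[0].message.content)

# Second pass: combine the partial summaries into one.
final = client.chat.completions.create(
    model=MODEL,
    messages=[
        {"role": "system",
         "content": "Combine these partial summaries into one coherent summary."},
        {"role": "user", "content": "\n\n".join(partials)},
    ],
)
print(final.choices[0].message.content)
```

Fixed-size character chunks will split sentences; splitting on speaker turns or paragraphs would be better, but the shape of the workaround is the same.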

But we're paying for this, which has us testing multiple LLMs alongside Cgpt before we seriously dive into one.

Rzor · 2h ago
Good for you, but they will probably fix that in the coming days. One of the earlier versions also had a somewhat "botched" launch in the same way, and they promptly addressed it.
DomB · 1h ago
4o was good but slow; I prefer Grok and Gemini.