Cache loop and memory loss in GPT – a user-side fix (tested with GPT itself)

15 points · sks38317 · 6 comments · 4/17/2025, 3:29:21 PM · github.com ↗
I’m a Korean high school student currently preparing for the CSAT (college entrance exam), and I happened to notice some persistent cache-loop behavior while using GPT in document-heavy tasks.

Repeated PDF failures seemed to create token overload and session slowdowns. So I tried manually analyzing the session, tracking token counts, and testing some user-side “optimizations”—like auto-removing failed outputs and cleaning redundant versions.
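
For anyone curious, this is roughly the kind of cleanup I did by hand, written out in Python as if it were automated against a locally kept chat history. The message format and the "failed" flag are my own assumptions for illustration, not anything ChatGPT actually exposes:

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # tokenizer family used by recent GPT models

    def count_tokens(messages):
        """Approximate total tokens across a list of {'role', 'content'} messages."""
        return sum(len(enc.encode(m["content"])) for m in messages)

    def prune_history(messages, token_budget=8000):
        """Drop outputs marked as failed, then trim oldest messages to fit the budget."""
        cleaned = [m for m in messages if not m.get("failed")]
        while count_tokens(cleaned) > token_budget and len(cleaned) > 1:
            cleaned.pop(0)  # remove the oldest message first
        return cleaned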

I used GPT itself to help write the report and interpret the data. It was a mix of curiosity, frustration, and… maybe procrastination. But it turned into a fun experiment.

I’ve only been exploring GitHub and ChatGPT for less than a month, so there are still many things I’m unfamiliar with.

So if there’s anything I’ve overlooked or could improve, I’d really appreciate your feedback.

Comments (6)

teruakohatu · 10d ago
This is a good first effort. You asked for feedback:

ChatGPT is a nice interface/UI to the GPT models, but the models themselves are available via an API, and industry users will be using them via the API.
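
For example, the smallest possible call in Python with the official openai client looks roughly like this (the model name is just an example, and it assumes an OPENAI_API_KEY is set in your environment):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; use whichever you have access to
        messages=[{"role": "user", "content": "Summarise this PDF section: ..."}],
    )
    print(response.choices[0].message.content)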

Your report is very light on details. Your methodology is so short I do not know what you actually did.

This is probably sufficient for high school, but an ML/AI report or paper would have a short introduction, maybe a review of existing research, a detailed methodology, and a conclusion. It should also include references to external research. Ideally it would have an appendix that includes the data, or references and links to it (in the release?).

When doing research, create a notebook documenting everything you did and the results. The notebook could just be a Google Doc.

You mentioned that you used ChatGPT to write the report. It is better to learn to write it yourself with assistance (grammar etc.) from ChatGPT.

I think the reply you got from OpenAI support was also generated by ChatGPT.

I want to commend you on taking the initiative to run experiments that allowed you to have some insight into what ChatGPT was doing on the backend. I have graded university students with far less initiative than you.

Keep experimenting and have fun… but if you have school exams coming up, focus on studying for them rather than optimising your study using ChatGPT.

sks38317 · 10d ago
Thank you for taking the time to share such thoughtful feedback. The more I reflect on it, the more I realize you’re absolutely right.

I did rely heavily on GPT throughout the process, and it’s clear to me now that this is something I need to take more ownership of. Working on that will naturally help me improve how I source and cite materials as well.

I’m also taking your final piece of advice to heart—exams are coming up, and I know I need to shift my focus there for now. Thanks again for your honesty and encouragement. I really appreciate it.

SoMomentary · 10d ago
Pro tip: Those em dashes are always a red flag for LLM usage. Normal humans use - because it's actually on the keyboard (at least in English).
sks38317 · 10d ago
Just to clarify—I’ve mostly relied on GPT for translations because Google Translate often produces awkward or incorrect phrasing, especially in nuanced or technical contexts. That’s probably why things like em dashes or certain sentence structures came through. Not intentional, just a side effect of using GPT for accuracy.
semanticc · 9d ago
I know this is somewhat true, but it's just a pity. Proper grammar is something I try to cherish, and I've specifically added – and — to my custom keyboard layout for convenient access.
KMnO4 · 10d ago
On Apple devices (iOS and Mac), typing two hyphens converts to a dash: like—this