To put it simply, that's just not how LLMs work. They don't improve or learn anything from your prompts or feedback; the weights are frozen once training is done. At inference time they just predict the next word after your prompt, then the next word after that, and so on.
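To make that concrete, here's a toy sketch of what "predict the next symbol, append it, repeat" means. This is just a character-level bigram counter, not a real LLM or any particular library's API, but the generation loop has the same shape: the counts ("weights") are fixed before generation starts and never change while it runs.

```python
import random
from collections import Counter, defaultdict

corpus = "the paper was reviewed. the paper was rejected. the paper was revised."

# "Training": count which character tends to follow which.
follow = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follow[a][b] += 1

def generate(prompt, n=60):
    text = prompt
    for _ in range(n):
        options = follow[text[-1]]
        if not options:
            break
        # Predict the next character from the fixed counts, append it, repeat.
        # The counts never change during generation: no learning happens here.
        text += random.choices(list(options), weights=options.values())[0]
    return text

print(generate("the "))
```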
If you'd like a prediction about whether a paper will be accepted, you could ask an LLM, but even that would be a mistake: its training data consists of older, already-published papers, and that gives it no particular ability to evaluate new work.