This is not my area of expertise, but I'm going to try to ask an intelligent question.
For me, when I use an LLM for coding, the value and accuracy come from the conversation: I give context, I evaluate the response, I give more context and prompts, sometimes I correct it and get it back on task, and ... we get there (well, not always, but generally).
The idea being that our interactions are what get us toward the answer I was looking for / the "right" answer.
I imagine that without me in the loop, and with the LLM given lots more time, it's going to take those original words of mine, do its "word math" (I like to think of it that way) as much as possible, and ... maybe go down a rabbit hole that I wasn't headed toward, possibly way down.
Is that rabbit-hole kind of scenario what they're talking about?
Is it also possible that, because of their training data, the back-and-forth of Q & A (question, answer, more context, another question) is just a better path than the kind of deep, uninterrupted thinking that produced some of that content in the first place?
I also wonder if it's maybe just because I'm a simpleton, and for me the "right" answer really was the simplest one all along.
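
To be concrete about the contrast I have in mind, here's a rough Python sketch (llm() is a purely hypothetical stand-in for whatever model API, nothing real) of the two modes: one long autonomous run versus the feedback loop I actually use.

    # Illustrative only; llm() is a hypothetical stand-in for a chat-model
    # API call, not any real library.
    def llm(messages):
        """Takes a chat history, returns the model's reply."""
        raise NotImplementedError  # placeholder for a real API call

    def one_shot(task):
        # Autonomous mode: the model gets my original words once and runs
        # with them; any early misreading compounds with no correction.
        return llm([{"role": "user", "content": task}])

    def interactive(task):
        # The loop I'm describing: read each reply, add context, steer it
        # back on task, repeat until it's right (well, usually).
        history = [{"role": "user", "content": task}]
        while True:
            reply = llm(history)
            history.append({"role": "assistant", "content": reply})
            feedback = input("More context / corrections (blank to accept): ")
            if not feedback:
                return reply
            history.append({"role": "user", "content": feedback})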