I had a similar take after my first experience using AI to help me code. I put it aside as a curiosity. But when I went back recently, it's not that it's perfect, but the improvement in that time was massive. Does that mean it will continue to improve at that pace? Not necessarily, but we haven't seen the end state yet, so anything we say is just a judgment on what we have at the moment.
rage4774 · 2d ago
But do you use it now to help you code, and if so, how? The negative effects of relying too heavily on AI while coding are widely discussed, so I'm wondering what a "good" use case would be.
palmotea · 1d ago
> The negative effects of relying too heavily on AI while coding are widely discussed, so I'm wondering what a "good" use case would be.
Really depends on your perspective. For some executives, a "good" use case may be the equivalent of burning goodwill to generate cash: push devs to use AI extensively on everything, realize a short-term productivity bump while devs' skills begin to atrophy (but haven't fully atrophied yet), then let the next guy deal with the problem of devs who have fully realized the "negative effects of relying too heavily on AI."
rage4774 · 1d ago
That’s a pretty dark perspective, and it would imply that those executives are some kind of evil geniuses who grasp the full extent of the situation. I personally chalk this kind of behavior up to the statistic presented at one of the Ig Nobel ceremonies: 80% of surveyed university professors felt they were above average (IQ-wise).
garbagecoder · 2d ago
I haven't used it directly on anything except little test projects. But my general view is that it's like being an editor as opposed to a writer. I have to have mastered the craft of writing to edit someone else's copy.
rage4774 · 2d ago
I couldn’t agree more, thanks for answering! Anecdotally, I’ve witnessed people using and talking big about ML/LLMs while being shocked to learn that fundamentally basic statistical concepts underlie them.
bitmasher9 · 2d ago
Not OP, but I specifically like to use AI to explain obtuse sections of code that would take me longer periods of time to understand by reading.
If I have a bug reported and I’m not sure where it is, pasting the bug report into an LLM and asking it to find the bug has yielded some mixed results but ultimately saved me time.
I use AI more for reading than writing code.
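As a hedged sketch of the bug-triage workflow described above (the `triage_prompt` helper and the example inputs are illustrative, not from the original comment — the resulting string would be sent to whatever chat-style LLM you use):

```python
# Sketch: pair a bug report with the suspect source file into a
# single prompt asking the model to locate the likely defect.
def triage_prompt(bug_report: str, source: str) -> str:
    """Build a prompt that asks an LLM to point at the lines
    most likely responsible for the reported bug."""
    return (
        "Here is a bug report:\n"
        f"{bug_report}\n\n"
        "And the relevant source file:\n"
        f"```\n{source}\n```\n\n"
        "Point to the line(s) most likely responsible and explain why."
    )

prompt = triage_prompt(
    "Clicking 'Save' silently discards unsaved edits.",
    "def save(doc):\n    if doc.dirty:\n        pass  # TODO: persist\n",
)
print(prompt.splitlines()[0])  # → Here is a bug report:
```

The point is less the plumbing than the framing: giving the model both the symptom and the code in one message is what tends to produce the "mixed but time-saving" results the comment describes.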
rage4774 · 2d ago
Interestingly enough, I was also wondering whether I could improve my efficiency by condensing written text. The idea would be to remove the usual padding or "slop" you find across most of the modern web.
Wouldn’t you lose a bit of that brain power if you stopped making those connections yourself while trying to understand those code sections?
bitmasher9 · 2d ago
I don’t think so for two reasons.
I still have to rely on my own wits to read the most complicated code.
I don’t spend less time reading code. I just read more code.
ofjcihen · 2d ago
I appreciate the insights here.
The author grazes something I’ve been thinking about for a while as I watch LLMs evolve along with their uses: will this tool result in more significant work being accomplished, or just more… work in general?
By that I mean it handles small, well-documented projects well but seems to flounder on larger, more meaningful work.
We already have a problem with junior/mid-tier knowledge workers not scoping their efforts effectively and just doing work for work’s sake. Will the ease of completing small but ultimately useless work result in more of this?
Not a jab at LLMs really. More our propensity to miss the forest in our rush to view a tree.
bitmasher9 · 2d ago
I wonder if the author deliberately writes in a difficult-to-comprehend style to demonstrate their humanity. They rely heavily on sharing niche cultural context, to the point where I’m sure most people will not extract all of the meaning.
Reading this feels like only getting half of the inside jokes of a friend group.
rage4774 · 2d ago
I appreciate the feedback. I wasn’t trying to prove I’m human; it was more a case of trying to get the idea out while it was fresh. Being a non-native English speaker didn’t help, I presume.