Inappropriately choosing LLMs as a solution and then forcing them onto employees, customers, and anyone they directly or indirectly affect is not a curve; it is its own circle.
LLMs probably have some kind of value, but the immediate default of "can we solve this problem with an LLM?" from non-tech normies, and the equal fetishization by techbros who present it as a false dichotomy of being pro- or anti-"AI", highlights why people in managerial and similar roles shouldn't create tools that they themselves don't use. See developers making absolutely crap user interfaces for tools they will never use themselves: because the tool isn't for their own job, they can't tell what makes a solution good or bad.
On that note, people who say the code output from LLMs is consistently high quality should ask their LLM to define the Dunning-Kruger effect for them.