ChatGPT Is a Gimmick

100 points | blueridge | 115 comments | 5/22/2025, 4:04:25 AM | hedgehogreview.com

Comments (115)

keiferski · 3h ago
These “AI is a gimmick that does nothing” articles mostly just communicate to me that most people lack imagination. I have gotten so much value out of AI (specifically ChatGPT and Midjourney) that it’s hard to imagine that a few years ago this was not even remotely possible.

The difference, it seems, is that I’ve been looking at these tools and thinking about how I can use them in creative ways to accomplish a goal, and not just treating them like a magic button that solves all problems without fine-tuning.

To give you a few examples:

- There is something called the Picture Superiority Effect, which states that humans remember images better than words alone. I have been interested in applying this to language learning: imagine a unique image for each word you’re learning in German, for example. A few years ago I was about to hire an illustrator to make these images for me, but now, with Midjourney or other image generators, I can functionally make unlimited unique images for $30 a month. This is a massive new development that wasn’t possible before.

- I have been working on a list of AI tools that would be useful for “thinking” or analyzing a piece of writing. Things like: analyze the assumptions in this piece; find related concepts with genealogical links; check if this idea is original or not; rephrase this argument as a series of Socratic dialogues. And so on. This kind of thing has been immensely helpful in evaluating my own personal essays and ideas, and prior to AI tools it, again, was not really possible unless I hired someone to critique my work.

The key for both of these example use cases is that I have absolutely no expectation of perfection. I don’t expect the AI images or text to be free of errors. The point is to use them as messy, creative tools that open up possibilities and unconsidered angles, not to do all the work for you.

lm28469 · 40m ago
> These “AI is a gimmick that does nothing” articles mostly just communicate to me that most people lack imagination.

Either that, or different people have different views on life, tech, &c. If you're not going through life as some sort of min-max RPG, not using an LLM to "optimise" every single aspect of your life is perfectly fine. I don't need an LLM to summarise an article; I want to read it during my 15-minute coffee time in the morning. I don't need an LLM to tell me how my text should be rewritten to look like the statistical average of a good text...

keiferski · 39m ago
That’s perfectly fine, but the article is making a broad statement, not an individual opinion.
lm28469 · 31m ago
For the vast majority of people, LLMs are deep in gimmick territory: the funny thing you use to generate your Ghibli-style profile image or the HR email you can't be bothered writing.

If you're not part of a very small subset of tech enthusiasts or companies directly profiting from it, it really isn't that big of a deal.

nsteel · 3h ago
> These “AI is a gimmick that does nothing” articles

I don't think that's an accurate summary of this article. Are you basing that just on the title, or do you fundamentally disagree with the author here?

> We call something a gimmick, the literary scholar Sianne Ngai points out, when it seems to be simultaneously working too hard and not hard enough. It appears both to save labor and to inflate it, like a fanciful Rube Goldberg device that allows you to sharpen a pencil merely by raising the sash on a window, which only initiates a chain of causation involving strings, pulleys, weights, levers, fire, flora, and fauna, including an opossum. The apparatus of a large language model really is remarkable. It takes in billions of pages of writing and figures out the configuration of words that will delight me just enough to feed it another prompt. There’s nothing else like it.

keiferski · 2h ago
Not sure how that definition of a gimmick applies to what I wrote. Labeling AI tools as gimmicks would imply that they both save labor and inflate it and therefore offer no real fundamental improvements or value.

In my own experience, that is absolute nonsense, and I have gotten immense amounts of value from it. Most of the critical arguments (like the link) are almost always from people that use them as basic chatbots without any sort of deeper understanding or exploration of the tools.

-__---____-ZXyw · 1h ago
Another commenter on here talked about AI's ability to "impress an idiot". I see lots of this. Your usage sounds decidedly unidiotic, and I'm not saying you are an idiot. But it sounds like your view of the criticism is based on the idea that everyone who isn't using it as cleverly as you is essentially an idiot who simply hasn't realised how to get to a "deeper understanding" in the "exploration" of these tools.

Please consider that there are some very clever people out there. I can respond to your point about languages personally - I speak three, and have lived and operated for extended periods in two others which I wouldn't call myself "fluent" in as it's been a number of years. I would not use an LLM to generate images for each word, as I have methods that I like already that work for me, and I would consider that a wasteful use of resources. I am into permacomputing, minimising resources, etc.

When I see you put the idea forward, I think, oh, neat, but surely it'd be much more effective if you did a 30s sketch for each word, and improved your drawing as you went.

In summary - do read the article, it's very good! You're responding to an imagined argument based on a headline, ignoring a nuanced and serious argument, by saying: "yeah, but I use it well, so?! It's not a gimmick then, for me!"

namaria · 1h ago
> When I see you put the idea forward, I think, oh, neat, but surely it'd be much more effective if you did a 30s sketch for each word, and improved your drawing as you went.

Or, you know, just imagine something. Which is what I have done for learning to speak 3 languages fluently other than my mother tongue.

-__---____-ZXyw · 1h ago
I actually don't really imagine anything, I like the sounds and contours of words and find them easy to learn in the context of punchy sentences. But vividly imagining sounds fun too
keiferski · 1h ago
I did read the article and wrote my comment afterward. I didn't find it to be nuanced at all, and the author's description of using LLMs was amateurish at best, which is my point.

Thirty-second sketches are also not nearly as effective as detailed images and would likely have dubious value in implementing the Picture Superiority Effect.

Nowhere did I say that people who write essays about AI being useless are idiots. That's your terminology, not mine. Merely that they lack imagination and creativity when it comes to exploring the potential of a new tool and instead just make weak criticisms.

-__---____-ZXyw · 38m ago
I feel like we're talking past each other. In the interest of understanding, this is what I feel like I'm reading from you:

1. In a couple of contexts, as a non-expert, I'm getting excellent use out of these LLM tools, because I'm imaginative and creative in my use of them.

2. I get such great use out of them, as a non-expert, in these areas, that any expert claiming they are gimmicks, is simply wrong. They just need to get more imaginative and creative, like me.

Am I misunderstanding you here? Is this really what you're saying?

The holes in the thinking seem obvious, if I may be blunt. I would suggest you ask an LLM to help you analyse it, but I think they're quite bad at that, as they are programmed to reflect your biases back at you in a positive way. That is probably their largest epistemic issue: the tendency to placate the user can only be overcome if the user has great knowledge of their own biases, a challenge even the best experts face!

kumarvvr · 3h ago
> value out of AI (specifically ChatGPT and Midjourney)

The one area I would agree that AI and ML tools have been surprisingly good, art generation.

But then I see the flood of AI-generated pictures and feel that, overall, it has made an already troublesome world even more troublesome. I am starting to see the "the picture is AI-made, or AI-modified" excuses coming into the mainstream.

A picture now has lost all meaning.

> be useful for “thinking” or analyzing a piece of writing

This, I am highly skeptical of. If you train an LLM on text saying "trains can fly", it spits that out. They may be good as summarizing or search tools, but to claim they are "thinking" and "analyzing"? Nah.

keiferski · 3h ago
The fact that most AI art is generic garbage just reflects the lack of imagination most people have when making it. Sad but true. The actual tools themselves are incredible.

And I meant myself thinking and analyzing a piece of writing with the help of ChatGPT, not ChatGPT itself “thinking.” (Although I frankly think whether the machine is thinking is somewhat of an irrelevant point.) Because I have absolutely gained tons of new insights and knowledge by asking ChatGPT to analyze an idea and suggest similar concepts.

namaria · 1h ago
> Because I have absolutely gained tons of new insights and knowledge by asking ChatGPT to analyze an idea and suggest similar concepts.

Are you going to test them by building something or using these concepts in conversation with specialists?

keiferski · 53m ago
Not sure what you mean by testing them. I specifically mean knowledge, historical facts, new books and philosophers to study, etc. I have discovered new writers that I didn’t know about because ChatGPT suggested them.

And likewise, using AI to critique a piece of writing is already “testing it,” as it definitely makes useful suggestions.

sincerecook · 17m ago
Those are all gimmicks. Normal people don't care about any of that.
professor_v · 3h ago
Your examples are both quite gimmicky and not a fundamental value shift.
danlitt · 4h ago
It is refreshing to see I am not the only person who cannot get LLMs to say anything valuable. I have tried several times, but the cycle "You're right to question this. I actually didn't do anything you asked for. Here is some more garbage!" gets really old really fast.

It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.

loveparade · 3h ago
I use LLMs to check solutions for graduate-level math and physics problems I'm working on. Can I 100% trust their final output? Of course not, but I know enough about the domain to tell whether they discovered mistakes in my solutions or not. And they do a pretty good job and have found mistakes in my reasoning many times.

I also use them for various coding tasks and they, together with agent frameworks, regularly do refactoring or small feature implementations in 1-2 minutes that would've taken me 10-20 minutes. They've probably increased my developer productivity by 2-3x overall, and by a lot more when I'm working with technology stacks that I'm not so familiar with or haven't worked with for a while. And I've been an engineer for almost 30 years.

So yea, I think you're just using them wrong.

bsaul · 3h ago
i could have written all of this myself. I use it exactly for the same purposes (except i don't do undergrad physics, just maths) and with the same outcome.

It's also pretty useful for brainstorming : talking to AI helps you refine your thoughts. It probably won't give you any innovative idea, only a survey of mainstream ones, but it's a pretty good start for thinking about a problem.

alkonaut · 2h ago
I think this is the key. If you have a problem where it's slow to produce a plausible answer but quick to check if it's correct (writing a shell script, solving an equation, making up a verse for a song), then you have a good tool. It's the prime-factorization category of problems. Recognizing when you have one, and going to an LLM when you do, is key.
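The verify-cheap/solve-hard asymmetry can be sketched in a few lines of Python (a toy illustration of my own; the numbers and trial-division approach are not from the thread):

```python
import math

def factorize(n):
    """Find the prime factors of n by trial division: slow, ~sqrt(n) steps."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def verify(n, factors):
    """Check a proposed factorization: one multiplication, essentially free."""
    return math.prod(factors) == n

# Producing the answer takes about a million divisions;
# checking a claimed answer takes a single multiply.
n = 1_000_003 * 1_000_033
assert verify(n, factorize(n))
```

This is exactly the shape of task where a cheap check makes an unreliable generator useful: even if the "answer" comes from an untrusted source, `verify` settles it instantly.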

But what if you _don't_ have that kind of problem? Yes, LLMs can be useful for the above. But for many problems you ask for a solution and what you get is a suggested solution which takes a long time to verify. Meaning: unless you are somewhat sure it will solve the problem, you don't want to do it. You need some estimate of confidence, and LLMs are useless for this. As a developer I find my problems are very rarely in the first category and more often in the second.

Yes, it's "using them wrong". It's asking them to do what they struggle with. But it's also what I struggle with. It's hard to stop yourself when you have a difficult problem and you are weighing googling it for an hour against ChatGPT-ing it for an hour. And I often regret going the ChatGPT route after several hours.

1a527dd5 · 3h ago
I think it's starting to change.

I'm an AI sceptic (and generally disregard most AI announcements). I don't think it's going to replace SWE at all.

I've been chunking the same questions both to Gemini and GPT and I'd say about until ~8 months ago they were both as bad as each other and basically useless.

However, recently Gemini has gotten noticeably better and has never hallucinated.

I don't let it write any code for me. Instead, I treat Gemini as an engineer with 10+ YoE in {{subject}}.

Working as a platform engineer, my subjects are broad, so it's very useful to have a rubber duck ready to go on almost any topic.

I don't use copilot or any other AI. So I can't compare it to those.

-__---____-ZXyw · 33m ago
YoE means "Years of Experience", for anyone interested. I had to look it up, and perhaps I can save a different me some time.
badmintonbaseba · 3h ago
I mostly use it as a replacement for a search engine and for exploration, mostly on subjects that I'm learning from scratch, where I don't yet have a good grasp of the official documentation or of good keywords. It competes with searching for guides in traditional search engines, but that's easy to beat on today's SEO-infested web.

Its quality seems to vary wildly between various subjects, but annoyingly it presents itself with uniform confidence.

-__---____-ZXyw · 19m ago
I hate the confident obsequious waffling. The cultural origins of the tool are evident.

If you aren't already, I suggest making sure to not forget, every 3-5 prompts, to throw in: "no waffling", "no flattery", "no obsequious garbage", etc. You can make it as salty as you like. If the AI says "Have fun!", or "Let's get coding!", you know you need to get the whip out haha.

Also, "3 sentences max on ...", "1 sentence explaining ...", "1 paragraph max on ...".

Another improvement for me was, you want to do procedure x in situation y, so you go "I'm in situation y, I'm considering procedure x, but I know I've missed something. Tell me what I could have missed". Or "list specific scenarios in which procedure x will lead to catastrophe".

Accepting the tool as a fundamentally dumb synthesiser and summariser is the first step to it getting a lot more useful, I think.

All that said, I use it pretty rarely. The revolution in learning we need is with John Holt and similar thinkers from that period, and is waiting to happen, and won't be provided by the next big tech thing, I fear.

mnky9800n · 3h ago
This is why I like how Perplexity forces citations. I use it more like I'm googling than caring about what the LLM writes. The LLM simply acts as a sometimes unreasonable interface to the search engine. So really, I'm more focused on whether the embeddings the LLM is trained on found some correlations between different documents, etc., that were not obvious to a different kind of search engine.
wazoox · 2h ago
Perplexity often quotes references that simply don't exist. Recent examples provided by Perplexity:

Google Cloud. (2024). "Broadcast Transformation with Google Cloud." https://cloud.google.com/solutions/media-entertainment/broad...

Microsoft Azure. (2024). "Azure for Media and Entertainment." https://azure.microsoft.com/en-us/solutions/media-entertainm...

IBC365. (2023). "The Future of Broadcast Engineering: Skills and Training." https://www.ibc.org/tech-advances/the-future-of-broadcast-en...

Broadcast Bridge. (2023). "Cloud Skills for Broadcast Engineers." https://www.thebroadcastbridge.com/content/entry/18744/cloud...

SVG Europe. (2023). "OTT and Cloud: The New Normal for Broadcast." https://www.svgeurope.org/blog/headlines/ott-and-cloud-the-n...

None of these exist, neither at the provided URLs nor elsewhere.

pishpash · 3h ago
You're over-representing the usefulness here. On topics where traditional search reaches a dead end, you will find the AI citations to be the same ones you might have found, except that upon checking, they were clearly misread or misrepresented. Dangerous and a waste of time.

It's much more helpful on popular topics where summarization itself is already high quality and sufficient.

mnky9800n · 3h ago
I dunno. I think of it like a recommendation engine on Netflix. I don’t like everything Netflix tells me to watch. Same with Perplexity: I don’t agree with everything it suggests to me. People need to stop expecting the computer to think for them and instead see it as a tool to amplify their own thinking.
terhechte · 3h ago
Can you give some examples where it didn't work for you? I'm curious because I derive a lot of value from it and my guess is that we're trying very different things with it.
wazoox · 2h ago
Not OP, but yesterday I was working on NFS server tuning on Linux, a typically quite difficult thing to find relevant info about through search engines. I asked Claude 3.5 to suggest some kernel settings or compile-time tweaks, and it provided me with entirely made up answers about kernel variables that don't exist, and makefile options that don't exist.

So maybe another LLM would have fared better, but still, so far it's mostly been wasted time. It works quite well for summarising texts and creating filler images, but overall I still find them not reliable enough outside of these two limited use cases.

Yiin · 2h ago
I mean, you answered yourself why it didn't work: if there is no useful data in its training corpus, it would be a miracle if it could correctly guess unknown information.
rndmio · 1h ago
How are you supposed to know in advance if it is going to be able to usefully answer your question or will just make up something?
exe34 · 3h ago
From my experience so far, most "AI skeptics" seem to be trying to catch the LLM in an error of reasoning or asking it to turn a vague description into a polished product in one shot. To make the latter worse, they often try to add context after the first wrong answer, which tends to make the LLM continue to be wrong - stop thinking about the pink elephant. No, I said don't think about the pink elephant! Why do you keep mentioning the pink elephant? I said I don't want a pink elephant in the text!
sausagefeet · 3h ago
I've had the same feeling for a while. I tried to articulate it last night actually, I don't know with how much success: https://pid1.dev/posts/ai-skeptic/
guappa · 3h ago
I use them to troll. Like when I want to make an annoying coworker angry, I tell ChatGPT to write an overly long and very AI-sounding reply saying what I need to say.
otabdeveloper4 · 4h ago
It works better if you treat it like a compressed database of Google queries. (Which it kind of actually is.)

Ask it something where the Google SERP is full of trash and you might have a more sane result from the LLM.

th0ma5 · 4h ago
Some opinions out there: people are using them as slot machines, using them in place of project templates, not really working on things of substance, or producing artifacts that look like work while fitness is determined elsewhere. If they are producing working code beyond autocomplete-style capabilities, they are either disinterested in long-term supportability, ignoring all the handholding they have to do to get things to work, or unaware that experienced programmers ultimately reach a point where they are not writing that much code to begin with, which isn't something these tools can help much with beyond rubber-duck-style bouncing around of ideas. Even then, at the bleeding edge of capabilities, the further you stray from well-worn development paths, the more you get bits from unrelated patterns that merely make the internal weights work.
alexdowad · 3h ago
There are absolutely times when one can get LLMs to "say something valuable". I am still learning how to put them to good use, but here are some areas where I have found clear wins:

* Super-powered thesaurus

A traditional thesaurus can only take a word and provide alternative words; with an LLM, you can take a whole phrase or sentence and say: "give me more ways to express the same idea".

I have done this occasionally when writing, and the results were great. No, I do not blindly cut-and-paste LLM output, and would never do so. But when I am struggling to phrase something just right, often the LLM will come up with a sentence which is close, and which I can tweak to get it exactly the way I want.

* Explaining a step in a mathematical proof.

When reading mathematical research papers or textbooks, I often find myself stuck at some point in a proof, not able to see how one step follows from the previous ones. Asking an LLM to explain can be a great way to get unstuck.

When doing so, you absolutely cannot take whatever the LLM says as 'gospel'. They can and will get confused and say illogical things. But if you call the LLM out on its nonsense, it can often correct itself and come up with a better explanation. Even if it doesn't get all the way to the right answer, as long as it gets close enough to give me the flash of inspiration I needed, that's enough for me.

* Super-powered programming language reference manual

I have written computer software in more than 20 programming languages, and can't remember all the standard library functions in each language, the order of their parameters, and so on.

There are definitely times when going to a manpage or reference manual is better. But there are also times when asking an LLM is better.

khazhoux · 3h ago
Others aren't kidding themselves. You haven't found the use yet.

I was up extremely late last night writing a project-status email. I could tell my paragraphs were not tight. I told Cursor: rewrite this 15% smaller. I didn't use the output verbatim, but it gave me several perfect rewrite ideas and the result was a crisp email.

I have it summarize my sloppy notes after interviewing someone, into full sentences. I double-check it for completeness and correctness, of course. But it saves me an hour of sweating the language.

I used it to get a better explanation to a polynomial problem with my child.

I use it to generate Google Spreadsheet formulas that I would never want to spend time figuring out on my own ("give me a formula that extracts the leading number from each cell, and treats blank cells as zero").
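For concreteness, here is what the spreadsheet ask boils down to, sketched in Python rather than a Sheets formula (the function name and regex are my illustration, not the actual formula the commenter received):

```python
import re

def leading_number(cell):
    """Extract the leading number from a cell's text; blank cells count as zero."""
    text = "" if cell is None else str(cell).strip()
    # Match digits (with an optional decimal part) only at the start of the text.
    m = re.match(r"(\d+(?:\.\d+)?)", text)
    return float(m.group(1)) if m else 0.0

# "12 apples" -> 12.0, "" -> 0.0, "x3" -> 0.0 (no *leading* number)
```

The fiddly parts an LLM saves you from here are exactly the edge cases: blanks, trailing text, and decimals.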

Part of the magic is finding a new use case that shaves another hour here and there.

admissionsguy · 4h ago
> It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.

Have been wondering this ever since 1 week after the initial ChatGPT release.

My cynical take is that most people don't do real work (i.e. one that is objectively evaluated against reality), so are not able to see the difference between a gimmick and the real thing. Most people are in the business of making impressions, and LLMs are pretty good at that.

It's good that we have markets that will eventually sort it out.

Hoasi · 3h ago
> It's good that we have markets that will eventually sort it out.

But then again, it's not as if markets always rewarded real work either.

bsaul · 3h ago
Ultimately markets do ask the only relevant question: are people going to pay for LLMs' output?
admissionsguy · 3h ago
It's noisy, biased, and slow, but it generally does eventually reward what works better, in most cases.
melagonster · 3h ago
There were many people whose job was just inputting text from paper into a computer. Did they do real jobs in the past?
tsurba · 4h ago
I do machine learning research and it is very useful for working out equations and checking for ”does this concept already have an established name” etc.

It is also excellent for writing one-off code experiments and plots, saving some time from having to write them from scratch.

I’m sorry but you are just using it wrong.

Incipient · 3h ago
I personally find your use cases to be the same as mine for AI, along with "fancy autocomplete" in larger files... however, it's a fairly disappointingly limited use compared to the "it's nearly AGI" vision companies are selling.

The code it generates is also... questionable, and I'm a pretty middling dev.

jrflowers · 3h ago
> It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.

It is the former.

When LLMs blew up a few years ago I was pretty excited about the novelty of the software, and that excitement was driven by what they might do rather than what they did do.

Now, years and many iterations later, the most vocal proponents of this stuff still pitch what it might do with a volume loud enough to drown out almost any discussion of what it does. What little discussion of what it does for individuals usually boils down to some variation of “it gives me answers to the questions for which I do not care about the answers”, but, —how ridiculous, wasteful, and contrary to the basic ideas of knowledge and reasoning that statement is aside— even that is usually given with a wink and a nod to suggest that maybe one day it will give answers to questions that matter.

jiggawatts · 3h ago
Something I noticed a long time ago is that going from 90% correct to 95% correct is not a 5% difference, it’s a 2x difference. As you approach 100%, the last few 0.01% error rates going away make a qualitative difference.

“Computer” used to be a job, and human error rates are on the order of 1-2% no matter what level of training or experience they had. Work had to be done in triplicate and cross-checked if it mattered.

Digital computers are down to error rates of roughly 10^-15 to 10^-22 and are hence treated as nearly infallible. We regularly write code routines where a trillion steps have to be executed flawlessly in sequence for things not to explode!

AIs can now output maybe 1K to 2K tokens in a sequence before they make a mistake. That’s 99.9% to 99.95% per-token accuracy! Better than human already.

Don’t believe me?

Write me a 500 line program with pen and paper (not pencil!) and have it work the first time!

I’ve seen Gemini Pro 2.5 do this in a useful way.

As the error rates drop, the length of usefully correct sequences will get to 10K, then 100K, and maybe… who knows?
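The arithmetic behind these claims is easy to check, assuming (a simplification) that per-token errors are independent:

```python
def p_flawless(per_token_accuracy, n_tokens):
    """Probability an n-token sequence has no errors, if errors are independent."""
    return per_token_accuracy ** n_tokens

def expected_run_length(per_token_accuracy):
    """Expected tokens before the first error (mean of a geometric distribution)."""
    return 1.0 / (1.0 - per_token_accuracy)

# Going from 90% to 95% correct halves the error rate: a 2x gain, as stated above.
assert abs(expected_run_length(0.95) / expected_run_length(0.90) - 2.0) < 1e-9

# 99.9% per-token accuracy means one mistake per ~1,000 tokens on average,
# yet only about a 37% chance that a full 1,000-token sequence is flawless.
assert 0.36 < p_flawless(0.999, 1000) < 0.37
```

The independence assumption is generous to both humans and LLMs, but it shows why small per-step accuracy gains translate into much longer usable sequences.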

There was just a press release today about Gemini Diffusion that can alter already-generated tokens to correct mistakes.

Error rates will drop.

Useful output length will go up.

hatefulmoron · 3h ago
I don't think the length you're talking about is that much of an issue. As you say, depending on how you measure it, LLMs are better at remaining accurate over a long span of text.

The issue seems to be more in the intelligence department. You can't really leave them in an agent-like loop with compiler/shell output and expect them to meaningfully progress on their tasks past some small number of steps.

Improving their initial error-free token length is solving the wrong problem. I would take less initial accuracy than a human but equally capable of iterating on their solution over time.

pishpash · 3h ago
You are having low expectations here. People used to enter machine code on switches and punched paper tape, so yes they made sure it worked the first time. Later, people had code reviews by marking up printouts of code, and software got sent out in boxes that couldn't be changed until the next year.

Programmers who "iterate" buggy shit for 10 rounds until they get it right are a post-Google push-update phenomenon.

jiggawatts · 2h ago
Been there, done that. I made mistakes and had to try again or correct the input (when that was an option).
dist-epoch · 3h ago
As a non-physicist, I found its explanations of physics questions I've asked amazing, better than watching videos on them, since you can iterate on exactly the point you don't understand.

Same for philosophy questions, "explain this piece of news through the lens of X philosopher's Y concept".

cess11 · 3h ago
I don't think you are. I've found some use for cheap and somewhat unreliable machine translation of formal documents, but it doesn't work for idiomatic or rude texts. For example, LLMs commonly try to avoid 'saying' something offensive about the house of Saud or the White House, so I need to push them around to do the thing I want. Sometimes I also 'one-shot' HTML scaffolds because I suck at Tailwind, and I only rarely save templates of single-file things; they just end up in an unstructured pile.

Some people seem to use them as a database of common programming patterns, but that's something I already have, both hundreds of scaffolds in many programming languages I've made myself and hundreds of FOSS and non-FOSS git repos I've collected out of interest or necessity. Often I also just go look at some public remote repo if I'm reading up on some topic in preparation for an implementation or experiment, mainly because when I ask an LLM the code usually has defects and incoherences and when I look at something that is already in production somewhere it's working and sits in a context I can learn from as well.

But hey, I rarely even use IDE autocomplete for browsing library methods and the like, in part because I've either read the relevant library code or picked a library with good documentation since that tells a lot more about intended use patterns and pitfalls.

visarga · 3h ago
I too have sat at pianos and banged the keys but no valuable music came out. It must be because the piano is a bad instrument. /s
johnisgood · 1h ago
Thanks, I am going to steal this one. :D
exe34 · 3h ago
> most people don't do real work (i.e. one that is objectively evaluated against reality), so are not able to see the difference between a gimmick and the real thing

No, no, you have to realise, most pianists don't make real music!

ddxv · 4h ago
I personally feel like some of the AI hype is driven by its ability to create flashy demos which become dead-end projects.

It's so easy to spin up an example "write me a sample chat app" or whatever and be amazed how quickly and fully it realizes this idea, but it does kinda beg the question, now what?

I think in the same way that image generation is akin to clipart (wildly useful, but lacking in depth and meaning) the AI code generation projects are akin to webpage templates. They can help get you started, and take you further than you could on your own, but ultimately you have to decide "now what" after you take that first (AI) step.

cess11 · 3h ago
"It's so easy to spin up an example "write me a sample chat app" or whatever and be amazed how quickly and fully it realizes this idea, but it does kinda beg the question, now what?"

Which we already had, it's just a 'git clone https://github.com/whatevs/huh' away, or doing one of millions of tutorials on whatever topic. Pretty much everyone who can build something out of Elixir/Phoenix has a chat app, an e-commerce store and a scraping platform just laying around.

th0ma5 · 4h ago
The demos I see all make compromises in order to work that prevent you from hardening them, or otherwise lock you into very specific conceptualizations that you simply wouldn't have when building from the smallest low-level building blocks, or even when starting from a super-high-level state-machine placeholder. In my experience, no matter how hard I try, it is guided by the weights of the total generated output towards something that doesn't understand the value of compartmentalization, and it will add tokens that make its probabilities work internally above all.
wiseowise · 3h ago
AI is a gimmick, smartphones are a gimmick, computers are a gimmick, automation is a gimmick, books are a gimmick; only %MY_ENLIGHTENMENT% is not.

Seriously, I understand saying something like this about crypto or whatever meme of the day, but even current LLMs are literal magic. Instead of reading 10 pages of empty water and wasting my time, ChatGPT can summarize this as

> Malesic argues that AI hype—especially in education—is a shallow gimmick: it overpromises revolutionary change but delivers banal, low-value outputs. True teaching thrives on slow, sacrificial human labor and deep discussion, which no AI shortcut can replicate.

Hardly any revolutionary thought.

AndrewDucker · 3h ago
Turns out that AI is not good at summarising things:

https://futurism.com/ai-chatbots-summarizing-research

Gud · 1h ago
Ok, I disagree.

Out of curiosity, I used ChatGPT to make a summary of “FreeBSD vs Linux comparison”, and it came out as extremely fair and to the point, in my opinion.

wiseowise · 2h ago
It was good enough with summarizing this empty rant.
johnb231 · 2h ago
As usual the paper is dead on arrival. They tested with obsolete models and non-reasoning models.

Try again with any SOTA reasoning model (GPT-o3, Gemini 2.5 Pro, Grok 3).

lm28469 · 48m ago
> Instead of reading 10 pages of empty water and wasting my time, ChatGPT can summarize this as

Definitely worth investing billions and wasting insane amounts of energy... idk how people hold both "this is a revolution!" and "it kinda summed up a 10-page pdf that I couldn't be bothered to read in the first place" without noticing the insane amount of mental gymnastics you have to go through to reconcile these two ideas.

Not even mentioning the millions of new LLM generated pages that are now polluting the web

mort96 · 3h ago
Where did they say smartphones and computers are gimmicks?
wiseowise · 2h ago
lm28469 · 34m ago
I don't think you can "fear" something you consider a gimmick. Also good fucking luck arguing smartphones don't have massive negative effects.

If LLMs were even 50% as good as they're pretending to be, we'd see huge productivity increases across the board. We simply don't, and it's been almost three years since ChatGPT was released. Where is the productivity increase? Where is the extra wealth generated?

elric · 4h ago
> “Human interaction is not as important to today’s students,” Latham claims

Goodness that's depressing. Is this going to crank individualism up to 11?

I remember hating having to do group projects in school. Most often, 3/5 of the group would contribute jack shit, while the remaining people had to pick up the slack. But even with lazy gits, the interactions were what made it valuable.

Maybe human-AI cooperation is an important skill for people to learn, but it shouldn't come at the cost of losing even more human-human cooperation and interaction.

DaSHacka · 3h ago
> Most often, 3/5 of the group would contribute jack shit, while the remaining people had to pick up the slack.

Never fear, nowadays 3/5 do squat with the 4th sending you largely-incoherent GPT sludge, before dropping off the face of the earth until 11:30PM on the night the assignment's due.

I've seen it said that college is supposed to teach you the skills to navigate working with others more so than your specific field of study. Glad to see they've still got it.

BlindEyeHalo · 3h ago
For me the usefulness of LLMs is proportional to how shitty Google has become. When searching for something you get a bunch of blog spam or other SEO-optimised shit results, pages that open dozens of popups asking you to subscribe or make an account. ChatGPT gives you the answer immediately, and I must say I find it helpful 90% of the time.

For simple coding questions it is also very good because it takes your current context into account. It is basically a smarter "copy paste from stack overflow".

At least for now LLMs do not replace any meaningful work for me, but they replace google more and more.

lexandstuff · 3h ago
One of the realisations I've had recently is that the AI hype feels like another level from what's come before because AI itself is creating the "hype" content fed to me (and my bosses and colleagues) all over social media.

The FOMO tech people are having with AI is out of control - everyone assumes that everyone else is having way more success with it than they are.

namaria · 59m ago
A product that hypes itself. What a world. That does explain a lot of the cognitive dissonance going around.
pzo · 2h ago
I used AI to summarize this whole article and give me the takeaways - it already saved me like 0.5h of reading something that in the end I would disagree with, since the article is IMHO too harsh on AI.

I've found AI extremely useful, and an easy sell at $20/m even if not used professionally for coding - and I'm the person who avoids any type of subscription like the plague.

Even in the educational setting this article mostly focuses on, it can be super useful. Not everyone has access to mentors and scholars. I saved a lot of time helping family with typical tech questions and troubleshooting by teaching them how to use it and try to solve their tech problems themselves.

alkonaut · 2h ago
I always found myself to be very good at Googling/Searching. Or asking: like emailing an expert or colleague. I'm good at condensing what I'm trying to ask and good at knowing what they could be misunderstanding, or what follow up questions they might have, to save some back- and forth. The corresponding thing on google is predicting what I might see, and adding negative search terms for them.

BUT, and this is I think why some of us feel ChatGPT is poor: asking in this way that guides a human or a search engine, makes ChatGPT produce worse answers(!).

If you say "What could be wrong with X? I'm pretty sure it's not Y or Z, which I ruled out - could it be Q, or perhaps W?", then ChatGPT and other language models quickly reinforce your beliefs instead of challenging them. They would rather give you an incorrect reason why you are right than point out an additional problem or challenge your assumptions. If LLMs could get over the bullshit problem, they would be so much better. Having a confidence estimate and being able to express it is invaluable. But somehow I doubt it's possible - if it were, they would be doing it already, as it's a killer feature. So I fear that it's somehow not achievable with LLMs? In which case the title is correct.

blixt · 3h ago
I think it's in human nature to force any topic to be all "good" or "bad". I agree with most criticisms this author has about the performance of AI -- it _is_ very bad at writing essays, and dare I say most things (including code), based on a single prompt. But to say it is a gimmick and compare it with technologies that died or are dying seems to me like a visceral response, perhaps after experiencing the overflow of AI-generated homework (a use of AI that ultimately just wastes everyone's time).

I think most people in here know at least a few ways they can use AI that is genuinely useful to them. I suppose if you're _very_ positive about AI, then it's good to have a polarized negative article to make us remember all the ways AI is being overpromised. I'm definitely very excited about finding new ways to apply AI, and that explorative phase can come off as trying to sell snake oil. We have to be realistic and acknowledge this is a technology that can produce content faster than we can consume it. Content that takes effort to distinguish useful vs. not.

All that said I disagree with the idea that the only way "to help students break out of their prisons, at least for an hour, so they can see and enhance the beauty of their own minds" is via teaching and not via technologies such as AI. The education system certainly failed me and I found a lot of joy in technology instead. For me it was the start of the internet, but I can only imagine for many today it will be the start of AI.

mort96 · 3h ago
> I think most people in here know at least a few ways they can use AI that is genuinely useful to them

The only thing that really comes to mind is making something in a domain where I have almost no prior expertise.

But then ChatGPT is so frequently wrong, and so frequently repeatedly wrong when it tries to "correct" problems when pointed out, that even then I always have to go and read relevant documentation and re-write the thing regardless. Maybe there's some slight usefulness here in giving me a starting point, but it's marginal.

blixt · 3h ago
My list of uses of AI includes:

- Turning a lot of data into a small amount of data, such as extracting facts from a text, translating and querying a PDF, cleaning up a data dump such as getting a clean Markdown table from a copy/pasted HTML source of a web page etc (IMO it often goes wrong when you go the other way and try to turn a small prompt into a lot of data)

- Creating illustrations representing ephemeral data (eg my daily weather report illustration which I enjoy looking at every day even if the data it produces is not super useful: https://github.com/blixt/sol-mate-eink)

- Using Cursor to perform coding tasks that are tedious but I know what the end result should look like (so I can spend low effort verifying it) -- it has an 80% success rate and I deem it to save time but it's not perfect

- Exploration of a topic I'm not familiar with (I've used o3 extensively while double checking facts, learning about laws, answering random questions that would be too difficult to Google, etc etc) -- o3 is good at giving sources so I can double check important things
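A footnote on the first item: getting a clean Markdown table out of pasted HTML actually has a deterministic stdlib baseline when the input is well-formed. A minimal sketch (the function and class names are my own, not anything the commenter uses; the point of reaching for an LLM is precisely the messy, malformed markup a strict parser like this would mangle):

```python
from html.parser import HTMLParser


class TableExtractor(HTMLParser):
    """Collect cell text from the rows of a <table> in an HTML fragment."""

    def __init__(self):
        super().__init__()
        self.rows = []       # list of rows, each a list of cell strings
        self._row = None     # cells of the row currently being parsed
        self._cell = None    # text chunks of the cell currently being parsed

    def handle_starttag(self, tag, attrs):
        if tag == "tr":
            self._row = []
        elif tag in ("td", "th"):
            self._cell = []

    def handle_endtag(self, tag):
        if tag in ("td", "th") and self._row is not None:
            self._row.append("".join(self._cell).strip())
            self._cell = None
        elif tag == "tr" and self._row is not None:
            self.rows.append(self._row)
            self._row = None

    def handle_data(self, data):
        if self._cell is not None:
            self._cell.append(data)


def html_table_to_markdown(html: str) -> str:
    """Render the first HTML table as a Markdown table (first row = header)."""
    parser = TableExtractor()
    parser.feed(html)
    header, *body = parser.rows
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(lines)
```

For a tidy copy/paste this does the job with zero tokens; the LLM earns its keep on tables full of nested markup, colspans, and broken tags.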

Beyond this, AI is also a form of entertainment for me, like using realtime voice chat, or video/image generation to explore random ideas and seeing what comes out. Or turning my ugly sketches into nicer drawings, and so forth.

isaacfrond · 4h ago
> The claim of inevitability is crucial to technology hype cycles, from the railroad to television to AI.

Well. You know. We still have plenty of railroads, and television has had a pretty good run too. So if those are the models to compare AI to, then I have bad news about how much of a 'hype cycle' AI is going to be.

bushbaba · 3h ago
About 60% of my job is writing. Writing slack, writing code, writing design docs, writing strategies, writing calibrations.

ChatGPT has allowed me to write 50%+ faster with 50%+ better quality. It’s been one of the largest productivity boosts in the last 10+ years.

windowshopping · 3h ago
One of?? Please tell me what other tools have been more impactful for you, I want to use them.
ohxh · 3h ago
This seems unusually shallow for the hedgehog review. I thought we'd largely moved on from this sort of sentimental, "I can't get good outputs therefore nobody can" style essay -- not to mention the water use argument! They've published far better writing on LLMs too: see "Language Machinery" from fall 23 [1]

[1] https://hedgehogreview.com/issues/markets-and-the-good/artic...

panstromek · 3h ago
If anyone is interested in AI in relation to learning, I think the best take on that I've seen so far was from Derek (Veritasium) in this recent talk: https://www.youtube.com/watch?v=0xS68sl2D70

It's a lot more balanced compared to the doomy attitude in the primary post.

fedeb95 · 4h ago
Some companies may save money by employing LLMs to do shallow things. Others may not. Also, LLMs are not all of AI. AI is a broad field with many models and applications that were already omnipresent in our lives but less marketable to the general public as revolutionary, such as spam filters. AI is NOT a gimmick per se. Some users are.

P.S.: consider that when there are huge investments in something, people will do anything to see a return, including paying other people to create hype.

babyent · 3h ago
For AI to be useful, it needs smart humans to progress its knowledge base.

An educator's job (like an actual teacher's) should be to help people (key) progress and become smarter humans.

Deal with progress.

frank20022 · 3h ago
As Woody Allen said: ChatGPT is a meaningless autocomplete, but as far as meaningless autocompletes go, its pretty damn good.
mirekrusin · 3h ago
> After I got her feedback, I finally asked ChatGPT if generative AI could be considered a gimmick in Ngai’s sense. I did not read its answer carefully. Whenever I see the words cascade down my computer screen, I get a sinking feeling. Do I really have to read this?

Does the author know he can include "be concise" in the prompt if that's what he wants?

I do agree with the author this whole thing is challenging. Frankly I wouldn't like to be a youngster nowadays – so much information, so many options, such a flood of "tips for success" that makes you feel like shit, so easy not to learn anything, so much feeling of "pointless" discipline and hard work, such a wide distance to excel at something - just summarizing the available avenues is a project on its own.

Anyway there is no turning back. What we see now as best of models will get replaced quickly with better ones and that change will only accelerate with time. I'm still positive – I think we'll find a way to be happy in this completely new reality.

I like it at work - the time from business idea to PoC has shrunk so much that it's easier than ever to win business (not sure for how long, but that's today). Agentic coding helps a lot with documentation, tests, and finding medium-obvious mistakes that sit above the linter/typechecker – that part is amazing as well. We'll continue focusing on the low-effort/high-value tasks it currently excels at and keep expanding from there.

At the same time we all know where it's going and it makes me uneasy as well.

I don't think I have anything substantial to add – just advice to try to enjoy the ride, take it easy, and keep in mind the well-being of your colleagues. There is a sweet spot for using it – don't overuse it (don't fight with it where it struggles), don't under-use it either (don't say all of it is shit and you won't touch it ever), and don't abuse it (do not drop LLM output on others to review without knowing what you're pushing).

crowcroft · 3h ago
Microprocessors are a gimmick, they're just toys compared to mainframes.
potato-peeler · 3h ago
Slightly meta, this entire article is filled with quotes from other speakers to highlight a point the author is trying to make, in many cases too hard.

It’s as if the author himself didn’t have his own thoughts, and borrowed some sentences others made to write this piece.

I don’t know what kind of writing style this is.

If you are writing an opinion, why not devote some effort to articulating your own thoughts, or at the very least provide reasons why the people the author relies on to make his point are correct?

johnisgood · 49m ago
He may have used an LLM. :D
wilg · 4h ago
> But look at what people actually use this wonder for: brain-dead books and videos, scam-filled ads, polished but boring homework essays. Another presenter at the workshop I attended said he used AI to help him decide what to give his kids for breakfast that morning.

The last example is actually the most interesting! The essays are whatever - dumb or lazy kids are gonna cheat on their homework, and schools have long needed better ways of teaching kids than regurgitative essays, but in the meantime just use an in-class essay or exam. But people aren't really making the brain-dead books and videos as anything other than a curiosity, despite the fears of various humanities professors.

The interesting part of AI, and I suspect the primary actual use case, is everything else.

snickerer · 2h ago
I love to use ChatGPT for creative cooking.

In my camping car, somewhere in the desert, I sometimes have limited resources. Like a can of beans, some fresh potatoes, an apple, Italian spices, and so on.

I like to ask ChatGPT: Listen, I have this stuff, I want to create some food with strong umami taste, do you have an idea?

It is very good at that, the results were often amazing.

This is its core feature: 'feel' loose connections between concepts. Italian pasta with maple syrup? Yes, but only if you add some Arabic spices...

"AI" is, due to the nature of artificial neural networks, not intelligent. It does not learn intelligence; it learns feelings. Not emotions, but feelings in the sense of unconscious learning ('I get a feeling for how to ride the bicycle off-road').

panstromek · 3h ago
There's also a selection bias, people use AI for a ton of stuff but you don't notice because it's not slop. The most sloppy examples are the most visible ones.
SuperHeavy256 · 3h ago
In the dictionary if you look up the term 'pessimist' it shows you a picture of the author of this article.
SebFender · 1h ago
For the time being, these are just another type of search engine - just with "better" answers.
notepad0x90 · 4h ago
Is it that ChatGPT is a gimmick or is it that people are using it as such?

A lot of the author's arguments could have been said about the internet in the 90's. This is a baby 4 year old leap in technology, why are people expecting it to be mature?

It is human nature to try and find silver bullets, to take solutions and find problems. The way I would look at the LLM-centered future is to consider LLM agents assistants and suggestion makers, personal consultants even. You don't ask an agent to write an essay for you, you write an essay, and as you write consider its suggestions and corrections. The models should be familiar with your writing style and preferences. Don't blame ChatGPT for human laziness.

There was this fad about everything being smart* (smart home, smart toothbrush, smart sex toy, etc.). That wasn't smart, it was just connected to a network. This is "smart", and in the future the technology might get past "smart" and become "intelligent" (we're not there yet, outside of sci-fi at least).

At the end of the day, everyone needs to step back and consider this: it's just a tool, period. It's not "AI", not really. There is no intelligence.

The problem is, the world is full of enshittification capitalists and their doomsday bandwagons.

bigstrat2003 · 4h ago
> This is a baby 4 year old leap in technology, why are people expecting it to be mature?

Because its fans act as though it is, and this article is a response to that overly-enthusiastic outlook on what the tool can do.

khazhoux · 3h ago
I don't think people are "acting" like it's mature. It's extremely useful to many (myself included). And it's nonsensical for people to claim it's not useful simply because they haven't found the use.
johnisgood · 45m ago
This pretty much sums it up. I have found it to be incredibly useful. Others might have not, but then again, you have to approach it the right way. Feed it context, be as specific as possible, and do not expect 100% accuracy. I once fed a conversation to ChatGPT and it missed something that I had to point out. The fact that it missed something does not make me go "AI sucks"; it is simply not perfect, and it can still be very useful. I had better experiences with Claude than ChatGPT anyways.
kumarvvr · 3h ago
> Don't blame ChatGPT for human laziness

I thought the very nature of technology and progress is to allow humans to be lazy.

We build technology to reduce our own burdens.

And most of the AI marketing is revolving around giving you the luxury to think less and do more for a price.

> The way I would look at the LLM-centered future is to consider LLM agents assistants and suggestion makers, personal consultants even

I find this highly dubious. All the names (agents, assistants, suggestion makers) are synonyms. They are just pieces of text that come off a screen, for the inputs given to them. I am highly skeptical of intelligence emanating from them, mainly because real innovation and insight seem to come from a brain's ability to devolve something into its abstract self, mush it around with other abstract ideas, and find a link at the abstract level that is then applied to the problem at hand. (Andrew Wiles's solution to Fermat's Last Theorem comes to my mind.)

Even problem solving ability or the ability to plan or the ability to anticipate, is not part of the regular content that you find on the internet.

For example, I may read about something a farmer does in Arkansas, and then relate it to something completely different, in a different domain.

Nowhere in the content on internet would I find those two things together.

Most of the agentic systems, the MCP stuff, seems to be a pseudo-deterministic system that is harder to debug.

notepad0x90 · 2h ago
The laziness you're talking about and intellectual laziness are different things: wanting to do less work versus wanting to think less.

> And most of the AI marketing is revolving around giving you the luxury to think less and do more for a price.

So yeah, intellectual laziness.

max_ · 4h ago
The millennial tech bros have mostly made their money off gimmicks like Instagram, TikTok et al.

I was very disgusted when I saw VC firms with billions in AUM put money into things like FartCoin, Digital Twins

The Boomer VCs financed stuff that is genuinely useful: MRI scanners, Google, Apple Computers, Genentech (brought insulin to the masses).

The millennial VCs fund stuff that is at best convenient to have (Airbnb, Uber) but usually gimmicks: Instagram, TikTok.

Sam Altman is the master of gimmicks.

He took the GPT model that already existed and wrapped it in a chat format, similar to ELIZA [0].

He took neural style transfer, which had existed for a long time, and paired it with Studio Ghibli fanatics. [1]

[0]: https://en.m.wikipedia.org/wiki/ELIZA_effect

[1]: https://en.m.wikipedia.org/wiki/Neural_style_transfer

liamwire · 3h ago
I read the entire essay. It comes across as wholly uninspired. Some thoughts:

> But do the apologists even believe it themselves? Latham, the professor of strategy, gives away the game at the end of his reverie. “None of this can happen, though,” he writes, “if professors and administrators continue to have their heads in the sand.” So it’s not inevitable after all? Whoops.

This self-assured ‘gotcha’ attitude is pungent throughout the whole piece, but this may be as good an example as any. It's riddled with cherry-picked quotes from singular actors, as if they were representative of every educator and every decision maker, and it's such a bad look from someone who clearly knows better. I don't expect the author to take the most charitable position, but one of intellectual honesty would be nice. To pretend there aren't, or perhaps to ignore, those out there applying technological advancement, including current AI, to education in thoughtful, meaningful, and beneficial (even if hard to quantify) ways, is obtuse. To decide there isn't even the possibility of those things being true, given their exclusion, is to do the same head-burying he ridicules others for.

> After I got her feedback, I finally asked ChatGPT if generative AI could be considered a gimmick in Ngai’s sense. I did not read its answer carefully. Whenever I see the words cascade down my computer screen, I get a sinking feeling. Do I really have to read this? I know I am unlikely to find anything truly interesting or surprising, and the ease with which the words appear really does cheapen them.

It may have well been the author’s point, but the disdain for the technology that drips from sentences like these, which are rife throughout, taints any appreciation for the argument they’re trying to make — and I’m really trying to take it in good faith. Knowing they come in with such strongly held preconceived notions makes me reflexively question their own introspection before putting pen to paper.

Ultimately, are you writing to convince me, or yourself, of your point?

jrflowers · 3h ago
> Knowing they come in with such strongly held preconceived notions makes me reflexively question their own introspection before putting pen to paper.

>Ultimately, are you writing to convince me, or yourself, of your point?

I like that you point out here that the author clearly has a strong opinion, and then immediately say that the act of expressing that opinion may suggest that they do not hold that opinion at all.

By this logic, are you trying to convince us that you don’t love the way this article is written, or are you trying to convince yourself of that?

liamwire · 3h ago
Is that what I did, though? I disagree.

Rather, what I hoped to articulate was the sense that being able to viscerally feel that an author holds a very obvious position from the outset of an article, and then not seeing them make even the faintest attempt to proactively argue their point against the most obvious - the easiest - criticisms, comes across as lazy.

I expect arguing in good faith, and this wasn’t that.

jrflowers · 3h ago
Good faith argument has at no point in history required supporting an opposite proposition. “Taking a position and arguing it” is literally what an argument is. That is what the endeavor entails.

Anything else is just aesthetics and personal preference

diogolsq · 2h ago
Agree, that is not required. It is an essay, after all.

That said, I disagree with the idea that it’s merely about aesthetics.(Hegel’s dialectic, for example, isn’t just a stylistic choice — its structure actively shapes meaning and allows for a better synthesis.)

I don't think the author wants to engage and have meaningful conversations, his position is clear.

A meaningful conversation, at least as I see it, involves acknowledging both the pros and cons of any position. Even if you believe the pros outweigh the cons - which is a subjective judgment - you should still be able to clearly enumerate the cons. That is an analytical approach.

liamwire · 3h ago
So, to recap, your gripe, with my gripe, is that I hold the author to aesthetic standards that differ from your own—and that that’s… wrong? Do I have that right?

I ask genuinely. I want to understand your position better here.

jrflowers · 2h ago
I think it’s silly to confuse aesthetic preference with the difference between good and bad faith argumentation. Like if you insist that someone painstakingly take the time and effort to convince you that they don’t have an opinion on a topic while trying to convey their opinion about a topic, that’s so absurd that it itself borders on a bad faith request.

Also my original gripe was very clear. “Are you trying to convince yourself?” indicates that the author didn’t believe what they wrote. And your reasoning here for mentioning that is that they wrote it. It is a no-win scenario in which another person literally couldn’t hold an opinion that doesn’t conform to your aesthetic. That is insane!

itchyjunk · 3h ago
This sounds like the "self-assured gotcha" the parent is frowning about.
alexdowad · 3h ago
Wholeheartedly agree.
eru · 4h ago
Seems to be a very wordy article that complains that only a proper education teaches you to think?

In any case, even contemporary LLMs---as primitive as they will look in even a few months' time---are already pretty useful as assistants when eg writing software. They ain't gimmicks. They are also useful as a more interactive addition to an encyclopedia. Amongst other uses.

The article also conflates AI in general with LLM. It's a common enough mistake to make these days, so I won't ding the author for that.

Summary of the article: contemporary LLMs aren't very useful for highfalutin liberal arts people (yet). (However they can already churn out the kind of essays and corporate writing that people do in practice.)

frereubu · 4h ago
I think you missed the entire point of the article. They're not saying that AI cannot be useful in the way you describe. They're saying that too many people are using it as a shortcut to producing verbiage that mimics the outcomes of learning, missing out on the valuable things that come from the process of learning.
wilg · 4h ago
Is there any evidence to suggest more people are cheating now that there is AI than before, or is everybody just flipping out because the cheaters have changed tactics?