ChatGPT Is a Gimmick

67 blueridge 36 5/22/2025, 4:04:25 AM hedgehogreview.com ↗

Comments (36)

keiferski · 5m ago
These “AI is a gimmick that does nothing” articles mostly just communicate to me that most people lack imagination. I have gotten so much value out of AI (specifically ChatGPT and Midjourney) that it’s hard to imagine that a few years ago this was not even remotely possible.

The difference, it seems, is that I’ve been looking at these tools and thinking how I can use them in creative ways to accomplish a goal - and not just treating it like a magic button that solves all problems without fine-tuning.

To give you a few examples:

- There is something called the Picture Superiority Effect, which states that humans remember images better than mere words. I have been interested in applying this to language learning – imagine a unique image for each word you’re learning in German, for example. A few years ago I was about to hire an illustrator to make these images for me, but now with Midjourney or other image creators, I can functionally make unlimited unique images for $30 a month. This is a massive new development that wasn’t possible before.

- I have been working on a list of AI tools that would be useful for “thinking” or analyzing a piece of writing. Things like: analyze the assumptions in this piece; find related concepts with genealogical links; check if this idea is original or not; rephrase this argument as a series of Socratic dialogues. And so on. This kind of thing has been immensely helpful in evaluating my own personal essays and ideas, and prior to AI tools it, again, was not really possible unless I hired someone to critique my work.
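To make the second idea concrete, here is the skeleton I use: each “tool” is just a prompt template wrapped around the text under analysis. This is a hypothetical sketch in Python, not any product’s API; the template names and the `build_prompts` helper are my own invention.

```python
# Each "thinking tool" is a prompt template; {text} is filled in with the
# writing being analyzed. The resulting prompts would then be sent to
# whatever chat model you happen to use.
ANALYSIS_TOOLS = {
    "assumptions": "List the unstated assumptions in the following piece:\n\n{text}",
    "genealogy": "Name related concepts with genealogical links to the ideas in:\n\n{text}",
    "originality": "Assess whether the central idea here is original, citing precedents:\n\n{text}",
    "socratic": "Rephrase the argument below as a series of Socratic dialogues:\n\n{text}",
}

def build_prompts(text: str, tools=None) -> dict:
    """Return a {tool_name: filled-in prompt} mapping for the chosen tools."""
    tools = tools or list(ANALYSIS_TOOLS)
    return {name: ANALYSIS_TOOLS[name].format(text=text) for name in tools}

# Example: run just two of the tools against an essay draft.
prompts = build_prompts("My essay draft...", tools=["assumptions", "socratic"])
```

The point of keeping the tools as plain templates is that adding a new kind of analysis is a one-line change, and none of it depends on a particular model or vendor.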

The key for both of these example use cases is that I have absolutely no expectation of perfection. I don’t expect the AI images or text to be free of errors. The point is to use them as messy, creative tools that open up possibilities and unconsidered angles, not to do all the work for you.

danlitt · 30m ago
It is refreshing to see I am not the only person who cannot get LLMs to say anything valuable. I have tried several times, but the cycle "You're right to question this. I actually didn't do anything you asked for. Here is some more garbage!" gets really old really fast.

It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.

visarga · 1m ago
I too have sat at pianos and banged the keys but no valuable music came out. It must be because the piano is a bad instrument. /s
mnky9800n · 3m ago
This is why I like how Perplexity forces citations. I use it more as if I’m googling than because I care about what the LLM writes. The LLM simply acts as a sometimes unreasonable interface to the search engine. So really, I’m more focused on whether the embeddings the LLM was trained on found correlations between different documents that were not obvious to a different kind of search engine.
1a527dd5 · 5m ago
I think it's starting to change.

I'm an AI sceptic (and generally disregard most AI announcements). I don't think it's going to replace SWE at all.

I've been chucking the same questions at both Gemini and GPT, and I'd say until about eight months ago they were both as bad as each other and basically useless.

However, recently Gemini has gotten noticeably better and has never hallucinated.

I don't let it write any code for me. Instead, I treat Gemini as an engineer with 10+ YoE in {{subject}}.

Working as a platform engineer, my subjects are broad, so it's very useful to have a rubber duck ready to go on almost any topic.

I don't use copilot or any other AI. So I can't compare it to those.

loveparade · 6m ago
I use LLMs to check solutions for graduate-level math and physics problems I'm working on. Can I 100% trust their final output? Of course not, but I know enough about the domain to tell whether they discovered mistakes in my solutions or not. And they do a pretty good job, and have found mistakes in my reasoning many times.

I also use them for various coding tasks and they, together with agent frameworks, regularly do refactoring or small feature implementations in 1-2 minutes that would've taken me 10-20 minutes. They've probably increased my developer productivity by 2-3x overall, and by a lot more when I'm working with technology stacks that I'm not so familiar with or haven't worked with for a while. And I've been an engineer for almost 30 years.

So yea, I think you're just using them wrong.

terhechte · 2m ago
Can you give some examples where it didn't work for you? I'm curious because I derive a lot of value from it and my guess is that we're trying very different things with it.
alexdowad · 6m ago
There are absolutely times when one can get LLMs to "say something valuable". I am still learning how to put them to good use, but here are some areas where I have found clear wins:

* Super-powered thesaurus

A traditional thesaurus can only take a word and provide alternative words; with an LLM, you can take a whole phrase or sentence and say: "give me more ways to express the same idea".

I have done this occasionally when writing, and the results were great. No, I do not blindly cut-and-paste LLM output, and would never do so. But when I am struggling to phrase something just right, often the LLM will come up with a sentence which is close, and which I can tweak to get it exactly the way I want.

* Explaining a step in a mathematical proof.

When reading mathematical research papers or textbooks, I often find myself stuck at some point in a proof, not able to see how one step follows from the previous ones. Asking an LLM to explain can be a great way to get unstuck.

When doing so, you absolutely cannot take whatever the LLM says as 'gospel'. They can and will get confused and say illogical things. But if you call the LLM out on its nonsense, it can often correct itself and come up with a better explanation. Even if it doesn't get all the way to the right answer, as long as it gets close enough to give me the flash of inspiration I needed, that's enough for me.

* Super-powered programming language reference manual

I have written computer software in more than 20 programming languages, and can't remember all the standard library functions in each language, what the order of parameters is, and so on.

There are definitely times when going to a manpage or reference manual is better. But there are also times when asking an LLM is better.

otabdeveloper4 · 18m ago
It works better if you treat it like a compressed database of Google queries. (Which it kind of actually is.)

Ask it something where the Google SERP is full of trash and you might get a saner result from the LLM.

admissionsguy · 20m ago
> It makes me wonder whether everyone else is kidding themselves, or if I'm just holding it wrong.

Have been wondering this ever since 1 week after the initial ChatGPT release.

My cynical take is that most people don't do real work (i.e., work that is objectively evaluated against reality), so they are not able to see the difference between a gimmick and the real thing. Most people are in the business of making impressions, and LLMs are pretty good at that.

It's good that we have markets that will eventually sort it out.

Hoasi · 5m ago
> It's good that we have markets that will eventually sort it out.

But then again, it's not as if markets always rewarded real work either.

admissionsguy · 2m ago
It's noisy, biased, and slow, but it generally does eventually reward what works better, in most cases.
melagonster · 5m ago
There used to be many people whose job was just to input text from paper into a computer. Were they doing real work?
tsurba · 16m ago
I do machine learning research and it is very useful for working out equations and checking for ”does this concept already have an established name” etc.

It is also excellent for writing one-off code experiments and plots, saving some time from having to write them from scratch.
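By one-off code experiments I mean throwaway scripts like this (written by hand here as an example of the genre, not actual model output): a quick Monte Carlo estimate of pi that saves you the few minutes of writing it yourself.

```python
import random

def estimate_pi(n: int, seed: int = 0) -> float:
    """Estimate pi by sampling points in the unit square and counting
    how many fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)  # seeded for reproducibility
    inside = sum(
        1
        for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n

print(estimate_pi(100_000))  # close to 3.14159 for large n
```

Nothing about the script is hard; it's just the kind of disposable scaffolding that is faster to delegate than to type.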

I’m sorry but you are just using it wrong.

Incipient · 6m ago
I personally find your use cases to be the same as mine for AI, along with "fancy autocomplete" in larger files. However, it's a disappointingly limited use compared to the "it's nearly AGI" vision companies are selling.

The code it generates is also...questionable, and I'm a pretty middling dev.

th0ma5 · 19m ago
Some opinions out there: people are using them as slot machines, are using them in place of project templates and aren't really working on things of substance, or they are in the business of producing artifacts that look like work but whose fitness is determined elsewhere. If they are producing working code beyond autocomplete-style capabilities, they are either disinterested in long-term supportability, ignoring all of the handholding they have to do to get things to work, or don't understand that experienced programmers ultimately get to a point where they are not writing that much code to begin with, which isn't something these tools can help much with other than rubber-duck-style bouncing of ideas. But even then, at the bleeding edge of capabilities, you get exponentially more and more bits from unrelated patterns that make the internal weights work, the farther you stray from well-worn development paths.
dist-epoch · 3m ago
As a non-physicist, I found its explanations of the physics questions I've asked amazing, better than watching videos on them, since you can iterate on exactly the point you don't understand.

Same for philosophy questions, "explain this piece of news through the lens of X philosopher's Y concept".

ohxh · 3m ago
This seems unusually shallow for the hedgehog review. I thought we'd largely moved on from this sort of sentimental, "I can't get good outputs therefore nobody can" style essay -- not to mention the water use argument! They've published far better writing on LLMs too: see "Language Machinery" from fall 23 [1]

[1] https://hedgehogreview.com/issues/markets-and-the-good/artic...

ddxv · 36m ago
I personally feel like some of the AI hype is driven by its ability to create flashy demos that become dead-end projects.

It's so easy to spin up an example ("write me a sample chat app" or whatever) and be amazed at how quickly and fully it realizes the idea, but it does kind of raise the question: now what?

I think in the same way that image generation is akin to clipart (wildly useful, but lacking in depth and meaning) the AI code generation projects are akin to webpage templates. They can help get you started, and take you further than you could on your own, but ultimately you have to decide "now what" after you take that first (AI) step.

th0ma5 · 25m ago
The demos I see all make compromises in order to work, compromises that hobble you from hardening them or otherwise lock you into very specific conceptualizations that you simply wouldn't have if you were building from the smallest low-level building blocks, or even starting from a super-high-level state-machine placeholder. In my experience, no matter how hard I try, it will be guided by the weights of the total generated output toward something that doesn't understand the value of compartmentalization, and will add tokens that make its probabilities work internally above all.
elric · 32m ago
> “Human interaction is not as important to today’s students,” Latham claims

Goodness that's depressing. Is this going to crank individualism up to 11?

I remember hating having to do group projects in school. Most often, 3/5 of the group would contribute jack shit, while the remaining people had to pick up the slack. But even with lazy gits, the interactions were what made it valuable.

Maybe human-AI cooperation is an important skill for people to learn, but it shouldn't come at the cost of losing even more human-human cooperation and interaction.

DaSHacka · 7m ago
> Most often, 3/5 of the group would contribute jack shit, while the remaining people had to pick up the slack.

Never fear, nowadays 3/5 do squat with the 4th sending you largely-incoherent GPT sludge, before dropping off the face of the earth until 11:30PM on the night the assignment's due.

I've seen it said that college is supposed to teach you the skills to navigate working with others, more so than your specific field of study. Glad to see they've still got it.

isaacfrond · 17m ago
> The claim of inevitability is crucial to technology hype cycles, from the railroad to television to AI.

Well. You know. We still have plenty of railroad, and television has had a pretty good run too. So if those are the models to compare AI to, then I have bad news about how much of a 'hype cycle' AI is going to be.

lexandstuff · 10m ago
One of the realisations I've had recently is that the AI hype feels like another level from what's come before because AI itself is creating the "hype" content fed to me (and my bosses and colleagues) all over social media.

The FOMO tech people are having with AI is out of control - everyone assumes that everyone else is having way more success with it than they are.

crowcroft · 11m ago
Microprocessors are a gimmick, they're just toys compared to mainframes.
fedeb95 · 27m ago
Some companies may save money by employing LLMs to do shallow things. Others may not. Also, LLMs are not all of AI. AI is a broad field with many models and applications that were already omnipresent in our lives but less marketable to the general public as revolutionary, such as spam filters. AI is NOT a gimmick per se. Some users are.

P.S.: consider that when there are huge investments in something, people will do anything to see a return, including paying other people to create hype.

liamwire · 14m ago
I read the entire essay. It comes across as wholly uninspired. Some thoughts:

> But do the apologists even believe it themselves? Latham, the professor of strategy, gives away the game at the end of his reverie. “None of this can happen, though,” he writes, “if professors and administrators continue to have their heads in the sand.” So it’s not inevitable after all? Whoops.

This self-assured ‘gotcha’ attitude is pungent throughout the whole piece, but this may be as good an example as any. It’s riddled with cherry-picked choices and quotes from singular actors, as if they were representative of every educator and every decision maker, and it’s such a bad look from someone who clearly knows better. I don’t expect the author to take the most charitable position, but one of intellectual honesty would be nice. To pretend there aren’t people out there applying technological advancement, including current AI, to education in thoughtful, meaningful, and beneficial (even if challenging to quantify) ways, or perhaps to ignore them, is obtuse. To decide those things cannot be true, given their exclusion, is to do the same head-burying he ridicules others for.

> After I got her feedback, I finally asked ChatGPT if generative AI could be considered a gimmick in Ngai’s sense. I did not read its answer carefully. Whenever I see the words cascade down my computer screen, I get a sinking feeling. Do I really have to read this? I know I am unlikely to find anything truly interesting or surprising, and the ease with which the words appear really does cheapen them.

It may have well been the author’s point, but the disdain for the technology that drips from sentences like these, which are rife throughout, taints any appreciation for the argument they’re trying to make — and I’m really trying to take it in good faith. Knowing they come in with such strongly held preconceived notions makes me reflexively question their own introspection before putting pen to paper.

Ultimately, are you writing to convince me, or yourself, of your point?

alexdowad · 3m ago
Wholeheartedly agree.
jrflowers · 59s ago
> Knowing they come in with such strongly held preconceived notions makes me reflexively question their own introspection before putting pen to paper.

>Ultimately, are you writing to convince me, or yourself, of your point?

I like that you point out here that the author clearly has a strong opinion, and then immediately say that the act of expressing that opinion may suggest that they do not hold that opinion at all.

By this logic, are you trying to convince us that you don’t love the way this article is written, or are you trying to convince yourself of that?

wilg · 23m ago
> But look at what people actually use this wonder for: brain-dead books and videos, scam-filled ads, polished but boring homework essays. Another presenter at the workshop I attended said he used AI to help him decide what to give his kids for breakfast that morning.

The last example is actually the most interesting! The essays are whatever; dumb or lazy kids are gonna cheat on their homework, and schools have long needed better ways of teaching kids than regurgitative essays, but in the meantime, just use an in-class essay or exam. But people aren't really making the brain-dead books and videos as anything other than a curiosity, despite the fears of various humanities professors.

The interesting part of AI, and I suspect the primary actual use case, is everything else.

notepad0x90 · 33m ago
Is it that ChatGPT is a gimmick or is it that people are using it as such?

A lot of the author's arguments could have been said about the internet in the 90's. This is a baby, a 4-year-old leap in technology; why are people expecting it to be mature?

It is human nature to try and find silver bullets, to take solutions and find problems. The way I would look at the LLM-centered future is to consider LLM agents assistants and suggestion makers, personal consultants even. You don't ask an agent to write an essay for you, you write an essay, and as you write consider its suggestions and corrections. The models should be familiar with your writing style and preferences. Don't blame ChatGPT for human laziness.

There was this fad about everything being "smart" (smart home, smart toothbrush, smart sex toy, etc.). That wasn't smart; it was just connected to a network. This is "smart", and in the future the technology might get past "smart" and become "intelligent" (we're not there yet, outside of sci-fi at least).

At the end of the day, everyone needs to step back and consider this: it's just a tool, period. It's not "AI", not really; there is no intelligence.

The problem is, the world is full of enshittification capitalists and their doomsday bandwagons.

bigstrat2003 · 16m ago
> This is a baby, a 4-year-old leap in technology; why are people expecting it to be mature?

Because its fans act as though it is, and this article is a response to that overly-enthusiastic outlook on what the tool can do.

max_ · 20m ago
The millennial tech bros have mostly made their money off gimmicks like Instagram, TikTok, et al.

I was very disgusted when I saw VC firms with billions in AUM put money into things like FartCoin and digital twins.

The Boomer VCs financed stuff that is genuinely useful: MRI scanners, Google, Apple Computer, Genentech (brought insulin to the masses).

The millennial VCs fund stuff that is at best convenient to have (Airbnb, Uber) but usually gimmicks: Instagram, TikTok.

Sam Altman is the master of gimmicks.

He took the GPT model that already existed and wrapped it in a chat format similar to ELIZA. [0]

He took neural style transfer, which had existed for a long time, and paired it with Studio Ghibli fandom. [1]

[0]: https://en.m.wikipedia.org/wiki/ELIZA_effect

[1]: https://en.m.wikipedia.org/wiki/Neural_style_transfer

eru · 39m ago
Seems to be a very wordy article that complains that only a proper education teaches you to think?

In any case, even contemporary LLMs---as primitive as they will look in even a few months' time---are already pretty useful as assistants when, e.g., writing software programmes. They ain't gimmicks. They are also useful as a more interactive addition to an encyclopedia, amongst other uses.

The article also conflates AI in general with LLM. It's a common enough mistake to make these days, so I won't ding the author for that.

Summary of the article: contemporary LLMs aren't very useful for highfalutin liberal arts people (yet). (However they can already churn out the kind of essays and corporate writing that people do in practice.)

frereubu · 26m ago
I think you missed the entire point of the article. They're not saying that AI cannot be useful in the way you describe. They're saying that too many people are using it as a shortcut to producing verbiage that mimics the outcomes of learning, missing out the valuable things that come from the process of learning.
wilg · 22m ago
Is there any evidence to suggest more people are cheating now that there is AI than before, or is everybody just flipping out because the cheaters have changed tactics?