Broadly agreed with all the points outlined in there.
But for me the biggest issue with all this — that I don't see covered in here, or maybe just a little bit in passing — is what all of this is doing to beginners, and the learning pipeline.
> There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though.
> I glimpsed someone on Twitter a few days ago, also scoffing at the idea that anyone would decide not to use the Whatever machine. I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”
When you're a beginner, it's totally normal to not really want to put in the hard work. You try drawing a picture, and it sucks. You try playing the guitar, and you can't even get simple notes right. Of course a machine where you can just say "a picture in the style of Pokémon, but of my cat" and get a perfect result out is much more tempting to a 12 year old kid than the prospect of having to grind for 5 years before being kind of good.
But up until now, you had no choice but to keep making crappy pictures and playing crappy songs until you actually start to develop a taste for the effort, and a few years later you find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.
I shudder to think where we'll be if the corporate-media machine keeps hammering the message "you don't have to bother learning how to draw, drawing is hard, just get ChatGPT to draw pictures for you" to young people for years to come.
raincole · 9h ago
People will write lengthy and convoluted explanations of why an LLM isn't like the calculator, the microwave oven, or any technology that came before. (Like OP's article)
But it really is. Humans have been looking for easier and lazier ways to do things since the dawn of civilization.
Tech never ever prevents people who really want to hone their skills from doing so. The 100m sprint world record kept improving even after the car was invented. The world record for memorizing digits of pi kept improving even though a computer does it incomparably better.
It's ridiculous to think drawing will become a lost art because of LLMs/diffusion models when we live in a reality where powerlifting is a thing.
maleno · 6h ago
I think it's interesting that practically every time this point is made (and it is made so very often), the examples that are used to prove the point are objective and easy to measure. A 100m sprint time or a calculation of Pi is not the same as a work of art, because they can be measured objectively while art cannot. There is no equivalent in art-making to running a 100m sprint. The evaluation of a 100m sprint is not subjective, does not require judgement, does not depend on taste, context, history, and all the other many things the reputation and impact of a work of art depends on.
As ever, the standard defence of LLM and all gen AI tech rests on this reduction of complex subjectivity to something close to objectivity: the picture looks like other pictures, therefore it is a good picture. The sentence looks plausibly like other sentences, therefore it is a good sentence. That this argument is so pervasive tells me only that the audience for 'creative work' is already so inundated with depthless trash, that they can no longer tell the difference between painting and powerlifting.
It is not the artists who are primarily at risk here, but the audience for their work. Artists will continue to disappear for the same reason they always have: because their prospective audience does not understand them.
bsenftner · 5h ago
There are at least three major art markets: 1) pretty pictures to fill a void (empty walls, dressing up an article...), 2) prestige purchases for those trying to fill the void of their imposter syndrome, and 3) fellow artists who are really philosophers working beyond language. The whole reason art is evaluated with vague notions like taste, context, history and so on is that the work of artists left their audience's understanding several generations ago; but artists still need to make a living, so these proxies are used so the general public does not feel left out. Serious art is leading-edge philosophy operating in a medium beyond language, and for what it's worth AI will never be there, just like the majority of people.
MichaelZuo · 2h ago
There’s an even deeper issue, not just for art, for all things.
The majority of artists, and of all other groups, are in fact mediocre with mediocre virtues, so enough incentives would turn most of them into Whatever shillers like the post describes.
So a non-expert cannot easily determine, even if they do stumble upon “Serious art” by happenstance, whether it’s just another empty scheme or indeed something more serious.
Maybe if they spend several hours puzzling over the artist’s background, incentives, network, claims, past works, etc… they can be 99% sure. But almost nobody likes any particular piece of work enough upon first glance to put in that much effort.
Miraltar · 3h ago
The example might be bad but the argument still stands. Painting didn't disappear when photography was invented. Drummers still drum after the invention of drum machines.
globnomulous · 30m ago
Music is actually a terrific counterexample to your point. It perfectly demonstrates the culturally and artistically destructive power of the steady march of progress in computer technology -- which really has led to fewer drummers.
Far fewer people make their living as musicians than did even thirty years ago. Being a musician is no longer a viable middle-class career. Jaron Lanier, who has written on this, has argued that it's the direct result of the advent of the internet, music piracy, and streaming -- two of which originally were expected or promised to provide more opportunities for artists, not take them away.
So there really are far fewer drummers, and fewer, worse opportunities for those who remain, than there were within the living memory of even most HN users, not because some specific musical technology advanced but because technological advancement provided an easier, cheaper alternative to human labor.
Sound familiar yet?
dingnuts · 15m ago
> which really has led to fewer drummers.
what's your basis for this claim? please provide some data showing number of drummers over time, or at least musicians, over the last fifty years or so. I tried searching and couldn't find anything but you're so confident, I'm sure you have a source you could link
A better analogy might be all those gloomy Victorian artists wandering around declaring the death of portraiture after photography really got going.
Unearned5161 · 7h ago
Something notable when comparing LLMs to calculators is that the skill a calculator replaces can be learned by any competent adult in about a week. Manual division, addition, even the more complicated stuff; it just takes somewhat longer. However, the skills that an LLM is targeting, once atrophied, are not recovered on such short time frames.
Being good at coming up with ideas, at critically reading something, at synthesizing research, at writing and editing, are all things that take years to learn. This is not the same as learning the mechanics that a calculator does for you.
bryanrasmussen · 9h ago
>an LLM isn't like the calculator, the microwave oven, or any technology that came before. (Like OP's article) But it really is.
I would not buy a calculator that hallucinated wrong answers part of the time. Or a microwave oven that told you it grilled the chicken when it didn't, leaving you to die of Salmonella poisoning.
lan321 · 8h ago
The microwave analogy is good. I still use it, even though it often makes half my food scalding hot while the other half remains fridge cold.
badpun · 8h ago
You should set the microwave to much lower power and let it heat for much longer, so that the heat has time to spread evenly through the mass of the food. It even says so in the instruction manual. If you blast at full power, leave the food to sit for at least 2 minutes after it's heated so the heat can balance out across the food (again, it's in the manual).
fhe · 6h ago
or turn the food over, or move it to a different position inside the microwave -- the way a microwave works, it heats the food unevenly (there's a standing wave involved).
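(For scale, assuming the usual household magnetron frequency of 2.45 GHz: the wavelength is c/f ≈ (3.0 × 10^8 m/s) / (2.45 × 10^9 Hz) ≈ 12 cm, so the standing wave's hot spots sit about half a wavelength, roughly 6 cm, apart. A dinner plate easily spans several hot and cold zones.)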
lcnPylGDnU4H9OF · 3h ago
I don’t really think repositioning it has a direct effect. An indirect effect of moving it around is that you turn the microwave off for around 30 seconds or more in order to do it. The reason some parts heat up faster is that they have higher concentrations of water; the magic is in letting the water stop boiling so all of the heat can spread through.
(I’ve heard the fans that you hear are there to reflect the microwaves and make them bounce all over the place, but I don’t know if that’s true. Regardless, most models have a spinning plate which constantly repositions the food as it cooks.)
immibis · 1h ago
The fan you hear is to keep the microwave generator cool. It's outside the part of the microwave where the microwaves go.
Older microwaves had a fan-like metal stirrer inside the cooking box, that would continuously re-randomize where the waves went. This has been out of fashion for several decades.
lan321 · 5h ago
Yes, my point was that microwaves are advertised as a 'throw your lunch in and get it warm in 1-2 minutes' appliance, but kinda like an LLM, they require some manual effort to do it well (or decently, depending on your standards).
Like:
1- Put it on the edge of the plate, not in the middle
2- Check every X seconds and give it a stir
3- Don't put metal in
4- Don't put sealed things in
5- Adjust time for wetness
6- Probably don't put in dry things? (I believe you need water content for a microwave to really work? Not sure, haven't tried heating a cup of flour or making caramel in the microwave)
7- Consider that some things heat weirdly, for example bread heats stupid quick and then turns into stone equally as quick once you take it out.
...
analog31 · 3h ago
Just an odd aside that occurred to me: Would you buy a calculator that hallucinates wrong answers part of the time, but gets enough correct answers and "partial credit" to earn you a certificate for being competent in math?
CuriouslyC · 2h ago
I would buy a calculator that could help me break down the problem and show my work though, that's the hardest part. I can always double check the numbers, and I would get partial credit for a miscalculation with the right process, but if I can't figure out how to represent the problem mathematically, I'm cooked.
thayne · 1h ago
Even if it is wrong half the time, and even when it gives you the right answer the work it shows isn't correct?
raincole · 9h ago
A microwave oven does quite unexpected things when you cook an egg in its shell or a dish on a metal plate.
We teach our kids about microwave oven safety for this reason.
bigstrat2003 · 8h ago
On the contrary, those things are quite predictable. Once you know those issues exist, you can reliably avoid them. But with LLMs you can't reliably avoid hallucinations. The unreliability is baked into the very nature of the tool.
Cthulhu_ · 8h ago
Except that we know that it does that when you put those things in, so they aren't "quite unexpected".
JW_00000 · 9h ago
My grandma did not have a microwave oven because she didn't see the point of it.
Al-Khwarizmi · 8h ago
I'm in my 40s and don't have a microwave oven because I don't see the point of it... when I lived in a rented apartment, I got gifted one because how could I not have one? I tried it for a few days and just didn't find it useful. When I bought my own apartment and renovated the kitchen, I didn't bother to install one.
Semaphor · 8h ago
The use is heating up single portions of leftovers.
Al-Khwarizmi · 8h ago
Which can be done in an induction cooker almost as fast, with the result tasting better, and without the need of a specific appliance that takes up considerable space.
Semaphor · 7h ago
I guess, if you have one of those. Vastly more expensive and more involved to install, especially when renting. I’ve never used one because I’ve never been at a place with one.
Al-Khwarizmi · 7h ago
When I rented I had a standard ceramic hob and still didn't see the point... sure, you gain some time, but it's maybe 5 minutes of unattended time where you can often be doing something else, vs. much worse taste. But I understand that with slow cookers it can make sense for other people. With induction I think it's outright pointless.
As an anecdote, in my country there is a very popular brand of supermarket pizzas, Casa Tarradellas. I never buy them but a friend of mine used to eat them really frequently. So once he shows up at my house with one, and I say OK, I'm going to heat it. I serve it, he tries a bite and is totally blown away. He says "What did you do? I've been eating these pizzas for years and they never taste like this, this is amazing, the best Casa Tarradellas pizza I've ever had".
The answer was that he used the microwave and I had heated it in the regular oven...
Semaphor · 6h ago
> vs. much worse taste
I have never had that issue when heating stuff up. Your pizza example is not reheating (and generally you never want to reheat anything that’s supposed to be crispy in the microwave; though not on the stove top either).
ted_bunny · 5h ago
Reheat pizza on a stovetop covered on low heat. Better than the first time it was cooked. Yw
Semaphor · 4h ago
I prefer convection ovens for that.
kasey_junk · 4h ago
I don’t own a microwave because I don’t mind the trade offs of other tools that do the same job. But I don’t go around telling people who find microwaves useful that they are bringing about the end of cooking and should feel bad because of it.
Daisywh · 6h ago
Maybe the real question isn’t whether the microwave is useful, but whether she wanted what it offered. That seems to apply to a lot of tech debates today too.
kenjackson · 9h ago
My microwave regularly doesn’t cook things as the instructions describe. I’ve learned to pay attention.
rob_c · 8h ago
You would if you were able to do basic mental maths and had learned to engage your brain and run basic sanity checks. That's still much faster than grabbing the slide rule. (And it's not like people are infallible.)
Obviously if one product hallucinates and one doesn't it's a no-brainer (cough, Intel FPUs). But in a world where the only calculators available hallucinated at the 0.5% level, you'd probably still have one in your pocket.
And obviously if the calculator hallucinated 90% of the time on a task which could otherwise be automated, you'd just use that other approach.
eesmith · 7h ago
I've seen my accountant's fingers fly flawlessly over a calculator to track expenses down to the penny. Few people had those mental skills even in the days before calculators - either mechanical or digital.
Slide rules are good for only a couple of digits of precision. That's why shopkeepers used abacuses, not slide rules.
I have a hard time understanding your hypothetical. What does it mean to hallucinate at the 0.5% level? That repeating the same question has a 0.5% chance of giving the wrong answer but is otherwise precise? In that case you can repeat the calculation a few times to get high certainty (see the sketch at the end of this comment). Or that even if you repeat the same calculation 100 times and choose the most frequent response, there's still a 0.5% chance of it being the wrong one?
Or that values can be consistently off by within 0.5% (like you might get from linear interpolation)? In that case you are a bit better than a slide rule for estimating, but not accurate enough for accounting purposes, to name one.
Does this hypothetical calculator handle just plus, minus, multiply, and divide? Or everything that a TI 84 can handle? Or everything that WolframAlpha can handle?
If you had a slide rule and knew how to use it, when would you pay $40/month for that calculator service?
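To make the first reading concrete, here's a toy sketch. It's only an illustration: the flaky_add calculator and all its numbers are hypothetical, and it assumes each run errs independently at 0.5%.

    import random
    from collections import Counter

    def flaky_add(a, b, error_rate=0.005):
        # Hypothetical calculator: independently wrong 0.5% of the time.
        if random.random() < error_rate:
            return a + b + random.choice([-9, -1, 1, 9])
        return a + b

    def majority_add(a, b, trials=3):
        # Repeat the calculation and keep the most frequent answer.
        counts = Counter(flaky_add(a, b) for _ in range(trials))
        return counts.most_common(1)[0][0]

    # Majority-of-3 fails only when at least two of the three runs go
    # wrong at once: roughly 3 * 0.005**2 = 7.5e-05 at worst.
    runs = 100_000
    wrong = sum(majority_add(2, 2) != 4 for _ in range(runs))
    print(f"majority-of-3 error rate: {wrong / runs:.5f}")

Under the second reading, where even the most frequent answer out of 100 runs can be wrong 0.5% of the time, repetition buys you nothing, which is why the two readings are so different.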
SAI_Peregrinus · 1h ago
> Slide rules are good for only a couple of digits of precision. That's why shopkeepers used abacuses, not slide rules.
Shopkeepers did integer math, not decimal. They had no need for a slide rule: an abacus is faster at integer math, while a slide rule is for dealing with real numbers.
fireflash38 · 4h ago
Slide rules were used in astronomy, engineering, and aviation. You could get them more accurate than 2 decimal places.
While yes, "Astronomical work also required precise computations, and, in 19th-century Germany, a steel slide rule about two meters long was used at one observatory. It had a microscope attached, giving it accuracy to six decimal places" (same Wikipedia page), remember that this thread is about calculating devices one might carry in one's pocket, have on one's self, or otherwise be able to "grab".
(There's a scene in a pre-WWII SF story where the astrogators on a large interstellar FTL spacecraft use a multi-meter long slide rule with a microscope to read the vernier scale. I can't remember the story.)
My experience is that I can easily get two digits, and while I come close to the full three digits, I rarely achieve them, so I wouldn't say you get three digits from a slide rule of the sort I thought was relevant.
> With the ordinary slide rule, the accuracy obtainable will largely depend upon the precision of the scale spacings, the length of the rule, the speed of working, and the aptitude of the operator. With the lower scales it is generally assumed that the readings are accurate to within 0.5 per cent; but with a smooth-working slide the practised user can work to within 0.25 per cent
That's between 2 and 3 digits. You wouldn't do your bookkeeping with it.
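(As a sanity check on "between 2 and 3": 0.5 per cent is 1 part in 200, and log10(200) ≈ 2.3 significant digits; 0.25 per cent is 1 part in 400, and log10(400) ≈ 2.6.)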
bigstrat2003 · 8m ago
"a couple" always means two. "A few" always means three. That wiki is wrong.
zpeti · 8h ago
Do you use a GPS? That sometimes gets the route wrong, but overall gets you to where you want to go in less traffic than if you didn't use it?
And occasionally really delights you with new routes?
(thanks Rory Sutherland for this analogy)
8n4vidtmkvmk · 7h ago
The success rate and failure mode matter. GPS/maps you can look at before driving and confirm the route isn't completely insane quite easily, and if it takes a suboptimal route you still get to your destination.
If an LLM hallucinates and you don't know better, it can be bad. Hopefully people are double checking things that really matter, but some things are a little harder to fact check.
kqr · 3h ago
Wait, do people generally use GPS routes to go places? I find GPS to be great to locate myself when I can't figure it out from landmarks, but I very rarely use it to select a route – I can just look at the map in the same app and figure out a route on my own.
Muromec · 1h ago
I used GPS routing today to cycle from a station to a place I'd never been to before. Used it before for driving on highways in an unfamiliar place. It's very helpful when you don't know the area very well.
sfn42 · 2h ago
I can't remember a whole route just from looking at the map once. Also maps generally don't show whether streets are one way and things like that.
I just type the address into Google maps, or place a pin manually, then hit the start button. It'll tell me every step of the way. Keep right at the fork. In a hundred meters, turn left. Turn left. Take the second exit in the roundabout. Your destination is a hundred meters ahead on the right.
It's great and it works almost flawlessly. Even better if you have a passenger able to keep an eye on it for those times when it isn't flawless.
1718627440 · 2m ago
> Also maps generally don't show whether streets are one way and things like that.
Citation needed.
eesmith · 8h ago
Rarely. I feel lost when I use GPS to get places.
Alec Watson of Technology Connections points out that GPS routing defaults to minimizing time, even when that may not be the most logical way to get somewhere.
His commentary, which starts at https://youtu.be/QEJpZjg8GuA?t=1804 , is an example of his larger argument about the complacency of letting automation do things for you.
His example is a Google Maps routing which saves one minute by going a long way to use a fast expressway (plus $1 toll), rather than more direct but slower state routes and surface streets. It optimizes one variable - time - of the many variables which might be important to you - wear&tear, toll costs, and the delight of knowing more about what's going on in the neighborhood. (A toy sketch of that trade-off is at the end of this comment.)
He makes the point that he is not calling for a return to paper maps, but rather to reject automation complacency, which I'll interpret as letting the GPS figure everything out for you.
We've all heard stories of people depending on their GPS too much then ending up stuck on a forest road, or in a lake, or other place which requires rescue - what's the equivalent failure mode with a calculator?
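To put numbers on the one-minute example, here's a toy cost function. All weights and figures are made up for illustration; real routers expose almost none of this.

    # Toy sketch: score a route on more than time alone.
    WEIGHTS = {"minutes": 1.0, "toll_dollars": 0.5, "miles": 0.2}

    def route_cost(route):
        # Weighted sum over the variables you actually care about.
        return sum(WEIGHTS[k] * v for k, v in route.items())

    expressway = {"minutes": 29, "toll_dollars": 1.0, "miles": 18}
    surface_streets = {"minutes": 30, "toll_dollars": 0.0, "miles": 11}

    print(route_cost(expressway))       # 33.1
    print(route_cost(surface_streets))  # 32.2 - the "slower" route wins

Once tolls and mileage get any weight at all, the one-minute saving stops being a win.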
sfn42 · 2h ago
If you drive into a lake or anything like that, it's your own fault, not the GPS's. It doesn't control the car, it just gives you directions. And if you know the area well enough to make judgements like the other things you mentioned, you don't need GPS. GPS is specifically for when you don't know where to go.
I use it all the time, pretty much zero issues.
zpeti · 8h ago
OK, I don't think I'm going to persuade you if you don't use GPS. But 95% of the population do.
eesmith · 7h ago
Pardon? I said I use GPS.
I'm also aware of the failure modes with GPS complacency, including its incomplete knowledge of the variables that I find important.
And that's with something that makes mistakes far less often than LLMs and related technology.
Which is why I don't think that your mention of GPS use is a strong counter-example to bryanrasmussen's comment against using hallucinating devices.
cess11 · 7h ago
I don't, I use static maps and aerial photos, and sometimes satellite photos. I also don't have a microwave oven, in part because they are highly unreliable depending on where certain molecules ended up in the bucket in the freezer and so on.
However, I do have a pressure cooker and a rice cooker that gets a lot of use. They're extremely reliable and don't use much electricity and I can schedule what they do, which is bulk cooking without me having to care about it while it happens.
KaiserPro · 7h ago
Up until recently I could, if I wanted to, make a living doing VFX. I could, if I wanted to, craft new worlds, and get paid for it.
In two years, that won't be the case.
It's the same for virtually all other arts-based jobs. An economy that currently supports, say, 100% of those people will at most be able to support 10-30% in a few years' time.
> It's ridiculous to think drawing will become a lost art because of LLMs/diffusion
Map reading is pretty much a dead art now (as someone who leads hikes, I've seen it first hand)
Memorising books/oral history is also a long dead art.
Oral story telling is also a dead art, as is folk music, compared to its peak.
Sure, _rich_ people will be able to do all the arts they want. Everyone else won't.
scripper · 7h ago
I agree. I am at mid-career. I know many people who dedicated years of their lives learning a craft and building a dignified, somewhat-creative career. I admire these people greatly. The rewards from putting in this effort have disappeared.
For example, I have no knowledge of film editing or what “works” in a sequence, but if I wanted to I could create something more than passable with AI.
scrollaway · 7h ago
My girlfriend is a ceramist. She makes porcelain pieces (https://malinamore.art/) that are sold for hundreds or even thousands of euros.
Why would someone buy a plate off her, when they could get one from IKEA for 1.50 eur?
Yet ceramics is not a dead art. Beats me?
KaiserPro · 6h ago
Correct!
but 200 years ago there were loads of ceramics manufacturers, employing hundreds of thousands of skilled potters. Even 50 years ago there were thousands of skilled ceramists in the UK. Now it's single-person artisans, like your very talented other half.
That reduction in workforce took 200 years and mirrors the industrial revolution. GenAI looks like it's going to speed-run that in ~5-7 years.
I should be clearer: there is a difference between a dead art (memorizing stories) and a career that is no longer viable for all but 1% of the people in it today. I'm talking about the latter.
tetraodonpuffer · 4h ago
there will always be a market for exceptional artists, but what about the other 80-90% of people that used to be able to make a living and now can't anymore? What are they going to do? And without the possibility of a particular profession leading to gainful employment, very few people will even start it, making the funnel smaller and smaller until even exceptional artists won't be able to emerge at all.
CuriouslyC · 2h ago
We still have amazing master blacksmiths who've reached the pinnacle of the craft despite no economic demand for their skills, so clearly the lack of a market doesn't deter curious people looking for a hobby.
KaiserPro · 1h ago
> doesn't deter curious people looking for a hobby.
curious rich people.
1718627440 · 18m ago
But you are not allowed to use either until you can already cook and calculate.
dingnuts · 13m ago
comparing LLMs to microwaves makes me think of the time my grandmother cooked the Thanksgiving turkey in a microwave because it was new and easy and the obvious thing to do!
I guess the analogy isn't that bad! I'd be pretty upset if a professional cook made my steak in a microwave.
lloeki · 4h ago
> The 100m sprint world record kept improving even after the car was invented.
A very good example! (...although probably not how you think it is ;)
Indeed the world record is achieved by a very limited number of people under stringent conditions.
Meanwhile people by and large† take their cars to go to the bakery which on foot would be 10 minutes away, to disastrous effect on their health.
And by "cars" I mean "technology", which, while a fantastic enabler of things impossible before, has turned more people into couch potatoes than athletes.
† Comparatively to world record holders.
Cthulhu_ · 8h ago
> People will write lengthy and convoluted explanations of why an LLM isn't like the calculator, the microwave oven, or any technology that came before. (Like OP's article) But it really is.
No it's not (like OP's article says). With a calculator you punch in 10 + 9 and get 2 immediately, and this was 50+ years ago. With an LLM you type in "what is 10 + 9" and get three paragraphs of text after a few seconds. (this is false, I just tried it and the response is "10 + 9 = 19" but I'm exaggerating for dramatic effect). With a microwave you yeet in food and press a button and stuff happens the same way, every time.
Sure, if you abstract it to "doing things in an easier and lazier way", LLMs are just the next step, like IDEs with built-in error checking and code generation have been for the last 20 years. But it's more vague than pressing a button to do a thing.
andreasmetsala · 5h ago
> No it's not (like OP's article says). With a calculator you punch in 10 + 9 and get 2 immediately, and this was 50+ years ago.
Your calculator is broken.
> With an LLM you type in "what is 10 + 9" and get three paragraphs of text after a few seconds. (this is false, I just tried it and the response is "10 + 9 = 19" but I'm exaggerating for dramatic effect).
So you’re arguing against a strawman?
aredox · 7h ago
>The 100m sprint world record kept improving even after the car was invented.
Obesity rates have also kept "improving" since the car was invented, to the point of becoming a major public health crisis and the main amplifier of complications and mortality when the pandemic struck.
Oh, and the 100m sprint world record has stood for more than a decade and a half now, which means either we've reached the human optimum, or progress in anti-doping technology has forced a regression in performance.
andrepd · 5h ago
> People will write lengthy and convoluted explanations of why an LLM isn't like the calculator, the microwave oven, or any technology that came before. (Like OP's article) But it really is.
Well you sure showed them.
The TFA makes a very concrete point about how the Whatever machine is categorically different from a calculator or a handsaw. A calculator doesn't sometimes hallucinate a wrong result. A saw doesn't sometimes cut wavy lines instead of straight lines. They are predictable and learnable tools. I don't see anyone addressing this criticism, only strawmanning.
tgv · 9h ago
> Tech never ever prevents people who really want to hone their skills from doing so
Even though that is a generalization that you cannot prove, you implicitly admit that it will prevent everybody else from getting any skills. Which is quite a bad outcome.
> powerlifting is a thing
Those people have a different motivation: looks, competition, prestige, power. That doesn't motivate people to learn to draw.
Your easy dismissal is undoubtedly shared by many, but it is hubris.
mystified5016 · 54m ago
AI is the equivalent of going from stone abacuses straight to smartphones, skipping all computer and calculator development in between.
We go from a society where only a very few people are literate in math to one where everyone has a literal supercomputer at all times. What do you think that would do for math literacy in a society? Would everyone suddenly want to learn algebra and calculus? Or would the vast majority of people use the easy machine and accept its answers without question or understanding?
armchairhacker · 3h ago
IMO AI isn't like a calculator, it is like a microwave. Another analogy would be like takeout. The difference is that you don't get to choose the details, and usually get worse quality, but sometimes that's OK.
Ygg2 · 5h ago
> People will write lengthy and convoluted explanations of why an LLM isn't like the calculator, the microwave oven, or any technology that came before. (Like OP's article) But it really is.
There are clear differences. First off, a calculator and a microwave are quite different from each other, but so is an LLM. All of them are time savers, in the sense that the microwave saves time over defrosting and the calculator saves time over a human calculating.
They save time on the way to a goal. However, calculators come with a penalty: by making multiplication easier, they make the user worse at it.
LLMs are like calculators but worse. They too are effort savers, and thus come with a huge learning penalty, while being imprecise enough that you need to learn to know better than them.
Caelus9 · 6h ago
Totally agree. Calculators didn’t kill math. Cameras didn’t kill painting. Tools change the baseline, but people still push the edges. The ones who love the craft won’t stop just because it got easier for others.
ninetyninenine · 9h ago
>People will write lengthy and convoluted explanations of why an LLM isn't like the calculator, the microwave oven, or any technology that came before. (Like OP's article)
You generally don't need a lengthy explanation because it's common sense. When someone doesn't get it then people have to go into lengthy convoluted explanations because they are trying to elucidate common sense to someone who doesn't get it.
I mean how else do I elucidate it?
LLMs are different from any revolutionary technology that came before it. The first thing is we don't understand it. It's a black box. We understand the learning algorithm that trains the weights, but we don't understand conceptually how an LLM works. They are black boxes and we have limited control over them.
You are talking to a thing that understands what you say to it, yet we don't understand how this thing works. Nobody in the history of science has created anything similar. And yet we get geniuses like you who can use a simple analogy to reduce the creation of an LLM to something like the invention of the car and think there's utterly no difference.
There is a sort of inflection point here. It hasn't happened yet but a possible future is becoming more tangible. A future where technology surpasses humanity in intelligence. You are talking to something that is talking back and could surpass us.
I know the abundance of AI slop has made everyone numb to the events of the past couple of years. But we need to look past that. Something major has happened, something different from the achievements and milestones humanity has reached before.
rob_c · 8h ago
> You generally don't need a lengthy explanation because it's common sense. When someone doesn't get it then people have to go into lengthy convoluted explanations because they are trying to elucidate common sense to someone who doesn't get it.
Maybe you're new here friend...
intrasight · 6h ago
> The first thing is we don't understand it.
Perhaps you do not understand it, but many software engineers do understand.
Of the human brain we can still say that we don't understand it.
Attrecomet · 2h ago
>Perhaps you do not understand it, but many software engineers do understand.
No, they do not. LLMs are by nature a black box problem solving system. This is not true about all the other machines we have, which may be difficult to understand for specific or even most humans, but allow specialists to understand WHY something is happening. This question is unanswerable for an LLM, no matter how good you are at Python or the math behind neural networks.
ninetyninenine · 1h ago
Why do we keep getting people who say we understand LLMs?
Let me put it plainly. If we understood LLMs, we would understand why hallucinations happen, and we would subsequently be able to control and stop hallucinations from happening. But we can't. We can't control the LLM, because of that lack of understanding.
All the code is available on a computer for us to modify every single parameter. We have full access, and yet we can't control the LLM because we don't understand or KNOW what to do. This is despite the fact that we have absolute control over the value of every single atomic unit of an LLM.
zwnow · 9h ago
My guy, it's not only about the art, it's about killing people's passion and livelihoods. Your take is incredibly ignorant toward people who value human-created work. These things will kill industries. What jobs should the people whose income got cut by LLMs work in? Force them into blue-collar work?
worldsayshi · 9h ago
Your point is very valid. It is the luddite argument. And that is valid. But the problem is never the technology itself but, as you point out, the loss of livelihood and meaning and especially the shifts in power from the many to the few.
We need to learn to make technology truly benefit the many. Also in terms of power.
zwnow · 9h ago
Yes, fully agree. Can't believe we live in a timeline in which big tech companies steal data from the many, use this data to train models, sell this data, while also convincing technologically illiterate people that their propaganda machines (social media) are useful to them... Now they want to buy nuclear power plants too. I'm sure nothing will go wrong there.
worldsayshi · 6h ago
It's all downstream from the way the economy works, and the economy is (I think) downstream from the tools we use to coordinate effort. If we can evolve the way in which we handle resource allocation, trust and effort coordination, I emphatically believe there's at least some hope that we can create an alternative economy. Which it seems we urgently need as a civilization.
JW_00000 · 9h ago
But isn't that the same as saying: what about all the horse carriage drivers who lost their jobs due to cars? What about all the bank tellers we lost after inventing the automated teller machine?
zwnow · 9h ago
There is a difference between killing off passion work and killing off mundane work. We are also killing off empathy while we're at it.
dale_glass · 8h ago
I don't think there's a real difference. Thinking a job is "mundane" IMO is mostly a case of not working that job. Many "mundane" jobs have depth and rewards, even if not in every instance.
I've heard people express that they liked working in retail. By extension somebody must have enjoyed being a bank teller. After all, why not? You get to talk to a lot of people, and every time you solve some problem or answer a question and get thanked for it you get a little rush of endorphins.
Many jobs that suck only suck due to external factors like having a terrible boss or terrible customers, or having to enforce some terrible policy.
zwnow · 8h ago
This sounds like a strawman tbh. I have worked retail for years and I do not know a single person who enjoyed retail work. Especially not cashiers. I can understand what you are on about, but do you think this is the majority of people? The issue is being able to support yourself, which these mundane jobs hardly allow.
Personally I want these mundane things automated because I don't want to interact with people. I appreciate art though, and I want to support human art. I appreciate everything from ancient architecture and stone cutting to renaissance paintings to basement drawings of amateurs. Art used to have character and now it's all the same AI slop. Video games will become unplayable for me in the near future. Advertisements will be fully AI slop. Sure there are still artists out there, but they get overshadowed by AI slop.
dale_glass · 8h ago
I mean, retail has many different instances of it. Yes, I can imagine working in a busy supermarket owned by a giant like Walmart would be unpleasant.
But imagine working in a nice cafe in a quiet small town, or a business that's not too frantically paced, like a clothing store. Add some perks like not always doing the same job and a decent boss, and it shouldn't be too bad. Most any job can be drastically improved by decreasing the workload, cutting hours and adding some variety. I don't think being a cashier is inherently miserable. It's just the way we structure things most of the time makes it suck.
Just like you think a human touch makes art special, a human touch can make a mundane job special. A human serving coffee instead of a machine can tell you about what different options are available, recommend things, offer adjustments a machine might not, chat about stuff while it's brewing... You may end up patronizing a particular cafe because you like the person at the counter.
badpun · 8h ago
A lot of people who are passionate about creative fields work jobs that are pretty mundane, e.g. painting drab environmental textures every day for the next iteration of Call of Duty, or cutesy barfy crap for the next Candy Crush Saga. The jobs are very rarely aligned with their own taste and interests, plus they're terribly dull because, as a specialist, you're constantly working on only one specific kind of assignment.
AnonymousPlanet · 8h ago
Not exactly. It depends on how many professions go extinct at the same time. If you have ever lived in a place in economic decline because its professions moved abroad, and the new professions replacing the old ones just don't provide the scale, or only benefit a few in society, you know where things might be headed.
nostrebored · 8h ago
What do you think happened in the rise of industrial agriculture?
AnonymousPlanet · 7h ago
We're talking about places that even after decades haven't recovered. What do you think is happening there right now?
There's a common fallacy that tries to argue that it'll be alright over time, no matter what happens. Given enough time, you can also say that about atomic wars. But that won't help the generations that are affected.
sfn42 · 2h ago
If you live in a dead town with no opportunities then you either make your own opportunities or you move to a place with opportunities.
If you just sit on your hands complaining about the lack of opportunities then you won't get any sympathy from me. People aren't entitled to live wherever they want, humanity's entire thing is adaptability. So adapt. Life is what you make it.
AnonymousPlanet · 45m ago
When I say 'place' that includes entire countries. Adapting then depends on the kindness of strangers towards foreign refugees.
I wouldn't be surprised if at some point in the near future something like "Adapt. Life is what you make it" could be read in big bold letters above the entrance of a place like Alligator Alcatraz.
zwnow · 1h ago
Humans are entitled to live wherever they want. Capitalists destroying rural regions with false promises (prosperous jobs) has been a thing since industrialization. Should all people move to overrun big cities? Small, once-established markets are getting destroyed by big discounters or the likes of Amazon. Also, adapting isn't and never was a thing for most people. I don't know where you got that from, but this isn't the wild west anymore. People are trying to set up a life for themselves without moving every 2 years. Entitled city-person viewpoint.
sfn42 · 1h ago
There's plenty of rural areas with plenty of opportunities. Cities are not the only option. If I lived in a dead mining town I'd move elsewhere.
You can blame corporations or whatever you want, doesn't matter whose fault it is. Complaining and blaming doesn't solve anything. Finding solutions does. Stop complaining, start finding solutions.
I grew up in a beautiful rural place. I'd like to live there, but what I like even more is not having to drive for over an hour to work every day. So I moved. I also went to university in my late 20s and some of my peers were in their 40s and 50s.
People adapt to all kinds of stuff all the time. Saying adapting isn't a thing for most people is ridiculous. Of course it's a thing. It's what you do when your current situation isn't working. You adapt.
That said, yes, what about them? These are people with real skin in the game - people who spent years learning their craft expecting it would be their life-long career.
Do we simply exclaim "sucks to be you!"?
Do we tell out-of-work coal miners to switch to a career in programming with the promise it will be a lucrative career move? And when employment opportunities in software development collapse, then what?
All while we increasingly gate health care on being employed?
sfn42 · 2h ago
Yeah. If society no longer needs your job then you need to find something else to do. Doesn't have to be software, we mine other things than coal. We need builders, plumbers, electricians, lots of possibilities.
Software dev opportunities won't collapse any time soon, any half decent dev who's tried vibe coding will tell you that much. It's a tool developers can use, it's not a replacement.
girvo · 8h ago
No, this is far far more wide reaching and it’s intellectually dishonest to pretend otherwise.
It’s why it’s so exciting.
erwincoumans · 4h ago
Agreed. However, according to the author LLMs mostly produce crap, and he doesn't seem able to imagine (or want?) that they could improve beyond crap/hallucination and become very useful to many.
zwnow · 4h ago
Tell me, how is it going to be useful to the many? Even better marketing emails? More precisely targeted advertising? Even more automated job-application rejections? Mass firings due to AI replacing most office jobs?
What's the benefit of LLMs to the many who can barely operate a search engine?
I am sorry, but thinking this will benefit the many is delusional. Its main use is making rich people richer by saving expenses on people's wages. Tell me, how are these people going to get a job once their skills are made useless?
AlexeyBrin · 3h ago
I'm not an LLM enthusiast, but I can think of at least one example where these are useful to the many: decent/fast translation from one language to another. It won't be perfect, but it is usually good enough when you are visiting a foreign country for a few weeks and you have no time or interest in learning the language.
autumnstwilight · 9h ago
I learned Japanese by painstakingly translating interviews and blog posts from my favorite artist 15+ years ago, dictionary in hand. I also live and work in Japan now. Today I can click a button under the artist's tweets and get an instant translation that looks coherent (and often is, though it can also be quite wrong maybe 1/10 times).
In terms of the artist being accessible to overseas fans it's a great improvement, but I do wonder if I had grown up with this, would I have had any motivation to learn?
franciscop · 9h ago
I am learning Japanese (again) now and it's such a stark improvement vs when I first tried. When I don't understand something, LLMs explain it perfectly well, and with a bit of prompting they give me the right practice bits I need for my level.
For a specific example, when 2 grammar points seem to mean the same thing, teachers here in Japan would either not explain the difference, or give a confusing explanation in Japanese.
It's still private-ish/only for myself, but I generated all of this with LLMs and am using it to learn (I'm around N4~N3):
It's true though that you still need the motivation, but there are 2 sides of AI here and just wanted to give the other side.
jops · 9h ago
Exactly this. LLMs make learning faster and easier for those who _want_ to learn, but conversely make it harder for those who don’t.
andrepd · 5h ago
> When I don't understand something, LLMs explain it perfectly well
But my man, how do you know if it explains perfectly well or is just generating plausible-sounding slop? You're learning, so by definition you don't know!
franciscop · 5h ago
Because at the beginning I didn't trust it and verified things in many different ways. I've got a fairly decent understanding of what LLMs hallucinate regarding language learning and levels, and luckily/unluckily I'm far enough along for this to be a concern. e.g. I recently asked about the differences between 回答 vs 解答 and the answer was pretty good.
I also checked with some Japanese speakers, and my own notes contain more errors than the LLM's output by a large margin.
ted_bunny · 4h ago
GPT was dismally, consistently wrong about 101-level French grammar. Isn't that info all over the internet? Shouldn't that be an easy task?
Tijdreiziger · 2h ago
Yeah, my experience with AI and Japanese is quite the opposite. I used to use the Drops app for learning vocabulary, until they added genAI explanations, because the explanations were just wrong half the time! I had to uninstall the app!
Similarly, I used the Busuu app for a while. One of its features is that you can speak or type sentences, and ask native speakers to give feedback. But of course, you have to wait for the feedback (especially with time zone differences), so they added genAI feedback.
Like, what’s the point of this? It’s like that old joke: “We have purposely trained him wrong, as a joke”!
liendolucas · 6h ago
I absolutely agree with you, it is like having human beings constantly fed whatever they want into their minds for free, effortlessly, without knowing anything at all. Getting the whatevers ready for consumption. Perhaps this will lead to a new generation where everyone is the "expert novice".
It's killing the cumulative, progressive way of learning that rewards those who try and fail many times before getting it right.
The "learning" is effectively starting to be killed.
I just wonder what would happen to a person after many years of using "AI" who suddenly doesn't have access to it. My guess is that you become useless, with a highly diminished capacity to perform even the most basic things by yourself.
This is one of many reasons why I'm so against all the hype that's going on in the "AI" space.
I keep doing things the old school way because I fully comprehend the value of reading real books, trying, failing and repeating the process again and again. There's no other way to truly learn anything.
Does this generation understand the value of it? Will the next one?
maegul · 10h ago
Agreed!
The only silver lining I can see is that a new perspective may be forced on how well or badly we’ve facilitated learning, usability, generally navigating pain points and maybe even all the dusty presumptions around the education / vocational / professional-development pipeline.
Before, demand for employment/salary pushed people through. Now, if actual and reliable understanding, expertise and quality is desirable, maybe paying attention to how well the broader system cultivates and can harness these attributes can be of value.
Intuitively though, my feeling is that we’re in some cultural turbulence, likely of a truly historical magnitude, in which nothing can be taken for granted and some “battles” were likely lost long ago when we started down this modern-computing path.
bruce511 · 10h ago
To be fair, LLMs are just the most recent step in a long road of doing the same thing.
At any point of progress in history you can look backwards and forwards and the world is different.
Before tractors a man with an ox could plough x field in y time. After tractors he can plough much larger areas. The nature of farming changes. (Fewer people needed to farm more land.)
The car arrives, horses leave. Computers arrive, the typing pool goes away. Typing was a skill, now everyone does it and spell checkers hide imperfections.
So yeah LLMs make "drawing easier". Which means just that. Is that good or bad? Well I can't draw the old fashioned way so for me, good.
Cooking used to be hard. Today cooking is easy, and very accessible. More importantly good food (cooked at home or elsewhere) is accessible to a much higher % of the population. Preparing the evening meal no longer starts with "pluck 2 chickens" and grinding a kilo of dried corn.
So yeah, LLMs are here. And yes things will change. Some old jobs will become obsolete. Some new ones will appear. This is normal, it's been happening forever.
thankyoufriend · 9h ago
The difference between GenAI and your examples is a theft component.
They stole our data - your data - and used it to build a machine that diverts wealth to the rich. The only equitable way for GenAI to move forward is if we all own a share of it, since it would not exist in its current form without our data. GenAI should be a Universal Basic Asset.
CuriouslyC · 2h ago
There isn't any more theft in this than in artists copying the styles and techniques of popular artists to improve their craft.
This is 100% just the mechanization of a cultural refinement process that has been going on since the dawn of civilization.
I agree with you regarding how the bounty of GenAI is distributed. The value of these GenAI systems is derived far more from the culture they consume than the craft involved in building them. The problem isn't theft of data, but a capitalist culture that normalizes distribution of benefit in society towards those that are already well off. If the income of those billionaires and the profits of their corporations were more equitably taxed, it would solve a larger class of problems, of which this problem is an instance.
bruce511 · 9h ago
I appreciate the idealism but your argument has some flaws.
Firstly the "theft component" isn't exactly new. There have always been rich and poor.
Secondly everyone is standing on the shoulders of giants. The Beatles were influenced by the works of others. Paul and John learned to write by mimicking other writers.
That code you write is the pinnacle of endless work done by others. By Ada Lovelace, and Charles Babbage, and Alan Turing and Brian Kernighan and Dennis Ritchie and Doug Engelbart and thousands and thousands more.
By your logic the entire output of all industries for all foreseeable generations should be universally owned. [1]
But that's not the direction we have built society on. Rather society has evolved in the US to reward those who create value out of the common space. The oil in Texas doesn't belong to all Texans, it doesn't belong to the pump maker, it belongs to the company that pumps the oil.
Equally there's no such thing as 'your data'. It's your choice to publish or not. Information cannot be 'owned'. Works can be copyrighted, but frankly you have a bigger argument on that front going after Google (and Google Books, not to mention the Internet Archive) than AI. AI may train on data you produced, but it does not copy it.
[1] I'm actually for a basic income model, we don't need everyone working all day like it's 1900 anymore. That means more taxes on companies and the ultra wealthy. Apparently voters disagree as they continue to vote for people who prefer the opposite.
sirwhinesalot · 9h ago
I think your last point is very reductionist. Nearly every country ends up in a voting situation where only 2 parties can realistically win. A diverse parliament results in paralysis and the fall of the government (it happened in my home country multiple times).
The two parties that end up viable tend to be financed quite heavily by said wealthy, including being propped up by the media said wealthy control.
The more right-wing side will promise tax cuts (also for the poor, which don't seem to materialize) while the more left-wing side will promise to tax the rich (but in an easily dodgeable way that only ends up affecting the middle class).
Many people understand this and it is barely part of the consideration in their vote. The last election in the US was a social battle, not really an economic one. And I think the wealthy backers wanted it that way.
bruce511 · 6h ago
Im not sure why you are being downvoted. You make a reasonable argument.
I would contest some of your points though.
Firstly, not every country votes, not all that vote have 2 viable parties, so that's a flaw in your argument.
Equally most elections produce a winner. That winner can, and does, get stuff done. The US is paralyzed because it takes 60% to win the senate, which hasn't happened for a while. So US elections are set up so "no one wins". Which of course leads to overreach etc that we're seeing currently.
There's a danger when living inside a system that you assume everywhere else is the same. There's a danger when you live in a system that heavily propagandizes its own superiority, that you start to feel like everywhere else is worse.
If we are the best, and this system is the best, and it's terrible, then clearly all hope is lost.
But what if, maybe, just maybe, all those things you absolutely, positively, know to be true, are not true? Is that even worth thinking about?
sirwhinesalot · 5h ago
Just to be clear, I'm not a US citizen.
But I know people whose preference would be something like Ron Paul > Bernie Sanders > Trump > Kamala, which might sound utterly bizarre until you realize that there are multiple factors at play and "we want tax cuts for the rich" is not one of them.
bruce511 · 2h ago
When you vote for a guy who plans to raise prices, when you vote for a guy who already tried to remove Healthcare, when you vote for a guy who gives tax breaks to the rich, when you vote for a guy who is a grifter, then don't complain when you get what you voted for.
People are welcome to whatever preference they like. Democracy lets them choose. But US democracy is deliberately designed to prefer the "no one wins" scenario. That's not the democracy most of the world uses.
ako · 9h ago
The scary thing for most people is that AI isn't better tools, but outsourced work. In the past we created our own products; now other countries do this. In the past we did our own thinking and creative activities; now LLMs will.
If we don't have something better to do we'll all be at home doing nothing. We all need jobs to afford living, and already today many have bullshit jobs. Are we heading toward a world where 99.9% of the people need a bullshit job just to survive?
bruce511 · 9h ago
Personally I think your basic premise is false, hence your conclusion is false.
>> We all need jobs to afford living
In many countries this is already not true. There is already enough wealth that there is enough for everyone.
Yes, the western mindset is kinda "you don't work, you don't get paid". The idea that people can "free load" on the system is offensive at a really deep emotional level. If I suggest that a third of the people can work, and the other 2 thirds do nothing, but get supported, most will get distressed [1]. The very essence of US society is that we are defined by our work.
And yet if 2% of the work force is in agriculture, and produces enough food for all, why is hunger a thing?
As jobs become ever more productive, perhaps just -considering- a world where worth is not linked to output is a useful thought exercise.
No country has figured this out perfectly yet. Norway is pretty close. Lots of Europe has endless unemployment benefits. Yes, there's still progress to be made there.
[1] of course, even in the US, already it's OK for only a 3rd to work. Children don't work. Neither do retirees. Both essentially live off the labor of those in-between. But imagine if we keep raising the start-working age, while reducing retirement benefits age....
ako · 7h ago
Sounds great in theory, but doesn't seem very realistic. There will always be people that want power over other people, and having more than others will give them that power.
And universally, if you have nothing, you lead a very poor life. You live in minimal housing (a trailer park, slums, or housing without running water or working sewage). You don't have a car, you can't travel, education opportunities are limited.
Most kids want to become independent, so they have control over their spending and power over their own lives. Poor retirees are unhappy, sometimes even have to keep working to afford living.
Norway is close because they have oil to sell, but if no one can afford to buy oil, and they can't afford to buy cars, nor products made with oil, Norway will soon run out of money.
You can wonder why Russia is attacking Ukraine; Russia has enough land, it doesn't need more. But in the end there will always be people motivated by more power and money, which makes it impossible to create the communism 2.0 that you're describing.
bruce511 · 7h ago
You have equated a basic income with equality. That's a misunderstanding.
I'm not suggesting equality or communism. I'm suggesting a bottom threshold where you get enough even if you don't work.
Actually Norway gets most of that from investments, not oil. They did sell oil, but invested that income into other things. The sovereign wealth fund now pays out to all citizens in a sustainable way.
Equally, your understanding of dole living in Europe is incomplete. A person on the dole in the UK is perfectly able to live in a house with running water etc. I know people who are.
Creating a base does not mean "no one works". Lots of people in Europe have a job despite unemployment money. And yes, most if not all jobs pay better than unemployment. And yes, lifestyles are not equal. It's not really communism (as you understand it).
This is not about equal power or equal wealth. It's about the idea that a job should not be linked to survival.
Why is 60 the retirement age? Why not 25? That sounds like a daft question, but understanding it can help you see how some things that seem cast in stone really aren't.
ako · 6h ago
I live in Europe, so I understand some of it; part of my family comes from Eastern Europe, so I have also seen that form of communism in the past.
Living on welfare in the Netherlands is not a good life, and definitely not something we should accept for the majority of the people.
Being retired on only a state pension is a bad life; you need to save for retirement to have a good one. And saving takes time, which is why you can't retire at 25.
bruce511 · 2h ago
I am not saying that this reality exists.
I'm saying that the blind acceptance of the status quo does not allow for that status to be questioned.
You see the welfare amounts, or retirement amounts as limited. Well then, what would it take to increase them? How could a society increase productivity such that more could be produced in less time?
Are some of our mindsets preventing us from seeing alternatives?
Given that society has reinvented itself many times through history, are more reinventions possible?
ako · 45m ago
I hope you're right, but considering human nature, it's not something I would bet my money on. It's not how humans are wired.
ako · 9h ago
Agreed, it'll be a big problem if we don't keep our skills and rely on AI too much. Same with outsourcing manufacturing: at some point you lose the skill to produce products entirely and become dependent on other countries.
With the WWW we thought everyone having access to all information would enlighten them, but without knowledge people do not recognize the right information, and are more likely to trust (mis)information that they think they understand.
What if LLMs give us all the answers that we need to solve all problems, but we are too uninformed and unskilled to recognize these answers? People will turn away from AI, and return to information that they can understand and trust, even if it's false.
Anyway, nothing new actually, we've seen this with science for some time now. It's too advanced for most people to understand and validate, so people distrust it and turn to other sources of information.
uh_uh · 9h ago
What other sources of information will people turn to? Kids are growing up asking ChatGPT in school. I just can't see a mass exodus happening.
ako · 7h ago
Misinformation, lies and populism. See for example the discussions around vaccines, where people no longer bother to understand the science; climate change; or religion, where people randomly choose 1 out of 3000 available gods and then pretend their choice is the only correct one.
PeterStuer · 8h ago
The first time I had the "beginner" reflex was when I got an always-on computer with an editor and storage.
Before that, I had a TI-99/4A at home with no tape drive and the family TV as a display. I was mainly into creating games for my friends. I did all my programming on paper, as "screen time" had to be maximized for actually playing the games after typing them in from the paper notebook. Believe it or not, bugs were very rare.
Much later at uni there were computer rooms with Macs with a floppy drive. You could actually just program at the keyboard, and the IDE even had a debugger!
I remember observing my fellow students endlessly type-run-bug-repeat until it "worked" and thinking "these guys never learned to reason through their program before running it. This is just trial and error. Beginners should start on paper".
Fortunately I immediately caught myself and thought, no, this is genuine progress. Those that "abuse" it would more than likely not have programmed 'ye old way' anyways, and some others will genuinely become very good regardless.
A second thing: in the early home computer year(s) you had to program. The computer just booted into the (most often BASIC) prompt, and there was no network or packaged software. So anyone that got a computer programmed.
Pretty soon, with systems like the Vic-20, C64 and ZX Spectrum there was a huge market in off the shelf game cassettes. These systems became hugely popular because they allowed anyone to play games at home without learning to program. So only those that liked programming did. Did that lose beginner programmers? Maybe some, for sure.
CuriouslyC · 2h ago
The transformation is to aesthetic awareness over raw technical facility, and to "freshness" over skillful adherence to norms.
The best artists will spot holes in the culture, and present them to us in a way that's expertly composed, artful and meticulously polished. The tools will let them do it faster, and to reach a higher peak of polish than in the past, but the artfulness will still be the artist's.
Futuristic tools aren't replacing art, they're creating a substrate for a higher order of art. Collages are art, and at its most crude, this higher order art reduces to digital collages of high quality generated assets with human intention. With futuristic tools, art becomes reductive rather than constructive. To quote Michelangelo's response to how he made David: "It is simple, I just removed everything that wasn't David"
Cthulhu_ · 8h ago
> When you're a beginner, it's totally normal to not really want to put in the hard work. You try drawing a picture, and it sucks. You try playing the guitar, and you can't even get simple notes right. Of course a machine where you can just say "a picture in the style of Pokémon, but of my cat" and get a perfect result out is much more tempting to a 12 year old kid than the prospect of having to grind for 5 years before being kind of good.
Fair point; I think this feeling is exacerbated by all the social media being full of people looking like they're good at what they do already, but it rarely shows the years of work they put in beforehand. But that's not new, compare with athletes, famous people, fictional characters, etc. There's just more of it and it's on a constant feed.
It does feel like people will just stop trying though. And when there's a shortcut in the form of an LLM, that's easy. I've used ChatGPT to write silly stories or poems a few times; I look at it and think "you know, if I were to sit down with it proper I could've written that myself". But that'd be a time and effort investment, and for a quick gag that will be pushed down the Discord chat within a few minutes anyway, it's not worth it.
worldsayshi · 9h ago
> I shudder to think where we'll be if the corporate-media machine keeps hammering the message "you don't have to bother learning how to draw, drawing is hard, just get ChatGPT to draw pictures for you" to young people for years to come.
This should be comparable to how many fewer people in the west today know how to work a farm or build machinery. Each technological shift comes at a cost in population competence.
I do have a feeling that this time it could be different. Because this shift has this meta-quality to it. It has never been easier to acquire, at least theoretical, knowledge. But the incentives for learning are shifting in strange directions.
pjc50 · 9h ago
More fundamental question: if everyone can generate an album in an afternoon, why would anyone else listen to any of those? It turns into dust in the long tail.
dale_glass · 8h ago
Anyone can write a comment here in less than a minute. Why should anyone read it?
IMO, because it's good in one way or another. I'm not reading your writing because I imagine you toiled over every word of it, but simply because I started reading and it seemed worthwhile to read the rest.
Attrecomet · 1h ago
What's implied in the previous comment is that reading a comment takes a few seconds, while listening to an album, or even really enjoying it, takes a higher investment.
Or, to use a different metaphor, these comments are mentally nutritional Doritos, not a nicely prepared restaurant meal. If your restaurant only serves Dorito-level food, I won't go there even if I do consume chips quite often at home.
dvaun · 9h ago
All we are is dust in the wind.
ninetyninenine · 9h ago
The things that were revolutionary in the past all eventually become common place and boring. It's happened to almost everything and continues to happen to anything new that comes out.
LLMs will accelerate the pace of this assimilation. New trends and new things will become popular and generic so fast that we'll have to get really inventive to stay ahead of the curve.
this15testingg · 8h ago
ahead of what curve? intrinsically human endeavors are drowned in noise. what is the point? if even drawing/writing/singing are not worth doing anymore both because effort and the experience itself is worthless, I might as well step in front of a tesla taxi so I can escape this world. human ingenuity is amazing, but this whole mess is embarrassing
ramon156 · 8h ago
My friend actually went "yeah, well, I don't like it enough to program as a hobby in my free time; otherwise I might lose the enjoyment".
Two years later he thought of a project he really wanted to make. He didn't succeed, but it's very clear he changed his mind.
pier25 · 4h ago
Something rather new in the history of civilization is the gamification of everything. It has created the expectation of receiving (fake) constant results for very little effort.
guicen · 5h ago
Maybe the point isn’t whether LLMs replace skills, but whether they help more people reach those skills. Lifting the floor is not the same as lowering the ceiling.
lloeki · 5h ago
> You try drawing a picture, and it sucks. You try playing the guitar, and you can't even get simple notes right.
> up until now, you had no choice and to keep making crappy pictures and playing crappy songs until you actually start to develop a taste for the effort, and a few years later you find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.
Only putting in the work is going to get anyone anywhere. And yes, it takes _time_, like, tons, and there's no shortcut.
And I can explain in excruciating detail how to do an ollie or a kickflip even and from a physics point of view you would totally get it but to land the damn thing you simply have to put a shitload of time on the board and fail over and over and over again.
We come from a place where we've been trained as engineers or whatever to do this or that and - somewhat - critically think about things. Instead picture yourself in the shoes of a beginner: how would you, a beginner who has not built their own mental model of discipline $foo, even begin to be critical of AI output?
But we're being advertised magic powder and sweating overalls and whathaveyou that make you lose weight a) instantly† and b) without going to the gym and, well, putting in the effort††.
LLMs are the speed diet of the mind.
† comparatively
†† not that putting in any arbitrary amount of effort is guaranteed to get you places; there _is_ such a thing as wasteful effort; but NOT putting in the effort is a solid guarantee that you won't get anywhere.
ninetyninenine · 9h ago
age-ism will disappear.
safety1st · 6h ago
I don't DISAGREE with anything he said, particularly.
But personally, I don't feel as upset over all this as he does. It seems that all my tech curmudgeonliness over the years is paying off these days, in spades.
Humbly, I suggest that he and many others simply need to disconnect more from The Current Thing or The Popular Thing.
Let's look at what he complains about:
* Bitcoin. Massive hype, never went anywhere. He's totally right. That's why I never used it and barely even read about it. I have no complaints because I don't care. I told myself I'd care if someone ever built something useful with Bitcoin. 10 years later they haven't. I'm going back to bed.
* Windows. Man I'm glad I dodged that bullet and became a Linux user almost 15 years ago. Just do it. Stuff will irk you either way but Linux irks don't make you feel like your dignity as a human being is being violated. Again, he's right that Windows sucks; I just don't have to care, because I walked away.
* Bluesky, Twitter, various dumb things being said on social media. Those bother him too. Fortunately, these products are optional. I haven't logged into my Twitter account for three years. I'll certainly never create a Bluesky one. On some of my devices I straight up block many of these crapo social sites like Reddit etc. in /etc/hosts. I follow some RSS feeds of a few blogs, one of the local timeline for a Mastodon instance. Takes ten minutes and then I go READ BOOKS in my spare time. That's it. He is yet again right, social media sucks, it's the place where you hear about all this dumb stuff like Bitcoin; I just am not reading it.
I'm not trying to toot my own horn here; it's just that when you disconnect from all the trash, you never look back, and the frustrations of people who haven't seem a little silly. You can just turn all of this stuff off. Why don't you? Is it an addiction? Treat it like one if so. I used to spend 6 hours a day on my phone and now it's 1 hour, mainly at lunch, because the rest of the time it's on silent, in a bag, or turned off, just like a meth addict trying to quit shouldn't leave meth lying around.
Listen to Stallman. Listen to Doctorow. These guys are right. They were always right. The free alternatives that respect you exist. Just make the leap and use them.
chrismorgan · 8h ago
> Like, just to calibrate here: you know how some code editors will automatically fill in a right bracket or quote when you type a left one? You type " and the result is "|"? Yeah, that drives me up the wall. It saves no time whatsoever, and it’s wrong often enough that I waste time having to correct for it.
I have not yet figured out why anyone would choose this behaviour in a text editor. You have to press something to exit the delimited region anyway, whether that be an arrow key or the closing delimiter, so just… why did the first person even invent the idea, which just complicates things and also makes it harder to model the editor’s behaviour mentally? Were they a hunt-and-peck typist or something?
In theory, it helps keep your source valid syntax more of the time, which may help with syntax highlighting (especially of strings) and LSP/similar tooling. But it’s only more of the time: your source will still be invalid frequently, including when it gets things wrong and you have to relocate a delimiter. In practice, I don’t think it’s useful on that ground.
tehnub · 8h ago
Pair programming with coworkers over the years, I've seen that many have trouble with the keyboard, to the point where pressing the right parenthesis is a significant burden; they don't use the right or down arrow to get out of the span, but actually move their hand to the mouse and click out.
thasso · 7h ago
I'm shocked every time I go to City Hall and wait while the clerk types my name letter by letter with two fingers. Doesn't he do that every day?! How has it never occurred to him, or anyone else, that maybe, just maybe, they would benefit from a typing course? It's just one example of a pattern I've noticed with a lot of office workers.
jjcob · 5h ago
Maybe it's not that important? Taking 5 seconds vs 2 seconds to type a name is probably not that much of a difference, especially when most of the time you're typing stuff you have to ask how to spell anyway?
chrismorgan · 4h ago
It’s not five seconds versus two; it’s fifteen, and with more mistakes remaining at the end (which will sometime waste hours down the line). Because your inferior typist can’t keep up with the phone number being told them, and lose their place in the digits; and are having to concentrate so much on their typing that they don’t correctly interpret what you say to them; and so on. It’s a compounding effect.
Poor typists always slow down processes, and frequently become a bottleneck, local or global. If you can speed up a process by only ten seconds per Thing, by improving someone’s typing skills or by fixing bad UI and workflow, you only have to process 360 Things in a day (which is about one minute per Thing) to have saved an entire hour.
It can be very eye-opening to watch a skilled typist experienced with a software system that was designed for speed, working. In more extreme cases, it can be that one person can do the work of ten. In more human-facing things, it can still be at least a 50% boost, so that two skilled people can beneficially replace three mediocre.
Cthulhu_ · 8h ago
I've said it in another comment (might be here or Reddit, I don't even know anymore), and it feels like basic skills are just overlooked or taken for granted these days: computer use, mouse/keyboard/typing skills, reading comprehension, writing ability, communication skills, etc.
I'm nowhere near a hiring position, but if I were, I'd add assessing those to the application procedure.
It feels like this is part of a set of growing issues, with millennials being the only generation in between gen X / boomers and gen Z that have computer skills and can do things like manage files or read a whole paragraph of text without a computer generated voice + RSVP [0] + Subway Surfers gameplay in the background.
But it was also millennials that identified their own quickly diminishing attention spans, during the rise of Twitter, Youtube, Netflix and the like [1].
I want to believe all of this is giving me some job security at least.
If you have to use multiple keyboards, arrows, End, Home etc. tend to be in different positions on each. That's almost no better than using a mouse.
That's where old-school vi / emacs shine. Ctrl? Always in the same area, so ctrl-f to go forward is the same gesture whatever brand of laptop I have to work on.
BlindEyeHalo · 8h ago
I think it is practical when highlighting text and then pressing " once puts quotes at the start and the end of the highlighted region.
But I agree that in normal input it is often annoying.
matejn · 8h ago
I hate that even more, especially since Visual Studio introduced it. I had the habit of selecting some text and then typing to replace it. Now, when my replacement starts with a parenthesis or quote, the text just gets surrounded instead!
I think the wrapping feature is useful enough, and the workaround of typing a -> backspace -> " is easy enough, that it's a net win.
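To make the conflict concrete, here is a minimal sketch of the two behaviours fighting over the same keypress; the names and structure are purely illustrative, not any real editor's code:

    # Hypothetical sketch: surround-on-type vs replace-on-type.
    PAIRS = {'"': '"', "'": "'", '(': ')', '[': ']', '{': '}'}

    def type_key(text: str, sel_start: int, sel_end: int, key: str) -> str:
        if sel_start != sel_end and key in PAIRS:
            # Surround: wrap the selection in the matching pair.
            return (text[:sel_start] + key + text[sel_start:sel_end]
                    + PAIRS[key] + text[sel_end:])
        # Replace: the selection is deleted and the key typed over it,
        # which is what you want when retyping selected text.
        return text[:sel_start] + key + text[sel_end:]

With the first branch enabled, selecting foo and pressing " gives "foo"; without it, you get a lone " where foo used to be. Which one is "correct" is exactly what's being disagreed about here.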
matsemann · 8h ago
> and it’s wrong often enough
How is it ever wrong, though? If I insert a (, and then a {, and the editor appends the closers so that it's ({}), that's always correct, no? Can it ever not be?
Maybe it's because { is a bit awkward on a Norwegian keyboard, but I like it. Even if we're 5 levels deep with useEffect(() => {(({[{[ I can just press ctrl+shift+enter and it magically finishes everything and puts my caret in the correct place, instead of me trying to write ]}]})) in the correct order.
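What that magic key has to compute is essentially the stack of unmatched openers. A rough sketch in Python (illustrative only, not what any particular editor actually runs):

    # Scan the line, track unmatched openers on a stack, then emit
    # the matching closers in reverse (innermost-first) order.
    def closers_for(line: str) -> str:
        pairs = {'(': ')', '[': ']', '{': '}'}
        stack = []
        for ch in line:
            if ch in pairs:
                stack.append(pairs[ch])
            elif stack and ch == stack[-1]:
                stack.pop()
        return ''.join(reversed(stack))

    print(closers_for('useEffect(() => {(({[{['))  # -> ]}]}))})

The stack is why the closers must come out in exactly one order, and why doing it by hand five levels deep is so error-prone.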
Kon5ole · 7h ago
>How is it ever wrong, though?
Whenever you edit something existing that already has the ), ] or } further down, and you end up with ()), []] or {}}. Or when you select some text that you want to replace and start with a quote, only to end up with "the text you wanted to replace" instead of the expected ".
I never notice when it works but get annoyed every time it doesn't, so I feel like it never works and always sucks.
I guess it's muscle memory and some people are used to it, but it feels fundamentally wrong to me to have the editor do different basic editing things based on which character is being pressed.
jltsiren · 7h ago
Most editors are not smart enough to do it consistently right. For example, VS Code often inserts extra quotes when I try to break "long string" into "long " + something() + " string". And when I try to write a half-open interval [a, b) in a comment or within a string, the editor inserts an extra ].
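For what it's worth, that failure mode falls straight out of the naive rule itself. A rough sketch (not VS Code's actual implementation) in Python:

    # Naive auto-pair: every typed opener unconditionally gets a closer.
    def autopair_insert(text: str, pos: int, key: str) -> tuple[str, int]:
        pairs = {'(': ')', '[': ']', '{': '}', '"': '"'}
        if key in pairs:
            # Insert the pair; the caret lands between the two characters.
            return text[:pos] + key + pairs[key] + text[pos:], pos + 1
        return text[:pos] + key + text[pos:], pos + 1

    line = 'x = "long string"'
    # Break the string after "long": type a quote at that position...
    line, pos = autopair_insert(line, 9, '"')
    print(line)  # x = "long"" string"  <- one quote too many

The editor has no way to know that the quote you typed was meant to close the existing string rather than open a new one, so it guesses, and in this editing pattern the guess is reliably wrong.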
Attrecomet · 1h ago
> can just press ctrl+shift+enter and it just magically finishes up everything and put my caret at the correct place, instead of me trying to write ]}]})) in the correct order.
I think you're talking about a different thing here: completion of already-opened parentheses/quotes/whatever with content in between, not the pre-insertion of paired braces or quotation marks, as the author described, no?
rcxdude · 7h ago
It certainly can, because it doesn't necessarily know where the closing brace should be, especially when inserting into existing code as opposed to writing a completely new line. I'm often deleting random crap in the editors I use with the 'feature' as I'm adding delimiters. '"' tends to be even worse, because the distinction between opening and closing is not obvious.
e.g.:
(a + b > c) -> I type ( hoping for ((a + b > c) -> but auto-pairing gives (()a + b > c) -> no, I was aiming for ((a + b) > c)
(it sounds like you're talking about a different feature/implementation, though, since in the annoying case there's no 'completion' shortcut; it just appears)
8n4vidtmkvmk · 7h ago
I hated this feature until I realized I could just type the closing quote anyway and it wouldn't double up. It doesn't seem to bother me now that I'm used to it. Once in a while my editor tries to get too clever and messes things up, but not often.
whoisyc · 6h ago
That’s what GP said about “harder to model the editor’s behaviour mentally” though. In a dumb editor you type a quote and you get a quote, in a “smart” editor whether or not you get two quotes, one quote, or no quote at all is context dependent and more confusing.
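The type-over behaviour being described is roughly the following rule (again a sketch under assumed semantics, not any particular editor's code):

    # Type-over: pressing a closer that is already the next character
    # moves the caret past it instead of inserting a second one.
    def type_closer(text: str, pos: int, key: str) -> tuple[str, int]:
        if key in ')]}"\'' and pos < len(text) and text[pos] == key:
            return text, pos + 1  # skip over the existing closer
        return text[:pos] + key + text[pos:], pos + 1  # plain insert

The context-dependence is right there in the condition: the same keypress either inserts a character or silently doesn't, depending on what happens to sit under the caret. That's the mental-model cost being objected to.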
kqr · 3h ago
I've never noticed that some editors do this, because I always type both opening and closing symbols at the same time and then back up into them if I want to fill them out. I think I learned it from my father, and, just anecdotally, I make unbalanced-symbol mistakes nowhere near as often as others.
feelamee · 4h ago
> You have to press something to exit the delimited region anyway, whether that be an arrow key or the closing delimiter, so just…
Hah, the fun thing about this is that I press exactly the matching symbol ( } for { , etc.) to exit the delimited region, and VS even understands what I want! Incredibly useless thing.
rasur · 8h ago
Emacs user here, and the whole "electric-mode" stuff (for matching parens or other balanced pairs of things) I find really quite useful. And closing a pair is usually something like shift+enter, which is quite simple (but also - at least in Emacs - generally completely configurable). I think the benefits outweigh the pitfalls, personally.
Can't speak to other editors though.. I don't want to sound like I'm trolling, but they generally feel quite clunky, compared to Emacs (ducks, runs ;p )
bbarnett · 7h ago
There's nothing wrong with emacs. Vim and emacs are just targeting different segments of humanity, that's all. Vim is clean, concise, slim, whereas emacs is more bulky, cluttered, stifling.
It's just matching, and reflecting the way different humans think, and reason, that's all.
(yes, said in jest)
sonofhans · 8h ago
Preach it. I’d rather hit right bracket than right arrow.
arkh · 4h ago
Maybe the origin is the fact that not everyone uses a QWERTY layout.
On a French keyboard, ~#{[|\\^@]} all require the "Alt Gr" modifier key, which is usually just right of the space bar, so totally outside the realm of Shift, Caps Lock, Ctrl or Alt.
_thisdot · 8h ago
I have this turned on in my code editors and Obsidian. The main advantage is reducing the cognitive load. You don’t have to double-check whether you remembered to close your string, bracket, or parenthesis — it’s just there.
Cthulhu_ · 8h ago
You don't have to anyway, a syntax error will show up on your screen pretty much immediately.
_thisdot · 7h ago
Which I'd like to avoid as early as possible
elric · 8h ago
The cognitive load of typing two quotes? Golly. That term is starting to take on a "whatever" meaning, apparently.
_thisdot · 7h ago
The cognitive load of keeping track of all the open delimiters.
In my experience, every time a delimiter is opened, it automatically closes, allowing you to move on without thinking about it.
Even in places where this is not available (Slack, comment boxes, etc.), I close the delimiter as soon as I open it
cess11 · 8h ago
My REPL-style interfaces don't have it while my editors do, I don't feel either is particularly special and there are little pros and cons with both.
atemerev · 8h ago
Because otherwise it would break syntax highlighting until you finish writing the string. And no, sorry, I like my syntax highlighting, I won't turn it off.
This feature is useful for me. So are LLMs. If someone doesn't want to use this or that, they are not obliged to. But don't tell me that features that I find useful "suck".
chmod775 · 8h ago
> Because otherwise it would break syntax highlighting until you finish writing the string.
You can always insert the second " as a ghost(?) character to keep syntax highlighting working. But it's not like any modern language server really struggles with this anyways.
willvarfar · 8h ago
Sad this is downvoted, because syntax highlighting is a very plausible explanation for this editor behaviour.
You can perhaps imagine an editor that only inserts the delimiter if you type the start-string symbol in the middle of a line.
dalemhurley · 8h ago
> "live in some futuristic utopia like the EU where banks consider "send money to people" to be core functionality. But here in the good ol' U S of A, where material progress requires significant amounts of kicking and screaming, you had PayPal."
I remember when PayPal came to Australia, I was so confused by it as I could just send money via internet banking. Then they tried to lobby the government to make our banking system worse so they could compete, much like Uber.
layer8 · 42m ago
In the EU, the value proposition of PayPal is (1) that it is more or less instantaneous instead of taking one or two days like a regular bank transfer, and (2) that it doesn't disclose your banking info to the other party. The first one is now finally being obsoleted by SEPA instant payments.
robin_reala · 8h ago
In the EU PayPal caved and officially got a banking licence from Luxembourg.
zpeti · 8h ago
I don't get this sentence. It's pretty damn hard sending money in the EU too. We only had SWIFT and CHAPS, like in the USA. The EU isn't some banking haven with ultrafast transfers. If they're talking about the new legislation on fast transfers (SEPA Instant), that came a decade after PayPal.
quonn · 8h ago
> pretty damn hard sending money in the EU too
You literally enter an IBAN and the transfer will appear in the other account the next day. And if you need the money in the target account immediately (within 10 seconds), you can do that too by checking a checkbox for a small fee, and that fee will drop to ZERO across the EU in October 2025.
Attrecomet · 57m ago
Even before SEPA, we didn't use checks in the EU -- or at least in Germany -- because bank transfers were a thing and just worked.
elric · 8h ago
What do you mean? Europe has had SEPA payments pretty much since the Euro came out. And most of Europe had functional bank transfers using online banking (including international ones) long before the Euro was a thing.
Edit: Do you mean that the speed of the transfers was the problem?
qsort · 8h ago
SEPA Instant (SCT Inst) transfers up to 15,000 EUR take 10 seconds, literally.
elric · 8h ago
I'm aware, but I think zpeti is saying that this is a recent thing, whereas fast PayPal payments were already around ages ago.
zpeti · 8h ago
SEPA came 10 years after PayPal.
adastra22 · 7h ago
The US doesn't have SWIFT btw.
dofubej · 8h ago
We currently (as in, for the last few months) have instant transfers, but for the longest time we didn't, and had to use PayPal if we wanted to send somebody money instantly without paying the bank extra for it. I'm confused as to what the article means. It's possible the author is misinformed.
quonn · 8h ago
Instant transfers have been available for many years. They were not free, but most banks supported them.
lotsofpulp · 3h ago
In the US, I have been sending money electronically for free, and instantly, to my friends and family for almost 15 years.
Zelle, previously known as clearXchange, and whatever else; if you had an account at one of the bigger banks, it has long been trivial to send money to each other.
> In April 2011, the clearXchange service was launched. It was originally owned and operated by Bank of America, JPMorgan Chase, and Wells Fargo.[6][7] The service offered person-to-person (P2P), business-to-consumer (B2C), and government-to-consumer (G2C) payments.[8]
icameron · 10h ago
Love this writing. One paragraph hit very close to home. I used to be the guy who could figure out obscure scripts by google-fu and RTFM and willpower. Now that skill has been completely obliterated by LLMs and everyone's doing it, except it's mostly whatever.
> I don’t want to help someone who opens with “I don’t know how to do this so I asked ChatGPT and it gave me these 200 lines but it doesn’t work”.
N_Lens · 10h ago
I use LLMs for coding everyday and agree with most of the article, even if it does attack me as an "indignant HackerNews mudpie commenter".
In the same vein, I've actually worked on crypto projects in both DeFi and NFT spaces, and agree with the "money for criminals" joke assessment of crypto, even if the technology is quite fascinating.
Shorel · 9h ago
I am still the guy doing google-fu and rtfm.
The skill has not been obliterated. We still need to fix the slop written by the LLMs, but it is not that bad.
Some people copy and paste snippets of code without knowing what it does, and in a sense, they spread technical debt around.
LLMs lower the technical debt spread by the clueless, to a lower baseline.
The issue I see is that code with this level of technical debt is now created at a much faster pace.
sunrunner · 8h ago
I always imagine that there's essentially a "knowledge debt" in almost any development today, unless you're operating at the lowest level (or you understand it all the way down, and there's almost always a level below).
The copy-paste of usable code snippets is somewhat comparable to any use of a library or framework in the sense that there's an element of not understanding what the entire thing is doing or at least how, and so every time this is done it adds to the knowledge debt, a borrowing of time, energy and understanding needed to come up with the thing being used.
By itself this isn't a problem, and realistically it's impossible to avoid; in a lot of cases you may never get to the point where you have to pay it back. But there's also a limit on the rate of debt accumulation, which is how fast you can pull in libraries, code snippets and other abstractions, and as you said, LLMs' ability to produce text at a superhuman rate potentially serves to _rapidly_ increase the rate of knowledge-debt accumulation.
If debt as an economic force is seen as something that can stimulate short-term growth then there must be an equivalent for knowledge debt, a short-term increase in the ability of a person to create a _thing_ while trading off the long-term understanding of it.
Shorel · 8h ago
That's where documentation matters.
Take this snippet of code, and this is what each part means, and how you can change it.
It doesn't explain how it is implemented, but it explains the syntax and the semantics of it, and that's enough.
Good documentation makes all the difference, at least for me.
8n4vidtmkvmk · 7h ago
I frequently have to tell the LLM to RTFM because it's wrong. But I can usually paste in the manual, which saves me some reading. It's scary, because when it's wrong and you don't happen to know better... then your code or whatever is just a little worse.
darkwater · 8h ago
> LLMs lower the technical debt spread by the clueless, to a lower baseline.
I'm SO stealing this!! <3
ZYbCRq22HbJ2y7 · 8h ago
> LLMs lower the technical debt spread by the clueless, to a lower baseline.
Yeah? What about the things LLMs do help with? Do you have no code that could use translation (moving code that looks like this to code that looks like that)? LLMs are really good at that, and they save dozens of hours on single-sentence prompt tasks, even if you have to review the output.
Or is it all bad? I have made $10ks this year alone on what LLMs do, for $10s of input. So help me understand what I am doing wrong.
Or do you mean that a man with a very big gun must understand what that gun can do before he pulls the trigger? That only the trained can pull the trigger?
lmm · 8h ago
> Do you have no code that could use translation (move code that looks like this to code that looks like that)?
Only bad code, and what takes the time is understanding it, not rewriting it, and the LLM doesn't make that part any quicker.
> they save dozens of hours on single sentence prompt tasks, even if you have to review them
Really? How are you reviewing quicker than you could write? Unless the code is just a pile of verbose whatever, reviewing it is slower than writing it, and a lot less fun too.
ZYbCRq22HbJ2y7 · 8h ago
> Really? How are you reviewing quicker than you could write? Unless the code is just a pile of verbose whatever, reviewing it is slower than writing it, and a lot less fun too.
Well, humans typically read way faster than they write, and if you own the code, have a strict style guide, etc, it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust a LLM with anyway.
Also, these non-human entities we are discussing tend to output code very fast.
lmm · 6h ago
> humans typically read way faster than they write
When it's just reading, perhaps, but to review you have to read carefully and understand. It's like the classic quote that if you're writing code at the limits of your ability you won't be able to debug it.
> if you own the code, have a strict style guide, etc, it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust a LLM with anyway
The way I see it if the code is that simple and repetitive then probably that repetition should be factored out and the code made a lot shorter. The code should only need to express the novel/distinctive parts of the problem - which, as you say, are the parts we wouldn't trust an LLM with.
Shorel · 8h ago
A lower baseline of technical debt is a positive thing.
You don't want more technical debt.
Ideally, you want zero technical debt.
In practice only a hello world program has zero technical debt.
ZYbCRq22HbJ2y7 · 8h ago
No one is losing that skill, as LLMs are wrong a lot of the time.
No one is becoming a retard omniscient using LLMs and anyone saying they are is lying and pushing a narrative.
Humans still correct things, humans understand systems have flaws, and they can utilize them and correct them.
This is like saying someone used Word's grammar correction feature and accepted all the corrections. It doesn't make sense, and the people pushing the narrative are disingenuous.
wiseowise · 7h ago
> a retard omniscient
That’s a nice description, to be honest.
wiseowise · 7h ago
> I used to be the guy who could figure out obscure scripts by google-fu and rtfm and willpower. Now that skill has been completely obliterated by LLMs and everyone’s doing it- except it’s mostly whatever
And thank fuck it happened. All the shell and obscure Unix tools that require brains molded in the 80s to use on a day-to-day basis should have been superseded by something user-friendly a long time ago.
resonious · 9h ago
I agree with a lot of this at the outset, but don't really like the gloomy outlook. I don't think there's much to gain by writing off all this unfortunate stuff as people being stupid and greedy. I mean sure, that may be true, but you can flip it around and say that it's impressive that we have it as good as we do despite having to co-exist with stupidity and greed. Better yet, you can see it as a challenge to overcome.
And I'm not the only one saying this but - the bit about LLMs is likely throwing the baby out with the bathwater. Yes the "AI-ification" of everything is horrible and people are shoehorning it into places where it's not useful. But to say that every single LLM interaction is wrong/not useful is just not true (though it might be true if you limit yourself to only freely available models!). Using LLMs effectively is a skill in itself, and not one to be underestimated. Just because you failed to get it to do something it's not well-suited to doesn't mean it can't do anything at all.
Though the conclusion (do things, make things) I do agree with anyway.
rglover · 2h ago
You can—except for the researchers and pioneers at the low-level—write most of the fervor off as mimetic behavior.
We live in a world now where people scare one another into making significant choices with limited information. Person A claims it's the future you don't want to miss, Person B takes that at face value and starts figuring out how to get in on the scam, and Person C looks at A and B and says "me too." Rinse and repeat.
That's why so much of the AI world is just the same app with a different name. I'd imagine a high percentage of the people involved in these projects don't really care about what they're working on, just that it promises to score them more money and influence (or so they think).
So in a way, for the majority, it is just stupid and greedy behavior, but perhaps less conscious.
notpachet · 6h ago
> you can flip it around and say that it's impressive that we have it as good as we do despite having to co-exist with stupidity and greed
I have a feeling that line of thinking is going to be of diminishing consolation as the world veers further into systemic and environmental collapse.
ZYbCRq22HbJ2y7 · 8h ago
> every single LLM interaction is wrong/not useful
I think it is a defense mechanism; you see it everywhere, and you have to wonder, "why are people thinking this way?"
I think those with an ethical or related argument deserve to be heard; but beyond that, it seems like full blinders, ignoring the reality presented before us.
8n4vidtmkvmk · 7h ago
The free models are also useful. A little more limited, but still useful.
moritzwarhier · 8h ago
Cool post, but:
> And the only real hope I have here is that someday, maybe, Bitcoin will be a currency, and circulating money around won’t be the exclusive purview of Froot Loops. Christ
PLEASE NO. The only thing this will lead to is people who didn't get rich with this scheme funding the returns of people who bought in early.
Whatever BTC becomes, everyone who advocates for funneling public money of people who actually work for their salary into Bitcoin is a fraud.
I don't think the blog author actually wants this, but vaguely calling for Bitcoin to become "real money" indirectly will contribute to this bailout.
And yes, I'm well aware that funneling pension funds money etc into this pyramid scheme is already underway. Any politician or bank who supports this should be sued if you ask me.
rcxdude · 6h ago
Yeah, I think for crypto to actually turn into something net-positive, Bitcoin needs to lose a lot of value. It would almost certainly be some other network that would actually solve this problem, given Bitcoin's essentially frozen in a half-completed state now (Ethereum still seems to be trying to make something that scales to a point that would be usable, but it is also the nexus of a lot of the scams for the same reason)
fsflover · 3h ago
> Bitcoin needs to lose a lot of value
Why is that? You can just buy 0.00000001 BTC.
immibis · 1h ago
Having currency is only meaningful if the amount of currency someone has correlates, at least loosely, to some kind of merit or work.
Let's say my friends and I agree to carve off 0.00002 of the BTC supply and pretend that's the whole world of currency. We could run a whole country on that 0.00002 BTC as money. Except that anyone who has 1 BTC can break into our walled garden and, with a tiny fraction of their holdings, buy the entire walled garden; there's no way to prevent this as long as our money is fungible with theirs. It's the same reason you wouldn't use immibiscoins as a currency: I could just grant myself a zillion of them and buy everything you have. Except that in the case of Bitcoin the grant is pre-existing.
Deflationary currencies are fundamentally unstable, just like currencies that one guy can print at will, because they decorrelate quantity and merit.
amiga386 · 3h ago
But can't you see what he actually wants?
He wants normal banking and money transfer... but just to anybody, and for any reason. As an example, he'd like people to be able to pay him to draw bespoke furry porn for them. Or as another example, why can't a US citizen pay an Iranian citizen to do some work for them? (e.g. write a computer program)
That is totally possible. The only thing that stands in his way, and drives him into the arms of the cryptocurrency frauds, are moralising and realpolitiking governments that intentionally use their control of banks to control what bank customers can do with their money.
In an ideal world, government would only regulate banks on fiscal propriety and fair-dealing, and would not get in the way of consenting adults exchanging money for goods and services. But because government does fuck with banks, and sometimes the banks just do the fuckery anyway and government doesn't compel them to offer services to all (e.g. Visa/Mastercard refuse to allow porn merchants?), normal people start listening to the libertarians, the sovereign citizens, and the pump-and-dump fraudsters hyping cryptocurrencies.
He wants decentralised digital cash. How can it be done, if not Bitcoin et al?
moritzwarhier · 1h ago
Use a similar protocol with better properties (less energy consumption, better transaction usability) and start from zero.
Also, I'm not sure if a radical lack of regulation / full decentralization is a good thing when we are talking about money.
In my opinion, money should be regulated by governments.
But this discussion tends to escalate and the arguments have been made ad nauseam, so I'm tuning out here, sorry.
m0wer · 4h ago
Would it change your view if they mined instead of buying?
If you were to create a decentralized and limited supply currency, how would you distribute it so that it's “fair”?
Sounds a bit like if the world were running only on proprietary software created by Microsoft and you criticized the move to open source because it would enrich Linus Torvalds and other code creators/early adopters.
Are people better off continuing to use centralized broken software that they have to pay a subscription for (inflation) than if they did a lump-sum buy of a GNU/Linux distro copy from a random guy and became liberated for the rest of their lives?
Al-Khwarizmi · 8h ago
I disagree with many of the points on LLMs (but broadly agree with 80% of the post). But regardless of agreeing or not, it was a pleasure to read this because it's beautifully written, the arguments are solid, it makes you think, and the website has a personality, which is rare nowadays.
I clicked halfheartedly, started to read halfheartedly, and got sucked into a read that threw me back into the good old days of the internet.
A pity that the micropayments mentioned in the post never materialized, I'd surely throw a few bucks at the author but the only option is a subscription and I hate those.
lmm · 6h ago
Eevee writes well but this is not one of her better posts IMO. Too many micro-digs at people who are white or straight, too much of the Twitter/Bluesky tone where you drop a snark bomb on the heckin' evil du jour and then just move on. If anything I'd say this has a lot less personality than older posts I remember.
windenntw · 5h ago
100 times this
sunnybeetroot · 7h ago
It is a nice read, but something tells me it's AI-generated, due to the frequent em dashes, so I wouldn't place all bets on it being entirely human-written.
Edit: I apologise; the author has pre-GPT posts that use em dashes, so it's likely just part of their writing style.
EraYaN · 6h ago
If you use a nice enough writing tool, it will do en/em and any other dash for you semi-automatically anyway. Even Word does it when AutoFormat is turned on, although it normally chooses the en dash (U+2013) instead of the em dash (U+2014); this also depends on your language.
silveraxe93 · 6h ago
I disagreed with a _lot_ of what they said. But I really hope the author doesn't suffer the indignity of reading this comment.
arcanemachiner · 5h ago
I've been accused of being an LLM before — I got over it pretty quickly.
JBits · 7h ago
Some people just like em dashes—myself included. You can find em dashes in articles written by the author before LLMs became a thing.
rcxdude · 7h ago
This strikes me as written by someone who would bother to put in em dashes themselves.
Al-Khwarizmi · 7h ago
It would be a monument to hypocrisy if this piece with this specific content were AI-generated.
Color me naive, but honestly I'm pretty sure it's not. We all tend to be worse at distinguishing AI from human text than we think, but the text sounds genuine to me and reveals an author with a quirky personality that seems difficult for an LLM to imitate. And that could include using em dashes.
washmyelbows · 10h ago
Couldn't agree more on many of these points. There is so much 'whatever' everywhere on the web that I legitimately don't understand people being interested in the platforms that suck up everyone's time. It's frustrating as someone who used to enjoy the early web a lot, and it's frustrating to see people I have a lot of respect for buying into these awful systems with their time and attention. Worse still, I'm something of an outsider in many situations for opting out of them.
The author lost me a little on the AI rant. Yes, everything and everyone is shoving LLMs into places that I don't want it. Just today Bandcamp sent me an email about upcoming summer albums that was clearly in part written by AI. You can't get away from it, it's awful. That being said, the tooling for software development is so powerful that I feel like I'd be crazy not to use it. I save so so much time with banal programming tasks by just writing up a paragraph to cursor about what I want and how I want it done.
ZYbCRq22HbJ2y7 · 9h ago
IMO, this is more ranting about people who meet the metrics of the platform.
You're a platform drone, you have no mind, yada. Yet, we are reading the author's blog.
The author may hate LLMs, but they will lead many people to realize things they were never aware of, like the author's superficial ability to take information and present it in a way that engages others. Soon that will be common knowledge, and not many will make money sharing information in prose.
What the author refers to as "LLMs" today, will continually improve and "get better" at everything the author has issues with, maybe in novel ways we can't think of at the moment.
Alternative take:
"Popular culture" has always been a "lesser" ideal of experience, and now that ontological grouping now includes the Internet, as a whole. There are no safe corners, everything you experience on the Internet, if someone shared it with you, is now "Popular culture".
Everyone knows what you know, and you are no longer special or have special things to share, because awareness is ubiquitous.
This is good for society in many ways.
For example, information asymmetry let assholes make others their food; with it gone, it will become less common that people are food.
Things like ad-driven social networks will fade away as this realization becomes normalized.
Unfortunately, we are at the very early stages of this, and it takes a very long time for people to become aware of things like hoaxes.
wiseowise · 7h ago
I’m in your camp, but what makes you think assholes won’t suffocate the technology in its infancy and use it to oppress others even further?
Arch-TK · 8h ago
> Is everyone else working on projects built exclusively out of lists of primes and rebalancing binary trees?
Yes, that is roughly the takeaway here. LLMs are getting so popular in programming not because they are good at solving problems, but because they are good at reproducing a solution to some minor variation of an existing problem that has already been solved many times.
Most of the work that most of the industry does is just re-solving the same set of problems. This is not just because of NIH but also because code reuse is a hard problem.
This is not to say that everything is the same product. The set of problems you solve and how you chain those solutions together (the overarching architecture), as well as the small set of unique problems you solve, are the real value in a product. But they're often not the majority of any single codebase.
vasco · 10h ago
Well, they even call them "content creators"; is there anything more whatever? It's literally whatever content is needed to load the ads around it. It's not painters or musicians or documentarians, it's content creators.
kzrdude · 6h ago
I'm delighted to see the point about "content" in the blog post; finally someone is saying what I've long thought too.
vasco · 6h ago
I always found it weird that the new generation embraced being called the id of the div they put stuff in.
dwedge · 8h ago
It's probably just me but I really struggle to read this whiney tone that became common around ten years ago. It's not about the subject matter itself, it's about the jokes that are always sad somehow, the font choice, the tone of the language. Some kind of perpetual-victim style of writing.
nottorp · 8h ago
Emphasis:
-----
This is why I absolutely cannot fucking stand creative work being referred to as "content". "Content" is how you refer to the stuff on a website when you're designing the layout and don't know what actually goes on the page yet. "Content" is how you refer to the collection of odds and ends in your car's trunk. "Content" is what marketers call the stuff that goes around the ads.
"Content"... is Whatever.
-----
People, please don't think of yourself as "content consumers".
layer8 · 35m ago
I upvoted your content.
marcus_holmes · 7h ago
I think I share the conclusion, but coming at it from the other side.
The point of doing things is the act of doing them, not the result. And if we make the result easily obtainable by using an LLM then this gets reinforced not destroyed.
I'm going to use sketching as an example, because it's something I enjoy but am very bad at. But you could talk in the same way about playing a musical instrument, writing code, writing anything really, knitting, sports, anything.
I derive inspiration from other people who can sketch really well, and I enjoy and admire their ability. But I'm happy that I will never be that good. The point of sketching (for me) is not to produce a fantastic drawing. The point is threefold: firstly to really look at the world, and secondly to practice a difficult skill, and thirdly the meditative time of being fully absorbed in a creative act.
I like the fact that LLMs remove the false idea that the point of this is to produce Art. The LLM can almost certainly produce better Art than I can. Which is great, because the point of sketching, for me, is the process, not the result, and having the result be almost completely useless helps make that point. It also helps that I'm really bad at sketching, so I never want to hang the result on my wall anyway.
I understand that if you're really good at something, and take pride in the result of that, and enjoy the admiration of others at your accomplishments, then this might suck. That's gotta be tough. But if you only ever did it for the results and admiration, then maybe find something that you actually enjoy doing?
prmph · 3h ago
Are you not contradicting yourself? If the point of doing things is the doing itself, not the result, then how is it meaningful to just issue a low-effort prompt for the AI to do it?
AndrewDucker · 6h ago
The point of nearly everything I do in the office is the result, not the doing.
For art/craft you are completely correct though.
rednafi · 8h ago
Software programming used to be a blue-collar thing in the early days, when hardware wiring was all the rage.
Then it became hip, and people would hand-roll machine-specific assembly code. Later on, it became too onerous when CPU architecture started to change faster than programmers could churn out code. So we came up with compilers, and people started coding at a higher level of abstraction. No one lamented the lost art of assembly.
Coding is just a means to an end. We’ve always searched for better and easier ways to convince the rocks to do something for us. LLMs will probably let us jump another abstraction level higher.
I too spent hours looking for the right PHP or Perl snippet in the early days to do something. My hard-earned bash-fu is mostly useless now. Am I sad about it? Nah. Writing bash always sucked, who am I kidding. Also, regex. I never learned it properly. It doesn’t appeal to me. So I’m glad these whatever machines are helping me do this grunt work.
There are sides of programming I like, and implementation isn't one of them. Once upon a time I couldn't care less about the binary streams ticking through the CPU. Now I'm excited about the probable prospect of not having to think as much about "higher-level" code and jumping even higher.
To me, programming is more like science than art. Science doesn’t care how much profundity we find in the process. It moves on to the next thing for progress.
eddiewithzato · 8h ago
LLMs will not be doing that. I wish they could, but they just spit out whatever without verifying anything. Even in Cursor, where the agent tells you to run the test script it generated to verify the output, it just says "yep, seems fine to me!".
AI in its current state is, in my workflow, a decent search engine and Stack Overflow. But it has far greater pitfalls, as OP pointed out (it just assumes the code is always 100% accurate and will "fake" APIs).
wiseowise · 7h ago
That’s where you, human, come into the scene.
eddiewithzato · 5h ago
And that’s where I end up wasting more time investigating and fixing issues, rather than creating a solution ;)
I only use AI for small problems rather than let it orchestrate entire files.
archagon · 7h ago
LLMs are not an abstraction. If anything, they are the opposite of an abstraction.
bgwalter · 4h ago
What a great article, I hope it becomes Internet lore. We have been in the Whatever Economy for quite a while and the LLM hype is the logical conclusion.
Like the author, I'm mystified by those who accept the appearance of output as a valid goal.
Even with constrained algorithmic "AI" like Stockfish, which, unlike LLMs, actually works, chess players frown heavily on using it for cheating. No chess player can go to a tournament and say: "I made this game with Stockfish."
redhale · 6h ago
> But whose creative output consists solely of doing things a million people have already done? Is everyone else working on projects built exclusively out of lists of primes and rebalancing binary trees?
Yes. Most developers in the corporate world are building CRUD apps that are 90% boilerplate. Hopefully this helps explain the disconnect.
And even if GenAI progress stops here and it never gets better, it's incredibly useful. Why do people realize that it can't do EVERYTHING and then get stuck on the view that they can't use it for ANYTHING, confused as to why others are getting benefit from it?
Two things can be true (and I believe they are): the hype can be "staggeringly overblown" AND it can still be useful in many cases.
thombles · 8h ago
Speaking as a grump who recently chilled out, put reservations on hold and gave Claude a crack... it turns out that the anti-AI crowd (which still includes me in many regards) gets a lot wrong about the experience of using it, as demonstrated in TFA. You don't get reams of wishy-washy code unless you ask for it. If you're an experienced developer who Knows What They Want then you can wield it like a scalpel. None of the output is a surprise because you discussed the context and requirements first. It just gets there (probably) faster than you might have typing out keywords yourself. If the goal is Whatever, then sure, you will get that faster.
hyper57 · 7h ago
> You don't get reams of wishy-washy code unless you ask for it.
I've found that they get pretty wishy-washy when you correct them. As an example, yesterday I was working on porting a function from the open-source CUE4Parse project from C# to Python (for a hobby project), and the LLM (Gemini 2.5 Pro) suggested the following translation of a C# method:
    @staticmethod
    def from_name(name: str) -> 'FPackageId':
        # C# Encoding.Unicode is UTF-16 LE in Python
        lower_name_bytes = name.lower().encode('utf-16-le')
        hashed_id = CityHash64(lower_name_bytes)
        return FPackageId(hashed_id)
I noted that the original used a custom ToLower() implementation:
> This custom ToLower(), does that produce the same result as .lower() in Python?
Gemini answered with a lot of text and concluded: "You should use Python's standard lower() method for your port."
I pushed back with:
> Right, but for this to work (looking up an asset by its hash as contained in global.utoc), I probably have to match the behavior of Unreal Engine...
Gemini: You are absolutely right. My apologies, your insight is spot on. For the purpose of matching hashes that are pre-calculated by Unreal Engine and stored in files like `global.utoc`, you **must** replicate the engine's behavior exactly. My previous advice to use Python's `.lower()` was incorrect in this specific context.
You need to implement a function that behaves identically to the C# `ToLower` you highlighted. The engine uses this specific, non-Unicode-aware, ASCII-only lowercasing function before hashing the path to generate the `FPackageId`. Using Python's standard `.lower()` would produce a different byte sequence for any path containing non-English characters, leading to a hash mismatch.
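For completeness, the matching fold is only a few lines of Python. A sketch, assuming (as Gemini describes) that the engine's ToLower shifts only 'A' through 'Z' and passes everything else through; worth double-checking against the C# source:

    def ue_ascii_lower(name: str) -> str:
        # ASCII-only lowercasing: shift 'A'-'Z' down by 32 and leave every
        # other character (including non-ASCII) untouched, unlike str.lower().
        return ''.join(
            chr(ord(c) + 32) if 'A' <= c <= 'Z' else c
            for c in name
        )

    # Diverges from str.lower() exactly where it matters for the hash:
    print(ue_ascii_lower("/Game/MÜNCHEN"))   # '/game/mÜnchen'
    print("/Game/MÜNCHEN".lower())           # '/game/münchen'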
From my experience this is representative of the typical LLM interaction once one ventures into niche topics like Unreal Engine modding.
prmph · 2h ago
Exactly.
But, to make a comparison with Claude Code: I was initially impressed with Gemini's ability to keep a conversation on track, but it rarely gets the hint when I express annoyance with its output. Claude has an uncanny ability to guess what I find wrong with its output (even when I just respond with “WTF!”) and will try to fix it, often in actually useful ways; Gemini just keeps repeating its last output after acknowledging my annoyance.
rcxdude · 7h ago
It's pretty difficult to have a useful back and forth with an LLM, because they're really heavily finetuned to be agreeable (and also they're not particularly smart, just knowledgeable, so their 'system 1' is a lot better than their 'system 2', to analogize with 'thinking fast and slow'). Generally speaking if they don't get a useful answer in one shot or with relatively simple, objective feedback, they're just going to flop around and agree with whatever you last suggested.
No comments yet
nottorp · 8h ago
> If you're an experienced developer who Knows What They Want then you can wield it like a scalpel.
But that's not what the marketing says. The marketing says it will do your entire job for you.
In reality, it will save you some typing if you already know what to do.
On HN at least, where most people are steeped in startup/hustle culture and are experts in something, they don't think long-term enough to see the consequences for non-experts.
thombles · 8h ago
Well I never set much store by marketing and I'm not planning to start. :) More seriously though it helps explain the apparent contradiction that it sounds scammy at a macro level yet many individuals report getting a lot of value out of it.
nottorp · 7h ago
> many individuals report getting a lot of value
I'm not sure it's a lot of value. It probably is in the short term, but in the long run...
There have already been studies saying that you don't retain the info about what an LLM does for you. Even if you are already an expert (a status attained the traditional way), that cuts you off from all those tiny improvements that happen every day without you noticing.
adastra22 · 6h ago
> In reality, it will save you some typing if you already know what to do.
This goes too far in the other direction. LLMs can do far more than merely saving you typing. I have successfully used coding agents to implement code which at the outset I had no business writing as it was far outside my domain expertise. By the end I'd gained enough understanding to be able to review the output and guide the LLM towards a correct solution, far faster than the weeks or months it would have taken to acquire enough background info to make an attempt at coding it myself.
nottorp · 3h ago
I'd love it if everyone who posts statements like this would also include a link to their professional experience, or at least state the number of years they've been a developer.
I'm sure I can do what you describe as well. I've actually used LLMs to get myself current on some stuff I knew (old) basics for and they were useful indeed as you say.
I'm also sure it wouldn't help your interns to grow to your level.
tyre · 10h ago
The author is mad at Stripe and PayPal for banning transactions involving unicorn wieners but this is imposed on them by the backing banks.
The reason behind banning adult materials has to do with Puritanism and with the high rates of refunds on adult websites.
anonzzzies · 10h ago
I have an issue with Stripe or PayPal banning merchants without recourse who do not sell adult stuff or anything else bad/high-refund, just because 'AI flagged the account'. And I know that PayPal has used 'AI' (statistics) to flag accounts, without recourse, almost since they started. It cost us a lot of money; we had no refunds or anything, and PayPal simply kept repeating (in email and on the phone) that there was no recourse. We moved to an EU system which we have now been using for 15 years and have never had any issues with, of course (as we never did anything weird, ever); also, I can call them or visit them if anything happens, unlike the impersonal big guys. Far cheaper too. Screw PayPal and Stripe (their fees are an absolute joke, no idea why they are so popular), thanks.
dspillett · 9h ago
> no idea why they are so popular
Momentum. They are the big games in town because so many people use them, and so many people use them because they are the big games in town. There was a time for both when they didn't suck as much as they do now, at least relative to what other options existed.
movetheworld · 9h ago
Which EU system did you move to?
tavavex · 8h ago
I still think it's mostly puritanism. "Adult transactions" is a massive category of goods and services, and I'm willing to bet that John the average guy buying an overpriced subscription on generic porn website #729 and regretting it 20 minutes later is much more likely to trigger a refund or chargeback than someone commissioning an artist or buying goods (anything from real-life things to 3D models).
Yet, the payment processors will all reliably treat anything NSFW equally by suppressing it as much as they can. From banning individuals who dare do transactions they don't approve of to directly pressuring websites that might tolerate NSFW content by threatening to take away their only means of making money. If they only cared about refunds and profitability, they wouldn't ban individual artists - because the fact how these artists often manage to stay undetected for years suggests that many of their customers aren't the kind to start complaining.
It's quite fascinating how this is the one area where the companies are willing to "self-regulate". They don't process sales of illicit drugs because the governments above them said no and put in extensive guardrails to make these illegal uses as difficult as reasonably possible. Yet, despite most first-world governments not taking issue with adult content at large (for now), the payment processors will act on their own and diligently turn away any potential revenue they could be collecting.
N_Lens · 10h ago
There's a reason "Paypal mafia" is in the lexicon.
OvbiousError · 8h ago
> Bitcoin failed as a currency because the people who got most invested in it do not care about currency
As far as I understand, Bitcoin is fundamentally unusable as a currency. Transactions are expensive and limited to ?7k? every few seconds. It's also inherently deflationary; you want an inflationary currency, one where people spend rather than hoard.
fsh · 7h ago
It's much worse: the maximum on-chain transaction rate is something like 7 per second. Also, the time intervals between blocks have a huge spread, so it can take more than an hour for a transaction to be confirmed if you are unlucky. This is obviously impractical, so people came up with schemes such as Lightning to avoid touching the blockchain as much as possible. Of course, this makes it much more difficult to judge whether the system can be cheated in some way...
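(For reference, the ~7/s figure is a quick back-of-envelope using the classic pre-SegWit numbers; the average transaction size below is an assumption:)

    import math

    block_bytes = 1_000_000   # legacy 1 MB block size limit
    tx_bytes = 250            # assumed average transaction size
    interval_s = 600          # ~10 minute block target
    print(block_bytes / tx_bytes / interval_s)   # ~6.7 tx/s

    # Block arrivals are roughly Poisson, so the long waits aren't rare:
    print(math.exp(-60 / 10))   # ~0.25%, the share of intervals over an hour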
m0wer · 4h ago
Blockchains don't scale. But that's a feature, not a bug.
Great protocols are built in layers.
You have decentralized instant settlement for an average fee of 0.005%, even for micropayments, with the Lightning Network (another protocol built on top of Bitcoin). That's orders of magnitude beyond the settlement time and resilience of the current payment networks.
Ezhik · 7h ago
I feel it. We can debate AI over and over and over but my ultimate problem is not even the tech itself but the "whatever" part.
I'm a bit annoyed with LLMs for coding, because I care about the craft. But I understand the premise of using them when the end goal is not "tech as a craft" but "tech as a means". But that still requires having some reason to use the tech.
Hell, I feel the "tech as a means to get money" part for people trying to climb up the social ladder.
But for a lot of people who already did get to the top of it?
At some point we gotta ask what the point of SEO-optimizing everything even is.
Like, is the end goal optimizing life out of life?
Why not write a whole app using LLMs? Why not have the LLM do your coursework? Why do the coursework at all? Why not have the LLM make a birthday card for your partner? Why even get up in the morning? Why not just leave and go live in a forest? Why live at all?
What is even the point?
gattr · 5h ago
On a more positive note, LLMs (or their successors) could be used to create a perfect tutor: tailored to every individual, automatically adjusting learning material difficulty, etc.
But yeah, first we'll go through a few (?) years of the self-defeating "ChatGPT does my homework" and the necessary adjustments to how schools/unis function.
vrighter · 2h ago
So you suggest training a model for each individual student? Because LLM "inference" sure as hell isn't capable of tailoring anything to anything, or changing in any way.
And also, how is personalized bullshit better than generic bullshit? We'd need to solve the bullshit problem in the first place, which is mathematically guaranteed NOT to be possible with these types of architectures.
Ezhik · 3h ago
Oh yeah, it's going to be really interesting when the hype dies down and we start seeing the actual good use cases get homed in on.
wiseowise · 7h ago
You need to take a break from your “craft” if fancy autocomplete makes you question reason to live.
Ezhik · 7h ago
And do what, given that AI-as-marketed optimizes out having relationships with people and enjoying art/cinema/music?
Touch grass all by myself?
No comments yet
ChrisMarshallNY · 5h ago
Reading this person's blog, I came upon this article[0].
I can absolutely relate. That was ten years ago, so I'm not exactly sure where they are now, but they still seem to be going strong.
I understand the frustration people have with technology, but it's not the technology that keeps selling them out. Bad actors take control of the technology and use it to gain and consolidate power. Identify those bad actors and bad use cases for technology and avoid giving them power. Writing whiny, tropey, angst-ridden blog posts doesn't help anyone.
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
The reaction to that post has been interesting. It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
Analogies like this will inevitably get people hung up on the details of the analogy though. Lots of people jumped straight to "a table saw does a single job reliably, unlike LLMs which are non-deterministic".
I picked table saws because they are actually really dangerous and can cut your thumb off if you don't know how to use them.
latexr · 3h ago
> Looks like I was the inspiration for this post then.
You were not, as is patently obvious from the sentence preceding your quote (emphasis mine):
> Another Bluesky quip I saw earlier today, and the reason I picked up writing this post (which I’d started last week)
The post had already been started, your comment was simply a reason to continue writing it at that point in time. Had your comment not existed, this post would probably still have been finished (though perhaps at a later date).
> It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
Despite your restating, your point still reads to me as the opposite as what you claim to have intended. Inventing the table saw is a poor analogy because the problem with the LLM hype has nothing to do with their invention. It’s the grifts and the irresponsible shoving of it down everyone’s throats that’s a problem. That’s why the comparison fails, you’re juxtaposing things which aren’t even slightly related. The invention of a technology and the hype around it are two entirely orthogonal matters.
simonw · 2h ago
For your benefit I will make two minor edits to things I have said.
> Looks like I was the inspiration for this post then
I replace that with:
> Looks like I was the inspiration for finishing this post then
And this:
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
I can rephrase as:
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the introduction of the table saw.
latexr · 1h ago
> For your benefit
If that’s your true impetus, please don’t bother. There’s nothing which benefits me about your words being clearer and less open to misinterpretation. You are, of course, completely welcome to disagree with and ignore my suggestions.
> thanks to the introduction of the table saw.
That makes absolutely no difference at all. And it doesn’t matter anymore either, the harm to your point is already done, no one’s going back to it now to reinterpret it. I was merely pointing out what I see as having gone wrong so you can avoid it in the future. But again, entirely up to you what you do with the feedback.
No comments yet
rcxdude · 7h ago
Also, if you don't have a table saw, just cutting a straight line efficiently and accurately is a fairly important baseline skill for doing carpentry, something which becomes a lot less of an issue with a table saw, and that makes some of the skillset of carpentry less important for getting good results (especially if you then make things that only consist of straight lines and so you also don't need to be able to do more complex shapes well). I think it's a pretty decent analogy.
ninetyninenine · 9h ago
You have to realize that we're only a couple of years into widespread adoption of LLMs as agentic coding partners. It's obvious to everyone, including you, that LLMs currently cannot replace coders.
People are talking about the trendline: what AI was 5 years ago versus what AI is today points to a different AI 5 years down the line. Whatever AI will be 5 years from now, it is entirely possible that LLMs may eliminate programming as a career. If not 5 years... give it 10. If not 10, give it 15. Maybe it happens in a day, with a major breakthrough in AI, or maybe it will be like what's currently happening: slow erosion and infiltration into our daily tasks, taking on more and more responsibilities until one day it's doing everything.
I mean do I even have to state the above? We all know it. What's baffling to me is how I get people saying shit like this:
>"LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
I mean it's an obvious complete misrepresentation. People are talking about the future. Not the status quo and we ALL know this yet we still make comments like that.
simonw · 9h ago
The more time I spend using LLMs for code (and being impressed at how much better they are compared to six months ago) the less I worry for my career.
Using LLMs as part of my process helps me understand how much of my job isn't just bashing out code.
My job is to identify problems that can be solved with code, then solve them, then verify that the solution works and has actually addressed the problem.
An even more advanced LLM may eventually be able to completely handle the middle piece. It can help with the first and last pieces, but only when operated by someone who understands both the problems to be solved and how to interact with the LLM to help solve them.
No matter how good these things get, they will still need someone to find problems for them to solve, define those problems and confirm that they are solved. That's a job - one that other humans will be happy to outsource to an expert practitioner.
It's also about 80% of what I do as a software developer already.
indigoabstract · 8h ago
I don't know what will come in the future, but to me it's obvious that no variation of LLMs, no matter how advanced, will replace a skilled human who knows what they're doing.
Through no fault of their own, they're literally blind. They don't have eyes to see, ears to hear, or fingers to touch and feel, and have no clue whether what they've produced is any good for the original purpose. They are still only (amazing) tools.
ninetyninenine · 1h ago
LLMs produce video and audio data and can parse and change audio and visual data. They hear, see, and read; the only reason they can't touch is that we don't have the training data.
You do not know whether LLMs in the future can replace humans. You can only say that right now they can't. In the future the structure of the LLM may be modified, or it may become one module among several required for AGI.
These are all plausible possibilities. But you have narrowed it all down to a "no": LLMs are just tools with no future.
The real answer is nobody knows. But there are legitimate possibilities here. We have a 5-year trend line projecting higher growth into the future.
indigoabstract · 22m ago
> In the future the structure of the LLM may be modified or it become one module out of multiple that is required for agi.
> The real answer is nobody knows.
This is all just my opinion of course, but it's easy to expect that an LLM that knows all there is to know about every subject written in books and on the internet would be enough to do any office work that can be done with a computer. Yet strangely enough, it isn't.
At this point they still lack the necessary feedback mechanisms (the senses) and the ability to learn on the job, so they can't function independently. And people have to trust them not to fail in some horrible way, and things like that. Without all this they can still be very helpful, but they can't really "replace" a human in most activities. And also, some people possess a sense of aesthetics and a wonderful creative imagination, things that LLMs don't really display at this time.
I agree that nobody knows the answer. If and when they arrive at that point, by then the LLM part would probably be just a tiny fraction of their functioning.
Maybe we can start worrying then. Or maybe we could just find something else to do.
Because people aren't tools, even when economically worthless.
squidbeak · 9h ago
The thing is that at this stage, LLMs, and perhaps AI in other forms, also have careers. Right now they're junior developers. But whose career will develop faster or go further: theirs, or the new programmer's?
wiseowise · 7h ago
Who cares?
squidbeak · 1h ago
The person who'd chosen programming as a career, if AI overtakes human programmers.
vincnetas · 9h ago
After reading "...the press release and in the fine print it says that now it can count the number of letters in “Mississippi” correctly or whatever", I tried counting letters in another word :)
You said:
how many letters are in the lithuanian word "nebeprisikaspinaudamas"? Just give me one number.
ChatGPT said:
23
You said:
how many letters are in the lithuanian word 'nebeprisikaspinaudamas'. Just give me one number.
ChatGPT said:
21
Both are incorrect, by the way. It's 22.
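(The count itself is trivially checkable outside the model:)

    >>> len("nebeprisikaspinaudamas")
    22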
fsh · 7h ago
I'm all for making fun of LLMs, but asking this of software that processes vectors of tokens is a bit silly. The information isn't really there in the input.
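You can see what the model actually receives with a tokenizer. A sketch using the tiktoken package; the encoding name is an assumption and varies by model:

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("nebeprisikaspinaudamas")
    # The model sees a handful of multi-letter chunks, not 22 letters:
    print([enc.decode([t]) for t in tokens])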
layer8 · 29m ago
Then it shouldn’t pretend to know the correct answer.
molteanu · 9h ago
I got 22 the first time.
But then 21 the second time. And the third time...
ZoomZoomZoom · 6h ago
> But the dream has died.
I don't know why the shitty ecosystem around cryptocurrencies drives the author to this conclusion. Crypto is the most convenient way to just send money that also respects your human rights. It's there, it's useful, and in many cases absolutely vital.
Andrew_nenakhov · 6h ago
Yeah, people in Russia and Iran use it all the time to circumvent the sanctions. A godsend, basically.
Almondsetat · 7h ago
The author claims that computers aren't fun anymore and then talks about payment methods and social networks. Not only do I fail to see how a website would magically make a piece of hardware boring, but computers have always been boring; it is the job of the user to get something fun out of them.
gwd · 5h ago
Er, they all seem pretty connected:
1. Computers could be more fun if you could buy things more easily. Crypto could solve that, but the people involved don't care about crypto, they care about Whatever.
2. Computers can be fun when people do cool things on them. The web could make that possible, but unfortunately the economics of ads incentivize people to post Whatever.
3. Programming can be fun, but LLMs just generate Whatever.
mumbisChungo · 10h ago
I think there's a decent chance that folks on the fringes find interesting uses for immutable and programmable distributed ledgers, even if the prevailing culture of crypto is hellish.
Analemma_ · 10h ago
I keep hearing this theory, that “eventually” we will find the use cases for distributed ledgers, and I don’t buy it. Bitcoin was invented at roughly the same time as the iPhone, and the iPhone immediately found use cases. Right away the global economy reoriented itself around the smartphone, because it demonstrated real value to actual people. We did not need to wait and twiddle our fingers for years going “I think there’s a decent chance people find interesting uses for this someday”.
Immutable distributed ledgers, by contrast, have found no use cases other than crime and financial speculation in coming up on twenty years. Exactly how long do we have to wait for these interesting uses that are “surely” coming?
m0wer · 4h ago
Isn't being able to anonymously send someone something of value, that no one can take away from them, a pretty big use case?
A third of the world is unbanked. A permissionless monetary system makes a huge difference for those.
When I was still very skeptical about Bitcoin, I met a guy in Turkey who was from a very poor African country and was just studying there. His father would buy Bitcoin in their home country with the local currency (P2P) and send it to his son, who would then convert it, also P2P, to Turkish lira. They could do this securely and within minutes. The alternative was using Western Union and paying taxes in both countries, which in total added up to ~50% of the sent amount.
It's great not needing Bitcoin, as it is great not needing Tor. But that doesn't mean there's no use case for them.
yieldcrv · 10h ago
there are many people with many use cases for distributed ledgers. we are already aware of the flowchart of your argument path
“list them”
“oh I can do that in this other convoluted way that doesn't solve any of these users' goals or problems”
“I'm not the target audience for that so it doesn't count”
“ah so financial speculation, that doesn't count despite being the largest application and sector on the planet”
“marketcap doesn't matter and isn't indicative of anything in that economy, I would rather hold digital assets to a separate standard from every other asset on the planet, in total ignorance that my same arguments apply to asset classes that I respect”
“see I proved my point for myself, there is no use case after 17 years, classic HN”
“those are strawman arguments despite all conversations following this same predictable path enough for any language model to regurgitate it verbatim”
pjc50 · 9h ago
It's clear that financial speculation is a big use for crypto, probably the biggest.
But since almost all the tokens bear neither interest nor dividends, it looks a lot more like a casino.
yieldcrv · 9h ago
if you fractionalized and publicly traded every incorporated and unincorporated organization, product line, and revenue stream, it would look the same
it's just a filtering problem
there are screeners to narrow everything down just like for the stock exchanges
georgeecollins · 9h ago
To quote the article:
>> But also… why do you care? Why would someone using a really cool tool that makes them more productive… feel compelled to sneer and get defensive at the mere suggestion that someone else isn’t doing the same?
>> It feels like the same attitude that happened with Bitcoin, the same smug nose-wrinkling contempt. Bitcoin is the future. It’ll replace the dollar by 2020. You’re gonna be left behind. Enjoy being poor. Sure thing, Disco Stu!
yieldcrv · 6h ago
Evangelists can be ignored if you want; not all promoted use cases are relevant.
It's all about blockspace and the commodity that pays for the blockspace.
That was true when bitcoin was 2 cents and it's true when bitcoin is $109,000 and 2 cents.
I mean, are you enjoying your socioeconomic status? The chronology was very clear to some, and they were right. It wasn't luck, and it wasn't really a binary proposition. You can read old threads around the net from 2012 to see exactly where we are going; you can help make that happen or passively act surprised. Pretty much every theorized issue can be programmed away; that's what gives people confidence in this asset class compared to others.
greyface- · 4h ago
> Its all about blockspace
> That was true when bitcoin was 2 cents
I largely agree with you, but to nitpick: when bitcoin was 2 cents, blockspace was free, and miners regularly accepted zero-fee transactions into blocks. Today, you're not getting a transaction into a block without paying at least a couple hundred sats. Your statement is true today, but it wasn't like this until April-May 2012 when Satoshi Dice started filling blocks up. See Fig 3 on page 9 of <https://maltemoeser.de/paper/transaction-fees.pdf> or look through some early blocks.
yieldcrv · 47m ago
yes, blocks are and were periodically empty
if you expected applications to be deployed that would take up block space when used, and were going to build those applications yourself, then it was still rational
in 2012 people were describing smart contracts, joint custody accounts to secure assets better, and many other applications that are commonplace and have their own critique and discussion now
it's like seeing an island full of resources and realizing that the bridges and ferry routes haven't been built yet; that:
1) you can get to that island yourself before everyone else
2) you can also build the bridge and put up a toll booth
3) other bridges will be built
4) and other people can also come to the island early at great difficulty too
the same play is still true on other blockchains, and sometimes back again on bitcoin
I’ve done the trade many times over the past 15 years
greyface- · 18m ago
Today, when blocks are empty, you're paying 1 sat/vB. Back then, when blocks were empty, you paid zero. That's the difference between "all about blockspace" and "only about blockspace when you care about next-block latency".
If you had said "2 dollars" instead of "2 cents", we would be in complete agreement. All I'm saying is that mandatory transaction fees were not baked in at 2 cents.
lovich · 9h ago
A lot of your examples are fallacious arguments that are weak or just generally wrong but your first one is
>”list them”
If you are arguing that something exists, then being asked to prove its existence is table stakes, not poor arguments
You starting with a strong argument in your list of bad arguments and then ending with shit that mocks anyone calling you out makes me believe that you are not discussing this topic in good faith
yieldcrv · 6h ago
unfortunately the same thread has played out on any tangentially crypto-related HN post for the last decade and a half, so it just doesn't feel like my job to answer such a lazy and abstract question.
yes, that means we are at an impasse. use the search, ask an LLM; if even that is too much initiative for a quite outdated skeptic to take even now, then I can't help you.
there are hundreds of billions, maybe trillions, in volume going through financial services on blockchains, and it doesn't matter if financial services isn't a sector you care about or are the target audience for; there are people there who will pay to solve problems they have.
EraYaN · 6h ago
Most financial services and systems are still on very boring old databases (hell, quite a bit of COBOL still touches a ton of it), since, well, they don't need hype funding for their Series A or whatever. Databases are just pretty good at processing data efficiently. It's a tiny, tiny part that actually runs on some blockchain.
yieldcrv · 56m ago
there are hundreds of billions, maybe trillions, in volume going through financial services on blockchains
and yes, that is a tiny fraction of all financial services volume, or even of volume involving crypto assets.
I was referring to the traffic onchain, as that's what's interesting:
permissionless liquidity providing in unincorporated partnerships is still novel and unique to those platforms, and highly lucrative, on assets that don't need permission to be listed anywhere.
mumbisChungo · 10h ago
You make it sound like it's a struggle between two sides. It's just another option for a creative person who has a problem to solve. Maybe it'll be used for something interesting. Maybe it won't be. It's just a piece of maybe-frivolous optional technology.
Havoc · 6h ago
That really could have done with some editing and pruning, because it's just an omnibus stream of thought about whatever came to mind.
Agree with the sense that we're at a weird moment though.
cousin_it · 9h ago
This reminds me of the normie/autistic/sociopath triangle. The idea is that sociopaths can see through normies, normies can see through autists, and autists can see through sociopaths - when there's a sociopath, often the normies in the group will be easily fooled by him, but the autists will be onto him right away. Don't know why that is, but it's true in my experience.
Same with AI. I'm notably more autistic (or more aspie, or whatever) than my friend group, and also I much more easily recognize AI text and images as uncanny slop, while my friends are more easily wowed by it. Maybe AI output has the same "superficially impressive but empty inside" quality as the stuff that sociopaths say.
sanitycheck · 6h ago
Is this a known thing? I've called myself 'immune to charisma' which seems at least related, and I've thought perhaps it's an autistic/aspie trait but have never come across any studies or articles mentioning it.
test1235 · 8h ago
is it to do with pattern matching maybe? maybe autists can spot sociopaths 'cos they behave just ever so slightly differently... and maybe you can recognize AI text 'cos you see a pattern in the content which non-autists do not?
cousin_it · 7h ago
It's the opposite of that. All the superficial patterns are there, all the words and their combinations. But the core, the meaning, isn't there.
wiseowise · 7h ago
The theory is bullshit, as you might gather from the lack of any studies.
Most likely it's yet another flawed output from the human LLM (4chan), so online schizos have something to identify themselves with.
recursinging · 7h ago
Aside from the old-man-in-a-wooden-rocking-chair-on-a-porch tone, it seems to me that the author's beef is mainly about back-patting, and how the "Whatever" machines are flooding the pat-me-on-the-back platforms with "Content" that makes their own stick out less, resulting in fewer back-pats.
The last line of the article summarizes it perfectly:
> Do things. Make things. And then put them on your website so I can see them.
I subscribe fully to the first two sentences, but the last one is bullshit. The gloom in the article is born from the author attaching the value of "making things" to the recognition received for the effort. Put your stuff out there if you think it is of value to someone else. If it is, cool, and if it's not, well, who cares.
> I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?” This is kind of darkly fascinating to me, because it gives rise to such an obvious question: if anyone can do that, then why listen to your music? It takes a significant chunk of 3.5 hours just to listen to an album, so how much manual work was even done here? Apparently I can just go generate an endless stream of stuff of the same quality! Why would I want your particular brand of Whatever?
This gem implies that the value of the music (or art in general) is partially or even wholly dependent on whether or not someone else thinks it's good. I can't even...
If you eliminate the back-patting requirements, and the stuff we make is genuine, then its value is intrinsic. The "Whatever" machines are just tools, like the rest of the tools we use to make things. So, just make your things and get on with it.
probably_wrong · 6h ago
I think there are more generous interpretations than "the value of art is dependent on whether someone else thinks it's good".
I had an interesting discussion with a piano teacher once. Some of his students, he told me, would play for themselves but never for any kind of audience. As the saying goes: if a musician plays a piano in a closed room with no one to hear it, does it make a sound?
Obviously there's nothing wrong with extremely personal art that never gets released to the wider public - not every personal diary should be a blog. But there's also the question of what happens to art when none of it gets shared around, and vibrant art communities are, in my opinion (and I think also the author's), something to encourage.
recursinging · 3h ago
> if a musician plays a piano in a closed room with no one to hear it, does it make a sound?
I get what you're after, but that's not a very good example. If a musician is playing an instrument, then of course the musician hears it.
Now, imagine instead that it's a player piano, and the lone "musician" is not actually playing anything at all, but hears tones they had randomly generated by a "Whatever" machine resonating through the actual struck strings and resonant body of a piano, and the hair on the back of their neck stands on end. Then the music ends, the vibrations stop, and all that is left of the moment is whatever memory the "musician" retains.
Was that music while being heard by the "musician"? Is it music when it's just a melody in the "musician's" head? What if it wasn't a piano at all, but just birds singing? Is it still music? If it is, is it "good" music?
Yes, the world is changing fast, and no, we humans don't seem to handle it well. I agree with the article in that sense. But I see no use in categorizing technology as dystopian, just because it's been misused. You don't have to misuse it yourself, or even use it at all if you don't want to. Complaining about it though... we humans are great at that.
pebble · 6h ago
> and the stuff we make is genuine
hmmm
tsurba · 6h ago
I agree with everything up until the AI part, and for that part too, the general idea is good and worth worrying about. I’m scared af about what happens to kids who do all their homework with LLMs. Thankfully at least we still have free and open models, and are not just centralizing everything.
But ChatGPT does help me work through some really difficult mathematical equations in the newest research papers by adding intermediate steps. I can easily confirm when it gets them right and when it doesn't, as I do have some idea. It's super useful.
If you are not able to make LLMs work for you at all, and complain about them on the internet, you are an old man yelling at clouds. The blog post devolves from an insightful viewpoint into a long sad ramble.
It’s 100% fine if you don’t want to use them yourself, but complaining to others gets tired quick.
VMG · 8h ago
> Finally, society says, with a huge sigh of relief. I don’t have to write a letter to my granddaughter. I don’t have to write a three-line fetch call.
one of these things is not like the other
I agree with the author; I usually do not care what exactly the fetch call looks like. Whatever indeed.
827a · 9h ago
> What are we actually saying here — that even Microsoft has to evaluate usage of “AI” directly, because it doesn’t affect performance enough to have an obvious impact otherwise
Oh wow.
arkh · 4h ago
> If it’s something I don’t know, I can go find out about it, and now I know more things.
Do you realize how many person-hours of highly intelligent people's time have been spent on ORMs just so people could avoid learning SQL? Most people don't value learning.
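And the thing being avoided is small. A sketch with the stdlib's sqlite3; the table and values are hypothetical:

    import sqlite3

    # The kind of SQL an ORM wraps, end to end:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES (?)", ("eevee",))
    row = conn.execute(
        "SELECT id, name FROM users WHERE name = ?", ("eevee",)
    ).fetchone()
    print(row)   # (1, 'eevee')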
bflesch · 9h ago
Excellent and well-written article.
lovich · 9h ago
I read through the essay and really resonated with some parts and didn’t resonate with others, but I think they put some words to the feelings I have had on AI and its effect in the tech industry
> There are people who use these, apparently. And it just feels so… depressing. There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though. That’s management, a fairly different job. I’m not interested in managing. I’m certainly not interested in managing this bizarre polite lying daydream machine. It feels like a vizier who has definitely been spending some time plotting my demise.
I was several minutes of reading before this paragraph when it hit me that this person hates managing, because everyone I've met who hates using AI to produce software describes to me problems like the AI not being correct, or lying to them when the model thought that would please them better, and that's my experience with junior engineers as a manager.
And everyone I’ve met who loves AI at some point makes an analogy to it, that compares it to a team of eager juniors who can do a lot of work fast but can’t have their output trusted blindly, and that’s my experience with junior engineers as a manager.
And then anyone who's been trying to get an engineering manager job over the past few months and tracking their applications' metadata has seen the number of open postings for their requirements go down month after month, unless you drop the manager part and keep all the same criteria as an IC.
And then I read commentary from megacorps about their layoffs and read between the lines like here[1]
>… a Microsoft spokesperson said in a statement, adding that the company is reducing managerial layers …
I think our general consternation around this is coming from creators being forced into management instead of being able to outsource those tasks to their own managers.
I think there's still a difference even if you look at it as "supervising a bunch of juniors". I'm happy to review the output of a human in that case because I believe that even if they got some stuff wrong and it might have been quicker and more interesting for me to just do the thing, the process is helping them learn and get better, which is both good in itself and also means that over time I have to do the supervision and support part less and less. Supervising an LLM misses out both of those aspects, so it's just not-very-fun work.
lovich · 7h ago
>… the process is helping them learn and get better, which is both good in itself and also means that over time I have to do the supervision and support part less and less. Supervising an LLM misses out both of those aspects, so it's just not-very-fun work.
Legitimately, I think you are missing my point. What I quoted out of your response could be applied to prompt engineering/management/tinkering. I think everyone who likes doing this with juniors and hates it with AI is conflating their enjoyment of teaching juniors with the dopamine you get from engaging with other primates.
I think most people I’ve met who hated AI would have the same level of hate for a situation where their boss made them actually manage an underperforming employee instead of letting them continue on as is ad infinitum.
It’s hard work both mentally and emotionally to correct an independent agent well enough to improve their behavior but not strongly enough to break them, and I think most AI haters are choking on this fact.
I’m saying that from the position of an engineer who got into management and choked on the fact that sometimes upper leadership was right and the employee complaining to me about the “stupid rules” or trying to lie to me to get a gold star instead of a bronze one was the agent in the system who was actually at fault
pm215 · 6h ago
No, I really don't think that prompt engineering is the same thing. Anything I put in the prompt may help this particular conversation, but a fresh instance of the LLM will be exactly the way it was before I started. Improvements in the LLM will happen because the LLM vendor releases a new model, not because I "taught" it anything.
eddiewithzato · 5h ago
You also don’t get the satisfaction of watching something grow. Teaching and being a mentor is entirely separate to massaging a prompt
pm215 · 5h ago
Yeah. I do agree with lovich that there's a lot of stuff about management that's just not fun (and that's part of why I've always carefully avoided it!) -- and one thing about AI is not just that it's management but that it's management with a lot of the human-interaction, mentoring, etc upsides removed.
senko · 7h ago
This reads like people have been sleeping through mountains of shit being thrown at them (content, news, user-hostile services...) for decades, then suddenly woke up and blamed the most recent hype.
Most of the Internet is crap. Most of media is crap. This need not stop you (or me) from creating.
rcxdude · 7h ago
Sturgeon's law compounds as accessibility increases. You see more of everything, but many times more crap than not. Generative AI substantially lowers the barriers to making all kinds of things, so the increase in crap is substantial.
saubeidl · 8h ago
This entire article is a critique of capitalism and how it's ruined everything. I'm not sure the author is aware of that, however.
aredox · 8h ago
>Also, making your own website is kinda hard? You have to, like, learn things.
That seems snarky ("you don't want to learn things") when the reality is that the problem, the finite resource that is the bottleneck, is _time_.
This is also why hyperlinks are underused: few people go through every single one of the words in their last post to try to add appropriate hyperlinks.
PoshBreeze · 4h ago
> The only browser with built-in tipping is the one spearheaded by a man whose other claims to fame are inventing JavaScript and wanting to outlaw my marriage, and the token it uses apparently had 80 whole sellers in the past 24 hours. Sounds like all of that is going great.
Brendan Eich's beliefs about marriage aren't relevant. It's been, what, well over a decade since he was ousted from Mozilla? Wasn't that enough? People constantly bring this up as a reason why you shouldn't use Brave. There are valid reasons not to use Brave Browser; Brendan Eich's beliefs about marriage aren't one of them. It's some tired old jab at Eich, who, like many older people, has beliefs that are considered backwards by today's standards.
> This is starting to get away from the main thesis of Whatever but every time I hear about students coasting through school just using LLMs, I wonder what we are doing to humanity’s ability to think critically about anything. It already wasn’t great, but now we’re raising a whole generation on a machine that gives them Whatever, and they just take it. You’ve seen anecdotes of people posting comments and submitting papers and whatnot with obvious tells like “As a large language model…” in them. That means they aren’t even reading the words they claim as their own! They just produce Whatever.
People were cutting and pasting Wikipedia articles into university work and doing zero effort back in the mid-2000s while I was at university. There is a deeper problem with education generally, and it isn't people copying stuff with AI.
> The most obnoxious people like to talk about how Stable Diffusion is “democratizing art” and that is the dumbest thing I’ve ever heard. There is no fucking King of Art decreeing who is allowed to draw and who isn’t. You could do it. You could do it right now. But it’s hard, so you’d rather spend that time crying on Twitter about how unfair it is that learning a skill takes work and thank god the computer can give you all of the admiration with none of the effort now.
This isn't what is meant when people say this.
What people typically mean is that people can cheaply create things in the AI that match what they have in their head.
e.g.
- There are parody songs / music videos made for internet streams I watch by other fans of the show. In the past people would cheaply copy and paste stuff into a video editor and crudely animate it, and they weren't great. I literally laughed at a parody song, done like a sea shanty, that mocked a well-known e-celeb.
- I make cheesy YouTube thumbnails for my videos because I have zero budget for an artist and my skills with image editing software aren't stellar. I can use the AI to generate some of the thumbnail and do the rest in GIMP. I get something that looks better than if I didn't have the AI, IMO. This does democratise it, because I don't have to spend literally hundreds on a graphic designer.
- AI can help with animations. A friend of mine can take a 20 FPS animation and have the AI interpolate the frames accordingly. He has told me this saves him a huge amount of time.
f38zf5vdt · 3h ago
open blog
> But yes, thanks: I was once offered this challenge when faced with a Ren’Py problem, so I grit my teeth and posed my question to some LLM. It confidently listed several related formatting tags that would solve my problem. One teeny tiny issue: those tags did not and had never existed. Just about anything might be plausible! It can just generate Whatever! I cannot stress enough that this is worse than useless to me.
The probabilistic machine generated a probabilistic answer. Unable to figure out a use for the probabilistic machine in two tries, I threw it into the garbage.
Unfortunately, humans are also probabilistic machines. Despite speaking English for nearly a lifetime, errors are constantly produced by my finger-based output streams. So I'm okay talking to the machine that might be wrong in addition to the human that might be wrong.
> It feels like the same attitude that happened with Bitcoin, the same smug nose-wrinkling contempt. Bitcoin is the future. It’ll replace the dollar by 2020. You’re gonna be left behind. Enjoy being poor.
I mean, you were left behind. I was left behind. I am not enjoying being poor. Most of us were left behind. If we invested in Bitcoin like it was the future in 2011 we'd all be surfing around on yachts right now given the current valuation.
rob_c · 7h ago
Learning to use an LLM effectively is like learning to use scripting in Office: useless to a lot of people, dangerous if you get it wrong, full of many bad things from the internet, but still the most powerful thing to happen to computing in a generation. And, to follow the analogy, probably soon to be turned off by default for security reasons.
Just because you failed to use an LLM effectively the first time, or it doesn't live up to your version of the hype, doesn't mean you have a magic window into how sh1t they are. Many people who use them regularly know just how bad they are and what it's like to move beyond the original context window length. But having autocomplete on steroids is still the best thing to happen to making computing "fun" again in a generation. No more boilerplate, no more beating your head against a desk looking for that minor coding bug that could have been fixed by an awesome regex if only you had the time to learn it. No more having to break your stride and go through about 20 websites with popups to find that someone else solved the problem for you; just getting on with stuff and having fun whilst doing it.
Edit: no retort, just flagging and down-voting... lovely
moralestapia · 10h ago
>by a man whose other claims to fame are inventing JavaScript
But he did invent JavaScript ...
alternatex · 5h ago
I have a feeling you don't know what the "claim to fame" expression means :)
Doxin · 7h ago
calling it a "claim to fame" is not implying it's not true. It's an idiom. Wiktionary defines it as "That for which one has bragging rights; one's reason for being well-known or famous."
moralestapia · 4h ago
Nice, TIL.
paulddraper · 10h ago
> Either that, or live in some futuristic utopia like the EU where banks consider "send money to people" to be core functionality. But here in the good ol' U S of A, where material progress requires significant amounts of kicking and screaming, you had PayPal.
The irony of this rant next to the AI rant.
Progress is not uniformly distributed I guess.
WesolyKubeczek · 3h ago
I imagine future people, unless WW3 obliterates this iteration of civilisation, will be using slop for everything, while craftsmanship and doing the thing will happen in buildings not unlike the gyms of today, where you will pay for the privilege of exercising your mind.
defanor · 7h ago
Rather annoyed by the hype around LLMs myself, but I notice, both in this article and more generally, that some of the criticism seems to attribute to LLMs specifically the issues that are neither unique to them, nor caused by them.
> Why would someone using a really cool tool that makes them more productive… feel compelled to sneer and get defensive at the mere suggestion that someone else isn’t doing the same?
It sounds like it is about people seeing others doing things in a way they view as inefficient or wrong, and then trying to help. "Sneer and get defensive" does not sound like trying to be helpful, but they probably would not describe themselves as sneering and getting defensive, either.
> I might have strong opinions about what code looks like, because I might have to read it, but why would I — why would anyone — have such an intense reaction to the hypothetical editor setup of a hypothetical stranger?
As above, but even closer to this particular question, see the editor war.
> But the Bitcoin people make more money if they can shame everyone else into buying more Bitcoin, so of course they’re gonna try to do it. What do programmers get out of this?
Apart from "helping" others, a benefit of promoting technologies one uses and prefers may be a wider user base, leading to better support and the proliferation of technologies they view as good and useful.
> We’ve never had a machine that can take almost any input and just do Whatever.
Well, there is Perl. It is a joke (the statement, not the language), but the previous points actually made me think of programming languages with dynamic and weak typing, which similarly allow pretending that some errors do not happen, at the cost of being less correct, and doing whatever when things go wrong. Ambiguities in natural languages come to mind, too.
> That means they aren’t even reading the words they claim as their own!
Both homework and online discussions featured that before LLMs, too. For instance, with people sometimes linking (not necessarily copying) materials to prove a point, but materials contradicting it. Carelessness, laziness, lack of motivation to spend time and effort, are all old things.
> I can’t imagine publishing a game with, say, Midjourney-generated art, even if it didn’t have uncanny otherworldly surfaces bleeding into each other. I would find that humiliating. But there are games on the Switch shop that do it.
I have heard "AI-generated game" mentioned as a curiosity or a novelty, apparently making it a selling point. Same as with all the "AI-powered" stuff before LLMs. There is much of that used for marketing: blockchain and "big data" were added everywhere when those were hyped, and many silly things are similarly added to items outside of computing if they have the potential to sound cool at least to some (e.g., fad diets, audiophile hardware).
> But I think the core of what pisses me off is that selling this magic machine requires selling the idea that doing things is worthless.
This also sounds like yet another point in the clash between prevalent business requirements and more enthusiastic human aspirations. The economic and social systems, and cultures, probably have more to do with it than particular technologies. Pretty much any bureaucracy/corporate/enterprise-focused technologies tend to lessen the fun and enjoyment.
zpeti · 8h ago
> But the dream has died. It almost came true, and then it was immediately co-opted by a bunch of get-rich-quick grifters and a bunch of turbo-libertarians whose entire identities are defined by the Things that they Own and who want to cryptographically impose that on everyone else too because they’re mad that World of Warcraft nerfed warlock or something.
Oh come on. As someone who works with plenty of entrepreneurs: you won't find much more enthusiasm from anyone about what they are doing, or anyone who cares about it as much. It's up there with professional sports and professional artists.
Just because things didn't get built that were the absolute dream of what things could be, doesn't mean people didn't care and didn't put in all their efforts to build things. Just because they didn't meet this couch critic's expectations doesn't mean people didn't put the effort in.
I really don't like this attitude. He's really unhappy about paypal and stripe existing? What exactly is the alternative? What alternate universe are they dreaming about? Perfection doesn't exist.
alternatex · 5h ago
Presumably the universe promoted by crypto enthusiasts. Which is to say the universe they don't actually care for, but act like they want because it can bring them more money in the current universe.
rcxdude · 6h ago
>I really don't like this attitude. He's really unhappy about paypal and stripe existing? What exactly is the alternative? What alternate universe are they dreaming about? Perfection doesn't exist.
They're unhappy that something better doesn't exist, and that the dream of cryptocurrencies, that they would actually become a useful technology for everyday movement of money, has essentially been killed and buried by the sheer quantity of grift in the ecosystem. (To be fair, there are a lot of fundamental technology issues that make that dream difficult to achieve in practice, but it's sad that it seems to get basically none of the substantial resources poured into the ecosystem, and the general stench of it now makes legitimate use so much harder. So many people associate crypto with scams that adoption for real payments has actively gone backwards since the first few years.)
doug_durham · 9h ago
Wow, the lack of self-awareness on the part of the writer of this is stunning. To say "I don't go out of my way to dunk on people who use LLMs" and then write a multi-page blog post doing exactly that is fascinating.
You do you, and I'll do me. Perhaps spend more time coding which you say you like to do and less time sneering at people who use different tools.
arthurbrown · 9h ago
You've read that paragraph backwards. They are talking about LLM enthusiasts dunking on people who aren't using.
doug_durham · 9h ago
The author mocks LLM enthusiasts because they "dunk" on him, while writing a blog post that "dunks" on LLM enthusiasts. Total lack of awareness. If they had taken the time to read their blog before posting it, they might have avoided embarrassment.
arthurbrown · 9h ago
So just misquoting the author intentionally then?
You should pick some of the substance of the article to take issue with instead of jumping to be a victim. They clearly read it if they wrote it.
doug_durham · 8h ago
Here you go, a direct quote:
I know there are people who oppose, say, syntax coloring, and I think that’s pretty weird, but I don’t go out of my way to dunk on them.
Did you even read the blog post? The entire blog is dunking on people who use the tools.
latexr · 3h ago
> they might have avoided embarrassment.
What embarrassment? This post, like plenty of Evelyn’s writing, reached the front page of HN and has a ton of commentary in agreement. Your comment, on the other hand, was downvoted to the bottom of the thread.
The author also draws and publishes what is, in their own words, “pretty weird porn”.
You should voice your disagreement with the contents of the post and explain what they are, if that is what you’re feeling. Discussion is what HN is for. But to believe the author is or should be suffering any kind of embarrassment about this post is detached from reality.
tptacek · 10h ago
LLM output is crap. It’s just crap. It sucks, and is bad.
Still don't get it. LLM outputs are nondeterministic. LLMs invent APIs that don't exist. That's why you filter those outputs through agent constructions, which actually compile code. The nondeterminism of LLMs doesn't make your compiler nondeterministic.
All sorts of ways to knock LLM-generated code. Most I disagree with, all colorable. But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.
eviks · 9h ago
> or they suggested an elaborate and tedious workaround that would technically solve the problem (but introduce new ones).
There is no value in randomly choosing an API that exists. There is value in choosing an API that works.
When an LLM makes up an API that doesn't even exist, it indicates that the model isn't tied to the reality of the task of finding a working API, so filtering out the nonexistent ones won't make the remaining results match any better. But yes, they'll compile.
tptacek · 8h ago
Give me a break. First, that's not the claim the article makes. Second, that's not the experience of anybody who actually uses Claude Code or Gemini Desktop or whatever people are using this week. This is what I'm talking about: people just gaslighting.
LLMs can write truly shitty code. I have a <50% hit rate on stuff I don't have to rewrite, with Sketch.dev, an agent I like a lot. But they don't fail the way you or this article claim they do. Enough.
eviks · 8h ago
First, it is; you've just reduced the article's forest of claims down to a single tree to make it appear like your "solution" cuts it.
Second, speak for yourself; you have no clue about everybody else's experience, so you can't make such a universal claim.
Lastly, the article talks about the author's experience, not yours, so you're the only one who can gaslight the author here, not the other way around.
tptacek · 8h ago
I'm comfortable with the assertions I'm making here and stand by them.
beckthompson · 10h ago
AIs still frequently make stuff up - there isn't really a way to get out of that. Have they improved a lot in the last six months? 100%! But they still make mistakes, and it's quite common.
tptacek · 10h ago
LLM calls make stuff up. Your compiler can't make things up. An agent iterates LLM calls. When your LLM call makes an API up, your compiler will generate errors. The errors get fed back into the iterative loop. In pretty much every real case, the LLM corrects, but either way the result is clear: the code may be wrong, but it shouldn't hallucinate entire APIs.
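(A minimal sketch of that loop in Python, with a hypothetical llm_complete standing in for whatever model API you use and `go build` as the stand-in compiler; this is the shape of the mechanism, not anyone's actual agent:)

    import os
    import subprocess
    import tempfile

    def llm_complete(prompt: str) -> str:
        """Hypothetical stand-in for a call to whatever model you use."""
        raise NotImplementedError

    def agent_generate(task: str, max_iters: int = 5) -> str:
        prompt = task
        for _ in range(max_iters):
            code = llm_complete(prompt)
            # Write the candidate out and let a real compiler judge it.
            with tempfile.NamedTemporaryFile("w", suffix=".go", delete=False) as f:
                f.write(code)
                path = f.name
            try:
                result = subprocess.run(["go", "build", path],
                                        capture_output=True, text=True)
            finally:
                os.unlink(path)
            if result.returncode == 0:
                return code  # Compiles: a hallucinated API can't survive this check.
            # Feed the compiler's complaints back in and ask for a fix.
            prompt = (task + "\n\nYour previous attempt failed to compile:\n"
                      + result.stderr + "\nPlease fix it.")
        raise RuntimeError("no compiling candidate within the iteration budget")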
beckthompson · 10h ago
But just compiling doesn't mean that much and doesn't really solve the core issue of AIs making stuff up. I could hook a random word generator up to a compiler and it would also pass that test!
For example, just yesterday I asked an AI a question about how to approach a specific problem. It gave an answer that "worked" (it compiled!), but in reality it didn't make any sense and would have introduced a very nasty bug. What it wrote (it used a FrameUpdate instead of a normal Update) just didn't make sense at a basic level of how the framework worked.
tptacek · 10h ago
I'm not interested in this Calvinball argument. The post we're commenting on makes a clear claim: an LLM hallucinating entire APIs. Not surreptitiously sneaking subtly shitty stuff past a compiler.
This is my problem: not that people are cynical about LLM-assisted coding, but that they themselves are hallucinating arguments about it, expecting their readers to nod along. Not happening here.
kgwgk · 9h ago
> The post we're commenting on makes a clear claim: an LLM hallucinating entire APIs
You made a similar claim: LLMs invent APIs that don't exist
The AES block cipher core: also grievously insecure if used naively, without understanding what a block cipher can and can't do, by itself. Thus also an LLM call.
loire280 · 10h ago
A great solution to this problem, but it doesn't seem like this approach will generalize to problems in other fields, or even to more subtle coding confabulations that can't be detected by the compiler or static analysis.
tptacek · 10h ago
I vehemently agree with this. But it doesn't change the falsity of the claim in the article.
LLMs will always make stuff up because they are lossy. In the same way, if I ask you to list the methods of some random library object, you won't be able to; you use the documentation to pull that up, or your code-completion companion. LLMs are just now getting the tools for that.
beckthompson · 10h ago
Oh for sure I agree 100%! I was just saying that they will always make stuff up no matter what. Those are both good fixes but at its core it can only "make stuff up".
ipdashc · 10h ago
There was an article that made the rounds a few weeks ago that still rings true. Basically, it feels like one is going crazy reading either "extreme" of the whole LLM conversation, with one extreme (obviously) being the "AI can do anything" Twitter techbro types, but the other extreme being articles like this that claim it can't do anything.
I know the author already addressed this, literally calling out HN by name, but I just don't get it. You don't even need agents (though I'm sure they help); I still just use regular ChatGPT or Copilot or whatever, and it's still occasionally useful. You type in what you want it to do, it gives you code, and usually the code works. Can we appreciate how insane this would have been, what, half a decade ago? Are our standards literally "the magic English-to-code machine doesn't work 100% of the time, so it's total crap, utterly useless"?
I absolutely agree with the general thrust of the article, the overall sense of disillusionment, the impact LLM abuse is going to have on education, etc. I don't even particularly like LLMs. But it really does feel like gaslighting to the extent that when these essays make this sort of argument (LLMs being entirely useless for coding) it just makes me take them less seriously.
paulddraper · 10h ago
> it just makes me take them less seriously
Indeed. This is how to spot an ideologue with an axe to grind, not someone whose beliefs are shaped by dispassionate observation.
happytoexplain · 9h ago
Aside from trivial facts, beliefs cannot be, and should not be, shaped by dispassionate observation alone. Even yours are not. And framing it the way you have is simply the same fallacy as the one the author is accused of, just oppositely positioned.
d4rkn0d3z · 6h ago
Isn't this saying "We get it that our nondeterministic bullshit machine writes crap, but we are wrapping it in a deterministic finite state machine in order to bring back determinism. We call it 'Agentic'".
Seems like 40 years of effort making deterministic computing work in a non-deterministic universe is being cast aside because we thought nondeterminism might work better. Turns out, we need determinism after all.
Following this out, we might end up with alternating layers of determinism and nondeterminism each trying to correct the output of the layer below.
I would argue AI is a harder problem than any humans have ever tried to solve; how does it benefit me to make every mundane problem into the hardest problem ever? As they say on the internet: ...and now you have two problems, the second of which is always the hardest one ever.
62702b077f3 · 10h ago
> The garbage generator generates garbage, but if you run it enough times it gets something slightly-less-garbage that can satisfy a compiler! You're stupid if you don't think this is awesome!
ipdashc · 10h ago
I don't understand the point of this style of argument.
There are oh-so-many issues with LLMs - plagiarism/IP rights, worsening education, unmaintainable code - this should be obvious to anyone. But painting them as totally useless just doesn't make sense. Of course they work. I've had a task I want to do, I ask the LLM in plain English, it gives me code, the code works, I get the task done faster than I would have figuring out the code myself. This process has happened plenty of times.
Which part of this do you disagree with, under your argument? Am I and all the other millions of people who have experienced this all collectively hallucinating (pun intended) that we got working solutions to our problems? Are we just unskilled for not being able to write the code quickly enough ourselves, and should go sod off? I'm joking a bit, but it's a genuine question.
Shorel · 8h ago
You are right about this.
Also, someone mathematically proved that's enough. And then someone else proved it empirically.
There was an experiment where they trained 16 pigeons to detect cancerous or benign tumours from photographs.
Individually, each pigeon had an average 85% accuracy. But all pigeons (except for one outlier) together had an accuracy of 99%.
If you add enough silly brains, you get one super smart brain.
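(The "mathematical proof" here is presumably something like Condorcet's jury theorem: pool independent judges who each beat chance, take a majority vote, and accuracy climbs toward certainty as you add judges. A quick sanity check of the pigeon numbers, assuming independent errors:)

    from math import comb

    def majority_accuracy(n: int, p: float) -> float:
        """P(a strict majority of n independent judges is right, each with prob p)."""
        need = n // 2 + 1
        return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need, n + 1))

    print(majority_accuracy(16, 0.85))  # ~0.9998, consistent with the ~99% figure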
Lariscus · 8h ago
It's also mathematically proven that infinite monkeys typing on typewriters for eternity will recreate all the works of Shakespeare. It still takes someone with an actual brain to recognize the correct output.
Shorel · 8h ago
Yep, there's some positive feedback loop missing in all this LLM stuff.
literalAardvark · 10h ago
Counterpoint: the brain also generates mostly garbage, just slower.
KPGv2 · 10h ago
I had Copilot for a hot minute. When I wrote things like serializers and deserializers, it was incredible. So much time saved. But I didn't do it enough to make the personal cost worth it, so I cancelled.
It's annoying to have to hand-code that stuff. But without Copilot I have to. Or I can write some arcane regex and run it on existing code to get 90% of the way there. But writing the regex also takes a while.
Copilot was literally just suggesting the whole deserialization function after I'd finished the serializer, 100% correct code.
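(For anyone who hasn't written this kind of code: a serializer and its deserializer are near-mirror images, so once one exists the other is almost fully determined, which is exactly the kind of mechanical symmetry completion models do well at. A toy illustration with a made-up record type:)

    from dataclasses import dataclass

    @dataclass
    class User:
        name: str
        age: int
        email: str

    def serialize(u: User) -> dict:
        return {"name": u.name, "age": u.age, "email": u.email}

    # Once the serializer above is typed out, the inverse is fully implied by it,
    # which is why a completion model can usually suggest it in one shot.
    def deserialize(d: dict) -> User:
        return User(name=d["name"], age=d["age"], email=d["email"])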
Shorel · 8h ago
I remember writing LISP code that created the serialisers and deserialisers for me.
Now that everything is containerised and managed by Docker-style environments, I am thinking about giving SBCL another try; the end users only need to access the same JSON REST APIs anyway.
Everything old is new again =)
csomar · 10h ago
> LLM outputs are nondeterministic.
LLM outputs are deterministic. There is no intrinsic source of randomness. Users can add randomness (temperature) to the output and modify it.
> But this article is based on a model of LLM code generation from 6 months ago
There hasn't been much change in models from 6 months ago. What happened is that we have better tooling to sift through the randomly generated outputs.
I don't disagree with your message. You are being downvoted because a lot of software developers are butt-hurt by it. It is going to force a change in the labor market for developers. In the same way the author is butt-hurt that they didn't buy Bitcoin in the very early days (as they were aware of it) and missed the boat on that.
reasonableklout · 9h ago
Nit: in practice, even at temperature 0, production LLM implementations have some non-determinism. One reason is that many floating-point operations are non-associative even though the mathematical operations they approximate are, so results can change with the order in which the GPU's parallel hardware happens to combine them. For example, see: https://www.twosigma.com/articles/a-workaround-for-non-deter...
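(This is easy to demonstrate without a GPU; the same three numbers summed in two different orders:)

    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c == a + (b + c))  # False
    print((a + b) + c, a + (b + c))    # 0.6000000000000001 vs 0.6

A parallel reduction doesn't promise which of those orders it takes, so bitwise-identical outputs across runs or across hardware are off the table unless the reduction order is pinned.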
jkhdigital · 9h ago
I ran into this a bit while working on my PhD research that used LLMs for steganography. The output had to be deterministic to reverse the encoding, and it was—as long as you used the same hardware. Encoding a message on a PC and then trying to decode on a phone broke everything.
tptacek · 10h ago
> There hasn't been much change in models from 6 months ago.
I made the same claim in a widely-circulated piece a month or so back, and have come to believe it was wildly false, the dumbest thing I said in that piece.
So far the only model that has shown significant advancement and differentiation is GPT-4.5. I'd advise looking at the problem and reading GPT-4.5's answer. It'll show the difference from other "normal" models (including GPT-3.5), as it shows a considerable level of understanding.
Other normal models are now more chatty and have a bit more data. But they do not show increased intelligence.
Karrot_Kream · 8h ago
I was able to have Opus 4 one-shot it. Happy to share a screenshot if that wasn't your experience.
csomar · 5h ago
Interested to see your Opus 4 one-shot. I tried it very recently on Opus 4 and it burbled nonsense.
lovich · 9h ago
> agent constructions
> But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.
You’re ahead of the curve and wondering why others don’t know what you do. If you’re not an AI company, a faang, or an AI evangelist you likely haven’t heard of those solutions.
I’ve been trying to keep up with AI developments, and only learned about MCP and agentic workflows 1-2 months ago and I consider myself failing at keeping up with cutting edge AI development
Edit:
Also six months ago is Q1 2025, not 2024. Not sure if that was a typo or a need to remind you at how rapidly this technology is iterating
roncesvalles · 7h ago
>It feels like the same attitude that happened with Bitcoin, the same smug nose-wrinkling contempt. [...] But the Bitcoin people make more money if they can shame everyone else into buying more Bitcoin, so of course they’re gonna try to do it. What do programmers get out of this? Unless you work at Microsoft and have a lot of stock options, you aren’t getting rich off of how many people use Copilot.
The reason for this is as follows: a good number of software engineers make a fuckton of money, but not all of them (it's bimodal/trimodal/whatever). SWEs are also among the highest-paid people within tech companies, often earning more than the MBAs, the CPAs, and the JDs (who tend to come with massive chips on their shoulders, on account of actually having actual credentials, and tech companies being prestigious destinations within these professions).
It seems that over time, this phenomenon has caused the emergence of a large coterie of people whom I would kindly describe as "bearish" on these high-end software engineering salaries, who double down on any narrative that undermines the livelihood of the software engineer, basically just to make themselves feel better.
tl;dr people desperately want AI to succeed because they're envious of software engineers
It is not the artists who are primarily at risk here, but the audience for their work. Artists will continue to disappear for the same reason they always have: because their prospective audience does not understand them.
The majority of artists, and of all other groups, are in fact mediocre with mediocre virtues, so enough incentives would turn most of them into Whatever shillers like the post describes.
So a non-expert cannot easily determine, even if they do stumble upon "serious art" by happenstance, whether it's just another empty scheme or indeed something more serious.
Maybe if they spend several hours puzzling over the artist's background, incentives, network, claims, past works, etc., they can be 99% sure. But almost nobody likes any particular piece of work that much at first glance, enough to put in that much effort.
Far fewer people make their living as musicians than did even thirty years ago. Being a musician is no longer a viable middle-class career. Jaron Lanier, who has written on this, has argued that it's the direct result of the advent of the internet, music piracy, and streaming -- two of which originally were expected or promised to provide more opportunities for artists, not take them away.
So there really are far fewer drummers, and fewer, worse opportunities for those who remain, than there were within the living memory of even most HN users, not because some specific musical technology advanced but because technological advancement provided an easier, cheaper alternative to human labor.
Sound familiar yet?
What's your basis for this claim? Please provide some data showing the number of drummers over time, or at least musicians, over the last fifty years or so. I tried searching and couldn't find anything, but you're so confident, I'm sure you have a source you could link.
Being good at coming up with ideas, at critically reading something, at synthesizing research, at writing and editing, are all things that take years to learn. This is not the same as learning the mechanics that a calculator does for you.
I would not buy a calculator that hallucinated wrong answers part of the time. Or a microwave oven that told you it grilled the chicken but it didn't and you have to die from Salmonella poisoning.
(I’ve heard the fans that you hear are there to reflect the microwaves and make them bounce all over the place, but I don’t know if that’s true. Regardless, most models have a spinning plate which constantly repositions the food as it cooks.)
Older microwaves had a fan-like metal stirrer inside the cooking box, that would continuously re-randomize where the waves went. This has been out of fashion for several decades.
Like:
1- Put it on the edge of the plate, not in the middle
2- Check every X seconds and give it a stir
3- Don't put metal in
4- Don't put sealed things in
5- Adjust time for wetness
6- Probably don't put in dry things? (I believe you need water for a microwave to really work? Not sure, haven't tried heating a cup of flour or making caramel in the microwave)
7- Consider that some things heat weirdly, for example bread heats stupid quick and then turns into stone equally as quick once you take it out.
...
We teach our kids about microwave oven safety for this reason.
As an anecdote, in my country there is a very popular brand of supermarket pizzas, Casa Tarradellas. I never buy them but a friend of mine used to eat them really frequently. So once he shows up at my house with one, and I say OK, I'm going to heat it. I serve it, he tries a bite and is totally blown away. He says "What did you do? I've been eating these pizzas for years and they never taste like this, this is amazing, the best Casa Tarradellas pizza I've ever had".
The answer was that he used the microwave and I had heated it in the regular oven...
I have never had that issue when heating stuff up. Your pizza example is not reheating (and generally you never want to reheat anything that’s supposed to be crispy in the microwave; though not on the stove top either).
Obviously if one product hallucinates and one doesn't, it's a no-brainer (cough Intel FPUs). But in a world where the only calculators available hallucinated at the 0.5% level, you'd probably still have one in your pocket.
And obviously if the calculator hallucinated 90% of the time for a task which could otherwise be automated, you'd just use that other approach.
Slide rules are good for only a couple of digits of precision. That's why shopkeepers used abacuses, not slide rules.
I have a hard time understanding your hypothetical. What does it mean to hallucinate at the 0.5% level? That repeating the same question has a 0.5% chance of giving the wrong answer but otherwise it's precise? In that case you can repeat the calculation a few times to get high certainty. Or that even if you repeat the same calculation 100 times and choose the most frequent response then there's still a 0.5% chance of it being the wrong one?
Or that values can be consistently off by within 0.5% (like you might get from linear interpolation)? In that case you are a bit better than a slide rule for estimating, but not accurate enough for accounting purposes, to name one.
Does this hypothetical calculator handle just plus, minus, multiply, and divide? Or everything that a TI 84 can handle? Or everything that WolframAlpha can handle?
If you had a slide rule and knew how to use it, when would you pay $40/month for that calculator service?
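(For what it's worth, under the first reading, independent random errors at 0.5%, repetition kills the problem quickly. A wrong answer is unlikely to repeat itself exactly, so requiring two matching runs out of three already makes a silent failure vanishingly rare; even the cruder "majority of 3 runs wrong" bound is tiny:)

    p = 0.005
    # P(at least 2 of 3 independent runs are wrong), binomial tail:
    p_majority_wrong = 3 * p**2 * (1 - p) + p**3
    print(p_majority_wrong)  # ~7.5e-05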
Shopkeepers did integer math, not decimal. They had no need for a slide rule; an abacus is faster at integer math, while a slide rule is used for dealing with real numbers.
https://en.wiktionary.org/wiki/couple - "(informal) a small number"
FWIW, "Maximum accuracy for standard linear slide rules is about three decimal significant digits" - https://en.wikipedia.org/wiki/Slide_rule
While yes, "Astronomical work also required precise computations, and, in 19th-century Germany, a steel slide rule about two meters long was used at one observatory. It had a microscope attached, giving it accuracy to six decimal places" (same Wikipedia page), remember that this thread is about calculating devices one might carry in one's pocket, have on one's self, or otherwise be able to "grab".
(There's a scene in a pre-WWII SF story where the astrogators on a large interstellar FTL spacecraft use a multi-meter long slide rule with a microscope to read the vernier scale. I can't remember the story.)
My experience is that I can easily get two digits, but while I'm close to the full three digits, I rarely achieve it, so I wouldn't say you get three decimal digits from a slide rule of the sort I thought was relevant.
I'm a novice at slide rules, so to double-check I consulted archive.org and found "The slide rule: a practical manual" at https://archive.org/details/sliderulepractic00pickrich/page/...
> With the ordinary slide rule, the accuracy obtainable will largely depend upon the precision of the scale spacings, the length of the rule, the speed of working, and the aptitude of the operator. With the lower scales it is generally assumed that the readings are accurate to within 0.5 per cent. ; but with a smooth-working slide the practised user can work to within 0.25 per cent
That's between 2 and 3 digits. You wouldn't do your bookkeeping with it.
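(Translating the manual's percentages into digits: a relative error of e leaves you roughly -log10(e) significant figures, so:)

    from math import log10
    for e in (0.005, 0.0025):  # the manual's 0.5% and 0.25%
        print(f"{e:.4f} -> {-log10(e):.1f} significant digits")
    # 0.0050 -> 2.3 significant digits
    # 0.0025 -> 2.6 significant digits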
(thanks Rory Sutherland for this analogy)
If an LLM hallucinates and you don't know better, it can be bad. Hopefully people are double checking things that really matter, but some things are a little harder to fact check.
I just type the address into Google maps, or place a pin manually, then hit the start button. It'll tell me every step of the way. Keep right at the fork. In a hundred meters, turn left. Turn left. Take the second exit in the roundabout. Your destination is a hundred meters ahead on the right.
It's great and it works almost flawlessly. Even better if you have a passenger able to keep an eye on it for those times when it isn't flawless.
Citation needed.
Alec Watson of Technology Connections points out that GPS routing defaults to minimizing time, even when that may not be the most logical way to get somewhere.
His commentary, which starts at https://youtu.be/QEJpZjg8GuA?t=1804 , is an example of his larger argument about the complacency of letting automation do things for you.
His example is a Google Maps routing which saves one minute by going a long way around to use a fast expressway (plus a $1 toll), rather than more direct but slower state routes and surface streets. It optimizes one variable - time - out of the many variables which might be important to you: wear and tear, toll costs, and the delight of knowing more about what's going on in the neighborhood.
He makes the point that he is not calling for a return to paper maps, but rather for rejecting automation complacency, which I'll interpret as letting the GPS figure everything out for you.
We've all heard stories of people depending on their GPS too much then ending up stuck on a forest road, or in a lake, or other place which requires rescue - what's the equivalent failure mode with a calculator?
I use it all the time, pretty much zero issues.
I'm also aware of the failure modes with GPS complacency, including its incomplete knowledge of the variables that I find important.
And that's with something that makes mistakes far less often than LLMs and related technology.
Which is why I don't think that your mention of GPS use is a strong counter-example to bryanrasmussen's comment against using hallucinating devices.
However, I do have a pressure cooker and a rice cooker that gets a lot of use. They're extremely reliable and don't use much electricity and I can schedule what they do, which is bulk cooking without me having to care about it while it happens.
In two years, that won't be the case.
It's the same for virtually every other arts-based job. An economy that currently supports, say, 100% of those people will at most be able to support 10-30% of them in a few years' time.
> It's ridiculous to think drawing will become a lost art because of LLM/Diffusal
Map reading is pretty much a dead art now (as someone who leads hikes, I've seen it first hand)
Memorising books/oral history is also a long dead art.
Oral story telling is also a dead art, as is folk music, compared to its peak.
Sure, _rich_ people will be able to do all the arts they want. Everyone else won't.
For example, I have no knowledge of film editing or what “works” in a sequence, but if I wanted to I could create something more than passable with AI.
Why would someone buy a plate off her, when they could get one from IKEA for 1.50 eur?
Yet ceramics is not a dead art. Beats me?
But 200 years ago there were loads of ceramics manufacturers, employing hundreds of thousands of skilled potters. Even 50 years ago, there were thousands of skilled ceramicists in the UK. Now it's single-person artisans, like your very talented other half.
Now, that reduction in workforce took 200 years and mirrors the industrial revolution. GenAI looks like it's going to speed-run that in ~5-7 years.
I should be clearer: there is a difference between a dead art (memorizing stories) and a career that is non-viable for all but 1% of the people doing it now. I'm talking about the latter.
curious rich people.
I guess the analogy isn't that bad! I'd be pretty upset if a professional cook made my steak in a microwave.
A very good example! (...although probably not how you think it is ;)
Indeed the world record is achieved by a very limited number of people under stringent conditions.
Meanwhile people by and large† take their cars to go to the bakery which on foot would be 10 minutes away, to disastrous effect on their health.
And by "cars" I mean "technology", which, while a fantastic enabler of things impossible before, has turned more people into couch potatoes than athletes.
† Comparatively to world record holders.
No it's not (like OP's article says). With a calculator you punch in 10 + 9 and get 2 immediately, and this was 50+ years ago. With an LLM you type in "what is 10 + 9" and get three paragraphs of text after a few seconds. (this is false, I just tried it and the response is "10 + 9 = 19" but I'm exaggerating for dramatic effect). With a microwave you yeet in food and press a button and stuff happens the same way, every time.
Sure, if you abstract it to "doing things in an easier and lazier way", LLMs are just the next step, like IDEs with built-in error checking and code generation have been for 20 years. But it's more vague than pressing a button to do a thing.
Your calculator is broken.
> With an LLM you type in "what is 10 + 9" and get three paragraphs of text after a few seconds. (this is false, I just tried it and the response is "10 + 9 = 19" but I'm exaggerating for dramatic effect).
So you’re arguing against a strawman?
Obesity rates have kept "improving" since the car was invented, up to becoming a major public health crisis and the main amplifier of complications and mortality when the pandemic struck.
Oh, and the 100m sprint world record has stood for more than a decade and a half now, which means either we've reached the human optimum, or progress in anti-doping technology has forced a regression in performance.
Well you sure showed them.
TFA makes a very concrete point about how the Whatever machine is categorically different from a calculator or a handsaw. A calculator doesn't sometimes hallucinate a wrong result. A saw doesn't sometimes cut wavy lines instead of straight lines. They are predictable and learnable tools. I don't see anyone addressing this criticism, only straw-manning.
Even though that is a generalization you cannot prove, you implicitly admit that it will prevent everybody else from getting any skills. Which is quite a bad outcome.
> powerlifting is a thing
Those people have a different motivation: looks, competition, prestige, power. That doesn't motivate people to learn to draw.
Your easy dismissal is undoubtedly shared by many, but it is hubris.
We go from a society where only a very few people are literate in math to one where everyone has a literal supercomputer at all times. What do you think that would do for math literacy in a society? Would everyone suddenly want to learn algebra and calculus? Or would the vast majority of people use the easy machine and accept its answers without question or understanding?
There are clear differences. First off, a calculator and a microwave are quite different from each other, but so is an LLM. All are time savers, in the sense that a microwave saves time defrosting and a calculator saves time calculating, versus a human.
They save time to achieve a goal. However, calculators come with a penalty: by making multiplication easier, they make the user worse at it.
LLMs are like calculators but worse. Both are effort savers and thus come with a huge learning penalty, and LLMs are imprecise enough that you need to learn to know better than them.
You generally don't need a lengthy explanation because it's common sense. When someone doesn't get it then people have to go into lengthy convoluted explanations because they are trying to elucidate common sense to someone who doesn't get it.
I mean how else do I elucidate it?
LLMs are different from any revolutionary technology that came before it. The first thing is we don't understand it. It's a black box. We understand the learning algorithm that trains the weights, but we don't understand conceptually how an LLM works. They are black boxes and we have limited control over them.
You are talking to a thing that understands what you say to it, yet we don't understand how this thing works. Nobody in the history of science has created anything similar. And yet we get geniuses like you who can use a simple analogy to reduce the creation of an LLM to something like the invention of a car and think there's utterly no difference.
There is a sort of inflection point here. It hasn't happened yet but a possible future is becoming more tangible. A future where technology surpasses humanity in intelligence. You are talking to something that is talking back and could surpass us.
I know the abundance of AI slop has made everyone numb to the events of the past couple of years. But we need to look past that. Something major has happened, something different from the achievements and milestones humanity has passed before.
Maybe you're new here friend...
Perhaps you do not understand it, but many software engineers do understand.
Of the human brain we can still say that we don't understand it.
No, they do not. LLMs are by nature a black-box problem-solving system. This is not true of all the other machines we have, which may be difficult for some or even most humans to understand, but which allow specialists to understand WHY something is happening. That question is unanswerable for an LLM, no matter how good you are at Python or the math behind neural networks.
Let me put it plainly. If we understood LLMs we would understand why hallucinations happen and we would subsequently be able to control and stop hallucinations from happening. But we can’t. We can’t control the LLM because of lack of understanding.
All the code is available on a computer for us to modify every single parameter. We have full access, and we still can't control the LLM, because we don't understand or KNOW what to do. This is despite the fact that we have absolute control over the value of every single atomic unit of an LLM.
We need to learn to make technology truly benefit the many. Also in terms of power.
I've heard people express that they liked working in retail. By extension somebody must have enjoyed being a bank teller. After all, why not? You get to talk to a lot of people, and every time you solve some problem or answer a question and get thanked for it you get a little rush of endorphins.
Many jobs that suck only suck due to external factors like having a terrible boss or terrible customers, or having to enforce some terrible policy.
But imagine working in a nice cafe in a quiet small town, or a business that's not too frantically paced, like a clothing store. Add some perks like not always doing the same job and a decent boss, and it shouldn't be too bad. Most any job can be drastically improved by decreasing the workload, cutting hours and adding some variety. I don't think being a cashier is inherently miserable. It's just the way we structure things most of the time makes it suck.
Just like you think a human touch makes art special, a human touch can make a mundane job special. A human serving coffee instead of a machine can tell you about what different options are available, recommend things, offer adjustments a machine might not, chat about stuff while it's brewing... You may end up patronizing a particular cafe because you like the person at the counter.
There's a common fallacy that tries to argue that it'll be alright over time, no matter what happens. Given enough time, you can also say that about atomic wars. But that won't help the generations that are affected.
If you just sit on your hands complaining about the lack of opportunities then you won't get any sympathy from me. People aren't entitled to live wherever they want, humanity's entire thing is adaptability. So adapt. Life is what you make it.
I wouldn't be surprised if at some point in the near future something like "Adapt. Life is what you make it" could be read in big bold letters above the entrance of a place like Alligator Alcatraz.
People adapt to all kinds of stuff all the time. Saying adapting isn't a thing for most people is ridiculous. Of course it's a thing. It's what you do when your current situation isn't working. You adapt.
That said, yes, what about them? These are people with real skin in the game - people who spent years learning their craft expecting it would be their life-long career.
Do we simply exclaim "sucks to be you!"?
Do we tell out-of-work coal miners to switch to a career in programming with the promise it will be a lucrative career move? And when employment opportunities in software development collapse, then what?
All while we increasingly gate health care on being employed?
Software dev opportunities won't collapse any time soon, any half decent dev who's tried vibe coding will tell you that much. It's a tool developers can use, it's not a replacement.
It’s why it’s so exciting.
What's the benefit of LLMs to the many who can barely operate a search engine?
I am sorry, but thinking this will benefit the many is delusional. Its main use is making rich people richer by saving expenses on people's wages. Tell me, how are these people going to get a job once their skills are made useless?
In terms of the artist being accessible to overseas fans it's a great improvement, but I do wonder if I had grown up with this, would I have had any motivation to learn?
For a specific example, when 2 grammar points seem to mean the same thing, teachers here in Japan would either not explain the difference, or give a confusing explanation in Japanese.
It's still private-ish/only for myself, but I generated all of this with LLMs and am using it to learn (I'm around N4~N3):
- Grammar: https://practice.cards/grammar
- Stories, with audio (takes a bit to load): https://practice.cards/stories
It's true though that you still need the motivation, but there are 2 sides of AI here and just wanted to give the other side.
But my man, how do you know if it explains perfectly well or is just generating plausible-sounding slop? You're learning, so by definition you don't know!
I also checked with some Japanese speakers, and my own notes contain more errors than the LLM's output by a large margin.
Similarly, I used the Busuu app for a while. One of its features is that you can speak or type sentences, and ask native speakers to give feedback. But of course, you have to wait for the feedback (especially with time zone differences), so they added genAI feedback.
Like, what’s the point of this? It’s like that old joke: “We have purposely trained him wrong, as a joke”!
It's killing the cumulative, progressive way of learning that rewards those who try and fail many times before getting it right.
The "learning" is effectively starting to be killed.
I just wonder what would happen to a person after many years using "AI" and suddenly not having access to it. My guess is that you become useless and with a highly diminished capacity to perform even the most basic things by yourself.
This is one of many reasons why I'm so against all the hype that's going on in the "AI" space.
I keep doing things the old school way because I fully comprehend the value of reading real books, trying, failing and repeating the process again and again. There's no other way to truly learn anything.
Does this generation understand the value of it? Will the next one?
The only silver lining I can see is that a new perspective may be forced on how well or badly we’ve facilitated learning, usability, generally navigating pain points and maybe even all the dusty presumptions around the education / vocational / professional-development pipeline.
Before, demand for employment/salary pushed people through. Now, if actual and reliable understanding, expertise and quality is desirable, maybe paying attention to how well the broader system cultivates and can harness these attributes can be of value.
Intuitively though, my feeling is that we’re in some cultural turbulence, likely of a truly historical magnitude, in which nothing can be taken for granted and some “battles” were likely lost long ago when we started down this modern-computing path.
At any point of progress in history you can look backwards and forwards and the world is different.
Before tractors a man with an ox could plough x field in y time. After tractors he can plough much larger areas. The nature of farming changes. (Fewer people needed to farm more land. )
The car arrives, horses leave. Computers arrive, the typing pool goes away. Typing was a skill, now everyone does it and spell checkers hide imperfections.
So yeah LLMs make "drawing easier". Which means just that. Is that good or bad? Well I can't draw the old fashioned way so for me, good.
Cooking used to be hard. Today cooking is easy, and very accessible. More importantly good food (cooked at home or elsewhere) is accessible to a much higher % of the population. Preparing the evening meal no longer starts with "pluck 2 chickens" and grinding a kilo of dried corn.
So yeah, LLMs are here. And yes things will change. Some old jobs will become obsolete. Some new ones will appear. This is normal, it's been happening forever.
This is 100% just the mechanization of a cultural refinement process that has been going on since the dawn of civilization.
I agree with you regarding how the bounty of GenAI is distributed. The value of these GenAI systems is derived far more from the culture they consume than the craft involved in building them. The problem isn't theft of data, but a capitalist culture that normalizes distribution of benefit in society towards those that are already well off. If the income of those billionaires and the profits of their corporations were more equitably taxed, it would solve a larger class of problems, of which this problem is an instance.
Firstly the "theft component" isn't exactly new. There have always been rich and poor.
Secondly everyone is standing on the shoulders of giants. The Beatles were influenced by the works of others. Paul and John learned to write by mimicking other writers.
That code you write is the pinnacle of endless work done by others. By Ada Lovelace, and Charles Babbage, and Alan Turing and Brian Kernighan and Dennis Ritchie and Doug Engelbart and thousands and thousands more.
By your logic the entire output of all industries for all foreseeable generations should be universally owned. [1]
But that's not the direction we have built society on. Rather society has evolved in the US to reward those who create value out of the common space. The oil in Texas doesn't belong to all Texans, it doesn't belong to the pump maker, it belongs to the company that pumps the oil.
Equally there's no such thing as 'your data'. It's your choice to publish or not. Information cannot be 'owned'. Works can be copyrighted, but frankly you have a bigger argument on that front going after Google (and Google Books, not to mention the Internet Archive) than AI. AI may train on data you produced, but it does not copy it.
[1] I'm actually for a basic income model, we don't need everyone working all day like it's 1900 anymore. That means more taxes on companies and the ultra wealthy. Apparently voters disagree as they continue to vote for people who prefer the opposite.
The two parties that end up viable tend to be financed quite heavily by said wealthy, including being propped up by the media said wealthy control.
The more right-wing side will promise tax cuts (which, for the poor, somehow never seem to materialize) while the more left-wing side will promise to tax the rich (but in an easily dodgeable way that only ends up affecting the middle class).
Many people understand this and it is barely part of the consideration in their vote. The last election in the US was a social battle, not really an economic one. And I think the wealthy backers wanted it that way.
I would contest some of your points though.
Firstly, not every country votes, not all that vote have 2 viable parties, so that's a flaw in your argument.
Equally most elections produce a winner. That winner can, and does, get stuff done. The US is paralyzed because it takes 60% to win the senate, which hasn't happened for a while. So US elections are set up so "no one wins". Which of course leads to overreach etc that we're seeing currently.
There's a danger when living inside a system that you assume everywhere else is the same. There's a danger when you live in a system that heavily propagandizes its own superiority, that you start to feel like everywhere else is worse.
If we are the best, and this system is the best, and it's terrible, then clearly all hope is lost.
But what if, maybe, just maybe, all those things you absolutely, positively know to be true are not true? Is that even worth thinking about?
But I know people whose preference would be something like Ron Paul > Bernie Sanders > Trump > Kamala, which might sound utterly bizarre until you realize that there are multiple factors at play and "we want tax cuts for the rich" is not one of them.
People are welcome to whatever preference they like. Democracy lets them choose. But US democracy is deliberately set up to prefer the "no one wins" scenario. That's not the democracy most of the world uses.
If we don't have something better to do, we'll all be at home doing nothing. We all need jobs to afford living, and already today many have bullshit jobs. Are we heading toward a world where 99.9% of the people need a bullshit job just to survive?
>> We all need jobs to afford living
In many countries this is already not true. There is already enough wealth that there is enough for everyone.
Yes, the western mindset is kinda "you don't work, you don't get paid". The idea that people can "freeload" on the system is offensive at a really deep emotional level. If I suggest that a third of the people could work, and the other two thirds do nothing but get supported, most will get distressed [1]. The very essence of US society is that we are defined by our work.
And yet if 2% of the work force is in agriculture, and produces enough food for all, why is hunger a thing?
As jobs become ever more productive, perhaps just -considering- a world where worth is not linked to output is a useful thought exercise.
No country has figured this out perfectly yet. Norway is pretty close. Lots of Europe has endless unemployment benefits. Yes, there's still progress to be made there.
[1] of course, even in the US, already it's OK for only a 3rd to work. Children don't work. Neither do retirees. Both essentially live off the labor of those in-between. But imagine if we keep raising the start-working age, while reducing retirement benefits age....
And universally, if you have nothing, you lead a very poor life. You live in minimal housing (a trailer park, slums, or housing without running water or working sewage). You don't have a car, you can't travel, and education opportunities are limited.
Most kids want to become independent, so they have control over their spending and power over their own lives. Poor retirees are unhappy, sometimes even have to keep working to afford living.
Norway is close because they have oil to sell, but if no one can afford to buy oil, and they can't afford to buy cars, nor products made with oil, Norway will soon run out of money.
You can wonder why Russia is attacking Ukraine; Russia has enough land, it doesn't need more. But in the end there will always be people motivated by more power and money, which makes it impossible to create this communism 2.0 that you're describing.
I'm not suggesting equality or communism. I'm suggesting a bottom threshold where you get enough even if you don't work.
Actually Norway gets most of that from investments, not oil. They did sell oil, but invested that income into other things. The sovereign wealth fund now pays out to all citizens in a sustainable way.
Equally, your understanding of dole living in Europe is incomplete. A person on the dole in the UK is perfectly able to live in a house with running water etc. I know people who do.
Creating a base does not mean "no one works". Lots of people in Europe have a job despite unemployment money. And yes, most if not all jobs pay better than unemployment. And yes, lifestyles are not equal. It's not really communism (as you understand it).
This is not about equal power or equal wealth. It's about the idea that a job should not be linked to survival.
Why is 60 the retirement age? Why not 25? That sounds like a daft question, but understanding it can help you understand how some things that seem cast in stone really aren't.
Living on welfare in the Netherlands is not a good life, and definitely not something we should accept for the majority of the people.
Being retired on only a state pension is a bad life, you need to save for retirement to have a good life. And saving takes time, that's why you can't retire at 25.
I'm saying that the blind acceptance of the status quo does not allow for that status to be questioned.
You see the welfare amounts, or retirement amounts as limited. Well then, what would it take to increase them? How could a society increase productivity such that more could be produced in less time?
Are some of our mindsets preventing us from seeing alternatives?
Given that society has reinvented itself many times through history, are more reinvention possible?
With the WWW we thought everyone having access to all information would enlighten them, but without knowledge people do not recognize the right information, and are more likely to trust (mis)information that they think they understand.
What if LLMs give us all the answers that we need to solve all problems, but we are too uninformed and unskilled to recognize these answers? People will turn away from AI, and return to information that they can understand and trust, even if it's false.
Anyway, nothing new actually, we've seen this with science for some time now. It's too advanced for most people to understand and validate, so people distrust it and turn to other sources of information.
Before that, I had a TI-99/4A at home, without a tape drive and with the family TV as a display. I mainly was into creating games for my friends. I did all my programming on paper, as the "screen time" needed to be maximized for actually playing the games after typing them in from the paper notebook. Believe it or not, bugs were very rare.
Much later at uni there were computer rooms with Macs with a floppy drive. You could actually just program at the keyboard, and the IDE even had a debugger!
I remember observing my fellow students endlessly type-run-bug-repeat until it "worked" and thinking "these guys never learned to reason through their program before running it. This is just trial and error. Beginners should start on paper".
Fortunately I immediately caught myself and thought, no, this is genuine progress. Those that "abuse" it would more than likely not have programmed 'ye old way' anyways, and some others will genuinely become very good regardless.
A second thing: in the early home computer year(s) you had to program. The computer just booted into the (most often BASIC) prompt, and there was no network or packaged software. So anyone that got a computer programmed.
Pretty soon, with systems like the Vic-20, C64 and ZX Spectrum there was a huge market in off the shelf game cassettes. These systems became hugely popular because they allowed anyone to play games at home without learning to program. So only those that liked programming did. Did that lose beginner programmers? Maybe some, for sure.
The best artists will spot holes in the culture, and present them to us in a way that's expertly composed, artful and meticulously polished. The tools will let them do it faster, and to reach a higher peak of polish than in the past, but the artfulness will still be the artist's.
Futuristic tools aren't replacing art, they're creating a substrate for a higher order of art. Collages are art, and at its most crude, this higher order art reduces to digital collages of high quality generated assets with human intention. With futuristic tools, art becomes reductive rather than constructive. To quote Michelangelo's response to how he made David: "It is simple, I just removed everything that wasn't David"
Fair point; I think this feeling is exacerbated by all the social media being full of people looking like they're good at what they do already, but it rarely shows the years of work they put in beforehand. But that's not new, compare with athletes, famous people, fictional characters, etc. There's just more of it and it's on a constant feed.
It does feel like people will just stop trying though. And when there's a shortcut in the form of an LLM, that's easy. I've used ChatGPT to write silly stories or poems a few times; I look at it and think "you know, if I were to sit down with it proper I could've written that myself". But that'd be a time and effort investment, and for a quick gag that will be pushed down the Discord chat within a few minutes anyway, it's not worth it.
This should be comparable to how many fewer people in the West today know how to work a farm or build machinery. Each technological shift comes at a cost of population competence.
I do have a feeling that this time it could be different, because this shift has a meta-quality to it. It has never been easier to acquire knowledge, at least theoretical knowledge. But the incentives for learning are shifting in strange directions.
IMO, because it's good one way or another. I'm not reading your writing because I imagine you toiled over every word of it, but simply because I started reading and it seemed worthwhile to read the rest.
Or, to use a different metaphor, these comments are mentally nutritional Doritos, not a nicely prepared restaurant meal. If your restaurant only serves Dorito-level food, I won't go there even if I do consume chips quite often at home.
LLMs will accelerate the pace of this assimilation. New trends and new things will become popular and generic so fast that we'll have to get really inventive to stay ahead of the curve.
Two years later he thought of a project he really wanted to make. He didn't succeed, but it's very clear he changed his mind.
> up until now, you had no choice and to keep making crappy pictures and playing crappy songs until you actually start to develop a taste for the effort, and a few years later you find yourself actually pretty darn competent at the thing. That's a pretty virtuous cycle.
https://www.deviantart.com/scotchi/art/keep-tryin-690533685
Exactly.
Only putting in the work is going to get anyone places. And yes, it takes _time_, like, tons of it, and there's no shortcut.
And I can explain in excruciating detail how to do an ollie, or even a kickflip, and from a physics point of view you would totally get it, but to land the damn thing you simply have to put in a shitload of time on the board and fail over and over and over again.
We come from a place where we've been trained as engineers or whatever to do this or that and - somewhat - critically think about things. Instead picture yourself in the shoes of a beginner: how would you, a beginner who has not built their own mental model of discipline $foo, even begin to be critical of AI output?
But we're being advertised magic powder and sweat-inducing overalls and whathaveyou that make you lose weight a) instantly† and b) without going to the gym and, well, putting in the effort††.
LLMs are the speed diet of the mind.
† comparatively
†† not that putting any arbitrary amount of effort is going to get you places, there _is_ a thing such as wasteful effort; but NOT putting the effort is a solid guarantee that you won't.
But personally, I don't feel as upset over all this as he does. It seems that all my tech curmudgeonliness over the years is paying off these days, in spades.
Humbly, I suggest that he and many others simply need to disconnect more from The Current Thing or The Popular Thing.
Let's look at what he complains about:
* Bitcoin. Massive hype, never went anywhere. He's totally right. That's why I never used it and barely even read about it. I have no complaints because I don't care. I told myself I'd care if someone ever built something useful with Bitcoin. 10 years later they haven't. I'm going back to bed.
* Windows. Man I'm glad I dodged that bullet and became a Linux user almost 15 years ago. Just do it. Stuff will irk you either way but Linux irks don't make you feel like your dignity as a human being is being violated. Again, he's right that Windows sucks; I just don't have to care, because I walked away.
* Bluesky, Twitter, various dumb things being said on social media. Those bother him too. Fortunately, these products are optional. I haven't logged into my Twitter account for three years. I'll certainly never create a Bluesky one. On some of my devices I straight up block many of these crapo social sites like Reddit etc. in /etc/hosts. I follow the RSS feeds of a few blogs, plus one for the local timeline of a Mastodon instance. Takes ten minutes and then I go READ BOOKS in my spare time. That's it. He is yet again right, social media sucks, it's the place where you hear about all this dumb stuff like Bitcoin; I just am not reading it.
I'm not trying to toot my own horn here; it's just that when you disconnect from all the trash, you never look back, and the frustrations of people who haven't seem a little silly. You can just turn all of this stuff off. Why don't you? Is it an addiction? Treat it like one if so. I used to spend 6 hours a day on my phone and now it's 1 hour, mainly at lunch, because the rest of the time it's on silent, in a bag, or turned off, just like a meth addict trying to quit shouldn't leave meth lying around.
Listen to Stallman. Listen to Doctorow. These guys are right. They were always right. The free alternatives that respect you exist. Just make the leap and use them.
I have not yet figured out why anyone would choose this behaviour in a text editor. You have to press something to exit the delimited region anyway, whether that be an arrow key or the closing delimiter, so just… why did the first person even invent the idea, which just complicates things and also makes it harder to model the editor’s behaviour mentally? Were they a hunt-and-peck typist or something?
In theory, it helps keep your source valid syntax more of the time, which may help with syntax highlighting (especially of strings) and LSP/similar tooling. But it’s only more of the time: your source will still be invalid frequently, including when it gets things wrong and you have to relocate a delimiter. In practice, I don’t think it’s useful on that ground.
Poor typists always slow down processes, and frequently become a bottleneck, local or global. If you can speed up a process by only ten seconds per Thing, by improving someone’s typing skills or by fixing bad UI and workflow, you only have to process 360 Things in a day (which is about one minute per Thing) to have saved an entire hour.
It can be very eye-opening to watch a skilled typist experienced with a software system that was designed for speed, working. In more extreme cases, it can be that one person can do the work of ten. In more human-facing things, it can still be at least a 50% boost, so that two skilled people can beneficially replace three mediocre.
I'm nowhere near a hiring position, but if I were, I'd add assessing that to the application procedure.
It feels like this is part of a set of growing issues, with millennials being the only generation in between gen X / boomers and gen Z that have computer skills and can do things like manage files or read a whole paragraph of text without a computer generated voice + RSVP [0] + Subway Surfers gameplay in the background.
But it was also millennials that identified their own quickly diminishing attention spans, during the rise of Twitter, Youtube, Netflix and the like [1].
I want to believe all of this is giving me some job security at least.
[0] https://en.wikipedia.org/wiki/Rapid_serial_visual_presentati...
[1] https://randsinrepose.com/archives/nadd/ (originally published 2003, updated over time to reference newer trends)
If you have to use multiple keyboards, the arrow keys, End, Home, etc. tend to be at different positions on each keyboard. Almost no better than using a mouse.
That's where old-school vi/emacs shine. Ctrl? Always in the same area, so Ctrl-F to go forward is the same gesture whatever brand of laptop I have to work on.
But I agree that in normal input it is often annoying.
Maybe this is just an XKCD moment https://xkcd.com/1172/ ...
How is it ever wrong, though? If I insert a (, and then a {, and the editor appends so that it's ({}), that's always correct, no? Can it ever not be?
Maybe because on a Norwegian keyboard { is a bit awkward, but I like it. Then even if we're 5 levels deep with useEffect(() => {(({[{[ I can just press ctrl+shift+enter and it just magically finishes up everything and puts my caret at the correct place, instead of me trying to write ]}]})) in the correct order.
Whenever you edit something existing that already has the ), ] or } further down and you end up with a ()), []] or {}}. Or when you select some text that you want to replace and start with a quote only to end up with "the text you wanted to replace" instead of the expected ".
I never notice when it works but get annoyed every time it doesn't, so I feel like it never works and always sucks.
I guess it's muscle memory and some people are used to it, but it feels fundamentally wrong to me to have the editor do different basic editing things based on which character is being pressed.
I think here you are talking about a different thing -- completion of already started parentheses/"/whatever with content in-between, not the pre-application of paired braces or quotation marks, as the author did, no?
e.g.:
(a + b > c) -> ((a + b > c) -> (()a + b > c) -> no, I was aiming for ((a + b) > c)
(it sound like you're talking about a different feature/implementation, though, since in the annoying case there's no 'completion' shortcut, it just appears)
Hah, the fun thing about this is that I press exactly the matching symbol (`}` for `{`, etc.) to exit the delimited region, and VS even understands what I want! Incredibly useless thing.
Can't speak to other editors though.. I don't want to sound like I'm trolling, but they generally feel quite clunky, compared to Emacs (ducks, runs ;p )
It's just matching, and reflecting the way different humans think, and reason, that's all.
(yes, said in jest)
On a french keyboard, ~#{[|\\^@]} all require the "alt gr" modifier key which is usually just right of the space key. So totally outside the realm of shift, caps lock, ctrl or alt.
In my perceived experience, every time a delimiter is opened, it automatically closes, allowing you to move away from it without thinking.
Even in places where this is not available (Slack, comment boxes, etc.), I close the delimiter as soon as I open it
This feature is useful for me. So are LLMs. If someone doesn't want to use this or that, they are not obliged to. But don't tell me that features that I find useful "suck".
You can always insert the second " as a ghost(?) character to keep syntax highlighting working. But it's not like any modern language server really struggles with this anyways.
You can perhaps imagine an editor that only inserts the delimiter if you type the start-string symbol in the middle of a line.
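A minimal sketch of such a conditional heuristic, in Python (my own illustration, not any particular editor's actual logic; the function names are hypothetical, and here the pair is only inserted when the cursor sits at the effective end of the line, which avoids the doubled-delimiter problem described above):

    def should_autopair(line: str, col: int) -> bool:
        # Hypothetical heuristic: auto-insert the closing delimiter
        # only when everything after the cursor is whitespace.
        return line[col:].strip() == ""

    def insert_open_paren(line: str, col: int) -> tuple[str, int]:
        # Insert "(" at the cursor; append ")" only when the heuristic
        # says we're at the effective end of the line.
        closing = ")" if should_autopair(line, col) else ""
        return line[:col] + "(" + closing + line[col:], col + 1

    print(insert_open_paren("foo = bar", 9))  # ('foo = bar()', 10)
    print(insert_open_paren("a + b > c", 0))  # ('(a + b > c', 1) - no doubling

Whether that beats always pairing (or never pairing) is exactly the matter of taste being argued here.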
I remember when PayPal came to Australia, I was so confused by it as I could just send money via internet banking. Then they tried to lobby the government to make our banking system worse so they could compete, much like Uber.
You literally enter an IBAN and the transfer will appear in the other account the next day. And if you need the money in the target account immediately (within 10 seconds) you can do it, too, by checking a checkbox for a small fee and that fee will drop to ZERO across the EU in October 2025.
Edit: Do you mean that the speed of the transfers was the problem?
Zelle, previously known as clearXchange, and whatever else, but if you had an account at one of the bigger banks, it has long been trivial to send money to each other.
https://en.wikipedia.org/wiki/Zelle
> In April 2011, the clearXchange service was launched. It was originally owned and operated by Bank of America, JPMorgan Chase, and Wells Fargo.[6][7] The service offered person-to-person (P2P), business-to-consumer (B2C), and government-to-consumer (G2C) payments.[8]
> I don’t want to help someone who opens with “I don’t know how to do this so I asked ChatGPT and it gave me these 200 lines but it doesn’t work”.
In the same vein, I've actually worked on crypto projects in both DeFi and NFT spaces, and agree with the "money for criminals" joke assessment of crypto, even if the technology is quite fascinating.
The skill has not been obliterated. We still need to fix the slop written by the LLMs, but it is not that bad.
Some people copy and paste snippets of code without knowing what it does, and in a sense, they spread technical debt around.
LLMs lower the technical debt spread by the clueless, to a lower baseline.
The issue I see is that the amount of code having this level of technical debt is created at a much faster speed now.
The copy-paste of usable code snippets is somewhat comparable to any use of a library or framework in the sense that there's an element of not understanding what the entire thing is doing or at least how, and so every time this is done it adds to the knowledge debt, a borrowing of time, energy and understanding needed to come up with the thing being used.
By itself this isn't a problem and realistically it's impossible to avoid, and in a lot of cases you may never get to the point where you have to pay this back. But there's also a limit on the rate of debt accumulation which is how fast you can pull in libraries, code snippets and other abstractions, and as you said LLMs ability to just produce text at a superhuman rate potentially serves to _rapidly_ increase the rate of knowledge debt accumulation.
If debt as an economic force is seen as something that can stimulate short-term growth then there must be an equivalent for knowledge debt, a short-term increase in the ability of a person to create a _thing_ while trading off the long-term understanding of it.
Take this snippet of code, and this is what each part means, and how you can change it.
It doesn't explain how it is implemented, but it explains the syntax and the semantics of it, and that's enough.
Good documentation makes all the difference, at least for me.
I'm SO stealing this!! <3
Yeah? What about what LLMs help with? Do you have no code that could use translation (move code that looks like this to code that looks like that)? LLMs are real good with that, and they save dozens of hours on single sentence prompt tasks, even if you have to review them.
Or is it all bad? I have made $10ks this year alone on what LLMs do, for $10s of dollars of input, but I must understand what I am doing wrong.
Or do you mean, if you are a man with a very big gun, you must understand what that gun can do before you pull the trigger? Can only the trained pull the trigger?
Only bad code, and what takes the time is understanding it, not rewriting it, and the LLM doesn't make that part any quicker.
> they save dozens of hours on single sentence prompt tasks, even if you have to review them
Really? How are you reviewing quicker than you could write? Unless the code is just a pile of verbose whatever, reviewing it is slower than writing it, and a lot less fun too.
Well, humans typically read way faster than they write, and if you own the code, have a strict style guide, etc, it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust a LLM with anyway.
Also, these non-human entities we are discussing tend to output code very fast.
When it's just reading, perhaps, but to review you have to read carefully and understand. It's like the classic quote that if you're writing code at the limits of your ability you won't be able to debug it.
> if you own the code, have a strict style guide, etc, it is often pretty simple to understand new or modified code, unless you are dealing with a novel concept, which I wouldn't trust a LLM with anyway
The way I see it if the code is that simple and repetitive then probably that repetition should be factored out and the code made a lot shorter. The code should only need to express the novel/distinctive parts of the problem - which, as you say, are the parts we wouldn't trust an LLM with.
You don't want more technical debt.
Ideally, you want zero technical debt.
In practice only a hello world program has zero technical debt.
No one is becoming either an idiot or omniscient by using LLMs, and anyone saying they are is lying and pushing a narrative.
Humans still correct things, humans understand systems have flaws, and they can utilize them and correct them.
This is like saying someone used Word's grammar correction feature and accepted all the corrections. It doesn't make sense, and the people pushing the narrative are disingenuous.
That’s a nice description, to be honest.
And thank fuck it happened. All the shell and obscure Unix tools that require brains molded in the 80s to use on a day-to-day basis should've been superseded by something user-friendly a long time ago.
And I'm not the only one saying this but - the bit about LLMs is likely throwing the baby out with the bathwater. Yes the "AI-ification" of everything is horrible and people are shoehorning it into places where it's not useful. But to say that every single LLM interaction is wrong/not useful is just not true (though it might be true if you limit yourself to only freely available models!). Using LLMs effectively is a skill in itself, and not one to be underestimated. Just because you failed to get it to do something it's not well-suited to doesn't mean it can't do anything at all.
Though the conclusion (do things, make things) I do agree with anyway.
We live in a world now where people scare one another into making significant choices with limited information. Person A claims it's the future you don't want to miss, Person B takes that at face value and starts figuring out how to get in on the scam, and Person C looks at A and B and says "me too." Rinse and repeat.
That's why so much of the AI world is just the same app with a different name. I'd imagine a high percentage of the people involved in these projects don't really care about what they're working on, just that it promises to score them more money and influence (or so they think).
So in a way, for the majority, it is just stupid and greedy behavior, but perhaps less conscious.
I have a feeling that line of thinking is going to be of diminishing consolation as the world veers further into systemic and environmental collapse.
I think it is a defense mechanism; you see it everywhere, and you have to wonder, "why are people thinking this way?".
I think those with an ethical or related argument deserve to be heard, but opposite of that, it seems like full blinders, ignoring the reality presented before us.
> And the only real hope I have here is that someday, maybe, Bitcoin will be a currency, and circulating money around won’t be the exclusive purview of Froot Loops. Christ
PLEASE NO. The only thing this will lead to is people who didn't get rich with this scheme funding the returns of people who bought in early.
Whatever BTC becomes, everyone who advocates for funneling public money of people who actually work for their salary into Bitcoin is a fraud.
I don't think the blog author actually wants this, but vaguely calling for Bitcoin to become "real money" indirectly will contribute to this bailout.
And yes, I'm well aware that funneling pension funds money etc into this pyramid scheme is already underway. Any politician or bank who supports this should be sued if you ask me.
Why is that? You can just buy 0.00000001 BTC.
Let's say my friends and I agree to carve off 0.00002 BTC of supply and pretend that is the whole world of currency. We could run a whole country using that 0.00002 BTC as money. Except that anyone who has 1 BTC can break into our walled garden and, with a tiny fraction of their holdings, buy the entire walled garden, and there's no way to prevent this as long as our money is fungible with theirs. It's the same reason you wouldn't use immibiscoins as a currency: I could just grant myself a zillion of them and buy everything you have. Except that in the case of bitcoin the grant is pre-existing.
Deflationary currencies are fundamentally unstable, just like currencies that one guy can print at will, because they decorrelate quantity and merit.
He wants normal banking and money transfer... but just to anybody, and for any reason. As an example, he'd like people to be able to pay him to draw bespoke furry porn for them. Or as another example, why can't a US citizen pay an Iranian citizen to do some work for them? (e.g. write a computer program)
That is totally possible. The only thing that stands in his way, and drives him into the arms of the cryptocurrency frauds, are moralising and realpolitiking governments that intentionally use their control of banks to control what bank customers can do with their money.
In an ideal world, government would only regulate banks on fiscal propriety and fair-dealing, and would not get in the way of consenting adults exchanging money for goods and services. But because government does fuck with banks, and sometimes the banks just do the fuckery anyway and government doesn't compel them to offer services to all (e.g. Visa/Mastercard refuse to allow porn merchants?), normal people start listening to the libertarians, the sovereign citizens, and the pump-and-dump fraudsters hyping cryptocurrencies.
He wants decentralised digital cash. How can it be done, if not Bitcoin et al?
Also, I'm not sure if a radical lack of regulation / full decentralization is a good thing when we are talking about money.
In my opinion, money should be regulated by governments.
But this discussion tends to escalate and the arguments have been made ad nauseam, so I'm tuning out here, sorry.
If you were to create a decentralized and limited supply currency, how would you distribute it so that it's “fair”?
Sounds a bit like if the world was running only on proprietary software created by Microsoft and you criticized the move to open source because that would enrich Linus Torvalds and other code creators/early adopters.
Are people better off by continuing to use centralized broken software that they have to pay a subscription for (inflation) than if they did a lump sum buy of a GNU/Linux distro copy from a random guy and become liberated for the rest of their life?
I clicked halfheartedly, started to read halfheartedly, and got sucked into a read that threw me back into the good old days of the internet.
A pity that the micropayments mentioned in the post never materialized, I'd surely throw a few bucks at the author but the only option is a subscription and I hate those.
Edit: I apologize, the author has pre-GPT posts that use em dashes, so it's likely part of their writing style.
Color me naive, but honestly I'm pretty sure it's not. We all tend to be worse at distinguishing AI from human text than we think, but the text sounds genuine to me and suggests an author with a quirky personality that seems difficult for an LLM to imitate. And that could include using em dashes.
The author lost me a little on the AI rant. Yes, everything and everyone is shoving LLMs into places that I don't want it. Just today Bandcamp sent me an email about upcoming summer albums that was clearly in part written by AI. You can't get away from it, it's awful. That being said, the tooling for software development is so powerful that I feel like I'd be crazy not to use it. I save so so much time with banal programming tasks by just writing up a paragraph to cursor about what I want and how I want it done.
You're a platform drone, you have no mind, yada. Yet, we are reading the author's blog.
The author may hate LLMs, but they will lead to many people realizing things they never were aware of, like the author's superficial ability to take information and present it in a way that engages others. Soon that will be a thing that is known. Not many will make money sharing information in prose.
What the author refers to as "LLMs" today, will continually improve and "get better" at everything the author has issues with, maybe in novel ways we can't think of at the moment.
Alternative take:
"Popular culture" has always been a "lesser" ideal of experience, and now that ontological grouping now includes the Internet, as a whole. There are no safe corners, everything you experience on the Internet, if someone shared it with you, is now "Popular culture".
Everyone knows what you know, and you are no longer special or have special things to share, because awareness is ubiquitous.
This is good for society in many ways.
For example, with information asymmetry, where assholes made others their food, it will become less common that people are food.
Things like ad-driven social networks will fade away as this realization becomes normalized.
Unfortunately, we are at the very early stages of this, and it takes a very long time for people to become aware of things like hoaxes.
Yes, that is actually roughly the takeaway here. LLMs are getting so popular in programming not because they are good at solving problems but because they are good at reproducing a solution to some minor variation of an existing problem which has already been solved many times.
Most of the work that most of the industry does is just re-solving the same set of problems. This is not just because of NIH but also because code reuse is a hard problem.
This is not to say that everything is the same product. The set of problems you solve and how you chain those solutions together (the overarching architecture), as well as the small set of unique problems you solve, are the real value in a product. But it's often not the majority of any single codebase.
-----
This is why I absolutely cannot fucking stand creative work being referred to as "content". "Content" is how you refer to the stuff on a website when you're designing the layout and don't know what actually goes on the page yet. "Content" is how you refer to the collection of odds and ends in your car's trunk. "Content" is what marketers call the stuff that goes around the ads.
"Content"... is Whatever.
-----
People, please don't think of yourself as "content consumers".
The point of doing things is the act of doing them, not the result. And if we make the result easily obtainable by using an LLM, then this gets reinforced, not destroyed.
I'm going to use sketching as an example, because it's something I enjoy but am very bad at. But you could talk in the same way about playing a musical instrument, writing code, writing anything really, knitting, sports, anything.
I derive inspiration from other people who can sketch really well, and I enjoy and admire their ability. But I'm happy that I will never be that good. The point of sketching (for me) is not to produce a fantastic drawing. The point is threefold: firstly to really look at the world, and secondly to practice a difficult skill, and thirdly the meditative time of being fully absorbed in a creative act.
I like the fact that LLMs remove the false idea that the point of this is to produce Art. The LLM can almost certainly produce better Art than I can. Which is great, because the point of sketching, for me, is the process not the result, and having the result be almost completely useless helps make that point. It also helps that I'm really bad at sketching, so I never want to hang the result on my wall anyway.
I understand that if you're really good at something, and take pride in the result of that, and enjoy the admiration of others at your accomplishments, then this might suck. That's gotta be tough. But if you only ever did it for the results and admiration, then maybe find something that you actually enjoy doing?
For art/craft you are completely correct though.
Then it became hip, and people would hand-roll machine-specific assembly code. Later on, it became too onerous when CPU architecture started to change faster than programmers could churn out code. So we came up with compilers, and people started coding at a higher level of abstraction. No one lamented the lost art of assembly.
Coding is just a means to an end. We’ve always searched for better and easier ways to convince the rocks to do something for us. LLMs will probably let us jump another abstraction level higher.
I too spent hours looking for the right PHP or Perl snippet in the early days to do something. My hard-earned bash-fu is mostly useless now. Am I sad about it? Nah. Writing bash always sucked, who am I kidding. Also, regex. I never learned it properly. It doesn’t appeal to me. So I’m glad these whatever machines are helping me do this grunt work.
There are sides of programming I like, and implementation isn't one of them. Once upon a time I couldn't care less about the binary streams ticking the CPU. Now I'm excited about the probable prospect of not having to think as much about "higher-level" code and jumping even higher.
To me, programming is more like science than art. Science doesn’t care how much profundity we find in the process. It moves on to the next thing for progress.
AI in its current state in my workflow is a decent search engine and Stack Overflow. But it has far greater pitfalls, as OP pointed out (it presents its code as if it were always 100% accurate, and will "fake" APIs).
I only use AI for small problems rather than let it orchestrate entire files.
Like the author, I'm mystified by those who accept the appearance of output as a valid goal.
Even with constrained algorithmic "AI" like Stockfish, which, unlike LLMs, actually works, chess players frown heavily on using it for cheating. No chess player can go to a tournament and say: "I made this game with Stockfish."
Yes. Most developers in the corporate world are building CRUD apps that are 90% boilerplate. Hopefully this helps explain the disconnect.
And even if GenAI progress stops here and it never gets better, it's incredibly useful. Why do people realize that it can't do EVERYTHING and then get stuck on the view that they can't use it for ANYTHING, confused as to why others are getting benefit from it?
Two things can be true (and I believe they are): the hype can be "staggeringly overblown" AND it can still be useful in many cases.
I've found that they get pretty wishy-washy when you correct them. As an example, yesterday I was working on porting a function from the open-source CUE4Parse project from C# to Python (for a hobby project), and the LLM (Gemini 2.5 Pro) suggested a translation of the C# method in question.
I noted that the original used a custom ToLower() implementation:
> This custom ToLower(), does that produce the same result as .lower() in Python?
Gemini answered with a lot of text and concluded: "You should use Python's standard lower() method for your port."
I pushed back with:
> Right, but for this to work (looking up an asset by its hash as contained in global.utoc), I probably have to match the behavior of Unreal Engine...
From my experience this is representative of the typical LLM interaction once one ventures into niche topics like Unreal Engine modding.
But, to make a comparison with Claude Code: I was initially impressed with Gemini's ability to keep a conversation on track, but it rarely gets the hint when I express annoyance with its output. Claude has an uncanny ability to guess what I find wrong with its output (even when I just respond with WTF!) and will try to fix it, often in actually useful ways; Gemini just keeps repeating its last output after acknowledging my annoyance.
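For what it's worth, the concern in that exchange is a real one: C/C++ engine code commonly lower-cases only ASCII when hashing paths, while Python's str.lower() is Unicode-aware. A rough sketch of the difference (my own illustration, not CUE4Parse's or Unreal's actual code):

    def ascii_to_lower(s: str) -> str:
        # C-style ToLower: shift only 'A'-'Z', leave everything
        # else (including non-ASCII characters) untouched.
        return "".join(chr(ord(c) + 32) if "A" <= c <= "Z" else c for c in s)

    path = "Game/Straße/İcon"
    print(ascii_to_lower(path))  # 'game/straße/İcon' - 'İ' unchanged
    print(path.lower())          # 'İ' becomes 'i' + combining dot: different bytes

Hash those two results and you get different keys, which is exactly the failure mode when looking up an asset by its path hash.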
But that's not what the marketing says. The marketing says it will do your entire job for you.
In reality, it will save you some typing if you already know what to do.
On HN at least, where most people are steeped in startup/hustle culture and are experts in something, they don't think long-term enough to see the consequences for non-experts.
I'm not sure it's a lot of value. It probably is in the short term, but in the long run...
There have already been studies saying that you don't retain the info about what an LLM does for you. Even if you are already an expert (a status which you have attained the traditional way), that cuts you off from all those tiny improvements that happen every day without noticing.
This goes too far in the other direction. LLMs can do far more than merely saving you typing. I have successfully used coding agents to implement code which at the outset I had no business writing as it was far outside my domain expertise. By the end I'd gained enough understanding to be able to review the output and guide the LLM towards a correct solution, far faster than the weeks or months it would have taken to acquire enough background info to make an attempt at coding it myself.
I'm sure I can do what you describe as well. I've actually used LLMs to get myself current on some stuff I knew (old) basics for and they were useful indeed as you say.
I'm also sure it wouldn't help your interns to grow to your level.
The reason behind banning adult materials has to do with Puritanism and with the high rates of refunds on adult websites.
Momentum. They are the big games in town because so many people use them, so many people use them because they are the big games in town. There was a time for both when they didn't suck as much as they do now, at least relative to what other options existed.
Yet, the payment processors will all reliably treat anything NSFW equally by suppressing it as much as they can. From banning individuals who dare do transactions they don't approve of to directly pressuring websites that might tolerate NSFW content by threatening to take away their only means of making money. If they only cared about refunds and profitability, they wouldn't ban individual artists - because the fact how these artists often manage to stay undetected for years suggests that many of their customers aren't the kind to start complaining.
It's quite fascinating how this is the one area where the companies are willing to "self-regulate". They don't process sales of illicit drugs because the governments above them said no and put in extensive guardrails to make these illegal uses as difficult as reasonably possible. Yet, despite most first-world governments not taking issue with adult content at large (for now), the payment processors will act on their own and diligently turn away any potential revenue they could be collecting.
As far as I understand, Bitcoin is fundamentally unusable as a currency. Transactions are expensive, and throughput is limited to a handful of transactions per second. It's also inherently deflationary; you want an inflationary currency, you want people spending, not hoarding.
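Back of the envelope, using the rough historical figures of ~1 MB blocks every ~10 minutes and ~250 bytes per simple transaction (both approximations, not protocol constants):

    block_size = 1_000_000   # bytes per block, pre-SegWit ballpark
    tx_size = 250            # bytes per typical simple transaction
    block_interval = 600     # seconds, the 10-minute block target
    print(block_size / tx_size / block_interval)  # ~6.7 tx/second

A few thousand transactions per block, on the order of seven per second globally.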
Great protocols are built in layers.
You have decentralized instant settlement for an average fee of 0.005%, even for micropayments, with the Lightning Network (another protocol built on top of Bitcoin). That's orders of magnitude ahead of the current payment networks in settlement time and resilience.
I'm a bit annoyed with LLMs for coding, because I care about the craft. But I understand the premise of using them when the end goal is not "tech as a craft" but "tech as a means". But that still requires having some reason to use the tech.
Hell, I feel the "tech as a means to get money" part for people trying to climb up the social ladder.
But for a lot of people who already did get to the top of it?
At some point we gotta ask what the point of SEO-optimizing everything even is.
Like, is the end goal optimizing life out of life?
Why not write a whole app using LLMs? Why not have the LLM do your coursework? Why do the coursework at all? Why not have the LLM make a birthday card for your partner? Why even get up in the morning? Why not just go live in a forest? Why live at all?
What is even the point?
But yeah, first we'll go through a few (?) years of the self-defeating "ChatGPT does my homework" and the necessary adjustments of how schools/unis function.
And also, how is personalized bullshit better than generic bullshit? We'd need to solve the bullshit problem in the first place, which is mathematically guaranteed NOT to be possible with these types of architectures.
Touch grass all by myself?
I can absolutely relate. That was ten years ago, so I'm not exactly sure where they are, now, but they still seem to be going strong.
[0] https://eev.ee/blog/2015/06/09/i-quit-the-tech-industry/
I understand the frustration people have with technology but it's not the technology that keeps selling them out. Bad actors take control of the technology and use it to gain and consolidate power. Identify those bad actors and bad use cases for technology and avoid giving them power. Writing whiny tropey angst ridden blog posts doesn't help anyone.
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
The reaction to that post has been interesting. It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
Analogies like this will inevitably get people hung up on the details of the analogy though. Lots of people jumped straight to "a table saw does a single job reliably, unlike LLMs which are non-deterministic".
I picked table saws because they are actually really dangerous and can cut your thumb off if you don't know how to use them.
You were not, as is patently obvious from the sentence preceding your quote (emphasis mine):
> Another Bluesky quip I saw earlier today, and the reason I picked up writing this post (which I’d started last week)
The post had already been started, your comment was simply a reason to continue writing it at that point in time. Had your comment not existed, this post would probably still have been finished (though perhaps at a later date).
> It's mainly intended to be an argument against the LLM hype! I'm pushing back against all the people who are saying "LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
Despite your restating, your point still reads to me as the opposite of what you claim to have intended. Inventing the table saw is a poor analogy because the problem with the LLM hype has nothing to do with their invention. It's the grifts and the irresponsible shoving of it down everyone's throats that's a problem. That's why the comparison fails: you're juxtaposing things which aren't even slightly related. The invention of a technology and the hype around it are two entirely orthogonal matters.
> Looks like I was the inspiration for this post then
I replace that with:
> Looks like I was the inspiration for finishing this post then
And this:
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw.
I can rephrase as:
> Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the introduction of the table saw.
If that’s your true impetus, please don’t bother. There’s nothing which benefits me about your words being clearer and less open to misinterpretation. You are, of course, completely welcome to disagree with and ignore my suggestions.
> thanks to the introduction of the table saw.
That makes absolutely no difference at all. And it doesn’t matter anymore either, the harm to your point is already done, no one’s going back to it now to reinterpret it. I was merely pointing out what I see as having gone wrong so you can avoid it in the future. But again, entirely up to you what you do with the feedback.
People are talking about the trendline: what AI was 5 years ago versus what AI is today points to a different AI 5 years down the line. Whatever AI will be 5 years from now, it is entirely possible that LLMs may eliminate programming as a career. If not 5 years... give it 10. If not 10, give it 15. Maybe it happens in a day, a major breakthrough in AI, or maybe it will be like what's currently happening: slow erosion and infiltration into our daily tasks where it takes on more and more responsibilities until one day it's doing everything.
I mean do I even have to state the above? We all know it. What's baffling to me is how I get people saying shit like this:
>"LLMs are so incredible at programming that nobody should consider programming as a career any more" - I think that's total nonsense, like a carpenter quitting because someone invented the table saw.
I mean, it's an obvious, complete misrepresentation. People are talking about the future, not the status quo, and we ALL know this, yet we still make comments like that.
Using LLMs as part of my process helps me understand how much of my job isn't just bashing out code.
My job is to identify problems that can be solved with code, then solve them, then verify that the solution works and has actually addressed the problem.
An even more advanced LLM may eventually be able to completely handle the middle piece. It can help with the first and last pieces, but only when operated by someone who understands both the problems to be solved and how to interact with the LLM to help solve them.
No matter how good these things get, they will still need someone to find problems for them to solve, define those problems and confirm that they are solved. That's a job - one that other humans will be happy to outsource to an expert practitioner.
It's also about 80% of what I do as a software developer already.
Through no fault of their own, but they're literally blind. They don't have eyes to see, ears to hear or fingers to touch and feel & have no clue if what they've produced is any good to the original purpose. They are still only (amazing) tools.
You do not know whether LLMs in the future can't replace humans. You can only say that right now they can't. In the future the structure of the LLM may be modified, or it may become one module out of multiple that are required for AGI.
These are all plausible possibilities. But you have narrowed it all down to a “no”. LLMs are just tools with no future.
The real answer is nobody knows. But there are legitimate possibilities here. We have a 5 year trend line projecting higher growth into the future.
This is all just my opinion of course, but it's easy to expect that being an LLM that knows all there is to know about every subject written in books and on the internet would be enough to do any office work that can be done with a computer. Yet strangely enough, it isn't.
At this point they still lack the necessary feedback mechanism (the senses) and ability to learn on the job so they can function on their own independently. And people have to trust them, that they don't fail in some horrible way and things like that. Without all these they can still be very helpful, but can't really "replace" a human in doing most activities. And also, some people seem to possess a sense of aesthetics and a wonderful creative imagination, things that LLMs don't really display at this time.
I agree that nobody knows the answer. If and when they arrive at that point, by then the LLM part would probably be just a tiny fraction of their functioning. Maybe we can start worrying then. Or maybe we could just find something else to do. Because people aren't tools, even when economically worthless.
You said: how many letters are in the lithuanian word "nebeprisikaspinaudamas"? Just give me one number. ChatGPT said: 23
You said: how many letters are in the lithuanian word 'nebeprisikaspinaudamas'. Just give me one number. ChatGPT said: 21
Both are incorrect by the way. It's 22
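It's the kind of question a single line of deterministic code answers reliably, precisely because it counts instead of predicting:

    print(len("nebeprisikaspinaudamas"))  # 22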
I don't know why the shitty ecosystem around cryptocurrencies drives the author to this conclusion. Crypto is the most convenient way to just send money, and the most respectful of your human rights. It's there, it's useful and in many cases absolutely vital.
1. Computers could be more fun if you could buy things more easily. Crypto could solve that, but the people involved don't care about crypto, they care about Whatever.
2. Computers can be fun when people do cool things on them. The web could make that possible, but unfortunately the economics of ads incentivize people to post Whatever.
3. Programming can be fun, but LLMs just generate Whatever.
Immutable distributed ledgers, by contrast, have found no use cases other than crime and financial speculation in coming up on twenty years. Exactly how long do we have to wait for these interesting uses that are “surely” coming?
A third of the world is unbanked. A permissionless monetary system makes a huge difference for those.
When I was still very skeptical about Bitcoin, I met a guy in Turkey who was from a very poor African country and was just studying there. His father would buy Bitcoin in their home country with the local currency (P2P) and send it to his son, who would then convert it, also P2P, for Turkish liras. They could do this securely and within minutes. The alternative was using Western Union and paying taxes in both countries, which in total added up to ~50% of the sent amount.
It's great not needing Bitcoin, as it is great not needing Tor. But that doesn't mean there's no use case for them.
“list them”
“oh I can do that in this other convoluted way that doesn't solve any of these users' goals or problems”
“I'm not the target audience for that so it doesn't count”
“ah so financial speculation, that doesn't count despite being the largest application and sector on the planet”
“marketcap doesn't matter and isn't indicative of anything in that economy, I would rather hold digital assets to a separate, different standard than every other asset on the planet out of total ignorance that my same arguments apply to asset classes that I respect”
“see I proved my point for myself, there is no use case after 17 years, classic HN”
“those are strawman arguments despite all conversations following this same predictable path enough for any language model to regurgitate it verbatim”
But since almost all the tokens bear neither interest nor dividends, it looks a lot more like a casino.
It's just a filtering problem.
There are screeners to narrow everything down, just like for the stock exchanges.
>> It feels like the same attitude that happened with Bitcoin, the same smug nose-wrinkling contempt. Bitcoin is the future. It’ll replace the dollar by 2020. You’re gonna be left behind. Enjoy being poor. Sure thing, Disco Stu!
It's all about blockspace and the commodity that pays for the blockspace.
That was true when Bitcoin was 2 cents, and it's true when Bitcoin is $109,000 and 2 cents.
I mean, are you enjoying your socioeconomic status? The chronology was very clear to some, and they were right. It wasn't luck; it wasn't really a binary proposition. You can read old threads around the net from 2012 to see exactly where we are going. You can help make that happen or passively act surprised. Pretty much every theorized issue can be programmed away; that's what gives people confidence in this asset class compared to others.
> That was true when bitcoin was 2 cents
I largely agree with you, but to nitpick: when bitcoin was 2 cents, blockspace was free, and miners regularly accepted zero-fee transactions into blocks. Today, you're not getting a transaction into a block without paying at least a couple hundred sats. Your statement is true today, but it wasn't like this until April-May 2012 when Satoshi Dice started filling blocks up. See Fig 3 on page 9 of <https://maltemoeser.de/paper/transaction-fees.pdf> or look through some early blocks.
if you expected applications to be deployed that would take up block space when used, and were going to build those applications yourself, then it was still rational
in 2012 people were describing smart contracts, joint custody accounts to secure assets better, and many other applications that are commonplace and have their own critique and discussion now
it's like seeing an island full of resources and realizing that the bridges and ferry routes haven't been built yet. That:
1) you can get to that island yourself before everyone else
2) you can also build the bridge and put up a toll booth
3) other bridges will be built
4) and other people can also come to the island early at great difficulty too
the same play is still true on other blockchains, and sometimes back again on bitcoin
I’ve done the trade many times over the past 15 years
Here's a 2010 Satoshi post in a thread about transaction fees stating "we should always allow at least some free transactions": https://satoshi.nakamotoinstitute.org/posts/bitcointalk/thre...
If you had said "2 dollars" instead of "2 cents", we would be in complete agreement. All I'm saying is that mandatory transaction fees were not baked in at 2 cents.
>”list them”
If you are arguing that something exists, then being asked to prove its existence is table stakes, not poor arguments
You starting with a strong argument in your list of bad arguments, and then ending with shit that mocks anyone calling you out, makes me believe that you are not discussing this topic in good faith.
Yes, that means we are at an impasse. Use the search, ask an LLM; if even that is too much initiative for a quite outdated skeptic to take even now, then I can't help you.
there are hundreds of billions, maybe trillions in volume going through financial services on blockchains and it doesn’t matter if financial services isn’t a sector you care about or are the target audience for, there are people there who will pay to solve problems they have
and yes that is a tiny fraction of all financial services volume at all, or even involving crypto assets.
I was referring to the traffic onchain as that’s what’s interesting
permissionless liquidity providing in unincorporated partnerships is still novel and unique to those platforms and highly lucrative. on assets that dont need permission to be listed anywhere.
Agree with the sense that we're at a weird moment though.
Same with AI. I'm notably more autistic (or more aspie, or whatever) than my friend group, and also I much more easily recognize AI text and images as uncanny slop, while my friends are more easily wowed by it. Maybe AI output has the same "superficially impressive but empty inside" quality as the stuff that sociopaths say.
Most likely yet another flawed output from human-LLM (4chan) so online schizos have something to identify themselves with.
The last line of the article summarizes it perfectly:
> Do things. Make things. And then put them on your website so I can see them.
I subscribe fully to the first two sentences, but the last one is bullshit. The gloom in the article is born from the author attaching the value of "making things" to the recognition received for the effort. Put your stuff out there if you think it is of value to someone else. If it is, cool, and if it's not, well, who cares.
> I can’t remember exactly what they said, but it was something like: “I created a whole album, complete with album art, in 3.5 hours. Why wouldn’t I use the make it easier machine?”
This is kind of darkly fascinating to me, because it gives rise to such an obvious question: if anyone can do that, then why listen to your music? It takes a significant chunk of 3.5 hours just to listen to an album, so how much manual work was even done here? Apparently I can just go generate an endless stream of stuff of the same quality! Why would I want your particular brand of Whatever?
This gem implies that the value of the music (or art in general) is partially or even wholly dependent on whether or not someone else thinks it's good. I can't even...
If you eliminate the back-patting requirements, and the stuff we make is genuine, then its value is intrinsic. The "Whatever" machines are just tools, like the rest of the tools we use, to make things. So, just make your things and get on with it.
I had an interesting discussion with a piano teacher once. Some of his students, he told me, would play for themselves but never for any kind of audience. As the saying goes: if a musician plays a piano in a closed room with no one to hear it, does it make a sound?
Obviously there's nothing wrong with extremely personal art that never gets released to the wider public - not every personal diary should be a blog. But there's also the question of what happens to art when none of it gets shared around, and vibrant art communities are, in my opinion (and I think also the author's), something to encourage.
I get what you're after, but that's not a very good example. If a musician is playing an instrument, then of course the musician hears it.
Now, imagine instead that it's a player piano, and the lone "musician" is not actually playing anything at all, but hears the sound of the tones he/she had randomly generated by a "Whatever" machine, resonating through the actual struck strings, and resonant body of a piano, and the hair on the back of their neck stands on end. Then the music ends, the vibrations stop, and all that is left of the moment is whatever memory the "musician" retains.
Was that music while being heard by the "musician"? Is it music when it's just a melody in the "musician's" head? What if it wasn't a piano at all, but just birds singing? Is it still music? If it is, is it "good" music?
Yes, the world is changing fast, and no, we humans don't seem to handle it well. I agree with the article in that sense. But I see no use in categorizing technology as dystopian, just because it's been misused. You don't have to misuse it yourself, or even use it at all if you don't want to. Complaining about it though... we humans are great at that.
hmmm
But ChatGPT does help me work through some really difficult mathematical equations in the newest research papers by adding intermediate steps. I can easily confirm when it gets them right and when it doesn't, as I do have some idea. It's super useful.
If you are not able to make LLMs work for you at all, and complain about them on the internet, you are an old man yelling at clouds. The blog post devolves from an insightful viewpoint into a long sad ramble.
It’s 100% fine if you don’t want to use them yourself, but complaining to others gets tired quick.
one of these things is not like the other
I agree with the author; I usually do not care what exactly the fetch call looks like. Whatever indeed.
Oh wow.
Do you realize how many person-hours of highly intelligent people have been spent on ORMs just so people wouldn't have to learn SQL? Most people don't value learning.
> There are people who use these, apparently. And it just feels so… depressing. There are people I once respected who, apparently, don’t actually enjoy doing the thing. They would like to describe what they want and receive Whatever — some beige sludge that vaguely resembles it. That isn’t programming, though. That’s management, a fairly different job. I’m not interested in managing. I’m certainly not interested in managing this bizarre polite lying daydream machine. It feels like a vizier who has definitely been spending some time plotting my demise.
I was several minutes into reading, before this paragraph, when the idea hit me that this person hates managing. Because everyone I've met who hates using AI to produce software describes to me problems like the AI not being correct, or lying to them when the model thought that would please them better, and that's my experience with junior engineers as a manager.
And everyone I’ve met who loves AI at some point makes an analogy to it, that compares it to a team of eager juniors who can do a lot of work fast but can’t have their output trusted blindly, and that’s my experience with junior engineers as a manager.
And then anyone who's been trying to get an engineering manager job over the past few months and tracking their applications' metadata has seen the number of open postings for their requirements go down month after month, unless you drop the manager part and keep all the same criteria but as an IC.
And then I read commentary from megacorps about their layoffs and read between the lines like here[1]
>… a Microsoft spokesperson said in a statement, adding that the company is reducing managerial layers …
I think our general consternation around this is coming from creators being forced into management instead of being able to outsource those tasks to their own managers.
I am not really sure what to do with this insight
[1] https://www.cnn.com/2025/07/02/tech/microsoft-layoffs-9000-e...
Legitimately, I think you are missing my point. What I quoted out of your response could be applied to prompt engineering/management/tinkering. I think everyone who likes doing this with juniors and hates it with AI is conflating their enjoyment of teaching juniors with the dopamine you get from engaging with other primates.
I think most people I’ve met who hated AI would have the same level of hate for a situation where their boss made them actually manage an underperforming employee instead of letting them continue on as is ad infinitum.
It’s hard work both mentally and emotionally to correct an independent agent well enough to improve their behavior but not strongly enough to break them, and I think most AI haters are choking on this fact.
I'm saying that from the position of an engineer who got into management and choked on the fact that sometimes upper leadership was right, and that the employee complaining to me about the "stupid rules", or trying to lie to me to get a gold star instead of a bronze one, was the agent in the system who was actually at fault.
Most of the Internet is crap. Most of the media is crap. This does not need to stop you (or me) from creating.
That seems snarky ("you don't want to learn things") when the reality is that the problem, the finite resource that is the bottleneck, is _time_.
This is also why hyperlinks are underused: few people go through every single one of the words in their last post to try to add appropriate hyperlinks.
Brendan Eich's beliefs about marriage aren't relevant. It's been, what, well over a decade since he was ousted from Mozilla? Wasn't that enough? People constantly bring this up as a reason why you shouldn't use Brave. There are valid reasons not to use Brave Browser; Brendan Eich's beliefs about marriage aren't one of them. It's a tired old jab at Eich, who, like many older people, holds beliefs that are considered backwards by today's standards.
> This is starting to get away from the main thesis of Whatever but every time I hear about students coasting through school just using LLMs, I wonder what we are doing to humanity’s ability to think critically about anything. It already wasn’t great, but now we’re raising a whole generation on a machine that gives them Whatever, and they just take it. You’ve seen anecdotes of people posting comments and submitting papers and whatnot with obvious tells like “As a large language model…” in them. That means they aren’t even reading the words they claim as their own! They just produce Whatever.
People were cutting and pasting Wikipedia articles into university work and putting in zero effort back in the mid-2000s while I was at university. There is a deeper problem with education generally, and it isn't people copying stuff with AI.
> The most obnoxious people like to talk about how Stable Diffusion is “democratizing art” and that is the dumbest thing I’ve ever heard. There is no fucking King of Art decreeing who is allowed to draw and who isn’t. You could do it. You could do it right now. But it’s hard, so you’d rather spend that time crying on Twitter about how unfair it is that learning a skill takes work and thank god the computer can give you all of the admiration with none of the effort now.
This isn't what is meant when people say this.
What people typically mean is that people can cheaply create things with the AI that match what they have in their head.
e.g.
- There are parody songs / music videos made for internet streams I watch, by other fans of the show. In the past people would cheaply copy and paste stuff into a video editor and crudely animate it, and the results weren't great. I literally laughed at a parody song, done like a sea shanty, that mocked a well-known e-celeb.
- I make cheesy YouTube thumbnails for my videos because I have zero budget for an artist and my skills with image-editing software aren't stellar. I can use the AI to generate some of the thumbnail and do the rest in GIMP. I get something that looks better than what I'd make without the AI, IMO. This does democratise it, because I don't have to spend literally hundreds on a graphic designer.
- AI can help with animations. A friend of mine can take a 20 FPS animation and have the AI interpolate the in-between frames. He tells me this saves him a huge amount of time.
> But yes, thanks: I was once offered this challenge when faced with a Ren’Py problem, so I grit my teeth and posed my question to some LLM. It confidently listed several related formatting tags that would solve my problem. One teeny tiny issue: those tags did not and had never existed. Just about anything might be plausible! It can just generate Whatever! I cannot stress enough that this is worse than useless to me.
The probabilistic machine generated a probabilistic answer. Unable to figure out a use for the probabilistic machine in two tries, I threw it into the garbage.
Unfortunately, humans are also probabilistic machines. Despite speaking English for nearly a lifetime, errors are constantly produced by my finger-based output streams. So I'm okay talking to the machine that might be wrong in addition to the human that might be wrong.
> It feels like the same attitude that happened with Bitcoin, the same smug nose-wrinkling contempt. Bitcoin is the future. It’ll replace the dollar by 2020. You’re gonna be left behind. Enjoy being poor.
I mean, you were left behind. I was left behind. I am not enjoying being poor. Most of us were left behind. If we had invested in Bitcoin like it was the future in 2011, we'd all be surfing around on yachts right now, given the current valuation.
Just because you failed to use an LLM effectively the first time, or it doesn't live up to your version of the hype, doesn't mean you have a magic window into how sh1t they are. Many people who use them regularly know just how bad they are and what it's like to move beyond the original context window length. But having autocomplete on steroids is still the best thing in a generation for making computing "fun" again. No more boilerplate, no more beating your head against a desk looking for that minor coding bug that could have been fixed by an awesome regex if only you had the time to learn it. No more having to break your stride and go through about 20 websites with popups to find that someone else already solved the problem for you; just getting on with stuff and having fun whilst doing it.
Edit: no retort, just flagging and down-voting... lovely
But he did invent JavaScript ...
The irony of this rant next to the AI rant.
Progress is not uniformly distributed I guess.
> Why would someone using a really cool tool that makes them more productive… feel compelled to sneer and get defensive at the mere suggestion that someone else isn’t doing the same?
It sounds like it is about people seeing others doing things in a way they view as inefficient or wrong, and then trying to help. "Sneer and get defensive" does not sound like trying to be helpful, but they probably would not describe themselves as sneering and getting defensive, either.
> I might have strong opinions about what code looks like, because I might have to read it, but why would I — why would anyone — have such an intense reaction to the hypothetical editor setup of a hypothetical stranger?
As above, but even closer to this particular question, see the editor war.
> But the Bitcoin people make more money if they can shame everyone else into buying more Bitcoin, so of course they’re gonna try to do it. What do programmers get out of this?
Apart from "helping" others, a benefit of promoting the technologies one uses and prefers may be a wider user base, leading to better support and the proliferation of technologies they view as good and useful.
> We’ve never had a machine that can take almost any input and just do Whatever.
Well, there is Perl. It is a joke (the statement, not the language), but the previous points actually made me think of programming languages with dynamic and weak typing, which similarly let you pretend that some errors do not happen, at the cost of being less correct and doing whatever when things go wrong. Ambiguities in natural languages come to mind, too.
> That means they aren’t even reading the words they claim as their own!
Both homework and online discussions featured that before LLMs, too. For instance, people would sometimes link (not necessarily copy) materials to prove a point, only for the materials to contradict it. Carelessness, laziness, and a lack of motivation to spend time and effort are all old things.
> I can’t imagine publishing a game with, say, Midjourney-generated art, even if it didn’t have uncanny otherworldly surfaces bleeding into each other. I would find that humiliating. But there are games on the Switch shop that do it.
I heard "AI-generated game" mentioned as a curiosity or a novelty, apparently making it a selling point. Same as with all the "AI-powered" stuff, before LLMs. There is much of that used for marketing: as block chaining and "big data" were added everywhere when those were hyped, and many silly things are similarly added into items outside of computing if they have a potential to sound cool at least to some (e.g., fad diets, audiophile hardware).
> But I think the core of what pisses me off is that selling this magic machine requires selling the idea that doing things is worthless.
This also sounds like yet another point in the clash between prevalent business requirements and more enthusiastic human aspirations. The economic and social systems, and cultures, probably have more to do with it than particular technologies. Pretty much any bureaucracy/corporate/enterprise-focused technologies tend to lessen the fun and enjoyment.
Oh come on. As someone who works with plenty of entrepreneurs: you won't find much more enthusiasm from anyone about what they're doing, or anyone who cares about it as much. It's up there with professional athletes and professional artists.
Just because the things that got built weren't the absolute dream of what they could be doesn't mean people didn't care or didn't put all their effort into building them. Just because they didn't meet this couch critic's expectations doesn't mean people didn't put the effort in.
I really don't like this attitude. They're really unhappy about PayPal and Stripe existing? What exactly is the alternative? What alternate universe are they dreaming about? Perfection doesn't exist.
They're unhappy that something better doesn't exist, and that the dream of cryptocurrencies, that they would actually become a useful technology for the everyday movement of money, has essentially been killed and buried by the sheer quantity of grift in the ecosystem. (To be fair, there are a lot of fundamental technology issues that make that dream difficult to achieve in practice. But it's sad that it got basically none of the substantial resources poured into the ecosystem, and the general stench of the thing now makes legitimate use so much harder: so many people associate crypto == scam that adoption for real payments has actively gone backwards since the first few years.)
You do you, and I'll do me. Perhaps spend more time coding which you say you like to do and less time sneering at people who use different tools.
You should pick some of the substance of the article to take issue with instead of jumping to be a victim. They clearly read it if they wrote it.
I know there are people who oppose, say, syntax coloring, and I think that’s pretty weird, but I don’t go out of my way to dunk on them.
Did you even read the blog post? The entire blog is dunking on people who use the tools.
What embarrassment? This post, like plenty of Evelyn’s writing, reached the front page of HN and has a ton of commentary in agreement. Your comment, on the other hand, was downvoted to the bottom of the thread.
The author also draws and publishes what is, in their own words, “pretty weird porn”.
You should voice your disagreement with the contents of the post and explain what they are, if that is what you’re feeling. Discussion is what HN is for. But to believe the author is or should be suffering any kind of embarrassment about this post is detached from reality.
Still don't get it. LLM outputs are nondeterministic. LLMs invent APIs that don't exist. That's why you filter those outputs through agent constructions, which actually compile code. The nondeterminism of LLMs doesn't make your compiler nondeterministic.
All sorts of ways to knock LLM-generated code. Most I disagree with, all colorable. But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.
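For what it's worth, the "agent construction" here can be as simple as a loop that compiles the candidate code and feeds the errors back. A minimal sketch, where llm.complete is a hypothetical stand-in for any completion client and Go plays the role of the type-strict compiler:

    import pathlib
    import subprocess
    import tempfile

    def generate_compiling_code(llm, prompt, max_attempts=3):
        """Retry until the Go compiler accepts the model's output.

        Hallucinated identifiers fail to resolve at build time, so they
        never reach the user. `llm.complete` is a made-up stand-in for
        whatever completion client you actually use.
        """
        feedback = ""
        for _ in range(max_attempts):
            code = llm.complete(prompt + feedback)
            with tempfile.TemporaryDirectory() as tmp:
                src = pathlib.Path(tmp, "main.go")
                src.write_text(code)
                result = subprocess.run(
                    ["go", "build", "-o", "/dev/null", str(src)],
                    capture_output=True, text=True, cwd=tmp,
                )
            if result.returncode == 0:
                return code  # the deterministic gate passed
            feedback = "\nThe previous attempt failed to compile:\n" + result.stderr
        raise RuntimeError("no compiling candidate within the attempt budget")

The compiler is deterministic even though the generator isn't; the loop only ever hands back outputs that survived the deterministic check.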
There is no value in randomly choosing an API that exists. There is value in choosing an API that works.
When an LLM makes up an API that doesn't even exist, that indicates it's not tied to the reality of the task of finding a working API, so filtering out the nonexistent APIs will not make the remaining results fit any better. But yes, they'll compile.
LLMs can write truly shitty code. I have a <50% hit rate on stuff I don't have to rewrite, with Sketch.dev, an agent I like a lot. But they don't fail the way you or this article claim they do. Enough.
Second, speak for yourself; you have no clue about everybody else's experience, so you can't make such a universal claim.
Lastly, the article talks about the author's experience, not yours, so you're the only one who can gaslight the author, not the other way around.
For example, just yesterday I asked an AI a question about how to approach a specific problem. It gave an answer that "worked" (it compiled!) but in reality didn't make any sense and would have added a very nasty bug. What it wrote (it used a FrameUpdate instead of a normal Update) just didn't make sense at a basic level of how the framework worked.
This is my problem: not that people are cynical about LLM-assisted coding, but that they themselves are hallucinating arguments about it, expecting their readers to nod along. Not happening here.
You made a similar claim: "LLMs invent APIs that don't exist"
https://news.ycombinator.com/item?id=44461381
1. A type-strict compiler.
2. https://github.com/isaacphi/mcp-language-server
LLMs will always make stuff up, because they are lossy. In the same way, if I asked you to list the methods on some random library object, you wouldn't be able to; you'd pull that up from the documentation or your code-completion companion. LLMs are only just getting the tools for that.
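A toy illustration of that move in Python: instead of asking the model to recall an API from lossy training data, hand it a tool that reads the ground truth from the runtime. The helper name is made up:

    import inspect

    def list_methods(obj):
        """Return the public callables on obj, straight from the runtime.

        This is the kind of answer a doc-lookup tool hands the model,
        so it never has to reconstruct the API from memory.
        """
        return sorted(
            name
            for name, member in inspect.getmembers(obj, callable)
            if not name.startswith("_")
        )

    print(list_methods([]))  # ['append', 'clear', 'copy', 'count', ...]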
I know the author already addressed this, literally calling out HN by name, but I just don't get it. You don't even need agents (though I'm sure they help); I still just use regular ChatGPT or Copilot or whatever, and it's still occasionally useful. You type in what you want it to do, it gives you code, and usually the code works. Can we appreciate how insane this would have been, what, half a decade ago? Are our standards literally "the magic English-to-code machine doesn't work 100% of the time, so it's total crap, utterly useless"?
I absolutely agree with the general thrust of the article, the overall sense of disillusionment, the impact LLM abuse is going to have on education, etc. I don't even particularly like LLMs. But it really does feel like gaslighting to the extent that when these essays make this sort of argument (LLMs being entirely useless for coding) it just makes me take them less seriously.
Indeed. This is how to spot an ideologue with an axe to grind, not someone whose beliefs are shaped by dispassionate observation.
Seems like 40 years of effort making deterministic computing work in a non-deterministic universe is being cast aside because we thought nondeterminism might work better. Turns out, we need determinism after all.
Following this out, we might end up with alternating layers of determinism and nondeterminism each trying to correct the output of the layer below.
I would argue AI is a harder problem than any humans have ever tried to solve, so how does it benefit me to turn every mundane problem into the hardest problem ever? As they say on the internet: ...and now you have two problems, the second of which is always the hardest one ever.
There are oh-so-many issues with LLMs - plagiarism/IP rights, worsening education, unmaintainable code - this should be obvious to anyone. But painting them as totally useless just doesn't make sense. Of course they work. I've had a task I want to do, I ask the LLM in plain English, it gives me code, the code works, I get the task done faster than I would have figuring out the code myself. This process has happened plenty of times.
Which part of this do you disagree with, under your argument? Am I and all the other millions of people who have experienced this all collectively hallucinating (pun intended) that we got working solutions to our problems? Are we just unskilled for not being able to write the code quickly enough ourselves, and should go sod off? I'm joking a bit, but it's a genuine question.
Also, someone mathematically proved that's enough. And then someone else proved it empirically.
There was an experiment where researchers trained 16 pigeons to detect cancerous or benign tumours in photographs.
Individually, each pigeon had an average accuracy of 85%. But the pigeons together (excluding one outlier) had an accuracy of 99%.
If you add enough silly brains, you get one super smart brain.
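The arithmetic behind that is just a majority vote of independent classifiers (Condorcet's jury theorem). A quick check, assuming independent errors, which real pigeons and real models don't quite have:

    from math import comb

    def majority_vote_accuracy(p, n):
        """Probability that a majority of n independent voters, each
        correct with probability p, gets the right answer."""
        return sum(
            comb(n, k) * p**k * (1 - p) ** (n - k)
            for k in range(n // 2 + 1, n + 1)
        )

    print(majority_vote_accuracy(0.85, 15))  # ~0.999 for the 15 non-outlier pigeons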
It's annoying to have to hand-code that stuff. But without Copilot I have to. Or I can write some arcane regex and run it on existing code to get 90% of the way there. But writing the regex also takes a while.
Copilot was literally just suggesting the whole deserialization function after I'd finished the serializer: 100% correct code.
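That kind of suggestion works because the two functions are near mirror images: once the first exists, the second is almost fully determined. A toy sketch with a made-up record format:

    import struct

    def serialize(point):
        """Pack an (x, y, z) tuple of floats as little-endian doubles."""
        return struct.pack("<3d", *point)

    def deserialize(data):
        """The mirror image: once serialize() exists, every line here
        is implied by it, which is why autocomplete can nail it."""
        return struct.unpack("<3d", data)

    assert deserialize(serialize((1.0, 2.0, 3.0))) == (1.0, 2.0, 3.0)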
Now that everything is containerised and managed by Docker-style environments, I am thinking about giving SBCL another try; the end users only need to access the same JSON REST APIs anyway.
Everything old is new again =)
LLM outputs are deterministic: the model itself has no intrinsic source of randomness. Users add randomness (temperature) at sampling time to vary the output.
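A minimal sketch of where that randomness enters, using a toy sample_token over raw logits; all names here are illustrative:

    import numpy as np

    def sample_token(logits, temperature=1.0, rng=None):
        """Turn a deterministic logit vector into a token choice.

        temperature == 0 means greedy argmax: same input, same output.
        Higher temperatures flatten the distribution; the randomness is
        injected here at sampling time, not inside the model.
        """
        rng = rng or np.random.default_rng()
        logits = np.asarray(logits, dtype=float)
        if temperature == 0:
            return int(np.argmax(logits))
        scaled = logits / temperature
        probs = np.exp(scaled - scaled.max())  # stable softmax
        probs /= probs.sum()
        return int(rng.choice(len(probs), p=probs))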
> But this article is based on a model of LLM code generation from 6 months ago
There hasn't been much change in the models from 6 months ago. What has happened is that we have better tooling to sift through the randomly generated outputs.
I don't disagree with your message. You are being downvoted because a lot of software developers are butt-hurt by it: it is going to force a change in the labor market for developers. In the same way, the author is butt-hurt that they didn't buy Bitcoin in the very early days (they were aware of it) and missed the boat.
I made the same claim in a widely-circulated piece a month or so back, and have come to believe it was wildly false, the dumbest thing I said in that piece.
So far the only model that has shown significant advancement and differentiation is GPT-4.5. I advise looking at the problem and reading GPT-4.5's answer. It shows the difference from the other "normal" models (including GPT-3.5), as it displays a considerably higher level of understanding.
Other normal models are now chattier and have a bit more data, but they do not show increased intelligence.
> But this article is based on a model of LLM code generation from 6 months ago which is simply no longer true, and you can't gaslight your way back to Q1 2024.
You're ahead of the curve and wondering why others don't know what you do. If you're not at an AI company or a FAANG, or an AI evangelist, you likely haven't heard of those solutions.
I've been trying to keep up with AI developments, and I only learned about MCP and agentic workflows 1-2 months ago; I consider myself to be failing at keeping up with cutting-edge AI development.
Edit:
Also, six months ago was Q1 2025, not Q1 2024. Not sure if that was a typo or a reminder of how rapidly this technology is iterating.
The reason for this is as follows: a good number of software engineers make a fuckton of money, but not all of them (it's bimodal/trimodal/whatever). SWEs are also among the highest-paid people within tech companies, often earning more than the MBAs, the CPAs, and the JDs (who tend to come with massive chips on their shoulders, on account of actually having actual credentials, and tech companies being prestigious destinations within those professions).
It seems that over time, this phenomenon has produced a large coterie of people whom I would kindly describe as "bearish" on these high-end software-engineering salaries, who double down on any narrative that undermines the livelihood of the software engineer, basically just to make themselves feel better.
tl;dr people desperately want AI to succeed because they're envious of software engineers