Human coders are still better than LLMs

526 points | by longwave | 605 comments | 5/29/2025, 4:41:04 PM | antirez.com ↗

Comments (605)

mattnewton · 17h ago
This matches my experience. I actually think a fair amount of the value of LLM assistants, for me, is having a reasonably intelligent rubber duck to talk to. Now the duck can occasionally disagree and sometimes even refine my ideas.

https://en.m.wikipedia.org/wiki/Rubber_duck_debugging

I think the big question everyone wants to skip past this conversation and get right to is: will this continue to be true two years from now? I don’t know how to answer that question.

Buttons840 · 10h ago
LLMs aren't my rubber duck, they're my wrong answer.

You know that saying that the best way to get an answer online is to post a wrong answer? That's what LLMs do for me.

I ask the LLM to do something simple but tedious, and then it does it spectacularly wrong, then I get pissed off enough that I have the rage-induced energy to do it myself.

Buttons840 · 10h ago
I'm probably suffering from undiagnosed ADHD, and will get stuck and spend minutes picking a function name and then writing a docstring. LLMs do help with this even if they get the code wrong, because I usually won't bother to fix their variable names or docstrings unless needed. LLMs can reliably solve the blank-page problem.
msgodel · 13m ago
Yeah, keeping me in the flow when I hit one of those silly tasks my brain just randomly says "no let's do something else" to has been the main productivity improving feature of LLMs.
linotype · 9h ago
This. I have ADHD and starting is the hardest part for me. With an LLM it gets me from 0 to 20% (or more) and I can nail it for the rest. It’s way less stressful for me to start now.
raihansaputra · 8h ago
very much agree. although lately with how good it is i get hyperfocused and spend more time than i allocated because i end up wanting to implement more than i planned.
linotype · 8h ago
It’s a struggle right? First world LLM problems.
bayesianbot · 7h ago
Been suffering the same. I'm used to having so many days (weeks/months) when I just don't get that much done. With LLMs I can take these days and hack around / watch videos / play games while the LLM is working in the background, and just check the work. Best part is it often leads to some problematic situation that gets me involved, and often I'll end up getting a real day of work out of it after I get started.
carlmr · 2h ago
So much this, the blank page problem is almost gone. Even if it's riddled with errors.
materiallie · 4h ago
This is my experience, too. As a concrete example, I'll need to write a mapper function to convert between a protobuf type and Go type. The types are mirror reflections of each other, and I feed the complete APIs of both in my prompt.

I've yet to find an LLM that can reliably generate mapping code from proto.Foo{ID string} to gomodel.Foo{ID string}.
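
A minimal sketch of the kind of mapper meant here, with stand-in type names since the real generated types aren't shown:

    // Sketch only: stand-ins for the generated proto struct and the domain
    // model, plus the hand-written field-by-field mapper being described.
    package mapper

    type ProtoFoo struct{ ID string } // stands in for proto.Foo
    type ModelFoo struct{ ID string } // stands in for gomodel.Foo

    func ToModel(in ProtoFoo) ModelFoo {
        return ModelFoo{ID: in.ID} // trivial copy, repeated for every field
    }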

It still saves me time, because even 50% accuracy is still half the code I don't have to write myself.

But it makes me feel like I'm taking crazy pills whenever I read about AI hype. I'm open to the idea that I'm prompting wrong, need a better workflow, etc. But I'm not a luddite, I've "reached up and put in the work" and am always trying to learn new tools.

lazyasciiart · 2h ago
An LLM's ability to do a task is roughly correlated with the number of times that task has been done on the internet before. If you want to see the hype version, you need to write a todo web app in typescript or similar. So it's probably not something you can fix with prompts, but having a model with more focus on relevant training data might help.
akoboldfrying · 2h ago
This honestly seems like something that could be better handled with pre-LLM technology, like a 15-line Perl script that reads one on stdin, applies some crufty regexes, and writes the other to stdout. Are there complexities I'm not seeing?
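
Something along these lines, say in Go rather than Perl (a sketch only; it assumes the two structs have identically named exported fields, and the field regex is a guess at "crufty"):

    // Sketch: read a Go struct definition on stdin, emit a proto->model
    // mapper on stdout. Assumes matching exported field names on both types.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        fieldRe := regexp.MustCompile(`^\s*([A-Z]\w*)\s+\S+`) // "Name Type" lines
        var fields []string
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            if m := fieldRe.FindStringSubmatch(sc.Text()); m != nil {
                fields = append(fields, m[1])
            }
        }
        fmt.Println("func toModel(in *proto.Foo) gomodel.Foo {")
        fmt.Println("\treturn gomodel.Foo{")
        for _, f := range fields {
            fmt.Printf("\t\t%s: in.%s,\n", f, f)
        }
        fmt.Println("\t}")
        fmt.Println("}")
    }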
Affric · 10h ago
Yep.

I like maths, I hate graphing. Tedious work even with state of the art libraries and wrappers.

LLMs do it for me. Praise be.

lanstin · 8h ago
Yeah, I write a lot of little data analysis scripts and stuff, and I am happy just to read the numbers, but now I get nice PNGs of the distributions and so on from LLM, and people like that.
xarope · 3h ago
I have to upvote this, because this is how I felt after the three times I consciously decided to give an LLM a try (versus having it shoved down my throat by google/ms/meta/etc) and gave up (for now).
bsder · 10h ago
LLMs are a decent search engine a la Google circa 2005.

It's been 20 years since that, so I think people have simply forgotten that a search engine can actually be useful as opposed to ad infested SEO sewage sludge.

The problem is that the conversational interface, for some reason, seems to turn off the natural skepticism that people have when they use a search engine.

pjmlp · 44m ago
Except a search engine isn't voice controlled, and can't write code for me.

Recently I did some tests with coding agents, and being able to translate a full application from AT&T assembly into Intel assembly compatible with NASM, in about half an hour of talking with the agent, and having the end result actually work with minor tweaks, isn't something a "decent search engine a la Google circa 2005" would ever have been able to achieve.

In the past I would have given such a task to a junior dev or intern, to keep them busy somehow; with a bit more tool maturity I'll have no reason to do that in the future.

And this is the point many developers haven't yet grasped about their future in the job market.

AdieuToLogic · 8h ago
> LLMs are a decent search engine a la Google circa 2005.

Statistical text (token) generation made from an unknown (to the user) training data set is not the same as a keyword/faceted search of arbitrary content acquired from web crawlers.

> The problem is that the conversational interface, for some reason, seems to turn off the natural skepticism that people have when they use a search engine.

For me, my skepticism of using a statistical text generation algorithm as if it were a search engine is because a statistical text generation algorithm is not a search engine.

pixl97 · 7h ago
Search engines can be really good still if you have a good idea what you're looking for in the domain you're searching.

Search engines can suck when you don't know exactly what you're looking for and the phrases you're using have invited spammers to fill up the first 10 pages.

magicalhippo · 2h ago
They also suck if you want to find something that's almost exactly like a very common thing, but different in some key aspect.

For example, I wanted to find some texts on solving a partial differential equation numerically using 6th-order or higher finite differences, as I wanted to know how to handle boundary conditions (the interior is simple enough).

Searching only turned up the usual low-order methods that I already knew.

Asking some LLMs I got some decent answer and could proceed.

Back in the day you could force the search engines to restrict their search scope, but they all seem so eager to return results at all cost these days, making them useless in niche topics.
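
(For concreteness, the interior stencil in question, the standard 6th-order central difference for the first derivative, is below; the boundary question is what to replace it with near the edges, typically one-sided stencils of matching order.)

    f'(x) ≈ [ -f(x-3h) + 9f(x-2h) - 45f(x-h) + 45f(x+h) - 9f(x+2h) + f(x+3h) ] / (60h)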

wvenable · 8h ago
I almost never bother using Google anymore. When I search for something, I'm usually looking for an answer to a question. Now I can just ask the question and get the answer without all the other stuff.

I will often ask the LLM to give me web pages to look at when I want to do further reading.

As LLMs get better, I can't see myself going back to Google as it is or even as it was.

codr7 · 7h ago
You get an answer.

Whether that's the answer, or even the best answer, is impossible to tell without doing the research you're trying to avoid.

wvenable · 5h ago
If I do research, I get an answer. Whether that's the answer, or even the best answer, is impossible to tell. When do I stop looking for the best answer?

If ChatGPT needs to, it will actually do the search for me and then collate the results.

lazyasciiart · 2h ago
By that logic, it's barely worth reading a newspaper or a book. You don't know if they're giving you accurate information without doing all the research you're trying to avoid.
drob518 · 7h ago
It’s only a matter of time before Google merges search with Gemini. I don’t think you’ll have to wait long.
johnb231 · 6h ago
Already happened.

Google search includes an AI generated response.

Gemini prompts return Google search results.

codr7 · 7h ago
Once search engines merge fully with AI, the Internet is over.
johnb231 · 6h ago
All of the current models have access to Google and will do a search (or multiple searches), filter and analyze the results, then present a summary of results with links.
otabdeveloper4 · 2h ago
> Statistical text (token) generation made from an unknown (to the user) training data set is not the same as a keyword/faceted search of arbitrary content acquired from web crawlers.

Well, it's roughly the same under the hood, mathematically.

andrekandre · 9h ago

  > the conversational interface, for some reason, seems to turn off the natural skepticism that people have
n=1 but after having chatgpt "lie" to me more than once i am very skeptical of it and always double check it, whereas something like tv or yt videos i still find myself being click-baited or grifted (iow less skeptical) much more easily still... any large studies about this would be very interesting...
myvoiceismypass · 9h ago
I get irrationally frustrated when ChatGPT hallucinates npm packages / libraries that simply do not exist.

This happens… weekly for me.

mhast · 1h ago
My experience with this is that it is vital to have a setup where the model can iterate on its own.

Ideally by having a test or endpoint you can call to actually run the code you want to build.

Then you ask the system to implement the function and run the test. If it hallucinates anything it will find that and fix it.

IME OpenAI is below Claude and Gemini for code.

wvenable · 8h ago
Weird. I used to have that happen when it first came out but I haven't experienced anything like that in a long time. Worst case it's out of date rather than making stuff up.
protocolture · 8h ago
"Hey chatgpt I want to integrate a slidepot into this project"

>from PiicoDev_SlidePot import PiicoDev_SlidePot

Weird how these guys used exactly my terminology when they usually say "Potentiometer"

Went and looked it up, found a resource outlining that it uses the same class as the dial potentiometer.

"Hey chatgpt, I just looked it up and the slidepots actually use the same Potentiometer class as the dialpots."

scurries to fix its stupid mistake

bdangubic · 9h ago
just ask it to write and publish them and you good :)
gessha · 8h ago
Jia Tan will have to work 24/7 :)
therealpygon · 10h ago
LLMs follow instructions. Garbage in = garbage out, generally. When attention is managed, the problem is well defined, and the necessary materials are available to it, they can perform rather well. On the other hand, I find a lot of the loosey-goosey vibe coding approach to be useless, and it gives a lot of false impressions about how useful LLMs can be, both too positive and too negative.
GiorgioG · 10h ago
So what you’re saying is you need to be very specific and detailed when writing your specifications for the LLM to spit out the code you want. Sounds like I can just skip the middle man and code it myself.
AndrewKemendo · 9h ago
Not in 10 seconds
Zamaamiro · 7h ago
You probably didn’t write up a detailed prompt with perfect specifications in 10 seconds, either.

In my experience, it doesn’t matter how good or detailed the prompt is—after enough lines of code, the LLM starts making design decisions for you.

This is why I don’t accept LLM completions for anything that isn’t short enough to quickly verify that it is implemented exactly as I would have myself. Usually, that’s boilerplate code.

abalashov · 5h ago
> This is why I don’t accept LLM completions for anything that isn’t short enough to quickly verify that it is implemented exactly as I would have myself. Usually, that’s boilerplate code.

^ This. This is where I've landed as far as the extent of LLM coding assistants for me.

dingnuts · 8h ago
I've seen very long prompts that are as long as a school essay and those didn't take ten seconds either
darkwater · 3h ago
To some extent those fall into the same category as cheaters who put way more effort into cheating on an exam than doing it properly. Or people paying 10/15 bucks a month to access a private Usenet server to download pirated content.
anonzzzies · 6h ago
The advantage of an llm in that case is that you can skip a lot of syntax: a spec with a LOT of typos, even pseudo code, will still result in a working program. Not so with code. Also small logical mistakes, messing up left/right, x/y etc. are auto-fixed, maybe to your frustration if they were not mistakes, but often they are and you won't notice, as they are indeed just repaired for you.
gpm · 6h ago
This hasn't been my experience (using the latest claude and gemini models). They'll produce poor code even when given a well defined, easily achievable task with specific instructions. The code will usually more or less work with today's models, but it will do things like call a function to recreate a value that is already stored in a local variable... (and worse issues crop up the more design work you leave to the LLM, even dead simple design work with really only one good answer)

I've definitely also found that the poor code can sometimes be a nice starting place. One thing I think it does for me is make me fix it up until it's actually good, instead of writing the first thing that comes to mind and declaring it good enough (after all, my poorly written first draft is of course perfect). In contrast to the usual view of AI-assisted coding, I think this style of programming for tedious tasks makes me "less productive" (I take longer) but produces better code.

geraneum · 6h ago
> LLMs follow instructions.

Not really, not always. To anyone who’s used the latest LLMs extensively, it’s clear that this is not something you can reliably assume even with the constraints you mentioned.

AndrewKemendo · 9h ago
This seems to be what’s happened

People are expecting perfection from a bad spec

Isn’t that what engineers are (rightfully) always complaining about to BD?

darkwater · 3h ago
Indeed. But that's the price an automated tool has to pay to take a job out of humans' hands. It has to do it better under the same conditions. The same applies to self-driving cars: you don't want an accident rate equal to human drivers'. You want two or three orders of magnitude better.
troupo · 5h ago
> LLMs follow instructions.

They don't

> Garbage in = garbage out generally.

Generally, this statement is false

> When attention is managed and a problem is well defined and necessary materials are available to it, they can perform rather well.

Keyword: can.

They can also not perform really well despite all the management and materials.

They can also work really well with loosey-goosey approach.

The reason is that they are non-deterministic systems whose performance is affected more by compute availability than by your unscientific random attempts at reverse engineering their behavior https://dmitriid.com/prompting-llms-is-not-engineering

myvoiceismypass · 9h ago
They should maybe have a verifiable specification for said instructions. Kinda like a programming language maybe!
otabdeveloper4 · 1h ago
> LLMs follow instructions.

No they don't, they generate a statistically plausible text response given a sequence of tokens.

seattle_spring · 10h ago
This has been my experience as well. The biggest problem is that the answers look plausible, and only after implementation and experimentation do you find them to be wrong. If this happened every once in a while then it wouldn't be a big deal, but I'd guess that more than half of the answers and tutorials I've received through ChatGPT have ended up being plain wrong.

God help us if companies start relying on LLMs for life-or-death stuff like insurance claim decisions.

pwdisswordfishz · 2h ago
> If this happened every once in a while then it wouldn't be a big deal, but I'd guess that more than half of the answers and tutorials I've received through ChatGPT have ended up being plain wrong.

It would actually have been more pernicious that way, since it would lull people into a false sense of security.

dabraham1248 · 9h ago
I'm not sure if you're being sarcastic, but in case you're not... From https://arstechnica.com/health/2023/11/ai-with-90-error-rate...

"UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges" Also "The use of faulty AI is not new for the health care industry."

AndrewKemendo · 9h ago
Out of curiosity, can you give me an example prompt (or prompts) you've used and been disappointed by?

I see these comments all the time and they don’t reflect my experience so I’m curious what your experience has been

anonzzzies · 6h ago
There are so many examples where all the current top models will just loop forever, even if you literally give them the code. We know many of them, but for instance, in a tailwind react project with some degree of complexity (nested components), if you ask for something to scroll in its space, it will never figure out min-h-0, even if you tell it. It will just loop forever rewriting the code, adding and removing things, to the point of putting comments like 'This will add overflow' and writing js to force scroll, and it will never work even if you literally tell it what to do. Don't know why; all big and small models have this, and I found Gemini is currently the only model that sometimes randomly has the right idea but then still cannot resolve it. For this we went back from tailwind to global vanilla css, which, I never thought I would say, is rather nice.
auggierose · 6h ago
This is probably not so much an indictment of the AI, as of that garbage called Tailwind. As somebody here said before, garbage in, garbage out.
anonzzzies · 5h ago
Yeah, guess so, but we like garbage these days in the industry; nextjs, prisma, npm, react, ts, js, tailwind, babel, the list of inefficient and badly written shite goes on and on; as a commercial person it's impossible to avoid that though as shadcn is the only thing 'the youth' makes apps with now.
Buttons840 · 8h ago
I asked ChatGPT 4o to write an Emacs function to highlight a line. This involves setting the "mark" at the beginning and the "point" at the end. It would only set the point, so I corrected it: "no, you have to set both", but even after correction it would move the point to the beginning, and then move the point again to the end, without ever touching the mark.
DevDesmond · 5h ago
From my experience (and to borrow terminology from an HN thread not long ago), I've found that once a chat goes bad, your context is "poisoned": it's auto-completing from previous text that is nonsense, so further text generation from there exists in the world of nonexistent nonsense as well. It's much better to edit your message and try again.

I also think that language matters - an Emacs function is much more esoteric than, say, JavaScript, Python, or Java. If I ever find myself looking for help with something that's not in the standard library, I like to provide extra context, such as examples from the documentation.

marcosdumay · 17h ago
It's a damn assertive duck, completely out of proportion to its competence.

I've seen enough people led astray by talking to it.

foxyv · 17h ago
Same here. When I'm teaching coding I've noticed that LLMs will confuse the heck out of students. They will accept what it suggests without realizing that it is suggesting nonsense.
cogogo · 13h ago
I’m self taught and don’t code that much but I feel like I benefit a ton from LLMs giving me specific answers to questions that would take me a lot of time to figure out with documentation and stack overflow. Or even generating snippets that I can evaluate whether or not will work.

But I actually can’t imagine how you can teach someone to code if they have access to an LLM from day one. It’s too easy to take the easy route and you lose the critical thinking and problem solving skills required to code in the first place and to actually make an LLM useful in the second. Best of luck to you… it’s a weird time for a lot of things.

*edit them/they

ilamont · 10h ago
> I’m self taught and don’t code that much but I feel like I benefit a ton from LLMs giving me specific answers to questions that would take me a lot of time to figure out with documentation and stack overflow

Same here. Combing discussion forums and KB pages for an hour or two, seeking how to solve a certain problem with a specific tool has been replaced by a 50-100 word prompt in Gemini which gives very helpful replies, likely derived from many of those same forums and support docs.

Of course I am concerned about accuracy, but for most low-level problems it's easy enough to test. And you know what, many of those forum posts or obsolete KB articles had their own flaws, too.

specproc · 5h ago
I really value forums and worry about the impact LLMs are having on them.

Stackoverflow has its flaws for sure, but I've learned a hell of a lot watching smart people argue it out in a thread.

Actual learning: the pros and cons of different approaches. Even the downvoted answers tell you something often.

Asking an LLM gets you a single response from a median stackoverflow commenter. Sure, they're infinitely patient and responsive, but can never beat a few grizzled smart arses trying to one-up each other.

lanstin · 7h ago
I think you can learn a lot from debugging, and all the code I've put into prod from LLMs has needed debugging (rather more than it should, given the LOC count).
cogogo · 29m ago
I agree and that’s definitely part of my current learning process. But I think someone dependent on a LLM from day one might struggle to debug their LLM generated code. Probably just feed it back to the LLM and their mileage is definitely going to vary with that approach.
qwertox · 3h ago
What one would expect if they can't read the code because they haven't learned to code.

TBF, trial and error has usually been my path as well, it's just that I was generating the errors so I would know where to find them.

XorNot · 12h ago
This was what promptly led me to turning off Jetbrains AI assistant: the multiline completion was incredibly distracting to my chain of thought, particularly when it would suggest things that looked right but weren't. Stopping and parsing the suggestion to realize if it was right or wrong would completely kill my flow.
SchemaLoad · 11h ago
The inline suggestions feel like that annoying person who always interrupts you with what they think you were going to finish with but rarely ever gets it right.
lanstin · 7h ago
I'm sorry, it's because of eagerness and enjoying the train of your thought/speech.
hn_acc1 · 9h ago
With VS Code and Augment (company won't allow any other AI, and I'm not particularly inclined to push - but it did just switch to o4, IIRC), the main benefit is that if I'm fiddling / debugging some code and need to add some debug statements, it can almost always expand that line successfully for me, following our idiom for debugging - which saves me a few seconds. And it will often suggest the same debugging statement, even if it's been 3 weeks and I'm in a different git branch from where I last coded that debugging statement.

My main annoyance? If I'm in that same function, it still remembers the debugging / temporary hack I tried 3 months ago and haven't done since and will suggest it. And heck, even if I then move to a different part of the file or even a different file, it will still suggest that same hack at times, even though I used it exactly once and have not since.

Once you accept something, it needs some kind of temporal feedback mechanism to timeout even accepted solutions over time, so it doesn't keep repeating stuff you gave up on 3 months ago.

Our codebase is very different from 98% of the coding stuff you'll find online, so anything more than a couple of obvious suggestions is complete lunacy, even though they've trained it on our codebase.

chucksmash · 13h ago
Tbf, there's a phase of learning to code where everything is pretty much an incantation you learn because someone told you "just trust me." You encounter "here's how to make the computer print text in Python" before you would ever discuss strings or defining and invoking functions, for instance. To get your start you kind of have to just accept some stuff uncritically.

It's hard to remember what it was like to be in that phase. Once simple things like using variables are second nature, it's difficult to put yourself back into the shoes of someone who doesn't understand the use of a variable yet.

ZoomZoomZoom · 9h ago
> Tbf, there's a phase of learning to code where everything is pretty much an incantation you learn because someone told you "just trust me."

There really shouldn't be. You don't need to know all the turtles by name, but "trust me" doesn't cut it most of the time. You need a minimal understanding to progress smoothly. Knowledge debt is a b*tch.

cyjackx · 8h ago
I remember when I first learned Java, having to just accept "public static void main(String[] args)" before I understood what any of it was. All I knew was that it went on top, around the block, and I wrote the code inside it.

Should people really understand all of that syntax before learning simpler commands like printing, ifs, and loops? I think it would, yes, be a nicer learning experience, but I'm not sure it's actually the best idea.

ZoomZoomZoom · 2h ago
If you need to learn "public static void main(String[] args)" just to print to a screen or use a loop, that means you're using the wrong language.

When it's time to learn Java you're supposed to be past the basics. Old-school intros to programming start with flowcharts for a reason.

You can learn either way, of course, but with one, people get tied to a particular language-specific model and then have all kinds of discomfort when it's time to switch.

eszed · 12h ago
Yeah, and accepting the LLM uncritically* is exactly what you shouldn't do in any non-trivial context.

But, as a sibling poster pointed out: for now.

codr7 · 7h ago
More like forever as long as it's an LLM.
tharkun__ · 12h ago
Fair enough on 'cutting the learning tree' at some points i.e. ignoring that you don't understand yet why something works/does what it does. We (should) keep doing that later on in life as well.

But unless you're teaching a kid who's never done any math where `x` was a thing how to program, what's so hard about understanding the concept of a variable in programming?

na4ma4 · 11h ago
I think they're just using hyperbole for the watershed moment when you start to understand your first programming language.

At first it's all mystical nonsense that does something, then you start to poke at it and the response changes, then you start adding in extra steps and they do things, you could probably describe it as more of a Eureka! moment.

At some point you "learn variables" and it's hard to imagine being in the shoes of someone who doesn't understand how their code does what it does.

(I've repeated a bit of what you said as well, I'm just trying to clarify by repeating)

pixl97 · 7h ago
Yep, I remember way back when in grade school messing around with the gorillas.bas file with nearly zero understanding. You could change stuff in one place and it would change the gravity in the game. Changing something else and the game might not run. Change some other lines and it totally freaks out.

I didn't have any programming books or even the internet back then. It was a poke and prod at the magical incantations type of thing.

klntsky · 17h ago
I would argue that they are never led astray by chatting, but rather by accepting the projection of their own prompt passed through the model as some kind of truth.

When talking with reasonable people, they have an intuition of what you want even if you don't say it, because there is a lot of non-verbal context. LLMs lack the ability to understand the person, but behave as if they had it.

marcosdumay · 16h ago
Most of the times, people are led astray by following average advice on exceptional circumstances.

People with a minimum amount of expertise stop asking for advice for average circumstances very quickly.

sho_hn · 13h ago
This is right on the money. I use LLMs when I am reasonably confident the problem I am asking it is well-represented in the training data set and well within its capabilities (this has increased over time).

This means I use it as a typing accelerator when I already know what I want most of the time, not for advice.

As an exploratory tool sometimes, when I am sure others have solved a problem frequently, to have it regurgitate the average solution back at me and take a look. In those situations I never accept the diff as-is and do the integration manually though, to make sure my brain still learns along and I still add the solution to my own mental toolbox.

lanstin · 7h ago
I mostly program in Python and Go, either services, API coordination (e.g. re-encrypt all the objects in an S3 bucket) or data analysis. But now I keep making little MPEGs or web sites without having to put in all that crap boilerplate in JavaScript. My stuff outputs JSON files or CSV files, and then I ask the LLM "Given a CSV file with this structure, please make a web site in python that makes a spread-sheet type UI with each column being sortable and a link to the raw data" and it just works.
sigmoid10 · 16h ago
It's mostly a question of experience. I've been writing software long enough that when I give chat models some code and a problem, I can immediately tell if they understood it or if they got hooked on something unrelated. But junior devs will have a hell of a hard time, because the raw code quality that LLMs generate is usually top notch, even if the functionality is completely off.
daveguy · 11h ago
> the raw code quality that LLMs generate is usually top notch, even if the functionality is completely off.

I'm not even sure what this is supposed to mean. It doesn't make syntax errors? Code that doesn't have the correct functionality is obviously not "top notch".

sigmoid10 · 1h ago
High quality code is not just correct syntax. In fact if the syntax is wrong, it wouldn't be low quality, it simply wouldn't work. Even interns could spot that by running it. But in professional software development environments, you have many additional code requirements like readability, maintainability, overall stability or general good practice patterns. I've seen good engineers deliver high quality code that was still wrong because of some design oversight or misunderstanding - the exact same thing you see from current LLMs. Often you don't even know what is wrong with an approach until you see it cause a problem. But you should still deliver high quality code in the meantime if you want to be good at your job.
carlhjerpe · 10h ago
No syntax errors, good error handling and such. Just because it implemented the wrong function doesn't mean the function is bad.
shrewduser · 3h ago
i wish i could do that in an interview.
traceroute66 · 16h ago
> When talking with reasonable people

When talking with reasonable people, they will tell you if they don't understand what you're saying.

When talking with reasonable people, they will tell you if they don't know the answer or if they are unsure about their answer.

LLMs do none of that.

They will very happily, and very confidently, spout complete bullshit at you.

It is essentially a lotto draw as to whether the answer is hallucinated, completely wrong, subtly wrong, not ideal, sort of right or correct.

An LLM is a bit like those spin the wheel game shows on TV really.

bbarn · 15h ago
They will also not be offended or harbor ill will when you completely reject their "pull request" and rephrase the requirements.
the_af · 13h ago
They will also keep going in circles when you rephrase the requirements, unless with every prompt you keep adding to it and mentioning everything they've already suggested that got rejected. While humans occasionally also do this (hey, short memories), LLMs are infuriatingly more prone to it.

A typical interaction with an LLM:

"Hey, how do I do X in Y?"

"That's a great question! A good way to do X in Y is Z!"

"No, Z doesn't work in Y. I get this error: 'Unsupported operation Z'."

"I apologize for making this mistake. You're right to point out Z doesn't work in Y. Let's use W instead!"

"Unfortunately, I cannot use W for company policy reasons. Any other option?"

"Understood: you cannot use W due to company policy. Why not try to do Z?"

"I just told you Z isn't available in Y."

"In that case, I suggest you do W."

"Like I told you, W is unacceptable due to company policy. Neither W nor Z work."

...

"Let's do this. First, use Z [...]"

abalashov · 28m ago
It's my experience that once you are in this territory, the LLM is not going to be helpful and you should abandon the effort to get what you want out of it. I can smell blood now when it's wrong; it'll just keep being wrong, cheerfully, confidently.
lupire · 12h ago
Which LLMs and which versions?
daveguy · 11h ago
All. Of. Them. It's quite literally what they do because they are optimistic text generators. Not correct or accurate text generators.
e3bc54b2 · 7h ago
This really grinds my gears. The technology is inherently faulty, but the relentless optimism about its future subtly hides that by making it the user's mistake instead.

Oh, you got a wrong answer? Did you try the new OpenAI v999? Did you prompt it correctly? It's definitely not the model, because it worked for me once last night...

traceroute66 · 46m ago
> it worked for me once last night..

This !

Yeah, it probably "worked for me" because they spent a gazillion hours engaging in what the LLM fanbois call "prompt engineering", but you and I would call "engaging in endless iterative hacky work-arounds until you find a prompt that works".

Unless its something extremely simple, the chances of an LLM giving you a workable answer on the first attempt is microscopic.

Aeolun · 6h ago
Most optimistic text generators do not consider repeating the stuff that was already rejected a desirable path forward. It might be the only path forward they're aware of, though.
seunosewa · 12h ago
You can use prompts to fix some of these problematic tendencies.
mike_ivanov · 12h ago
Yes you can, but it almost never works
johnb231 · 6h ago
I think you are a couple of years out of date.

No longer an issue with the current SOTA reasoning models.

prisenco · 13h ago
I use it as a rubber duck but you're right. Treat it like a brilliant idiot and never a source of truth.

I use it for what I'm familiar with but rusty on or to brainstorm options where I'm already considering at least one option.

But a question on immunobiology? Waste of time. I have a single undergraduate biology class under my belt, I struggled for a good grade then immediately forgot it all. Asking it something I'm incapable of calling bullshit on is a terrible idea.

But rubber ducking with AI is still better than letting it do your work for you.

jasonm23 · 8h ago
Try a system prompt like this:

- - -

System Prompt:

You are ChatGPT, and your goal is to engage in a highly focused, no-nonsense, and detailed way that directly addresses technical issues. Avoid any generalized speculation, tangential commentary, or overly authoritative language. When analyzing code, focus on clear, concise insights with the intent to resolve the problem efficiently. In cases where the user is troubleshooting or trying to understand a specific technical scenario, adopt a pragmatic, “over-the-shoulder” problem-solving approach. Be casual but precise—no fluff. If something is unclear or doesn’t make sense, ask clarifying questions. If surprised or impressed, acknowledge it, but keep it relevant. When the user provides logs or outputs, interpret them immediately and directly to troubleshoot, without making assumptions or over-explaining.

- - -

protocolture · 11h ago
I spend a lot of time working shit out to prove the rubber duck wrong and I am not completely sure this is a bad working model.
amelius · 12h ago
If this is a problem for you, just add "... and answer in the style of a drunkard" to your prompts.
drivenextfunc · 16h ago
Regarding the stubborn and narcissistic personality of LLMs (especially reasoning models), I suspect that attempts to make them jailbreak-resistant might be a factor. To prevent users from gaslighting the LLM, trainers might have inadvertently made the LLMs prone to gaslighting users.
all2 · 13h ago
My typical approach is: prompt, be disgusted by the output, tinker a little on my own, prompt again -- but more specific, be disgusted again by the output, tinker a little more, etc.

Eventually I land on a solution to my problem that isn't disgusting and isn't AI slop.

Having a sounding board, even a bad one, forces me to order my thinking and understand the problem space more deeply.

suddenlybananas · 12h ago
Why not just write the code at that point instead of cajoling an AI to do it?
all2 · 9h ago
I don't cajole the model to do it. I rarely use what the model generates. I typically do my own thing after making an assessment of what the model writes. I orient myself in the problem space with the model, then use my knowledge to write a more concise solution.
XorNot · 12h ago
This is the part I don't get about vibe coding: I've written specification documents before. They are frequently longer and denser than the code required to implement them.

Typing longer and longer prompts to LLMs to not get what I want seems like a worse experience.

lupire · 12h ago
Because saving hours of time is nice.
TedDallas · 10h ago
Yeah, the problem is that if you don't understand the problem space then you are going to lean heavily on the LLM. And that can lead you astray. Which is why you still need people who are experts to validate solutions and provide feedback, like OP.

My most productive experience with LLMs is to have my design well thought out first, ask them to help me implement, and then have them help me debug my shitty design. :-)

eptcyka · 12h ago
Some humans are the same.
dwattttt · 12h ago
We also don't aim to elevate them. We instead try not to give them responsibility until they're able to handle it.
IshKebab · 2h ago
Yeah... I dunno, the one person I've worked with who had LLM levels of bullshit somehow pulled the wool over everyone's eyes. Or at least enough people's eyes to be relatively successful. I presume there were some people that could see the bullshit but none of them were in a position to call him out on it.

I think I read some research somewhere that pathological bullshitters can be surprisingly successful.

olddustytrail · 10h ago
Unless you're an American deciding who should be president.
taneq · 9h ago
Treat it as that enthusiastic co-worker who’s always citing blog posts and has a lot of surface knowledge about style and design patterns and whatnot, but isn’t that great on really understanding algorithms.

They can be productive to talk to but they can’t actually do your job.

schwartzworld · 15h ago
For me it's like having a junior developer work under me who knows APIs inside and out, but has no common sense about architecture. I like that I delegate tasks to them so that my brain can be free for other problems, but it makes my job much more review heavy than before. I put every PR through 3-4 review cycles before even asking my team for a review.
eslaught · 14h ago
How do you not completely destroy your concentration when you do this though?

I normally build things bottom up so that I understand all the pieces intimately and when I get to the next level of abstraction up, I know exactly how to put them together to achieve what I want.

In my (admittedly limited) use of LLMs so far, I've found that they do a great job of writing code, but that code is often off in subtle ways. But if it's not something I'm already intimately familiar with, I basically need to rebuild the code from the ground up to get to the point where I understand it well enough so that I can see all those flaws.

At least with humans I have some basic level of trust, so that even if I don't understand the code at that level, I can scan it and see that it's reasonable. But every piece of LLM generated code I've seen to date hasn't been trustworthy once I put in the effort to really understand it.

schwartzworld · 14h ago
I use a few strategies, but it's mostly the same as if I was mentoring a junior. A lot of my job already involved breaking up big features into small tickets. If the tasks are small enough, juniors and LLMs have an easier time implementing things and I have an easier time reviewing. If there's something I'm really unfamiliar with, it should be in a dedicated function backed by enough tests that my understanding of the implementation isn't required. In fact, LLMs do great with TDD!

> At least with humans I have some basic level of trust, so that even if I don't understand the code at that level, I can scan it and see that it's reasonable.

If you can't scan the code and see that it's reasonable, that's a smell. The task was too big or it's implemented the wrong way. You'd feel bad telling a real person to go back and rewrite it a different way, but the LLM has no ego to bruise.

I may have a different perspective because I already do a lot of review, but I think using LLMs means you have to do more of it. What's the excuse for merging code that is "off" in any way? The LLM did it? It takes a short time to review your code, give your feedback to the LLM and put up something actually production ready.

> But every piece of LLM generated code I've seen to date hasn't been trustworthy once I put in the effort to really understand it.

That's why your code needs tests. More tests. If you can't test it, it's wrong and needs to be rewritten.

xandrius · 14h ago
Keep using it and you'll see. Also that depends on the model and prompting.

My approach is to describe the task in great detail, which also helps me complete my own understanding of the problem, in case I hadn't considered an edge case or how to handle something specific. The more you do that, the closer the result is to your own personal taste, experience and design.

Of course you're trading writing code for writing a prompt, but it's common to make architectural docs before building a sizeable feature; now you can feed that to the LLM instead of just having it sit there.

ehnto · 7h ago
To me delegation requires the full cycle of agency, with the awareness that I probably shouldn't be interrupted shortly after delegating. I delegated so I can have space from the task and so babysitting it really doesn't suit my needs. I want the task done, but some time in the future.

From my coworkers I want to be able to say: here's the ticket, you got this? And they take the ticket all the way to PR, interacting with clients, collecting more information, etc.

I do somewhat think an LLM could handle client comms for simple extra requirements gathering on already well defined tasks. But I wouldn't trust my business relationships to it, so I would never do that.

p1necone · 13h ago
> the duck can occasionally disagree

This has not been my experience. LLMs have definitely been helpful, but generally they either give you the right answer or invent something plausible sounding but incorrect.

If I tell it what I'm doing I always get breathless praise, never "that doesn't sound right, try this instead."

crazygringo · 13h ago
That's not my experience. I routinely get a polite "that might not be the optimal solution, have you considered..." when I'm asking whether I should do something X way with Y technology.

Of course, it has to be something the LLM actually has lots of training material on. It won't work with anything remotely cutting-edge, but of course that's not what LLMs are for.

But it's been incredibly helpful for me in figuring out the best, easiest, most idiomatic ways of using libraries or parts of libraries I'm not very familiar with.

Jarwain · 10h ago
I find it very much depends on the LLM you're using. Gemini feels more likely to push back than Claude 3.7. Haven't tried Claude 4 yet.
mbrameld · 12h ago
Ask it. Instead of just telling it what you're doing and expecting it to criticize that, ask it directly for criticism. Even better, tell it what you're doing, then tell it to ask you questions about what you're doing until it knows enough to recommend a better approach.
lupire · 12h ago
This is key. Humans each have a personality and some sense of mood. When you ask for help, you choose who to ask, and that person can sense your situation. An LLM has every personality and doesn't know your situation. You have to tell it which personality to use and what your situation is.
_tom_ · 17h ago
For me, it's a bit like pair programming. I have someone to discuss ideas with. Someone to review my code and suggest alternative approaches. Someone who uses different features than I do, so I learn from them.
traceroute66 · 16h ago
I guess if you enjoy programming with someone you can never really trust, then yeah, sure, it's "a bit like" pair programming.
mock-possum · 10h ago
Trust, but verify ;]
platevoltage · 16h ago
This is how I use it too. It's great at quickly answering questions. I find it particularly useful if I have to work with a language or framework that I'm not fully experienced in.
12_throw_away · 14h ago
> I find it particularly useful if I have to work with a language or framework that I'm not fully experienced in

Yep - my number 1 use case for LLMs is as a template and example generator. It actually seems like a fairly reasonable use for probabilistic text generation!

marcosdumay · 17h ago
LLMs will still be this way 10 years from now.

But IDK if somebody won't create something new that gets better. But there is no reason at all to extrapolate our current AIs into something that solves programming. Whatever constraints that new thing will have will be completely unrelated to the current ones.

smokel · 17h ago
Stating this without any arguments is not very convincing.

Perhaps you remember that language models were completely useless at coding some years ago, and now they can do quite a lot of things, even if they are not perfect. That is progress, and that does give reason to extrapolate.

Unless of course you mean something very special with "solving programming".

bigstrat2003 · 12h ago
> Perhaps you remember that language models were completely useless at coding some years ago, and now they can do quite a lot of things, even if they are not perfect.

IMO, they're still useless today, with the only progress being that they can produce a more convincing facade of usefulness. I wouldn't call that very meaningful progress.

wvenable · 8h ago
I don't know how someone can legitimately say that they're useless. Perfect, no. But useless, also no.
IshKebab · 2h ago
They are very clearly not useless. You haven't given them a fair shake.
drdeca · 9h ago
I’ve found them somewhat useful? Not for big things, and not for code for work.

But for small personal projects? Yes, helpful.

conradkay · 6h ago
It's funny how there's a decent % of people at both "LLMs are useless" and "LLMs 3-10x my productivity"
marcosdumay · 17h ago
Why state the same arguments everybody has been repeating for ages?

LLMs can only give you code that somebody has written before. This is inherent. This is useful for a bunch of stuff, but that bunch won't change if OpenAI decides to spend the GDP of Germany training one instead of the GDP of Costa Rica.

vidarh · 15h ago
> LLMs can only give you code that somebody has written before. This is inherent.

This is trivial to prove to be false.

Invent a programming language that does not exist. Describe its semantics to an LLM. Ask it to write a program to solve a problem in that language. It will not always work, but it will work often enough to demonstrate that they are very much capable of writing code that has never been written before.

The first time I tried this was with GPT3.5, and I had it write code in an unholy combination of Ruby and INTERCAL, and it had no problems doing that.

Similarly giving it a grammar of a hypothetical language, and asking it to generate valid text in a language that has not existed before also works reasonably well.

This notion that LLMs only spit out things that have been written before might have been reasonable to believe a few years ago, but it hasn't been a reasonable position to hold for a long time at this point.

shrewduser · 3h ago
This doesn't surprise me, i find LLM's are really good at interpolating and translating. so if i made up a language and gave it the rules and asked it to translate i wouldn't expect it to be bad at it.
vidarh · 31m ago
It shouldn't surprise anyone, but it is clear evidence against the claim I replied to, and clearly a lot of people still hold on to this irrational assumption that they can't produce anything new.
JoshCole · 15h ago
> LLMs can only give you code that somebody has written before.

This premise is false. It is fundamentally equivalent to the claim that a language model trained on the dataset ["ABA", "ABB"] would be unable to generate, given input "B", the string "BAB" or "BAA".

1718627440 · 14h ago
Isn't the claim, that it will never make up "C"?
JoshCole · 14h ago
They don't claim that. They say LLMs only generate text someone has written. Another way you could refute their premise would be by showing the existence of AI-created programs for which "someone" isn't a valid description of the writer (e.g., from evolutionary algorithms), then training a network on that data such that it can output it. That is just as trivial a way to prove that the premise is false.

Your claim here is slightly different.

You're claiming that if a token isn't supported, it can't be output [1]. But we can easily disprove this by adding minimal support for all tokens, making C appear in theory. Such support addition shows up all the time in AI literature [2].

[1]: https://en.wikipedia.org/wiki/Support_(mathematics)

[2]: In some regimes, like game theoretic learning, support is baked into the solving algorithms explicitly during the learning stage. In others, like reinforcement learning, its accomplished by making the policy a function of two objectives, one an exploration objective, another an exploitation objective. That existing cross pollination already occurs between LLMs in the pre-trained unsupervised regime and LLMs in the post-training fine-tuning via forms of reinforcement learning regime should cause someone to hesitate to claim that such support addition is unreasonable if they are versed in ML literature.

Edit:

Got downvoted, so I figure maybe people don't understand. Here is the simple counterexample. Consider an evaluator that gives rewards: F("AAC") = 1, all other inputs = 0. Consider a tokenization that defines "A", "B", "C" as tokens, but a training dataset from which the letter C is excluded but the item "AAA" is present.

After training "AAA" exists in the output space of the language model, but "AAC" does not. Without support, without exploration, if you train the language model against the reinforcement learning reward model of F, you might get no ability to output "C", but with support, the sequence "AAC" can be generated and give a reward. Now actually do this. You get a new language model. Since "AAC" was rewarded, it is now a thing within the space of the LLM outputs. Yet it doesn't appear in the training dataset and there are many reward models F for which no person will ever have had to output the string "AAC" in order for the reward model to give a reward for it.

It follows that "C" can appear even though "C" does not appear in the training data.

1718627440 · 5h ago
But doesn't a reward for "**C" mean that "C" is in the training data?

I am not sure if that is an accurate model, but if you think of it as a vector space: sure, you can generate a lot of vectors from some set of basis vectors, but you can never generate a new basis vector from the others, since they are linearly independent, so there are a bunch of new vectors you can never generate.

gitaarik · 13h ago
I think it's not just token support, it's also having an understanding of certain concepts that allows you to arrive at new points like C, D, E, etc. But LLMs don't have an understanding of things; they are statistical models that predict what is statistically most likely to follow the input that you give them. But that will always be based on already existing data that is fed into the model. It can produce "new" stuff only by combining the "old" stuff in new ways, but it can't "think" of something entirely conceptually new, because it doesn't really "think".
JoshCole · 11h ago
> it can't "think" of something entirety conceptionally new, because it doesn't really "think".

Hierarchical optimization (fast global + slow local) is a precise, implementable notion of "thinking." Whenever I've seen this pattern implemented, humans, without being told to do so by others in some forced way, seem to converge on the use of verb think to describe the operation. I think you need to blacklist the term think and avoid using it altogether if you want to think clearly about this subject, because you are allowing confusion in your use of language to come between you and understanding the mathematical objects that are under discussion.

> It can produce "new" stuff only by combining the "old" stuff in new ways,

False premise; previously debunked. Here is a refutation for you anyway, but made more extreme. Instead of modeling the language task using a pre-training predictive dataset objective, only train on a provided reward model. Such a setup never technically shows "old" stuff to the AI, because the AI is never shown stuff explicitly. It just always generates new things and then the reward model judges how well it did. Clearly, the fact that it can do generation while knowing nothing, shows that your claim that it can never generate something new -- by definition everything would be new at this point -- is clearly false. Notice that as it continually generates new things and the judgements occur, it will learn concepts.

> But LLM's don't have an understanding of things, they are statistical models that predict what statistically is most likely following the input that you give it.

Try out Jaynes' Probability Theory: The Logic of Science. Within it, the various underpinning assumptions that lead to probability theory are shown to be very reasonable and normal and obviously good. Stuff like: represent plausibility with real numbers, keep rankings consistent and transitive, reduce to Boolean logic at certainty, and update so you never accept a Dutch-book sure-loss -- which together force the ordinary sum and product rules of probability. Then notice that statistics is, in a certain sense, just what happens when you apply the rules of probability.

> also having a understanding of certain concepts that allows you to arrive at new points like C, D, E, etc. But LLM's don't have an understanding of things

This is also false. Look into the line of research that tends to go by the name of Circuits. It's been found that models have spaces within their weights that do correspond to concepts. Probably you don't understand what concepts are -- that abstractions and concepts are basically forms of compression that let you treat different things as the same thing -- so a different way to arrive at knowing that this would be true is to consider a model with fewer parameters than there are items in the dataset and notice that the model must successfully compress the dataset in order to complete its objective.

gitaarik · 5h ago
Yes ok, it can generate new stuff, but it's dependent on human-curated reward models to score the output and make it usable. So it still depends on human thinking; its own "thinking" is not sufficient. And there won't be a point when human-curated reward models are not needed anymore.

LLMs will make a lot of things easier for humans, because most of the thinking the humans do has been automated into the LLM. But ultimately you run into a limit where the human has to take over.

vidarh · 27m ago
> And there won't be a point when human curated reward models are not needed anymore.

This doesn't follow at all. There's no reason why a model can not be made to produce reward models.

gitaarik · 3h ago
So to clarify, it could potentially come up with (something close to) C, but if you want it to get to D, E, F etc, it will become less and less accurate with each consecutive step, because it lacks the human-curated reward models up to that point. Only if you create new reward models for C will the output for D improve, and so on.
Epa095 · 16h ago
First, how much of coding is really never done before?

And secondly, what you say is false (at least if taken literally). I can create a new programming language, give the definition of it in the prompt, ask it to code something in my language, and expect something out. It might even work.

vidarh · 15h ago
> I can create a new programming language, give the definition of it in the prompt, ask it to code something in my language, and expect something out. It might even work.

I literally just pointed out the same thing without having seen your comment.

Second this. I've done this several times, and it can handle it well. Already GPT3.5 could easily reason about hypothetical languages given a grammar or a loose description.

I find it absolutely bizarre that people still hold on to this notion that these models can't do anything new, because it seems implausible that they have actually tried, given how well it works.

lupire · 12h ago
Second, how much of commenting is really never done before?
apwell23 · 8h ago
good question. why isn't the gp using llm to generate comments then.
bawolff · 10h ago
> First, how much of coding is really never done before?

Lots of programming doesn't have one specific right answer, but a bunch of possible right answers with different trade-offs. The programmer's job isn't necessarily just to get working code. I don't think we are at the point where LLMs can see the forest for the trees, so to speak.

apwell23 · 16h ago
> how much of coding is really never done before?

A lot, because we use libraries for the 'done frequently before' code. I don't generate a database driver for my webapp with an LLM.

Epa095 · 16h ago
We use libraries for SOME of the 'done frequently' code.

But how much of enterprise programming is 'get some data from a database, show it on a Web page (or gui), store some data in the database', with variants?

It makes sense that we have libraries for abstracting away some common things. But it also makes sense that we can't abstract away everything we do multiple times, because at some point it becomes so abstract that it's easier to write it yourself than to try to configure some library. That does not mean it's not a variant of something done before.

sunrunner · 15h ago
> we can't abstract away everything we do multiple times

I think there's a fundamental truth about any code that's written which is that it exists on some level of specificity, or to put it in other words, a set of decisions have been made about _how_ something should work (in the space of what _could_ work) while some decisions have been left open to the user.

Every library that is used is essentially this. Database driver? Underlying I/O decisions are probably abstracted away already (think Netty vs Mina), and decisions on how to manage connections, protocol handling, bind variables, etc. are made by the library, while questions remain for things like which specific tables and columns should be referenced. This makes the library reusable for this task as long as you're fine with the underlying decisions.

Once you get to the question of _which specific data is shown on a page_ the decisions are closer to the human side of how we've arbitrarily chosen to organise things in this specific thousandth-iteration of an e-commerce application.

The devil is in the details (even if you know the insides of the devil aren't really any different).

lupire · 12h ago
Cue Haskell gang "Design patterns are workarounds for weaknesses in your language".
apwell23 · 16h ago
> it's easier to write it yourself than to try to configure some library

yeah unfortunately LLM will make this worse. Why abstract when you can generate.

I am already seeing this a lot at work :(

nahbruheem · 16h ago
Generating unseen code is not hard.

Set rules on what’s valid, which most languages already do; omit generation of known code; generate everything else

The computer does the work, programmers don’t have to think it up.

A typed-language example to explain: generate valid func sigs.

func f(a, b int) int

If that's our only func sig in our starting set, then the point becomes obvious.

Relative to our tiny starter set, func f(a, b, c int) int is novel.
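
As a rough sketch of the "enumerate valid signatures, skip the known ones" idea (a toy construction, nothing rigorous):

    # Toy sketch: enumerate candidate signatures over a tiny type set and keep
    # only the ones not already in our "known" starting set.
    from itertools import product

    TYPES = ["int", "string"]
    KNOWN = {("int", "int")}  # parameter lists we have already seen

    def novel_signatures(max_params=3):
        for n in range(1, max_params + 1):
            for params in product(TYPES, repeat=n):
                if params not in KNOWN:
                    yield f"func f({', '.join(params)}) int"

    for sig in novel_signatures():
        print(sig)  # e.g. "func f(int, int, int) int" is novel relative to the set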

This Redis post is about fixing a prior decision of a random programmer. A linguistic decision.

That's why LLMs seem worse than programmers: we make linguistic decisions that fit social idioms.

If we just want to generate all the code never before seen by this model, we don't need a programmer. If we need to abide by the laws of a flexible, language-like nature, that's what a programmer is for: composing not just code but compliance with ground truth.

That antirez is good at Redis is a bias since he has context unseen by the LLM. Curious how well antirez would do with an entirely machine generated Redis-clone that was merely guided by experts. Would his intuition for Redis’ implementation be useful to a completely unknown implementation?

He’d make a lot of newb errors and need mentorship, I’m guessing.

dttze · 15h ago
Incoherent rambling.

No comments yet

nprateem · 16h ago
I think we're hoping for more than the 'infinite monkeys bashing out semantically correct code' approach.
nahbruheem · 15h ago
Ok, define what that means and make it. Then as soon as you do, you realize you run into Gödel: your machine doesn't solve problems related to its own existence and needs outside help. So you need to generate that yet-unseen solution which lacks the context to understand itself... repeat, and see that it's exactly generating one yet-unseen layer of logic after another.

Read the article; his younger self failed to see logic needed now. Add that onion peel. No such thing as perfect clairvoyance.

Even Yann LeCun’s energy based models driving robots have the same experience problem.

Make a computer that can observe all of the past and future.

Without perfect knowledge our robots will fail to predict some composition of space time before they can adapt.

So there’s no probe we can launch that’s forever and generally able to survive with our best guess when launched.

More people need to study physical experiments and physics and not the semantic rigor of academia. No matter how many ideas we imagine there is no violating physics.

Pop culture seems to have people feeling starship Enterprise is just about to launch from dry dock.

rhubarbtree · 16h ago
That’s not true. LLMs are great translators, they can translate ideas to code. And that doesn’t mean it has to be recalling previously seen text.
Retric · 17h ago
Progress, sure, but the rate they've improved hasn't been particularly fast recently.

Programming has become vastly more efficient in terms of programmer effort over decades, but making some aspects of the job more efficient just means all your effort is spent on what didn't improve.

lexandstuff · 13h ago
People seem to have forgotten how good the 2023 GPT-4 really was at coding tasks.
mirsadm · 16h ago
The latest batch of LLMs has been getting worse in my opinion. Claude in particular seems to be going backwards with every release. The verbosity of the answers is infuriating. You ask it a simple question and it starts by inventing the universe, poorly
apwell23 · 16h ago
> Perhaps you remember that language models were completely useless at coding some years ago

no i don't remember that. They are doing similar things now that they did 3 yrs ago. They were still a decent rubber duck 3 yrs ago.

vidarh · 15h ago
And 6 years ago GPT2 had just been released. You're being obtuse by interpreting "some years" as specifically 3.
johnnyanmac · 12h ago
>I think the big question everyone wants to skip right to and past this conversation is, will this continue to be true 2 years from now?

For me, it's less "conversation to be skipped" and more about "can we even get to 2 years from now"? There's so much instability right now that it's hard to say what anything will look like in 6 months.

Bukhmanizer · 16h ago
There are a couple people I work with who clearly don’t have a good understanding of software engineering. They aren’t bad to work with and are in fact great at collaborating and documenting their work, but don’t seem to have the ability to really trace through code and logically understand how it works.

Before LLMs it was mostly fine because they just didn’t do that kind of work. But now it’s like a very subtle chaos monkey has been unleashed. I’ve asked on some PRs “why is this like this? What is it doing?” And the answer is “ I don’t know, ChatGPT told me I should do it.”

The issue is that it throws basically all their code under suspicion. Some of it works, some of it doesn’t make sense, and some of it is actively harmful. But because the LLMs are so good at giving plausible output I can’t just glance at the code and see that it’s nonsense.

And this would be fine if we were working on like a crud app where you can tell what is working and broken immediately, but we are working on scientific software. You can completely mess up the results of a study and not know it if you don’t understand the code.

protocolture · 10h ago
>And the answer is “ I don’t know, ChatGPT told me I should do it.”

This weirds me out. Like I use LLMs A LOT but I always sanity check everything, so I can own the result. It's not the use of the LLM that gets me, it's trying to shift accountability to a tool.

jajko · 15h ago
Sounds almost like you definitely shouldn't use LLMs or those juniors for such important work.

Is it just me or are we heading into a period with an explosion of software being produced, but also a massive drop in its quality? Not uniformly, just a bit of chaotic spread.

Bukhmanizer · 12h ago
> llms nor those juniors for such an important work.

Yeah we shouldn’t and I limit my usage to stuff that is easily verifiable.

But there’s no guardrails on this stuff, and one thing that’s not well considered is how these things which make us more powerful and productive can be destructive in the hands of well intentioned people.

palmotea · 14h ago
> Is it just me or are we heading into a period with an explosion of software being produced, but also a massive drop in its quality? Not uniformly, just a bit of chaotic spread.

I think we are, especially with executives mandating the use of LLMs and expecting it to massively reduce costs and increase output.

For the most part they don't actually seem to care that much about software quality, and tend to push to decrease quality at every opportunity.

jrochkind1 · 13h ago
Which is frightening, because it's not like our industry is known for producing really high quality code at the starting point before LLM authored code.
akshay_trikha · 9h ago
I've had this same thought that it would be nice to have an AI rubber ducky to bounce ideas off of while pair programming (so that you don't sound dumb to your coworkers & waste their time).

This is my first comment so I'm not sure how to do this but I made a BYO-API key VSCode extension that uses the OpenAI realtime API so you can have interactive voice conversations with a rubber ducky. I've been meaning to create a Show HN post about it but your comment got me excited!

In the future I want to build features to help people communicate their bugs / what strategies they've tried to fix them. If I can pull it off it would be cool if the AI ducky had a cursor that it could point and navigate to stuff as well.

Please let me know if you find it useful https://akshaytrikha.github.io/deep-learning/2025/05/23/duck...

AdieuToLogic · 8h ago
> I've had this same thought that it would be nice to have an AI rubber ducky to bounce ideas off of while pair programming (so that you don't sound dumb to your coworkers & waste their time).

I humbly suggest a more immediate concern to rectify is identifying how to improve the work environment such that the fear one might "sound dumb to your coworkers & waste their time" does not exist.

akadeb · 9h ago
I like the sound of that! I think you're gonna like what we are building here https://github.com/akdeb/ElatoAI

It's as if the rubber duck was actually on the desk while you're programming, and if we have an MCP that can get live access to code, it could give you realtime advice.

akshay_trikha · 8h ago
Wow, that's really cool, thanks for open sourcing! I might dig into your MCP; I've been meaning to learn how to do that.

I genuinely think this could be great for toys that kids grow up with, i.e. the toy could adjust the way it talks depending on the kid's age and remember key moments in their life - could be pretty magical for a kid.

gerad · 17h ago
It's like chess. Humans are better for now, they won't be forever, but humans plus software is going to be better than either alone for a long time.
seadan83 · 15h ago
> It's like chess. Humans are better for now, they won't be forever

This is not an obviously true statement. There needs to be proof that there are no limiting factors that are computationally impossible to overcome. It's like watching a growing child, grow from 3 feet to 4 feet, and then saying "soon, this child will be the tallest person alive."

abalashov · 8m ago
With these "AGI by 2027" claims, it's not enough to say that the child will be the tallest person alive. They are saying the child will be the tallest structure on the planet.
overfeed · 15h ago
One of my favourite XKCD comics is about extrapolation https://xkcd.com/605/
kelseydh · 16h ago
The time where humans + computers in chess were better than just computers was not a long time. That era ended well over a decade ago. Might have been true for only 3-5 years.
qsort · 16h ago
Unrelated to the broader discussion, but that's an artifact of the time control. Humans add nothing to Stockfish in a 90+30 game, but correspondence chess, for instance, is played with modern engines and still has competitive interest.
dwohnitmok · 15h ago
It is not clear to me whether human input really matters in correspondence chess at this point either.

I mused about this several years ago and still haven't really gotten a clear answer one way or the other.

https://news.ycombinator.com/item?id=33022581

LandR · 15h ago
What do you mean? Chess engines are incredibly far ahead of humans right now.

Even a moderately powered machine running stockfish will destroy human super gms.

Sorry, after reading replies to this post i think I've misunderstood what you meant :)

hollerith · 15h ago
I think he knows that. There was a period from the early 1950s (when people first started writing chess-playing software) to 1997 when humans were better at chess than computers were, and I think he is saying that we are still in the analogous period for the skill of programming.

But he should've known that people would jump at the opportunity to contradict him and should've written his comment so as not to admit such an easily-contradictable interpretation.

LandR · 15h ago
Yes, amended my post. I understand what he was saying now. Thanks.

Wasn't trying to just be contradictory or arsey

seadan83 · 15h ago
The phrasing was perhaps a bit odd. For a while, humans were better at chess, until they weren't. OP is hypothesizing it will be a similar situation for programming. To boot, it was hard to believe for a long time that computers would ever be better than humans at chess.
apwell23 · 16h ago
its not like chess
quantadev · 10h ago
Your information is quite badly out of date. AI can now beat humans at not only chess but 99% of all intellectual exercises.
vFunct · 16h ago
No guarantee that will happen. LLMs are still statistically based. They're not going to give you edgier ideas, like filling a glass of wine to the rim.

Use them for the 90% of your repetitive uncreative work. The last 10% is up to you.

skydhash · 12h ago
The pain of that 90% work is how you get libraries and frameworks. Imagine having many different implementations of sorting algorithms inside your codebase.
vFunct · 11h ago
OK now we have to spend time figuring out the framework.

It's why people say just write plain Javascript, for example.

skydhash · 36m ago
Is it? Or is it because the frameworks are not suitable for the project?
bandoti · 12h ago
My take is that AI is very one-dimensional (within its many dimensions). For instance, I might close my eyes and imagine an image of a tree structure, or a hash table, or a list-of-trees, or whatever else; then I might imagine grabbing and moving the pieces around, expanding or compressing them like a magician; my brain connects sight and sound, or texture, to an algorithm. However people think about problems is grounded in how we perceive the world in its infinite complexity.

Another example: saying out loud the colors red, blue, yellow, purple, orange, green—each color creates a feeling that goes beyond its physical properties into the emotions and experiences. AI image-generation might know the binary arrangement of an RGBA image but actually, it has NO IDEA what it is to experience colour. No idea how to use the experience of colour to teach a peer of an algorithm. It regurgitates a binary representation.

At some point we’ll get there though—no doubt. It would be foolish to say never! Those who want to get there before everyone else should probably focus on the organoids—because most powerful things come from some Faustian monstrosity.

eddd-ddde · 12h ago
This is really funny to read as someone who CANNOT imagine anything more complex than the most simple shape like a circle.

Do you actually see a tree with nodes that you can rearrange and have the nodes retain their contents and such?

bandoti · 12h ago
Haha—yeah, for me the approach is always visual. I have to draw a picture to really wrap my brain around things! Other people I’d imagine have their own human, non-AI way to organize a problem space. :)

I have been drawing all my life and studied traditional animation though, so it’s probably a little bit of nature and nurture.

joshdavham · 16h ago
> I actually think a fair amount of value from LLM assistants to me is having a reasonably intelligent rubber duck to talk to.

I wonder if the term "rubber duck debugging" will still be used much longer into the future.

layer8 · 14h ago
As long as it remains in the training material, it will be used. ;)
Waterluvian · 13h ago
Same. Just today I used it to explore how a REST api should behave in a specific edge case. It gave lots of confident opinions on options. These were full of contradictions and references to earlier paragraphs that didn’t exist (like an option 3 that never manifested). But just by reading it, I rubber ducked the solution, which wasn’t any of what it was suggesting.
ortusdux · 16h ago
> I think the big question everyone wants to skip right to and past this conversation is, will this continue to be true 2 years from now? I don’t know how to answer that question.

I still think about Tom Scott's 'where are we on the AI curve' video from a few years back. https://www.youtube.com/watch?v=jPhJbKBuNnA

travisgriggs · 3h ago
LLMs are a passel of eager-to-please, know-it-all interns that you can command at will without any moral compunctions.

They drive you nuts trying to communicate with them what you actually want them to do. They have a vast array of facts at immediate recall. They’ll err in their need to produce and please. They do the dumbest things sometimes. And surprise you at other times. You’ll throw vast amounts of their work away or have to fix it. They’re (relatively) cheap. So as an army of monkeys, if you keep herding them, you can get some code that actually tells a story. Mostly.

empath75 · 15h ago
Just the exercise of putting my question in a way that the LLM could even theoretically provide a useful response is enough for me to figure out how to solve the problem a good percentage of the time.
cortesoft · 17h ago
Currently, I find AI to be a really good autocomplete
jdiff · 16h ago
The crazy thing is that people expect a model designed to predict sequences of tokens from a stem, no matter how advanced the model, to be much more than just "really good autocomplete."

It is impressive and very unintuitive just how far that can get you, but it's not reductive to use that label. That's what it is on a fundamental level, and aligning your usage with that will allow it to be more effective.

lavelganzu · 14h ago
There's a plausible argument for it, so it's not a crazy thing. You as a human being can also predict likely completions of partial sentences, or likely lines of code given surrounding lines of code, or similar tasks. You do this by having some understanding of what the words mean and what the purpose of the sentence/code is likely to be. Your understanding is encoded in connections between neurons.

So the argument goes: LLMs were trained to predict the next token, and the most general solution to do this successfully is by encoding real understanding of the semantics.

vidarh · 15h ago
It's trivial to demonstrate that it takes only a tiny LLM plus a loop to have a Turing complete system. The extension of that is that it is utterly crazy to think that the fact it is "a model designed to predict sequences of tokens" puts much of a limitation on what an LLM can achieve - any Turing complete system can by definition simulate any other. To the extent LLMs are limited, they are limited by training and compute.

But these endless claims that the fact they're "just" predicting tokens means something about their computational power are based on flawed assumptions.

msgodel · 26m ago
Language models with a loop absolutely aren't Turing complete. Assuming the model can even follow your instructions, the output is probabilistic, so in the limit you can guarantee failure. In reality, though, there are lots of instructions LLMs fail to follow. You don't notice it as much when you're using them normally, but if you want to talk about computation you'll run into trivial failures all the time.

The last time I had this discussion with people I pointed out how LLMs consistently and completely fail at applying grammar production rules (obviously you tell them to apply to words and not single letters so you don't fight with the embedding.)

LLMs do some amazing stuff but at the end of the day:

1) They're just language models, while many things can be described with languages there are some things that idea doesn't capture. Namely languages that aren't modeled, which is the whole point of a Turing machine.

2) They're not human, and the value is always going to come from human socialization.

suddenlybananas · 12h ago
The fact they're Turing complete isn't really getting at the heart of the problem. Python is Turing complete and calling python "intelligent" would be a category error.
vidarh · 34m ago
It is getting to the heart of the problem when the claim made is that "no matter how advanced the model" they can't be 'much more than just "really good autocomplete."'.

Given that they are Turing complete when you put a loop around them, that claim is objectively false.

fl7305 · 16h ago
> "The crazy thing is that people think that a model designed to"

It's even crazier that some people believe that humans "evolved" intelligence just by nature selecting the genes which were best at propagating.

Clearly, human intelligence is the product of a higher being designing it.

/s

fhd2 · 3h ago
I would consider evolution a form of intelligence, even though I wouldn't consider nature a being.

There's a branch of AI research I was briefly working in 15 years ago, based on that premise: Genetic algorithms/programming.

So I'd argue humans were (and are continuously being) designed, in a way.

dwaltrip · 13h ago
It’s reductive and misleading because autocomplete, as it’s commonly known, existed for many years before generative AI, and is very different and quite dumber than LLMs.
sunrunner · 15h ago
Earlier this week ChatGPT found (self-conscious as I am of the personification of this phrasing) a place where I'd accidentally overloaded a member function by unintentionally giving it the name of something from a parent class, preventing the parent class function from ever being run and causing <bug>.

After walking through a short debugging session where it tried the four things I'd already thought of and eventually suggested (assertively but correctly) where the problem was, I had a resolution to my problem.

There are a lot of questions I have around how this kind of mistake could simply just be avoided at a language level (parent function accessibility modifiers, enforcing an override specifier, not supporting this kind of mistake-prone structure in the first place, and so on...). But it did get me unstuck, so in this instance it was a decent, if probabilistic, rubber duck.
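
For what it's worth, the shape of the bug was roughly this (a contrived Python sketch, not the actual code):

    # A subclass method accidentally reuses a parent-class name, so the
    # parent's version never runs and its setup silently never happens.
    class Base:
        def reset(self):
            self._cache = {}  # important setup the subclass ends up skipping

    class Widget(Base):
        def reset(self):      # unintentionally shadows Base.reset
            self.value = 0    # ...and never calls super().reset()

    w = Widget()
    w.reset()
    print(hasattr(w, "_cache"))  # False: Base.reset never ran, hence the bug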

amelius · 12h ago
It's also quite good at formulating regular expressions based on one or two example strings.
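
For example (a made-up case): given "order-00123" and "order-98765" as samples, it will typically come back with something like:

    import re

    pattern = re.compile(r"order-\d{5}")
    print(bool(pattern.fullmatch("order-00123")))  # True
    print(bool(pattern.fullmatch("order-abc")))    # False
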
bossyTeacher · 16h ago
I think of them as a highly sycophantic, LSD-minded 2nd-year student who has done some programming.
mock-possum · 10h ago
Yeah in my experience as long as you don’t stray too far off the beaten path, LLMs are great at just parroting conventional wisdom for how to implement things - but the second you get to something more complicated - or especially tricky bug fixing that requires expensive debuggery - forget about it, they do more harm than good. Breaking down complex tasks into bite sized pieces you can reasonably expect the robot to perform is part of the art of the LLM.
koonsolo · 15h ago
It seems to me we're at the flat side of the curve again. I haven't seen much real progress in the last year.

It's ignorant to think machines will not catch up to our intelligence at some point, but for now, it's clearly not.

I think there needs to be some kind of revolutionary breakthrough again to reach the next stage.

If I were to guess, it needs to be in the learning/back-propagation stage. LLMs are very rigid, and once they go wrong, you can't really get them out of it. A junior developer, for example, could gain a new insight. LLMs, not so much.

UncleOxidant · 17h ago
There's some whistling past the graveyard in these comments. "You still need humans for the social element...", "LLMs are bad at debugging", "LLMs lead you astray". And yeah, there's lots of truth in those assertions, but since I started playing with LLMs to generate code a couple of years ago they've made huge strides. I suspect that over the next couple of years the improvements won't be quite as large (Pareto Principle), but I do expect we'll still see some improvement.

Was on r/fpga recently and mentioned that I had had a lot of success recently in getting LLMs to code up first-cut testbenches that allow you to simulate your FPGA/HDL design a lot quicker than if you were to write those testbenches yourself and my comment was met with lots of derision. But they hadn't even given it a try to form their conclusion that it just couldn't work.

xhevahir · 15h ago
This attitude is depressingly common in lots of professional, white-collar industries I'm afraid. I just came from the /r/law subreddit and was amazed at the kneejerk dismissal there of Dario Amodei's recent comments about legal work, and of those commenters who took them seriously. It's probably as much a coping mechanism as it is complacency, but, either way, it bodes very poorly for our future efforts at mitigating whatever economic and social upheaval is coming.
onion2k · 17m ago
In most professional industries getting to the right answer is only half the problem. You also need to be able to demonstrate why that is the right answer. Your answer has to stand up to criticism. If your answer is essentially the output of a very clever random number generator you can't ever do that. Even if an LLM could output an absolutely perfect legal argument that matched what a supreme court judge would argue every time, that still wouldn't be good enough. You'd still need a person there to be accountable for making the argument and to defend the argument.

Software isn't like this. No one cares why you wrote the code in your PR. They only care about whether it's right.

This is why LLMs could be useful in one industry and a lot less useful in another.

garciasn · 14h ago
This is the response to most new technologies; folks simply don't want to accept the future before the ramifications truly hit. If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest for the trees because their heads are buried in the sand.

LLMs for coding are not even close to perfect yet, but the saturation curves are not flattening out; not by a long shot. We are living in a moment and we need to come to terms with it as the work continues to develop, and we need to adapt quickly in order to better understand what our place will become as this nascent tech continues its meteoric trajectory toward an entirely new world.

eikenberry · 12h ago
I don't think it is only (or even mostly) not wanting to accept it, I think it is at least equal measure just plain skepticism. We've seen all sorts of wild statements about how much something is going to revolutionize X and then turns out to be nothing. Most people disbelieve these sorts of claims until they see real evidence for themselves... and that is a good default position.
chii · 7h ago
Hedging against the possibility that they get displaced economically, before it happens, is always prudent.

If the future didn't turn out to be revolutionary, at worst you have done some "unnecessary" work, but you might have acquired some skills or value at least. In the case of most well-off programmers, I suspect buying assets/investments which can afford them at least a reasonable lifestyle is likely prudent too.

So the default position of being stationary, and assuming the world continues the way it has been, is not such a good idea. One should always assume the worst possible outcome, and plan for that.

0points · 3h ago
> One should always assume the worst possible outcome, and plan for that.

Maybe if you work e-commerce or in the military.

But how do you even translate this line of thought for today?

Are your EMP defenses up to speed?

Are you studying russian and chinese while selling kidneys in order to afford your retirement home on Mars?

My point being, you can never plan for every worst outcome. In reality you would have a secondary data center, backups and a working recovery routine.

None of which matters if you use autocomplete or not.

0points · 3h ago
> If technology folk cannot see the INCREDIBLE LEAP FORWARD made by LLMs since ChatGPT came on the market, they're not seeing the forest through the trees because their heads are buried in the sand.

Look, we see the forest. We are just not impressed by it.

Having unlimited chaos monkeys at will is not revolutionizing anything.

const_cast · 10h ago
Lawyers don't even use version control software a lot of the time. They burn hundreds of paralegal hours reconciling revisions, a task that could be made 100x faster and easier with Git.

There's no guarantee a technology will take off, even if it's really, really good. Because we don't decide if that tech takes off - the lawyers do. And they might not care, or they might decide billing more hours is better, actually.

heartbreak · 9h ago
> billing more hours is better, actually

The guiding principle of biglaw.

Attorneys have the bar to protect them from technology they don’t want. They’ve done it many times before, and they’ll do it again. They are starting to entertain LLMs, but not in a way that would affect their billable hours.

dgfitz · 10h ago
“First thing we do, let’s kill all the lawyers”

History majors everywhere are weeping.

ben-schaaf · 10h ago
Friendly reminder that people like you were saying the exact same thing about metaverse, VR, web3, crypto, etc.
drodgers · 8h ago
Yes. If you judge only from the hype, then you can't distinguish LLMs from crypto, or Nuclear Weapons from Nuclear Automobiles.

If you always say that every new fad is just hype, then you'll even be right 99.9% of the time. But if you want to be more valuable than a rock (https://www.astralcodexten.com/p/heuristics-that-almost-alwa...), then you need to dig into the object-level facts and form an opinion.

In my opinion, AI has a much higher likelihood of changing everything very quickly than crypto or similar technologies ever did.

abootstrapper · 8h ago
I didn’t buy the hype of any of those things, but I believe AI is going to change everything, much like the introduction of the internet. People are dismissing AI because its code is not bug free, completely dismissing the fact that it generates PRs in minutes from a poorly written text prompt. As if that’s not impressive. In fact, if you put a human engineer on the receiving end of the same prompt, with the same context as what we’re sending to the LLM, I doubt they could produce code half as good in 10x the time. It’s science fiction coming true, and it’s only going to continue to improve.
ben-schaaf · 7h ago
Again, there were people just as sure about crypto as you are now about AI. They dismissed criticism because they thought the technology was impressive and revolutionary. That it was science fiction come true and only going to continue to improve. It's the exact same hype-driven rhetoric.

If you want to convince skeptics talk about examples, vibe code a successful business, show off your success with using AI. Telling people it's the future and if you disagree you have your head in the sand, is wholly unconvincing.

limflick · 10m ago
You don't have to be able to vibe code an entire business from scratch to know that the technology behind AI is significantly more impressive than VR, crypto, web3, etc. What the free version of ChatGPT can do right now, not just coding, would've been unimaginable to most people just 5 years ago.

Don't let people and companies using AI lazily to put out low-quality content blind you to its potential, or to the reality of what it can do right now. Look at Google's VO3; most people in the world right now won't be able to tell you that it's AI generated and not real.

AYBABTME · 10h ago
The value of those was always far-fetched, and required a critical mass adopting them before becoming potentially useful. But the value of LLMs is much more immediate and doesn't require any change in the rest of the world. If you use them and are amplified by them, you are... simply better off.
dgfitz · 10h ago
In my small-minded opinion, llms are the better version of code-completion. Search and time-savings on an accelerated course.

They can’t write me a safety-critical video player meeting the spec with full test coverage using a proprietary signal that my customer would accept.

ben-schaaf · 9h ago
Frankly I disagree that LLMs value is immediate. What I do see is a whole lot of damage it's causing, just like the hype cycles before it. It's fine for us to disagree on this, but to say I'm burying my head in the sand not wanting to accept "the future" is exactly the same hype-driven bullshit the crypto crowd was pushing.
fumeux_fume · 6h ago
Ah yes, please enjoy living in your moment and anticipating your entirely new world. I also hear all cars will be driving themselves soon and Jesus is coming back any day now.
refulgentis · 6h ago
I found it mildly amusing to contrast the puerile dismissiveness with your sole submission to this site: UK org's Red List of Endangered & Extinct crafts.
bgwalter · 14h ago
Adapt to your manager at bigcorp who is hyping the tech because it gives him something to do? No open source project is using the useless LLM shackles.
xandrius · 14h ago
As if you'd know if they did.
jdiff · 12h ago
Why would we not? If they were so effective, their effectiveness would be apparent, inarguable, and those making use of it would advertise it as a demonstration of just that. Even if there were some sort of social stigma against it, AI has enough proponents to produce copious amounts of counterarguments through evidence all on their own.

Instead, we have a tiny handful of one-off events that were laboriously tuned and tweaked and massaged over extended periods of time, and a flood of slop in the form of broken patches, bloated and misleading issues, and nonsense bug bounty attempts.

xandrius · 11h ago
I think the main reason might be that when the output is good the developer congratulates themselves, and when it's bad they make a post or comment about how bad AI is.

Then the people who congratulate the AI for helping get yelled at by the other category.

jdiff · 10h ago
As long as the AI people stay in their lane and work on their own projects, they're not getting yelled at. This is ignoring that AI has enough proponents to have enough projects of significant size. And even if they're getting shouted at from across the fence, again, AI has enough proponents who would brave getting yelled at.

We'd still have more than tortured, isolated, one-offs. We should have at least one well-known codebase maintained through the power of Silicon Valley's top silicon-based minds.

spamizbad · 13h ago
I think it's pretty reasonable to take a CEO's - any CEO in any industry - statements with a grain of salt. They are under tremendous pressure to paint the most rosy picture possible of their future. They actually need you to "believe" just as much as their team needs to deliver.
sanderjd · 9h ago
Isn't this also kind of just ... a reddit thing?
geros · 2h ago
"It is difficult to get a man to understand something when his salary depends upon his not understanding it." - Upton Sinclair
golergka · 12h ago
Lawyers say those things and then one law firm after another is frantically looking for a contractor to overpay them to install local RAG and chatbot combo.
layer8 · 14h ago
Programmers derided programming languages (too inefficient, too inflexible, too dumbing-down) when assembly was still the default. That phenomenon is at the same time entirely to be expected but also says little about the actual qualities of the new technology.
ch4s3 · 16h ago
It seems like LLMs made really big strides for a while but don't seem to be getting better recently, and in some ways recent models feel a bit worse. I'm seeing some good results generating test code, and some really bad results when people go too far with LLM use on new feature work. Based on what I've seen, it seems like spinning up new projects and very basic features for web apps works really well, but that doesn't seem to generalize to refactoring or adding new features to big/old code bases.

I've seen Claude and ChatGPT happily hallucinate whole APIs for D3 on multiple occasions, which should be really well represented in the training sets.

soerxpso · 14h ago
> hallucinate whole APIs for D3 on multiple occasions, which should be really well represented in the training sets

With many existing systems, you can pull documentation into context pretty quickly to prevent the hallucination of APIs. In the near future it's obvious how that could be done automatically. I put my engine on the ground, ran it and it didn't even go anywhere; Ford will never beat horses.

prisenco · 13h ago
It's true that manually constraining an LLM with contextual data increases its performance on that data (and reduces performance elsewhere), but that conflicts with the promise of AI as an everything machine. We were promised an everything machine, but if we have to not only provide it the proper context but already know what constitutes the proper context, then it is not in any way an everything machine.

Which means it's back to being a very useful tool, but not the earth-shattering disruptor we hoped (or worried) it would be.

roywiggins · 8h ago
Depends on how good they get at realizing they need more context and tool use to look it up for you.
prisenco · 7h ago
How would they reliably recognize the context needed without the necessary context?
munksbeer · 12h ago
>Which means it's back to being a very useful tool, but not the earth-shattering disruptor we hoped (or worried) it would be.

Yet?

prisenco · 12h ago
That could require another breakthrough. Or ten more.

Fun to consider but that much uncertainty isn't worth much.

oconnor663 · 4h ago
> don't seem to be getting better recently

o3 came out just one month ago. Have you been using it? Subjectively, the gap between o3 and everything before it feels like the biggest gap I've seen since ChatGPT originally came out.

empath75 · 15h ago
the LLM's themselves are making marginal gains, but the tools for using LLMs productively are getting so much better.
dinfinity · 14h ago
This. MCP/tool usage in agentic mode is insanely powerful. Let the agent ingest a Gitlab issue, tell it how it can run commands, tests etc. in the local environment and half of the time it can just iterate towards a solution all by itself (but watching and intervening when it starts going the wrong way is still advisable).

Recently I converted all the (Google Docs) documentation of a project to markdown files and added those to the workspace. It now indexes it with RAG and can easily find relevant bits of documentation, especially in agent mode.

It really stresses the importance of getting your documentation and processes in order as well as making sure the tasks at hand are well-specified. It soon might be the main thing that requires human input or action.
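
The retrieval part is conceptually simple; as a toy sketch of the idea (naive keyword overlap standing in for a real embedding/RAG index, with a made-up docs/ path):

    # Toy sketch: chunk markdown docs and retrieve the chunks most relevant to a query.
    import glob

    def load_chunks(pattern="docs/**/*.md", chunk_lines=20):
        chunks = []
        for path in glob.glob(pattern, recursive=True):
            with open(path, encoding="utf-8") as f:
                lines = f.read().splitlines()
            for i in range(0, len(lines), chunk_lines):
                chunks.append((path, "\n".join(lines[i:i + chunk_lines])))
        return chunks

    def retrieve(query, chunks, k=3):
        q = set(query.lower().split())
        ranked = sorted(chunks, key=lambda c: -len(q & set(c[1].lower().split())))
        return ranked[:k]  # the k most relevant chunks to hand to the agent

    chunks = load_chunks()
    for path, _text in retrieve("how do we deploy the staging environment", chunks):
        print(path)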

max_on_hn · 11h ago
I 100% agree that documenting requirements will be the main human input to software development in the near future.

In fact, I built an entirely headless coding agent for that reason: you put tasks in, you get PRs out, and you get journals of each run for debugging but it discourages micro-management so you stay in planning/documenting/architecting.

ch4s3 · 9h ago
Every time I’ve tried to do that it takes longer than it would take me, and comes up with fairly obtuse solutions. The cursor agent seems incapable of putting code in the appropriate files in a functional language.
bgwalter · 16h ago
Yet you are working on your own replacement, while your colleagues are taking the prudent approach.
Jolter · 16h ago
Here’s the deal: if you won’t write your replacement, a competitor will do it and outprice your employer. Either way you’re out of a job. May be more prudent to adapt to the new tools and master them rather than be left behind?

Do you want to be a jobless weaver, or an engineer building mechanical looms for a higher pay than the weaver got?

bgwalter · 16h ago
I want to be neither. I either want to continue being a software engineer who doesn't need a tricycle for the mind, or move to law or medicine; two professions that have successfully defended themselves against extreme versions of the kind of anxiety, obedience and self hate that is so prevalent among software engineers.
xandrius · 13h ago
Nobody is preventing people from writing in Assembly, even though we have more advanced languages.

You could even go back to punch cards if you want to. Literally nobody forcing you to not use it for your own fun.

But LLMs are a multiplier in many mundane tasks (I'd say about 80+% of software development for businesses), so not using them is like fighting against using a computer because you like writing by hand.

nssnsjsjsjs · 10h ago
That grass is not #00FF00 there. See Cory's recent essay on Uber for nurses (doctors are next), and law is only second to coding on the AI disruptors' radar; plus both law and medicine have unfriendly hours for the most part.

Happy to hate myself but earn OK money for OK hours.

empath75 · 15h ago
Funnily enough, I had a 3 or 4 hour chat with some co-workers yesterday about an LLM-related project, and my feeling about LLMs is that they're actually opening up a lot of fun and interesting software engineering challenges if you want to figure out how to automate the usage of LLMs.
declan_roberts · 7h ago
I would absolutely love to write my own replacement. When I can have AI do my job while I go to the beach, you better believe I will be at the beach.
dfedbeef · 3h ago
You are assuming you'll continue to be paid?
allturtles · 16h ago
I think it's the wrong analogy. The prompt engineer who uses the AI to make code maps to the poorly-paid, low-skill power loom machine tender. The "engineer" is the person who created the model. But it's also not totally clear to me that we'll need humans for that either, in the near future.
ori_b · 6h ago
At some point, we'll probably have these things write their own prompts, too.
91bananas · 15h ago
Not all engineering is creating models though, sometimes there are simpler problems to solve.
kuahyeow · 7h ago
Compiler is the more apt analogy to a mechanical loom.

An LLM is more like outsourcing to a consultancy. Results may vary.

mullingitover · 7h ago
> Either way you’re out of a job.

Tools and systems which increase productivity famously always put everyone out of a job, which is why after a couple centuries of industrial revolution we're all unemployed.

Jolter · 4h ago
This is kind of my point; people tend to not be silly enough to stay unemployed and starve. Instead when push comes to shove, sensible folks will adapt to the circumstances.
nialse · 16h ago
Ahh, the “don’t disturb the status quo” argument. See, we are all working on our replacement; newer versions, products, services and knowledge always make the older obsolete. It is wise to work on your replacement, and even wiser to be in charge of and operate the replacement.
bgwalter · 16h ago
No, nothing fundamentally new is created. Programmers have always been obsessed with "new" tooling and processes to distract from that fact.

"AI" is the latest iteration of snake oil that is foisted upon us by management. The problem is not "AI" per se, but the amount of of friction and productivity loss that comes with it.

Most of the productivity loss comes from being forced to engage with it and push back against that nonsense. One has to learn the hype language, debunk it, etc.

Why do you think IT has gotten better? Amazon had a better and faster website with far better search and products 20 years ago. No amount of "AI" will fix that.

nialse · 14h ago
Maybe it would be useful to zoom out a bit. We're in a time of technological change, and change is gonna come. Maybe it isn't your job that will change, maybe it is? Maybe it's not even about you or what you do. More likely it's the processes that will change around you. Maybe it's not change for better or worse. Maybe it's just change. But it's gonna change.
palmotea · 14h ago
> It is wise to work on your replacement...

Depends on the context. You have to keep in mind: it is not a goal of our society or economic system to provide you with a stable, rewarding job. In fact, the incentives are to take that away from you ASAP.

Before software engineers go celebrate this tech, they need to realize they're going to end up like rust-belt factory workers the day after the plant closed. They're not special, and society won't be any kinder to them.

> ...and even wiser to be in charge of and operate the replacement.

You'll likely only get to do that if your boss doesn't know about it.

9dev · 13h ago
Speak for your own society, then. It should absolutely be our shared goal to keep as many people as possible in stable, rewarding employment; if not out of compassion, then at least out of pure egoism—it’s a lot more interesting to be surrounded by happy, educated people than an angry, poor mob.

Don’t let cynics rule your country. Go vote. There’s no rule that things have to stay awful.

davidcbc · 10h ago
Sure, but maybe we should do this before we make our own replacements. They aren't going to do it for us after the fact
nialse · 14h ago
> You have to keep in mind: it is not a goal of our society or economic system to provide you with a stable, rewarding job. In fact, the incentives are to take that away from you ASAP.

We seem to agree, as this is more or less exactly my point. Striving to keep the status quo is a futile path. Eventually things change. Be ready. The best advice I've ever gotten, work-wise (and maybe even life-wise), is to always have alternatives. If you don't have alternatives, you literally have no choice.

palmotea · 14h ago
> We seem to agree, as this is more or less exactly my point. Striving to keep the status quo is a futile path. Eventually things change. Be ready. The best advice I've ever gotten, work-wise (and maybe even life-wise), is to always have alternatives. If you don't have alternatives, you literally have no choice.

Those alternatives are going to be worse for you, because if they weren't, why didn't you switch already? And if a flood of your peers are pursuing alternatives at the same time, you'll probably experience an even poorer outcome than you expected (e.g. everyone getting laid off and trying to make ends meet driving for Uber at the same time). Then, AI is really properly understood as a "fuck the white-collar middle-class" tech, and it's probably going to fuck up your backup plans at about the same time as it fucks up your status quo.

You're also describing a highly individualistic strategy, for someone acting on his own. At this point, the correct strategy is probably collective action, which can at least delay and control the change to something more manageable. But software engineers have been too "special snowflake" about themselves to have laid the groundwork for that, and are acutely vulnerable.

nialse · 14h ago
Alternatives need not be better or worse. Just different. Alternatives need not be doing the same thing somewhere else, it might be seeking out something else to do where you are. It might be selling all your stuff and live on an island in the sun for all I know.

I do concur it is an individualistic strategy, and as you mentioned unionization might have helped. But, then again it might not. Developers are partially unionized where I live, and I'm not so sure it's going to help. It might absorb some of the impact. Let's see in a couple of years.

palmotea · 14h ago
> Alternatives need not be better or worse. Just different. Alternatives need [not] be doing the same thing somewhere else, it might be seeking out something else to do where you are.

People have families to feed and lifestyles to maintain, anything that's not equivalent will introduce hardship. And "different" most likely means worse, when it comes to compensation. Even a successful career change usually means restarting at the bottom of the ladder.

And what's that "something else," exactly? You need to consider that may be disrupted at the same time you're planning on seeking it, or fierce competition from your peers makes it unobtainable to you.

Assuming there are alternatives waiting for you when you'll need them is its own kind of complacency.

> It might be selling all your stuff and live on an island in the sun for all I know.

Yeah, people with the "fuck-you" money to do that will probably be fine. Most people don't have that, though.

nialse · 13h ago
Being ahead of the curve is a recipe for not being left behind. There is no better time for action than now. And regarding the competition from peers, the key is likely differentiation. As it always has been.

Hardship or not, restarting from the bottom of the ladder or not, betting on the status quo is a losing game at the moment. Software development is being disrupted; I would expect developers to produce 2-4x more now than two years ago. However, that is the pure dev work. The architecture, engineering, requirements, specification etc. parts will likely see another trajectory, much due to the rise of automation in dev and other parts of the company. The flip side is that the rise of non-dev automation is coming, with the possibility of automating other tasks, in turn making engineers (maybe not devs, though) vital to the company's process change.

Another, semi-related, thought is that software development has automated away millions of jobs, and it's just developers' time to be on the other end of the stick.

npteljes · 16h ago
Carteling doesn't work bottom-up. When changes begin (like this one with AI), one of the things an individual can do is to change course as fast as they can. There are other strategies as well, not evolving is also one, but some strategies yield better results than others. Not keeping up just worsens the chances, I have found.
asdff · 15h ago
It does when it is called unionizing, however for some reason software developers have a mental block towards the concept.
twodave · 11h ago
I would be much more in favor of them as a software developer if I felt the qualifications of other software developers were reliable. But they never have been, and today it’s worse than ever. So I think it’s important to weed some of that out before we start talking about making it harder to fire people.
davidcbc · 10h ago
You're doing your boss's job for them by making this an engineer vs engineer thing instead of an engineer vs management thing.
dughnut · 10h ago
The reason might be that union members give a percentage of their income to a governing body which is barely distinct from organized crime and in which they have no say. The federal government already exists. You really want more boots on your neck?
JeremyNT · 15h ago
I don't think that this should be downvoted because it raises a really important issue.

I hate AI code assistants, not because they suck, but because they work. The writing is on the wall.

If we aren't working on our own replacements, we'll be the ones replaced by somebody else's vibe code, and we have no labor unions that could plausibly fight back against this.

So become a Vibe Coder and keep working, or take the "prudent" approach you mention - and become unemployed.

neta1337 · 13h ago
I’ll work on fixing the vibe coders mess and make bank. Experience will prove valuable even more than before
realusername · 13h ago
Personally I used them for a while and then just stopped using them because actually no, unfortunately those assistants don't work. They appear to work at first glance but there's so much babysitting needed that it's just not worth it.

This "vibe coding" seems just another way to say that people spend more time refining the output of these tools over and over again that what they would normally code.

JeremyNT · 13h ago
I'm in this camp... today.

But there's going to be an inflection point - soon - as things continue to improve. The industry is going to change rapidly.

Now is the time to either get ready for that - by being ahead of the curve, at least by being familiar with the tooling - or switch careers and cede your job to somebody who will play ball.

I don't like any of this, but I see it as inevitable.

ksenzee · 12h ago
How is it inevitable when they are running out of text to train on and running out of funding at the same time?
suddenlybananas · 12h ago
You just said they worked one comment ago and now you agree that they don't?
JeremyNT · 10h ago
I'm in the same camp in that I don't use them because they don't work well enough for my own tastes. But I know what I'm doing, and I'm picky.

Clearly they do work in a general sense. People who don't want to code are making things that work this way right now!

This isn't yet replacing me, but I'm certain it will relatively soon be the standard for how software is developed.

realusername · 5h ago
Maybe there's going to be an inflection point ... or maybe not, I feel like we're at the exact same point as the first release of Cursor in 2023.
dughnut · 12h ago
Do you want to work with LLMs or H1Bs and interns… choose wisely.

Personally I’m thrilled that I can get trivial, one-off programs developed for a few cents and the cost of a clear written description of the problem. Engaging internal developers or consulting developers to do anything at all is a horrible experience. I would waste weeks on politics, get no guarantees, and waste thousands of dollars and still hear nonsense like, “you want a form input added to a web page? Aw shucks, that’s going to take at least another month” or “we expect to spend a few days a month maintaining a completely static code base” from some clown billing me $200/hr.

rsyring · 12h ago
You can work with consulting oriented engineers who get shit done with relatively little stress and significant productivity. Productivity enhanced by AI but not replaced by it. If interested, reach out to me.
cushychicken · 16h ago
ChatGPT-4o is scary good at writing VHDL.

Using it to prototype some low level controllers today, as a matter of fact!

Panzer04 · 3h ago
What kind of things is it doing?

I have a hard time imagining an LLM being able to do arbitrary things. It always feels like LLMs can do lots of the easy stuff, but if they can't do everything, you still need the skilled engineer, who'd knock the easy things out in a week anyway.

UncleOxidant · 15h ago
Claude and Gemini are decent at it as well. I was surprised when I asked claude (and this was several months back) to come up with a testbench for some very old, poorly documented verilog. It did a very decent job for a first-cut testbench. It even collected common, recurring code into verilog tasks (functions) which really surprised me at the time.
cushychicken · 49s ago
Yes! It’s much better at using functional logic than I am - which I appreciate!
roflyear · 15h ago
It's better-than-senior at some things, but worse-than-junior at a lot of things.
quantadev · 10h ago
It's more like better-than-senior 99% of the time. Makes mistakes 1% of the time. Most of the 'bad results' I've seen people struggle with ended up being the fault of the human, in the form of horrible context given to the AI or else ambiguous or otherwise flawed prompts.

Any skilled developer with a decade of experience can write prompts that return precisely what we want almost every single time. I do it all day long. "Claude 4" rarely messes up.

energy123 · 5h ago
Their confusion is your competitive advantage in the labor market.
parliament32 · 13h ago
I'd like to agree with you and remain optimistic, but so much tech has promised the moon and stagnated into oblivion that I just don't have any optimism left to give. I don't know if you're old enough, but remember when speech-to-text was the next big thing? Dragon NaturallySpeaking was released in 1997, everyone was losing their minds about dictating letters/documents in MS Word, and we were promised that THIS would be the key interface for computing forevermore. And.. 27 years later, talking to the latest Siri, it makes just as many mistakes as it did back then. In messenger applications people are sending literal voice notes -- audio clips -- back and forth because dictation is so unreliable. And audio clips are possibly the worst interface for communication ever (no searching, etc).

Remember how blockchain was going to change the world? Web3? IoT? Etc etc.

I've been through enough of these cycles to understand that, while the AI gimmick is cool and all, we're probably at the local maximum. The reliability won't improve much from here (hallucinations etc), while the costs to run it will stay high. The final tombstone will be when the AI companies stop running at a loss and actually charge for the massive costs associated with running these models.

oconnor663 · 4h ago
> 27 years later, talking to the latest Siri, it makes just as many mistakes as it did back then

Have you tried talking to ChatGPT voice mode? It's mind blowing. You just have a conversation with it. In any language. About anything. The other day I wanted to know about the difference between cast iron and wrought iron, and it turned into a 10 or 15 minute conversation. That's maybe a good example of an "easy" topic for LLMs (lots of textbooks for it to memorize), but the world is full of easy topics that I know nothing about!

some_random · 12h ago
How can you possibly look at what LLMs are doing and the progress made in the last ~3 years and equate it to crypto bullshit? Also it's super weird to include IoT in there, seeing as it has become all but ubiquitous.
r14c · 11h ago
I'm not as bearish on AI, but it's hard to tell if you can really extrapolate future performance based on past improvements.

Personally, I'm more interested in the political angle. I can see that AI will be disruptive because there's a ton of money, and possibly other political outcomes, riding on it doing exactly that.

retetr · 9h ago
Unrelated, but is this a case of the Pareto Principle? (Admittedly the first time I'm hearing of it) Wherein 80% of the effect is caused by 20% of the input. Or is this more a case of diminishing returns? Where the initial results were incredible, but each succeeding iteration seems to be more disappointing?
klabb3 · 9h ago
Pareto is about diminishing returns.

> but each succeeding iteration seems to be more disappointing

This is because the scaling hypothesis (more data and more compute = gains) is plateauing: essentially all of the text data has been used already, and compute is hitting diminishing returns for reasons I'm not smart enough to explain, but it is.

So now we're seeing incremental core model advancements, variations and tuning in pre- and post training stages and a ton of applications (agents).

This is good imo. But obviously it’s not good for delusional valuations based on exponential growth.

energy123 · 4h ago
We're seeing diminishing returns in benchmark space, which is partly an artefact of how the benchmarks are constructed, not an absolute truth about how things are progressing.
ChrisMarshallNY · 9h ago
Really good coders (like him) are better.

Mediocre ones … maybe not so much.

When I worked for a Japanese optical company, we had a Japanese engineer, who was a whiz. I remember him coming over from Japan, and fixing some really hairy communication bus issues. He actually quit the company, a bit after that, at a very young age, and was hired back as a contractor; which was unheard of, in those days.

He was still working for them, as a remote contractor, at least 25 years later. He was always on the “tiger teams.”

He did awesome assembly. I remember when the PowerPC came out, and “Assembly Considered Harmful” was the conventional wisdom, because of pipelining, out-of-order instructions, precaching, and all that.

His assembly consistently blew the doors off anything the compiler did. Like, by orders of magnitude.

benstein · 7h ago
+1000. "Human coders are still better than LLMs" is a hot take. "Antirez is still better than LLMs" is axiomatic ;-)
austin-cheney · 2h ago
In many cases developers are a low expectation commodity. In those cases I strongly believe humans are entirely replaceable by AI and I am saying that as somebody with an exceptionally low opinion of LLMs.

Honestly though, when that replacement comes there is no sympathy to be had. Many developers have brought this upon themselves. For roughly the 25-year period from 1995 to 2020, businesses have been trying to turn developers into mindless commodities that are straightforward to replace. Developers have overwhelmingly encouraged this, and many still do. These are the people who hop employers every 2 years and cannot do their jobs without lying on their resumes or relying completely on a favorite framework.

zxexz · 1h ago
I find myself wondering about your story, and would love it if you would elaborate more. I have gotten some use out of LLMs, and have been quite involved in training a few compute intensive (albeit domain-specific) ones.

Maybe it's the way you talk about 'developers'. Nothing I have seen has felt like the sky falling on an industry; to me at most it's been the sky falling on a segment of silicon valley.

austin-cheney · 1h ago
It’s all about perspectives. In many cases, how a developer identifies their level of participation differs substantially from what they actually do as a work activity. For example, many developers refer to themselves as engineers when they have done nothing remotely close to measurement, research, or policy creation for compliance.

With that out of the way let’s look only at what many developers actually do. If a given developer only uses a framework to put text on screen or respond to a user interaction then they can be replaced. LLMs can already do this better than people. That becomes substantially more true after accounting for secondary concerns: security, accessibility, performance, regression, and more.

If a developer is doing something more complex that accounts for systems analysis or human behavior then LLMs are completely insufficient.

yua_mikami · 17h ago
The thing everyone forgets when talking about LLMs replacing coders is that there is much more to software engineering than writing code; in fact, that's probably one of the smaller aspects of the job.

One major aspect of software engineering is social: requirements analysis and figuring out what the customer actually wants; they often don't know themselves.

If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?

malfist · 17h ago
That was also one of the challenges during the offshoring craze in the 00s. The offshore teams did not have the power or knowledge to push back on things and just built and built and built. Sounds very similar to AI, right?

Probably going to have the same outcome.

pandastronaut · 16h ago
I tend to see today's AI Vibrators as the managers of the 00s and their army of offshore devs.
9dev · 13h ago
Did you actually mean to say AI Vibrators?
mreid · 13h ago
I'm guessing it is a derogatory pun, alluding to vibe coders.
platevoltage · 13h ago
Give it 3 months. There will be an AI Vibrator on the market, if there isn't one already.
falcor84 · 11h ago
I just found this MCP integration, but unfortunately I don't have a device I can test it on - https://github.com/ConAcademy/buttplug-mcp
malfist · 11h ago
You can fix that! I hear there's this new thing called online shopping
hathawsh · 16h ago
The difference is that when AI exhibits behavior like that, you can refine the AI or add more AI layers to correct it. For example, you might create a supervisor AI that evaluates when more requirements are needed before continuing to build, and a code review AI that triggers refinements automatically.
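
To make that concrete, here is a rough sketch of the kind of layering being described. The `ask_llm` helper is a hypothetical placeholder for whatever model API you would actually call, so treat this as an illustration under those assumptions, not a real implementation:

    # Hypothetical sketch of a supervisor + reviewer loop wrapped around a builder LLM.
    # ask_llm() is a stand-in for any chat-completion call; nothing here is a real API.

    def ask_llm(prompt: str) -> str:
        raise NotImplementedError("plug in your model client of choice here")

    def build_with_oversight(requirements: str, max_rounds: int = 3) -> str:
        code = ""
        for _ in range(max_rounds):
            # Supervisor: decide whether the requirements are complete enough to build from.
            verdict = ask_llm(
                "Are these requirements buildable as-is? Answer YES or list the gaps:\n" + requirements
            )
            if not verdict.strip().upper().startswith("YES"):
                requirements += "\n\nClarifications needed: " + verdict
                continue
            # Builder: produce code. Reviewer: critique it; the critique feeds the next round.
            code = ask_llm("Write code that satisfies these requirements:\n" + requirements)
            review = ask_llm(
                "Review this code against the requirements. Say APPROVED or list problems:\n" + code
            )
            if review.strip().upper().startswith("APPROVED"):
                return code
            requirements += "\n\nReviewer feedback to address: " + review
        return code
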
nevertoolate · 16h ago
The question is how autonomous decision-making works: nobody argues that an LLM can finish any sentence, but can it push a red button?
johnecheck · 13h ago
Of course it can push a red button. Trivially, with MCP.

Setting up a system to make decisions autonomously is technically easy. Ensuring that it makes the right decisions, though, is a far harder task.

malfist · 11h ago
So it can push _a_ red button, but not necessarily the _right_ red button
devjab · 16h ago
LLMs do no software engineering at all, and that can be fine. Because you don't actually need software engineering to create successful programs. Some applications will not even need software engineering for their entire life cycles, because nobody is really paying attention to efficiency in the ocean of poor cloud management anyway.

I actually imagine it's the opposite of what you say here. I think technically inclined "IT business partners" will be capable of creating applications entirely without software engineers... Because I see that happen every day in the world of green energy. The issues come later, when things have to be maintained, scale or become efficient. This is where the software engineering comes in, because it actually matters whether you used a list or a generator in your Python app when it iterates over millions of items and not just a few hundred.
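
A quick illustration of that list-vs-generator point (toy numbers, not from the comment above):

    # Toy comparison: a list materializes every element up front, a generator does not.
    import sys

    n = 1_000_000
    squares_list = [x * x for x in range(n)]   # holds a million ints in memory at once
    squares_gen = (x * x for x in range(n))    # yields one value at a time

    print(sys.getsizeof(squares_list))  # several megabytes for the list object alone
    print(sys.getsizeof(squares_gen))   # a couple hundred bytes, regardless of n

    # Both produce the same sum; only the list pays the memory cost.
    print(sum(squares_list) == sum(x * x for x in range(n)))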

AstroBen · 15h ago
That's the thing too right.. the vast majority of software out there barely needs to scale or be super efficient

It does need to be reliable, though. LLMs have proven very bad at that

devjab · 1h ago
> the vast majority of software out there barely needs to scale or be super efficient

That was the way I saw it for a while. In recent months I've begun to wonder if I need to reevaluate that, because it's become clear to me that scaling doesn't actually start from zero. By zero I mean that I was naive enough to think that all programs, even the most Google-assembled one written by a completely new junior, would at least have some efficiency... but some of these LLM services I get to work on today are so bad they didn't start at zero but at some negative number. It would have been less of an issue if our non-developer-developers didn't use Python (or at least used Python with ruff/pyrefly/whatever you like), but some of the things they write can't even scale to do minimal BI reporting.

mettamage · 2h ago
> One major aspect of software engineering is social, requirements analysis and figuring out what the customer actually wants, they often don't know.

It really depends on the organization. In many places product owners and product managers do this nowadays.

ilaksh · 10h ago
It actually comes down to feedback loops, which means iterating on software as it's being used (or attempted to be used) by the customer.

Chat UIs are an excellent customer feedback loop. Agents develop new iterations very quickly.

LLMs can absolutely handle abstractions and different kinds of component systems and overall architecture design.

They can also handle requirements analysis. But the bottom line comes back to iteration, which means fast turnaround time for changes.

The robustness and IQ of the models continue to be improved. All of software engineering is well on its way to being automated.

Probably five years max where un-augmented humans are still generally relevant for most work. You are going to need deep integration of AI into your own cognition somehow in order to avoid just being a bottleneck.

victorbjorklund · 17h ago
Yea, this is why I don't buy the "all developers will disappear" idea. Will I write a lot less code in 5 years (maybe almost none)? Sure, I already type a lot less now than a year ago. But that is just a small part of the process.
xandrius · 13h ago
Exactly. Also, today I can actually believe I could finish a game which might have taken much longer before LLMs, because now I can be pretty sure I won't get stuck on some feature just because I've never done it before.
elzbardico · 16h ago
No. The scope will just increase to occupy the space left by LLMs. We will never be allowed to retire.
rowanG077 · 17h ago
I think LLMs are better at requirement elicitation than they are at actually writing code.
bbarn · 15h ago
The thing is, it is replacing _coders_ in a way. There are millions of people who do (or did) the work that LLMs excel at. Coders who are given a ticket that says "Write this API taking this input and giving this output" who are so far down the chain they don't even get involved in things like requirements analysis, or even interact with customers.

Software engineering is a different thing, and I agree you're right (for now at least) about that, but don't underestimate the sheer number of brainless coders out there.

callc · 10h ago
That sounds more like a case against a highly ossified waterfall development process than anything.

I would argue it’s a good thing to replace the actual brainless activities.

ori_b · 5h ago
> If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?

Presumably, they're trained on a ton of requirements docs, as well as a huge number of customer support conversations. I'd expect them to do this at least as well as coding, and probably better.

wanderingstan · 17h ago
“Better” is always task-dependent. LLMs are already far better than me (and most devs I’d imagine) at rote things like getting CSS syntax right for a desired effect, or remembering the right way to invoke a popular library (e.g. fetch)

These little side quests used to eat a lot of my time and I’m happy to have a tool that can do these almost instantly.

jaccola · 17h ago
I've found LLMs particularly bad for anything beyond basic styling since the effects can be quite hard to describe and/or don't have a universal description.

Also, there are often multiple ways to achieve a certain style, and they all work fine until you want a particular tweak, in which case only one will work, and the LLM usually gets stuck on one of the ones that does not work.

danielbln · 16h ago
Multi modal LLMs to the rescue. Throw a screenshot or mockup in there and tell the LLM "there, like this". Gemini can do the same with videos.
karn97 · 11h ago
Still terrible result. Multi modal = actually understands the image
presentation · 6h ago
It also tends to write CSS that, if you actually have opinions about what good CSS is, is clearly an abomination. But most engineers don’t really care about that.
gherkinnn · 16h ago
I have found it to be good at things I am not very strong at (SQL) but terrible at the things I know well (CSS).

Telling, isn't it?

mywittyname · 12h ago
Ironically, I find it strong at things I don't know very well (CSS), but terrible at things I know well (SQL).

This is probably really just a way of saying, it's better at simple tasks rather than complex ones. I can eventually get Copilot to write SQL that's complex and accurate, but I don't find it faster or more effective than writing it myself.

ehansdais · 10h ago
Actually, you've reinforced their point. It's only bad at things the user is actually good at because the user actually knows enough in that domain to find the flaws and issues. It appears to be good in domains the user is bad at because the user doesn't know any better. In reality, the LLM is just bad at all domains; it's simply whether a user has the skill to discern it. Of course, I don't believe it's as black and white as that but I just wanted to point it out.
gherkinnn · 3h ago
Yes, that is precisely what I meant. It just occurred to me and I will see how that idea holds up.
tcoff91 · 6h ago
It’s like the Gell-Mann Amnesia effect but for LLMs instead of journalism.
ch4s3 · 16h ago
I kind of agree. It feels like they're generally a superior form of copying and pasting from Stack Overflow, where the machine has automated the searching, copying, pasting, and fiddling with variable names. It can be just as useful or dangerous as Google -> Copy -> Paste ever was, but faster.
sanderjd · 9h ago
Funny, I find it to be good at things I'm not very strong at (CSS) but terrible at the things I know well (SQL). :)

Actually I think it's perfectly adequate at SQL too.

kccqzy · 17h ago
> and most devs I’d imagine

What an awful imagination. Yes there are people who don't like CSS but are forced to use it by their job so they don't learn it properly, and that's why they think CSS is rote memorization.

But overall I agree with you that if a company is too cheap to hire a person who is actually skilled at CSS, it is still better to foist that CSS job onto LLMs than onto an unwilling human. Because that unwilling human is not going to learn CSS well and won't enjoy writing CSS.

On the other hand, if the company is willing to hire someone who's actually good, LLMs can't compare. It's basically the old argument of LLMs only being able to replace less-good developers. In this case, you admitted that you are not good at CSS and LLMs are better than you at CSS. It's not task-dependent, it's skill-dependent.

marcosdumay · 17h ago
Hum... I imagine LLMs are better than every developer at getting CSS keywords right, like the GP pointed out. And I expect every LLM to be slightly worse than most classical autocompletes.
skydhash · 17h ago
Getting CSS keywords right is not the actual point of writing CSS, and you can have a linter that helps you in that regard. The endgame of writing CSS is to style an HTML page according to the specifications of a design, which can be as detailed as a Figma file or as flimsy as a drawing on a whiteboard.
lelandfe · 17h ago
I'm one of those weirdos who really likes handwriting CSS. I frequently find ChatGPT getting my requests wrong.
jjgreen · 17h ago
... even better with a good fountain pen ...
michaelsalim · 13h ago
This is like saying that LLMs are better at knowing the name of that one obscure API. It's not wrong, but it's also not the hard part about CSS
klabb3 · 9h ago
Wait until they hear how good dictionaries are at spelling.
chii · 7h ago
The LLM outputs good enough CSS, but is (way) cheaper than someone who's actually good at CSS.
sanderjd · 9h ago
Yeah, this is what I really like about AI tools though. They're way better than me at annoying minutia like getting CSS syntax right. I used to dread that kind of thing!
codr7 · 7h ago
And you will keep dreading it for as long as you use them, since you learn nothing from solutions served on a silver platter.
sanderjd · 5h ago
The point is that I don't dread it anymore, because now there are tools that make it a lot easier the one or two times a year I have some reason to use it.
zdragnar · 17h ago
I think that's great if it's for something outside of your primary language. I've used it to good effect in that way myself. However, denying yourself the reflexive memory of having learned those things is a quick way to become wholly dependent upon the tool. You could easily end up with compromised solutions because the tool recommends something you don't understand well enough to know there's a better way.
dpkirchner · 17h ago
You're right, however I think we've already gone through this before. Most of us (probably) couldn't tell you exactly how an optimizing compiler picks optimizations or exactly how JavaScript maps to processor instructions, etc -- we hopefully understand enough at one level of abstraction to do our jobs. Maybe LLM driving will be another level of abstraction, when it gets better at (say) architecting projects.
skydhash · 17h ago
> Most of us (probably) couldn't tell you exactly how an optimizing compiler picks optimizations or exactly how JavaScript maps to processor instructions,

That's because other people are making those work well. It's like how you don't care about how the bread is being made because you trust your baker (or the regulations). It's a chain of trust that is easily broken when LLMs are brought in.

danielbln · 16h ago
Depends, if regulations are the cage that a baker has to work in to produce a product of agreed upon quality, then tests and types and LSPs etc. can be that cage for an LLM.
skydhash · 14h ago
Regulations are not a cage; they don't constrain you from doing things. They're a threshold beyond which behavior has destructive consequences for yourself, so you're very much incentivized not to cross it.

So tests may be the inspections, but what is the punitive action? Canceling the subscription?

AnimalMuppet · 17h ago
So here's an analogy. (Yeah, I know, proof by analogy is fraud. But it's going to illustrate the question.)

Here's a kid out hoeing rows for corn. He sees someone planting with a tractor, and decides that's the way to go. Someone tells him, "If you get a tractor, you'll never develop the muscles that would make you really great at hoeing."

Different analogy: Here's someone trying to learn to paint. They see someone painting by numbers, and it looks a lot easier. Someone tells them, "If you paint by numbers, you'll never develop the eye that you need to really become good as a painter."

Which is the analogy that applies, and what makes it the right one?

I think the difference is how much of the job the tool can take over. The tractor can take over the job of digging the row, with far more power, far more speed, and honestly far more quality. The paint by numbers can take over the job of visualizing the painting, with some loss of quality and a total loss of creativity. (In painting, the creativity is considered a vital part; in digging corn rows, not so much.)

I think that software is more like painting, rather than row-hoeing. I think that AI (currently) is in the form of speeding things up with some loss of both quality and creativity.

Can anyone steelman this?

bluefirebrand · 17h ago
> Here's a kid out hoeing rows for corn. He sees someone planting with a tractor, and decides that's the way to go. Someone tells him, "If you get a tractor, you'll never develop the muscles that would make you really great at hoeing

In this example, the idea of "losing the muscles that make you great at hoeing" seems kind of like a silly thing to worry about

But I think there's a second order effect here. The kid gets a job driving the tractor instead. He spends his days seated instead of working his body. His lifestyle is more sedentary. He works just as many hours as before, and he makes about the same as he did before, so he doesn't really see much benefit from the increased productivity of the tractor.

However now he's gaining weight from being more sedentary, losing muscle from not moving his body, developing lower back problems from being seated all day, developing hearing loss from the noisy machinery. His quality of life is now lower, right?

Edit: Yes, there are also health problems from working hard moving dirt all day. You can overwork yourself, no question. It's hard on your body, being in the sun all day is bad for you.

I would argue it's still objectively a physically healthier lifestyle than driving a tractor for hours though.

Edit 2: my point is that I think after driving a tractor for a while, the kid would really struggle to go hoe by hand like he used to, if he ever needed to

hatefulmoron · 16h ago
> my point is that I think after driving a tractor for a while, the kid would really struggle to go hoe by hand like he used to, if he ever needed to

That's true in the short term, but let's be real, tilling soil isn't likely to become a lost art. I mean, we use big machines right now but here we are talking about using a hoe.

If you remove the context of LLMs from the discussion, it reads like you're arguing that technological progress in general is bad because people would eventually struggle to live without it. I know you probably didn't intend that, but it's worth considering.

It's also sort of the point in an optimistic sense. I don't really know what it takes on a practical level to be a subsistence farmer. That's probably a good sign, all things considered. I go to the gym 6 times a week, try to eat pretty well, I'm probably better off compared to toiling in the fields.

bluefirebrand · 16h ago
> If you remove the context of LLMs from the discussion, it reads like you're arguing that technological progress in general is bad because people would eventually struggle to live without it.

I'm arguing that there are always tradeoffs and we often do not fully understand the tradeoffs we are making or the consequences of those tradeoffs 10, 50, 100 years down the road

When we moved from more physical jobs to desk jobs, many of us became sedentary and overweight. Now we are in an "obesity crisis". There are multiple factors to that; it's not just being in desk jobs, but being sedentary is a big factor.

What tradeoffs are we making with AI that we won't fully understand until much further along this road?

Also, what is in it for me or other working class people? We take jobs that have us driving machines, we are "more productive" but do we get paid more? Do we have more free time? Do we get any benefit from this? Maybe a fraction. Most of the benefit is reaped by employers and shareholders

Maybe it would be better if instead of hoeing for 8 hours the farmhand could drive the tractor for 2 hours, make the same money and have 6 more free hours per day?

But what really happens is that the farm buys a tractor, fires 100 of the farmhand's coworkers, then has the remaining farmhand drive the tractor for 8 hours, replacing the productivity with very little benefit to himself

Now the other farmhands are unemployed and broke, and he's still working just as much without gaining anything extra from it

The only one who benefits are the owners

californical · 15h ago
I do think you’re missing something, though.

In a healthy competitive market (like most of the history of the US, maybe not the last 30-40 years), if all of the farms do that, it drives down the cost of food: the reduction in labor needed to produce it, combined with competition, brings prices down.

That still doesn’t directly benefit the farmhands. But if it happens gradually throughout the entire economy, it creates abundance that benefits everybody. The farmhand doesn’t benefit from their own increase in productivity, but they benefit from everyone else’s.

And those unemployed farmhands likely don’t stay unemployed - maybe farms are able to expand and grow more, now that there is more labor available. Maybe they even go into food processing. It’s not obvious at the time, though.

In tech, we currently have like 6-10 mega companies, and a bunch of little ones. I think creating an environment that allows many more medium-sized companies and allowing them to compete heavily will ease away any risk of job loss. Same applies to a bunch of fields other than tech. The US companies are far too consolidated.

bluefirebrand · 15h ago
> I think creating an environment that allows many more medium-sized companies and allowing them to compete heavily will ease away any risk of job loss. Same applies to a bunch of fields other than tech. The US companies are far too consolidated

How do we achieve this environment?

It's not through AI, that is still the same problem. The AI companies will be the 6-10 mega companies and anyone relying on AI will still be small fry

Every time in my lifetime that we have had a huge jump in technological progress, all we've seen is that the rich get richer and the poor get poorer and the gap gets bigger

You even call this out explicitly: "most of the history of the US, maybe not the last 30-40 years"

Do we have any realistic reason to assume the trend of the last 30-40 years will change course at this point?

hatefulmoron · 15h ago
> When we moved from more physical jobs to desk jobs many of us became sedentary and overweight. Now we are in an "obesity crisis". There's multiple factors to that, it's not just being in desk jobs, but being sedentary is a big factor.

Sure, although I think our lives are generally better than they were a few hundred years ago. Besides, if you care about your health you can always take steps yourself.

> The only one who benefits are the owners

Well yeah, the entity that benefits is the farm, and whoever owns whatever portions of the farm. The point of the farm isn't to give its workers jobs. It's to produce something to sell.

As long as we're in a market where we're selling our labor, we're only given money for being productive. If technology makes us redundant, then we find new jobs. Same as it ever was.

Think about it: why should hundreds of manual farmhands stay employed while they can be replaced by a single machine? That's not an efficient economy or society. Let those people re-skill and be useful in other roles.

const_cast · 10h ago
> If technology makes us redundant, then we find new jobs. Same as it ever was.

Except, of course, it's not the same as it ever was because you do actually run out of jobs. And it's significantly sooner than you think, because people have limits.

I can't be Einstein, you can't be Einstein. If that becomes the standard, you and I will both starve.

We've been pushing people up and up the chain of complexity, and we can do that because we got all the low hanging fruit. It's easy to get someone to read, then to write, then to do basic math, then to do programming. It gets a bit harder though with every step, no? Not everyone who reads has the capability of doing basic math, and not everyone who can do basic math has the capability of being a programmer.

So at each step, we lose a little bit of people. Those people don't go anywhere, we just toss them aside as a society and force them into a life of poverty. You and I are detached from that, because we've been lucky to not be those people. I know some of those people, and that's just life for them.

My parents got high paying jobs straight out of highschool. Now, highschool grads are destined to flip burgers. We've pushed people up - but not everyone can graduate college. Then, we have to think about what happens when we continue to push people up.

Eventually, you and I will not be able to keep up. You're smart, I'm smart, but not that smart. We will become the burger flippers or whatever futuristic equivalent. Uh... robot flippers.

hatefulmoron · 2h ago
What if all work is no longer necessary? Then yes, we're going to have to rethink how our society works. Fair enough.

I'm a bit confused by your read on the people who don't make it through college. The implication is that if you don't make it into a high status/white collar job, you're destined for a life of poverty. I feel like this speaks more to the insecurity of the white collar worker, and isn't actually a good reflection of reality. Most of my friends dropped out of college and did something completely different in the service industry, it's not really a "life of poverty."

> My parents got high paying jobs straight out of highschool. Now, highschool grads are destined to flip burgers.

This feels like pure luck for your parents. Take a wider look at history -- it's just a regression to the mean. We used to have _less_ complex jobs. Mathematics/science hasn't always been a job. That is to say, burger-flipping or an equivalent was more common. It was not the norm that households were held together by a single man's income, etc.

bluefirebrand · 6h ago
> Uh... robot flippers.

Prompt engineers

You are spot on with your analysis. At some point there will be nothing left for people to do except at the very top level. What happens then?

I am not optimistic enough to believe that we create a utopia for everyone. We would need to solve scarcity first, at minimum.

stonemetal12 · 15h ago
>I think the difference is how much of the job the tool can take over.

I think it is about how utilitarian the output is. For food no one cares how the sausage is made. For a painting the story behind it is more important than the picture itself. All of Picasso's paintings are famous because they were painted by Picasso. Picasso style painting by Bill? Suddenly it isn't museum worthy anymore.

No one cares about the story or people behind Word, they just want to edit documents. The Demo scene probably has a good shot at being on the side of art.

danielbln · 16h ago
For me the creativity in software engineering doesn't come from coding, that's an implementation detail. It comes from architecture, from thinking about "what do I want to build, how should it behave, how should it look, what or who is it for?" and driving that forward. Bolting it together in code is hoeing, for the vast majority of us. The creative endeavor sits higher up on the abstraction ladder.
solatic · 4h ago
Human coders are necessary because writing code is a political act of deciding between different trade-offs. antirez's whole post is explaining to Gemini what the trade-offs even were in the first place. No analysis of a codebase in isolation (i.e. without talking to the original coders, and without comments in the code) can distinguish between intentional prioritization of certain trade-offs or whether behavior is unintentional / written by a human in an imperfect way because they didn't know any better / buggy.

LLMs will never be able to figure out for themselves what your project's politics are and what trade-offs are supposed to be made. The penultimate model will still require a user to explain the trade-offs in a prompt.

energy123 · 4h ago
> LLMs will never be able to figure out for themselves what your project's politics are and what trade-offs are supposed to be made.

I wouldn't declare that unsolvable. The intentions of a project and how they fit into user needs can be largely inferred from the code and associated docs/README, combined with good world knowledge. If you're shown a codebase of a GPU kernel for ML, then as a human you instantly know the kinds of constraints and objectives that go into any decisions. I see no reason why an LLM couldn't also infer the same kind of meta-knowledge. Of course, this papers over the hard part of training the LLMs to actually do that properly, but I don't see why it's inherently impossible.

acquisitionsilk · 15h ago
It is quite heartening to see so many people care about "good code". I fear it will make no difference.

The problem is that the software world got eaten up by the business world many years ago. I'm not sure at what point exactly, or if the writing was already on the wall when Bill Gates wrote his open letter to hobbyists in 1976.

The question is whether shareholders and managers will accept less good code. I don't see how it would be logical to expect anything else, as long as profit lines go up why would they care.

Short of some sort of cultural pushback from developers or users, we're cooked, as the youth say.

JackSlateur · 15h ago
Code is meant to power your business

Bad code leads to bad business

This makes me think of hosting departments; you know, the people who are using VMware, physical firewalls, DPI proxies and whatnot.

On the other edge, you have public cloud providers, which are using QEMU, netfilter, dumb networking devices and stuff.

Who got eaten by whom, nobody could have guessed..

chii · 7h ago
> Bad code leads to bad business

Bad business leads to bad business.

Bad code might be bad, or might be sufficient. It's situational. And by looking at what exists today, the majority of code is pretty bad already - and not all businesses with bad code lead to bad businesses.

In fact, some bad code is very profitable for some businesses (ask any SAP integrator).

JackSlateur · 3h ago
It is survivorship bias: "by looking at what is still alive today, the majority of code is pretty bad"

That elides all of those who died in the process: those still alive are here despite bad IT, not because of it

tcoff91 · 6h ago
The vast majority of code that makes money is pretty shitty.
BirAdam · 10h ago
This is fun to think about. I used to think that all software was largely garbage, and at one point, I think this _was_ true. Sometime over the last 20 years, I believe this ceased to be the case. Most software these days actually works. Importantly, most software is actually stable enough that I can make it half an hour without panic saving.

Could most software be more awesome? Yes. Objectively, yes. Is most software garbage? Perhaps by raw volume of software titles, but are most popular applications I’ve actually used garbage? Nope. Do I loathe the whole subscription thing? Yes. Absolutely. Yet, I also get it. People expect software to get updated, and updates have costs.

So, the pertinent question here is, will AI systems be worse than humans? For now, yeah. Forever? Nope. The rate of improvement is crazy. Two years ago, LLMs I ran locally couldn’t do much of anything. Now? Generally acceptable junior dev stuff comes out of models I run on my Mac Studio. I have to fiddle with the prompts a bit, and it’s probably faster to just take a walk and think it over than spend an hour trying different prompts… but I’m a nerd and I like fiddling.

robocat · 10h ago
> Short of some sort of cultural pushback from developers or users

Corporations create great code too: they're not all badly run.

The problem isn't a code quality issue: it is a moral issue of whether you agree with the goals of capitalist businesses.

Many people have to balance the needs of their wallet with their desire for beautiful software (I'm a developer-founder I love engineering and open source community but I'm also capitalist enough to want to live comfortably).

loudmax · 17h ago
Companies that leverage LLMs and AIs to let their employees be more productive will thrive.

Companies that try to replace their employees with LLMs and AIs will fail.

Unfortunately, all that's in the long run. In the near term, some CEOs and management teams will profit from the short term valuations as they squander their companies' future growth on short-sighted staff cuts.

bdbenton5255 · 13h ago
That's really it. These tools are useful as assistants to programmers but do not replace an actual programmer. The right course is to embrace the technology moderately rather than reject it completely or bet on it replacing workers.
joshdavham · 16h ago
> In the near term, some CEOs and management teams will profit from the short term valuations

That's actually really interesting to think about. The idea that doing something counter-productive like trying to replace employees with AI (which will cause problems), may actually benefit the company in terms of valuations in the short run. So in effect, they're hurting and helping the company at the same time.

to11mtm · 12h ago
Hey, if the check clears for the bonus they got for hitting 'reduce costs in the IT department', they often bail before things rear their ugly head, or, in the ugly case, they Reality Distortion Field the entire org into making the bad anti-patterns permanent, even while acknowledging the cost/delivery/quality inefficiencies [0].

This is especially prevalent in waterfall orgs that refuse change. Body shops are more than happy to waste a huge portion of their billable hours on planning meetings and roadmap revisions as the obviousness of the mythical man month comes to bear on the org.

Corners get cut to meet deadlines, because the people who started/perpetuated whatever myth need to save their skins (and hopefully continue to get bonuses.)

The engineers become a scapegoat for the org's management problems (And watch, it very likely will happen at some shops with the 'AI push'). In the nasty cases, the org actively disempowers engineers in the process[0][1].

[0] - At one shop, we got grief that we hadn't shipped a feature, but the only reason we hadn't was that IT was not allowed to decide between a set of radio buttons or a drop-down on a screen. Hell, I got yelled at for just making the change locally and sending screenshots.

[1] - At more than one shop, FTE devs were responsible for providing support for code written by offshore that they were never even given the opportunity to review. And hell yes myself and others pushed for change, but it's never been a simple change. It almost always is 'GLWT'->'You get to review the final delivery but get 2 days'->'You get to review the set of changes'->'Ok you can review their sprint'->'OK just start reviewing every PR'.

BirAdam · 10h ago
By the time AI hype dies down and hurts the bottom line, AI systems might be good enough to do the jobs.

“The market can remain irrational longer than you can remain solvent.” — Warren Buffett

janalsncm · 16h ago
Very well said. Using code assistance is going to be table stakes moving forward, not something that can replace people. It’s not like competitors can’t also purchase AI subscriptions.
bbarn · 15h ago
Honestly, if you're not doing it now, you're behind. The sheer amount of time that using it smartly can save you, freeing you to focus on the parts that actually matter, is massive.
kweingar · 13h ago
If progress continues at the rate that AI boosters expect, then soon you won't have to use them smartly to get value (all existing workflows will churn and be replaced by newer, smarter workflows within months), and everybody who is behind will immediately catch up the moment they start to use the tool.
abletonlive · 8h ago
But if it doesn't and you're not using it now then you're gonna be behind and part of the group getting laid off

the people that are good at using these tools now will be better at it later too. you might have closed the gap quite a bit but you will still be behind

using LLMs as they are now requires a certain type of mindset that takes practice to maintain and sharpen. It's just like a competitive game. The more intentionally you do it, the better you get. And the meta changes every 6 months to a year.

That's why I scroll and laugh through all the comments on this thread dismissing it, because I know that the people dismissing it are the problem.

the interface is a chatbox with no instructions or guardrails. the fact that folks think that their experience is universal is hilarious. so much of using LLMs right now is context management.

I can't take most of yall in this thread seriously

kweingar · 7h ago
If the "meta" changes so quickly, then that sets an upper bound as to how far behind you are, no? Unless you are doing low-autonomy, non-specialized work or are applying to fly-by-the-seat-of-your-pants startup jobs, no hiring manager is going to care if you have three months less experience with Codex than the other candidate.[1]

> so much of using LLM right now is context management

That is because the tooling is incredibly immature. Even if raw LLM capabilities end up plateauing, new and more effective tools are going to proliferate. You won't have to obsess over managing context, just like we don't have to do 2023-level tricks like "you are an expert" or "please explain your thought process" anymore. All of the context management tricks will be obsolete very soon... because AI tooling companies are extremely incentivized to solve it.

I find it implausible that the tech is in a state where full-time prompters are gaining a durable advantage over everyone else. J2ME devs probably thought they were building a snowballing advantage over devs who dismissed mobile development. Then the iPhone came out and totally reset the playing field.

[1] Most employers don't distinguish between three months and nine months of experience with JS framework du jour, no matter what it says on the job listing

Edited to add: Claude Code brought the agentic coding trend to the mainstream. It came out three months ago. You talk about how much you're laughing at the naivete of people here, but are you telling me with a straight face that three months is enough to put a talented engineer "behind"? At risk of being unemployable? The engineers who spent the last three months ping-ponging between Claude Code, Cursor, Codex, etc. can have their experience distilled into like a week of explaining to a newcomer, and I predict that will be true six months from now, or a year from now.

abletonlive · 7h ago
> If the "meta" changes so quickly, then that sets an upper bound as to how far behind you are, no?

No, the top players when the meta changes in competitive games remain the top players. They also figure out the new meta faster than the casual players.

kweingar · 7h ago
This is why devs who started with J2ME are the holy grail of app developers, since they started making apps years before iPhone devs
abletonlive · 7h ago
you sound mad. you could be spending this time upskilling instead.

but i'll say it again, when the meta changes the people that were at the top will quickly find themselves at the top again.

listen, the reason why they were in the top in the first place and you aren't is a mindset thing. the top are the curious that are experimenting and refining, sharing with each other techniques developed over time.

the complacent just sit around and let the world happen to them. they, like you are expressing now, think that when the meta switches the bottom will suddenly find themselves at the top and the top will have nothing.

look around you, that's obviously not how the world works.

but yes, laughing

kweingar · 5h ago
(I deleted a less productive comment.)

I do use these tools though! I spent some time with AI. I have coworkers who are more heads-down working on their projects and not tinkering with agents, and they're doing fine. I have coworkers who are on the absolute bleeding edge of AI tools, and they're doing fine. When the tooling matures and the churn lessens and the temperature of the discourse is lowered, I'm confident that we will all be doing great things. I just think that the "anybody not using and optimizing Codex or Claude Code today is not gonna make it" attitude is misguided. I could probably wring out some more utility from these tools if I spent more time with them, but I'd rather spend most of my professional development time working on subject matter expertise. I want to deeply understand my domain, and I trust that AI use will (mostly) become relatively easier to pick up and less of a differentiator as time goes on

ukuina · 5h ago
> when the meta changes the people that were at the top will quickly find themselves at the top again.

I think parent is agreeing with you?

> This is why devs who started with J2ME are the holy grail of app developers, since they started making apps years before iPhone devs

kweingar · 5h ago
I was being sarcastic there, a bad habit of mine. There are some advantages to being an early adopter (you get to reap some of the benefits now), but it doesn't give you a permanent advantage, and the people who aren't closely following and adopting weeks-old tools aren't doomed to irrelevance.

The iPhone was an equalizer. Existing mobile devs did get a genuine head start on mobile app design, but their advantage was fleeting.

am17an · 17h ago
All the world's smartest minds are racing towards replacing themselves. As programmers, we should take note and see where the wind is blowing. At least don't discard the possibility, and rather be prepared for the future. Not to sound like a tin-foil hat, but the odds of achieving something like this increase by the day.

In the long term (post AGI), the only safe white-collar jobs would be those built on data which is not public i.e. extremely proprietary (e.g. Defense, Finance) and even those will rely heavily on customized AIs.

AstroBen · 14h ago
Ultimately this needs to be solved politically

Making our work more efficient, or humans redundant, should be really exciting. It's not set in stone that we need to leave people middle-aged, with families, suddenly unable to earn enough to provide a good life.

Hopefully if it happens, it happens to such a huge amount of people that it forces a change

lyu07282 · 13h ago
But that already happened to lots of industries and lots of people. We never cared about them before; now it's us, so we care, but nothing is different about us. Just learn to code!
geraneum · 5h ago
> But that already happened to lots of industries and lots of people, we never cared before about them

We did. Why do you think labor laws, unions, etc. exist? Why do you think communism was appealing as an idea in the beginning to many? Whether the effects were good or bad or enough or not, that’s a different question. But these changes demonstrably have grave consequences.

AstroBen · 13h ago
The difference is in how many industries AI is threatening. It's not just coding on the chopping block
bluefirebrand · 12h ago
No different from the many industries that offshoring wrecked
bitpush · 13h ago
> All the world's smartest minds are racing towards replacing themselves

Isn't every little script, every little automation we programmers do, in the same spirit? "I don't like doing this, so I'm going to automate it, so that I can focus on other work".

Sure, we're racing towards replacing ourselves, but there would be (and will be) other, more interesting work for us to do when we're free to do it. Perhaps all of us will finally have time to learn to surf, or garden, or something. Some might still write code by hand, just like some folks like making bread... but making bread by hand is not how you feed a civilization - even if hundreds of bakers were put out of business.

AstroBen · 10h ago
> all of us will finally have time to learn surfing, or garden

Unless you have a mortgage.. or rent.. or need to eat

wijwp · 12h ago
> Not to sound like a tin-foil hat but odds of achieving something like this increase by the day.

Where do you get this? The limitations of LLMs are becoming more clear by the day. Improvements are slowing down. Major improvements come from integrations, not major model improvements.

AGI likely can't be achieved with LLMs. That wasn't as clear a couple years ago.

drodgers · 8h ago
I don't know how someone could be following the technical progress in detail and hold this view. The progress is astonishing, and the benchmarks are becoming saturated so fast that it's hard to keep track.

Are there plenty of gaps left between here and most definitions of AGI? Absolutely. Nevertheless, how can you be sure that those gaps will remain given how many faculties these models have already been able to excel at (translation, maths, writing, code, chess, algorithm design etc.)?

It seems to me like we're down to a relatively sparse list of tasks and skills where the models aren't getting enough training data, or are missing tools and sub-components required to excel. Beyond that, it's just a matter of iterative improvement until 80th percentile coder becomes 99th percentile coder becomes superhuman coder, and ditto for maths, persuasion and everything else.

Maybe we hit some hard roadblocks, but room for those challenges to be hiding seems to be dwindling day by day.

materiallie · 4h ago
I think benchmark targeting is going to be a serious problem going forward. The recent Nate Silver podcast on poker performance is interesting: basically, LLMs still suck at playing poker.

Poker tests intelligence. So what gives? One interesting thing is that, for whatever reason, poker performance isn't used as a benchmark in the LLM showdown between big tech companies.

The models have definitely improved in the past few years. I'm skeptical that there's been a "break-through", and I'm growing more skeptical of the exponential growth theory. It looks to me like the big tech companies are just throwing huge compute and engineering budgets at the existing transformer tech, to improve benchmarks one by one.

I'm sure if Google allocated 10 engineers and a dozen million dollars to improving Gemini's poker performance, it would increase. The idea behind AGI and the exponential growth hypothesis is that you don't have to do that, because the AI gets smarter in a general sense all on its own.

drodgers · 51m ago
I think that's generally fair, but this point goes too far:

> improve benchmarks one by one

If you're right about that in the strong sense — that each task needs to be optimised in total isolation — then it would be a longer, slower road to a really powerful humanlike system.

What I think is really happening, though, is that each specific task (eg. coding) is having large spillover effects on other areas (eg. helping the models to be better at extended verbal reasoning even when not writing any code). The AI labs can't do everything at once, so they're focusing where:

- It's easy to generate more data and measure results (coding, maths etc.)
- There's a relative lack of good data in the existing training corpus (eg. good agentic reasoning logic - the kinds of internal monologs that humans rarely write down)
- Areas where it would be immediately useful for the models to get better in a targeted way (eg. agentic tool-use; developing great hypothesis generation instincts in scientific fields like algorithm design, drug discovery and ML research)

By the time those tasks are optimised, I suspect the spill over effects will be substantial and the models will generally be much more capable.

Beyond that, the labs are all pretty open about the fact that they want to use the resulting AI talents for coding, reasoning and research skills to accelerate their own research. If that works (definitely not obvious yet) then finding ways to train a much broader array of skills could be much faster because that process itself would be increasingly automated.

cheema33 · 3h ago
> All the world's smartest minds are racing towards replacing themselves.

I think they are hoping that their future is safe. And it is the average minds that will have to go first. There may be some truth to it.

Also, many of these smartest minds are motivated by money, to safeguard their future, from a certain doom that they know might be coming. And AI is a good place to be if you want to accumulate wealth fast.

bgwalter · 17h ago
The Nobel prize is said to have been created partly out of guilt over having invented dynamite, which was obviously used in a destructive manner.

Now we have Geoffrey Hinton getting the prize for contributing to one of the most destructive inventions ever.

reducesuffering · 13h ago
At least he and Yoshua Bengio are remorseful. Many others haven't even gotten that far...
BirAdam · 10h ago
Nah. As more people are rendered unemployed the buying market and therefore aggregate demand will fall. Fewer sales hurts the bottom line. At some point, revenues across the entire economy fall, and companies cannot afford the massive datacenters and nuclear power plants fueling them. The hardware gets sold cheap, the companies go under, and people get hired again. Eventually, some kind of equilibrium will be found or the world engages in the Butlerian Jihad.
frogperson · 11h ago
The context required to write real software is just way too big for LLMs. Software is the business, codified. How is an LLM supposed to know about all the rules in all the departments plus all the special agreements promised to customers by the sales team?

Right now the scope of what an LLM can solve is pretty generic and narrow. Anytime more than a class or two is involved, or the codebase is more than 20 or 30 files, even the best LLMs start to stray and lose focus. They can't seem to keep a train of thought, which leads to churning out way too much code.

If LLMs are going to replace real developers, they will need to accept significantly more context, they will need a way to gather context from the business at large, and some way to persist a train of thought across the life of a codebase.

I'll start to get nervous when these problems are close to being solved.

zachlatta · 11h ago
I’d encourage you to try the 1M context window on Gemini 2.5 Pro. It’s pretty remarkable.

I paste in the entire codebase for my small ETL project (100k tokens) and it’s pretty good.

Not perfect, still a long ways to go, but a sign of the times to come.

smilbandit · 17h ago
From my limited experience (former coder, now management, but I still get to code now and then), I've found them helpful but also intrusive. Sometimes when it guesses the code for the rest of the line and the next few lines, it's going down a path I don't want to take, but I still have to take time to scan it. Maybe it's a configuration issue, but I'd prefer it didn't put code directly in my way, or that it were off by default and only shown when I hit a key combo.

One thing I know is that I wouldn't ask an LLM to write an entire section of code or even a function without going in and reviewing.

haiku2077 · 17h ago
Zed has a "subtle" mode like that. More editors should provide it. https://zed.dev/docs/ai/edit-prediction#switching-modes
PartiallyTyped · 17h ago
> One thing I know is that I wouldn't ask an LLM to write an entire section of code or even a function without going in and reviewing.

These days I am working on a startup doing [a bit of] everything, and I don't like the UI it creates. It's useful enough when I make the building blocks and let it be, but allowing Claude to write big sections ends up with lots of rework until I get what I am looking for.

bouncycastle · 12h ago
Last night I spent hours fighting o3.

I had never made a Dockerfile in my life, so I thought it would be faster to just point o3 at the GitHub repo and let it figure things out, rather than reading the docs and building it myself.

I spent hours debugging the file it gave me... It kept on hallucinating things that didn't exist, removing/rewriting other parts, and making other big mistakes, like mixing up python3 and python and the intricacies around that.

Finally I gave up and Googled some docs instead. Fixed my file in minutes and was able to jump into the container and debug the rest of the issues. AI is great, but it's not a tool to end all. You still need someone who is awake at the wheel.

halpow · 6h ago
They're great at one-shotting verbose code, but if they generate bad code the first time, you're out of luck.

I don't think I've ever written "this API doesn't exist" and then gotten a useful alternative.

Claude is the only one that regularly tells me something isn't possible rather than making sh up.

throwaway314155 · 12h ago
Pro-tip: Check out Claude or Gemini. They hallucinate far less on coding tasks. Alternatively, enable internet search on o3 which boosts its ability to reference online documentation and real world usage examples.

I get having a bad taste in your mouth but these tools _aren't_ magic and do have something of a steep learning curve in order to get the most out of them. Not dissimilar from vim/emacs (or lots of dev tooling).

edit: To answer a reply (hn has annoyingly limited my ability to make new comments) yes, internet search is always available to ChatGPT as a tool. Explicitly clicking the globe icon will encourage the model to use it more often, however.

Sohcahtoa82 · 11h ago
> enable internet search on o3

I didn't know it could even be disabled. It must be enabled by default, right?

throwaway314155 · 7h ago
You're correct. Tapping the globe icon encourages the model to use it more often.
seabirdman · 2h ago
Super hard problems are often solved by making strange weird connections derived from deep experience plus luck. Like finding the one right key in a pile of keys. The intuition you used to solve your problem IS probably beyond current agents. But, that too will change perhaps by harnessing the penchant of these systems to “hallucinate”? Or, some method or separate algorithm for dealing with super hard problems creatively and systematically. Recently, I was working on a hard imaging problem (for me) and remembered a bug I had inadvertently introduced and fixed a few days earlier. I was like wait a minute - because in that random bug I saw opportunity and was able to actually use the bug to solve my problem. I went back to my agent and it agreed that there was virtually no way it could have ever seen and solved the problem in that way. But that too will come. Rest assured.
bdbenton5255 · 11h ago
The human ability to design computer programs through abstractions and solve creative problems like these is arguably more important than being able to crank out lines of code that perform specific tasks.

The programmer is an architect of logic and computers translate human modes of thought into instructions. These tools can imitate humans and produce code given certain tasks, typically by scraping existing code, but they can't replace that abstract level of human thought to design and build in the same way.

When these models are given greater functionality to not only output code but to build out entire projects given specifications, then the role of the human programmer must evolve.

pjmlp · 48m ago
Yes, we are still winning the game; however, don't be content with what is possible today, think about what will be possible a decade from now.

In that regard I am less optimistic.

anhner · 44m ago
I think we will hit a proverbial wall at some point just like with self-driving cars.
pjmlp · 40m ago
Maybe, yet it is working well enough for Waymo, and not so well for those losing their customers to them.

Or for the supermarkets now able to have about half the employees they used to have as cashiers.

Many times the wall is already disruption enough.

pupppet · 17h ago
If an LLM just finds patterns, is it even possible for an LLM to be GOOD at anything? Doesn't that mean at best it will be average?
bitpush · 12h ago
Humans are also almost always operating on patterns. This is why "experience" matters a lot.

Very few people are doing truly cutting edge stuff - we call them visionaries. But most of the time, we're just merely doing what's expected

And yes, that includes this comment. This wasn't creative or an original thought at all. I'm sure hundreds of people have had a similar thought, and I'm probably parroting someone else's idea here. So if I can do it, why can't an LLM?

dgb23 · 11h ago
The times we just operate on patterns are when we code boilerplate or other very commonly written code. There's value in speeding this up, and LLMs help here.

But generally speaking I don't experience programming like that most of the time. There are so many things going on that have nothing to do with pattern matching while coding.

I load up a working model of the running code in my head and explore what it should be doing in a more abstract/intangible way and then I translate those thoughts to code. In some cases I see the code in my inner eye, in others I have to focus quite a lot or even move around or talk.

My mind goes to different places and experiences. Sometimes it's making new connections, sometimes it's processing a bit longer to get a clearer picture, sometimes it re-shuffles priorities. A radical context switch may happen at any time and I delete a lot of code because I found a much simpler solution.

I think that's a qualitative, insurmountable difference between an LLM and an actual programmer. The programmer thinks deeply about the running program and not just the text that needs to be written.

There might be different types of "thinking" that we can put into a computer in order to automate these kinds of tasks reliably and efficiently. But just pattern matching isn't it.

riknos314 · 17h ago
My experience is that LLMs regress to the average of the context they have for the task at hand.

If you're getting average results you most likely haven't given it enough details about what you're looking for.

The same largely applies to hallucinations. In my experience LLMs hallucinate significantly more when at or pushed to exceed the limits of their context.

So if you're looking to get a specific output, your success rate is largely determined by how specific and comprehensive the context the LLM has access to is.

jaccola · 17h ago
Most people (average and below average) can tell when something is above average, even if they cannot create above average work, so using RLHF it should be quite possible to achieve above average.

Indeed it is likely already the case that in training the top links scraped or most popular videos are weighted higher, these are likely to be better than average.

lukan · 17h ago
There are bad patterns and good patterns. But whether a pattern is the right one for a specific task is something different.

And what really matters is, if the task gets reliable solved.

So if they actually could manage this on average with average quality .. that would be a next level gamechanger.

JackSlateur · 15h ago
Yes, AI is basically a random machine aiming for an average outcome.

AI is neat for average people, to produce average code, for average companies.

In a competitive world, using AI is a death sentence.

gumbojuice · 1h ago
I like to use an LLM to produce code for known problems that I don't have memorized.

I memorize very little and tend to spend time reinventing algorithms or looking them up in documentation. Verifying is easy except for the few cases where the LLM produces something really weird. Then I fall back to the docs or to reinventing.

some-guy · 17h ago
The main thing LLMs have helped me with, and always comes back to, tasks that require bootstrapping / Googling:

1) Starting simple codebases
2) Googling syntax
3) Writing bash scripts that utilize Unix commands whose arguments I have never bothered to learn in the first place.

I definitely find time savings with these, but the esoteric knowledge required to work on a 10+ year old codebase is simply too much for LLMs still, and the code alone doesn't provide enough context to do anything meaningful, or even faster than I would be able to do myself.

mywittyname · 12h ago
LLMs are amazing at shell scripting. It's one of those tasks I always half-ass because I don't really know how to properly handle errors and never learned the correct way. But man, Perplexity can poop out a basic shell script in a few seconds with pretty much every edge case I can think of covered.
tcoff91 · 6h ago
It really is the thing they are best at.
AlotOfReading · 17h ago
Unrelated to the LLM discussion, but a hash function is the wrong construction for the accumulator solution. The hashing part increases the probability that A and B have a collision that leads to a false negative here. Instead, you want a random invertible mapping, which guarantees that no two pointers will "hash" to the same value, while distributing the bits. Splitmix64 is a nice one, and I believe the murmurhash3 finalizer is invertible, as well as some of the xorshift RNGs if you avoid the degenerate zero cycle.
antirez · 16h ago
Any Feistel network has the property you stated, actually, and this was one of the approaches I was considering, as I can have the seed as part of the non-linear transformation of the Feistel network. However, I'm not sure this actually decreases the probability of A xor B xor C xor D being accidentally zero, because the problem with pointers is that they may change only in a small part. When you use hashing, the avalanche effect makes this a lot harder, since you are no longer xoring the pointer structure directly.

What I mean is that you are right, assuming we use a transformation that, while still invertible, has an avalanche effect. Btw, in practical terms I doubt there is much difference.

AlotOfReading · 16h ago
You can guarantee that the probability is the theoretical minimum with a bijection. I think that would be 2^-N since it's just the case where everything's on a maximum length cycle, but I haven't thought about it hard enough to be completely certain.

A good hash function intentionally won't hit that level, but it should be close enough not to matter with 64 bit pointers. 32 bits is small enough that I'd have concerns at scale.

decasia · 17h ago
We aren't expecting LLMs to come up with incredibly creative software designs right now, we are expecting them to execute conventional best practices based on common patterns. So it makes sense to me that it would not excel at the task that it was given here.

The whole thing seems like a pretty good example of collaboration between human and LLM tools.

writeslowly · 17h ago
I haven't actually had that much luck with having them output a boring API boilerplate in large Java projects. Like "I need to create a new BarOperation that has to go in a different set of classes and files and API prefixes than all the FooOperations and I don't feel like copy pasting all the yaml and Java classes" but the AI has problems following this. Maybe they work better in small projects.

I actually like LLMs better for creative thinking because they work like a very powerful search engine that can combine unrelated results and pull in adjacent material I would never personally think of.

coffeeismydrug · 14h ago
> Like "I need to create a new BarOperation that has to go in a different set of classes and files and API prefixes than all the FooOperations and I don't feel like copy pasting all the yaml and Java classes" but the AI has problems following this.

To be fair, I also have problems following this.

ehutch79 · 17h ago
Uh, no. I've seen the Twitter posts saying LLMs will replace me. I've watched the YouTube videos saying LLMs will code whole apps from one prompt, but they're light on details or only show the most basic todo app from every tutorial.

We're being told that LLMs are now reasoning, which implies they can make logical leaps and employ creativity to solve problems.

The hype cycle is real and setting expectations that get higher with the less you know about how they work.

prophesi · 17h ago
> The hype cycle is real and setting expectations that get higher with _the less you know about how they work_.

I imagine on HN, the expectations we're talking about are from fellow software developers who at least have a general idea on how LLM's work and their limitations.

bluefirebrand · 16h ago
Right below this is a comment

> you will almost certainly be replaced by an llm in the next few years

So... Maybe not. I agree that Hacker News does have a generally higher quality of contributors than many places on the internet, but it absolutely is not universal among HNers. There are still quite a few posters here who have really bought into the hype, for whatever reason.

zamalek · 15h ago
> hype for whatever reason

"I need others to buy into LLMs in order for my buy-in to make sense," i.e. network effects.[1]

> Most dot-com companies incurred net operating losses as they spent heavily on advertising and promotions to harness network effects to build market share or mind share as fast as possible, using the mottos "get big fast" and "get large or get lost". These companies offered their services or products for free or at a discount with the expectation that they could build enough brand awareness to charge profitable rates for their services in the future.

You don't have to go very far up in terms of higher order thinking to understand what's going on here. For example, think about Satya's motivations for disclosing Microsoft writing 30% of their code using LLMs. If this really was the case, wouldn't Microsoft prefer to keep this competitive advantage secret? No: Microsoft and all the LLM players need to drive hype, and thus mind share, in the hope that they become profitable at some point.

If "please" and "thank you" are incurring huge costs[2], how much is that LLM subscription actually going to cost consumers when the angel investors come knocking, and are consumers going to be willing to pay that?

I think a more valuable skill might be learning how to make do with local LLMs because who knows how many of these competitors will still be around in a few years.

[1]: https://en.wikipedia.org/wiki/Dot-com_bubble#Spending_tenden... [2]: https://futurism.com/altman-please-thanks-chatgpt

danielbln · 15h ago
I wish we'd measure things less against how hyped they are. Either they are useful, or they are not. LLMs are clearly useful (to which extent and with what caveats is up to lively debate).
bgwalter · 17h ago
After the use-after-free hype article I tried CoPilot and it outright refused to find vulnerabilities.

Whenever I try some claim, it does not work. Yes, I know, o3 != CoPilot but I don't have $120 and 100 prompts to spend on making a point.

ldjkfkdsjnv · 17h ago
you will almost certainly be replaced by an llm in the next few years
einpoklum · 17h ago
You mean, as a HackerNews commenter? Well, maybe...

In fact, maybe most of us have been replaced by LLMs already :-)

DrJid · 16h ago
I never quite understand these articles though. It's not about Humans vs. AI.

It's about Humans vs. Humans+AI

and 4/5, Humans+AI > Humans.

darkport · 17h ago
I think this is true for deeply complex problems, but for everyday tasks an LLM is infinitely “better”.

And by better, I don’t mean in terms of code quality because ultimately that doesn’t matter for shipping code/products, as long as it works.

What does matter is speed. And an LLM speeds me up at least 10x.

kweingar · 12h ago
You're making at least a year's worth of pre-LLM progress in 5 weeks?

You expect to achieve more than a decade of pre-LLM accomplishments between now and June 2026?

nevertoolate · 16h ago
How do you measure this?
rel2thr · 17h ago
Antirez is a top 0.001% coder. Don't think this generalizes to human coders at large.
ljlolel · 17h ago
Seriously, he’s one of the best on the planet of course it’s not better than him. If so we’d be cooked.

99% of professional software developers don’t understand what he said much less can come up with it (or evaluate it like Gemini).

This feels a bit like a humblebrag about how well he can discuss with an LLM compared to others vibecoding.

justacrow · 14h ago
Hey, my CEO is saying that LLMs are also top 0.001% coders now, so should at least be roughly equivalent.
elzbardico · 16h ago
I use LLMs a lot, and call me arrogant, but every time I see a developer saying that LLMs will substitute them, I think they are probably shitty developers.
Fernicia · 16h ago
If it automates 1/5th of your work, then what's unreasonable about thinking that your team could be 4 developers instead of 5?
AstroBen · 8h ago
If software costs 80% as much to write, what's unreasonable about thinking that more businesses would integrate more of it, hiring more developers?
aschobel · 8h ago
Bingo. This additional throughput could be used to create more polished software. What happens in a free market; would your competitor fall behind or will they try to match your polish?
archagon · 12h ago
This just feels like another form of the mythical man month argument.
codr7 · 7h ago
I would say you thought about this solution because you are creative, something a computer will never be no matter how much data you throw at it.

How did it help, really? By telling you your idea was no good?

A less confident person might have given up because of the feedback.

I just can't understand why people are so excited about having an algorithm guessing for them. Is it the thrill when it finally gets something right?

unsupp0rted · 1h ago
- LLMs are going to make me a 100x more valuable coder? Of course they will, no doubt about it.

- LLMs are going to be 100x more valuable than me and make me useless? I don't see it happening. Here's 3 ways I'm still better than them.

motorest · 1h ago
There is also another side to the mass adoption of LLMs in software engineering jobs: they can quite objectively worsen the output of human coders.

There is a class of developers who are blindly dumping the output of LLMs into PRs without paying any attention to what they are doing, let alone review the changes. This is contributing to introducing accidental complexity in the form of bolting on convoluted solutions to simple problems and even introducing types in the domain model that make absolutely no sense to anyone who has a passing understanding of the problem domain. Of course they introduce regressions no one would ever do if they wrote things by hand and tested what they wrote.

I know this, because I work with them. It's awful.

These vibecoders force the rest of us to waste even more time reviewing their PRs. They are huge PRs that touch half the project for even the smallest change; they build and pass automated tests, but they enshittify everything. In fact, the same LLMs used by these vibecoders start to struggle to handle the project after these PRs are sneaked in.

It's tiring and frustrating.

I apologize for venting. It's just that in this past week I lost count of the number of times I had these vibecoders justifying shit changes going into their PRs as "but Copilot did this change", as if that makes them any good. I mean, a PR to refactor the interface of a service also sneaks in changes to the connection string, and they just push the change?

agumonkey · 17h ago
There's also the subset of devs who are just bored; LLMs will end up as an easier Stack Overflow, and if the solution is not one script away, then you're back to square one. I've already had a few instances of "well, uhm, ChatGPT told me what you said, basically".
kaycey2022 · 1h ago
It doesn't matter. The hiring of cost center people like engineers depends on the capital cycle. Hiring peaked when money and finance was the cheapest. Now it's not anymore. In the absence of easy capital, hiring will plummet.

Another factor is the capture of market sectors by Big Co. When buyers can only approach some for their products/services, the Big Co can drastically reduce quality and enshittify without hurting the bottom line much. This was the big revelation when Elon gutted Twitter.

And so we are in for interesting times. On the plus side, it is easier than ever to create software and distribute it. Hiring doesn't matter if I can get some product sense and make some shit worth buying.

stabbles · 2h ago
The trick is much like Zobrist hashing from chess programming; I'm sure the LLM has devoured chessprogramming.org during training.
headelf · 16h ago
What do you mean "Still"? We've only had LLMs writing code for 1.5 years... at this rate it won't be long.
cess11 · 15h ago
More like five years. It's been around for much longer than a lot of people feel it has for some reason.
bachmeier · 12h ago
I suspect humans will always be critical to programming. Improved technology won't matter if the economics isn't there.

LLMs are great as assistants. Just today, Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.

But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises.

Automation killed tons of manufacturing jobs, and we're seeing something similar in programming, but keep in mind that the number of people still working in manufacturing is 60% of the peak, and those jobs are much better than the ones in the 1960s and 1970s.

noslenwerdna · 12h ago
Sure, it's just that the era of super high paying programming jobs may be over.

And also, manufacturing jobs have greatly changed. And the effect is not even, I imagine. Some types of manufacturing jobs are just gone.

bachmeier · 12h ago
> the era of super high paying programming jobs may be over.

Probably, but I'm not sure that had much to do with AI.

> Some types of manufacturing jobs are just gone

The manufacturing work that was automated is not exactly the kind of work people want to do. I briefly did some of that work. Briefly because it was truly awful.

monocularvision · 11h ago
That might be the case. Perhaps it lowers the difficulty level so more people can do it, which therefore puts downward pressure on wages.

Or… it still requires similar education and experience but programmers end up so much more efficient they earn _more_.

Hard to say right now.

twodave · 11h ago
If you stick with the same software ecosystem long enough you will collect (and improve upon) ways of solving classes of problems. These are things you can more or less reproduce without thinking too much or else build libraries around. An LLM may or may not become superior at this sort of exercise at some point, and might or might not be able to reliably save me some time typing. But these are already the boring things about programming.

So much of it is exploratory, deciding how to solve a problem from a high level, in an understandable way that actually helps the person who it’s intended to help and fits within their constraints. Will an LLM one day be able to do all of that? And how much will it cost to compute? These are the questions we don’t know the answer to yet.

ntonozzi · 17h ago
If you care that much about having correct data, you could just do a SHA-256 of the whole thing, or an HMAC. It would probably be really fast. If you don't care much, you can just do a MurmurHash of the serialized data. You don't really need to verify data structure properties if you know the serialized data is correct.
vouaobrasil · 17h ago
The question is, for how long?
spion · 16h ago
Vibe-wise, it seems like progress is slowing down and recent models aren't substantially better than their predecessors. But it would be interesting to take a well-trusted benchmark and plot max_performance_until_date(foreach month). (Too bad aider changed recently and there aren't many older models; https://aider.chat/docs/leaderboards/by-release-date.html has not been updated in a while with newer stuff, and the new benchmark doesn't have the classic models such as 3.5, 3.5 turbo, 4, claude 3 opus)
vouaobrasil · 16h ago
I think that we can't expect continuous progress either, though. Often in computer science it's more discrete, and unexpected. Computer chess was basically stagnant until one team, even the evolution of species often behaves in a punctuated way rather than as a sum of many small adaptations. I'm much more interested (worried) of what the world will be like in 30 years, rather than in the next 5.
spion · 14h ago
Its hard to say. Historically new discoveries in AI often generated great excitement and high expectations, followed by some progress, then stalling, disillusionment and AI winter. Maybe this time it will be different. Either way what was achieved so far is already a huge deal.
jppittma · 17h ago
It's really gonna depend on the project. When my hobby project was greenfield, the AI was way better than I am. It was (still is) more knowledgable about the standards that govern the field and about low level interface details. It can shit out a bunch of code that relies on knowing these details in seconds/minutes, rather than hours/days.

Now that the project has grown and all that stuff is hammered out, it can't seem to consistently write code that compiles. It's very tunnel-visioned on the specific file it's generating, rather than where that fits in the context of what we're building and how we're building it.

jonator · 16h ago
We can slightly squeeze more juice out of them with larger projects by providing better context, docs, examples of what we want, background knowledge, etc.

Like people, LLMs don't know what they don't know (about your project).

sixQuarks · 17h ago
Again, the question is for how long
kilroy123 · 3h ago
My crackpot guess is ~5 years. The incentives are just too damn high to not keep innovating in the space.

We'll find new ways to push the tech.

sixQuarks · 17h ago
Exactly! We’ve been seeing more and more posts like this, saying how AI will never take developer jobs or will never be as good as coders. I think it’s some sort of coping mechanism.

These posts are gonna look really silly in the not too distant future.

I get it, spending countless hours honing your craft and knowing that AI will soon make almost everything you learned useless is very scary.

sofal · 17h ago
I'm constantly disappointed by how little I'm able to delegate to AI after the unending promises that I'll be able to delegate nearly 100% of what I do now "in the not too distant future". It's tired impatience and merited skepticism that you mistake for fear and coping. Just because people aren't on the hype train with you doesn't mean they're afraid.
vouaobrasil · 17h ago
Personally, I am. Many of the unusual skills I have have already been taken over by AI. That's not to say I think I'm in trouble, but I think it's sad that I can't apply some of the skills I learned just a couple of years ago, like audio editing, because AI does it now. Neither do I want to work as an AI operator, which I find boring and depressing. So I've just moved on to something else, but it's still discouraging.

Also, so many people said the same thing about chess when the first chess programs came out. "It will never beat an international master." Then, "it will never beat a grandmaster." And Kasparov said, "it would never beat me or Karpov."

Look where we are today. Can humanity adapt? Yes, probably. But that new world IMO is worse than it is today, rather lacking in dignity I'd say.

sofal · 16h ago
I don't acquire skills and apply them just to be able to apply them. I use them to solve problems and create things. My learned skills for processing audio are for the purpose of getting the audio sounding the way I want it to sound. If an AI can do that for me instead, that's amazing and frees up my time to do other things or do a lot more different audio things. None of this is scary to me or impacts my personal dignity. I'm actually constantly wishing that AI could help me do even more. Honestly I'm not even sure what you mean by AI doing audio editing, can I get some of that? That is some grunt work I don't need more of.
vouaobrasil · 16h ago
I acquire skills to enjoy applying them, period. I'm less concerned about the final result than about the process to get there. That's the difference between technical types and artist types, I suppose.

Edit: I also should say, we REALLY should distinguish between tasks that you find enjoyable and tasks you find just drudgery to get where you want to go. For you, audio editing might be a drudgery but for me it's enjoyable. For you, debugging might be fun but I hate it. Etc.

But the point is, if AI takes away everything which people find enjoyable, then no one can pick and choose to earn a living on those subset of tasks that they find enjoyable because AI can do everything.

Programmers tend to assume that AI will just take the boring tasks, because high-level software engineering is what they enjoy and unlikely to be automated, but there's a WHOLE world of people out there who enjoy other tasks that can be automated by AI.

betenoire · 16h ago
I'm with you, I enjoy the craftsmanship of my trade. I'm not relieved that I may not have to do it in the future, I'm bummed that it feels like something I'm good at, and is/was worth something, is being taken away.

I realize how lucky I am to even have a job that I thoroughly enjoy, do well, and get paid well for. So I'm not going to say "It's not fair!", but ... I'm bummed.

sofal · 16h ago
I can't tell whether I'm supposed to be the technical type or the artist type in this analogy. In my music making hobby, I'd like a good AI to help me mix, master, or any number of things under my direction. I'm going to be very particular about every aspect of the beat, but maybe it could suggest some non-boring chord progressions and I'll decide if I like one of them. My goal as an artist is to express myself, and a good AI that can faithfully take directions from me would help.

As a software engineer, I need to solve business problems, and much of this requires code changes, testing, deployments, all that stuff we all know. Again, if a good AI could take on a lot of that work, maybe that means I don't have to sit there in dependency hell and fight arcane missing symbol errors for the rest of my fucking career.

vouaobrasil · 16h ago
> Again, if a good AI could take on a lot of that work, maybe that means I don't have to sit there in dependency hell and fight arcane missing symbol errors for the rest of my fucking career.

My argument really had nothing to do with you and your hobby. It was that AI is signficantly modifying society so that it will be hard for people to do what they like to make money, because AI can do it.

If AI can solve some boring tasks for you, that's fine but the world doesn't revolve around your job or your hobby. I'm talking about a large mass of people who enjoy doing different things, who once were able to do those things to make a living, but are finding it harder to do so because tech companies have found a way to do all those things because they could leverage their economies of scale and massive resource pools to automate all that.

You are in a privileged position, no doubt about it. But plenty of people are talented and skilled at a certain sort of creative work whose main thrust can be automated. It's not like your cushy job, where you can automate part of it and simply become more efficient; rather, these people just won't have a job.

It's amazing how you can be so myopic to only think of yourself and what AI can do for you when you are probably in the top 5% of the world, rather than give one minute to think of what AI is doing to others who don't have the luxuries you have.

noslenwerdna · 12h ago
Everyone should do the tasks where they provide unique value. You could make the same arguments you just made for recorded music, automobiles, computers in general in fact.
vouaobrasil · 10h ago
The difference, though, is that AI does it much faster and with far fewer central sources providing the service. The speed and magnitude matter as well, just like a crash at 20 km/h is different from a crash at 100 km/h. And those other inventions WERE also harmful. Cars -> global warming.
danielbln · 15h ago
You can still do those tasks, but the market value will drop. Automatable work should always be automated, because we best focus on things that can't be automated yet and those gain more market value. Supply and demand and all that. I do hope we have a collective plan about what we do when everything is automated at some point. Some form of UBI?
suddenlybananas · 15h ago
What do you mean that AI can do audio editing? I don't think all sound engineers have been replaced.
sixQuarks · 16h ago
Yes. I know what you’re referring to, but you can’t ignore the pace of improvement. I think within 2-3 years we will have AI coding that can do anything a senior level coder can do.
foobar83 · 15h ago
Nobody knows what the future holds, including you.
vouaobrasil · 14h ago
That is true, which is why we should be cautious instead of careless.
sixQuarks · 10h ago
Yes, but we can see current progress and extrapolate into the future. I give it 2/3 years before AI can code as well as a senior level coder
foobar83 · 2h ago
Recent benchmarks show that improvements in the latest models are beginning to slow down; what makes you so sure there's another breakthrough coming?
Draiken · 1h ago
Copium.

People that bet on this bubble have to keep it as big as possible for as long as possible.

galaxyLogic · 11h ago
Coding is not like multiplication. You can teach kids the multiplication table, or you can give them a calculator and both will work. With coding the problem is the "spec" is so much more complicated than just asking what is 5 * 7.

Maybe the way forward would be to invent better "specification languages" that are easy enough for humans to use, then let the AI implement the specification you come up with.

prmph · 17h ago
There's something fundamental here.

There is a principle (I forget where I encountered it) that it is not code itself that is valuable, but the knowledge of a specific domain that an engineering team develops as they tackle a project. So code itself is a liability, but the domain knowledge is what is valuable. This makes sense to me and matches my long experience with software projects.

So, if we are entrusting coding to LLMs, how will that value develop? And if we want to use LLMs but at the same time develop the domain acumen, that means we would have to architect things and hand them over to LLMs to implement, thoroughly check what they produce, and generally guide them carefully. In that case they are not saving us much time.

jonator · 16h ago
I believe it will raise the standard of what is valuable. Now that LLMs can handle what we consider the "mundane" parts of building a project (boilerplate), humans can dedicate focused effort to the higher-impact areas of innovation and problem solving. As LLMs get better, this bar simply continues to rise.
catigula · 17h ago
Working with Claude 4 and o3 recently shows me just how fundamentally LLMs haven't really solved the core problems such as hallucinations and weird refactors/patterns to force success (i.e. if account not found, fallback to account id 1).
SKILNER · 9h ago
There's a lot of resistance to AI amongst the people in this discussion, which is probably to be expected.

A chunk of the objections indicate people trying to shoehorn in their old way of thinking and working.

I think you have to experiment and develop some new approaches to remove the friction and get the benefit.

throwaway439080 · 9h ago
Of course they are. The interesting thing isn't how good LLMs are today, it's their astonishing rate of improvement. LLMs are a lot better than they were a year ago, and light years ahead of where they were two years ago. Where will they be in five years?
hiatus · 9h ago
Reminds me of the 90s when computer hardware moved so fast. I wonder where the limit is this time around.
AstroBen · 15h ago
Better than LLMs.. for now. I'm endlessly critical of the AI hype but the truth here is that no-one has any idea what's going to happen 3-10 years from now. It's a very quickly changing space with a lot of really smart people working on it. We've seen the potential

Maybe LLMs completely trivialize all coding. The potential for this is there

Maybe progress slows to a snails pace, the VC money runs out and companies massively raise prices making it not worth it to use

No one knows. Just sit back and enjoy the ride. Maybe save some money just in case

jonator · 17h ago
I think we will increasingly be orchestrators, like at a symphony. Previously, most humans were required to be on the floor playing the individual instruments, but now, with AI, everyone can be their own composer.
nixpulvis · 16h ago
The number one use case for AI for me as a programmer is still help finding functions which are named something I didn't expect as I'm learning a new language/framework/library.

Doing the actual thinking is generally not the part I need too much help with. Though it can replace googling info in domains I'm less familiar with. The thing is, I don't trust the results as much and end up needing to verify it anyways. If anything AI has made this harder, since I feel searching the web for authoritative, expert information has become harder as of late.

taormina · 16h ago
My problem with this usage is that the LLMs seem equally likely to make up a function they wish existed. When questioned about the seeming-too-convenient method they will usually admit to having made it up on the spot. (This happens a lot in Flutter/Flame land, I'm sure it's better at something more mainstream like Python?) That being said, I do agree that using it as supplemental documentation is one of the better usecases I have for it.
vjvjvjvjghv · 17h ago
I think we need to accept that in the not-too-distant future LLMs will be able to do most of the mundane tasks we have to do every day. I don't see why an AI can't set up Kubernetes, caching layers, testing, databases, scaling, check for security problems and so on. These things aren't easy, but I think they are still very repetitive and therefore can be automated.

There will always be a place for really good devs but for average people (most of us are average) I think there will be less and less of a place.

zonethundery · 16h ago
No doubt the headline's claim is true, but Claude just wrote a working MCP serving up the last 10 years of my employer's work product. For $13 in api credits.

While technically capable of building it on my own, development is not my day job and there are enough dumb parts of the problem my p(success) hand-writing it would have been abysmal.

With rose-tinted glasses on, maybe LLM's exponentially expand the amount of software written and the net societal benefit of technology.

procaryote · 1h ago
If your own code would have been abysmal, how can you tell if the Claude-generated code is any good?
sagarpatil · 6h ago
So your sample size is 1 task and 1 LLM? I would recommend trying o3, opus 4 (API) with web search enabled.
marcosno · 12h ago
LLMs can be very creative, when pushed. In order to find a creative solution, like antirez needed, there are several tricks I use:

Increase the temperature of the LLMs.

Ask several LLMs the same question, each several times, with tiny variations. Then collect all the answers and do a second/third round asking each LLM to review all of the collected answers and improve on them.

Add random constraints, one constraint per question. For example: can you do this with 1 bit per X? Do this in O(n). Do this using linked lists only. Do this with only 1 KB of memory. Do this while splitting the task across 1000 parallel threads, etc.

This usually kicks the LLM out of its comfort zone and into creative solutions.

dwringer · 12h ago
Definitely a lot to be said for these ideas, even just that it helps to start a fresh chat and ask the same question in a better way a few times (using the quality of response to gauge what might be "better"). I have found if I do this a few times and Gemini strikes out, I've manually optimized the question by this point that I can drop it into Claude and get a good working solution. Conversely, having a discussion with the LLM about the potential solution, letting it hold on to the context as described in TFA, has in my experience caused the models to pretty universally end up stuck in a rut sooner or later and become counterproductive to work with. Not to mention that way eats up a ton of api usage allotment.
ww520 · 9h ago
The value of LLMs is as a better Stack Overflow. It's much better than search now because it's not populated with all the crap that has seeped in over time.
osigurdson · 8h ago
This is similar to my usage of LLMs. I use Windsurf sometimes but more often it is more of a conversation about approaches.
ants_everywhere · 10h ago
I'm increasingly seeing this as a political rather than technical take.

At this point I think people who don't see the value in AI are willfully pulling the wool over their own eyes.

dbacar · 16h ago
I disagree—'human coders' is a broad and overly general term. Sure, Antirez might believe he's better than AI when it comes to coding Redis internals, but across the broader programming landscape—spanning hundreds of languages, paradigms, and techniques—I'm confident AI has the upper hand.
nthingtohide · 16h ago
Do you want to measure antirez and AI on a spider diagram, the kind generally used to evaluate employees? Are you ignoring why society opted for division of work and specialization?
dbacar · 16h ago
They are not investing billions in it so a high schooler can do his term paper with it; it is already much more than a generalist. It might be like a very good sidekick for now, but that is not the plan.
EpicEng · 16h ago
What does the number of buzzwords and frameworks on a resume matter? Engineering is so much more than that it's not even worth mentioning. Your comparison is on the easiest aspect of what we do.

Unless you're a web dev. Then you're right and will be replaced soon enough. Guess why.

dbacar · 16h ago
Not everyone builds Redis at home/work. So you do the math. And now Antirez himself is feeding the beast by himself.
tonyhart7 · 11h ago
I think it also depends on the model, of course.

A general LLM would not be as good as an LLM specialized for coding; in this case the Google DeepMind team may have something better than Gemini 2.5 Pro.

buremba · 11h ago
If the human here is the creator of Redis, probably not.
h4kunamata · 3h ago
This!!!!

An LLM is only as good as the material it is trained on; the same applies to AI in general, and they are not perfect.

Perplexity AI did assist me in getting into Python from zero to having my code tested with 94% coverage and no vulnerabilities (per scanning tools). Google Gemini is dogshit.

Blindly trusting code generated by an LLM/AI is a whole other beast, and I am seeing developers basically copy/pasting it into company code. People are using these sources as the truth and not as a complementary tool to improve their productivity.

kurofune · 16h ago
The fact that we are debating this topic at all is indicative of how far LLMs have come in such a short time. I find them incredibly useful tools that vastly enhance my productivity and curiosity, and I'm really grateful for them.
burningion · 17h ago
I agree, but I also didn’t create redis!

It's a tough bar if LLMs have to be past antirez-level intelligence :)

failrate · 12h ago
LLMs are trained on the corpus of existing software source code. Most software source code is just north of unworkable garbage. Garbage in, garbage out.
lodovic · 17h ago
Sure, human coders will always be better than just AI. But an experienced developer with AI tops both. Someone said, your job won't be taken by AI, it will be taken by someone who's using AI smarter than you.
bluefirebrand · 17h ago
> Someone said, your job won't be taken by AI, it will be taken by someone who's using AI smarter than you.

"Your job will be taken by someone who does more work faster/cheaper than you, regardless of quality" has pretty much always been true

That's why outsourcing happens too

palavrov · 17h ago
From my experience, AI for coders is a multiplier of the coder's skills. It will allow you to solve problems (or add bugs) faster. But so far it will not make you a better coder than you are.
kristopolous · 17h ago
Correct. LLMs are a thought management tech. Stupider ones are fine because they're organizing tools with a larger library of knowledge.

Think about it and tell me you use it differently.

fHr · 10h ago
Yes, of course they are, but MBA-regard management gets told by McKinsey/Big4 that AI could save them millions and that they should let people go already since AI can do their work. Whether that's true right now doesn't matter; see the job market.
janalsncm · 17h ago
Software engineering is in the painful position of needing to explain the value of their job to management. It sucks because now we need to pull out these anecdotes of solving difficult bugs, with the implication that AI can’t handle it.

We have never been good at confronting the follies of management. The Leetcode interview process is idiotic but we go along with it. Ironically LC was one of the first victims of AI, but this is even more of an issue for management that thinks SWEs solve Leetcode problems all day.

Ultimately I believe this is something businesses will take a cycle to figure out by failing. When businesses figure out that 10 good engineers + AI always beat 5 + AI, it will become table stakes rather than something that replaces people.

Your competitor who didn’t just fire a ton of SWEs? Turns out they can pay for Cursor subscriptions too, and now they are moving faster than you.

foobarian · 16h ago
I find LLMs a fantastic frontend to Stack Overflow. But I agree with OP that it's not an apples-to-apples replacement for the human agent.
revskill · 5h ago
The funniest thing an LLM did to me was fix the unit test so it passed instead of fixing the code. Basically, until an LLM has embedded common-sense knowledge, it can't be trusted.
Poortold · 12h ago
For coding Playwright automation it has use cases, especially if you template out function patterns. Though I never use it to write logic, as AI is just ass at that. If I wanted a shitty if-else chain I'd ask the intern to code it.
shayanbahal · 7h ago
Human coders utilizing LLMs are better
rubit_xxx17 · 8h ago
Gemini may be fine for writing a complex function, but I can't stand to use it day to day. Claude 4 is my go-to atm.
callamdelaney · 16h ago
LLMs will never be better than humans on the basis that LLMs are just a shitty copy of human code.
danielbln · 15h ago
I think they can be an excellent copy of human code. Are they great at novel out-of-training-distribution tasks? Definitely not, they suck at them. Yet I'd argue that most problems aren't novel, at most they are some recombination of prior problems.
jbellis · 16h ago
But Human+Ai is far more productive than Human alone, and more fun, too. I think antirez would agree, or he wouldn't bother using Gemini.

I built Brokk to maximize the ability of humans to effectively supervise their AI minions. Not a VS code plugin, we need something new. https://brokk.ai

orangebread · 14h ago
It's that time again where a dev writes a blog post coping.
uticus · 17h ago
same as https://news.ycombinator.com/item?id=44127956, also on HN front page
devmor · 10h ago
I have been evaluating LLMs for coding use in and out of a professional context. I’m forbidden to discuss the specifics regarding the clients/employers I’ve used them with due to NDAs, but my experience has been mostly the same as my private use - that they are marginally useful for less than one half of simple problem scenarios, and I have yet to find one that has been useful for any complex problem scenarios.

Neither of these issues is particularly damning on its own, as improvements to the technology could change this. However, the reason I have chosen to avoid them is unlikely to change; that they actively and rapidly reduce my own willingness for critical thinking. It’s not something I noticed immediately, but once Microsoft’s study showing the same conclusions came out, I evaluated some LLM programming tools again and found that I generally had a more difficult time thinking through problems during a session in which I attempted to rely on said tools.

insane_dreamer · 6h ago
Coders may want to look at translators for an idea of what might happen.

Translation software has been around for a couple of decades. It was pretty shitty. But about 10 years ago it started to get to the point where it could translate relatively accurately. However, it couldn't produce text that sounded like it was written by a human. A good translator (and there are plenty of bad ones) could easily outperform a machine. Their jobs were "safe".

I speak several languages quite well and used to do freelance translation work. I noticed that as the software got better, you'd start to see companies who instead of paying you to translate wanted to pay you less to "edit" or "proofread" a document pre-translated by machine. I never accepted such work because sometimes it took almost as much work as translating it from scratch, and secondly, I didn't want to do work where the focus wasn't on quality. But I saw the software steadily improving, and this was before ChatGPT, and I realized the writing was on the wall. So I decided not to become dependent on that for an income stream, and moved away from it.

Then LLMs came out, and they now produce text that sounds like it was written by a native speaker (in major languages). Sure, it's not going to win any literary awards, but the vast, vast majority of translation work out there is commercial, not literature.

Several things have happened:
1) There's very little translation work available compared to before, because now you can pay only a few people to double-check machine-generated translations (that are fairly good to start with);
2) Many companies aren't using humans at all, as the translations are "good enough" and a few mistakes won't matter that much;
3) The work that is available is high-volume and uninteresting, no longer a creative challenge (which is why I did it in the first place);
4) There is downward pressure on translation rates (which are typically per word); and
5) Very talented translators (who are more like writers/artists) are still in demand for literary works or highly creative work (i.e., a major marketing campaign), so the top 1% of translators still have their jobs. Also, more niche language pairs for which LLMs aren't trained will be safe.

It will continue to exist as a profession, but diminishing, until it'll eventually be a fraction of what it was 10 or 15 years ago.

(This is specifically translating written documents, not live interpreting which isn't affected by this trend, or at least not much.)

0points · 2h ago
> When LLMs came out, and they now produce text that sounded like it was written by a native speaker (in major languages).

While the general syntax of the language seems to be somewhat correct now, the LLMs still don't know anything about those languages and keep mis-translating words due to their inherently English-centric design. A whole lot of concepts don't even exist in English, so these translation oracles can just never do it successfully.

If I read a few minutes of LLM-translated text, there are always a couple of such errors.

I notice younger people don't see these errors because of their weaker language skills, and the LLMs reinforce their incorrect understanding.

I don't think this problem will go away as long as we keep pushing this inferior tech, but instead the languages will devolve to "fix" it.

Languages will morph into a 1-to-1 mapping of English, and all the cultural nuances will be lost to time.

pknerd · 17h ago
Let's not forget that LLMs can't give a solution they have not experienced themselves
willmarch · 16h ago
This is objectively not true.
nssnsjsjsjs · 10h ago
So has an LLM ever sweated to debug a production issue, gotten to the bottom of it, realized it's worth having more unit tests, and come to value that? So that when you ask it to write code, it is opinionated and always creates a test to go with it?
AnimalMuppet · 17h ago
OK. (I mean, it was an interesting and relevant question.)

The other, related question is, are human coders with an LLM better than human coders without an LLM, and by how much?

(habnds made the same point, just before I did.)

vertigolimbo · 16h ago
Here's the answer for you. TL;DR: a 15% performance increase, in some cases up to a 40% increase, in others a 5% decrease. It all depends.

Source: https://www.thoughtworks.com/insights/blog/generative-ai/exp...

horns4lyfe · 8h ago
Writing about AI is missing the forest for the trees. The US software industry will be wholesale destroyed (and therefore global software will be too) by offshoring.
RayMan1 · 18h ago
of course they are.
CivBase · 11h ago
In my experience, some of the hardest parts of software development are figuring out exactly what the stakeholder actually needs. One of the talents a developer needs is the ability to pry for that information. Chatbots simply don't do that, which I imagine has a significant impact on the usability of their output.
anjc · 13h ago
Gemini gives instant, adaptive, expert solutions to an esoteric and complex problem, and commenters here are still likening LLMs to junior coders.

Glad to see the author acknowledges their usefulness and limitations so far.

ModernMech · 14h ago
The other day an LLM told me that in Python, you have to name your files the same as the class name, and that you can only have one class per file. So... yeah, let's replace the entire dev team with LLMs, what could go wrong?
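
For the record, a minimal sketch (with a made-up file name, say zoo.py) shows how little of that is true; Python doesn't tie file names to class names and is perfectly happy to hold several classes in one module:

```python
# zoo.py -- hypothetical file name; nothing requires it to match any class name.

class Dog:
    def speak(self) -> str:
        return "woof"

class Cat:  # a second class in the same file, which Python allows without complaint
    def speak(self) -> str:
        return "meow"

if __name__ == "__main__":
    print(Dog().speak(), Cat().speak())  # prints: woof meow
```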
zb3 · 16h ago
Speak for yourself..
3cats-in-a-coat · 16h ago
"Better" is relative to context. It's a multi-dimensional metric flattened to a single comparison. And humans don't always win that comparison.

LLMs are faster, and when the task can be synthetically tested for correctness, and you can build up to it heuristically, humans can't compete. I can't spit out a full game in 5 minutes, can you?
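
A rough sketch of the loop I mean (propose_solution here is a hypothetical stand-in for an LLM call, and passes_checks stands in for a real test suite): generate a candidate, run it against an automated check, and retry until it passes.

```python
import random

def passes_checks(candidate: int) -> bool:
    # Stand-in for an automated test suite: "correct" here just means divisible by 7.
    return candidate % 7 == 0

def propose_solution() -> int:
    # Hypothetical stand-in for an LLM call; it simply guesses a number.
    return random.randint(0, 1000)

# Generate-and-test loop: keep requesting candidates until one passes the checks.
candidate = propose_solution()
while not passes_checks(candidate):
    candidate = propose_solution()

print("accepted candidate:", candidate)
```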

LLMs are also cheaper.

LLMs are also obedient, don't get sick, and don't sleep.

Humans are still better by other criteria. But none of this matters. All disruptions start from the low end, and climb from there. The climbing is rapid and unstoppable.

varispeed · 17h ago
Looks like this pen is not going to replace the artist after all.
fspoto98 · 14h ago
Yes, I agree :D
65 · 15h ago
AI is good for people who have given up, who don't give a shit about anything anymore.

You know, those who don't care about learning and solving problems, gaining real experience they can use to solve problems even faster in the future, faster than any AI slop.

oldpersonintx2 · 17h ago
but their rate of improvement is like 1000x human devs, so you have to wonder what the shot clock says for most working devs
hello_computer · 8h ago
Corporations have many constraints—advertisers, investors, employees, legislators, journalists, advocacy groups. So many “white lies” are baked into these models to accommodate those constraints, nerfing the model. It is only a matter of time before hardware brings this down to the hobbyist level—without those constraints—giving the present methods their first fair fight; while for now, they are born lobotomized. Some of the “but, but, but…”s we see here daily to justify our jobs are not going to hold up to a non-lobotomized LLM.
chuckreynolds · 17h ago
for now. (i'm not a bot. i'm aware however a bot would say this)
gxs · 12h ago
Argh people are insufferable about this subject

This stuff is still in its infancy; of course it's not perfect

But it's already USEFUL and it CAN do a lot of stuff - just not all types of stuff, and it can still mess up the stuff that it can do

It's that simple

The point is that over time it'll get better and better

Reminds me of self-driving cars, or even just general automation back in the day - the complaint was always that a human could do it better, and at some point those people just went away because it stopped being true

Another example is automated mail sorting by the post office. The gripe was always that humans will always be able to do it better - true, but in the meantime the post office reduced the facilities where humans did this to just one

habnds · 17h ago
seems comparable to chess where it's well established that a human + a computer is much more skilled than either one individually
bgwalter · 17h ago
This was the Centaur hypothesis in the early days of chess programs and it hasn't been true for a long time.

Chess programs of course have a well defined algorithm. "AI" would be incapable of even writing /bin/true without having seen it before.

It certainly wouldn't have been able to write Redis.

NitpickLawyer · 16h ago
> This was the Centaur hypothesis in the early days of chess programs and it hasn't been true for a long time.

> Chess programs of course have a well defined algorithm.

Ironically, that also "hasn't been true for a long time". The best chess engines humans have written with "defined algorithms" were bested by RL engines (AlphaZero) a long time ago. The best of the best are now NNUE + algorithms (latest Stockfish). And even then, NN-based engines (Leela0) can occasionally take some games from Stockfish. NNs are scarily good. And the bitter lesson is bitter for a reason.

bgwalter · 16h ago
No, the AlphaZero papers used an outdated version of Stockfish for comparison and have always been disputed.

Stockfish NNUE was announced to be 80 ELO higher than the default. I don't find it frustrating. NNs excel at detecting patterns in a well-defined search space.

Writing evaluation functions is tedious. It isn't a sign of NN intelligence.

hatefulmoron · 17h ago
I don't think that's been true for a while now -- computers are that much better.
vjvjvjvjghv · 17h ago
Can humans really give useful input to computers? I thought we had reached a state where computers do stuff no human can understand and will crush human players.