Nobody knows how to build with AI yet

147 points by Stwerner | 158 comments | 7/19/2025, 3:45:01 PM | worksonmymachine.substack.com ↗

Comments (158)

karel-3d · 4h ago
Reading articles like this feels like being in a different reality.

I don't work like this, I don't want to work like this and maybe most importantly I don't want to work with somebody who works like this.

Also I am scared that any library that I am using through the myriad of dependencies is written like this.

On the other hand... if I look at this as some alternate universe where I don't need to directly or indirectly touch any of this... I am happy that it works for these people? I guess? Just keep it away from me

weitendorf · 3h ago
I've been working on AI dev tools for a bit over a year and I don't love using AI this way either. I mostly use it for boilerplate, ideas, or to ask questions about error messages. But I've had a very open mind about it ever since I saw it oneshotting what I saw as typical Google Cloud Functions tasks (glue together some APIs, light http stuff) a year ago.

I think in the last month we've entered an inflection point with terminal "agents" and new generations of LLMs trained on their previously spotty ability to actually do the thing. It's not "there" yet, and results depend on so many factors like the size of your codebase, how well-represented that kind of stuff is in its training data, etc., but you really can feed these things junior-sized tickets and send them off expecting a PR to hit your tray pretty quickly.

Do I want the parts of my codebase with the tricky, important secret sauce to be written that way? Of course not, but I wouldn't give them to most other engineers either. A 5-20 person army of ~interns-newgrads is something I can leverage for a lot of the other work I do. And of course I still have to review the generated code, because it's ultimately my responsibility, but I prefer that over having to think about http response codes for my CRUD APIs. It gives me more time to focus on L7 load balancing and cluster discovery and orchestration engines.

stillsut · 2h ago
> Just keep it away from me

I'm reminded of teaching bootcamp software engineering, where on day #1 we go through simple git workflows, and it seems very intimidating to students and they don't understand the value. Which is fair enough, because git has a steep learning curve and you need to use it practically to start picking it up.

I think this might be analogous to the shift going on with AI-generated and agent-generated coding, where you're introducing an unfamiliar tool with a steep learning curve, and many people haven't yet seen the "why?" behind its value.

Anyways, I'm 150 commits into a vibe coding project that's still standing strong. If you're curious as to how this can work, you can see all the prompts and the solutions in this handy markdown I've created: https://github.com/sutt/agro/blob/master/docs/dev-summary-v1...

fragmede · 1h ago
To the article's point, I built my own version of your agro tool that I use to manage my own git worktrees. Even if I had known about your project, I still would have built my own, because if I build it (with LLM assistance, obvs) then I get to design it for myself.

Looking at other industries, music production is probably the one to look at. What was once the purview of record labels with recording studios that cost a million dollars to outfit is now a used MacBook and, like, $1,000 of hardware/software. The music industry has changed dramatically as a result of the march of technology, and so will software: writing software will go the way of the musician. What used to be a middle-class job as a trumpet player in NYC before the advent of records is now only a hobby except for the truly elite-level practitioners.

quantiq · 3h ago
This has to be someone working solely on personal projects right? Because I don't know anyone who actually works like this and frequently the code that AI will spit out is actually quite bad.
lordnacho · 3h ago
But you also can't not swim with the tide. If you drove a horse-buggy 100 years ago, it was probably worth your while to keep your eye on whether motor-cars went anywhere.

I was super skeptical about a year ago. Copilot was making nice predictions, that was it. This agent stuff is truly impressive.

bloppe · 3h ago
Am I the only one who has to constantly tell Claude and Gemini to stop making edits to my codebase because they keep messing things up and breaking the build like ten times in a row, duplicating logic everywhere, etc.? I keep hearing about how impressive agents are. I wish they could automate me out of my job faster.
Benjammer · 3h ago
Are you paying for the higher end models? Do you have proper system prompts and guidance in place for proper prompt engineering? Have you started to practice any auxiliary forms of context engineering?

This isn't a magic code genie, it's a very complicated and very powerful new tool that you need to practice using over time in order to get good results from.

goalieca · 3h ago
It ain’t a magic code genie. And developers don’t spend most of their day typing lines of code. Lots of it is designing, figuring out what to build, understanding the code, maintenance considerations, and adhering to the style of whatever file you’re in. All these agents need local context and still spit out junk.
tempodox · 2h ago
That's the beauty of the hype: Anyone who cannot replicate it, is “holding it wrong”.


QuantumGood · 2h ago

> it's a very complicated and very powerful new tool that you need to practice using over time in order to get good results from.
Of course this is and would be expected to be true. Yet adoption of this mindset has been orders of magnitude slower than the increase in AI features and capabilities.
dingnuts · 3h ago
guy 1: I put money in the slot machine everyone says wins all the time and I lose

you: HAVE YOU PUT MORE TOKENS IN???? ARE YOU PUTTING THEM IN THE EXPENSIVE MACHINES???

super compelling argument /s

if you want to provide working examples of "prompt engineering" or "context engineering" please do but "just keep paying until the behavior is impressive" isn't winning me as a customer

it's like putting out a demo program that absolutely sucks and promising that if I pay, it'll get good. why put out the shit demo and give me this impression, then, if it sucks?

lordnacho · 3h ago
The way I ended up paying for Claude max was that I started on the cheap plan, it went well, then it wanted more money, and I paid because things were going well.

Then it ran out of money again, and I gave it even more money.

I'm in the low 4 figures a year now, and it's worth it. For a day's pay each year, I've got a junior dev who is super fast, makes good suggestions, and makes working code.

Avicebron · 2h ago
> For a day's pay each year

For anyone trying to do the back-of-the-napkin math at $1,000 (low 4 figures) per year: if that equals one day's pay, then at roughly 260 working days a year the baseline salary where this makes sense is about ~$260,000/yr. Is that about right, lordnacho?

lordnacho · 2h ago
Yeah I thought that was a reasonable number in the ballpark. I mean, it probably makes sense to pay a lot more for it. A grand is probably well within the range where you shouldn't care about it, even if you only get a basic salary and it's a terrible year with no bonus.

And that's not saying AI tools are the real deal, either. It can be a lot less than a fully self-driving dev and still be worth a significant fraction of an entry-level dev.

antihipocrat · 1h ago
I assume it's after tax too..
vishvananda · 1h ago
I'm really baffled why the coding interfaces have not implemented a locking feature for some code. It seems like an obvious feature to be able to select a section of your code and tell the agent not to modify it. This could remove a whole class of problems where the agent tries to change tests to match the code or removes key functionality.

One could even imagine going a step further and having a confidence level associated with different parts of the code, that would help the LLM concentrate changes on the areas that you're less sure about.

esafak · 2h ago
Create an agent.md file and point them to it: http://agent.md/
exographicskip · 1h ago
I like this approach. Keeps agentic workflows vendor agnostic with practical symlinks / tweaks for other setups.

Website was hard to read on desktop, but their repo is great: https://github.com/agentmd/agent.md

exographicskip · 1h ago
Duplicate logic is definitely a thing. That and littering comments all over the place.

Worth it to me as I can fix all the above after the fact.

Just annoying haha

csomar · 2h ago
They need "context engineering" which what I'll describe best as "railing" them in. If you give them a bit of a loose space, they'll massacre your code base. You can use their freedom for exploration but not for implementation.

In essence, you have to do the "engineering" part of the app and they can write the code pretty fast for you. They can help you in the engineering part, but you still need to be able to weigh in whatever crap they recommend and adjust accordingly.

mbrumlow · 3h ago
Did you tell them to not duplicate code?
rafaelmn · 3h ago
More like the people who, 10 years ago, were telling us there would be no more professional drivers on the road in 5-10 years. Agents are like lane assist, not even up to current self-driving levels.
miltonlost · 3h ago
So many people are hyping AI like it's Musk's FSD, with the same fraudulence in overestimating its capabilities.
dingnuts · 3h ago
it's exactly like this. we're 3 years into being told all white collar jobs are going to be gone next year, just like we're ten years into being told we'll have self driving cars next year
johnnienaked · 58m ago
15 years into bitcoin replacing the USD too
mnky9800n · 3h ago
I think the agent stuff is impressive because we are giving the AI scaffolding and tools and things to do. That's why it's impressive: it has some directive. But it is obvious that if you don't give it good directives it doesn't know what to do. So for me, I think a lot of jobs will be making agents do things, but a lot won't. I think it's really strange that people are all so against all this stuff. It's cool new computer tools; does nobody actually like computers anymore?
prinny_ · 3h ago
A lot of people join this profession because they like building stuff. They enjoy thinking about a problem and coming up with a solution and then implementing and testing it. Prompting is not the same thing and it doesn't scratch the same itch and at the end of the day it's important to enjoy your job, not only be efficient at it.

I have heard the take that "writing code is not what makes you an engineer, solving problems and providing value is what makes you an engineer" and while that's cool and all and super important for advancing in your career and delivering results, I very much also like writing code. So there's that.

johannes1234321 · 3h ago
There is code which is interesting to write, even if it isn't the area with clever algorithms or big architecture decisions or something.

But there is also the area of boilerplate, where non-LLM-AI-based IDEs for a few decades already help a lot with templates and "smart" completion. Current AI systems widen that area.

The trouble with AI is when you are reaching the boundary of its capabilities. The trivial stuff it does well. For the complex stuff it fails spectacularly. In between, you've got to review carefully, which easily becomes less fun than simply writing it yourself.

ModernMech · 1h ago
> But there is also the area of boilerplate, where non-LLM-AI-based IDEs for a few decades already help a lot with templates and "smart" completion.

The thing for me is that AI writing the boilerplate feels like the brute force solution, compared to investing in better language and tooling design that may obviate the need for such boilerplate in the first place.

johannes1234321 · 1h ago
Yeah, but building tooling is a hard sell considering the ability of contemporary AI.

The energy cost is absurdly high for the result, but in the current economics, where it's paid by investors, not users, it's hidden. It will be interesting to see when AI companies get to the level where they have to make profits, and how much optimisation there is still to come ...

SoftTalker · 3h ago
Rick Beato posted a video recently where he created a fictitious artist and a couple of songs based on a few prompts. The results were somewhat passable, generic indie/pop music but as he said (I'm paraphrasing) "I didn't create anything here. I prompted a computer to put together a bunch of words and melodies that it knew from what other people had written."
theferret · 3h ago
That's an interesting take - that you like the act of writing code. I think a lot of builders across a variety of areas feel this way. I like writing code too.

I've been experimenting with a toolchain in which I use speech-to-text with agents, navigate the files with vim and autocomplete, and have Grok think through some math for me. It's pretty fun. I wonder whether tuning agents to write code through that process in a semi-supervised manner will also be fun? I don't know, but I'm open to the idea that as we progress I will find toolchains that bring me into flow as I build.

mnky9800n · 2h ago
Yeah but I write the code that is interesting to solve and let the LLM solve the problems that are not so important. Like making yet another webscraper tool is not the most exciting part of the process when you are trying to make some kind of real time inference tool for what people post on the internet.
closewith · 3h ago
Most people don't enjoy their jobs and go to work for one reason only - to support themselves and their families. The itch is to get paid. This is as true in software as it is in other fields.

That's not to say there aren't vocations, or people in software who feel the way you do, but it's a tiny minority.

fragmede · 1h ago
Ah yes, that "is that 6 spaces or 8" in a yaml file itch that just has to be scratched. Programming has a lot of doldrums. LLMs still get stuck at places, and that's just where the new itch to scratch is. Yeah, it's not the same as code golfing an algorithm really neatly into a few lines of really expressive C++, but things change and life goes on. Programming isn't the same as when it was on punch cards either.
majormajor · 3h ago
> does nobody actually like computers anymore

I think this is a really interesting question and an insight into part of the divide.

Places like HN get a lot of attention from two distinct crowds: people who like computers and related tech and people who like to build. And the latter is split into "people who like to build software to help others get stuff done" and "people who like to build software for themselves" too. Even in the professional-developer-world that's a lot of the split between those with "cool" side projects and those with either only-day-job software or "boring" day-job-related side projects.

I used to be in the first group, liking computer tech for its own sake. The longer I work in the profession of "using computer tools to build things for people" the less I like the computer industry, because of how much the marketing/press/hype/fandom elements go overboard. Building-for-money often exposes, very directly, the difference between "cool tools" and "useful and reliable tools" - all the bugs I have to work around, all the popular much-hyped projects that run into the wall in various places when thrown into production, all the times simple and boring beats cool when it comes to winning customers. So I understand when it makes others jaded about the hype too. Especially if you don't have the intrinsic "cool software is what I want to tinker with" drive.

So the split in reactions to articles like this falls on those lines, I think.

If you like cool computer stuff, it's a cool article, with someone doing something neat.

If you are a dev enthusiast who likes side projects and such (regardless of if it's your day job too or not), it's a cool article, with someone doing something neat.

If you are in the "I want to build stuff that helps other people get shit done" crowd then it's probably still cool - who doesn't like POCs and greenfield work? - but it also seems scary for your day to day work, if it promises a flood of "adequate", not-well-tested software that you're going to be expected to use and work with and integrate for less-technical people who don't understand what goes into reliable software quality. And that's not most people's favorite part of the job.)

(Then there's a third crowd which is the "people who like making money" crowd, which loves LLMs because they look like "future lower costs of labor." But that's generally not what the split reaction to this particular sort of article is about, but is part of another common split between the "yay this will let me make more profit" and "oh no this will make people stop paying me" crowds in the biz-oriented articles.)

oblio · 3h ago
People are afraid that instead of skilled craft guild members they will become assembly line workers like Charlie Chaplin in Modern Times. And in 10 years unemployed like people in the Rust Belt.
lucumo · 1h ago
There's a kind of karmic comedy in this. Programmers' job has always been to automate other people's jobs. The panic of programmers about their own jobs now is immensely funny to me.

As has been the case for all those jobs changed by programmers, the people who keep an open mind and are willing to learn new ways of working will be fine or even thrive. The people rusted to their seat, who are barely adding value as is, will be forced to choose between changing or struggling.

oblio · 50m ago
The problem is that these days we're talking about millions of people.

Those kinds of masses of people don't pivot on a dime.

selimnairb · 3h ago
This, and no one will understand the software that is created. Then you are beholden to AI companies who can charge you whatever they want to maintain the AI code. Will this be cheaper than paying software engineers? Maybe, but I could also see it costing much more.
verisimilidude · 3h ago
AI's superpower is doing mediocre work at high speed. That's okay. Great, even. There's lots of mediocre work to do. And mediocre still clears below average.

But! There's still room for expertise. And this is where I disagree about swimming with the tide. There will be those who are uninterested in using the AI. They will struggle. They will hone their craft. They will have muscle memory for the tasks everyone else forgot how to do. And they will be able to perform work that the AI users cannot.

The future needs both types.

jon-wood · 3h ago
My ongoing concern is that most of us probably got to being able to do good work via several years of doing mediocre work. We put in the hours and along the way learned what good looks like, and various patterns that allow us to see the path to solving a given problem.

What does the next generation do when we’ve automated away that work? How do they learn to recognise what good looks like, and when their LLM has got stuck on a dead end and is just spewing out nonsense?

commakozzi · 7m ago
They will be judging the merit of work in a much broader context.
kellyjprice · 3h ago
I'm not trying to discount the analogy, but I'd much rather live without cars (or with a lot fewer of them).
fzeroracer · 1h ago
Sometimes it's a good thing to not swim with the tide. Enshittification comes from every single dipshit corporation racing to the bottom, and right now said tide is increasingly filling with sewage.

There's a huge disconnect I notice where experienced software engineers rage about how shitty things are nowadays while diving directly into using AI garbage, where they couldn't explain what their code is doing if their lives depended on it.

beefnugs · 3h ago
This doesn't make aaaaany sense: IF this actually worked, then why would all the biggest companies in the world be firing people? They would be forcing them all to DO THE TIDE and multiply their 10 billion dollar dominance into 100 billion dollar or more dominance.

The truth is something like: for this to work, there are huge requirements in tooling/infrastructure/security/simulation/refinement/optimization/cost-saving that just could never be figured out by the big companies. So they are just like... well, let's trick as many investors and plebs into trying to use this as possible, maybe one of them will come up with some breakthrough we can steal.

fragmede · 1h ago
> why would all the biggest companies in the world be firing people

Because of section 174, now hopefully repealed. Money makes the world go round, and the money people talk to the people with firing authority.

intended · 2h ago
I promise everyone one thing - there ain’t no such thing as a free lunch.

A lot of what is “working” in the article is closer to “jugaad”/prototyping.

Something the author acknowledges in their opening- it’s a way to prototype and get something off the ground.

Technical debt will matter for those products that get off the ground.

richardw · 3h ago
Scary part is: what if it’s inevitable? We don’t get to choose our environment, and this one is forming around us.

A friend’s dad only knows assembly. He’s the CEO of his company and they do hardware, and he’s close to retirement now, but he finds this newfangled C and C++ stuff a little too abstract. He sadly needs to trust “these people”, but really he prefers being on the metal.

majormajor · 3h ago
The market for utility software like this predates the internet, we used to pass them around on floppies. It was never subject to particularly high QA or scrutiny. It just has to be "adequate."

But it's never displaced the market for highly-produced, highly-planned, "central" software pieces that the utilities glue together and help you work with, etc.

The growth of that software-as-big-business has only enlarged the need for utilities, really, to integrate everything, but it's a tough space to work in - "it's hard to compete with free." One classic move is selling support, etc.

Might be tough to do non-LLM-driven software development there - the selling support for your LLM-created-products model is still viable, but if there's an increase in velocity in useful utility creation or maintenance, possibly the dev headcount needs are lower.

But does anyone know how to use LLMs to make those giant ones yet? Or to make those central core underlying libraries you mention? Doesn't seem like it. Time will tell if there's a meaningful path that is truly different from "an even higher level programming language." Even on the edges - "we outgrew the library and we have to fork it because of [features/perf/bugs]" is a pretty common pattern when working on those larger projects already, and the more specific the exact changes you need are, the less the LLM might be able to do it for you (e.g. the "it kept assuming this function existed because it exists in a lot of similar things" problem).

What I hope is that we can find good ways to leverage these for quality control and testing and validation. (Though this is the opposite of the sort of greenfield dev demos that get the most press right now.)

Testing/validation is hard and expensive enough that basically nobody does a thorough job of it right now, especially in the consumer space. It would be wonderful if we could find ways to release higher quality software without teams of thousands doing manual validation.

tempodox · 2h ago
It does sound horrible. No more getting in the flow, no more thinking about anything, no more understanding anything. Just touch it with a ten-foot pole every few hours, then get distracted again.

I guess if all you do is write React To-Do apps all day, it might even work for a bit.

fragmede · 1h ago
Unfortunately, I think the evolution of LLMs is going to put more areas of programming within this "React Todo app" envelope of capability that you suggest, and to have it work for longer, rather than going away.
vitaflo · 3h ago
>I don't want to work with somebody who works like this.

You will most likely get your wish but not in the way you want. In a few years when this is fully matured there will be little reason to hire devs with their inflated salaries (especially in the US) when all you need is someone with some technical know-how and a keen eye on how to work with AI agents. There will be plenty of those people all over the globe who will demand much less than you will.

Hate to break it to you but this is the future of writing software and will be a reckoning for the entire software industry and the inflated salaries it contains. It won't happen overnight but it'll happen sooner than many devs are willing to admit.

dingnuts · 2h ago
yes yes the chainsaw made lumberjacks obsolete
AndrewKemendo · 1h ago
Genuinely this is what it sounds like to accept obsolescence and I just can’t understand it.

What are you so attached to and identified with that you’re rejecting new ways to work?

Change is the only constant, and today’s tools look like superhuman tools created for babies compared to the SOTA at Bell or NASA in the 1960s, when they were literally trying to create superhuman computing.

We have more access to powerful compute and it’s never been easier to build your own everything.

What’s the big complaint?

fragmede · 1h ago
> and maybe most importantly I don't want to work with somebody who works like this.

Which, of course, is your prerogative, but in what other ways do we, as fellow programmers, judge software libraries and dependencies so harshly? As a Vim user, do I care that Django was written with a lot of emacs? Or that Linus used emacs to write git? Or maybe being judgemental about programming languages; ugh, that's "just" a scripting language, it's not "real" programming unless you use a magnet up against a hard drive to program in ones and zeros. As a user, do I care that Calibre is written in Python, and not something "better"? Or that curl is written in good ole C? Or how about being opinionated as to whether or not the programmer used GDB or printf debugging to make the library?

gabrieledarrigo · 3h ago
I know, it's scary. But I guess it's the direction we are aiming for.
recursive · 3h ago
Just to clarify, I'm not a member of that "we".
gabrieledarrigo · 34m ago
And that's fine, it's your choice. But everything, driven by multiple forces (from hype, to marketing, to real progress, to early adopters), is pointing to that future.
esafak · 2h ago
Imagine a future where creating software is about designing the UX, overseeing the architecture, and quality assurance. Implementation is farmed out.
karel-3d · 2h ago
But the architecture is the important (and hard) part!!! Not the UX!
esafak · 1h ago
Does it matter if the computer can do it? Can you calculate the cube root of 4?

Users see and care about the UX; the product. They only notice the engineering when it goes wrong.

logicchains · 3h ago
>I don't work like this, I don't want to work like this and maybe most importantly I don't want to work with somebody who works like this.

It suggests you've had very positive life experiences, that you trust human developers so much more than computers.

sbalough · 3h ago
I don’t think that was his argument. It would be one thing if we reach a point where humans trust a higher AI intelligence to create/keep software systems predictably meeting requirements. We aren’t there yet. So, it’s important to make sure any AI code is reviewed and approved by humans.
raincole · 3h ago
> in a different reality.

It is. And one reality is getting bigger each day and the other is shrinking.

nirvanatikku · 4h ago
This article is spot on.

I had stumbled upon Kidlin’s Law—“If you can write down the problem clearly, you’re halfway to solving it”.

This is a powerful guiding principle in today’s AI-driven world. As natural language becomes our primary interface with technology, clearly articulating challenges not only enhances our communication but also maximizes the potential of AI.

The async approach to coding has been most fascinating, too.

I will add, I've been using Repl.it *a lot*, and it takes everything to another level. Getting to focus on problem solving, and less on futzing with hosting (granted, it is easy in the early journey of a product), is an absolute game changer. Sparking joy.

I personally use the analogy of a Mario Kart mushroom or star; that's how I feel using these tools. It's funny though, because when it goes off the rails, it really goes off the rails lol. It's also sometimes necessary to intercept decisions it will take... babysitting can take a toll (because of the speed of execution). Having to deal with 1 stack was something... now we're dealing with potentially infinite stacks.

dustincoates · 37m ago
Repl.it is so hit or miss for me, and that's what is so frustrating. Like, it can knock out something in minutes that would have taken me an afternoon. That's amazing.

Then other times, I go to create something that is suggested _by them below the prompt box_ and it can't do it properly.

Flatcircle · 4h ago
My theory on AI is it's the next iteration of google search, a better more conversational, base layer over all the information that exists on the internet.

Of course some people will lose jobs just like what happened to several industries when search became ubiquitous. (newspapers, phone books, encyclopedias, travel agents)

But IMHO this isn't the existential crisis people think it is.

It's just a tool. Smart, clever people can do lots of cool stuff with tools.

But you still have to use it.

Search has just become Chat.

You used to have to search, now you chat and it does the searching, and more!

ivanjermakov · 3h ago
> Search has just become Chat

I think chat-like LLM interfacing is not the most efficient way. There has to be a smarter way.

majormajor · 3h ago
I think Photoshop is a good guide here.

Famously complicated interface with a million buttons and menus.

Now there's more buttons for the AI tools.

Because at the end of the day, using a "brush" tool to paint over the area containing the thing you want it to remove or change in an image is MUCH simpler than trying to tell it that through chat. Some sort of prompt like "please remove the fifth person from the left standing on the brick path under the bus stop" vs "just explicitly select something with the GUI." The former could have a lot of value for casual amateur use; it's not going to replace the precise, high-functionality tool for professional use.

In software - would you rather chat with an LLM to see the contents of a proposed code change, or use a visual diff tool? "Let the agent run and then treat its stuff as a PR from a junior dev" has been said so many times recently - which is not suggesting just chatting with it to do the PR instead of using the GUI. I would imagine that this would get extended to something like the input not just being less of a free-form chat, but more of a submission of a Figma mockup + a link to a ticket with specs.

Fade_Dance · 3h ago
There is certainly much innovation to come in this area.

I'm thinking about Personal Knowledge Systems and their innovative ideas regarding visual representations of data (mind maps, webs of interconnected notes, things like that). That could be useful for AI search. What these systems are doing, in a sense, is building a concept web, which would naturally fit quite well into visualization.

The chatbot paradigm is quite centered around short, easily digestible narratives, and while humans are certainly narrative-generating and narrative-absorbing creatures to a large degree, things like having a visually mapped-out counterargument can also be surprisingly useful. It's just not something that humans naturally do without effort outside of, say, a philosophy degree.

There is still the specter of the megacorp feed algo monster lurking though, in that there is a tendency to reduce the consumer facing tools to black-box algorithms that are optimized to boost engagement. Many of the more innovative approaches may involve giving users more control, like dynamic sliders for results, that sort of thing.

clickety_clack · 3h ago
There’s an efficient way to serve the results, and there’s an efficient way for a human to consume them, and I find LLMs to be much more efficient in terms of cognitive work done to explore and understand something than a google search. The next thing will have to beat that level of personal mental effort, and I can’t imagine what that next step would look like yet.
aDyslecticCrow · 3h ago
I find a well-written human article or guide to be far more efficient when it exists. But if AI rehashes them... then the market for those may disappear, and in the process the AI won't be very good either, without the sources to summarise.
Quitschquat · 3h ago
Google doesn’t have to change search. It already returns AI generated crap before anything useful.
arrowsmith · 3h ago
To be fair, Google also returns a lot of useless crap that wasn't generated by AI.
jenscow · 1h ago
wasn't generated by their AI, more like
patcon · 3h ago
I have systemic concerns with how Google is changing roles from "knowledge bridging" to "knowledge translating", but in terms of information: I find it very useful.

You find it gives you poor information?

aDyslecticCrow · 3h ago
Always check the sources. I've personally found it:

- Using a source to claim the opposite of what the source says.

- Pointing to irrelevant sources.

- Using a very untrustworthy source.

- Giving out sources that do not have anything to do with what it says.

- Making up additional things, like any other LLM without source or internet search capability, despite reading sources.

I've specifically found Gemini (the one Google puts at the top of searches) to be hallucination-prone, and I've had far better results with other agents with search capability.

So... presenting a false or made-up answer to a person searching the web on a topic they don't understand... I'd really like to see a massive lawsuit cooked up about this when someone inevitably burns their house down or loses their life.

mrandish · 3h ago
Append -ai to your query to omit AI results.
accrual · 2h ago
I like the way DuckDuckGo does it - it offers a button to generate a response if you want to, but it doesn't shove it down your throat.

It's handy when I just need the quick syntax of a command I rarely need, etc.

brabel · 3h ago
I was a bit wary of trusting the AI summaries Google has been including in search results… but after a few checks it seems like it’s not crap at all, it’s pretty good!
SoMomentary · 3h ago
I think their point is that all of the content out there is turning into AI slop, so it won't matter if search changes because the results themselves have already been changed.
jayd16 · 3h ago
Unlike peak Google, this reduces signal to noise and obfuscates the source data it's pulling from.
hmmokidk · 3h ago
Creation of source data has been disincentivized
jopsen · 4h ago
It's clearly useful for many things other than search.
aDyslecticCrow · 3h ago
As search gives the answer rather than the path to it, the job of finding things out properly and writing it down for others is lost. If we let that be lost, then we will all be lost.

If we cannot find a way to redirect income from AI back to the creators of the information they rehash (such as good and honest journalism), a critical load-bearing pillar of democratic society will collapse.

The news industry has been in grave danger for years, and we've seen the consequences it brings (distrust, division, misinformation, foreign manipulation). AI may drive the last stake in its back.

It's not about some jobs being replaced; that is not even remotely the issue. The path we are on currently is a dark one, and dismissing it as "just some jobs being lost" is a naive dismissal of the danger we're in.

JSteph22 · 3h ago
I am looking forward to the "news industry" breathing its last breath. They're the ones primarily responsible for the distrust and division.
aDyslecticCrow · 1h ago
No, I fully disagree.

The economic viability of doing proper journalism was already destroyed by the ad-supported, click-and-attention-based internet (and particularly by the way people consume news through algorithmic social media).

I believe most independent news sites have been economically forced into sensationalism and extremism to survive. It's not what they wilfully created.

Personally, I find that any news organisation that is still somewhat reputable has a source of income beyond page visits and ads, be it a senior demographic that still subscribes to the paper, a loyal reader base that pays for the paywall, or a government sponsoring its existence as a public service.

Now what if you cut out the last piece of income journalists rely on to stay afloat? We simply fire the humans and tell an AI to summarise the other articles instead, and phrase it how people want to hear it.

And that's a frightening world.

maqnius · 3h ago
I agree that people are using it for things they would've googled, but I doubt that it's a good replacement.

To me it mostly comes with a feeling of uncertainty. As if someone tells you something he got told at a party. I need to Google it, to find a trustworthy source for verification; otherwise it's just a hint.

So I use it if I want a quick hint. Not if I really want to have information worth remembering. So it's certainly not a replacement for me. It actually makes things worse for me because of all that AI slop atm.

staplers · 3h ago
A lot of modern entry-level jobs were filled by people who knew how to use google and follow instructions.

I imagine the next generation will have a similar relationship with AI. What might seem "common sense" with the younger, more tech-saavy crowd, will be difficult for older generations whose default behavior isn't to open up chatgpt or gemini and find the solution quickly.

lordnacho · 4h ago
I'm loving the new programming. I don't know where it goes either, but I like it for now.

I'm actually producing code right this moment, where I would normally just relax and do something else. Instead, I'm relaxing and coding.

It's great for a senior guy who has been in the business for a long time. Most of my edits nowadays are tedious. If I look at the code and decide I used the wrong pattern originally, I have to change a bunch of things to test my new idea. I can skim my code and see a bunch of things that would normally take me ages to fiddle. The fiddling is frustrating, because I feel like I know what the end result should be, but there's some minor BS in the way, which takes a few minutes each time. It used to take a whole stackoverflow search + think, recently it became a copilot hint, and now... Claude simply does it.

For instance, I wrote a mock stock exchange. It's the kind of thing you always want to have, but because the pressure is on to connect to the actual exchange, it is often a leftover task that nobody has done. Now, Claude has done it while I've been reading HN.

Now that I have that, I can implement a strategy against it. This is super tedious. I know how it works, but when I implement it, it takes me a lot of time that isn't really fulfilling. Stuff like making a typo, or forgetting to add the dependency. Not big brain stuff, but it takes time.

Now I know what you're all thinking. How does it not end up with spaghetti all over the place? Well. I actually do critique the changes. I actually do have discussions with Claude about what to do. The benefit here is he's a dev who knows where all the relevant code is. If I ask him whether there's a lock in a bad place, he finds it super fast. I guess you need experience, but I can smell when he's gone off track.

So for me, career-wise, it has come at the exact right time. A few years after I reached a level where the little things were getting tedious, a time when all the architectural elements had come together and been investigated manually.

What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.

Loic · 4h ago
> What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.

Exactly my thinking. Nearly 50, more than 30 years of experience in nearly every kind of programming; like you, I can easily architect/control/adjust the agent to help me produce great code with a very robust architecture. But I do that out of my experience, both in modelling (science) and programming. I wonder how the junior devs will be able to build experience if everything comes cooked by the agent. Time will tell us.

theferret · 3h ago
I feel like we've been here before. There was a time when, if you were going to be an engineer, you needed to know core equations, take a lot of derivatives, perform mathematical analysis on paper, get results in an understandable form, and come up with solutions. That process may be analogous to what we used to think of as beginning with core data structures and algorithms, design patterns, architecture and infrastructure patterns, and analyzing them all together to create something nice. Yet today, much of the lower-level mathematics that was previously required no longer is. And although people are trained in what is available and where it is used, it forms the backbone of systems that automate the vast majority of the engineering process.

It might be as simple as creating awareness about how everything works underneath and creating graduates that understand how these things should work in a similar vein.

Loic · 58m ago
Exactly. Right now, I am helping a big oil and gas company get a process simulation package to correctly converge on a big simulation. Full access to the source code, need to improve the Newton method in use with the right line search, validate the derivatives, etc.

I do think that for most people you are right, you do not need to know a lot, but my philosophy was to always understand how the tools you use work (one level deeper). Now, though, the tool is creating a new tool. How do you understand the tool which has been created by your Agent/AI tool?

I find this problem interesting, this is new to me and I will happily look at how our society and the engineering community evolve with these new capacities.

chamomeal · 41m ago
I also am enjoying LLMs, but I get no joy out of just prompting them again and again. I get so incredibly bored, with a little side of anxiety that I don’t really know how my program works.

I’ll probably get over it, but I’ve been realizing how much fun I get out of building something as opposed to just having it be built. I used to think all I cared about was results, and now I know that’s not true, so that’s fun!

Of course for the monotonous stuff that I’ve done before or don’t care a lick about, hell yeah I let em run wild. Boilerplate, crud, shell scripts, CSS. Had claude make me a terminal based version of snake. So sick

tempodox · 1h ago
> I would normally just relax and do something else. Instead, I'm relaxing and coding.

So more work gets to penetrate a part of your life that it formerly wouldn't. What's the value of “productivity gains”, when they don't improve your quality of life?

lpa22 · 3h ago
This is exactly what makes me excited as well. It really does replace the tedious parts of coding I’ve done thousands of times at this point.
ikerino · 3h ago
Hot take: Junior devs are going to be the ones who "know how to build with AI" better than current seniors.

They are entering the job market with sensibilities for a higher-level of abstraction. They will be the first generation of devs that went through high-school + college building with AI.

stefan_ · 2h ago
Where did they learn sensibility for higher levels of abstraction? AI is the opposite: it will do what you prompt and never stop to tell you it's a terrible idea; you will have to learn for yourself, all the way down into the details, that the big picture it chose for you was faulty from the start. Convert some convoluted bash script to run on Windows because that's what the office people run? Get strapped in for the AI PowerShell ride of your life.
ikerino · 2h ago
How is that different than how any self-taught programmer learns? Dive into a too-big idea, try to make it work and learn from that experience.

Repeat that a few hundred times and you'll have some strong intuitions and sensibilities.

zwnow · 4h ago
So you are relaxing and the AI is coding? Neat! Way to replace yourself; hope you won't cry over your job once it's gone.
lubujackson · 4h ago
What you miss is the constant need to refine and understand the bigger picture. AI makes everyone a lead architect. A non-coder can't do this or will definitely get lost in the weeds eventually.
jon-wood · 2h ago
It doesn’t make everyone a lead architect, it just makes everyone think they’re a lead architect. What makes people a lead architect is a decade or two of experience in designing software and learning what works and doesn’t.
nlawalker · 3h ago
Right, but a lead architect can be a lead architect on multiple projects at the same time, and the world doesn't need as many lead architects as it has programmers.

This kind of working is relaxing and enjoyable until capitalism discovers that it is, and then you have to do it on five projects simultaneously.

rhdunn · 3h ago
I'm using AI assistants as an interactive search and coding assistant. I'm still driving the development and implementing the code.

Where I use it for is:

1. Remembering what something is called -- in my case the bootstrap pills class -- so I could locate it in the bootstrap docs. Google search didn't help as I couldn't recall the right name to enter into it. For the AI I described what I wanted to do and it gave the answer.

2. Working with a language/framework that I'm familiar with but don't know the specifics in what I'm trying to do. For example:

- In C#/.NET 8.0 how do I parse a JSON string?

- I have a C# application where I'm using `JsonSerializer.Deserialize` to convert a JSON string to a `record` class. The issue is that the names of the variables are capitalized -- e.g. `record Lorem(int Ipsum)` -- but the fields in the JSON are lowercase -- e.g. `{"ipsum": 123}`. How do I map the JSON fields to record properties?

- In C# how do I convert a `JsonNode` to a `JsonElement`?

3. Understanding specific exceptions and how to solve them.

In each case I'm describing things in general terms, not "here's the code, please fix it" or "write the entire code for me". I'm doing the work of applying the answers to the code I'm working on.
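
For reference, those three C# questions resolve to just a couple of lines. A minimal sketch, assuming System.Text.Json on .NET 8 (the Lorem/Ipsum names simply mirror the example above):

```csharp
// Minimal sketch of the three questions above, assuming System.Text.Json on .NET 8.
// Lorem/Ipsum mirror the example in the comment; everything else is the standard API.
using System;
using System.Text.Json;
using System.Text.Json.Nodes;

record Lorem(int Ipsum);

class Demo
{
    static void Main()
    {
        // 1 & 2: parse a JSON string into a record, mapping the lowercase JSON field
        // "ipsum" onto the capitalized property Ipsum via PropertyNameCaseInsensitive
        // (a per-property [JsonPropertyName("ipsum")] attribute works too).
        var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
        var lorem = JsonSerializer.Deserialize<Lorem>("{\"ipsum\": 123}", options)!;
        Console.WriteLine(lorem.Ipsum); // prints 123

        // 3: convert a JsonNode to a JsonElement by round-tripping its JSON text.
        JsonNode node = JsonNode.Parse("{\"ipsum\": 123}")!;
        JsonElement element = JsonSerializer.Deserialize<JsonElement>(node.ToJsonString());
        Console.WriteLine(element.GetProperty("ipsum").GetInt32()); // prints 123
    }
}
```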

logicchains · 4h ago
He's still telling the AI what to code. Prompting, i.e. deciding the right thing to build then clearly specifying and communicating it in English, is a skill in itself. People who spend time developing that skill are going to be more employable than people who just devote all their time to coding, the thing at which LLMs are more cost effective.
chamomeal · 2h ago
I don’t mean to be a dick, but stuff like this

> With enough AI assistants building enough single-purpose tools, every problem becomes shallow. Every weird edge case already has seventeen solutions. Every 2am frustration has been felt, solved, and uploaded.

> We're not drowning in software. We're wading in it. And the water's warm

Just sounds like GPT style writing. I’m not saying this blog is all written by GPT, but it sounds like it is. I wonder if those of us who are constantly exposed to AI writing are starting to adopt some of that signature fluffy, use-a-lot-of-words-without-saying-much kinda style.

Life imitates art. Does intelligence imitate artificial intelligence?? Or maybe there’s more AI written content out there than I’m willing to imagine.

(Those snippets are from another post in this blog)

shermantanktop · 4h ago
I'm going to be overly picky about the subheading (which is an incidental aspect of TFA): “The future of software development might just be jazz. Everyone improvising. Nobody following the sheet music.”

That’s not jazz. Jazz being what it is, a lot of people in 2025 think it’s “everyone improvising,” but (outside of some free jazz) it’s quite structured and full of shared conventions.

Analogies work when you and your audience both understand the things being compared. In this case, the author doesn’t, and maybe some of the audience shares the same misperception, and so the analogy only works based on shared misunderstanding.

The analogy to jazz actually works better the more you know about it. But that’s accidental.

schneems · 4h ago
The input to output ratio is interesting. We are usually optimizing for volume of output, but now it’s inverted. I actually don’t want maximum output, I want the work split up into concrete, verifiable steps and that’s difficult to achieve consistently.

I’ve taken to co-writing a plan with requirements with Cursor, and it works really well at first. But as it makes mistakes and we use those mistakes to refine the document, eventually we are ready to “go”, and suddenly it’s generating a large volume of code that directly contradicts something in the plan. Small annoyances like its inability to add an empty line after markdown headings have to be explicitly re-added and re-reminded.

I almost wish I had more control over how it was iterating. Especially when it comes to quality and consistency.

When I/we can write a test and it can grind on that is when AI is at its best. It’s a closed problem. I need the tools to help me, help it, turn the open problem I’m trying to solve into a set of discrete closed problems.

d00mB0t · 3h ago
"I'd wander into my office, check what Claude had built, test it real quick. If it worked, great! Commit and push."

Man, I'm going to make so much money as a Cybersecurity Consultant!

fizx · 4h ago
The "time dialation" is real. I mostly manage these days, yet my fun projects progress faster than they ever have, because I can prompt in the 2 minutes between meetings, and come back to significant progress.
jvanderbot · 4h ago
Yes, it's not faster to develop with AI if you watch it work. It's faster to develop with AI if you parallelize. Typing was never the bottleneck, but it is a now-parallelizable part of the pipeline.
nojs · 4h ago
> Yes, it's not faster to develop with AI if you watch it work.

It’s actually a lot faster. You read the diffs as soon as they start coming in, and immediately course correct or re-prompt when you see bad mistakes.

wrs · 4h ago
Indeed, I hit the stop button quite a bit when Claude goes off course. Then make a note of the right choice so maybe it won't do that again, revert and proceed. I have the feeling there is an optimal size of project proportional to the context size, where you can fit the critical design points into the context and/or there are enough examples in the code of how things should be done.
aprilthird2021 · 4h ago
I don't have this experience. Watching and course correcting like this makes me realize I could have done a better job myself
unshavedyak · 24m ago
That’s always true in my experience, but that doesn’t necessarily mean you need to. The trick I’m working towards is refining the workflow such that i can reliably produce maybe 90% as “good” as what I’d personally produce but much, much faster. All sorts of side work I was avoiding before also becomes much easier, less tedious refactors and large test coverage and etc. It can type much faster than I can, the trick is if we can constrain the thinking enough to make it useful. Keeping it as an autocomplete is as productive as it is difficult imo.
criley2 · 4h ago
It can still be faster to develop with AI watching it work. It can legitimately introduce an entire simple fullstack change across multiple projects in my monorepo including graphql queries/mutations, typeorm repository, a service layer, and a reactnative frontend using apollo client, etc. It can do that in about 10 minutes in my local. I can't. If I turned it into a speed run event and practiced, maybe I could get it done in 10 minutes but honestly, it's a machine and I'm John Henry. Since it's using my IDE, it's using my meticulously setup and maintained local and I'm able to quickly stop it and fix any mistake it makes. Level 2 driving assist.

I have enjoyed the GitHub Copilot agent style development where someone else's computer is running everything, and I can make a request and just come back half an hour later and check on it. But this level 5 driver gets the wrong destination basically every time, and then it's another 10, 20 or even 30 minutes for it to make a minor adjustment. It doesn't understand my `yarn` scripts, it runs my tests wrong, it can't do codegen, it doesn't format or lint files, etc. I asked Copilot yesterday to lint and format a PR and it took 25 minutes of agentic work lol.

wrs · 4h ago
For me, one of the new superpowers is the ability to interactively do multiple drafts following different design principles and see which works better.

I just started an embedded project where two different people had implemented subsystems independently, and I asked Claude to merge the code into a single project and convert the existing synchronous code into asynchronous state machines called from a single main loop. It wrote three drafts with me giving it different stylistic principles to follow. I don't know if I would have had the patience to do that myself!

wiremine · 2h ago
I've been experimenting with model-based development lately, and this resonated strongly with me.

The section "What Even Is Programming Anymore?" hit on a lot of the thoughts and feels I've been going through. I'm using all my 25+ years of experience and CS training, but it's _not_ programming per se.

I feel like we're entering an era where we're piloting a set of tools, not hand crafting code. I think a lot of people (who love crafting) will be leaving the industry in the next 5 years, for better or worse. We'll still need to craft things by hand, but we're opening some doors to new methodologies.

And, right now, those methodologies are being discovered, and most of us are pretty bad at them. But that doesn't mean they're not going to be part of the industry.

facefactsdamnit · 4h ago
The only thing that should matter in software development is: does it work to spec?

Why are these chatbots that mangle data 1/3 to 1/2 of the time getting their budgets 10x'd over and over again?

This is irrational. If the code mangles data this bad, it's garbage.

iLoveOncall · 1h ago
> The only thing that should matter in software development is: does it work to spec?

Unless you've never written code outside of a classroom you should know how unbelievably wrong this is.

zkmon · 3h ago
Not sure what the complaint is about. If the coding work has to be thrown away, we need to do that and move on. We did that many times earlier. We have thrown away hunting, farming, calculations by hand, cameras and so on. Coding work might get extinct for some use cases. Nothing wrong with it. Learn how to use your tools, assistants and godzillas.

The bigger issue: would there be a need for coding and software? Who would use them? Why are they using it? Are they buying something? Searching for info? The use case will see a revolution. The new use cases won't need the traditional kind of software. But AI can only produce traditional software.

Can I ask Claude to code up its clone for local use?

asadotzler · 3h ago
I seem to have missed the part where he successfully prompted for security, internationalizability, localizability, accessibility, usability, etc., etc.

This is a core problem with amateurs pretending to be software producers. There are others, but this one is fundamental to acceptable commercial software and will absolutely derail vibe coded products from widespread adoption.

And if you think these aspects of quality software are easily reduced to prompts, you've probably never done serious work in those spaces.

jerpint · 1h ago
Funny enough, I’m building a tool that does basically what the author describes, but with a bit more software engineering driving it (context-llemur)

https://github.com/jerpint/context-llemur

The idea is to track all of the context of a project using git. It’s a CLI and MCP tool; the human guides it, but the LLM contributes back to it as the project evolves.

I used it to bootstrap the library itself, and have been using it more and more for context management of all sorts of things I care about

layer8 · 3h ago
Excuse me, you can't build software that fast, and definitely not while making pancakes. Please return to your regularly scheduled struggling.
sansseriff · 2h ago
There's a weird insecurity I've noticed cropping up. I want to design the codebase 'my way'. I want to decide on the fundamental data structures. But there's this worry that my preferred architecture is not massively better than whatever the machine comes up with. So by insisting on 'my way' I'm robbing everyone of productivity.

I know most true programmers will vouch for me and my need to understand. But clients and project managers and bosses? Are they really gonna keep accepting a refrain like this from their engineers?

"either it gets done in a day and I understand none of it, or it gets done in a month and I fully understand it and like it"

lvl155 · 3h ago
Based on how easy it is to trigger cool-down or throttle on Claude Code, I think people know how to build with AI. Or they’re trying really hard to figure it out. The race is on and it’s wide open.

There are a lot of gotchas with these new models. They get incredibly lazy if you let them. For example, I asked it to do a simple tally by year. I just assumed it was simple enough that I didn’t need to ask it to write code. It counted the first couple of years and just “guessed” the rest based on the pattern it noticed.
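
Asking it to write the code instead removes the room for guessing; a minimal sketch (the file name and column are assumptions for illustration):

    from collections import Counter
    import csv

    # Tally rows by year from a CSV with a "date" column like "2023-05-17".
    with open("records.csv", newline="") as f:
        years = (row["date"][:4] for row in csv.DictReader(f))
        tally = Counter(years)

    for year, count in sorted(tally.items()):
        print(year, count)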

Sometimes, it feels like having a lazy coworker that you have to double check constantly and email with repeated details. Other times, I just sit there in awe of how smart it is in my weekly AGI moment and how it’s going to replace me soon.

hoppp · 3h ago
I like to use it to generate Python and React UI components with Tailwind CSS.

And also to help me troubleshoot my old yacht; it taught me to be an amateur marine electrician.

I do not let it into my entire codebase though. Keep the context small, and if I don't get what I want in one or two prompts, I don't use it.

fmbb · 4h ago
That time dilation feels a bit like what METR reported:

https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

Developers believe they complete tasks 25% faster with AI, but when measured they are 19% slower when using it.

paulgerhardt · 4h ago
This was the study with a sample size of 16?
layer8 · 3h ago
That’s reminiscent of how when people are “in the flow”, they feel productive but also tend to not notice how quickly time passes.
logicchains · 4h ago
It depends on how the AI is used; there's a huge difference in productivity between a structured workflow with repeated automated feedback to the AI and just ad-hoc prompting. For instance, Gemini 2.5 coding up a spec-compliant HTTP/2 server in two weeks: https://outervationai.substack.com/p/building-a-100-llm-writ... . 15k lines of code and 30k lines of tests; no human coder could produce something like that so fast.
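
The "repeated automated feedback" part can be as simple as a loop that runs the test suite and feeds failures straight back to the model until it passes (a rough sketch; ask_model is a placeholder for whatever LLM call is used, and the setup in the linked post is considerably more elaborate):

    import subprocess

    def ask_model(prompt: str) -> str:
        """Placeholder for an LLM call that returns a proposed patch."""
        raise NotImplementedError

    def run_tests() -> tuple[bool, str]:
        # Run the integration/conformance tests and capture their output.
        result = subprocess.run(["go", "test", "./..."],
                                capture_output=True, text=True)
        return result.returncode == 0, result.stdout + result.stderr

    def feedback_loop(spec: str, max_rounds: int = 50) -> bool:
        prompt = f"Implement this spec:\n{spec}"
        for _ in range(max_rounds):
            patch = ask_model(prompt)  # model proposes code
            # ...apply the patch to the working tree here...
            ok, output = run_tests()
            if ok:
                return True
            # Feed the failing output back rather than having a human re-prompt.
            prompt = f"The tests failed with:\n{output}\nFix the code."
        return False
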
djeastm · 4h ago
Would that application not exist in the training data already?
logicchains · 3h ago
There are around three Golang HTTP/2 servers on GitHub, but it wasn't just regurgitating the application from memory; if it were, it would have been mostly correct on the first try, and it wouldn't have needed to spend 80%+ of the development time in a code-compile-test cycle of fixing bugs identified by integration tests and spec conformance tests.
codingdave · 3h ago
So you are saying that LLMs have no capacity to re-use existing work, and will just burn hours and money re-inventing wheels?
bavell · 3h ago
Welcome to The Future™!
akcih · 3h ago
It's cool that it made something that works, but that code is pretty bad. Trial and error is not the way to develop something like that.
reactordev · 3h ago
Congratulations, you just passed project management class.

What you describe is exactly what a project manager does: refines the technical details, refines the stories, organizes the development towards a goal.

This doesn’t feel like programming because it isn’t. It doesn’t NOT feel like programming because you’re supervising. In the end, you are now a project manager.

mediumsmart · 2h ago
I find it's all about the joy of building with the subset you can be an expert in, and the AI telling you where the typo is and why it still won't work after fixing it.
foobarbaz569 · 3h ago
Impossible to read this. Very wordy and full of tangents.
therein · 3h ago
>I've been coding for long enough to remember when we carved HTML tables by hand. When CSS was a suggestion, not a lifestyle. When JavaScript was for mouseover effects and nothing else.

Cringe. The tech is half baked and the author is already fully committed to "this is the future, I am living in the future, I bake cookies while Claude codes."

Pure cringe. This confirms my earlier theories that everyone just wants to be a manager. You don't need to manage humans. You just want to be a manager.

The whole article could be summed up as "I always wanted to be a manager, and now I am a manager of bots."

bn-l · 3h ago
Why not use the mcp inspector instead of protocollie?
wwdmaxwell · 3h ago
I only call provider APIs and try to include only devDependencies in my project.

Really helped my understanding of how apps work.

iLoveOncall · 3h ago
I feel increasingly tired of reading excuse after excuse when you bring up that AI tools simply cannot solve anything beyond extremely basic problems.

It's always a mix of:

1. "Wait for the next models", despite models having all but plateaued for the past 3 years,

2. "It's so good for boilerplate code", despite libraries and frameworks being much better suited for this task, and boilerplate code being actually rare to write in the normal lifecycle of a project,

3. "You need to prompt it differently", glossing over the fact that prompting it precisely enough to get what you want would take longer than not using AI at all,

4. And the worst: "We don't know how to use those models yet"

Maybe the real reason it doesn't work is because IT JUST DOESN'T FUCKING WORK.

Why is it so unfathomable that a next token generator is gonna suck at solving complex problems? It is blindingly obvious.

xyst · 3h ago
"You’re using it wrong" arguments/hype articles showing up. Speculators love it. But in reality if you need to extol the benefits of AI, then is it really the user or the technology?

Honestly reminds me of the digital currency mania that busted a couple of years ago. Same types of articles popping up too.

Look I understand the benefits of AI but it’s clear ai is limited by the compute power of today. Maybe the dream this author has will be realized some day. But it won’t be today or in current generations lifespan.

aprilthird2021 · 4h ago
Eh, idk. First of all, the article is really wordy to say very few things. That just frustrated me a bit.

Second of all, it's easy to fart out some program in a few days of vibe coding. How will that fare as more and more features need to be added on? We all used to say "Dropbox? That's just FTP wrapped in a nice UI, anyone can make that." This protocollie project seems to be a documentation viewer / Postman for MCP. Which is cool, but is it something that would have taken a competent dev months to build? Probably not. And eventually the actual value of such things is the extensibility and integrations with various things like corporate SAML, etc.

Will the vibe-coded projects of today be extensible like that, enough to grab market share vs. the several similar versions and open source versions anyone can make in a few days, as the author suggests? It can be hard to extend a codebase you don't understand because you didn't write it...

CharlesW · 4h ago
> First of all, the article is really wordy to say very few things.

A clickbaity title at odds with the content isn't helpful either. I would've recommended their "The Great Experiment Nobody's Running the Same Way" heading as a better choice, even though it might not perform as well from a content-marketing POV.

ModernMech · 3h ago

  You prompt. You go live your life. You come back to ten thousand lines of code. You spend 5 minutes reading. One sentence of feedback. Another ten thousand lines appear while you're making lunch.
Yeah, it strikes me the author writes prose the same way they're generating code. 20k lines? That's enough code for a whole compiler or an operating system kernel. I'd love to see what those 20k lines actually do -- notably, in these articles about AI, people tend to not link the actual code when they easily could, which is curious. I mean, my macro expander can also write 20k lines of code while I eat lunch, but no one is pretending it's sentient and about to replace devs.
wilkystyle · 3h ago
I definitely did a double take when I got to this section. I am neither an AI optimist nor an AI pessimist (probably slightly on the optimistic side of the midpoint) but this sounds insane to me for any software that people might truly depend on. Five minutes of review for 10,000 lines, happening multiple times per day?
ModernMech · 3h ago
I did some digging, and this seems to be the project the author is using AI to work on: https://github.com/sublayerapp/sublayer

You look at the PRs... there are 786(!) AI generated pull requests and an associated AI generated code review for each one. Each PR is about ~20-100 lines of Ruby (including comments) that implements an "action" for the sublayer system as a Ruby class. So probably something that could be handled by a macro expander. Or at least it's AI used as a fancy macro expander.
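
For scale, the "fancy macro expander" framing isn't a stretch: stamping out a boilerplate class like that takes a few lines of templating (a toy sketch; the class shape and names below are invented for illustration and are not sublayer's actual API):

    from string import Template

    # Toy generator for a boilerplate "action"-style Ruby class.
    ACTION_TEMPLATE = Template('''\
    class ${name}Action
      def initialize(${args})
        ${assigns}
      end

      def call
        # TODO: implement ${name}
      end
    end
    ''')

    def expand(name: str, params: list[str]) -> str:
        return ACTION_TEMPLATE.substitute(
            name=name,
            args=", ".join(params),
            assigns="\n    ".join(f"@{p} = {p}" for p in params),
        )

    print(expand("SendSlackMessage", ["channel", "message"]))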

But yeah, there's about 20k lines of code right there easily. Although, because it's Ruby, it's not (much) of an exaggeration to say ~50% of the generated lines are a single "end" keyword.

The author is someone who, before AI, would publish ~300 commits a year to GitHub. This year they are on track for 3,000 commits using AI. But the result seems to be that PRs are accumulating in their repo, implementing hundreds of features. I'm wondering why the PRs are accumulating and not getting merged if the code is good. Is the bottleneck now review? What would happen if AI took over PR merging as well as PR creation?

nojito · 3h ago
>Which is cool, but is it something that would have taken a competent dev months to build? Probably not.

Right...but it exists today. The days of wondering "should I spend time building this" are gone.

renewiltord · 4h ago
I have a local LLM router app with profiles that set up the right system prompts and the right MCPs so I can swap between toolsets as I work.

This would take time to write if I were doing it myself, so I decided to vibe code it entirely. I had this idea that a compiled language is less likely to have errors (on account of the compiler giving the LLM quicker feedback than me), and so I chose Tauri with TS (I think).

The experience has been both wonderful and strange. The app was built by Claude Code with me intermittently prompting it between actual work sessions.

What’s funny is the bugs. If you ever played Minecraft during the Alpha days you know that Notch would be like “Just fixed lighting” in one release. And you’d get that release and it’d be weird like rain would now fall through glass.

Essentially, the bugs are strange. At least in the MC case you could hypothesize (perhaps the transparency bit was used for multiple purposes), but this app is strange. If the LLM configuration modal is fixed, suddenly the MCP/tool tree view will stop expanding. What the heck, why are these two related? I don’t know. I could never know, because I have never seen the code.

The compile-time feedback did catch some errors across iterations (I let Claude compile and run the program). But to be honest, the promise of correctness never landed.

Some people have been systematic and documented the prompts they use, but I just free-flowed it. The results are outstanding. There’s no way I could have had this built for the $50 in Claude credits. But also, there’s no way I could interpret the code.

tronicjester · 3h ago
> we don't have a word for it yet

I call it 'Orchestratic Development'.

Edit: Seriously, downvoted twice just for commenting on an article? God, I hate this arrogant shithole.

trallnag · 2h ago
Check out image boards like 4chan or more localized boards like Kohlchan for Germany. No votes, no accounts
andrewstuart · 3h ago
It’s amazing to me all the Luddite developers who are “against” all this.

Completely new ways of programming are forming, completely new ways of computing, and the best the Luddites can do is be "against it".

A revolution came along, a change in history, and instead of being excited by the possibilities, joining in, learning, discovering, creating… the Luddites are just "against it all".

I feel sorry for them. Why be in computing at all if you don’t like new technology?

recursive · 1h ago
> Why be in computing at all if you don’t like new technology?

Because computers can be used to run programs.

You feel sorry for them. I feel sorry for the future.