About AI
65 points by emil_priver on 8/7/2025, 9:05:43 AM | 64 comments (priver.dev)
From the study[0]:
> 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience.
This study continues to get a lot of airtime on HN and elsewhere. Folks should probably be skeptical of a study that pairs such a small number of subjects with such a broad claim.
[0] https://arxiv.org/pdf/2507.09089
"We do not provide evidence that: AI systems do not currently speed up many or most software developers"
Looks like he's a typical software engineer with a very, very generic opinion on AI, presenting nothing new.
The arrogance the article starts off with is just too much: he talks about how much time he's invested in AI (1.5 years, holy cow) and how that supposedly makes him qualified to give his generic opinion.
The point is this opinion is generic. It’s nothing new. It’s like someone stating “cars use gas, I’ve been driving for 1.5 years and I learned enough to say that cars use gas.”
His opinion is different and worth reading. His expertise and knowledge are highly relevant, and what he says is very likely true.
I think the sheer amount of hype around AI has turned it, in the public mind, into a hallucinating, brain-damaged version of a human that everyone likes to scoff at. We're blind to the significance of what happened here and also blind to the rate at which this thing is improving in terms of raw intelligence.
The LLM is a milestone in human history equivalent to landing on the moon. But you can thank excessive exposure to this stuff on social media for dampening its significance.
Probably not. We're deep in the hype bubble, so AI is strongly overused. Once the bubble pops and things calm down, some use-cases may well emerge from the ashes but it'll be nowhere near as overused as it is now.
> AI has become a race between countries and companies, mostly due to status. The company that creates an AGI first will win and get the most status.
There's a built-in assumption here that AGI is not only possible but inevitable. We have absolutely no evidence that's the case, and the only people saying we're even remotely close are tech CEOs whose entire business model depends on people believing that AGI is around the corner.
I don't think these things are really that correlated. In fact, kind of the opposite. Hype is all talk, not actual usage.
I think this will turn out more like the internet itself. Wildly overhyped and underused when the dotcom bubble burst. But over the coming years and decades it grew steadily and healthily until it was everywhere.
Agreed re: AGI though.
Pets.com IPO'd for about $300 million, which is $573 million adjusted for inflation.
Chewy is at a 14 billion market cap right now.
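(For anyone checking that adjustment: it's a straightforward CPI multiplication. A rough sketch in Python, with approximate index values; the exact result depends on which month's CPI you use:)

```python
# Rough inflation adjustment of the IPO figure above.
# CPI-U values are approximate: ~172 in 2000, ~320 in 2025.
nominal_2000 = 300e6
cpi_2000, cpi_2025 = 172.0, 320.0
adjusted = nominal_2000 * (cpi_2025 / cpi_2000)
print(f"${adjusted / 1e6:.0f}M in 2025 dollars")  # ~$558M, in the ballpark of the $573M quoted
```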
I think comparing LLMs and the dotcom bubble is just incredibly lazy and useless thinking. If anything, all that previous bubbles show is what is not going to happen again.
Curious to hear more here. What is lazy about it? My general hypothesis is that ~95% of AI companies are overvalued by an order of magnitude or more and will end up with huge investor losses. But longer term (10+ years) many will end up being correctly valued at an order of magnitude above today's levels. This aligns perfectly with your pets.com/Chewy example.
I don't, however, see LLMs as consumer products being as prevalent in the future as they are currently. The cost of using LLMs is kept artificially low for consumers at the moment. That is bound to hit a wall eventually, at the very least when the bubble pops. At least that seems like an obvious analysis to make at this point in time.
Regarding usage - I don't think LLMs are going away. I think LLMs are going to be what finally topples Google search. Even my less technical friends and acquaintances are frequently reaching for ChatGPT for things they would have Googled in the past.
I also think we'll see the equivalent of Google AdWords start to pop up in free consumer LLMs.
If the results of current LLM performance are acceptable, the cost of achieving those same results will inevitably go down as semiconductor process improvements bring markedly reduced operational expenses (power, density, etc.)
> As a manager, AI is really nice to get a summary of how everything is going at the company and what tasks everyone is working on and the status of the tasks, instead of having refinement meetings to get status updates on tasks.
I do not understand why they are not marketing some "GPT Middle Manager" to the executive boards so that they could cut that fat. Surely that is a huge untapped cost-cutting potential?
I want the AI to know my codebase the same way it knows the earth is round. Without any context fed to it each time.
Instead we have this weird Memento-esque setup where you have to give it context each time.
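(A minimal sketch of what that statelessness looks like in practice; `call_llm` is a hypothetical stand-in for any chat-completion API, not a real library call:)

```python
# Hypothetical illustration: chat APIs are stateless, so relevant codebase
# context has to be gathered and re-sent with every single request.
from pathlib import Path

def build_prompt(question: str, repo_root: str) -> str:
    # Naive approach: inline source files into the prompt each time.
    # Real tools swap in retrieval/embeddings here, but the principle is
    # the same: the model remembers nothing between calls.
    context = "\n\n".join(
        f"# {path}\n{path.read_text()}" for path in Path(repo_root).rglob("*.py")
    )
    return f"Codebase:\n{context}\n\nQuestion: {question}"

def call_llm(prompt: str) -> str:
    # Stand-in for whatever model endpoint you use.
    raise NotImplementedError

# Every question pays the full context cost again -- the Memento setup:
# answer = call_llm(build_prompt("Where is auth handled?", "./src"))
```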
The ones profiting the most will be consultancies designed to protect the upper management reputation.
This insight stood out the most to me. I definitely agree, but what's interesting is the disconnect with the industry: it seems to be accepted right now that if coding is what AI is best at, developers must be the only ones who care, and that seems to have shown up in usage as well (I don't think I've seen much use of AI outside of personal use other than by developers; maybe I'm wrong?)
I doubt it. History has shown that credit for an invention often goes to the person or company with superior marketing skills, rather than to the original creator.
In a couple of centuries, people will sincerely believe that Bill Gates invented software, and Elon Musk invented self-driving cars.
Edit: and it's probably not even about marketing skill, but about being so full of oneself as to have biographies printed and to make others believe how amazing they are.
Where that leaves the rest of us is uncertain, but in many worlds the idea of status or marketing won't be relevant.
But my opinion on this has shifted a lot. The underlying technology is pretty lame. And improvements are incremental. Yes, someone will be the first, but they will be closely followed by others.
Anyway, I don't like the "impending doom" feeling that these companies create. I think we should tax them for it. If you throw together enough money, yeah, you can do crazy things. Doesn't mean you should be able to harass people with it.
Yes, it gets "smarter" each time, more accurate, but still lacks ANY creativity or actual thoughts/understanding. "You're completely right! Here, I fixed the code!" - proceeds to copy-paste the original code with the same bug.
LLMs will mostly replace:
- search (finding information / giving people expert-level advice in a few seconds)
- data processing (retrieval of information, listening and reacting to specific events, automatically transforming and forwarding information)
- interfaces (not entirely, but mostly augmenting them; sort of a better auto-complete and intention detection)
- most content ideation and creation (it will not replace art, but if someone needs an ad, a business card, a landing page, etc., the AI will do a good enough job)
- finding errors in documents/code/security, etc.
All those use-cases are already possible today, AI will just make them more efficient.
It will be a LONG time before AI knows how to autonomously achieve the result you want and has the physical-world abilities to do so.
For AGI, the "general" part will be as broad as the training data. Also, right now the AI listens too closely to human instruction and is crippled for (good) safety reasons. While all those limitations are in place, the "general intelligence" will remain limited, as it would be too dangerous to set zero limits and see where it goes (not because it's smart, but because it's like letting malware have access to the internet).
LLMs alone probably won't be AGI, but humanity's intelligence is also not just the limbic system or the neocortex. Our brains are likewise various differently styled tools interconnected to create a greater whole. To that end, LLMs may be a key part of bringing together the tools we have already built (computing, machine learning, robotics, etc.) into a larger system that is AGI.
This depends on the perspective. Take a step back and consider what the actual technology is that makes this possible: neural networks, the transistor, electricity, working together in groups? All pretty cool, IMHO.
I've seen this sentiment quite a bit; I think it's really baked into the human psyche. I think we understate the importance of what we don't enjoy and perhaps overstate the importance of the tasks we do enjoy and excel at. It makes sense; we're defending ourselves and our investments in learning our particular craft.
The author was hemorrhaging credibility all along the way and then this comment really drove home what he is: a bike shedder who probably deliberately introduces complexity into projects to keep himself employed. If you read between the lines of this post, it is clearly a product of that mindset and motivation.
'AI is only good at the simple parts that I don't like, but it's bad at the simple parts I do like and that are my personal expertise and keep me employed.'
Yeah okay buddy.
I cannot understand this optimism. Aren't we living in a capitalist world?
Plenty of people could already work less today if they just spent less. Historically any of the last big productivity booms could have similarly let people work less, but here we are.
If AI actually comes about and if AGI replaces humans at most cognitive labor, we'll find some way to keep ourselves busy even if the jobs ultimately are as useless as the pet rock or the Jump to Conclusions Mat (Office Space reference for anyone who hasn't seen it).
It still takes basically the same amount of labour hours to give a haircut today as it did in the late 19th century. An elementary school teacher today can still not handle more than a few tens up to maybe a hundred students at the extreme limit. Yet the hairdressing and education industries must still compete — on the labour market — with the industries showing the largest productivity gains. This has the effect of raising wages in these productivity-stagnant industries and increasing the cost of these services for everyone, driving inflation.
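(A toy numeric version of that cost-disease argument, with numbers invented purely for illustration:)

```python
# Toy illustration (all numbers made up): factory productivity grows 10x
# while a haircut still takes the same labour. If barbers' wages must
# roughly track factory wages to keep anyone in the trade, the relative
# price of a haircut rises 10x even though haircutting never changed.
factory_productivity_growth = 10.0  # output per worker-hour, then -> now
haircut_productivity_growth = 1.0   # unchanged: one barber, one head

wage_growth = factory_productivity_growth  # labour market equalizes wages
haircut_cost_growth = wage_growth / haircut_productivity_growth
print(f"Relative haircut cost: {haircut_cost_growth:.0f}x")  # 10x
```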
Inflation is the real time-killer, not a fear of idleness. The cost of living has gone up for everyone — rather dramatically, in nominal terms — without even taking housing costs into account.
I'm not saying those are bad things; people can do whatever they want with their own time and effort. It just seems obvious to me that we aren't interested in working less over any meaningful period of time. If that were a goal, we could have reached it a long time ago by defining a lower bar for when we have "enough."
But they're not talking about idle time, they're talking about quality time with loved ones.
> Plenty of people could already work less today if they just spent less.
But spending for leisure is often a part of that quality time. The idea is being able to work less AND maintain the same lifestyle.
I totally agree there; I wasn't trying to imply that "idle time" is a bad thing. In this context I simply meant it's time not filled by obligations, allowing them to choose what they do.
> But spending for leisure is often a part of that quality time.
I expect that varies a lot by person and situation. Some of the most enjoyable experiences I've had involved little or no cost; having a camp fire with friends, going on a hike, working outside in the garden, etc.
I agree with you; I just mean that what they're talking about is also not idle time, as it's active time. If they were replacing work with sitting around at home, watching TV or whatever, then it would be idle time and would no doubt drive them crazy. But spending time actively with their family is quite different, and would give satisfaction in a way that work does.
> I expect that varies a lot by person and situation.
Indeed. Spending isn't an inherent part of leisure. But it can be a part of it, and an important part for some people. Telling them they could have more free time if they just gave up their passions or hobbies which cost money isn't likely to lead anywhere.
People could work less, but it's a group effort. As long as some narcissistic idiots who want more instead of less are in charge, this is not going to change easily.
And if not needed, culled. For being "unproductive" or "unattractive" or generally "worthless".
That's my cynical take.
As long as the rich can be reined in in some way, the poor will not necessarily become poorer.
About this:
> So with all these tools built for developers, I realized that the people who gain the most from all these tools are not the developers—it’s all the people around who don’t write code. It’s easier for customers to show what they really want, we can enter sales meetings with a PoC which makes the selling part easier, and product owners can generate a PoC to show the developers how they think and get quicker feedback.
This is, by far, the most harrowing outcome for software engineers from the proliferation of LLMs.
The code they generate is good enough (at first glance) to _finally_ convince non-technical people that they can ship software without those pesky software developers.
This was central to Eric Schmidt's speech to Stanford business school students last year [^0]:
> You understand how powerful that is.
> If you can go from arbitrary language to arbitrary digital command, which is essentially what Python in this scenario is, imagine that each and every human on the planet has their own programmer that actually does what they want as opposed to the programmers that work for me who don't do what I ask, right?
> The programmers here know what I'm talking about.
> So imagine a non-arrogant programmer that actually does what you want and you don't have to pay all that money to and there's infinite supply of these programs.
> That's all within the next year or two.
Non-technical business people hold the corporate wallet almost everywhere, even in SV.
Many of them consider software engineering merely a means to an end, despite our best efforts to convince them otherwise.
I'm not in that social stratum, but if I had to guess: they know that LLM-generated code can be buggy and introduce loads of slop, but they also know that it takes `<n` developers to maintain all of that versus the `n` developers it takes to generate _and_ maintain software today.
While I'm at it, many of them are extremely excited about offshoring that maintenance even more aggressively or, under this administration, leveraging H1-B (or other visa'ed) labor to do so (or perhaps not, as I'll explain in a bit).
To those people, the ideal end result of all of this is a small group of architects/10x developers that are reviewing and essentially project managing a literal army of contractors and offshore labor that is pumping out LLM-generated products faster than ever and a slightly larger, but still small, group of SRE-like senior engineers administrating it all in a not-too-dissimilar way.
Since this is obviously going to be more-or-less career-ending for hundreds of thousands of people in the medium term, it will, in their (hypothetical) minds, spawn a gig economy of replaceable developers maintaining LLM-produced code. No need to compete for H1-Bs when you have disposable labor right at home.
Actually, it's worse than that. From the article:
> For everything I don’t like doing, AI is phenomenally good. Take design, for instance. I’ve used Lovable and Figma to generate beautiful UIs and then copied the HTML to implement in an Elixir dashboard, and the results have been stunning. I also use AI when I write articles to help with spelling and maintaining a clear narrative thread. It’s rare, but sometimes you get lucky and the AI handles a simple task for you perfectly.
This will apply to _almost everyone_ who is not serving a core "front-of-house" business function, not just engineering. Designers, technical writers, and content marketers are already getting pummeled by LLMs and Stable Diffusion, and that's unlikely to reverse now that these models are getting even better at those tasks.
I hope I'm dead wrong about all of this and that the future of AI in software is just for supercharging developers.
Nonetheless, all of this has been making me really sad, tbh. I'm a sales engineer, so we're, fortunately, much less affected by all of this as long as sales remains a people-first career. However, I love writing software by hand ("artisanally", as it's now, depressingly, labelled). It's what made me fall in love with my career, and it's now being reduced to worthlessness by our own hand. Thanks to Big AI, whenever I open vim to write some code, I'm now forced to think "should I just hand this over to Claude Code?"
[^0]: https://news.ycombinator.com/item?id=41263143
That is indeed true. No one wants lines of source code on a server; they want a problem solved. I think those of us who accept that will be closer to reality.