I think we're going to be dealing with stories of shareholders wetting themselves over more layoffs more than we're going to see higher-quality software produced. Everyone is claiming huge productivity gains, but software quality and the rate at which new products are created seem at best unchanged. Where is all this new amazing software? It's time to stop all the talk and show something. I don't care that your SQL query was handled for you; that's not the bigger picture, that's just talk.
WXLCKNO · 2h ago
I've been working on a bunch of different projects, trying out new stuff all the time, for the past six months.
Every time I do something, I add another layer of AI automation/enhancement to my personal dev setup, with the goal of seeing how far I can extend my own ability to produce while delivering high-quality projects.
I definitely wouldn't say I'm at 10x what I could do before across the board, but a solid 2-3x on average.
In some respects, like testing, it's perhaps 10x, because proper test coverage is essential to letting agentic AI run by itself in a git worktree without fearing that it will fuck everything up.
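To make the worktree setup concrete, here is a minimal sketch of what I mean (the branch and path names are made up, and pytest stands in for whatever suite you actually run):

    import subprocess

    def tests_pass(worktree: str) -> bool:
        """Run the suite inside the agent's worktree; only merge if it's green."""
        return subprocess.run(["pytest", "-q"], cwd=worktree).returncode == 0

    # Give the agent an isolated checkout so the main working tree stays untouched.
    subprocess.run(["git", "worktree", "add", "../agent-task", "-b", "agent/task"], check=True)
    # ... let the agent edit files under ../agent-task ...
    if tests_pass("../agent-task"):
        print("green: review and merge agent/task")
    else:
        # Tear the sandbox down; nothing the agent broke ever touched main.
        subprocess.run(["git", "worktree", "remove", "--force", "../agent-task"], check=True)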
I do dream of a scenario where I could have a company that's equivalent to 100 or 1000 people with just a small team of close friends and trusted coworkers that are all using this kind of tooling.
I think the feeling of small companies is just better and more intimate and suits me more than expanding and growing by hiring.
ChrisMarshallNY · 10m ago
A while back, someone here linked to this story[0].
It's a bit simplified and idealized, but is actually fairly spot-on.
I have been using AI every day. Just today, I used ChatGPT to translate an app string into 5 languages.
[0] https://www.oneusefulthing.org/p/superhuman-what-can-ai-do-i...
> Every time I do something, I add another layer of AI automation/enhancement to my personal dev setup, with the goal of seeing how far I can extend my own ability to produce while delivering high-quality projects
Can you give some examples? What’s worked well?
haiku2077 · 1h ago
- Extremely strict linting and formatting rules for every language you use in a project. Including JSON, YAML, SQL.
- Using AI code gen to make your own dev tools to automate tasks. Everything from "I need a make target to automate updating my staging and production config files when I make certain types of changes" or "make an ETL to clean up this dirty database" to "make a codegen tool to automatically generate library functions from the types I have defined" and "generate a polished CLI for this API for me"
- Using Tilt (tilt.dev) to automatically rebuild and live-reload software on a running Kubernetes cluster within seconds. Essentially, deploy-on-save.
- Much more expansive and robust integration test suites, with output designed so that an AI agent can automatically run the tests, read the errors, and use them to iterate. With some guidance it can also write more tests based on a small set of examples. It's been great at adding formatted messages to every test assertion to make failed tests easier to understand (see the sketch after this list).
- Using an editor where an AI agent has access to the language server, linter, etc. via diagnostics to automatically understand when it makes severe mistakes and fix them
A lot of this is traditional programming but sped up so that things that took hours a few years ago now take literally minutes.
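To illustrate the assertion-message point from the list above, here's a toy pytest sketch (the function and values are invented):

    import pytest

    def apply_discount(price: float, pct: float) -> float:
        return round(price * (1 - pct / 100), 2)

    @pytest.mark.parametrize("price,pct,expected", [
        (100.0, 10, 90.0),
        (19.99, 0, 19.99),
        (50.0, 100, 0.0),
    ])
    def test_apply_discount(price, pct, expected):
        got = apply_discount(price, pct)
        # The formatted message means an agent reading a failure sees the
        # inputs and the expectation without rerunning or adding prints.
        assert got == expected, f"apply_discount({price}, {pct}) -> {got}, expected {expected}"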
handfuloflight · 36m ago
Even things that took days or weeks are being done in minutes now, with a few hours on top to ensure correctness.
jay_kyburz · 19m ago
I worry that once I've done all that I won't have time for my actual work. I also have to investigate all these new AI editors, sign up for the APIs, work out which is best, and then learn how to prompt properly.
I worry that messing with the AI is the equivalent of tweaking my colour schemes and choosing new fonts.
jprokay13 · 1h ago
If you haven’t, adding in strict(er) linting rules is an easy win.
Enforcing documentation for public methods is a great one imo.
The more you can do to tell the AI what you want via a “code-lint-test” loop, the better the results.
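As a sketch of what that loop can look like (ruff and pytest here are stand-ins for whichever linter and test runner you use; ruff's pydocstyle "D" rules are one way to enforce docstrings on public methods):

    import subprocess

    def loop_feedback() -> str:
        """One pass of the code-lint-test loop: gather everything the
        model needs to self-correct into a single block of text."""
        chunks = []
        for cmd in (["ruff", "check", "."], ["pytest", "-q"]):
            r = subprocess.run(cmd, capture_output=True, text=True)
            if r.returncode != 0:
                chunks.append(f"$ {' '.join(cmd)}\n{r.stdout}{r.stderr}")
        return "\n".join(chunks)  # empty means clean; otherwise feed it back to the model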
malux85 · 1h ago
For us it's been auto-generating tests. We focus our effort on having the LLM write one test and manually verifying it, then use that as context and tell the LLM to extend it to all space groups and crystal systems.
So we get code coverage without all the effort. It works well for well-defined problems that can be verified with tests.
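Roughly the shape of it, as a hedged sketch (symmetry_order is a hypothetical stand-in for our real code; the numbers are the standard holohedral point-group orders):

    import pytest

    def symmetry_order(system: str) -> int:
        # Stand-in for the real function under test.
        return {"triclinic": 2, "monoclinic": 4, "orthorhombic": 8,
                "tetragonal": 16, "trigonal": 12, "hexagonal": 24, "cubic": 48}[system]

    # Step 1: the single test we wrote and verified by hand.
    def test_cubic():
        assert symmetry_order("cubic") == 48, "cubic holohedry (m-3m) has order 48"

    # Step 2: the LLM extends the verified case to all seven crystal systems.
    @pytest.mark.parametrize("system,expected", [
        ("triclinic", 2), ("monoclinic", 4), ("orthorhombic", 8), ("tetragonal", 16),
        ("trigonal", 12), ("hexagonal", 24), ("cubic", 48),
    ])
    def test_all_systems(system, expected):
        assert symmetry_order(system) == expected, f"{system}: expected order {expected}"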
ujkhsjkdhf234 · 50m ago
Instagram had 13 employees when Facebook purchased it. The secret is that most employees in a 1000-person company either don't need to be there or cover very niche cases that your company likely wouldn't have.
teaearlgraycold · 1h ago
Definitely agree small teams are the way to go. The bigger the company the more cognitive dissonance is imposed on the employees. I need to work where everyone is forced to engage with reality and those that don’t are fired.
nemothekid · 2h ago
I'm not entirely convinced this trend is because AI is letting people "manage fleets of agents".
I do think the trend of the tiny team is growing, though, and I think the real driver was the layoffs and downsizing of 2023. People were skeptical that Twitter would survive Elon's massive staff cuts, and technically the site has survived.
I think the era of 2016-2020 empire building is coming to an end. Valuing a manager by their number of reports is now out of fashion, and there's no longer any reason to inflate team sizes.
simonw · 1h ago
I think the productivity improvement you can get just from having a decent LLM available to answer technical questions is significant enough already even without the whole Agent-based tool-in-a-loop thing.
This morning I used Claude 4 Sonnet to figure out how to build, package and ship a Docker container to GitHub Container Registry in 25 minutes start to finish. Without Claude's help I would expect that to take me a couple of hours at least... and there's a decent chance I would have got stuck on some minor point and given up in frustration.
Transcript: https://claude.ai/share/5f0e6547-a3e9-4252-98d0-56f3141c3694 - write-up: https://til.simonwillison.net/github/container-registry
I'm not denying LLMs are useful. I believe the trend was going to happen regardless of how useful LLMs are.
AI ended up being a convenient excuse for big tech to justify their layoffs, but Twitter had already painted a picture of how bloated some organizations were. Now that there is no longer any status in having 9,001 reports, the pendulum has swung the other way: it's now sexy to brag about how few people you employ.
jordanb · 1h ago
Eh, I felt that way about the internet in the 2010s. It seemed like virtually any question could be answered by a Google query. People joked that a programmer's job mostly consisted of looking things up on Stack Overflow. But then Google started sucking, and SO turned into another expertsexchange (which was itself good in the 2000s).
So far, from what I've experienced, AI coding agents automate away the looking-things-up-on-SO part (mostly by violating OSS licenses on GitHub). But that part was only bad because the existing tools for doing it had been intentionally enshittified.
geremiiah · 2h ago
AI helps you cook up code faster, but you still need to have a good understanding of the code. Just because the writing part is done quicker doesn't mean a developer can now shoulder more responsibility. This will only lead to burnout, because the human mind can only handle so much responsibility.
crystal_revenge · 54m ago
> but you still need to have a good understanding of the code
I've personally found this is where AI helps the most. I'm often building pretty sophisticated models that also need to scale, and nearly all SO/Google-able resources tend to be stuck at the level of "fit/predict" thinking that so many DS people remain limited to.
Being able to ask questions about non-trivial models as you build them, really diving into the details of exactly how certain performance improvements work and what trade-offs they involve, and even just getting feedback on your approach, is a huge improvement in my ability to land a solid understanding of the problem and my solution before writing a line of code.
Additionally, it's incredibly easy to make a simple mistake when modeling a complex problem, and getting immediate feedback on that is a kind of debugging you can otherwise only get on teams with multiple highly skilled people (which at a certain level is a luxury reserved for people working at large companies).
For my kind of work, vibe-coding is laughably awful, primarily because there aren't tons of examples of large ML systems for the relatively unique problems you're tasked with. But avoiding mistakes in the initial modeling process feels like a superpower. On top of that, quickly being able to refactor early prototype code into real pipelines speeds up many of the most tedious parts of the process.
hnthrow90348765 · 19m ago
They often combine front end and back end roles (and sometimes sysadmin/devops/infrastructure) into one developer, so now I imagine they'll use AI to try and get even more. Burnout be damned, just going by their history.
TaylorGood · 49m ago
It's true, especially with the "vibe" movement happening in real time on X: "you can just do things." I am building AI app layers b2c/b2b, and while I do have an ML technical co-founder, I am largely scaling this with AI, from strategy and visuals to coding. For example, with Claude I created a framework for my company to scale, then built an AI-powered dashboard around it in Cursor as the command center. At scale we don't need a team of more than ~5 to reach 7-figure MRR.
Greg Isenberg has some of the best takes on this on X. He articulates the paradigm shift extremely well (@gregisenberg); one example: https://x.com/gregisenberg/status/1936083456611561932?s=46
I read a few books the other day, The Million-Dollar, One-Person Business and Company of One. They both discuss how, with the advances in code (to build a product with), the infrastructure to host it (AWS, so you don't need to build data centers), and the network of people to sell to (the Internet in general, and social media more specifically, both organic and ads-based), the likelihood of being able to run a large multi-million-dollar company all by yourself greatly increases, in a way it never has before in human history.
They were written before the advent of ChatGPT and LLMs in general, especially the coding-focused ones, so the ceiling must be even higher now. This is doubly true for technical founders, because LLMs aren't perfect, and if your vibed code eventually breaks you'll need to know how to fix it. But yes, in the future, with agents doing work on your behalf, maybe your own work becomes less and less too.
Some excellent ideas presented in the article. It doesn't matter if they all pan out, just that they expand our thinking into the realm of AI and its role in the future of business startups and operations.
Revenue per employee, to me, is an aside that distracts from the ideas presented.
apical_dendrite · 3h ago
When I worked at a startup that tried to maximize revenue per employee, it was an absolute disaster for the customer. There was zero investment in quality - no dedicated QA and everyone was way too busy to worry about quality until something became a crisis. Code reviews were actively discouraged because it took people off of their assigned work to review other people's work. Automated testing and tooling were minimal. If you go to the company's subreddit, you'll see daily posts of major problems and people threatening class-action lawsuits. There were major privacy and security issues that were just ignored.
golergka · 2h ago
Really depends on the type of business you're in. In the startup I work in, I worked almost entirely on quality of service for the last year, rarely ever on the new features — because users want to pay for reliability. If there's no investment in quality, then either the business is making a stupid decision and will pay for it, or users don't really care about it as much as you think.
ldjkfkdsjnv · 2h ago
There are two types of software: the kind no one uses, and the kind people complain about.
hackable_sand · 2h ago
Everyone should just write their own software then.
apical_dendrite · 1h ago
I've worked at a number of companies, and the frequency and seriousness of customer issues at that startup was way beyond anything I've experienced anywhere else.
ryandrake · 2h ago
It seems like a more and more recurring shareholder wet dream that companies could one day just be AI employees for digital things + robotic employees for physical things + maybe a human CEO "orchestrating" everything. No more icky employees siphoning off what should rightfully be profit for the owners. It's like this is some kind of moral imperative that business is always kind of low-key working towards. Are you rich and want to own something like a soup company? Just lease a fully-automated factory and a bunch of AI workers, and you're instantly shipping and making money! Is this capitalism's final end state?
rahimnathwani · 2h ago
If I'm buying soup, I'd prefer the manufacturer, the retailer, and any other part of the supply chain to be as efficient as possible, so they can compete in the market to offer me soup of a given quality at the lowest possible cost.
An individual consumer doesn't derive any benefit from companies missing out on automation opportunities.
Would you prefer to buy screws that are individually made on a lathe?
mikeocool · 2h ago
Personally, the best soups I’ve ever had were not made in kitchens that were optimized for efficiency or automation, they were optimized for quality.
They weren’t cheap soups, but they sure were good.
awb · 1h ago
Luxury goods and staple goods have distinct optimizations, both viable for generating profits and economic utility.
A high end soup and an affordable soup might be serving two different markets.
dyauspitr · 1h ago
Quality is a function of the ingredients used and the correct preparation. Neither of those is something machines can't do.
goatlover · 4m ago
I'd prefer not to live in a fully automated society where shareholders and CEOs reap all the profits while the rest of us scrape by on just enough UBI to prevent a revolution.
mechagodzilla · 2h ago
If anyone can use a fully automated factory on a pay-as-you-go basis, and the factories are interchangeable, it seems like the value of capital is nearly zero in your envisioned future. Anyone with an idea for soup can start producing it with world-class efficiency; prices for consumers should be low and variety should be sky-high.
georgemcbay · 19m ago
> It seems like a more and more recurring shareholder wet dream that companies could one day just be AI employees
This is in fact the dream. Of course, one step removed from their goal it turns into a nightmare, at least if we stick with our current economic systems (which are just accepted with effectively religious belief at this point).
Who is left to pay for the stuff your AI is producing when every other company has done the same thing you have?
And never mind the fact that the people you all made irrelevant will, for a brief while anyway, have access to good enough LLMs at home that can walk them through efficient guillotine design.
dboreham · 2h ago
Like Johnny Depp in that movie...
khazhoux · 32m ago
You’re gonna have to be more specific
ldjkfkdsjnv · 2h ago
I think this is the beginning of the end of early-stage venture capital in B2B SaaS. Growth capital will still be there, but increasingly there will be no reason to raise. It will empower individuals with actual skill sets, rather than those with fancy schools on their resumes.
lmeyerov · 1h ago
I think that's true for b2c, prosumer, and SMB: folks want to see fast revenue growth (vs. eyeballs and ideas)...
but enterprise, not so much, and that's half the b2b market.
It's the reverse there, afaict: enterprise + defense tech are booming. AI means we get to do a redo + extension of the code-automation era. It's fairly obvious to buyers and investors this time around, so we don't even need to educate. Likewise in gov/defense tech: Palantir broke the dam, and most of our users there have an instinctive allergic reaction to Palantir+xAI, so it's pretty friendly.
runako · 33m ago
AI gets top billing, but the assault via tax code on engineering employment is likely a bigger factor.
https://news.ycombinator.com/item?id=44226145