> You have to start from how the reality works and then derive your work.
Every philosopher eventually came to the same realization: we don't have access to the world as it is. We have access to a model of the world, one that predicts and is predicted by our senses. Insofar as there is a correlation between the two, at whatever fidelity we can muster, our direct access is only ever to a simulacrum.
For the most part the philosophers agree on this, but we have a serious flaw - our model inevitably influences our interpretation of our senses. This sometimes gets us into trouble when aspects of our model become self-reinforcing, framing sense input in ways that amplify the very part of the model that supplies the frame. For example, you live in a very different world if you search for, and find, confirmation of your cynicism.
Arguing over metaphysical ontology is like kids fighting about which food (each one's favorite) is the best. It confuses subjectivity and objectivity. It might appear radical, but all frames are subjective, even ones shared by the majority of others.
Sure, Schopenhauer's philosophy is the mirror of his own nature, but there is no escape hatch. There is no externality - no objective perch to rest on, not even ones shared by others. That's not to say that all subjectivities are equally useful for navigating the world. Some models work better than others for prediction, control, and survival. But we should be clear that useful does not equate to true: all models are wrong; some are useful.
JC, I read the rest. The author doesn’t seem to grasp how profit actually works. Price and value are not welded together: you can sell something for more or less than the value it generates. Using his own example, if the AI and the human salesperson do the same work, their value is identical, independent of what each costs or commands in the market.
He seems wedded to a kind of market value realism, and from this shaky premise, he arrives at some bizarre conclusions.
card_zero · 49m ago
Urgh. I feel the stodge of relativism weighing down on me.
OK, yes, all models (and people) are wrong. I'll also allow that usefulness is not the same as verisimilitude (truthiness). But there is externality, even though nobody can, as you say, "perch" on it: it matters that there is an objective reality to approach more closely, however uncertainly.
harwoodjp · 1h ago
Your dualism between model and world is nearly Cartesian. The model itself isn't separate from the world but is produced materially (by ideology, sociality, nature, etc.).
nine_k · 41m ago
A map drawn on a flat piece of land is still not the whole land it depicts, even though it literally consists of that land. Any representation is a simplification; as far as we can judge, there's no adequately lossless compression of large enough swaths of reality.
neuroelectron · 1h ago
I have yet to see LLMs solve any new problems. I think it's pretty clear a lot of the bouncing-ball programming demos are specifically trained for so they can be shown off at marketing/advertising events. Ask AI the most basic factual question about a random video game - say, what element synergizes with the ice spike shield in Dragon Cave Masters - and it will make up some nonsense, despite it being something you can look up on gamefaqs.org. Now, I know it knows the game I'm talking about, but in latent space it's just another set of dimensions that flavor likely next-token patterns.
Sure, if you train an LLM enough on gamefaqs.org, it will be able to answer my question as accurately as an SQL query, and there's a lot of jobs that are just looking up answers that already exist, but these systems are never going to replace engineering teams. Now, I definitely have seen some novel ideas come out of LLMs, especially in earlier models like GPT-3, where hallucinations were more common and prompts weren't normalized into templates, but now we have "mixtures" of "experts" that really keep LLMs from being general intelligences.
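To make the lookup point concrete, here's a toy sketch in Python (the items and values are made up for illustration): the kind of job where the answer already exists and just needs retrieving.

    # Hypothetical game data - a stand-in for a gamefaqs-style table.
    synergies = {
        "ice spike shield": "frost",   # made-up pairing, purely illustrative
        "ember blade": "fire",
    }

    def lookup_synergy(item: str) -> str:
        # A plain lookup either returns the stored answer or admits it doesn't know;
        # it never invents a plausible-sounding wrong one.
        return synergies.get(item, "unknown - not in the data")

    print(lookup_synergy("ice spike shield"))  # -> frost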
outworlder · 1h ago
I don't disagree, but your comment is puzzling. You start talking about a game (which probably lacks a lot of training data) and then extrapolate that to mean AI won't replace engineering teams. What?
We do not need AGI to cause massive damage to software engineering jobs. A lot of existing work is glue code, which AI can do pretty well. You don't need 'novel' solutions to problems to have useful AI. They don't need to prove P = NP.
sublinear · 1h ago
Can you give an example of a non-trivial project that is pure glue code?
XenophileJKO · 1h ago
I don't know, I've had O3 create some surprisingly effective Magic the Gathering decks based on newly released cards it has never seen. It just has to look up what cards are available.
Quarrelsome · 3h ago
Do execs really dream of entirely removing their engineering departments? If this happens then I would expect some seriously large companies to fail in the future. For every good idea an exec has, they have X bad ideas that will cause problems and their engineers save them from those. Conversely an entirely AI engineering team will say "yes sir, right on it" to every request.
pjmlp · 3h ago
Yes, that is exactly how offshoring and enterprise consulting take place.
eikenberry · 2h ago
.. and why they fail.
pjmlp · 2h ago
Apparently not, given that it is the bread and butter of Fortune 500 consulting.
crinkly · 2h ago
The consultancies are successful. The customers usually aren't quite as fortunate, in my experience.
A great example is the current Tata disaster in the UK with M&S.
pjmlp · 2h ago
Yet they keep sending RFPs and hiring consultancies, because at the end of the day what matters are those Excel sheets, not what people in the field think about the service.
Some C level MBAs get a couple of lunches together, or a golf match, exchange a bit of give and take, discounts for the next gig, business as usual.
Have you seen how valuable companies like Tata are, despite such examples?
crinkly · 2h ago
Yes and you allude to the problem: you can make a turd look good with the right numbers.
pjmlp · 2h ago
Doesn't change the facts, nor how appealing AI is to those companies' management.
crinkly · 2h ago
Yes. Execs love AI because it’s the sycophant they need to massage their narcissism.
I’d really love to be replaced by AI. At that point I can take a few months off paid gardening leave before they are forced to rehire me.
Quarrelsome · 1h ago
Idk, I feel like execs would run out of makeup before they accept their ideas are a pig. I worry this stuff is gonna work "just enough" to let them fool themselves for long enough to sink their orgs.
I'm envisioning a blog post on linkedin in the future:
> "How Claude Code ruined my million dollar business"
crinkly · 1h ago
Working out how to capitalise on their failures is the only winning proposition. My brother did pretty well out of selling Aerons.
AkshatM · 2h ago
> Need an example? Good. Coding.
> You must be paying your software engineers around $100,000 yearly.
> Now that vibecoding is out there, when was the last time you committed to pay $100,000 to Lovable or Replit or Claude?
I think the author is attacking a bit of a strawman. Yes, people won't pay human prices for AI services.
But the opportunity is in democratization - becoming the dominant platform - and bundling - taking over more and more of the lifecycle.
Your customers individually spend less, but you get more customers, and each customer spends a little extra for better results.
To respond to the analogy: not everyone had $100,000 to build their SaaS before. Now everyone who has a $100 budget can buy Lovable, Replit and Claude subscriptions. You only need 1,000 customers to match what you made before.
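Rough back-of-the-envelope sketch of that math in Python, treating the $100 as spend over the same period as the original $100,000 (an assumption, since the numbers above don't say):

    # Illustrative numbers from the comment above, nothing more.
    bespoke_build = 100_000        # what one customer used to pay for a SaaS build
    subscription = 100             # budget of a single small customer
    customers_needed = bespoke_build // subscription
    print(customers_needed)        # -> 1000 customers to match the old revenue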
Sol- · 1h ago
How much demand for software is there, though? I don't buy the argument that the pie will grow faster than jobs are devalued. On the bright side, prices might collapse accordingly and we'll end up in some post-scarcity world. No money in software, but also no cost, maybe.
jsnk · 3h ago
"""
Not because AI can't do the work. It can.
But because the economics don't translate the way VCs claim. When you replace a $50,000 employee with AI, you don't capture $50,000 in software revenue. You capture $5,000 if you're lucky.
"""
So you are saying, AI does replace labour.
graphememes · 3h ago
Realistically, AI makes the easiest part of the job easier, not all the other parts.
deepfriedbits · 3h ago
For now
DanHulton · 3h ago
Citation needed.
bgroins · 3h ago
History
th0ma5 · 2h ago
Good thing we solved lipid disorders with Olean, Betamax gave us all superior home video, and you can monetize your HN comments with NFTs or else I wouldn't have any money to post!
pjmlp · 3h ago
Experience with the industrial revolution and factory automation.
eikenberry · 2h ago
So you mean in a hundred years or so? I don't think that is a good counter.
pjmlp · 2h ago
If you think the time is what matters from the history lesson, good luck.
warthog · 3h ago
Maybe I should change the title indeed. The intention was to point out that, from the perspective of a startup, even if you replace the labour fully, you are not capturing a market 100x the previous one.
tuatoru · 3h ago
The title is slightly misleading.
What the article is really about is the idea that all of the money now paid in wages will somehow be paid to AI companies as AI replaces humans, and that this idea is muddle-headed.
It points out that businesses think of AI as software, and will pay software-level money for AI, not wage-level money. It finishes with the rhetorical question: are you paying $100k/year to an AI company for each coder you no longer need?
satyrnein · 1h ago
It's almost more of a warning to founders and VCs, that an AI developer that replaces a $100k/year developer might only get them $10k/year in revenue.
But that means that AI just generated a $90k consumer surplus, which on a societal level, is huge!
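The arithmetic behind that, with the same illustrative figures:

    # Per replaced developer per year, using the figures above.
    what_buyer_used_to_pay = 100_000   # salary cost of the developer
    what_ai_vendor_charges = 10_000    # revenue the AI company captures
    surplus_kept_by_buyer = what_buyer_used_to_pay - what_ai_vendor_charges
    print(surplus_kept_by_buyer)       # -> 90000, the rough "consumer surplus"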
Revisional_Sin · 3h ago
> What the article is really about is the idea that all of the money that is now paid in wages will somehow be paid to AI companies as AI replaces humans.
Is anyone actually claiming this?
lelandbatey · 9m ago
Not directly, but indirectly. It's what's leading to the FOMO w.r.t. investors. See the image in the parent blog post, where VCs are directly comparing the amount of money spent on AI at the moment (tiny) against the amount of money spent on headcount in various industries, with the implication being that "AI could be making all the money that was being spent on headcount, what an opportunity!"
tines · 3h ago
Not sure I quite get the point of the article. Sure, you won't capture $100k/year/dev. But if you capture $2k/year/dev, and you replace every dev in the world... that's the goal right?
aerostable_slug · 3h ago
They're saying expectations that AI revenues will equal HR expenditures, like you can take the funds from one column to the other, are wrong-headed. That makes sense to me.
tines · 1h ago
I agree, but that doesn't have to be true for investors to be salivating, is my point.
gh0stcat · 3h ago
I don't think the value stacks like that. Hiring 10 low-level workers at 1/10th the salary each to replace one higher-level worker doesn't work.
RedOrZed · 3h ago
Sure it does! Let me just hire 9 women for 1 month...
blibble · 3h ago
that $2k won't last long as you will never maintain a margin on a service like that
employee salaries are high because your competitors can't spawn 50000 into existence by pushing a button
competition in the industry will destroy its own margins, and then its own customer base very quickly
soon after followed by the economies of the countries they're present in
the whole thing is a capitalism self destruct button, for entire economies