Where are they going to work? We haven't started building all the new manufacturing plants etc. that we need.
It takes forever to get into even an apprenticeship at the existing places. Sure, you have lots of people retiring out but as far as new jobs go?
Neywiny · 4h ago
Not sure if it counts as a trade, but I've heard air traffic controllers are in high demand. Some jobs, even now and over the past several years, are severely understaffed. So I'm not sure how many new jobs are needed vs. backfilling existing vacancies.
SoftTalker · 15h ago
Data centers are growing like weeds. They need lots of HVAC installers, plumbers, and electricians.
analog31 · 12h ago
Is this just a symptom of the economic cycle? I have two friends who are both retired from careers teaching at the regional community college. They've told me that the number and quality of students go up when the job market is weak. When jobs are plentiful, people don't bother with trade school.
AngryData · 12h ago
That's neat, but I don't think this is really a positive thing; it just shows people are desperate. Many people have left and are still leaving the trades because for many positions the pay is subpar, the work has many health hazards that cut 10-20 years off your lifespan and quality of life, the hours are long, and the industry is a constant, unstable boom-and-bust cycle.
I see people hem and haw about trades and how "great" they are all the time, but as someone who has worked in trades for the last 15 years, not nearly as many people can handle it or like it as they think. Many people heard about a cousin or friend who broke $100K doing trade work, but what they fail to mention is that they did it by working 90 hours a week, every week, all year. They don't mention how they kept working through injury and will now feel it for the rest of their life. And they don't mention, or haven't worked long enough to feel, the bust between the booms, when they are either taking jobs that only earn $5 an hour or just don't have enough work for full hours.
Yes we need trade workers, but there has never been a lack of trade workers, only a lack of pay. I know far more people that have left trades than have joined. Many liked the physical work, but couldn't justify the health costs on top of the poor or unstable pay.
breakyerself · 12h ago
100%. I feel like the Mike Rowe style agitation to divert people from college into the trades is, to some extent, an effort to increase competition for these jobs to depress wages. If you work in the trades the last thing you should want is an army of people trying to join your profession.
xyzzy123 · 10h ago
Also culturally a lot of people look down on people who work with their hands.
The median outcome for someone who is baseline reliable and skilled but doesn't have the inclination to run a business is OK, but usually not great.
Der_Einzige · 10h ago
I believe if you’re in a trade and not making a minimum of $100K, you’re doing it wrong.
Nearly every skilled trade charges at least $100 an hour for labor, often far more. Yes, even in West Virginia or Mississippi. Stop getting scammed by working for someone else and work for yourself.
Trades are so lucky too in that it’s hard for normies to evaluate the quality of their work - so you can make tons of money while still being really shitty at it.
prisenco · 16h ago
For junior devs wondering if they picked the right path: remember that the world still needs software, AI still breaks down at even a small bit of complexity, and the first ones to abandon this career will be those who only did it for the money anyway; they'll do the same once the trades have a rough year (as they always do).
In the meantime keep learning and practicing cs fundamentals, ignore hype and build something interesting.
kragen · 15h ago
Nobody has any idea what AI is going to look like five years from now. Five years ago we had GPT-2; AI couldn't code at all. Five years from now AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.
Anyone who tells you they know what the future looks like five years from now is lying.
noosphr · 15h ago
Unless we have another breakthrough like attention we do know that AI will keep struggling with context and costs will grow quadratically with context.
On a codebase of 10,000 lines, any action will cost 100,000,000 AI units. On one with 1,000,000 lines, it will cost 1,000,000,000,000 AI units.
I work on these things for a living and no one else seems to ever think two steps ahead on what the mathematical limitations of the transformer architecture mean for transformer based applications.
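The quadratic arithmetic above can be sketched directly (a toy cost model, not a real benchmark, treating one pairwise token comparison as one "AI unit"):

```python
# Toy cost model: self-attention compares every context token with
# every other token, so cost grows with the square of context length.
def attention_cost(context_tokens: int) -> int:
    return context_tokens ** 2

print(attention_cost(10_000))     # 100000000
print(attention_cost(1_000_000))  # 1000000000000
```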
kragen · 14h ago
It's only been 8 years since the attention breakthrough. Since then we've had "sparsely-gated MoE", RLHF, BERT, "Scaling Laws", Dall-E, LoRA, CoT, AlphaFold 2, "Parameter-Efficient Fine-Tuning", and DeepSeek's training cost breakthrough. AI researchers rather than physicists or chemists won the Nobel Prizes in physics and (for AlphaFold) chemistry last year. Agentic software development, MCP, and video generation are more or less new this year.
Humans also keep struggling with context, so while large contexts may limit AI performance, they won't necessarily prevent them from being strongly superhuman.
BobbyTables2 · 13h ago
I think it’s currently too easy to get drunk on easy success cases for AI.
It’s like asking a college student 4th grade math questions and then being impressed they knew the answer.
I’ve used Copilot a lot. Faster than Google, gives great results.
Today I asked it for the name of a French restaurant that closed in my area a few years ago. The first answer was a Chinese fusion place… all the others were off too.
Sure, keep questions confined to something it was heavily trained on, answers will be great.
But yeah, AI is going to get rid of a lot of low-skilled labor.
kragen · 12h ago
Sure, we might have hit a wall in some important sense, where further progress on some kinds of abilities is blocked until we try something totally different. But we might not. Nobody has any clue.
BoiledCabbage · 9h ago
> Today I asked it for the name of a French restaurant that closed in my area a few years ago. The first answer was a Chinese fusion place… all the others were off too.
What's the point of this anecdote? That it's not omniscient? Nobody should be thinking that it is.
I can ask it how many coins I have in my pocket and I bet you it won't know that either.
CamperBob2 · 11h ago
> It’s like asking a college student 4th grade math questions and then being impressed they knew the answer.
No, it's more like asking a 4th-grader college math questions, and then desperately looking for ways to not be impressed when they get it right.
> Today I asked it for the name of a French restaurant that closed in my area a few years ago. The first answer was a Chinese fusion place… all the others were off too.
What would have been impressive is if the model had replied, "WTF, do I look like Google? Look it up there, dumbass."
lossolo · 13h ago
> Since then we've had "sparsely-gated MoE", RLHF, BERT, "Scaling Laws", Dall-E, LoRA, CoT, AlphaFold 2, "Parameter-Efficient Fine-Tuning", and DeepSeek's training cost breakthrough.
OK, I will bite.
So "Sparsely-gated MoE" isn’t some new intelligence, it's a sharding trick. You trade parameter count for FLOPs/latency with a router. And MoE predates transformers anyway.
RLHF is packaging. Supervised finetune on instructions, learn a reward model, then nudge the policy. That’s a training objective swap plus preference data. It's useful, but not a breakthrough.
CoT is a prompting hack to force the same model to externalize intermediate tokens. The capability was there, you’re just sampling a longer trajectory. It’s UX for sampling.
LoRA is linear algebra 101, low rank adapters to cut training cost and avoid touching the full weights. The base capability still comes from the giant pretrained transformer.
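The low-rank-adapter idea reduces to a few lines of numpy (dimensions here are invented for illustration, not taken from any real model):

```python
import numpy as np

# Toy LoRA sketch: instead of updating a full d x d weight matrix,
# train two low-rank factors A (d x r) and B (r x d) with r << d.
d, r = 1024, 8
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))         # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01  # trainable adapter factor
B = rng.standard_normal((r, d)) * 0.01  # trainable adapter factor

W_adapted = W + A @ B                   # effective weight at inference

full_params = d * d
lora_params = d * r + r * d
print(lora_params / full_params)        # 0.015625, about 1.6% of the full matrix
```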
AlphaFold 2’s magic is mostly attention + A LOT of domain data/priors (MSAs, structures, evolutionary signal). Again attention core + data engineering.
"DeepSeek’s cost breakthrough" is systems engineering.
Agentic software dev/MCP is orchestration, that’s middleware and protocols, it helps use the model, it doesn’t make the model smarter.
Video generation? Diffusion with temporal conditioning and better consistency losses. It’s DALL-E style tech stretched across time with tons of data curation and filtering.
Most headline "wins" are compiler and kernel wins: FlashAttention, paged KV-cache, speculative decoding, distillation, quantization (8/4 bit), ZeRO/FSDP/TP/PP... These only move the cost curve, not the intelligence.
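One way to see the "cost curve, not intelligence" point: a toy symmetric int8 quantization (weight values invented) stores the same information, up to bounded rounding error, in a quarter of the memory:

```python
import numpy as np

# Toy symmetric int8 quantization: per-weight memory drops from
# 4 bytes (float32) to 1 byte (int8); the weights themselves are
# unchanged up to a rounding error bounded by the scale.
w = np.array([0.12, -0.5, 0.33, 0.9], dtype=np.float32)
scale = float(np.abs(w).max()) / 127.0
q = np.round(w / scale).astype(np.int8)     # quantized weights
w_hat = q.astype(np.float32) * scale        # dequantized approximation

print(w.nbytes, q.nbytes)                   # 16 4
print(float(np.max(np.abs(w - w_hat))) < scale)  # True: error bounded
```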
The biggest single driver the last few years has been data: dedup, document quality scores, aggressive filtering, mixture balancing (web/code/math), synthetic bootstrapping, eval-driven rewrites, etc. You can swap half a dozen training "tricks" and get similar results if your data mix and scale are right.
For me, a real post-attention "breakthrough" would be something like: training that learns abstractions with sample efficiency far beyond scaling laws, reliable formal reasoning, or causal/world-model learning that transfers out of distribution. None of the things you listed do that.
Almost everything since attention is optimization, ops, and data curation. I mean give me exact pretrain mix, filtering heuristics, and finetuning datasets for Claude/GPT-5 and without peeking at the secret sauce architecture I can get close just by matching tokens, quality filters and training schedule. The "breakthroughs" are mostly better ways to spend compute and clean data, not new ways to think.
kianN · 11h ago
This is a great summary of why, despite so many tricks being discovered, so little progress is made on the core limitations of LLMs.
kragen · 13h ago
I don't disagree with any of this, though it sounds like you know more about it than I do.
BobbyTables2 · 13h ago
Indeed. I’m shocked that we train “AI” pretty much as one would build a fancy auto-complete.
Not necessarily a bad approach but feels like something is missing for it to be “intelligent”.
Should really be called “artificial knowledge” instead.
jofla_net · 12h ago
This and the parent are both approaching what I see as the main obstacle: we as a species don't know how, in its entirety, a human mind thinks (and it varies among people), so trying to "model" and reproduce it is reduced to a game of black-boxing.
We black-box the mind in terms of what situations it's been seen in and how it has performed; the millions of correlative inputs/outputs are the training data. Yet since we don't know the fullness of the interior and can only see its outputs, it becomes something of a Plato's cave situation. We believe it "thinks" this way, but again we cannot empirically say it performed a task a certain way, so unlike most other engineering problems, we are grasping at straws while trying to reconstruct it.
This doesn't mean a human mind's inner workings can't ever be 100% reproduced, but not until we know them further.
tempodox · 11h ago
And there is another important difference: Our environments have oodles of details that inform us, while LLM training data is just “everything humans have ever written”. Those are completely different things. And LLMs have no concept of facts, only statements about facts in their training data that may or may not be true.
kragen · 12h ago
"What do you mean, they talk?"
"They talk by flapping their meat at each other!"
tmn · 15h ago
There’s a significant difference between predicting what it will specifically look like and predicting sets of possibilities it won’t look like.
kragen · 15h ago
No, there isn't. When speaking of logically consistent possibilities, the two problems are precisely isomorphic under Boolean negation.
bryanrasmussen · 13h ago
Good point. Someone recently said:
> Five years from now AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.
do all these seem logically consistent possibilities to you?
kragen · 13h ago
Yes, obviously. You presumably don't know what "consistent" means in logic, and your untutored intuition is misleading you into guessing that possibilities like those could conceivably be inconsistent.
Or I just wanted to make sure that you were adamant that those three possibilities were equally probable. To reiterate:
> AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.
that each of these things, being logically consistent, has an equal chance of being the case 5 years from now?
kragen · 13h ago
No. Fuck off. There's no uniform probability distribution over the reals, so stop trying to put bullshit in my mouth.
bryanrasmussen · 9h ago
OK well you obviously seem to be having some bad time about something in your life right now so I won't continue, other than to note the comment that started this said
>There’s a significant difference between predicting what it will specifically look like, and predicting sets of possibilities it won’t look like
which I took to mean there are probability distributions around what things will happen. It seemed to be your assertion that there weren't; that a number of things, only one of which seemed especially probable, were all equally probable. I'm glad to learn you don't think this, as it seems totally crazy, especially for someone praising LLMs, which after all spend their time making millions of little choices based on probability.
tombert · 16h ago
I think the concern isn't so much about the current state of AI replacing software engineers, but more "what if it keeps getting better at this same rate?"
I don't really agree with the reasoning [1], and I don't think we can expect this same rate of progress indefinitely, but I do understand the concern.
Jevons paradox doesn't always apply (it depends on the shape of supply-demand curves) and it is entirely possible for technology to eliminate careers. For example, a professional translator can work far faster now than twenty years ago, but the result is that positions for professional translators are rapidly disappearing rather than growing. There's a finite demand for paid translation work and it's fairly saturated. There are also far fewer personal secretaries now than there were in the '70s. That used to be a very common and reasonably well-paying career. It may happen that increasing the efficiency of software development results in even more and even-better-paid software developers, but this isn't a guaranteed outcome.
prisenco · 16h ago
| "what if it keeps getting better at this same rate?"
All relevant and recent evidence points to logarithmic improvement, not the exponential we were told (promised) in the beginning.
We're likely waiting at this point for another breakthrough on the level of the attention paper. That could be next year, it could be 5-10 years from now, it could be 50 years from now. There's no point in prediction.
tombert · 16h ago
Yeah, that's how I feel about it.
People like to assume that progress is this steady upward line, but I think it's more like a staircase. Someone comes up with something cool, there's a lot of amazing progress in the short-to-mid term, and then things kind of level out. I mean, hell, this isn't even the first time that this has happened with AI [1].
The newer AI models are pretty cool but I think we're getting into the "leveling out" phase of it.
The main problem with the current technology, to my eye, is that you need these huge multi-dimensional models with extremely lossy encoding in order to implement the system on a modern CPU, which is effectively a 2.5D piece of hardware that ultimately accesses a 1D array of memory.
Your exponential problems have exponential problems. Scaling this system is factorially hard.
themafia · 16h ago
> logarithmic improvement
Relative to time. Not relative to capital investment. There it's nearly perfectly linear.
shikon7 · 15h ago
Shouldn't it be the other way round, linear to time, and logarithmic relative to (the exponentially growing) capital investment?
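This reading can be put in a toy model (all numbers invented): if capital invested grows exponentially with time while capability grows only with the log of capital, then capability plotted against time comes out linear.

```python
import math

# Exponentially growing investment plus logarithmic returns to
# capital yields linear-looking progress over time.
years = range(1, 6)
capital = [10 ** y for y in years]              # exponential in time
capability = [math.log10(c) for c in capital]   # logarithmic in capital
print(capability)  # approximately [1.0, 2.0, 3.0, 4.0, 5.0]: linear in time
```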
prisenco · 15h ago
I don't follow, can you explain more?
BoiledCabbage · 8h ago
> All relevant and recent evidence points to logarithmic improvement,
Any citations for this pretty strong assertion? And please don't reply with "oh you can just tell by feel".
teaearlgraycold · 16h ago
You imply the models have been improving in some capacity.
Bolwin · 11h ago
You really think GPT-3 could do half the things current models do?
echelon · 16h ago
If software developers wind up replaced by AI, I think it's safe to say every industry's labor will be replaced. Trade jobs won't be far behind, because robotics will be nipping at their heels.
If software falls, everything falls.
But as we've seen, these models can't do the job themselves. They're best thought of as an exoskeleton that requires a pilot. They make mistakes, and those mistakes multiply into a mess if a human isn't around. They don't get the big picture, and it's not clear they ever will with the current models and techniques.
The only field that has truly been disrupted is graphic design and art. The image and video models are sublime and truly deliver 10,000x gains in speed and reductions in cost and required talent.
This is probably for three reasons:
1. There's so much straightforward training data
2. The laws of optics and structure seem correspondingly easier than the rules governing intelligence. Simple animals evolved vision hundreds of millions of years ago, and we have all the math and algorithmic implementations already. Not so, for intelligence.
3. Mistakes don't multiply. You can brush up the canvas easily and deliver the job as a smaller work than, say, a 100k LOC program with failure modes.
bc569a80a344f9c · 16h ago
> If software developers wind up replaced by AI, I think it's safe to say every industry's labor will be replaced. Trade jobs won't be far behind, because robotics will be nipping at their heels.
If software falls, everything falls.
I don’t think that follows at all. Robotics is notably much, much, much harder than AI/ML. You can replace programmers without robotics. You can’t replace trades without them.
ares623 · 16h ago
There will be millions of meat based robots lining up to flood the market when every knowledge based worker is displaced.
esseph · 16h ago
Driving down the value of their labor, but still not competitive enough globally, because that labor is just so much cheaper in other countries.
grumple · 1h ago
A laborer in Asia can't install plumbing in America, install electrical systems in America, etc...
We also should end the exploitative nature of globalization. Outsourced work should be held to the same standards as laborers in modern countries (preferably EU, rather than American, standards).
echelon · 16h ago
> Robotics is notably much, much, much harder than AI/ML.
Are you so sure?
Almost every animal has solved locomotion, some even with incredibly primitive brains. Evolution knocked this out of the park hundreds of millions of years ago.
Drosophila can do it, and we've mapped their brains.
Only a few animals have solved reasoning.
I'm sure the robotics videos I've seen lately have been cherry picked, but the results are nothing short of astounding. And there are now hundreds of billions of dollars being poured into solving it.
I'd wager humans stumble across something evolution had a cake walk with before they stumble across the thing that's only happened once in the known universe.
bc569a80a344f9c · 16h ago
Yes, robotics is harder. Here are some links: the wiki as an intro, and a reasonably entertaining write-up that explains the concept in some depth, specifically comparing the issue to LLM progress as of 2024.
Edit: just to specifically address your argument, doing something evolution has optimized for hundreds of millions of years is much harder than something evolution “came up with” very recently (abstract thought).
echelon · 15h ago
> Edit: just to specifically address your argument, doing something evolution has optimized for hundreds of millions of years is much harder than something evolution “came up with” very recently (abstract thought).
You've got this backwards.
If evolution stumbled upon locomotion early -- and several times independently through convergent evolution --, that means it's an easy problem, relatively speaking.
We've come up with math and heuristics for robotics (just like vision and optics). We're turning up completely empty for intelligence.
Avshalom · 16h ago
Well, a large chunk of HN thinks the existing generation of AI is capable of doing 80% of their job. This has not translated at all to robotic stevedores, and even less to robotic plumbers, so yeah, all current evidence supports "Robotics is notably much, much, much harder than AI/ML".
bryanrasmussen · 13h ago
>Almost every animal has solved locomotion, some even with incredibly primitive brains. Evolution knocked this out of the park hundreds of millions of years ago.
>Only a few animals have solved reasoning.
The assumption here seems to be that reasoning will be able to do what evolution did hundreds of millions of years ago (with billions of years of work put into that doing) much more easily than evolution did, for... some reason that is not exactly expressed?
Logically, I should also note that given the premises laid out by the first quoted paragraph, the second quoted paragraph should not be "only a few animals have solved reasoning"; it should be "evolution has only solved reasoning a few times".
tombert · 16h ago
I think this is making an assumption that the number of potential jobs is fixed. I don't agree with that assumption. I think as people learn how to use these tools then more industries pop up to use those tools.
ETA:
You updated your post and I think I agree with most of what you said after you updated.
BobbyTables2 · 12h ago
If AI robots can replace labor, then they’ll figure out humanity only gets in their way.
RobRivera · 16h ago
> ignore hype and build something interesting
AND don't be afraid to start small!
Making a Discord or Twitch chat bot, a Quake mod, a silly soundboard of all your favorite Groot quotes.
echelon · 16h ago
AI coding isn't eating the industry, offshoring is.
Inflation, end of ZIRP, and IRS section 174 kicked this off back in 2022 before AI coding was even a thing.
Junior devs won't lose jobs to AI. They'll lose jobs to the global market.
American software developers have lost the stranglehold on the job market.
linotype · 16h ago
Section 174 has been restored, thankfully. We’ll see if the damage is done.
echelon · 15h ago
Fingers crossed. The interest rate is still a tremendously bad problem.
If it had been ZIRP and low interest, companies would have just borrowed to cover the amortization that 174 introduced. But unfortunately money doesn't grow on trees anymore.
cyanydeez · 16h ago
Also, morals and ethics are optional!
Animats · 15h ago
The actual report is four links down and paywalled.[1]
Their source for future estimation is apparently Google Trends tallies of searches for trade schools.
Actual growth over the last few years is 3.2% per year, from 2019 to 2024.
US population growth is about 0.5% per year, so deduct that.
The happiest workers I've met are the people fixing gas leaks for the gas company.
No one ever rushes them, plenty of work, and not stuck in a cubicle.
Making every student college-bound is a crime. Not everyone needs to forget calculus...
BenFranklin100 · 12h ago
This is good news for society. Too many young men have fallen behind in the last ten years and young women now outpace them in college. It’s great young women are doing well, but if young men don’t have a parallel career path, they may become disheartened about their future, and we end up with a large cohort of disenchanted young men and political instability. The trades can provide an alternative non-college career path for these young men.
mna_ · 8h ago
But do highly educated women want to date and marry plumbers, mechanics, bricklayers, plasterers, etc? Women almost always date upwards.
BenFranklin100 · 4h ago
That is an issue, but I think attitudes are changing.
On the other hand, they want to date unemployed or underemployed men in menial service jobs even less, so there’s that.
tombert · 16h ago
I've been trying to convince my brother-in-law (who works at a convenience store) to consider trade school for years now, in no small part because I think a lot of the jobs you go to trade school for are safe from automation.
For reasons unclear to me, trade schools seem to have been stigmatized as somehow "lesser" than university. I don't completely understand why; the world needs welders and AC technicians and Practical Nurses much more than we need more software engineers working at a Silicon Valley startup.
j7ake · 16h ago
University is a path to a more global and flexible career for positions that don’t even exist yet.
There is more mobility and more flexibility to future jobs with a university degree like mathematics than with trade.
Gud · 13h ago
I used to install high voltage switchgear worldwide for a living, I didn’t go to any university. Now I commission the same equipment.
If you are a skilled tradesman, your skills are sought after globally.
j7ake · 13h ago
Good luck trying to get a work visa to places like Europe or USA if you have a trades background
Gud · 12h ago
Well I’m European so.
Typically you are sponsored by your employer.
throwaway290 · 12h ago
You can't get a work visa to USA or Europe anyway without someone hiring you first no?
righthand · 15h ago
It’s because we built a society that prizes clever “get rich quick” ideas over pride in having a job. We have lots of factory and manufacturing jobs in the USA, but no one wants to work the long hours for pretty decent pay when they can vibe-code for marketing companies and barely work at all.
squigz · 16h ago
> the world needs welders and AC technicians and Practical Nurses much more than we need more software engineers working at a Silicon Valley startup.
The world needs software engineers too. Silicon Valley isn't the world. Not to mention, you know... it's not just programmers that come out of universities.
Anyway, trades are "looked down" on like that because they're a lot of very hard, very physical work. I would certainly encourage my children to go to university if it's going to lead to a much more comfortable life.
tombert · 16h ago
For the record, I have a bachelors and a masters in computer science and while I didn't finish I did attend a PhD program. I'm not trying to dog on universities as a concept.
That said, I think universities aren't a good fit for a lot of people. A lot of people (and I include my brother-in-law in this group) would not be happy with a desk job, and while I think he's pretty smart I don't know that he would do well having to attend four years of a university. I think trade schools are excellent for these kinds of people.
I don't have children, but I would like to think that if I did I would try and help them get a career they would be happy with, and "comfortable" doesn't necessarily imply that.
I prefer to have a desk job, I like writing software, it's why I spend too much time on HN, but I think a lot of people would benefit from a trade school, and I don't think they should be stigmatized.
antonymoose · 16h ago
I would largely second this. I’m the son of a plumber/handyman/GC. I’ve spent my childhood on service calls and job sites since the age of 5, spent my teenage summers schlepping tools and driving lightning rods; you name it, I’ve done it. I wouldn’t trade my nearly 20-year software career for the trades, I don’t think.
However, the biggest thing I think the HN crowd might appreciate that they have and we lack is an easy path to freedom through self-employment: if you want self-employment as a programmer, you need the fortune of a novel idea, improvement, or something new in some sense. You might also need to chase the VC dragon.
You want to start a plumbing business? Work hard 5-10 years, get out on your own with a van and tools and you have a turn key business idea. Provide good service at a proper rate. End of story.
creer · 16h ago
> we lack is an easy path to freedom through self employment - if you want self-employment as a programmer you need the fortune of a novel idea [etc]
See the many software and other computing people who successfully run under a consultant/contractor model. You can absolutely be self-employed. Good service at a proper rate (and a pretty high one too, usually). Self-employed and mostly remote if you want it.
tombert · 16h ago
I mean it's not too hard for software people to start contracting businesses. I've done private contracting in the past between W2 jobs, and I've debated trying to do it full time.
All it took was an internet connection and a decent laptop.
antonymoose · 16h ago
I’ve tried; it’s far harder to find work and succeed in a global marketplace. While I do live in a metro, it’s a small one that cannot sustain an income close to my day job.
If you’re a tradesman, you’re never going to have to compete with Eastern Europe, LATAM, India, or anywhere global.
tombert · 16h ago
That's true; my private contracting has always been done via connections. I reach out to coworkers from previous jobs and ask if they need any work, and sometimes they do.
Cold-calling to get work would definitely be harder because you definitely are competing with much cheaper labor.
squigz · 15h ago
> You want to start a plumbing business? Work hard 5-10 years, get out on your own with a van and tools and you have a turn key business idea. Provide good service at a proper rate. End of story.
This strikes me as underselling the hurdles here. Ignoring the whole "just start a self-sufficient business" thing, what happens when you get sick? What about medical costs as you age? Retirement plans?
cpursley · 15h ago
Huh? You purchase insurance and invest into retirement funds.
It takes forever to get into even an apprenticeship at the existing places. Sure, you have lots of people retiring out but as far as new jobs go?
I see people hem and haw about trades and how "great" they are all the time, but as someone who has worked in trades for the last 15 years, not nearly as many people can handle it or like it as they think. Many people heard about their cousin or friend who broke $100K doing trade work, but what they fail to mention is they did that by working 90 hours a week every week all year to do it. They don't mention how they kept working through injury and now they will feel it the rest of their life. And they don't mention or haven't been working long enough to feel the bust between the booms when they are either taking jobs that only earn $5 an hour, or just don't have enough work for full hours.
Yes we need trade workers, but there has never been a lack of trade workers, only a lack of pay. I know far more people that have left trades than have joined. Many liked the physical work, but couldn't justify the health costs on top of the poor or unstable pay.
The median outcome for someone baseline reliable & skilled but doesn't have the inclination to run a business is OK but usually not great.
Nearly every skilled trade costs at least 100$ an hour for their labor, often far more. Yes even in West Virginia or Mississippi. Stop getting scammed by working for someone else and work for yourself.
Trades are so lucky too in that it’s hard for normies to evaluate the quality of their work - so you can make tons of money while still being really shitty at it.
In the meantime keep learning and practicing cs fundamentals, ignore hype and build something interesting.
Anyone who tells you they know what the future looks like five years from now is lying.
On a codebase of 10,000 lines any action will cost 100,000,000 AI units. On one with 1,000,000 lines it will cost 1,000,000,000,000 AI units.
I work on these things for a living, and no one else ever seems to think two steps ahead about what the mathematical limitations of the transformer architecture mean for transformer-based applications.
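To make the arithmetic explicit (the "AI unit" numbers above are illustrative, and real systems use windowing/caching to blunt this): vanilla self-attention compares every token to every other token, so cost grows with the square of context size.

```python
def attention_cost(context_tokens: int) -> int:
    """Toy cost model: full self-attention scores every pair of
    tokens, so work scales as n**2 in the context length."""
    return context_tokens ** 2

# Matching the illustrative numbers above (treating 1 line ~ 1 token):
print(attention_cost(10_000))     # 100,000,000
print(attention_cost(1_000_000))  # 1,000,000,000,000
```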
Humans also keep struggling with context, so while large contexts may limit AI performance, they won't necessarily prevent them from being strongly superhuman.
It’s like asking a college student 4th grade math questions and then being impressed they knew the answer.
I’ve used Copilot a lot. Faster than Google, gives great results.
Today I asked it for the name of a French restaurant that closed in my area a few years ago. The first answer was a Chinese fusion place… all the others were off too.
Sure, keep questions confined to something it was heavily trained on, answers will be great.
But yeah, AI going to get rid of a lot of low skilled labor.
What's the point of this anecdote? That it's not omniscient? Nobody should be thinking that it is.
I can ask it how many coins I have in my pocket and I bet you it won't know that either.
No, it's more like asking a 4th-grader college math questions, and then desperately looking for ways to not be impressed when they get it right.
> Today I asked it for the name of a French restaurant that closed in my area a few years ago. The first answer was a Chinese fusion place… all the others were off too.
What would have been impressive is if the model had replied, "WTF, do I look like Google? Look it up there, dumbass."
OK, I will bite.
So "sparsely-gated MoE" isn't some new intelligence; it's a sharding trick. You trade parameter count for FLOPs/latency with a router. And MoE predates transformers anyway.
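For anyone who hasn't seen it, the whole trick fits in a few lines of toy numpy (illustrative only, not any real model's router):

```python
import numpy as np

def moe_forward(x, experts, router_w, k=2):
    """Sparsely-gated MoE as a sharding trick: a learned router picks the
    top-k experts per input, so only k of len(experts) networks actually
    run (FLOPs stay flat) while parameter count scales with expert count."""
    logits = x @ router_w                     # one routing score per expert
    top = np.argsort(logits)[-k:]             # indices of the k winners
    gate = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over winners
    return sum(g * experts[i](x) for g, i in zip(gate, top))

# Toy demo: 8 "experts" that just scale their input.
rng = np.random.default_rng(0)
d, E = 16, 8
experts = [lambda x, i=i: x * (i + 1) for i in range(E)]
router_w = rng.standard_normal((d, E))
y = moe_forward(rng.standard_normal(d), experts, router_w)
```

Only 2 of the 8 experts ever execute per input, which is the entire FLOPs-for-parameters trade.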
RLHF is packaging. Supervised finetuning on instructions, learn a reward model, then nudge the policy. That's a training-objective swap plus preference data. It's useful, but not a breakthrough.
CoT is a prompting hack to force the same model to externalize intermediate tokens. The capability was already there; you're just sampling a longer trajectory. It's UX for sampling.
Scaling laws are an empirical fit telling you "buy more compute and data". That's a budgeting guideline, not new math or architecture. https://www.reddit.com/r/ProgrammerHumor/comments/8c1i45/sta...
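Concretely, a scaling law is just a power-law fit, i.e. a straight line in log-log space. A sketch with made-up numbers (the exponent here is illustrative, not any published fit):

```python
import numpy as np

# Hypothetical points lying exactly on a power law L(N) = A * N**(-alpha),
# roughly the shape reported for LLM loss vs. parameter count.
params = np.array([1e8, 1e9, 1e10, 1e11])
loss = 4.0 * params ** -0.076

# A power law is a straight line in log-log space, so a linear fit recovers it:
slope, intercept = np.polyfit(np.log(params), np.log(loss), 1)
print(-slope)  # recovers ~0.076: "spend more compute/data" quantified, nothing more
```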
LoRA is linear algebra 101: low-rank adapters to cut training cost and avoid touching the full weights. The base capability still comes from the giant pretrained transformer.
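A toy numpy sketch of the idea (sizes and initialization are illustrative, not any particular model's):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 512, 8                            # hidden size, adapter rank (r << d)

W = rng.standard_normal((d, d))          # frozen pretrained weight
A = rng.standard_normal((d, r)) * 0.01   # trainable low-rank factor
B = np.zeros((r, d))                     # zero init, so the adapter starts as a no-op

def forward(x):
    # Full finetuning would update all d*d entries of W;
    # LoRA trains only A and B: W_eff = W + A @ B.
    return x @ (W + A @ B)

trainable = A.size + B.size
print(trainable, W.size)                 # 8192 trainable vs 262144 frozen params
```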
AlphaFold 2’s magic is mostly attention + A LOT of domain data/priors (MSAs, structures, evolutionary signal). Again attention core + data engineering.
"DeepSeek’s cost breakthrough" is systems engineering.
Agentic software dev/MCP is orchestration, that’s middleware and protocols, it helps use the model, it doesn’t make the model smarter.
Video generation? Diffusion with temporal conditioning and better consistency losses. It’s DALL-E style tech stretched across time with tons of data curation and filtering.
Most headline "wins" are compiler and kernel wins: FlashAttention, paged KV-cache, speculative decoding, distillation, quantization (8/4 bit), ZeRO/FSDP/TP/PP... These only move the cost curve, not the intelligence.
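As one example of how these move the cost curve, here's symmetric int8 quantization in toy form (per-tensor, no real kernel tricks):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: store each weight in
    1 byte instead of 4, at the cost of a little rounding error."""
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).standard_normal(1024).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(dequantize(q, s) - w).max()
print(q.nbytes, w.nbytes, err)  # 4x smaller; worst-case error is about scale/2
```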
The biggest single driver the last few years has been the data: dedup, document quality scores, aggressive filtration, mixture balancing (web/code/math), synthetic bootstrapping, eval-driven rewrites, etc. You can swap half a dozen training "tricks" and get similar results if your data mix and scale are right.
For me, a real post-attention "breakthrough" would be something like: training that learns abstractions with sample efficiency far beyond scaling laws, reliable formal reasoning, causal/world-model learning that transfers out of distribution. None of the things you listed do that.
Almost everything since attention is optimization, ops, and data curation. I mean, give me the exact pretrain mix, filtering heuristics, and finetuning datasets for Claude/GPT-5, and without peeking at the secret-sauce architecture I could get close just by matching tokens, quality filters, and training schedule. The "breakthroughs" are mostly better ways to spend compute and clean data, not new ways to think.
Not necessarily a bad approach but feels like something is missing for it to be “intelligent”.
Should really be called “artificial knowledge” instead.
"They talk by flapping their meat at each other!"
> Five years from now AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.
do all these seem logically consistent possibilities to you?
https://en.m.wikipedia.org/wiki/Consistency
> AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.
that each of these things, being logically consistent, has an equal chance of being the case 5 years from now?
>There’s a significant difference between predicting what it will specifically look like, and predicting sets of possibilities it won’t look like
which I took to mean there are probability distributions over what will happen. It seemed to be your assertion that there weren't, i.e. that of a number of possibilities, only one of which seemed especially probable, all were equally probable. I'm glad to learn you don't think this, as it seems totally crazy, especially for someone praising LLMs, which after all spend their time making millions of little choices based on probability.
I don't really agree with the reasoning [1], and I don't think we can expect this same rate of progress indefinitely, but I do understand the concern.
[1] https://en.wikipedia.org/wiki/Jevons_paradox
All relevant and recent evidence points to logarithmic improvement, not the exponential we were told (promised) in the beginning.
We're likely waiting at this point for another breakthrough on the level of the attention paper. That could be next year, it could be 5-10 years from now, it could be 50 years from now. There's no point in prediction.
People like to assume that progress is this steady upward line, but I think it's more like a staircase. Someone comes up with something cool, there's a lot of amazing progress in the short-to-mid term, and then things kind of level out. I mean, hell, this isn't even the first time that this has happened with AI [1].
The newer AI models are pretty cool but I think we're getting into the "leveling out" phase of it.
[1] https://en.wikipedia.org/wiki/AI_winter
Your exponential problems have exponential problems. Scaling this system is factorially hard.
Relative to time. Not relative to capital investment. There it's nearly perfectly linear.
Any citations for this pretty strong assertion? And please don't reply with "oh you can just tell by feel".
If software falls, everything falls.
But as we've seen, these models can't do the job themselves. They're best thought of as an exoskeleton that requires a pilot. They make mistakes, and those mistakes multiply into a mess if a human isn't around. They don't get the big picture, and it's not clear they ever will with the current models and techniques.
The only field that has truly been disrupted is graphic design and art. The image and video models are sublime and truly deliver 10,000x reductions in time, cost, and required talent.
This is probably for three reasons:
1. There's so much straightforward training data
2. The laws of optics and structure seem correspondingly easier than the rules governing intelligence. Simple animals evolved vision hundreds of millions of years ago, and we have all the math and algorithmic implementations already. Not so, for intelligence.
3. Mistakes don't multiply. You can brush up the canvas easily, and the deliverable is a smaller work than, say, a 100k LOC program with failure modes.
I don’t think that follows at all. Robotics is notably much, much, much harder than AI/ML. You can replace programmers without robotics. You can’t replace trades without them.
We also should end the exploitative nature of globalization. Outsourced work should be held to the same standards as laborers in modern countries (preferably EU, rather than American, standards).
Are you so sure?
Almost every animal has solved locomotion, some even with incredibly primitive brains. Evolution knocked this out of the park hundreds of millions of years ago.
Drosophila can do it, and we've mapped their brains.
Only a few animals have solved reasoning.
I'm sure the robotics videos I've seen lately have been cherry picked, but the results are nothing short of astounding. And there are now hundreds of billions of dollars being poured into solving it.
I'd wager humans stumble across something evolution had a cakewalk with before they stumble across the thing that's only happened once in the known universe.
https://en.m.wikipedia.org/wiki/Moravec%27s_paradox
https://harimus.github.io/2024/05/31/motortask.html
Edit: just to specifically address your argument, doing something evolution has optimized for hundreds of millions of years is much harder than something evolution “came up with” very recently (abstract thought).
You've got this backwards.
If evolution stumbled upon locomotion early (and several times independently, through convergent evolution), that means it's an easy problem, relatively speaking.
We've come up with math and heuristics for robotics (just like vision and optics). We're turning up completely empty for intelligence.
>Only a few animals have solved reasoning.
The assumption here seems to be that reasoning will be able to do what evolution did hundreds of millions of years ago (with billions of years of work put into that doing) much more easily than evolution did, for... some reason that is not exactly expressed?
Logically, I should also note that given the premises laid out in the first quoted paragraph, the second quoted paragraph shouldn't be "only a few animals have solved reasoning"; it should be "evolution has only solved reasoning a few times."
ETA:
You updated your post and I think I agree with most of what you said after you updated.
AND don't be afraid to start small!
Make a Discord or Twitch chat bot, a Quake mod, a silly soundboard of all your favorite Groot quotes.
Inflation, end of ZIRP, and IRS section 174 kicked this off back in 2022 before AI coding was even a thing.
Junior devs won't lose jobs to AI. They'll lose jobs to the global market.
American software developers have lost the stranglehold on the job market.
If it had been ZIRP and low interest, companies would have just borrowed to cover the amortization that 174 introduced. But unfortunately money doesn't grow on trees anymore.
Their source for future estimation is apparently Google Trends tallies of searches for trade schools.
Actual growth over the last few years is 3.2% per year, from 2019 to 2024.
US population growth is about 0.5% per year, so deduct that.
Always look for the actuals.
[1] https://validatedinsightstradeschools2.carrd.co/
No one ever rushes them, plenty of work, and not stuck in a cubicle.
Making every student college bound is a crime. Not everyone needs to forget calculus...
On the other hand, they want to date unemployed or underemployed men in menial service jobs even less, so there’s that.
For reasons kind of unclear to me, trade schools seem to have been stigmatized as somehow "lesser" than university. I don't completely understand why that is; the world needs welders and AC technicians and Practical Nurses much more than we need more software engineers working at a Silicon Valley startup.
There is more mobility and more flexibility in future jobs with a university degree like mathematics than with a trade.
If you are a skilled tradesman, your skills are sought after globally.
Typically you are sponsored by your employer.
The world needs software engineers too. Silicon Valley isn't the world. Not to mention, you know... it's not just programmers that come out of universities.
Anyway, trades are "looked down" on like that because they're a lot of very hard, very physical work. I would certainly encourage my children to go to university if it's going to lead to a much more comfortable life.
That said, I think universities aren't a good fit for a lot of people. A lot of people (and I include my brother-in-law in this group) would not be happy with a desk job, and while I think he's pretty smart I don't know that he would do well having to attend four years of a university. I think trade schools are excellent for these kinds of people.
I don't have children, but I would like to think that if I did I would try and help them get a career they would be happy with, and "comfortable" doesn't necessarily imply that.
I prefer to have a desk job, I like writing software, it's why I spend too much time on HN, but I think a lot of people would benefit from a trade school, and I don't think they should be stigmatized.
However, the biggest thing I think the HN crowd might appreciate that they have and we lack is an easy path to freedom through self employment - if you want self-employment as a programmer you need the fortune of a novel idea, improvement, or something new in some sense. You might need to also chase the VC dragon.
You want to start a plumbing business? Work hard for 5-10 years, get out on your own with a van and tools, and you have a turnkey business. Provide good service at a proper rate. End of story.
See the many software and other computing people who successfully run under a consultant / contractor model. You can absolutely be self employed. Good service at a proper rate (and pretty high too, usually.) Self employed and high percentage remote if you want it.
All it took was an internet connection and a decent laptop.
If you’re a tradesman, you’re never going to compete with Eastern Europe, LATAM, India, or anywhere global.
Cold-calling to get work would definitely be harder because you definitely are competing with much cheaper labor.