"AI Will Replace All the Jobs " Is Just Tech Execs Doing Marketing

189 points by botanicals6 | 6/4/2025, 2:38:22 PM | 275 comments | sparktoro.com ↗

Comments (275)

ednite · 1d ago
Not an expert here, just speaking from experience as a working dev. I don’t think AI is going to replace my job as a software developer anytime soon, but it’s definitely changing how we work (and in many cases, already has).

Personally, I use AI a lot. It’s great for boilerplate, getting unstuck, or even offering alternative solutions I wouldn’t have thought of. But where it still struggles sometimes is with the why behind the work. It doesn’t have that human curiosity, asking odd questions, pushing boundaries, or thinking creatively about tradeoffs.

What really makes me pause is when it gives back code that looks right, but I find myself thinking, “Wait… why did it do this?” Especially when security is involved. Even if I prompt with security as the top priority, I still need to carefully review the output.

One recent example that stuck with me: a friend of mine, an office manager with zero coding background, proudly showed off how he used AI to inject some VBA into his Excel report to do advanced filtering. My first reaction was: well, here it is, AI replacing my job. But what hit harder was my second thought: does he even know what he just copied and pasted into that sensitive report?

So yeah, for me AI isn’t a replacement. It’s a power tool, and eventually, maybe a great coding partner. But you still need to know what you’re doing, or at least understand enough to check its work.

tines · 1d ago
I would agree with you, but the people making the decision to fire or keep you don’t care about quality, nor do they care about understanding AI or its limitations. If AI mostly does kinda the right thing 70% of the time but saves the company $80k a year, that’s a no-brainer. We’re being Pollyanna-ish to think that anyone cares about the things we care about, that you mention in your post.

If firing you saves $1.50 a year, they’ll do it.

ednite · 1d ago
Fair enough, but what happens when those same companies start realizing it’s not just about reduced quality, but also security risks and costly errors? At some point, the savings get wiped out by the consequences.

Do they go back to hiring human expertise then?

I totally agree though, the business mindset of saving a buck often outweighs everything else. I’m actually going through something similar right now with a client being swayed by a so-called “AI expert” just to cut costs. But that’s a whole other story.

keiferski · 1d ago
They go bankrupt, get acquired, or just hope that no one notices their security mistakes.

Admitting mistakes and correcting them directly is not a common thing for CEOs to do.

ednite · 1d ago
I guess that's what makes it scary. Good point, thanks.
soraminazuki · 23h ago
Just like how Equifax faced consequences for its massive data breach? Oh wait.
tines · 1d ago
> security risks and costly errors?

I hope that you're right, but the problem is that the regulatory bodies are captured by the players that they are supposed to regulate. Can you name a time in recent history where a company had to pay a penalty for a harmful action, either intentional or neglectful, that exceeded the profit they gained from the action?

ednite · 1d ago
Another good point. I guess my values don’t align with how businesses handle accountability. Here’s hoping things shift, but you are right, I’m not holding my breath.
davidcbc · 1d ago
And eventually they learn the lesson that many companies learned in the early 00s with offshoring: you get what you pay for.
kunzhi · 1d ago
I don't think any of those companies learned a lesson.
davidcbc · 1d ago
Sure they did, or at least the ones who survived did. Offshoring went from the thing that was going to kill the US Software Engineering industry to a fairly minor portion of it with US based engineers being even more in demand than they were before
threecheese · 23h ago
Large private SaaS company, non tech hub/normal US large city: 60/40 split offshore/onshore for engineering. Not minor imo. Accelerated post-COVID from maybe 10/20% India which we’d had for a decade. They are gone now, replaced with lower cost coders and a lot more of them.

The 60 is mostly farshore, unless you are a high-priority project, in which case you get nearshore. Rumor is it’s expanding; Vietnam can do Copilot just as well as the US, and requirements are written in English. They’ve started shipping Vietnamese tech leads here under visas; they alone interface between Product/eng leaders and the low-English teams. For all the Merika! I hear on the news, I certainly am not seeing it in the workforce.

palmotea · 1d ago
> Sure they did, or at least the ones who survived did. Offshoring went from the thing that was going to kill the US Software Engineering industry to a fairly minor portion of it with US based engineers being even more in demand than they were before

Huh? Offshoring is a fairly major component of my employer's workforce, and the US software engineering staff has dwindled (along with a lot of other departments). We moved into a much smaller building recently.

Now we work on godawful blended teams with the worst time-zone difference possible (initially projects were either all India or all US, until the powers that be felt the need to push costs down further).

It's the same at my friends' employers, if they're of any size.

reaperducer · 1d ago
> Offshoring is a fairly major component of my employer's workforce

Your employer is not the only employer.

bdangubic · 1d ago
you should look up numbers for this before posting :)
palmotea · 1d ago
I never said it was, but I'm not the only person I know in this situation, and the people in India I work with have worked for other major Western companies. I also don't work at someplace like a bank, where technology is seen as pure cost, but someplace where the main products are mostly software, so it's a strategic investment.

I just think it's pretty unbelievable that offshoring is a "fairly minor portion" of the "US Software Engineering industry." It didn't totally kill it, like some overly pessimistic (or enthusiastic, depending on their perspective) people may have predicted, but it would take a lot of evidence to convince me it's not a major part of it.

const_cast · 23h ago
1. Please don't say "Huh?", you know what the person is talking about. You don't need to feign that their argument is so stupid it befuddles you.

2. Off-shoring didn't go away, sure, but it's pretty costly because you have to constantly translate culture differences, time differences, etc. Also reading code is harder than writing it, so just dumping more people to write code doesn't help much.

fakedang · 1d ago
You guys seem to underestimate how much traditional industry is tied in with offshoring. From retail to traditional banking to insurance, there are a shit ton of industries whose jobs are now in India and the Philippines and not in the US or the EU.
astura · 1d ago
My first two jobs in the mid 00s were fixing or replacing software that had been offshored.
soraminazuki · 23h ago
Exactly, businesses will soon treat software engineering the way Google does "support." It'll all be just robots pissing everyone off. Well everyone, except the executives who will receive a nice big paycheck.
paul7986 · 1d ago
Indeed. As a UX Researcher, Designer, and Front-End Dev, I've found ChatGPT Plus does the same level of design and front-end development I do. It takes me hours, versus five minutes or less for ChatGPT Plus, to come up with professional logos, a web app or website design around the logo, and then spit out the front-end code. Once I saw that last Fall, I was like: yup, there it is, it can do good portions of my job. So at my job I've been doing a lot more Customer UX Support and Research, and telling my co-workers and client that if I could use ChatGPT at work I would be more effective (have it do the design and front-end development). I feel I need to jump ahead and show I embrace this change ASAP. AI cannot interface with clients, do UX research, or do anything else that involves human-to-human interaction, so to me CX/UX Research is safe, and such workers can now also do UX Design and Front-End Development using AI. Really, anyone can now do those two things, and quickly.
ednite · 1d ago
Great example, and to be honest, I’m guilty of the same. I used ChatGPT to come up with a logo for one of my projects, and it only took about five minutes. The kicker? The designer I would’ve usually handed it to actually approved it and liked it.

It kind of stings, though. That used to be someone’s craft, their livelihood. But like you said, the key now is finding ways to adapt, maybe by leaning more into the human side of the work: real collaboration, client interaction, deeper research.

Still, not everyone can or wants to adapt. What happens to the quiet designer who just wants to spend the day in their bubble, listening to music and creating? Not chasing clients, not pivoting constantly, just doing what they love. That’s the part that saddens me at times when I see AI in action. Thanks for sharing.

djhn · 23h ago
How can it come up with a logo? Is it able to output vector graphics now?

ChatGPT and Gemini choked on trying to generate even the simplest of 2d shapes, like a 5-pointed star, as SVG.

ednite · 21h ago
At this early stage of our project, a PNG was sufficient. Later on, I can have a designer convert it to a vector format, or our printing service provider can handle that for us. I assume it won’t be long before AI can generate a perfect vector format directly.
outside1234 · 1d ago
But if the AI inserts things that constantly cost the company $400k per previous head in a lawsuit when private information is leaked, that equation flips.
tines · 1d ago
This kind of thing won't happen much in our future Technopoly. The regulators are captured by the regulated, and all that will happen when a mistake is made is a shuffling of the cards.
reverendsteveii · 1d ago
I feel like I ran into an example use case yesterday that really illustrates how I work with AI as a dev. It's a simple db entity, with a field called "parent" that is the id of another entity of the same type. Theoretically there can be any number of parents (though practically, with the data we're modeling, we don't ever expect more than 3 levels). Classic case for recursion, right? So I whip up a quick method that goes something like

public void getEntityWithParents(List<DBEntity> entityList, String id) { DBEntity entity = dao.getById(id); entityList.add(entity); if (entity.getParent() != null) { getEntityWithParents(entityList, entity.getParent()); } }

Because I'm working from a make it work first then make it efficient perspective I realize that I'm making a lot of DB calls there. So I pop open gemini, copy/paste that algorithm and ask it "How can I replace this method with a single call to the database, in postgresql?" and it gives me

WITH RECURSIVE parent_chain AS (SELECT id, name, parent, 0 AS depth FROM fam.dx_concept WHERE id = :id UNION ALL SELECT i.id, i.name, i.parent, pc.depth + 1 FROM fam.dx_concept i INNER JOIN parent_chain pc ON i.id = pc.parent) SELECT id, name, parent FROM parent_chain ORDER BY depth;

That might have been a day or two of research; instead it was 5 minutes to come up with a theory and an hour or so writing tests. Gemini saved the day there. But it wasn't able to determine what needed to be done (minimizing DB calls), only how to do it, and it wasn't able to verify correctness. That's where we'll fit in in all of this: figuring out what to do and then making sure we actually did it.
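For anyone who wants to poke at the pattern, here is a minimal, self-contained sketch of the same recursive-CTE idea, using SQLite from Python's standard library (the `entity` table name and its contents are made up for illustration, not the commenter's real schema):

```python
import sqlite3

# Toy stand-in for the entity table described in the comment:
# each row may point at a parent row of the same type.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE entity (id TEXT PRIMARY KEY, name TEXT, parent TEXT)")
conn.executemany(
    "INSERT INTO entity VALUES (?, ?, ?)",
    [("a", "grandparent", None), ("b", "parent", "a"), ("c", "child", "b")],
)

# One round trip to the database instead of one query per level.
rows = conn.execute(
    """
    WITH RECURSIVE parent_chain AS (
        SELECT id, name, parent, 0 AS depth FROM entity WHERE id = ?
        UNION ALL
        SELECT e.id, e.name, e.parent, pc.depth + 1
        FROM entity e JOIN parent_chain pc ON e.id = pc.parent
    )
    SELECT id, name FROM parent_chain ORDER BY depth
    """,
    ("c",),
).fetchall()

print(rows)  # the starting entity first, then each ancestor in order
```

Same shape as the Gemini-suggested query: the anchor member selects the starting row, and the recursive member repeatedly joins on the `parent` column until it hits NULL.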

ednite · 1d ago
This is a perfect example of how AI can be a powerful partner in development when you already know what you’re trying to solve.

Feels like the real sweet spot right now is: humans define the goal and validate the work, AI just helps fill in the middle.

reverendsteveii · 1d ago
if they make an AI that lets you format code readably on HackerNews we are all sauteed
Jensson · 18h ago
You can use 2 spaces before and you get a <code> block. AI should be able to do it for you if you ask it to, since it's good at transforming between formats.

  public void getEntityWithParents(List<DBEntity> entityList, String id) {
      DBEntity entity = dao.getById(id);
      entityList.add(entity);
      if (entity.getParent() != null) {
          getEntityWithParents(entityList, entity.getParent());
      }
  }
rubslopes · 1d ago
> Even if I prompt with security as the top priority...

A bit of a counterpoint: I've been programming for 12 years, but only recently started working in webdev. I have a general understanding of cybersecurity, but I never had to actively implement security measures in my code until now—and my boss is always pushing for speed.

AI has been incredible in that regard. I can ask it to review my code for vulnerabilities, explain what they are, and outline the pros and cons of each strategy. In just a month, I've learned a lot about JWT, CSRF, DDoS protection, and more.

In another world, my web apps would be far more vulnerable.
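To make concrete the kind of fix such a review typically surfaces, here is one illustrative pattern (a sketch, not anything from the comment above): issuing a per-session CSRF token from a CSPRNG and comparing it in constant time, using only Python's standard library.

```python
import hmac
import secrets

def new_csrf_token() -> str:
    # secrets uses a cryptographically secure RNG, unlike random
    return secrets.token_urlsafe(32)

def check_csrf_token(session_token: str, submitted_token: str) -> bool:
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(session_token, submitted_token)

token = new_csrf_token()
print(check_csrf_token(token, token))  # the matching token is accepted
```

The two details an AI review tends to call out here are exactly the ones that are easy to get wrong by hand: the token source (`secrets`, not `random`) and the comparison (`compare_digest`, not `==`).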

wyclif · 1d ago
In that sense, AI is a tremendous learning tool. You could have learned about those subjects before, but it would have taken exponentially longer to search, scan, appropriate, and integrate them together to form a potential solution to a real-world problem.
ChrisMarshallNY · 1d ago
I suspect that we will be seeing some very creative supply chain compromises.

If you can train AI to insert calls to your malware server, in whatever solutions it provides, that's a huge win.

ednite · 1d ago
Totally agree! That’s where I think cybersecurity experts and system administrators deserve a lot of credit too. AI might help automate some of the work, but it’s also conjuring up threats that are way more complex and sneaky.

Hopefully, countermeasures and AI-powered defense tools can keep up. It's going to be some type of an arms race, for sure.

lambdasquirrel · 1d ago
The AI I use at work can’t set up IAM correctly, didn’t even know it needed to, let alone associate said IAM principal with the correct k8s RBAC groups. I do appreciate that it ground through a lot of boilerplate, but I’m concerned that it’s like buying a new Sony or Leica camera as a beginner photographer. As with your non-coder VBA friend, it might make them think they’re a lot more skilled than they really are. And this is specifically why I’ve never touched VBA.
ednite · 1d ago
Well put. And funny enough, I’m actually a complete newbie in photography and just got one of those expensive Sony cameras, so I know exactly what you mean. It’s overkill for my current skill set.

But the key difference? I’m not planning to use it as the primary photographer at my cousin’s wedding next week.

As you said, the real danger isn’t just the tool, it’s the false confidence it gives. AI can make us feel a bit too capable and too fast, and that’s when things can go sideways.

FirmwareBurner · 1d ago
>The AI I use at work can’t set up IAM correctly

Can you? :D

Based on my favorite quote from the I, Robot(2004) movie with Will Smith when he got roasted by a robot:

  W.S.:  "You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?*
  Robot:  "Can you?"
Which I think applies to a lot of anti-AI sentiments. Sure, the AI doesn't know how to do X, Y, Z, but then most people also don't know X, Y, and Z. Sure, people can learn how to do X, Y, and Z, but then so can an AI. ChatGPT couldn't initially draw anime; now they trained it to do that and it can. Similarly, it can also learn how to set up IAM correctly if they bother to train it well enough for that task. But for now, like you discovered, we're far away from an AI that's universally good at everything, though I expect specialisation will arrive sooner rather than later.
bigstrat2003 · 1d ago
> Can you?

Yes, I can. In fact, the entire reason I think AI is not a useful tool (contrary to the hype) is that it can't do many things I find easy to do. I'm certainly not going to trust it with the things I find hard to do (and therefore can't check it effectively) in that case!

For example, a month or two ago I was trying to determine if there's a way to control the order in which CloudFormation creates+deletes objects when a replacement is needed. AI (including both ChatGPT and AWS' own purpose built AI) insisted yes, hallucinating configuration options that straight up don't exist if you try to use them. The ability to produce syntactically valid configuration files (not even necessarily correctly doing what you want them to) should be table stakes here, but I find that AI routinely can't do it.

FirmwareBurner · 1d ago
What if the AI can do things others find hard to do?

And my point with AI isn't that it can do things that are easy or hard, it's that it can do things I don't know how to do, yet. I can spend time to learn them but the AI knows them already and at that point it's better than me.

Hard and easy are subjective. When you don't know how to do something it's hard; when you know how to do it, it's easy.

You might know IAM but most people don't so it's not meant to replace you, it's to replace them.

diggan · 1d ago
> Even if I prompt with security as the top priority

LLMs do really poorly with general statements like that, so I'm not sure it's unexpected. If you put "Make sure to make it production worthy", you'll get as many different answers as there are programmers, because not even we human programmers agree on what that really means.

Same for "Make sure security is the top priority", most programmers would understand that differently. If you instead spell out exactly what behavior you expect from something like that (so "Make sure there are no XSS's", "Make sure users can't bypass authentication" and so on), you'll get a much higher probability it'll manage to follow those.

ednite · 1d ago
Totally agree. I was simplifying for discussion’s sake, but yeah, I learned the hard way that vague prompts will lead you down that rabbit hole. If you’re not crystal clear, you get everything but what you actually wanted.

These days, I make sure every “t” is crossed and every “i” dotted when giving instructions. Good point, definitely a lesson worth repeating.

kalleboo · 20h ago
Although the promise of the latest generation of "thinking" models is that they're supposed to do that themselves: in the internal chat thread of their "thought" process, they're supposed to ask "what does secure mean for a web app?", list those items (which they can already easily do if you prompt them separately), and then work through that list.
geoka9 · 1d ago
> It's great for... getting unstuck

That one I never considered until it happened to me. It's funny that the AI-provided implementation was mostly off, but it was a start. "Blank canvas paralysis" is a thing.

socalgal2 · 1d ago
> So yeah, for me AI isn’t a replacement.

I agree with everything you wrote. The question is, what about in 6 months? 2 years? 4 years? Just a year ago (not sure the exact timeline) we didn't have the systems we have today that will edit multiple files, iterate over compilation errors and test failures, etc.... So what's it tomorrow?

ednite · 1d ago
That’s the part that keeps me up at night at times. I’m already seeing my workflow shift fast, and the next 6–12 months feel even more unpredictable.

Staying adaptable feels like the only real option, but I get that even that might not be enough for everyone.

yifanl · 1d ago
What if in 6 months earthquakes sinks the entire lower 48?
ednite · 1d ago
I guess AI will be the least of our worries.
TiredOfLife · 1d ago
> just speaking from experience as a working dev.

That's your problem. You haven't spent the past couple of years trying to get a junior job.

ednite · 1d ago
Fair point, and I appreciate your comment. I’ve been speaking from the perspective of someone already working in the field, but let me offer another angle.

Outside of coding, I’m also a new blogger and writer trying to publish articles and novels in a world already flooded with AI-generated content.

In that sense, I’m in a similar position to a junior dev, just in a different domain. As I seriously consider shifting more into writing than coding, I know I’ll be facing similar competitive pressures with AI. Honestly, I have no idea where that road will lead.

What I do know is that I’ll try to take advantage of every tool and opportunity available to help me succeed. For junior devs, I’d say: keep knocking on doors and focus on how AI can support your employer, not threaten either party.

Funny enough, just a few weeks ago I watched an "AI expert" with no real dev experience sell a vision full of buzzwords to business leaders, and they ate it up. Hats off to him. It reminded me that today, it's not just about skills anymore. It’s about how you communicate, connect, and present your value.

Sharpen your communication and people skills. Like it or not, AI is rapidly taking over the technical part. We have to stand out in the human part.

Just my two cents. And for the record, please know that I don’t consider myself an expert in any of this, just speaking from experience and opinions.

Hope it helps and good luck!

eranation · 1d ago
My personal thoughts on this are

- A good lawyer + AI will likely win in court against a non lawyer with AI who would likely win in court against just an AI

- A good software engineer + AI will ship features faster / safer vs a non engineer with AI, who will beat just AI

- A good doctor + AI will save more lives than a non doctor + AI, who will perform better than just AI

As long as a human has a marginal boost to AI (either by needing to supervise it, regulation, or just AI is simply better with a human agency and intuition) - jobs won't be lost, but the paradox of "productivity increases, yet we end up working harder" will continue.

p.s. There is the classic example I'm sure we're all aware of: autopilots have been capable of taking off and landing since the '80s, but I personally prefer to keep the pilots there, just in case.

bluefirebrand · 1d ago
> A good doctor + AI will save more lives than a non doctor + AI, who will perform better than just AI

Ok, what about an Average doctor with an AI? Or how about a Bad doctor with an AI?

AI-assisted medical care will be good if it catches some amount of misdiagnoses

AI will be terrible if it winds up reinforcing misdiagnoses

My suspicion is that AI will act as a force multiplier to some extent, more than a safety net

Yes, some top percentage of performers will get some percentage of performance gain out of AI

But it will not make average performers great or bad performers good. It will make bad performers worse

bee_rider · 1d ago
It could make good doctors faster rather than better. This could allow them to help people who wouldn’t be able to afford them otherwise.
bluefirebrand · 1d ago
Doctors already barely see patients for more than a couple of minutes, you want them to be faster?
bee_rider · 1d ago
I’m actually not clear on what percentage of a doctor’s work time is spent doing things other than talking to patients (like arguing with insurance or making records).
bluefirebrand · 1d ago
The solution to doctors arguing with insurance isn't "have an AI do it" it's universal health care so doctors don't need to worry about insurance in the slightest

I worked on software for electronic medical record note taking and I'm not sure how an LLM can help a doctor speed that up tbh. All of the stats need to be typed into the computer regardless. The LLM can't really speed that up?

bee_rider · 20h ago
I’d also prefer single payer, but nothing except ourselves has been stopping us from doing that, and we haven’t changed much. Maybe it’ll happen. But I don’t see any recent tech changes making it so.

Unless somebody manages to make hyper-convincing LLMs and use them for good, I guess. (Note: I think this is a bad path).

monknomo · 1d ago
I have no expertise and am prepared to be quite wrong, but I wonder if LLMs would be good at listening to a session and/or a doctor dictating, putting the right stats in the right place, and turning the dictated case history into a note.

I think LLMs are alright at speech recognition and that sort of unstructured-to-structured text manipulation. At least, in my corner of the customer success world I've seen some uses along those lines.

davidcbc · 1d ago
My doctor was in a pilot program for this exact thing. It recorded our conversation and created the after visit summary.

Its summary was that I wasn't taking my antibiotics (I was, and neither I nor my doctor said anything to the contrary). Luckily my doctor was very skeptical of the whole thing and carefully reviewed the notes, but this could be an absolute disaster if it hallucinates something more nefarious and the doctor isn't diligent about reviewing.

bluefirebrand · 1d ago
Medical records are incredibly highly regulated so this is probably a really risky thing to try and build tbh

Actually any medical data being processed by AI is probably going to be under a ton of scrutiny

Medicine will likely be one of the last fields we start to see widespread usage of AI for this reason, tbh

const_cast · 23h ago
From talking to doctors, it seems most of their job is handling records, contacting insurance, prior auth, talking to pharmacists, etc. This is despite having billing specialists and admins out the wazoo.

Also their care is pretty much completely decided by insurance. What surgeries they can perform, what medicine they can give, how much, what materials they can use for surgery, and on and on. Your doctor is practicing shockingly little medicine, your real doctor is thousands of pages of guidelines created by insurers and peer-to-peer doctors who you will never meet.

vjvjvjvjghv · 1d ago
It's pretty much guaranteed that it will be used to increase profits. Caring for patients is secondary.
eranation · 1d ago
I believe that AI will help close the gap (e.g., a bad doctor with AI will be, on average, better than just a bad doctor)
bluefirebrand · 1d ago
Maybe one day, but not right now

My experience with the current stuff on the market is you get out what you put in

If you put in a very detailed and high quality, precisely defined question and also provide a framework for how you would like it to reason and execute a task, then you can get out a pretty good response

But the less effort you put in the less accurate the outcome is

If a bad doctor is someone who puts in less effort, is less precise, and less detail oriented, it's difficult to see how AI improves on the situation at all

Especially current iterations of AI that don't really prompt the users for more details or recognize when users need to be more precise

DSMan195276 · 1d ago
IMO the problem is that, at least right now, the AI can't examine the patient itself; it has to be fed information from the doctor. This means bad doctors are likely to provide the AI with bad information and reduce its effectiveness (or cause the AI to reinforce the biases of the doctor by only feeding it the information they see as relevant).
mandevil · 1d ago
Not sure what will happen with software engineers, lawyers, or doctors, but I do know how computer assistance worked out decades ago when it came for retail clerks: the net effect was to de-skill and damage the job as a career. By bringing everyone up to the same baseline level, management lost interest in building skills above that baseline.

So until the 1970s, shopping clerk was a medium-skill, medium-prestige job. Each clerk had to know the prices for all the items in the store because of the danger of price-tag switching (1). Clerks who knew all the prices were faster at checking out than clerks who had to look up prices in their book, and reducing customer friction is hugely valuable for stores. So during this era, store clerk was a reasonable career: you could have a middle-class lifestyle from working retail, there are people who went from clerk to CEO, and even those who weren't ambitious could find a stable path to support their family.

Then the UPC code, laser scanner, and product/price database came along in the 1970s. The UPC code is printed in a more permanent way, so switching tags is not as big a threat (2). Changing prices is just a database update, rather than printing new tags for every item and having the clerks memorize the new price. And there is a natural-language description of every item that the register can display, so you don't have to keep the clerk around to tell the difference between the expensive dress and the cheap dress: it will show the brand and description. This vastly improved the performance of a new clerk, but also decreased the value of the more experienced clerk. The result was a great hollowing-out of retail-sector employment, the so-called "McJob" of the 1990s.

But the result was things like Circuit City (in its death throes) firing all of their experienced retail employees (3) because management didn't think that experience was worth paying for. This is actually the same sort of process that Marx had noted about factory jobs in the 19th century (he called it the alienation of labor): capital investment replacing skilled labor, to the benefit of the owners of the investment. But since retail jobs largely code as female, no one really paid much attention to it. It never became a subject of national conversation.

1: This also created a limit on store size: you couldn't have something like a modern supercenter (e.g. Costco, Walmart, Target) because a single clerk couldn't know the prices for such a wide assortment of goods. In department stores in the pre-computer era every section had its own checkout area, you would buy the pots in the housewares section and then go to the women's clothes area and buy that separately, and they would use store credit to make the transaction as friction-less as possible.

2: In the old days, a person with a price-tag gun would come along and put the price directly onto each item when a price changed, so you'd have each orange with a "10p" sticker on it. Now only the database entry needs to change, and the UPC can be printed much more permanently.

3: https://abcnews.go.com/GMA/story?id=2994476 all employees paid above a certain amount were laid off, which pretty much meant they were the ones who had stuck around for a while and actually knew the business well and were good at their jobs.

vjvjvjvjghv · 1d ago
Considering how little interest doctors have taken in some of my medical problems I'll be happy to have AI help me to investigate things myself. And for a lot of people in the US it may make the difference between not being able to afford a doctor vs getting some advice.
touisteur · 1d ago
You (and I) prefer to keep the pilots there, but still, there's a push to need only one person and not two in that plane/cockpit. I have little to no doubt we'll have to relearn some hard lessons after we've AI'd up pilots.
dehrmann · 1d ago
I know airlines are a cutthroat business, but wouldn't the copilot add no more than $1 per passenger for the average flight?
ponector · 1d ago
Remember that success story when an airline removed one olive from each salad served onboard?

$1 per passenger is huge! For Ryanair it's 200m annually.

cj · 1d ago
I wanted to say maybe the 2nd pilot could double as a flight attendant if they're not needed full-time in the cockpit. That still retains redundancy while saving the airline money.

The problem with that is most skills need to be practiced. When you only need to use your skills unexpectedly in an emergency, that may not end well. The same applies to other fields where AI can do something 95% of the time, with human intervention required in the 5% case. Is it realistic to expect humans to continue to fill that 5% gap if we let our skills wane by outsourcing the easiest 95% of a job and keeping only the hardest 5% for ourselves?

HeyLaughingBoy · 1d ago
> maybe the 2nd pilot could double as a flight attendant

Have you ever managed people?

bigbuppo · 1d ago
And yet there's plenty of evidence that having three pilots in the cockpit is usually a better option when the inevitable happens.
touisteur · 1d ago
For those who can stomach it, reading aviation accident reports, listening to actual recorded voice footage, you very often read about the cognitive load of a two-person team trying to get through a shitty moment.

Richard de Crespigny, who flew the Qantas A380 that blew one of its engines after a departure from Changi, explains very clearly and in a gripping way the amount of stuff happening while trying to save an aircraft.

Lots of accidents already happen today at the seams of automation. I don't think we're collectively ready for a world with much more automation, especially in the name of more shareholder value or a four-dollar discount.

sponaugle · 1d ago
Agree 100%. Watch a few videos on youtube from Mentour Pilot. The cognitive load is such a huge factor in so many accidents and close calls. There are also equally many accidents that could have been prevented with just a bit more automation and fault detection. Perhaps the most amazing thing is that after an accident, it can take years to get a real corrective action across the industry. It would be like level 10 CVEs taking 5 years to get patched!
touisteur · 22h ago
With the level of regression I get from 'security patches' :-) I won't blame the conservative mindset there.

The Air France Rio-Paris crash is a good example of sudden full mistrust of automation and sensors by the crew after a sensor failure appeared and then recovered. Very, very sad transcript and analysis... I'm arguing against myself here, since it was also a huge case of crew management failure, and it might not have ended in a crash with only one person in the cockpit.

HenryBemis · 1d ago
You kinda said it, but you didn't hit the nail on the head. Yes we need the pilots. But -I will repeat my own example in my current mega-corp employer- I am about to develop a solution using an LLM (premium/enterprise) that will stop a category of employees from reaching 50, and will remain to 20, and with organic wear & tear, will drop to 10, which will be the 'forever number' (until the next 'jump' in tech).

So yes, we keep pilots, but we keep _fewer_ pilots.

hollerith · 1d ago
It's unclear what your numbers refer to. If I had to guess, I'd say 50 means the number of employees in the category employed by your employer, but I'm not sure.
JSR_FDED · 1d ago
That’s all well and good for the humans with experience, for whom AI is a force multiplier.

My concern is for the juniors - there’s going to be far fewer opportunities for them to get started in careers.

eranation · 1d ago
It’s all supply and demand.

When the market pool of seniors runs dry, and as long as hiring a junior + AI is better than hiring a random person + AI, it will balance itself.

I do believe the "we have a tech talent shortage" was and is a lie; the shortage is of tech talent that is willing to work for less. Everyone was told to just learn to code and make six figures out of college. This drove over-supply.

There is still a shortage of very good software engineers, just not a shortage of people with a computer science degree.

esafak · 1d ago
How did commercial pilots solve the problem?
kgilpin · 1d ago
In the US, “Junior” pilots typically work as flight instructors until they have built up enough time to no longer be junior. 1500 flight hours is the essential requirement to be an airline pilot, and every hour spent giving instruction counts as a flight hour. It’s not the only way, but it’s the most common way. Airlines don’t fund this; pilots have to work their way up to this level themselves.

In Europe it’s different.

ls612 · 1d ago
The 1500 hour rule was instituted by congress at the request of pilots unions not the FAA or any other regulator. Europe only requires 250 hours and has a similar aviation safety track record to the US in the 21st century.
bluefirebrand · 1d ago
Accreditation, Licensing and Unions

Things that software developers are extremely allergic to

vjvjvjvjghv · 1d ago
Accepting that people need to be trained within a system. As of now it's easy enough for software devs to get started without formal training. I don't see that changing. Smart people will be able to jump directly to senior level with the help of AI.
dgfitz · 1d ago
Not all of them of course, but a lot of them are ex-military.
gh0stcat · 1d ago
My concern though is that over time, a "good ANYTHING" + AI will converge to just AI: as you continue to outsource your thinking processes to AI, it will create dependence like any tool. This is a problem for any individual's long-term prospects as a source of expertise. How do you think one might combat this? It seems the skills are at odds: you are in the best position at the very START of using AI, and then your growth likely slows or stops completely as you migrate to thinking via AI API calls.
programmertote · 1d ago
I generally agree with your thoughts.

I am also concerned about a couple of important things: human skill erosion (a lot of new devs who use AI might not bother to learn the basics that can make a difference in production/performance, security, etc.), and human laziness (and thus the gradually growing habit of trusting and relying on AI's output entirely).

qgin · 1d ago
When it's been studied so far, AI alone does better than AI + human doctor

>Surprisingly, in many cases, A.I. systems working independently performed better than when combined with physician input. This pattern emerged consistently across different medical tasks, from chest X-ray and mammography interpretation to clinical decision-making.

https://erictopol.substack.com/p/when-doctors-with-ai-are-ou...

jcfrei · 1d ago
The scenario you describe leads to a massive productivity boost for some engineers and no work left for the rest. Or in other words: The profit share of labour compared to capital becomes even smaller. Meaning an even more skewed income distribution, where a few make millions and the rest of the currently employed software engineers / lawyers, etc will become bartenders or greeters at walmart.
eranation · 1d ago
When backlogs run dry and users don't come up with feature requests / bugs faster than humans + AI can tackle them, yes.

Until then, adding one more engineer (with AI) will have a better ROI than firing one.

Engineers who are purists and refuse to use AI might end up with a wake-up call. But they are smart; they'll adapt too.

norir · 1d ago
> A good software engineer + AI will ship features faster / safer vs a non engineer with AI, who will beat just AI

Safer is the crucial word here. If you remove it, I'd argue the ordering should be reversed.

I also will point out that you could replace ai with amphetamines and have close to the same meaning. (And like amphetamines an ai can only act through humans, never solely on its own.)

freedomben · 1d ago
I think you're missing the spectrum between no jobs being lost and all jobs being lost. I think your first points are correct, but to me that points to some job losses as the good lawyers/doctors/SWEs get more efficient and better, and the lower tier aren't needed anymore and/or aren't worth the salary to employers.
bluefirebrand · 1d ago
Frankly "some jobs lost" is the worst possible outcome. This is the nightmare scenario for me

If all jobs are lost then our society becomes fundamentally broken and we need to figure out how to elevate the lives of everyone very quickly before it turns into riots and chaos. The thing is that it will be a very clear signal that something has to change, so change is more likely

If no jobs are lost we continue the status quo which is not perfect but is at least relatively sane and tolerable for now and hopefully we can keep working on fixing some of our underlying problems

If some jobs are lost but not all, then we see a further widening of the wealth gap but it is just another muddy signal of a problem that will not be dealt with. This is the "boiling the frog" outcome and I don't want to see what happens when we reach the end of that track.

Unfortunately that seems like the most likely outcome because boiling the frog is the path we've been on for a long time now.

mattgreenrocks · 1d ago
In theory, I agree with this.

However, one constant I've observed over my career: the quality and speed of the work I produce has not significantly contributed to career advancement. I know I'm appreciated for the fact that I don't cause more problems, and I usually make the total number of problems go down. I mention this because if quality/speed was truly valued, I believe I'd see more career-related growth (titles, etc) from it at some point in the last 20 years of my career.

This isn't to say AI won't be helpful. It is, and I use it some. But the whole schtick around, "SWEs must adopt AI or they'll be left behind," reeks of thought-terminating influencer BS. If people had great ways of assessing programmer productivity, we wouldn't need the ceremony-ridden promo culture that we have in some places.

(Arguably most of my career advancement in the last 5 years or so has come mainly from therapy: emotional regulation, holding onto problems that cannot be fixed easily w/o being consumed with trying to fix them or disengaging completely, and applying all that and more to various types of leadership.)

uludag · 1d ago
I've grown to believe the following more extreme (or maybe reasonable) version of what you said:

- A good lawyer with or without AI will likely win in court against a mediocre lawyer with AI

- A good SWE with or without AI will likely ship features faster/safer than a mediocre engineer with AI

- A good doctor with or without AI will save more lives than a mediocre doctor with AI.

I've experimented with this personally, stopping all my usage of AI coding tools for a time, including the autocomplete stuff. I by no means found myself barely treading water, soon to be overtaken by my cybernetically enhanced colleagues. In fact, quite the opposite, nothing really changed.

> A good doctor + AI will save more lives than a non doctor + AI, who will perform better than just AI
I find even entertaining the opposite conclusion comical. Think of, for example, a world acclaimed heart surgeon. Are people seriously entertaining the idea that a rando with some agentic AI setup could outperform such a surgeon in said field, saving more lives? Is this the level of delusion that some people are at now?
Izkata · 1d ago
I figure by "doctor" they're thinking of a GP, who most people only ever see taking measurements and diagnosing things, not actually doing physical things like a surgeon.
freedomben · 1d ago
As Doc Brown famously said, "I don't think you're thinking fourth-dimensionally."

Current gen AI taking all the medical jobs is indeed laughable, but the amount of R&D going into AI right now is staggering and the progress has been rapid, with no signs of slowing down. 5 years from now things will be very different IMHO.

guluarte · 1d ago
for some time until using AI is the norm like using a computer is today.
gosub100 · 1d ago
A bad doctor with AI can commit malpractice longer by throwing AI under the bus. It remains to be tested, but a plaintiff suing a professional who uses AI may have a harder time prevailing if the defendant uses a Shaggy defense and points to the Black Box that is shielded behind layers of third parties and liability limitations.
rvz · 1d ago
The entire point is that fewer knowledge-worker jobs will be needed.

Not more.

mjr00 · 1d ago
Only if you assume the current amount of knowledge work being done, or the amount of output from knowledge work, is the maximum amount possible or desired. Which is incorrect.

Every software company has a backlog of 1000 features they want to add, everywhere has a shortage of healthcare workers. If AI makes developers on a successful product 20% more efficient, they won't fire 20% of developers, they'll build 20% more features.

The problem is the "successful product" part; for a decade or more unsuccessful products were artificially propped up by ZIRP. Now that money isn't free these products are being culled, and the associated jobs along with them. AI is just an excuse.

rvz · 1d ago
> Only if you assume the current amount of knowledge work being done, or the amount of output from knowledge work, is the maximum amount possible or desired. Which is incorrect.

My point is simple:

Why would I hire 100s of employees when I can cut the most junior and mid-level roles and make the seniors more productive with AI?

> Every software company has a backlog of 1000 features they want to add, everywhere has a shortage of healthcare workers. If AI makes developers on a successful product 20% more efficient, they won't fire 20% of developers, they'll build 20% more features.

Exactly. Keep the seniors with AI and no need for any more engineers, or even just get away with it by firing one of them if they don't want to use AI.

> Now that money isn't free these products are being culled, and the associated jobs along with them. AI is just an excuse.

The problem is "AI" is already good enough and even if their jobs somehow "come back", the salaries will be much lower (not higher) than before.

So knowledge workers have a lot more to lose, rather than gain if they don't use AI.

mjr00 · 1d ago
> Why would I hire 100s of employees when I can cut the most junior and mid-level roles and make the seniors more productive with AI?

Because at competent companies juniors and mid-level employees aren't just cranking out code, they're developing an understanding of the domain and system. If all you cared about was cranking out code and features, you'd have outsourced to Infosys etc long ago. (Admittedly, many companies aren't competent.)

> Exactly. Keep the seniors with AI and no need for any more engineers, or even just get away with it by firing one of them if they don't want to use AI.

This doesn't make any sense. I asked ChatGPT and it couldn't parse it either.

> The problem is "AI" is already good enough and even if their jobs somehow "come back", the salaries will be much lower (not higher) than before.

This much is true but tech salary inflation was, again, largely a ZIRP phenomenon and has nothing to do with AI. Junior developers were never really worth $150k/year right out of university.

rvz · 23h ago
> Because at competent companies juniors and mid-level employees aren't just cranking out code, they're developing an understanding of the domain and system.

So many companies like Microsoft, Meta, Salesforce and Google (who are actively using AI and just did layoffs) are somehow not 'competent companies' because they believe that with AI they can do more with fewer engineers and employees?

> This doesn't make any sense. I asked ChatGPT and it couldn't parse it either.

Made total sense for the companies I mentioned above, who just did layoffs based on 'streamlining operations' and 'efficiency gains' with AI just this year (and beat their earnings estimates).

> This much is true but tech salary inflation was, again, largely a ZIRP phenomenon and has nothing to do with AI. Junior developers were never really worth $150k/year right out of university.

It's more than just that, including an increasing over-supply of software engineers in general and lots of them with highly inflated salaries regardless of rank. The point is that it wasn't sustainable in the first place and roles in the junior to mid-level will see a reduction of salaries and jobs.

Once again, knowledge workers still have a lot more to lose, rather than gain if they don't use AI.

mjr00 · 21h ago
> So many companies like Microsoft, Meta, Salesforce and Google (who are actively using AI just did layoffs) are some how not 'competent companies' because they believe with AI they can do more with less engineers and employees?

Is there any evidence the layoffs are actually due to AI, or due to a hiring correction using AI as an excuse?

naijaboiler · 1d ago
Wrong. More knowledge workers will be needed. The nature of what they do will change
rvz · 1d ago
> The nature of what they do will change

Exactly. Fewer of them will be needed, given that a few of them will be more productive with AI than without it. That is the change which is happening right now.

So this is actually cope.

lenerdenator · 1d ago
It's also a great example of why tech executives shouldn't be trusted, at all.

"My thing will break our entire economy. I'm still gonna build it, though." - statements dreamed up by the utterly deranged

xeromal · 1d ago
You could say that about a number of things we've benefited from. The cotton gin, the plow, industrialization, the car, electricity, alarm clocks, etc.
roywiggins · 1d ago
Uh, not everyone benefited from the cotton gin, to put it mildly. Though I suppose it depends how tightly or loosely you define "we."

It probably wasn't even a net good for the South, being blamed for locking it into an agrarian plantation economy and stunting manufacturing in the states that depended on cotton.

mplanchard · 1d ago
This is a popular meme[0] about our industry, in fact:

> Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

> Tech Company: At long last, we have created the Torment Nexus, from [the] classic sci-fi novel, Don’t Create the Torment Nexus

[0]: https://knowyourmeme.com/memes/torment-nexus

ergonaught · 1d ago
They shouldn't be trusted for any number of reasons, but the need for social systems to adapt to reality isn't their fault.
lenerdenator · 1d ago
It wouldn't be their fault if the economic class that they were a part of weren't actively opposed to changing those social systems.

I wouldn't care nearly as much about AI were there a stronger social safety net in the US. However, that's not going to happen anytime soon, because that requires taxes to pay for, and the very wealthy do not like paying those because it reduces their wealth.

const_cast · 23h ago
It kind of is their fault when they're simultaneously lobbying against those social systems and designing their platforms in a way to further align with that propaganda. It seems very intentional to me.
Ozarkian · 1d ago
You can't constrain an idea whose time has come. China will continue developing AI regardless of whether we will. We have to do it just to stay in the race.

It's the same thing with the atomic bomb. There wasn't really a choice not to do it. All the theoretical physicists at the time knew that it was possible to develop the thing. If the United States hadn't done it, someone else would have. Perhaps a few years or a decade later, but it would have happened somewhere.

bigstrat2003 · 1d ago
> It's the same thing with the atomic bomb. There wasn't really a choice not to do it.

There is always a choice. "Someone else will do this if I don't" does not absolve one from moral responsibility. Even if it is inevitable (which things generally are not, claiming they are is a rationalization most of the time), you still are culpable if you're the one who pulls the trigger.

Imustaskforhelp · 1d ago
Sam Altman literally said something like this; I forgot which YouTube video.

https://www.reddit.com/r/ChatGPT/comments/1axkvns/sam_altman...

It's crazy. Idk what else to say; my jaw drops every time I hear something like this. Humanity is a mess sometimes.

some_random · 1d ago
If you believe that breaking the economy is good you're obviously going to do it. If you believe that if you don't break the economy one of your many competitors will, you're obviously going to do it.
lenerdenator · 1d ago
If it's good, why's Altman (reportedly) bragging about the preps he's making for societal collapse?[0]

[0]https://futurism.com/the-byte/openai-ceo-survivalist-prepper

freedomben · 1d ago
If OpenAI shut theirs down tomorrow and Sam Altman became a travelling monk preaching against the development of AI, do you really believe it would stop the momentum?

I don't. The cat is out of the bag. The only thing that would accomplish is giving Google and others less competition. Personally I don't have much trust in any tech companies, including OpenAI, but I'd much rather there be a field of competition than one dominant and (unchecked) leader.

lenerdenator · 1d ago
> If OpenAI shut theirs down tomorrow and Sam Altman became a travelling monk preaching against the development of AI, do you really believe it would stop the momentum?

Oh, I know it wouldn't, but I know he won't, because there's too much financial incentive to do so, and Altman and his ilk think that all human endeavors can be judged as a net good or net bad by whether or not they make number go bigger.

hooverd · 1d ago
At least I can rest assured the billionaires would probably kill each other in a mad scramble to be king of the ashes.
spacemadness · 1d ago
At this point it's going to break the economy either way: if it doesn't break it outright, investors are going to retreat and pop the bubble.
freejazz · 1d ago
If you're a tech CEO, then maybe yeah... I've seen bankers who are more reflective about their actions than these tech leaders. Tech would kill the goose that lays the golden eggs just because they could find enough people to believe their BS marketing and get their VC-funded startup sold.
msgodel · 1d ago
I like to call most of this stuff "executive sounds." My favorite recent example is the Nvidia CEO talking about how they're going to use quantum computing for ML.
saubeidl · 1d ago
Capitalism is a doomsday cult and these people are its prophets.
Imustaskforhelp · 1d ago
I have no problem with capitalism. I have a problem with the fact that there are people who have so much money they can't even spend it, while most people live paycheck to paycheck.

Maybe the solution is socialism, except you can own money up to 10 million, I guess. But I'm not sure if it's effective or not. Definitely loopholes. Idk.

saubeidl · 1d ago
I think the phenomenon in your second sentence is a direct result of unbridled capitalism.

Maybe the solution is a simple as a social market economy, maybe it takes something a bit more radical - but the extreme techno capitalism that our industry's leaders are trying to advance is definitely a step in the wrong direction.

yoyohello13 · 1d ago
It’s as mask off as a business man can get. “This is bad, but I want money.” That is our society in a nutshell.
roywiggins · 1d ago
What if it just makes most jobs worse, or replaces good jobs with more, worse jobs? "Meat robot constantly monitored and directed by AI overlords" is technically a job.
lapcat · 1d ago
> What if it just makes most jobs worse, or replaces good jobs with more, worse jobs?

Right. Consider:

1) Senior engineer writing code

vs.

2) Senior engineer prompting and code reviewing LLM agent

The senior engineer is essential to the process in each case, because the LLM agent left to its own devices will produce nonfunctional crap. But what about the pleasantness of the job and the job satisfaction of the senior engineer? Speaking just for myself, I'd rather quit the industry than spend my career babysitting nonhuman A.I. That's not what I signed up for.

shafyy · 1d ago
Same. I actually like writing code, reviewing code that my colleagues have written, and having interesting technical discussions. I don't want to spend my days reviewing code that some AI has written.

But I guess if you don't like writing code and are "just doing it for the money", having an LLM write all the code for you is fine. As long as it passes some very low bar of quality, which, let's be honest, is enough for most companies (i.e. software factories) out there.

AstroBen · 1d ago
As of right now it's actually making my job much more enjoyable. It lets me focus on the things I enjoy thinking about - code design, architecture, and the higher level of how to solve problems

I haven't seen any evidence it's made progress on these, which is nice.

sasmithjr · 1d ago
I don't think it's an exclusive choice between the two, though. I think senior engineers will end up doing both. Looking at GitHub Copilot's agent, it can work asynchronously from the user, so a senior engineer can send it off to work on multiple issues at once while still working on tasks that aren't well suited for the agent.

And really, I think many senior engineers are already doing both in a lot of cases where they're helping guide and teach junior and early mid-level developers.

lapcat · 1d ago
> And really, I think many senior engineers are already doing both in a lot of cases where they're helping guide and teach junior and early mid-level developers.

Babysitting and correcting automated tools is radically different from mentoring less experienced engineers. First, and most important IMO, there's no relationship. It's entirely impersonal. You become alienated from your fellow humans. I'm reminded of Mark Zuckerberg recently claiming that in the future, most of your "friends" will be A.I. That's not an ideal, it's a damn dystopia.

Moreover, you're not teaching the LLM anything. If the LLMs happen to become better in the future, that's not due to your mentoring. The time you spend reviewing the automatically generated code does not have any productive side effects, doesn't help to "level up" your coworkers/copilots.

Also, since LLMs aren't human, they don't make human mistakes. In some sense, reviewing a human engineer's code is an exercise in mind reading: you can guess what they were thinking, and where they might have overlooked something. But LLMs don't "think" in the same way, and they tend to produce bizarre results and mistakes that a human would never make. Reviewing their code can be a very different, and indeed unpleasant WTF experience.

bluefirebrand · 1d ago
Guiding and teaching developers is rewarding because human connections are important

I don't mentor juniors because it makes me more productive I mentor juniors because I enjoy watching a human grow and develop and gain expertise

I am reminded of reports that Ian McKellen broke down crying on the set of one of The Hobbit movies because the joy of being an actor for him was nothing like acting on green screen sets delivering lines to a tennis ball on a stick

ookblah · 1d ago
and just to play devil's advocate maybe some people don't enjoy that? remove the issue of training the next generation for a moment.

just like with open vs. closed offices or remote vs in-person, maybe some people have all the human interaction they want outside of work and don't mind "talking" to some AI as long as it gets shit done in the manner they want.

lapcat · 1d ago
> and just to play devil's advocate

Your comment would be improved by simply removing that phrase. It adds nothing and in fact detracts.

> just like with open vs. closed offices or remote vs in-person, maybe some people have all the human interaction they want outside of work and don't mind "talking" to some AI as long as it gets shit done in the manner they want.

You're presenting a false dichotomy. If someone doesn't enjoy mentoring juniors, that's fine. They shouldn't have to. But why would one have to choose between mentoring juniors or babysitting LLM agents? How about neither?

sasmithjr was apparently trying to defend babysitting A.I. by making an analogy with mentoring juniors, whereas I replied by arguing that the two are not alike. Whether or not you enjoy using A.I. is an entirely separate issue, independent of mentoring.

sasmithjr · 1d ago
> sasmithjr was apparently trying to defend babysitting A.I. by making an analogy with mentoring juniors

I regret adding that last bit to my comment because my main point (which I clearly messed up emphasizing and communicating) is that I think you’re presenting a false dichotomy in the original comment. Now that work can be done with LLMs asynchronously, it’s possible to both write your own code and guide LLMs as they need it when you have down time. And nothing about that requires stopping other functions of the job like mentoring and teaching juniors, either, so you can still build relationships on the job, too.

If having to attend to an LLM in any way makes the job worse for you, I guess we’ll have to agree to disagree. So far, LLMs feel like one of many other automations that I use frequently and haven’t really changed my satisfaction with my job.

lapcat · 1d ago
> If having to attend to an LLM in any way makes the job worse for you

I think you're downplaying the nightmare scenario, and your own previous comment already suggests a more expansive use of LLM: "so a senior engineer can send it off to work on multiple issues at once".

What I fear, and what I already see happening to an extent, is a top-down corporate mandate to use AI, indeed a mandate to maximize the use of AI in order to maximize (alleged) "productivity". Ultimately, then, senior engineers become glorified babysitters of the AI. It's not about the personal choice of the engineer, just like, as the other commenter mentioned, open vs. closed offices or remote vs. in-person are often not the choice of individual engineers but rather a top-down corporate mandate.

We've already seen a corporate backlash against remote work and a widespread top-down demand for RTO. That's real; it's happened and is happening.

ookblah · 1d ago
i was trying to frame it as something i'm also grappling with, but i digress, poor choice of words on my part.

maybe you're responding to the wrong person? because i'm not even disagreeing with you on that. maybe they want both or neither, that's fine.

the person i'm responding to is framing mentoring as some kind of must-have from a "socialization" standpoint (which i disagreed with, but i get the practical aspect of it: if you don't have people train juniors there won't be seniors).

bluefirebrand · 1d ago
No, not "socialization" as in "having social interactions with other people"

I mean "socialization" as in "being a positive part of and building a society worth living in"

ookblah · 1d ago
and why do you think that has to exist solely within the confines of work? not that you said that, but your comments seem to suggest that if you don't like or want to mentor junior devs then you don't value human connections. thus my comment about having enough connections outside of work.

if it's rewarding to you that's great, but don't frame it as something bigger than it is. i would hope we are all "being a positive part of and building a society worth living in" in our own way.

bluefirebrand · 1d ago
> if you don't like or want to mentor junior devs then you don't value human connections

If you don't like or want to mentor the younger generation then you are actively sabotaging the future of society because those people are the future of society

Why do I care about the future of society? Because I still have to live in it for another few decades

ookblah · 1d ago
alright, we can agree to disagree because this is so obviously touching a chord with you and you're now literally making sweeping assumptions based on things i've never said.

maybe i like taking care of my friends kids, volunteering, or doing other things that contribute to the "future of society"? personally, i think mentoring junior devs is slightly lower on the priority list, but that's my opinion.

seriously, how arrogant of you to make assumptions about how others think about the future based on a tiny slice of your personal life lol.

bluefirebrand · 1d ago
> maybe i like taking care of my friends kids, volunteering, or doing other things that contribute to the "future of society

That's great, that doesn't absolve you of your responsibility to also mentor juniors at work though

Those are different tasks in different worlds and they all need doing

ookblah · 6h ago
nice deflection. i might not share your enthusiasm for mentoring junior devs, but i do it anyway because like you, i agree it's important. the point, though, is at the end of the day even if i didn't do it you have no fucking right to come with that moral high ground.

if you've optimized every facet of your life to do all the "responsible things" society needs then feel free to throw the first stone. anything else is just posturing.

and just a small thing, it's ironic that you're so fixated on socialization for society's sake while being so tunnel-visioned in defending your own definition of what that even means. i've given you plenty of examples but they just don't fit the one you personally adhere to.

bluefirebrand · 1d ago
> maybe some people have all the human interaction they want outside of work and don't mind "talking" to some AI as long as it gets shit done in the manner they want

This isn't about satisfying a person's need for socializing it is about satisfying society's need for well socialized people

You can prefer closed offices and still be a well socialized person

You can prefer remote work and still be a well socialized person

You can even prefer working alone and still be a well socialized person

If you are in favor of replacing all humans with machines, you are pretty much by definition an asocial person and society should reject you

ookblah · 1d ago
you're making a strawman here. it was never black and white and i never advocated all humans being replaced with machines so we have zero interaction with each other.

every technological push has been to automate more and more, and every time that's happened we've reduced socialization to some extent or changed the nature of it (social media, anyone? and yes, this also has everything to do with remote vs in-person, etc, all of which pull the lever on what level of socialization is acceptable).

just because it doesn't fit your particular brand doesn't mean it's wrong, and it's clear this is pushing on your line where you find it unacceptable. i could just as well argue that people who do not show up to an in-person office are not "socialized" to the degree society needs them to be.

the debate has always been to what degree is this acceptable.

roywiggins · 1d ago
This is more or less what happened to artisans during the industrial revolution: sell your tools, become a widget hammerer on the assembly line. Lots of jobs created for widget hammerers. Not a great deal for a lot of people. Deskilling jobs demonstrably sucks for the people with skills!
HeyLaughingBoy · 1d ago
Same here. But for every one of us, there are probably 10 people out there who'd be more happy babysitting an LLM than actually writing code.
Mobius01 · 1d ago
This reminded me of a short story where the AI disruption starts as work-management software directing workers in a burger shop:

https://marshallbrain.com/manna1

qoez · 1d ago
I can't make up my mind if I'd prefer an AI boss or not. Human bosses can be quite terrible and not having to deal with an emotional being seems kinda nice.
monknomo · 1d ago
your ai boss isn't going to bend rules for you, and isn't going to advocate for you. You can see how an ai boss would go by looking at amazon warehouses and drivers, or call centers, and how those folks are managed. It's already done by computer; they already use machine learning to detect when people are deviating from expected rails, and you can decide for yourself if that looks appealing
johnpaulkiser · 1d ago
Let me help you. An AI boss would be 100x worse.
coaksford · 1d ago
Don’t worry, it wouldn’t replace your human boss, you’d just have both bosses.
roywiggins · 1d ago
Your human boss can't feasibly maintain a panopticon with only their human brain, AI arguably can. Every single word uttered or pixel emitted can be saved and analyzed, for relative pennies.
LPisGood · 1d ago
AI bosses can also be quite nice, and the benefits of reporting to an emotional being are kinda nice.
AstroBen · 1d ago
Don't forget that AI boss would be controlled by a human.. a human who has no idea how it works
wnc3141 · 1d ago
I think we saw that with the "vibecession" from a year or so ago. People were technically employed through DoorDash and other dead-end jobs while overall economic agency shrank.
skwee357 · 1d ago
The problem with “AI will replace all jobs” hype, is that it also comes with a flavor of “and we all will do creative work”, while in reality AI replaces all the creative work and people go back to collecting garbage or other physically demanding and mundane jobs.
bufferoverflow · 1d ago
Why would you think garbage collecting or other mundane jobs won't be automated when much more complex ones are?

If AI+robotization gets to the point where most jobs are automated, humans will get to do what they actually want to do. For some it's endless entertainment. For others it's science exploration, pollution cleanup, space colonization, curing disease. All of that with the help of the AIs.

skwee357 · 1d ago
By the time robots are able to do personal training in, say, boxing, or fix people's roofs, humanity will long be dead or turned into a power source for said robots.
mplanchard · 1d ago
Turns out a simulacrum of intelligence is much easier than dexterous robots. Robots are still nowhere near being able to fold laundry, as far as I know.
freedomben · 1d ago
Yep, that's going to be the main outcome I suspect. The bottom 50% or maybe even 80 to 90% of knowledge workers are going to have to go back to physical work. That too will eventually be automated, but I suspect things like construction work (including the many trades wrapped up therein) will be toward the end of that.
DebtDeflation · 1d ago
Maybe that's why the current administration is pushing so hard to bring back low end manufacturing.

If you're a SWE, Accountant, Marketer, HR person, etc. put out of work by AI, now you can screw together iPhones for just over minimum wage. And if we run out of those jobs, there's always picking vegetables now that all the migrants are getting deported.

It would not surprise me one bit if the Tech CEOs see things this way.

saubeidl · 1d ago
That's how you get a revolution.
theSherwood · 1d ago
The analogies to previous technologies always seem misguided to me. Maybe it allows us to make some predictions about the next few years, but not more than that. We do not know when/where we will hit the limits on AI capabilities. I think this is completely unlike any previous technology. AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.

The more apt analogy is to other species. When was the last time there was something other than homo sapiens that could carry on an interesting conversation with homo sapiens? 40,000 years ago? And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is. Analogies to industrial agriculture (a very big deal, historically) and other technologies completely miss the scope of what's happening.

Imustaskforhelp · 1d ago
Let me give my two cents. I remember when people used to think ai models are all the rage and one day we are gonna get super intelligence.

I am not sure if we can call the current SOTA models that. Maybe, maybe not. But a little disappointing.

Now everyone's saying that AI agents are the hype and the productivity gains are in that; the recently released Darwin Gödel paper, for example.

On the same day (yesterday), the HN front page had an AI blog post by fly.io, and the top comment was worried about AI excelling, saying that as devs we should do something in case companies actually reach the intelligence they are hyping.

On the same day, builder.ai turned out to actually be Indians.

The companies are most likely giving us hype because we are giving them valuation. The hype seems not worth it. Everyone's saying that all models are really good and now all that matters are vibes.

So in all of this, I have taken this.

Trust no one. Or at least don't take the claims of AI hype companies at face value. I genuinely believe that AI is gonna reach a plateau of sorts at a moment like ours. As someone who tinkers with it, I am genuinely happy at its current scale and I kind of don't want it to grow more, I guess; I kind of think that a plateau might come soon. But maybe not.

I don't think that it's analogous to species, but maybe that's me being optimistic about the future. I genuinely don't want to think about it too much, as it stresses my brain and makes even my present... well, not a present (gift).

theSherwood · 1d ago
LLMs have only really been around a handful of years and what they are capable of is shocking. Maybe LLMs hit a wall and plateau. Maybe it's a few years before there's another breakthrough that results in another step-change in capabilities. Maybe not. We can focus on the hype and the fraud and the marketing and all the nonsense, but it's missing the forest for the trees.

We genuinely have seen a shocking increase in reasoning abilities over the course of only a decade from things that aren't human. There may be bumps in the road, but we have very little idea how long this trajectory of capability increases will continue. I don't see any reason to think humans are near the ceiling of what is possible. We are in uncharted territory.

Imustaskforhelp · 1d ago
I may be wrong, I usually am, but wasn't AI basically possible even in the 1970s? Back then there were of course no GPUs, and AlexNet showed that GPUs are really effective for AI, which is basically what started the AI snowballing.

I am not sure, but in my opinion a hardware limitation might be real. These models are training on 100k GPUs and, like, the whole totality of the internet. I wouldn't be too certain about AI.

Also, maybe I am biased. Is it wrong that I want AI to just stay here, at the moment it is right now? It's genuinely good, but anything more feels to me as if it might be terrifying (if the AI companies' hype genuinely comes true).

tptacek · 1d ago
I've got no dog in this hunt at all. The idea that any given AI company could be a house of cards is not only plausible but is the bet I would place every time. But the whole "builder.ai is all Indians" thing is something 'dang spent half an hour ruefully looking into yesterday, and it turned out not to be substantiated.
Imustaskforhelp · 1d ago
I am not sure, but I read the HN post a little and didn't see that part, I suppose.

But even then, people were defending it, saying "so what, they never said they weren't doing it" or something. So I of course assumed that people were defending what's true.

Maybe not, but such a rumour was quite a funny one to hear as an Indian myself.

kypro · 1d ago
While robotics are still relatively immature, I would think of AI as something akin to a remote worker.

Anything a human remote worker can do, a superhuman remote worker will be able to do better, faster and for a fraction of the cost – this includes work that humans currently do in offices but could theoretically be done remotely.

We should therefore assume that if (when) AI broadly surpasses the capabilities of a human remote worker, it will no longer make economic sense to hire humans for these roles.

Should we assume this, then what is the human's role in the labour market? It won't be their physical abilities (the industrial revolution replaced the human's role here), it won't be their reasoning abilities (AI will soon replace the human's here), but perhaps in jobs which require both physical dexterity and human-level reasoning humans might still retain an edge? Perhaps at least for now we can assume jobs like roofing, plumbing, and gardening will continue to exist, while jobs like coding, graphic design and copywriting will almost certainly be replaced.

I think the only long-term question at the moment is how long it will take for robotics to catch up and provide something akin to human-level dexterity with super-human intelligence? At which point I'm not sure why anyone would hire a human except from the novelty of it – perhaps like the novelty of riding a horse into town.

AI is so obviously not like other technologies. Past technologies effectively just found ways to automate low-intelligence tasks and augment human strength via machinery. Advanced robotics and AI are fundamentally different in their ability to cut into human labour, and combined it's hard to see any edge left to a human labourer.

But either way, even if you subscribe to the notion that AI will not take all human jobs, it seems very likely that AI will displace many more jobs than the industrial revolution did, and at a much, much faster pace. Additionally, it will target those who are most educated, which isn't necessarily a bad thing; but unlike the working class, who are easy to ignore and tell to re-skill, my guess would be that demands will be made for UBI and large reorganisations of our existing economic and political systems. My point is, the likelihood any of this will end well is close to zero, even if you just believe AI will replace a bunch of inefficient jobs like software engineers.

theSherwood · 1d ago
This matches my expectations for the near term pretty closely.
awb · 1d ago
We’ve seen tech completely eliminate jobs like phone switch operators and lamp lighters.

And it’s decimated other professions like manual agriculture, assembly line jobs, etc.

It seems like people are debating whether the impact of AI on computer-based jobs will be elimination or decimation. But for the majority of people, what’s the difference?

yoyohello13 · 1d ago
I think the comparisons to lamp lighters or whatever don't quite capture why this is so much worse. The training required for those jobs was relatively light. You don't need a decade of school to become a lamp lighter.

So if the white collar bloodbath is true, we have to tell a bunch of people, who have spent a significant portion of their lives training for specific jobs and may be in debt for that education, to go do manual labor or something. The potential civil unrest from this should really concern everyone.

ffsm8 · 1d ago
You honestly think it's gonna take more than a few years for everything else to follow?

Seriously, once something is able to do 90% of a white collar workers job, general ai has gotten far enough for robotics to take over/decimate the other industries within the decade.

yoyohello13 · 1d ago
Seems like that would make the civil unrest worse not better.
Peroni · 1d ago
>And it’s decimated other professions like manual agriculture, assembly line jobs, etc.

When Henry Ford introduced the moving assembly line, production went from hundreds of cars to thousands. It had a profoundly positive impact on the secondary market, leading to an overall increase in job creation.

I've yet to see any "AI is gonna take your job" articles that even attempt to consider the impact on the secondary market. It seems their argument is that it'll be AI all the way down which is utter nonsense.

naijaboiler · 1d ago
Human beings cannot run out of economically valuable things we can do for one another. Technology can profoundly change what those things are, though.
monknomo · 1d ago
What do you think the secondary market for knowledge work is?
cootsnuck · 1d ago
More knowledge work. It's disheartening for me to see so many people think so little about their own abilities.

There's a reason we can still spot the sterile whiff of AI-written content. When you set coding aside, the evidence just hasn't shown up yet that AI agents can reliably replace anything more than the most formulaic and uninspired tasks. At least with how the tech is currently being implemented...

(There's a reason these big companies spend very very little time talking about the power of businesses using their own data to fine-tune or train their own models...)

ilaksh · 1d ago
The biggest reason there is such a difference of opinion on this is that people have fundamentally different worldviews. If you have bought into the singularity concept and exponential acceleration of computing performance, then you are likely to believe that we are right on track to shortly have smarter-than-human AI. This is also related to just having a technology-positive versus negative worldview. Many people like to blame technology for humanity's failings when in reality it's a neutral lever. But that comes down to the way people look at the world.

People who don't "believe" in the exponential of computing (even though I find the charts pretty convincing) seem to always assume that AI progress will stop near where it is. With that assumption, the skepticism is reasonable. But it's a poorly informed assumption.

https://en.wikipedia.org/wiki/Technological_singularity#Expo...

I think that some of that gets into somewhat religious territory, but the increasing power and efficiency of compute seems fairly objective. And also the intelligence of LLMs seems to track roughly with their size and amount of training. So this does look like it's about scale. And we continue to increase the scale with innovations and new paradigms. There will likely be a new memory-centric computing paradigm (or maybe multiple) within the next five years that increases efficiency by another two orders of magnitude.

Why can I just throw out a prediction about orders of magnitude? Because we have increased the efficiency and performance by orders of magnitude over and over again throughout the entire history of computing.

parineum · 1d ago
I think you're missing a third group, which is actually the most influential.

It's not unlike the crypto space: you've got your true believers, your skeptics, and thirdly, your financially motivated hype men. The CEOs of these publicly traded companies, and of companies that want to be bought, are the latter, and they are the ones behind the "the AI lies so we won't turn it off!!!" stories that get spun into clickbait headlines.

ilaksh · 1d ago
I think those CEOs like Altman and Amodei do find it convenient to hype their products like that, but also they believe in computing exponentials and artificial superintelligence etc.
tzs · 1d ago
>> The automation of farm work is the most notable and most labor-impacting example we have from history, rapidly unemploying a huge portion of human beings in the developing economies of the late 19th and 20th centuries. And yet, at the conclusion of this era (~1940s/50s), the conclusion was that “technological unemployment is a myth,” because “technology has created so many new industries” and has expanded the market by “lowering the cost of production to make a price within reach of large masses of purchasers.” In short, technological advances had created more jobs overall

From the late 19th century to the 1940s/50s is more than 50 years. It's not really reassuring to middle-aged workers who lose their jobs to new technology that 50 years later there will overall be more jobs available.

game_the0ry · 1d ago
I will likely be leaving tech bc of business execs getting horny and skeeting all over each other at the cost savings they perceive.

The flip side is that now I am using AI for my own entrepreneurial endeavors. Then I get to be the business exec, except my employees will be AI workflows.

And I never have to deal with a business exec ever again.

lddemi · 1d ago
Until you have to hire one :)
game_the0ry · 1d ago
True true. But I want to try to stay at the "solopreneurship" level for as long as I can pull it off. I would prefer not to have too much influence over other people's lives.
cootsnuck · 1d ago
Same here. I'm currently doing the soloist route as a consultant. Going well but I am reaching a point where I'm starting to need help.

Even if you do end up having the best kind of problem and have to scale your business, there are other ways to organize work besides the same ol' tired hierarchy.

Enspiral is one real-life example I can think of. They're an entrepreneurial collective in New Zealand that has figured out its own way of organizing collaboration without bosses/execs. Seems to be working fine for them. (Other types of worker cooperatives / collectives too, they're just a great example.)

I'd rather dare to try to make something perhaps more difficult at first but that allows me to avoid recreating the types of working conditions that pushed me to leave the rat race.

tines · 1d ago
People compare AI to the automation that happened in e.g. car factories. Lots of people were put out of jobs, and that’s just the way things go, they say.

But the difference is that automotive automation did create way more jobs than it destroyed. Programmers, designers, machine maintainers, computer engineers, mechanical engineers, materials scientists, all have a part to play in making those machines. More people are employed by auto manufacturers than ever before, albeit different people.

AI isn’t the same really. It’s not a case of creating more-different jobs. It just substitutes people with a crappier replacement, puts all the power in the hands of the few companies that make it, and the pie is shrinking rather than growing.

We will all pay for the damage done in pure pursuit of profit by shoehorning this tech everywhere.

palmotea · 1d ago
I think one interesting way to frame AI is that it will "degrade" jobs: force speed-ups and remove much of the enjoyable and engaging aspects and replace them with drudgery.
tines · 1d ago
That’s a good way to think of it. I think it’s degrading every aspect of life. We’re so obsessed with metrics and efficiency that we don’t know how to live any more.
palmotea · 1d ago
Yeah, modern society (essentially neoliberal capitalism) does not prioritize quality of life. The apotheosis is maximum output (shareholder profits), even if that means if the vast majority of people are miserable and unhappy (because the low quality of their work-life is not compensated by the products and services they're given access to).
tines · 1d ago
Agreed. And the interesting thing is that now, with smartphones and an endless stream of captivating entertainment (basically Soma), people won't even be able to think long enough to realize they're miserable and unhappy.

If you can impose a new way of living on one single generation---an existence directed toward endless free (for now) entertainment---it is enough to change all subsequent generations.

Once this takes hold, there's no going back.

palmotea · 1d ago
> Once this takes hold, there's no going back.

I'm not that pessimistic. If humans are in charge, there'll always be change, but it may be in the far future.

Though AGI might change that, because I can see one important application being the creation of ever-present minds to continuously watch and control each person who remains, on behalf of some totalitarian entity. Basically 1984 telescreens, but real and far cheaper.

sct202 · 1d ago
It's the uncertainty of the transition. Will I, in my mid-career, be able to get one of these new jobs that spawn as a result of AI, or will I be displaced into something lower-paying as a result? TFA kind of just glosses over the people who get displaced in each transition like a footnote.
siliconc0w · 1d ago
For SWE, there is a huge misconception that implementation is the bottleneck: for mid-level to senior work, it takes much longer to decide what to build than to actually build it. Each additional line of code adds weight to the airframe.
bob1029 · 1d ago
Architecture & aesthetics are the real bottlenecks in developing products that customers are actually interested in paying money for. This is why everyone seems to struggle so hard with the frontend development duties.

You cannot solve for taste and art with frameworks and patterns. Having a fixed-sized canvas and standardized brushes/paints to work with is not much help if you are ass at painting.

pontus · 1d ago
Even if AI can't replace an entire worker, they may still be able to help one worker do the work of two which by itself could lead to massive unemployment (at least in the short term.)
elevatortrim · 1d ago
In this instance, it may not lead to massive unemployment; it could simply be that we do more white-collar work than we currently do. White-collar work, for the large part, is not born out of necessity: as white collars, we do not produce things; we oversee the process, we market, we sell, we do accounting, we support, and so on. The more of these we do, the better our companies can compete. But there is no inherent reason a mature company could not drastically cut its white-collar jobs if there were no competition. White collars work because the competition also hires them and gets them to work. Hence we say "productivity is increasing but we have to work more". So even if AI leads to massive improvements in white-collar productivity, as long as it is not entirely replacing people, it may not lead to job losses at all.
lz400 · 1d ago
not all of them, but that's not the issue, the issue is 100 engineers + AI replacing 500 engineers
AstroBen · 1d ago
This assumes the amount of work available is static. If the cost to produce software reduces by 80% then suddenly many more projects are viable, needing more engineers
mythrwy · 1d ago
How much software do we actually need though?

I'm a little embarrassed to admit this, but over the past 15 years of my career I have worked on a few products the world really didn't need at all (nor want). I was paid, and life went on but it was a misallocation of resources.

I suppose now someone can build stuff the world doesn't need a lot easier.

GMoromisato · 1d ago
I grew up in the 80s and we were basically taught that at any moment we could see a mushroom cloud and get a minute to say goodbye to everything. As horrible as that was, I think it taught us to deal with uncertainty in a way that maybe other generations didn't get. We learned to accept that we don't always have control over what's going to happen. And in that acceptance, there is a Zen peace.

The truth is, we don't know how AI will evolve. Maybe it will replace all jobs in 10 years. Or maybe never. Or maybe the world will change bit by bit over the next 50 years until it is utterly unrecognizable. Anyone who tells you that they know for sure is selling you something.

If forced to guess, I would say AI is like electricity or the microprocessor: it will change everything, but it will take decades.

Once you accept that things are going to change, it frees you to focus on what's important. Focus on what you can control: your skills, your effort, and your relationships (business and personal).

1vuio0pswjnm7 · 1d ago
"Over the weekend I went digging for evidence that AI can, will, or has replaced a large percentage of jobs."

Perhaps the author is curious whether "AI" will replace "SEO jobs", or "web marketing" jobs

If "AI"-generated answers are replacing www search, and even visits to other websites, then perhaps to some extent "AI" will reduce the number of "SEO" or "web marketing" jobs

https://searchengineland.com/seo-opportunity-shrinking-rand-...

The author is a "web marketer" accusing "Tech Execs" of doing web marketing

Marketers trying to discredit other marketers

ArtTimeInvestor · 1d ago
What could stop AI from doing all jobs?
pluc · 1d ago
Realizing that without feeding AI new shit to learn from, the whole industry is going to stagnate.

Realizing that there are no longer entry level jobs for tech positions and stifling a large swathe of tech professions.

Realizing that AI isn't that good, and the bulk of the workload has become technical debt and nobody wants to be an AI janitor

etc? It's easy really, you just gotta get your head out of the hole AI put it in

raxxorraxor · 1d ago
We don't even have a robot that can effectively scrub toilets, if they aren't designed for self-cleaning.

Some will smile at the triviality of such things, but if you solve that you will get very rich. It surely is not at all trivial to solve.

Currently I don't think we can even replace any normal clerk for any slightly more complex problem yet. Programming is essentially a form of translation, and these fields will probably change as they get more supplemental tools.

I first thought artists would suffer from AI, and perhaps some will be taken advantage of. On the other hand, people seem to dislike AI-generated content.

hyperhello · 1d ago
Of course we have a machine that scrubs toilets. It only costs minimum wage to operate.
max_ · 1d ago
It has no mind.

Also from IBM,

"A computer should never be put incharge because a computer can never be held accountable"

abeppu · 1d ago
Sometimes the involvement or effort from an actual human is part of the perceived value / willingness of the customer to pay.

- People will pay more to go to a concert performed by live human musicians than they would to listen to a recording or to go see an artificial performer (e.g. chuck e cheese?)

- A "handmade" product, or a "made in house" dish is valued above a mass-production or factory-frozen meal

- A therapist, doctor etc may be valued partly for their bedside manner or ability to form genuine rapport with a patient

- Even in AI, _defining and measuring value_ ultimately comes from people. E.g. we wouldn't have the LLMs we do if there hadn't been teams of humans providing input to the RL.

keiferski · 1d ago
At some point, it will be expected that people creating content (writing/video/etc.) use a human validation service to prove that they aren’t AI. Anyone that doesn’t use this (and presumably uses AI) will be deranked from YouTube and social media.
IggleSniggle · 1d ago
And their human content fed into the machine, so it can be mass produced whenever and wherever desired, allowing less of the total benefit to flow to the creator, giving them just enough incentive to keep feeding the machine.
HeyLaughingBoy · 1d ago
Why?
keiferski · 1d ago
Content ecosystems are only valuable as ecosystems. If everything on a platform becomes AI generated, the average person will stop caring - and they’ll switch to a social network that enforces humans-only.

I don’t think it’s a realistic understanding of psychology or sociology to think that people will be happy only consuming AI stuff.

nothercastle · 1d ago
Lack of actual AI. If AI could do all jobs it wouldn't need us and would eliminate the inefficiencies
lawlessone · 1d ago
Yeah but once it eliminates us its purpose is completely gone.
lagrange77 · 1d ago
If humanity survives long enough, it will eventually. It's the end goal of technology.
blibble · 1d ago
extreme unreliability
adamlangsner · 1d ago
only Tom Cruise can
wiz21c · 1d ago
"When Chuck Norris codes, AI develops self-awareness just to avoid being roundhouse kicked."
Night_Thastus · 1d ago
Incompetence
CivBase · 1d ago
The same thing stopping AI from doing all the jobs right now.
wang_li · 1d ago
Lack of thumbs.
darkoob12 · 16h ago
I was wondering if these claims are supported by numbers. Are we seeing a decrease in employment?

I checked the US unemployment numbers; they follow their regular trend. But they are very vague and general.

I cannot attribute layoffs at companies like Microsoft to AI, because these things have happened many times before.

Notatheist · 1d ago
>the labor displacing effect of technology appears to be more than offset by compensating mechanisms that create or reinstate labor.

I don't buy into this at all:

>Assuming AI will have an effect similar to 20th Century farm equipment’s on agriculture, why will that labor force behave differently to their 20th Century counterparts (and either refuse to or be prevented from finding new jobs)?

Because "farm equipment" can't also perform the jobs it creates. I'm assuming if/once AI can do most current jobs, it can also do most if not all the jobs it creates.

hintymad · 1d ago
I have a hard time imagining how the current large model could possibly replace all the jobs, given that the models simply retrieve and re-organize human-generated knowledge, and perform limited interpolation when it comes to code generation. We software engineers open source all kinds of cool code to replace ourselves to a certain degree, but without us, who's going to generate new knowledge to feed the models?
sreekanth850 · 14h ago
AI will not replace developers; it will help developers solve problems faster. But model collapse is real, and we have to see how it can be overcome.
greenie_beans · 1d ago
they're already expecting the same work for less money, and therefore wanting me to work more. I thought it was supposed to make me work less?
guluarte · 1d ago
It'll create more work, not less. I have a client with zero programming experience who is shipping slop UIs and then asking me to fix them. I barely have time as it is for all the low-quality work I have to fix that he sends me daily.

With AI, companies will ship more features, but their competitors will too. This will likely result in a net-zero gain, similar to the current situation.

jasonthorsness · 1d ago
"Jobs" aren't people, but these headlines are making that emotional appeal for engagement. AI will transform work like many technologies of the past and most people will be doing different things or the same things differently in a few years. Just like computers or mobile phones.
agentultra · 1d ago
You just have to look at who benefits from making these claims and keep asking where they're getting their claims/numbers from.

"Oh no, we invented an AI that is so smart we're afraid of it! We need AI safety!" is literally snake-oil sales pitching. Media outlets that give air to these claims are cringe.

AI isn't replacing people. It's being used to replace labour with capital which weakens labour's negotiating power in order to increase the profits of the capital class. Just as other disruptive technologies have been used in the past.

What the Luddite movement shows us is that society needs to prepare for taking care of highly skilled people. What society didn't do back then was take care of people. Textile workers didn't find jobs elsewhere unless you mean work houses. The myth capitalists fabricated around disruptive technologies is that people displaced by these technologies will acquire new skills and find work elsewhere and that these technologies create new opportunities. It doesn't exactly happen that way.

The same could happen here. The wealthy transfer even more wealth from the labour class to themselves and avoid taxation or doing their part to replace the value they've taken from the labour class.

Update: Changed sentence on the "capitalist myth" to explain what the myth is.

qgin · 1d ago
Maybe not 100% of every job, but maybe 80% of every job, which means you can consolidate the human-gap work into 20% of the previous headcount and fire the rest.

When you’re employed, any efficiency gains you get from AI belong to the company, not you.

elktown · 1d ago
I find it amusing how you can promise the world and then people will just accept a compromise: "well, that's too far, but maybe it will replace juniors", even though that's also an extraordinary claim.
disambiguation · 1d ago
> The AI Fear & Hype Marketing Flywheel

And I suspect the majority of this flywheel is fully manufactured at this point.

Dead Internet Theory is nearly complete.

heldrida · 1d ago
AI can probably generate music that sounds like Bob Marley but who would pay to watch it perform live? Even if free…
kazinator · 1d ago
It's not just tech execs doing marketing!

Idiots everywhere are repeating it for them ad nauseam.

softwaredoug · 1d ago
Not just marketing, but often trying to cover financial issues by veering into AI hype
CivBase · 1d ago
I think the AI job extermination narrative has so much traction right now because it's a convenient cover story for layoffs. Every tech CEO is talking about how they're replacing jobs with AI but I don't see stories on HN about how any of these jobs are being replaced with AI.

That doesn't mean AI won't replace jobs. I don't know the scale, but it certainly will replace at least some jobs. Just probably not the ones we're currently losing.

M4v3R · 1d ago
The problem with the discourse around AI is that people tend to fall for the extremes.

"AI will replace all programmers within 1 year"

vs

"AI is just another fad like NFTs"

Both sides are very obviously wrong, and the truth lies somewhere in the middle. Most people knowledgeable about AI agree that it will eventually surpass humans in all tasks requiring mental labor (and will thus displace those humans, as using AI will be cheaper than employing people), but no one knows exactly when this will happen. I personally believe it will happen within the next 10 years, but it’s really just a slightly educated guess.

eagsalazar2 · 1d ago
I rarely hear people talk about the fact that demand for new software will increase exponentially as the cost to produce software crashes. The ratios of devs_per_1k_lines_of_code_in_2022:devs_per_1k_lines_of_code_in_2028 and increase_in_demand:decrease_in_cost are unknown. But what if 1 dev could produce 300% more code, reducing eng cost (assuming the same cost per hour per dev) to 1/3 of historical norms? Would that result in an increase of 20% in demand? Probably not; more like 1000%.

If that's the case, there is a large net increase in demand for experienced devs who know how to use AI for coding. Demand will go up massively, I have zero doubt of that, but will AI get so much better that unskilled MBAs are making large complex apps? ¯\_(ツ)_/¯
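The comment's arithmetic can be sketched as a back-of-envelope model (all numbers are hypothetical, as in the comment itself): whether total developer headcount rises or falls depends on whether demand grows faster than per-dev output.

```python
def headcount_multiplier(productivity_gain: float, demand_growth: float) -> float:
    """Relative change in developer headcount needed.

    productivity_gain: output multiplier per dev (3.0 = "300% more code")
    demand_growth: multiplier on total software demanded (10.0 = roughly +1000%)
    """
    return demand_growth / productivity_gain

# Optimistic scenario from the comment: cost per unit of code drops to 1/3,
# demand grows ~10x, so headcount demand more than triples.
print(headcount_multiplier(3.0, 10.0))  # > 1: more devs needed

# Pessimistic scenario: demand grows only 20%, so headcount shrinks.
print(headcount_multiplier(3.0, 1.2))   # < 1: fewer devs needed
```

The whole disagreement in this subthread reduces to which `demand_growth` value materializes, and nobody knows that number.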

bluefirebrand · 1d ago
Demand does not always boom this way

AI companies are positioning themselves as "the everything machine"

The vast majority of software written today is "capture data -> transform data (optional) -> display data nicely formatted and easily accessible"

If an AI can wire into your database to retrieve the data in the format you want, then a bunch of the job is done

If the AI can also be made to present a form to users to capture the data in the first place, then almost all of the job is done

These are huge IFs. I remain skeptical that we'll reach this level soon. But if we do, the software industry is gonna tank

The AI industry will grow. Maybe.

AstroBen · 1d ago
These basic CRUD apps are solved with no-code tools today aren't they? Why do we need AI for that

'the everything machine' is pure fantasy and hype

bluefirebrand · 1d ago
No they aren't solved by No-Code tools. No-Code tools have become "Don't write code, just write endless configuration. And then when it doesn't do what you want, write code anyways"

Except now your code starting point is an absolute mess under the hood so it's a complete crapshoot to build out anything meaningful

hooverd · 1d ago
So true. No-code tools splatter your code, your business logic, across a thousand windows and forms.
heldrida · 1d ago
That’s a fair hypothesis.

Software as we know it will disappear.

hyperhello · 1d ago
Why is more code any kind of value to anyone?
hooverd · 1d ago
The entire economy will be one-man SaaS products. /s
mistrial9 · 1d ago
and, the money to pay those new devs will just pour out like a waterfall? Some players in the ecosystem exist solely to stop payments as a cost center. Paying for work is a liability, capital expenditure and long term deals for automation are a tax writeoff. Let's be even more blunt -- slavery for labor is very profitable and makes a great economy on a large scale; taxes are healthy, production goes up, some people get very rich. Slavery for labor is a desirable economic stable point.

What does slavery for labor look like in high-skill urban setting? Rent, login credentials that are monitored, computer use is monitored, electricity is metered centrally, access to new model updates is monitored, required security updates are controlled via certificates and individual profiles, communication is by phone which is individually monitored for access patterns and location.. all very sci-fi eh?

hooverd · 1d ago
Hey, remember to turn off the Lathe of Heaven before manifesting that.
insane_dreamer · 1d ago
My take: It won't replace all the SWE jobs. But it _will_ replace many of the entry-level jobs, thereby over time significantly reducing the number of people in the industry. This is because most companies are focused on short-term cost-cutting rather than training and retaining talent (why spend $$ on that when the talent might just hop over to another company, especially when what that entry-level person does could instead be done by AI: not autonomously, but by a more senior dev "overseeing" a number of spawned LLM instances?).
fakedang · 1d ago
https://news.ycombinator.com/item?id=44181342

Hot takes like these are getting ridiculous.

zb3 · 1d ago
AI will make it worse - it will replace "easier" jobs, harder jobs will remain, but the bar will be raised and so many people will not be able to have an "office" job.
ChrisArchitect · 1d ago
Related:

The ‘white-collar bloodbath’ is all part of the AI hype machine

https://news.ycombinator.com/item?id=44136117

ninetyninenine · 1d ago
No it’s a realistic possibility. Not just marketing.

It may not replace us, and it also may. Given the progress of the last decade in AI, it's not far-fetched to say that we will come up with something in the next decade.

I hope nothing comes of it, but it's unrealistic to say something definitively will not replace us.

sulam · 1d ago
You can’t project trends out endlessly. If you could, FB would have 20B users right now based on early growth (just a random guess, you get the point). The planet would have 15B people on it based on growth rate up until the 90s. Google would be bigger than the world GDP. Etc.

One of the more bullish AI people, Sam Altman, has said that model performance scales with the log of compute. Do you know how hard it will be to move that number? We are already well into diminishing returns with current methodologies, and no one is pointing the way to a breakthrough that will get us to expert-level performance. RLHF is underinvested in currently but will likely be the path from junior contributor to mid-level in specific domains, but that still leaves a lot of room for humanity.
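The log-of-compute claim can be made concrete with a toy model (the constants below are invented purely for illustration, not fitted to any published scaling law): if performance is a + b·log10(compute), each additional fixed gain costs a multiplicative, not additive, increase in compute.

```python
import math

# Toy scaling curve: performance = a + b * log10(compute).
# a and b are arbitrary illustrative constants, not measured values.
def perf(compute: float, a: float = 0.0, b: float = 10.0) -> float:
    return a + b * math.log10(compute)

base = perf(1e24)       # hypothetical current training compute (FLOPs)
ten_x = perf(1e25)      # 10x the compute
hundred_x = perf(1e26)  # 100x the compute

print(ten_x - base)      # each 10x of compute buys roughly b points
print(hundred_x - base)  # 100x the compute only doubles that gain
```

Under a curve like this, "just scale it up" stops being an answer once each increment of performance requires another order of magnitude of hardware.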

The most likely reason for my PoV to be wrong is that AI labs are investing a lot of training time into programming, hoping the model can self improve. I’m willing to believe that will have some payoffs in terms of cheaper, faster models and perhaps some improvements in scaling for RLHF (a huge priority for research IMO). Unsupervised RL would also be interesting, albeit with alignment concerns.

What I find unlikely with current models is that they will show truly innovative thinking, as opposed to the remixed ideas presented as “intelligence” today.

Finally, I am absolutely convinced today’s AI is already powerful enough to affect every business on the planet (yes even the plumbers). I just don’t believe they will replace us wholesale.

ninetyninenine · 1d ago
>You can’t project trends out endlessly.

But this is not just an endless projection. In one sense we can't have economic growth and energy consumption go endlessly as that will eat up all the available resources on earth, there is a physical hard line.

However, for AI this is not the case. There is literally an example of human-level intelligence existing in the real world: you're it. We know we haven't even scratched the limit.

It can be done, because an example of the finished product is humanity itself. The question is: do we have the capability to do it? That we don't know. But given the trend, and the fact that a finished product already exists, it is totally realistic to say AI will replace our jobs.

AstroBen · 1d ago
There's no evidence we're even on the right track to have human level intelligence so no, I don't think it's realistic to say that

Counterpoint: our brains use about 20 watts of power. How much does AI use again? Does this not suggest that it's absolutely nothing like what our brains do?

ninetyninenine · 1d ago
There is evidence we're on the right track. Are you blind? The evidence is not definitive, but it's evidence that makes it a possibility.

Evidence: ChatGPT and all LLMs.

You cannot realistically say that this isn't evidence. Neither of these things guarantees that AI will take over our jobs but they are datapoints that lend credence to the possibility that it will.

On the other side of the coin, it is utterly unrealistic to say that AI will never take over our jobs when there is Also no definitive evidence on this front.

AstroBen · 1d ago
> unrealistic to say that AI will never take over our jobs

That's not my position. I'm agnostic. I have no idea where it'll end up but there's no reason to have a strong belief either way

The comment you originally replied to is I think the sanest thing in here. You can't just project out endlessly unless you have a technological basis for it. The current methodologies are getting into diminishing returns and we'll need another breakthrough to push it much further

This is turning into religious debate

ninetyninenine · 23h ago
Then we're in agreement. It's clearly not a religious debate; you're just mischaracterizing it that way.

The original comment I replied to is categorically wrong. It's not sane at all when it's rationally and factually untrue. We are not projecting endlessly. We are at the one-year mark of a bumpy upward trendline that's been going for over 15 years. This one-year mark is characterized by a slight diminishing return in LLM technology that's being exaggerated into an absolute limit of AI.

Clearly we've had all kinds of models developed in the last 15 years so one blip is not evidence of anything.

Again, we already have a datapoint here. You are a human brain; we know that an intelligence up to human intelligence can be physically realized, because the human brain is ALREADY a physical realization. It is not insane to draw a projection in that direction, and it is certainly not an endless-growth trendline. That's false.

Given the information we have, you gave it an "agnostic" outlook, which is 50/50. If you had asked me 10 years ago whether we would hit AGI, I would've given it a 5 percent chance, and now both of us are at 50/50. So your stance actually contradicts the "sane" statement you said you agree with.

We are not projecting infinite growth, and by your own statement you don't really believe we are, since you grant a 50 percent possibility that we will hit AGI.

AstroBen · 21h ago
Agnostic, at least as I was using it, was intending to mean 'who knows'. That's very different from a 50% possibility

"You are a human brain, we know that an intelligence up to human intelligence can be physically realized" - not evidence that LLMs will lead to AGI

"trendline that's been going for over 15 years" - not evidence LLMs will continue to AGI, even more so now given we're running into the limits of scaling it

AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades

The only evidence that justifies a specific probability is going to be technical explanations of how LLMs are going to scale to AGI. No one has that

1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit

2. ???

3. AGI

What's the 2?

It matters.. because everyone's hyped up and saying we're all going to be replaced but they can't fill in the 2. It's a religious debate because it's blind faith without evidence

ninetyninenine · 13h ago
>Agnostic, at least as I was using it, was intending to mean 'who knows'. That's very different from a 50% possibility

I take "don't know" to mean the outcome is 50/50 either way because that's the default probability of "don't know"

> not evidence LLMs will continue to AGI, even more so now given we're running into the limits of scaling it

Never said it was. The human brain is evidence of what can be physically realized and that is compelling evidence that it can be built by us. It's not definitive evidence but it's compelling evidence. Fusion is less compelling because we don't have any evidence of it existing on earth.

>AI winter is a common term for a reason. We make huge progress in a short amount of time, everyone goes crazy with hype, then it dies down for years or decades

AI winter refers to a singular event in the entire history of AI. It is not, as you seem to imply, a term for a recurring phenomenon. We had one winter, and that is not enough to establish a pattern that will repeat.

>1. LLMs are good at specific, well defined tasks with clear outcomes. The thing that got them there is hitting its limit

What's the thing that got them there? Training data?

>It matters.. because everyone's hyped up and saying we're all going to be replaced but they can't fill in the 2. It's a religious debate because it's blind faith without evidence

The hype is in the other direction. On HN everyone is overwhelmingly against AI and making claims that it will never happen. Also, artists have already been replaced: I worked at a company where artists did in fact get replaced by AI.

yesbut · 1d ago
Glorified chatbots. We'll never have AI.
eximius · 1d ago
Maybe. But I think this is a useful time to engage in the conversation about what we do as a semi-post-scarcity society and how we should treat people under that framework. If we _do_ replace all the jobs, how should we, as a society, treat each other?

Fundamentally, you either continue down the path of pure capitalism and let people starve in the streets or you adapt socially.

gigel82 · 1d ago
The worst is when tech execs elsewhere actually believe the BS or (more likely) use it as an excuse to put pressure on already-overworked employees through layoffs and fearmongering.

This is very real and it's happening now across the industry, with devastating consequences for many (financial, health, etc.)

moi2388 · 1d ago
Of course AI will replace us.

Just like the steam engine did, just like robots did, just like computers did.

Oh, wait.

fullshark · 1d ago
Cars replaced horses, cause they were clearly better at their job. Humans' primary labor value to society at large comes from their brains, and ability to do multiple physical tasks, some featuring precise movements, with little instruction/development.

The idea that computers and general purpose robots cannot possibly replace humans is no longer outrageous to me. Especially if we are talking about a few key humans controlling / managing multiple robots to do what was previously the work of N humans.

moi2388 · 5h ago
“ The idea that computers and general purpose robots cannot possibly replace humans is no longer outrageous to me”

It's not. My point is that this was already the case. We already have computers, robots, automation, machines which are better than humans at any task you give them. And they still don't replace humans, because humans move on to other things.

You will always only replace tasks, never people. The question isn’t man vs machine, it’s man and machine vs only machine.

bluefirebrand · 1d ago
I think humans should have a strong incentive to never let this become reality

I do not want to see the bloodbath that will follow

And I do pretty strongly think that is the trajectory we're on. We cannot create utopia for 1% of the population and purgatory for the other 99% and expect people to just sit still and take it

fullshark · 1d ago
The people will indulge in vices, fail to build for their own future, not have children, and cheer on their own obsolescence.
NoOn3 · 1d ago
When cars were invented, someone might have thought that there would be other jobs for horses. Quote from Wikipedia: "There were an estimated 20 million horses in March 1915 in the United States.[36] But as increased mechanization reduced the need for horses as working animals, populations declined. A USDA census in 1959 showed the horse population had dropped to 4.5 million"...