The greatest trick MEGACORP ever pulled was convincing software engineers they were being replaced by AI, when they were actually being replaced by cheaper nearshore/offshore devs working remotely.
noworriesnate · 5h ago
My gut feeling is that the offshore workers are going to be replaced by AI first. Unfortunately, they have a reputation for bad quality work, and it's hard to know ahead of time who is good and who isn't because there aren't many connections (and connections are how people will be hired exclusively going forward, IMO).
perrygeo · 3h ago
Offshore workers and generative AI have a lot in common: little formal training, and lots of book smarts but zero context. They perform OK at well-specified tasks, extremely poorly otherwise. They can't understand human-written design documents with enough nuance, and they have no aesthetic design sense. And finally, there's their undisputed ability to pump out high volumes of code (including high volumes of garbage).
The only difference is that LLMs have a deeper and wider understanding of English, no time zone barriers, and nearly instant response time. I find it hard to picture a world where AI doesn't decimate these types of low-skill software jobs.
bn-l · 4h ago
> They have a reputation of bad quality
That is an extreme understatement.
vmaurin · 6h ago
I claim that by the end of the year, all VC jobs will be replaced by AI. But for some reason my claim isn't taken seriously, or isn't very popular!
ctkhn · 6h ago
Myth BUSTED: AI can't do coke or ketamine with founders, and has no value in your use case.
skm · 4h ago
Marc Andreessen would beg to differ :)
On a recent a16z podcast, Andreessen said:
“It's possible that [being a VC] is quite literally timeless, and when the AIs are doing everything else, that may be one of the last remaining fields that people are still doing."
Interestingly, his justification is not that VCs are measurably good at what they do, but rather that they appear to be so bad at what they do:
“Every great venture capitalist in the last 70 years has missed most of the great companies of his generation... if it was a science, you could eventually dial it in and have somebody who gets 8 out of 10 [right]. There's an intangibility to it, there's a taste aspect, the human relationship aspect, the psychology — by the way a lot of it is psychological analysis."
The podcast in question: https://youtu.be/qpBDB2NjaWY
(Personally, I'm not quite sure he actually believes this, but watching him is a certain kind of masterclass in using spicy takes to generate publicity / awareness / buzz. And by talking about him I'm participating in his clever scheme.)
lenerdenator · 4h ago
That sounds more like him trying to justify, to his peers, all the possible harms to society as a whole.
"It'll screw everyone else, but we'll be okay, so..."
comte7092 · 3h ago
The way I’d read that take is that being a “good” VC is about having enough money to spread around and enough networking connections to generate the right leads. After that pretty much any idiot can do the job.
Tldr AI can replace labor but not capital. More news at 11.
cgio · 6h ago
My lemma: “No one thinks their direct reports can be replaced by AI. Everyone thinks their direct reports’ reports can be replaced by AI.”
ctkhn · 6h ago
As one of those reports' reports, I have noticed that these decision maker business types love AI for their own job too. A teammate had to email another department for approval on some infra and our manager's manager told him to run it through our firm's proprietary LLM to touch it up when it was essentially "I'm on team X and we need approval to use Y for Z, is that ok?"
Makes me wonder what is even going on in her brain if she thought something so simple needed an AI touchup.
cgio · 5h ago
That's how we decision makers tell each other we are transforming our teams to be AI-first ;-). Touching up text is literally the shallowest way to apply AI, so it figures. The schizophrenic practice of demanding AI savings while cutting based on spans of control is an indication of how even big players that previously had team-topology maturity, such as Amazon, are losing the plot.
bravetraveler · 4h ago
I'm supposed to be an SRE... the industry is so off the rails that my job is better described as "YAML peddler".
Not worried in the slightest. Just exhausted and annoyed.
feverzsj · 6h ago
The big corps poured a shit ton of money into AI, so they have to cut the salaries of human employees, or just cut the employees.
xnx · 5h ago
Everyone wants the benefits of AI for themselves, but doesn't want others to benefit from AI: screenwriters and studios, college students and professors, etc.
It's a very natural (if not honest) situation to try and get an edge in a competitive (not cooperative) environment.
matt-cicero · 10h ago
# Developers, Don't Despair, Big Tech and AI Hype is off the Rails Again
Many software engineers seem to be more worried than usual that the AI agents are coming, which I find saddening and infuriating at the same time. I'll quickly break down the good, bad, and ugly for you.
## Fever Pitch Hype
I think I can smell blood in the water for these generative AI companies, because the hype train is currently totally off the rails again, this time with especially absurd and outlandish claims. This latest round has followed a tidy escalation:
1. Mark Zuckerberg appeared on Joe Rogan in Jan 2025, claiming by year's end Meta will have an AI mid-level software engineer.
2. Shortly after, Sam Altman appeared boasting that soon OpenAI will have a $20k/month PhD level super coder agent.
3. Not wanting to be left out, Dario Amodei one-upped them claiming within 3 - 6 months AI will write 90% of all code, and within 12 months 100% of all code.
4. Getting the last word in, OpenAI made another appearance assuring us that by year's end they will replace all senior staff level software engineers.
Do these people even hear themselves? I know not to expect any better because, as it turns out, highly manipulative and self-serving individuals will blurt out all sorts of ridiculous BS when tens of billions in investor funds are at stake. The current batch of frontier LLMs can barely churn out 100+ line snippets of usable, clean Rust code, and they want me to believe that in one more upgrade they'll be hammering out large, enterprise-level, secure, polished, production-ready systems?
The big problem is that a bunch of folks actually take these claims seriously and use them as an excuse to freeze the junior hiring pipeline.
At the senior levels, the powers that be don't actually believe any of this, since plenty of hiring is still happening to compensate for overdone layoffs in spots.
Full Article: https://cicero.sh/forums/thread/developers-don-t-despair-big...
mock-possum · 4h ago
Rogan is a red flag, I’ve seen the kind of content he platforms and the audience that consumes it.
jajko · 6h ago
If those were real claims (I don't follow those guys, because why on earth would I do that to myself; free time is for completely different matters), why is anybody still taking them seriously?
People bash Trump for his momentary brainfarts, yet this is exactly the same stuff. Are they really trying to imitate the same behavior, with similar consequences? They should be ignored by devs and investors alike (or bet against via shorts or similar instruments). Real progress looks different.
kentm · 6h ago
A large number of people do not take Zuckerberg or Altman seriously and do bash them, but there is also a contingent that do. This is similar to Trump; about 1/3 of America listens to him and think he’s talking sense. Note that these comments were made on Joe Rogan’s show, apparently. I’ll leave you to consider what sort of audience Rogan appeals to.
marstall · 6h ago
v impressed with how much OP can do, as a blind person.
sublinear · 5h ago
I don't think anyone was taking this idea any more seriously than cryptocurrency replacing the banks?
fragmede · 6h ago
> Every day, instead of picking up where you left off, you need to re-train the AI assistant. Granted, you could maintain an ever-changing set of training prompts, but this adds an extra development layer to the project.
If you aren't actually having the LLM write short-term memory files (i.e., using the feature in practice), why should I believe you're qualified to speak on how well it actually works?
To be clear, this isn't a comment on the feasibility of bold claims made by people with a significant financial interest in those claims.
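The "memory files" pattern alluded to above can be sketched roughly like this. Note this is a hypothetical convention for illustration (the `NOTES.md` filename and helper functions are made up, not any particular tool's API): the assistant appends a summary at the end of each session, and the next day's prompt is built on top of that instead of re-explaining the project from scratch.

```python
from pathlib import Path

MEMORY_FILE = Path("NOTES.md")  # hypothetical project-memory file

def load_memory() -> str:
    """Return the accumulated project context, or empty if none yet."""
    return MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""

def save_memory(summary: str) -> None:
    """Append an end-of-session summary so the next session can resume."""
    with MEMORY_FILE.open("a") as f:
        f.write(summary.rstrip() + "\n")

def build_prompt(task: str) -> str:
    """Prepend stored memory to the day's task instead of re-training from scratch."""
    memory = load_memory()
    preamble = f"Project notes so far:\n{memory}\n" if memory else ""
    return f"{preamble}Task: {task}"
```

Whether this counts as "an extra development layer" or just a text file per repo is exactly the disagreement in the quoted thread.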
ninetyninenine · 6h ago
This article is off the rails in a way. Yeah we all know about how LLMs hallucinate and how that’s an impossible hurdle to get over (currently).
But pre-ChatGPT, a description of AI at its current level would itself have been an even MORE off-the-rails claim than anything in this article. What AI can do today was unthinkable, to the point where you could have been sent to the mental ward of a hospital for predicting it. The Turing test was leapfrogged, and everybody just complains that AI is garbage and moves the goalposts.
It's not that the claims are wildly overblown. They're only overblown a little, not by an overly bullshit amount.
It's that the hype is pervasive. We see it everywhere, and we are riding along with it. AI has infiltrated our lives so deeply that we are just no longer impressed, so we get all kinds of people saying AI is overblown when really it's not that overblown at all. AI agents that code for us? We are 50 percent of the way there. It's the last 50 percent that's brutally hard, but it's not completely out of this world for a company to try to jump that gap in a year. We've made incremental progress.
If Elon invented a space-faring vehicle with a light-speed drive that anyone could purchase and fly for $5, I guarantee the hype would blow up until people got sick of it, just like with AI.
People would be talking about how space travel and light-speed drives are overblown. "I'm not impressed that it still takes four years to get to Alpha Centauri, are you kidding me?"