Andrej Karpathy's YC AI SUS talk on the future of the industry

214 points by pudiklubi | 147 comments | 6/18/2025, 4:56:18 PM | donnamagi.com ↗

Comments (147)

karpathy · 45m ago
Btw I notice many pretty bad errors in this transcription of the talk. The actual video will be up soon I hope.
dang · 28m ago
Ah sorry! I'm going to downweight this thread now.

There's so much demand around this, people are just super eager to get the information. I can understand why, because it was my favorite talk as well :)

kapildev · 21m ago
How soon? I am contemplating whether to read this errorful transcript or wait for the video
pudiklubi · 27m ago
anything you'd want fixed immediately? happy to do so – or even take this down if you wish. it's your talk.
sotix · 23m ago
Is this because it was recorded with AI tooling rather than a traditional note taker?
pudiklubi · 21m ago
it was an audio recording, transcribed with speech to text models. there's definitely some errors and words lost. I also tried to emphasize this
sotix · 16m ago
Thanks for the clarification. Bit ironic given the talk’s subject. It is quite a bit of effort, but there’s something to say for going through and manually writing up the transcript like a journalist. Sometimes you can’t beat human effort ;)
pudiklubi · 5h ago
For context - I was in the audience when Karpathy gave this amazing talk on software 3.0. YC has said the official video will take a few weeks to release, by which time Karpathy himself said the talk will be deprecated.

https://x.com/karpathy/status/1935077692258558443

levocardia · 4h ago
To complete the loop, we need an AI avatar of Karpathy doing text-to-voice from the transcript. Who says AI can't boost productivity!
msgodel · 3h ago
I listened to it with an old-fashioned CMU speech synth.
chrisweekly · 3h ago
Do the talk's predictions about the future of the industry project beyond a few weeks? If so, I'd expect the salient points of the talk to remain valid. Hmm...
swyx · 3h ago
i synced the slides with the talk transcript here: https://latent.space/s3
pudiklubi · 2h ago
so you took my transcript and put it behind a newsletter sub? haha. just share them!
swyx · 2h ago
not quite, i compiled the slides within a few hours of the talk yesterday well before your transcript was available. the slides are my main output/contribution. a full slides+transcript is too long for substack. i've linked your transcript prominently for people to find, and used it to fix slide ordering because twitter people took terrible notes for the purpose of exact talk reconstruction.

i expect YC to prioritize publishing this talk so probably the half-life of any of this work is measured in days anyway.

100% of our podcast is published for free, but we still have ~1000 people who choose to support our work with a subscription (it does help pay for editors, equipment, and travel). I always feel bad that we don't have much content for them so i figured i'd put just the slide compilation up for subscribers. i'm trying to find nice ways to ramp up value for our subs over time, mostly by showing "work in progress" things like this that i had to do anyway to summarize/internalize the talk properly - which again is what we published entirely free/no subscription required

pudiklubi · 1h ago
gotcha. def a good idea! i'm only a bit wary of making all of this too put-together, so as to create an official source before there is one.

that being said, HN is a negative place, and not what I was trying to go for. thank you for your work with the slides!

dang · 1h ago
Let's all make HN a less negative place.

(As a step towards making it a non-negative place.)

fellatio · 55m ago
Looks like you are putting a derivative behind a paywall though, no? I think quid pro quo: let pudiklubi publish your work too? Some kind of open license?
swyx · 2h ago
btw i think your transcript is missing most of the Perplexity slide discussion, right after the Cursor example
theyinwhy · 2h ago
What a poor judgement he must have if his outlook becomes irrelevant in a few weeks' time.

Edit: the emoji at the end of the original sentence has not been quoted. How a smile makes the difference. Original tweet: https://x.com/karpathy/status/1935077692258558443

theturtletalks · 2h ago
It was in jest, more a take of how quickly things move in AI
pudiklubi · 56m ago
the way I read it, it's more about how fast examples and references become irrelevant; the fundamentals of the speech, not so much.
qwertox · 2h ago
We better stop talking about the future then.
scottyah · 2h ago
Every time you talk about the future, it gets altered.
amarait · 2h ago
Hard determinists would disagree
koakuma-chan · 51m ago
Even if the universe is not deterministic, what he said still does not make sense, because him talking about the future is part of destiny.
addaon · 50m ago
They basically have to.
bcrosby95 · 2h ago
I might be wrong, but it seems like some people are misinterpreting what is being said here.

Software 3.0 isn't about using AI to write code. It's about using AI instead of code.

So not Human -> AI -> Create Code -> Compile Code -> Code Runs -> The Magic Happens. Instead, it's Human -> AI -> The Magic Happens.
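The contrast can be sketched in a few lines of Python. The `llm_complete` callable below is a hypothetical stand-in for any model API (not something from the talk); the point is that in 3.0 the prompt is the program:

```python
# Software 1.0: behavior is fixed by explicit, hand-written logic.
def sentiment_1_0(text: str) -> str:
    positive = {"great", "love", "excellent"}
    negative = {"bad", "hate", "terrible"}
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# Software 3.0: behavior is specified in English and delegated to a model.
def sentiment_3_0(text: str, llm_complete) -> str:
    # llm_complete is a hypothetical stand-in for any chat/completion API
    prompt = f"Classify the sentiment of this review as positive, negative, or neutral:\n{text}"
    return llm_complete(prompt)

print(sentiment_1_0("I love this, it is excellent"))  # -> positive
```

Same task, but in the second version there is no hand-written classification logic at all; the "code" is the prompt.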

imiric · 1h ago
So... Who builds the AI?

This is why I think the AI industry is mostly smoke and mirrors. If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.

TeMPOraL · 1h ago
> If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities.

Recursive self-improvement is literally the endgame scenario - hard takeoff, singularity, the works. Are you really saying you're dissatisfied with the progress of those tools because they didn't manage to end the world as we know it just yet?

imiric · 1m ago
No, that's not what I'm saying.

The progress has been adequate and expected, save for very few cases such as generative image and video, which has exceeded my expectations.

Before we reach the point where AI is self-improving on its own, we should go through stages where AI is being improved by humans using AI. That is, if these tools are capable of reasoning and are able to solve advanced logic, math, and programming challenges as shown in benchmarks, then surely they must be more capable of understanding and improving their own codebases with assistance from humans than humans could do alone.

My point is that if this was being done, we should be seeing much greater progress than we've seen so far.

Either these tools are intelligent, or they're highly overrated. Which wouldn't mean that they can't be useful, just not to the extent that they're being marketed as.

no_wizard · 1h ago
I don’t believe the technology horizon in the next 5 years is sufficiently developed for recursive self improvement to work so well it requires no human intervention, by which I mean it will hit a limit and the technology will still be a tool not a sentient (near sentient?) thing.

I think there will be a wall hit eventually with this, much like there was with visual recognition in the mid 2010s[0]. It will continue to improve but not exponentially

To be fair I am bullish it will make white collar work fundamentally different but smart companies will use it to accelerate their workforce productivity, reliability and delivery, not simply cut labor to the bone, despite that seemingly being every CEOs wet dream right now

[0]: remember when everyone was making demos and apps that would identify objects and such, and all the facial augmentation stuff? My general understanding is that the tech is now in the incremental improvement stage. I think LLMs will hit the same stage in the near term and likely hover there for quite awhile

TeMPOraL · 51m ago
> I don’t believe the technology horizon in the next 5 years is sufficiently developed for recursive self improvement to work so well it requires no human intervention

I'm personally 50/50 on this prediction at this point. It doesn't feel like we have enough ingredients for end-to-end recursive self-improvement in the next 5 years, but the overall pace is such that I'm hesitant to say it's not likely either.

Still, my reply was to the person who seemed to say they won't be impressed until they see AIs "able to build better versions of themselves" and "exponential improvements of their capabilities" - to this I'm saying, if/when it happens, it'll be the last thing that they'll ever be impressed with.

> remember when everyone was making demos and apps that would identify objects and such, and all the facial augmentation stuff? My general understanding is that the tech is now in the incremental improvement stage.

I thought that this got a) boring, and b) all those advancements got completely blown away by multimodal LLMs and other related models.

My perspective is that we had a breakthrough across the board in this a couple years ago, after the stuff you mentioned happened, and that isn't showing signs of slowing down.

mlboss · 26m ago
Recursive self-improvement will never happen. We will hit physical limitations before that: energy, rare earth minerals, datacenters, etc. The only way we can have recursive self-improvement is if robots take over and start expanding to other planets/solar systems.
fellatio · 1h ago
Nope. Just that 1. Is better than people. 2. Isn't better than people. Pick one!

If the former then yes singularity. The only hope is it's "good will" (wouldn't bet on that) or turning off switches.

If the latter you still need more workers (programmers or whatever they'll be called) due to increased demand for compute solutions.

TeMPOraL · 56m ago
> Nope. Just that 1. Is better than people. 2. Isn't better than people. Pick one!

That's too coarse of a choice. It's better than people at an increasingly large number of distinct tasks. But it's not good enough to recursively self-improve just yet, though it is doing it indirectly: it's useful enough to aid researchers and businesses in creating the next generation of models. So in a way, the recursion and resulting exponent are already there; we're just in such early stages that it looks like linear progress.
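The "early exponential looks linear" point is easy to check numerically; a toy illustration (the 5% rate is arbitrary, chosen only to make the effect visible):

```python
# Compounding 5% growth vs. a straight line: over the first few steps
# the two curves are nearly indistinguishable.
rate = 0.05
steps = range(5)
exponential = [(1 + rate) ** t for t in steps]
linear = [1 + rate * t for t in steps]
max_gap = max(abs(e - l) for e, l in zip(exponential, linear))
print(round(max_gap, 3))  # the curves differ by under 2% after 4 steps
```

Only much later does the compounding term dominate and the curve become obviously exponential.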

fellatio · 52m ago
Thanks. Your nuanced version is better. In that version I can still ignore most of LinkedIn and Twitter and assume there will still be a need for people. Not just at OMGAD (OpenAI...) but at thousands of companies.
yusina · 58m ago
Moving goal posts? This was a response to the claim that AIs are the new code.
TeMPOraL · 54m ago
Not really. GP claims they expect to see exponential improvements to be impressed, seemingly without realizing what such an exponent will look like once it's happening and starts to look obviously exponential.
trgn · 1h ago
> who builds the ai

3 to 5 companies instead of the hundreds of thousands who sell software now

iLoveOncall · 52m ago
> If these tools are really as revolutionary as they claim they are, then they should be able to build better versions of themselves, and we should be seeing exponential improvements of their capabilities. Yet in the last year or so we've seen marginal improvements based mainly on increasing the scale and quality of the data they're trained on, and the scale of deployments, with some clever engineering work thrown in.

Yes and we've actually been able to witness in public the dubious contributions that Copilot has made on public Microsoft repositories.

bmicraft · 1h ago
The AI isn't much easier when you consider that the "AI" step is actually: create dataset -> train model -> fine-tune model -> run model to train a much smaller model -> ship much smaller model to end devices.
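The "train a much smaller model" step is usually knowledge distillation. A minimal sketch of the core loss on toy logits, assuming nothing beyond the standard temperature-softened KL formulation:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing "dark knowledge"
    # in the teacher's non-argmax outputs.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) on temperature-softened distributions:
    # the student is trained to match the teacher's full output
    # distribution, not just its top-1 label.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that exactly matches the teacher has zero loss.
print(distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # -> 0.0
```

In a real pipeline this loss is minimized over the training set with gradient descent, typically mixed with the ordinary hard-label loss.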
autobodie · 1h ago
I don't think people are misinterpreting. People just don't find it convincing or intriguing.
adriand · 25m ago
It’s like a friend of mine who has an AI company said to me: the future isn’t building a CRM with AI. The future is saying to the AI, act like a CRM.
__loam · 2m ago
And it won't work as well as an actual crm because you've scrubbed all the domain knowledge of that software and how it ought to work out of the organization.
zie1ony · 1h ago
This is great idea, until you have to build something.
fellatio · 1h ago
Let alone productionize it! And god forbid maintain it. And have support that doesn't crap out.
layer8 · 48m ago
Until you have to reliably automate something, I would say.
agarren · 57m ago
That jibes with what Nadella said in an interview not too long ago. Essentially, SaaS apps disappear entirely as LLMs interface directly with the underlying data store. The unspoken implication being that software as we understand it goes away as people interface with LLMs directly rather than ~~computers~~ software at all.

I kind of expect that from someone heading a company that appears to have sold-the-farm in an AI gamble. It’s interesting to see a similar viewpoint here (all biases considered)

Vegenoid · 45m ago
> people interface with LLMs directly rather than software at all

What does this mean? An LLM is used via a software interface. I don’t understand how “take software out of the loop” makes any sense when we are using reprogrammable computers.

FridgeSeal · 41m ago
It’s just…the vibe of it man! It’s LLM’s! They’ll just…do things! And stuff…it’ll just happen!!! Don’t worry about the details!!!!
__loam · 1h ago
This industry is so tiring
mattgreenrocks · 1h ago
Definitely. And it gets more tiring the more experience you have, because you've seen countless hype cycles come and go with very little change. Each time, the same mantra is chanted: "but this time, it's different!" Except, it usually isn't.

Started learning metal guitar seriously to forget about industry as a whole. Highly recommended!

obiefernandez · 52m ago
Self plug: I wrote a whole bestselling book on this exact topic

https://leanpub.com/patterns-of-application-development-usin...

afiodorov · 3h ago
> So, it was really fascinating that I had the menu gem basically demo working on my laptop in a few hours, and then it took me a week because I was trying to make it do it

Reminds me of work where I spend more time figuring out how to run repos than actually modifying code. A lot of my work is focused on figuring out the development environment and deployment process - all with very locked down permissions.

I do think LLMs are likely to change the industry considerably, as LLM-guided rewrites are sometimes easier than adding a new feature or fixing a bug, especially if the rewrite is into something more LLM-friendly (e.g., a popular framework). Each rewrite makes the code further Claude-codeable or Cursor-codeable; ready to iterate even faster.

andai · 2h ago
The last 10% always takes 1000% of the time...
Aeolun · 43m ago
Jup. Claude develops the first 90% without a sweat, and then starts flailing.
afiodorov · 2h ago
I am not saying rewrites are always warranted, but I think LLMs change the cost-benefit balance considerably.
steveklabnik · 2h ago
I am with you on this. I'm very much not sure about rewrites, but LLMs do change the cost-benefit balance of refactorings considerably, for me. Both in a "they let me make a more informed decision about proceeding with the refactoring" and "they are faster at doing it than I am".
jacobgorm · 32m ago
This is the guy who convinced Elon to drop the radar in Teslas.
arkj · 2h ago
>Software 2.0 are the weights which program neural networks.

>I think it's a fundamental change, is that neural networks became programmable with large libraries... And in my mind, it's worth giving it the designation of a Software 3.0.

I think it's a bit early to change your mind here. We love your 2.0; let's wait some more time till the dust settles so we can see clearly and up the revision number.

In fact I'm a bit confused about the number AK has in mind. Anyone else knows how he arrived at software 2.0?

I remember a talk by professor Sussman where he suggest we don't know how to compute, yet[1].

I was thinking he meant this,

Software 0.1 - Machine Code/Assembly Code
Software 1.0 - HLLs with Compilers/Interpreters/Libraries
Software 2.0 - Language comprehension with LLMs

If we are calling weights 2.0 and NN with libraries as 3.0, then shouldn't we account for functional and oo programming in the numbering scheme?

[1] https://www.youtube.com/watch?v=HB5TrK7A4pI

autobodie · 2h ago
Objectivity is lacking throughout the entire talk, not only in the thesis. But objectivity isn't very good for building hype.
Karrot_Kream · 1h ago
I'm curious why you think that? I thought the talk was pretty grounded. There was a lot of skepticism of using LLMs unbounded to write software and an insistence on using ground truth free from LLM hallucination. The main thesis, to me, seemed like "we need to write software that was designed with human-centric APIs and UI patterns to now use an LLM layer in front and that'll be a lot of opportunity for software engineers to come."

If anything it seemed like the middle ground between AI boosters and doomers.

autobodie · 56m ago
It's a lot of meandering and mundane analogies that don't work very well or explain much, so it's totally understandable that so many people have different interpretations of what he's even trying to say. The only consistent takeaway here is that he's talking about using AI (of many sorts) alongside legacy software.
baxtr · 1h ago
How can someone so smart become a hype machine? I can’t wrap my head around it. Maybe he had the opportunity to learn from someone he worked closely with?
TeMPOraL · 1h ago
> How can someone so smart become a hype machine? I can’t wrap my head around it.

Maybe they didn't, and it's just your perception.

barrkel · 1h ago
Maybe you haven't seen the frontier and envisioned the possibilities?
throwawayoldie · 1h ago
"It is difficult to get a man to understand something, when his salary depends upon his not understanding it." --Upton Sinclair
bigyabai · 1h ago
Reminds me of Vitalik Buterin. I spent a lot of my starry-eyed youth reading his blog, and was hopeful that he was applying the learned-lessons from the early days of Bitcoin. Turned out he was fighting the wrong war though, and today Ethereum gets less lip service than your average shitcoin. The whole industry went up in flames, really.

Nerds are good at the sort of reassuring arithmetic that can make people confident in an idea or investment. But oftentimes that math misses the forest for the trees, and we're left betting the farm on a profoundly bad idea like Theranos or DogTV. Hey, I guess that's why it's called Venture Capital and not Recreation Investing.

DaveChurchill · 37m ago
The death of deterministic computing and unverifiable information is a horror show
pests · 2h ago
I think how Andrej views 3.0 is hinted at by his later analogy about Tesla. He saw a ton of manually written Software 1.0 C++ replaced by the weights of the NN. What we used to write manually in explicit code is now incorporated into the NN itself, moving the implementation from 1.0 to 3.0.
koakuma-chan · 1h ago
"revision number" doesn't matter. He is just saying that traditional software's behaviour ("software 1.0") is defined by its code, whereas outputs produced by a model ("software 2.0") are driven by its training data. But to be fair I stopped reading after that, so can't tell you what "software 3.0" is.
waynenilsen · 20m ago
I used https://app.readaloudto.me to listen; it is helpful.
no_wizard · 3h ago
Regarding a large contention of this essay (which I'm assuming the talk is based on or is transcribed from, depending on order): I do think that open source models will eventually catch up to closed source ones, or at least be "good enough", and I also think you can already see how LLMs are augmenting knowledge work.

I don’t think it’s the 4th wave of pioneering a new dawn of civilization but it’s clear LLMs will remain useful when applied correctly.

bix6 · 2h ago
Why would open source outpace? Isn’t there way more money in the closed source ones and therefore more incentive to work on them?
no_wizard · 1h ago
I didn’t say outpace, but I do believe the collective nature of open source will allow it to catch up, much like it did with browser tech, at which point you’ll see a shift of resources toward it by major companies. It’s a collective-works thing. I think it’s also attractive to work on in open source, much like Linux or web browsers (hence the comparison), and that will help it along over time.

I stick by my general thesis that OSS will eventually catch up or the gap will be so small only frontier applications will benefit from using the most advanced models

oblio · 1h ago
They didn't say "outpace", they said "catch up to good enough levels".
umeshunni · 3h ago
> I do think that open source models will eventually catch up to closed source ones

It felt like that was the direction for a while, but in the last year or so, the gap seems to have widened. I'm curious whether this is my perception or validated by some metric.

msgodel · 3h ago
Already today I can use aider with qwen3 for free but have to pay per token to use it with any of the commercial models. The flexibility is worth the lower performance.
QRY · 2h ago
Do you have anything to share on that workflow? I've been trying to get a local-first AI thing going, could use your insights!
msgodel · 2h ago
It's super easy. I already had llama.cpp/llama-server set up for a bunch of other stuff and actually had my own homebrew RAG dialog engine, aider is just way better.

One crazy thing is that since I keep all my PIM data in git in flat text I now have essentially "siri for Linux" too if I want it. It's a great example of what Karpathy was talking about where improvements in the ML model have consumed the older decision trees and coded integrations.

I'd highly recommend /nothink in the system prompt. Qwen3 is not good at reasoning and tends to get stuck in loops until it fills up its context window.

My current config is qwen2.5-coder-0.5b for my editor plugin and qwen3-8b for interactive chat and aider. I use nibble quants for everything. 0.5b is not enough for something like aider, 8b is too much for interactive editing. I'd also recommend shrinking the ring context in the neovim plugin if you use that since the default is 32k tokens which takes forever and generates a ton of heat.
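For anyone wanting to reproduce a setup like this, the rough shape is below. Model filenames, ports, and flags are illustrative and vary across llama.cpp and aider versions, so treat this as a sketch rather than exact commands:

```shell
# Serve a local quantized model via llama.cpp's OpenAI-compatible server.
# The GGUF filename is illustrative; use whatever quant you downloaded.
llama-server -m qwen3-8b-q4_0.gguf --port 8080 -c 8192

# Point aider at the local endpoint instead of a commercial API.
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=local   # dummy value; the local server ignores it
aider --model openai/qwen3-8b
```

The same endpoint can back editor plugins and homebrew RAG scripts, which is how the "siri for Linux" setup above works.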

QRY · 58m ago
I really appreciate you going into such technical specificity, thank you! I'll have to steal that siri for linux setup, that sounds awesome. Exploring ways to make use of compute people have lying around to do useful things without the vendor dependencies. But I'm relatively new to the AI scene, so your input really boosts my learning speed, thank you again!
no_wizard · 3h ago
This was how early browsers felt too, the open source browser engines were slower at adapting than the ones developed by Netscape and Microsoft, but eventually it all reversed and open source excelled past the closed source software.

Another way to put it is that you see this over time: it usually takes a little while for open source projects to catch up, but once they do, they gain traction quite quickly over their closed source counterparts.

umeshunni · 36m ago
That's a good analogy. Makes a lot of sense. The only caveat I see is that there is a lot of context locked up in proprietary data sets (e.g. YT, books, podcasts) and I'm not sure how OSS models get access to that.
tayo42 · 2h ago
Those were way simpler projects in the beginning when that happened. Like, do you think a new browser would catch up to Chrome today?
no_wizard · 2h ago
The tech behind LLMs has been open source for a very long time. Look at DeepSeek and LLAMA for example. They aren’t yet as capable as say Gemini but they aren’t “miles behind” either, especially if you know how to tune the models to be purpose built[0].

The time horizons will be different as they always are, but I believe it will happen eventually.

I’d also argue that browsers got complicated pretty fast, long cry from libhtml in a few short years.

[0]: of which I contend most useful applications of this technology will not be the generalized ChatGPT interface but specialized highly tuned models that don’t need the scope of a generalized querying

amelius · 1h ago
Does it say anything about how this will affect wealth distribution?
fenghorn · 3h ago
First time using NotebookLM and it blew my mind. I pasted in the OP's transcription of the talk into NotebookLM and got this "podcast": https://notebooklm.google.com/notebook/5ec54d65-f512-4e6c-9c...
steveBK123 · 3h ago
This sounds like an informercial
jckahn · 2h ago
How so? That sounds like a completely realistic use of NotebookLM.
jolan · 2h ago
I think he meant the audio output itself sounds like an infomercial.

https://www.youtube.com/watch?v=gfr4BP4V1R8

steveBK123 · 2h ago
Correct, yes.

That NotebookLM podcast was like the most unpleasant way I can imagine to consume content. Reading transcripts of live talks is already pretty annoying because they're less concise than the written word. Having it re-expanded by robot voice back into audio to be read to me just makes it even more unpleasant.

Also sort of perverse we are going audio->transcript->fake audio. "YC has said the official video will take a few weeks to release," - I mean shouldn't one of the 100 AI startups solve this for them?

Anyway, maybe it's just me.. I'm the kind of guy that got a cynical chuckle at the airport the other week when I saw a "magazine of audio books".

jasonjmcghee · 2h ago
I have the same perspective - to such a degree that any time I see someone post a notebooklm I wonder if it's paid advertising. Every time I've tried it on something like a whitepaper etc. it just makes stuff up or says things that are kind of worthless. Reminds me of ChatGPT 3.5 in terms of quality of the presented content.

The voices sounded REALLY good the first time I used it. But then sounded exactly the same every time after that and became underwhelmed.

Velorivox · 1h ago
During the early days of NotebookLM, I got it to generate this interesting look behind the scenes of how the script is made:

https://vocaroo.com/1nZBz5hdjwEh

As a bonus, it's hilarious in its own right.

steveBK123 · 2h ago
Just the tone, I mean I listen to podcasts a bit but.. yuck. I'd rather just read than listen to this.
nico · 2h ago
Thank you for putting it together, it was pretty good - 27m:12s
sensanaty · 58m ago
Christ I'm gonna be forced to listen to the moronic managers and C-suites repeat this "software 3.0" bullshit incessantly from now on aren't I...
zkmon · 59m ago
I'm not an expert on the subject itself, but I can tell that the transcript, in its entirety, is missing a solid through-line. While the parts of this talk are great on their own, I feel they couldn't stitch the whole story together well. And he might not be confident of the completeness and composition of his thought. What's the whole point? That should be answered in the first few minutes.
msgodel · 3h ago
This is almost exactly what I've experienced with them. It's a great talk, I wish I could have seen it in person.
bredren · 2h ago
Anyone know what "oil bank" was in the actual talk?
yapyap · 1h ago
SUS talk

great name already

uncircle · 1h ago
AI sus talk. Kinda appropriate.
alganet · 2h ago
> imagine changing it and programming the computer's life

> imagine that the inputs for the car are on the bottom, and they're going through the software stack to produce the steering and acceleration

> imagine inspecting them, and it's got an autonomy slider

> imagine works as like this binary array of a different situation, of like what works and doesn't work

--

Software 3.0 is imaginary. All in your head.

I'm kidding, of course. He's hyping because he needs to.

Let's imagine together:

Imagine it can be proven to be safe.

Imagine it being reliable.

Imagine I can pre-train on my own cheap commodity hardware.

Imagine no one using it for war.

serial_dev · 2h ago
I tried to imagine all that he described and felt literally nothing. If he wants to hype AI, he should find his Steve Jobs.
msgodel · 2h ago
It was easy for me to see and it's incredible. Maybe I should be launching a startup.
Henchman21 · 2h ago
If I’m going to be leaning on my imagination this much I am going to imagine a world where the tech industry considers at great length whether or not something should be built.
alganet · 2h ago
Let me be clear about what I think: I have zero fear of an AI apocalypse. I think the fear is part of the scam.

The danger I see is related to psychological effects caused by humans using LLMs on other humans. And I don't think that's a scenario anyone is giving much attention to, and it's not that bad (it's bad, but not end-of-the-world bad).

I totally think we should all build it. To be trained from scratch on cheap commodity hardware, so that a lot of people can _really_ learn it and quickly be literate on it. The only true way of democratizing it. If it's not that way, it's a scam.

Henchman21 · 25m ago
I think we fear the same things, roughly. My fear is what humans do with any tech we create. Where we differ is that I would prefer things not be built. After a lifetime of watching what we actually do with the things we create, I think we need a bit of a “time out”. Just my 2 cents that won’t change a thing!
jimmy76615 · 3h ago
The talk is still not available on YouTube? What takes them so long?
layer8 · 3h ago
Apparently AI doesn’t make you a 10x YouTube releaser. ;)
datameta · 2h ago
Certainly it can make you a 10x video releaser, but not a 10x Youtuber.
romain_batlle · 2h ago
The analogy with the grid seems pretty good. The fab one seems bad tho.
kaladin-jasnah · 2h ago
Tangentially related, but it boggles my mind this guy was badmephisto, who made a quite famous cubing tutorial website that I spent plenty of time on in my childhood.
Frummy · 2h ago
Totally not a supervillain

"Q: What does your name (badmephisto) mean?

A: I've had this name for a really long time. I used to be a big fan of Diablo2, so when I had to create my email address username on hotmail, i decided to use Mephisto as my username. But of course Mephisto was already taken, so I tried Mephisto1, Mephisto2, all the way up to about 9, and all was taken. So then I thought... "hmmm, what kind of chracteristic does Mephisto posess?" Now keep in mind that this was about 10 years ago, and my English language dictionary composed of about 20 words. One of them was the word 'bad'. Since Mephisto (the brother of Diablo) was certainly pretty bad, I punched in badmephisto and that worked. Had I known more words it probably would have ended up being evilmephisto or something :p"

bgwalter · 1h ago
> Since Mephisto (the brother of Diablo)

Unbelievable. Perhaps some techies should read Goethe's Faust instead of Lord of the Rings.

bluefirebrand · 46m ago
He's talking about videogame characters

If you want to scoff at anyone, scoff at 1990s Blizzard Entertainment for using those names in that way

debugnik · 45m ago
Tell that to the writers of Diablo. In the context of the game, those two characters are brothers.
mattlangston · 2h ago
Very nice find @pudiklubi. Thank you.
ath3nd · 38m ago
I find it hard to care for the marginal improvements in a glorified autocomplete that guzzles a shit ton of water and electricity (all of which could be used for more useful things than generating a picture of a cat with human hands or some lazy rando's essay assignment) and then ends up having to be coddled by a real engineer into a working solution.

Software 2.0? 3.0? Why stop there? Why not software 1911.1337? We went through crypto, NFTs, web3.0, now LLMs are hyped as if they are frigging AGI (spoiler, LLMs are not designed to be AGI, and even if they were, you sure as hell won't be the one to use them to your advantage, so why are you so irrationally happy about it?).

Man this industry is so tiring! What is the most tiring is the dog-like enthusiasm of the people who buy it EVERY.DAMN.TIME, as if it's gonna change the life of most of them for the better. Sure, some of these are worse and much more useless than others (NFTs), but at the core of all of it is this cult-like awe we as a society have towards figures like the Karpathys, Musks and Altmans of this world.

How are LLMs gonna help society? How are they gonna help people work, create and connect with one another? They take away the joy of making art, the joy of writing, of learning how to play a musical instrument and sing, and now they are coming for software engineering. Sure, you might be 1%/2% faster, but are you happier, are you smarter (probably not: https://www.mdpi.com/2076-3417/14/10/4115)?

iLoveOncall · 44m ago
Just a grifter grifting.

> The more reliance we have on these models, which already is, like, really dramatic

Please point me to a single critical component anywhere that is built on LLMs. There's absolutely no reliance on models, and ChatGPT being down has absolutely no impact on anything beside teenagers not being able to cheat on their homeworks and LLM wrappers not being able to wrap.

Aeolun · 39m ago
Well, you have all these social security programs using them to decide whether people should be investigated. That’s pretty nasty. I can totally see them not processing any applications if the model is down.
lvl155 · 3h ago
I soak up everything Andrej has to say.
koakuma-chan · 1h ago
Andrej is the Dan Abramov of AI.
swah · 4h ago
swyx · 3h ago
thanks - i've now also updated the powerpoint with matched transcript to slides - so we are now fully confident in the slide order and you can basically watch the talk with slides
pudiklubi · 3h ago
haha, love how we have the two pieces of the puzzle. we should merge!
adamnemecek · 3h ago
AGI = approximating partition function. Everything else is just a poor substitute.
yusina · 1h ago
> I think broadly speaking, software has not changed much at such a fundamental level for 70 years.

I love Andrej, but come on.

Writing essentially punch cards 70 years ago, writing C 40 years ago and writing Go or Typescript or Haskell 10 years ago, these are all very different activities.

TeMPOraL · 1h ago
Nah, not much changed in the past 40-50 years; between the two, Lisp and Smalltalk spearheaded pretty much all the stuff that was added to other programming languages in subsequent decades, and some of the things yet to be added.

The main thing that changed about programming is the social/political/bureaucratic side.

Aeroi · 2h ago
TL;DR: Karpathy says we’re in Software 3.0: big language models act like programmable building blocks where natural language is the new code. Don’t jump straight to fully autonomous “agents”—ship human-in-the-loop tools with an “autonomy slider,” tight generate→verify loops, and clear GUIs. Cloud LLMs still win on cost, but on-device is coming. To future-proof, expose clean APIs and docs so these models (and coming agents) can safely read, write, and act inside your product.
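The "autonomy slider" idea can be sketched as follows. This is a hypothetical toy, not anything from the talk: `generate` and `human_verify` stand in for an LLM call and a human reviewer, and the slider simply decides how many low-confidence candidates get routed through the human before being accepted.

```python
# Toy sketch of a human-in-the-loop generate -> verify loop with an
# "autonomy slider". All names (generate, human_verify, run) are
# illustrative stand-ins, not a real API.

REVIEW_LOG = []  # records which candidates a human was asked to check

def generate(task: str) -> list[tuple[str, float]]:
    """Stand-in for an LLM producing (candidate, confidence) pairs."""
    return [("patch-1", 0.9), ("patch-2", 0.5), ("patch-3", 0.2)]

def human_verify(candidate: str) -> bool:
    """Stand-in for the human reviewer (always approves here)."""
    REVIEW_LOG.append(candidate)
    return True

def run(task: str, autonomy: float) -> list[str]:
    """autonomy=0.0 reviews everything; autonomy=1.0 is fully autonomous."""
    threshold = 1.0 - autonomy
    accepted = []
    for candidate, confidence in generate(task):
        # High-confidence output is auto-accepted; the rest goes to a human.
        if confidence >= threshold or human_verify(candidate):
            accepted.append(candidate)
    return accepted
```

Sliding `autonomy` from 0 to 1 shrinks the human's share of the verify loop without changing the pipeline, which is roughly the migration path the talk argues for.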

sammcgrail · 1h ago
You’ve got “two bars” instead of “two rs” in strawberry
pudiklubi · 1h ago
nice catch! the original transcript kept saying dogs instead of docs. that's the only thing i fixed (until your r's find just now) after laughing at it for a while
computator · 1h ago
I was going to ask what this meant about strawberries:

> LLMs make mistakes that basically no human will make, like, you know, it will insist that 9.11 is greater than 9.9, or that there are two bars of strawberry. These are some famous examples.

But you answered it: It’s a stupid mistake a human makes when trying to mock the stupid mistakes that LLMs make!

pera · 3h ago
Is "Software 3.0" somehow related to "Web 3.0"?
pudiklubi · 3h ago
No – for more context you can check out Karpathy's original essay from 2017: https://karpathy.medium.com/software-2-0-a64152b37c35
fhd2 · 3h ago
Pure coincidence, I'm sure :)
rvz · 34m ago
No. But it makes no difference: both of them are grifts in different ways.

One bundles "AGI" with broken promises and bullshit claims of "benefits to humanity" and "abundance for all" when at the same time it takes jobs away with the goal of achieving 10% global unemployment in the next 5 years.

The other is an overpromised scam wrapped up in worthless minted "tokens" on a slow blockchain (Ethereum).

Terms like "Software 3.0", "Web 3.0" and even "AGI" are all bullshit.

lcnPylGDnU4H9OF · 3h ago
No, they are totally unrelated. Web 3.0 is blockchain-backed web applications (rather than proprietary server-backed web applications, which is Web 2.0) and Software 3.0 is LLM-powered agents.
uncircle · 1h ago
They are related in that they are pure marketing buzzwords to build enormous hype around a product, if not a dream.
lcnPylGDnU4H9OF · 9m ago
That does not describe Karpathy’s use of Software 3.0. He is referring to LLM agents with that term, which is agnostic to LLM model, whether third-party-hosted or self-hosted.
knowaveragejoe · 3h ago
What was Software 2.0?
Karrot_Kream · 2h ago
It's using NNs or ML models which are given datasets and learn using those datasets. https://karpathy.medium.com/software-2-0-a64152b37c35

If you read the talk you can find out this and more :)
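The Software 1.0 vs 2.0 distinction can be shown in a few lines. This is an illustrative toy (not from the essay or the talk): in 1.0 a human writes the rule; in 2.0 the "program" is a parameter recovered from labeled data.

```python
# Illustrative contrast between Software 1.0 and 2.0. All names and the
# midpoint "training" rule are made up for the example.

def classify_1_0(x: float) -> int:
    # Software 1.0: a human hardcodes the threshold explicitly.
    return 1 if x > 5.0 else 0

def fit_2_0(data: list[tuple[float, int]]) -> float:
    # Software 2.0: the behavior is found from labeled examples.
    # Here the "learned program" is just the midpoint between the
    # largest negative example and the smallest positive one.
    pos = min(x for x, y in data if y == 1)
    neg = max(x for x, y in data if y == 0)
    return (pos + neg) / 2

data = [(2.0, 0), (4.0, 0), (6.0, 1), (9.0, 1)]
threshold = fit_2_0(data)  # learned from data, not written by hand

def classify_2_0(x: float) -> int:
    return 1 if x > threshold else 0
```

Both classifiers behave the same on this data; the difference is where the decision rule came from, which is the essay's point.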

msgodel · 2h ago
Early application-specific ML models; Software 1.0 is normal programs.
jckahn · 2h ago
And what was Software 1.0?
lcnPylGDnU4H9OF · 2h ago
Trained Neural Networks.
gooseus · 1h ago
But at what cost? And I don't mean the "human cost", I mean literally, how much will it cost to use an LLM as your "operating system"? Correct me if I'm wrong here, but isn't every useful LLM being operated at a loss?
ath3nd · 29m ago
That's part of the rug pull.

They want to onboard as many people on their stuff and make them as dependent on it as possible, so the switching costs are higher.

It's the classic scam. Look at what Meta are doing now that they reached end of the line and are trying to squeeze out people for profitability:

- Bringing Ads to WhatsApp: https://apnews.com/article/whatsapp-meta-advertising-messagi...

- Desperately trying by any illegal means possible to steal your data: https://localmess.github.io/

- Firing all the people who built their empire: https://www.thestreet.com/employment/meta-rewards-executives...

- Enabled ethnic cleansing in multiple instances: https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...

If you can't see the total moral bankruptcy of Big Tech, you gotta be blind. Don't Be Evil my ass. To me, LLMs have only one purpose: dumb down the population, make people doubt what's real and what's not, and enrich the tech overlords while our societies drown in the garbage they create.

snickell · 57m ago
If you want to try what Karpathy is describing live today, here's a demo I wrote a few months ago: https://universal.oroborus.org/

It takes mouse clicks, sends them to the LLM, and asks it to render static HTML+CSS of the output frame. HTML+CSS is basically a JPEG here; the original implementation WAS JPEG, but diffusion models can't do accurate enough text yet.

My conclusions from doing this project and interacting with the result were: if LLMs keep scaling in performance and cost, programming languages are going to fade away. The long-term future won't be LLMs writing code, it'll be LLMs doing direct computation.
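The loop snickell describes can be sketched in a few lines. This is a stubbed approximation of the idea, not the demo's actual code: `llm_render` stands in for a real model call, and the full click history is the only application state the "LLM computer" sees.

```python
# Hypothetical sketch of "LLM as renderer": each input event is appended to
# the history, the model is asked to draw the next frame as static HTML,
# and no application code ever runs. llm_render stubs the model call.

def llm_render(history: list[dict]) -> str:
    """Stand-in for an LLM prompted to emit the next UI frame as HTML+CSS."""
    clicks = len(history)
    return (f"<div><p>You have clicked {clicks} times</p>"
            f"<button id='b'>+</button></div>")

def main_loop(clicks: list[dict]) -> list[str]:
    """Feed each click event to the model and collect the rendered frames."""
    history, frames = [], []
    for click in clicks:
        history.append(click)            # event history is the entire state
        frames.append(llm_render(history))
    return frames

frames = main_loop([{"x": 10, "y": 20, "target": "b"},
                    {"x": 10, "y": 20, "target": "b"}])
```

The real demo would replace `llm_render` with a prompt to a hosted model; the point of the sketch is that "program logic" reduces to the model's next-frame prediction over the event history.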