I feel like there's a lot of half-truths in the article and some of the comments here.
1. The $100M is a huge number. Maybe there was one person, working on large-scale training or finetuning, who had an offer like this, which surely was high even in base salary (say $1M+), and had a lot of stock and bonus clauses that over 4+ years could work out to a big number. But I don't believe that the average SWE or DE ("staffer") working on the OpenAI ChatGPT UI JavaScript would get this offer.
2. One of the comments here says "[Zuck] has mediocre talent". I worked at Facebook ~10 years ago, it was the highest concentration of talent I've ever seen in my life. I'm pretty sure it's still an impressive set of people overall.
Disclaimer: I don't work in the LLM space, not even in the US tech space anymore.
tyleo · 4h ago
Comments aimed at Zuck’s talent always seem jealous to me. An argument can be made that he lacks a good moral compass using specific public examples but I haven’t seen any similar evidence to argue a lack of talent.
I also know many folks who’ve worked at Meta. Almost all of them are talented despite many working there regretfully.
poisonborz · 1h ago
It's incredible to me that this split between talent and moral values is so widespread. I know this was pre-Cambridge Analytica, but the writing was on the wall, and we see the same with each new tech wave.
Whenever I ask such people, they talk about the incredible perks, stock options, challenges. They do say they are overburdened though.
These are people who would be rich anyway, and could work anywhere, doing much more good.
spacemadness · 41m ago
They’re people who, a lot of the time, would have been on Wall Street in another era. Smart people focused on money.
benrapscallion · 5h ago
They did “pay” $14B to the CEO of ScaleAI. $100M sounds plausible, relatively.
leosanchez · 5h ago
> "[Zuck] has mediocre talent"
I read it as: he is not talented himself. It's not about the talent he employs.
StochasticLi · 3h ago
OpenAI has very few employees, so I think it's possible for the right people. Crazier things have happened.
diziet · 7h ago
Whether this is true or not, it's a clever move to publicize. Anyone being poached from OpenAI by Meta now will feel entitled to ask for a $100m bonus, and will possibly feel underappreciated with only a $20 or $50 million signing bonus.
gondo · 5h ago
This can backfire and work the other way around. Existing employees may try to renegotiate their compensation and threaten to leave.
PunchTornado · 7h ago
20 mil is peanuts. who would accept it?
randomcarbloke · 7h ago
I would, where do I sign
aleph_minus_one · 5h ago
If you have to ask this question, the offer is not for you. :-)
thr0waway001 · 1h ago
How much for a ZJ?
cess11 · 6h ago
It's like twelve life incomes in the US.
> 20000000 / (40 * 40000)
12.5
An obscene amount of wealth.
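The quoted arithmetic checks out. As a quick sketch (the $40k/year income and 40-year working life are the commenter's illustrative assumptions, not sourced figures):

```python
# Sanity check of the figure above: a $20M signing bonus expressed in
# full working-life incomes, using the commenter's assumed numbers.
bonus = 20_000_000          # the "peanuts" signing bonus under discussion
annual_income = 40_000      # assumed US annual income
working_years = 40          # assumed length of a working life

lifetime_income = annual_income * working_years  # $1,600,000
print(bonus / lifetime_income)  # 12.5
```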
Crosseye_Jack · 6h ago
> twelve life incomes in the US
Or 2 trips to the hospital
dan-robertson · 5h ago
Rich people usually have good insurance though.
postalrat · 2h ago
Insurance always takes more than it gives.
InitialLastName · 1h ago
The key thing is that insurance also gets monopsony power over what they pay providers, so they can pay less than the provider would nominally charge.
bloqs · 4h ago
people not completely out of touch with reality
kasperni · 7h ago
Yeah, and everyone will know you did it for the money.
willvarfar · 7h ago
Isn't pretty much everyone working at OpenAI already clearly motivated by money over principle? OpenAI had a very public departure from being for-good to being for-money last year...
gadders · 7h ago
Everyone works for money unless you are refusing to take your salary.
dan-robertson · 7h ago
Lots of people working for AI labs have other AI labs they could work for, so their decisions will be made based on differences of remuneration, expected work/role, location, and employer culture/mission.
The claim above is that OpenAI loses to other labs on most of the metrics (obviously depends on the person) and so many researchers have gone there based on higher compensation.
Retric · 6h ago
Not what the phrase means: when you decide to take a vastly less lucrative offer, you’re working for something other than money.
robertlagrant · 7m ago
How many people do that out of the working population?
matthewmacleod · 6h ago
Obviously this is not the case, and you're deliberately choosing to misunderstand the point.
latexr · 6h ago
> OpenAI had a very public departure from being for-good to being for-money last year...
Were they ever “for good”? With Sam “let’s scam people for their retinas in exchange for crypto” Altman as CEO? I sincerely question that.
There was never a shift, the mask just fell off and they stopped pretending as much.
willvarfar · 5h ago
It was originally called "open" and run as a not-for-profit and a lot of people joined - and even joined the board - on that understanding.
52-6F-62 · 4h ago
It’s not like tech companies have a playbook for becoming “sticky” in people’s lives and businesses by bait and switch.
They still call it “open” by the way. Every other nonprofit is paying equivalent salaries and has published polemics about essentially world takeover, right?
BoorishBears · 6h ago
There are options other than money and virtue signaling for why you'd work a given job.
Some people might just like working with competent people, doing work near the forefront of their field, while still being in an environment where their work is shipped to a massively growing user base.
Even getting 1 of those 3 is not a guarantee in most jobs.
neilv · 5h ago
> There are options other than money and virtue signaling for why you'd work a given job.
Doing good normally isn't for virtue signaling.
BoorishBears · 5h ago
Working at an employer that says they're doing good isn't the same as actually doing good.
Especially when said employer is doing cartoonishly villainous stuff, like bragging about how they'll need to build a doomsday bunker to protect their employees from all the great evi... er, good, their ultimate goal would foist upon the wider world.
52-6F-62 · 4h ago
While your other comment stands, there is no separating yourself from the moral impetus of who you're working for.
If your boss is building a bomb to destroy a major city but you just want to work on hard technical problems and make good money at it, it doesn’t absolve you of your actions.
lm28469 · 5h ago
Are people sacrificing 40 hours of their lives every week to mega corps for anything other than money???
gadders · 7h ago
Imagine!! I would never live down the humiliation of getting a $100m signing bonus (I'd really like the opportunity to try though).
loose-cannon · 7h ago
As opposed to?
BoorishBears · 6h ago
I'm really confused by this comment section: is no one considering the people they'll have to work with, the industry, the leadership, the customers, the nature of the work itself, the skill set you'll be exercising... literally anything other than TC when selecting a job?
I don't get why this is a point of contention, unless people think Meta is offering $100M to a React dev...
If they're writing up an offer with a $100M sign-on bonus, it's going to a person who is making comparable compensation by staying at OpenAI, and likely significantly more should OpenAI "win" at AI.
They're also people who have now been considered to be capable of influencing who will win at AI at an individual level by two major players in the space.
At that point, even if you are money-motivated, being on the winning team is extremely lucrative when winning the race has unfathomable upside. So it's still not worth taking an offer that puts you on a less competitive team.
(In fact it might backfire, since you probably do get some jaded folks who no longer believe in the upside at the end of the race, but who will gladly let someone convert their nebulous OpenAI "PPUs" into cash and Meta stock while they coast.)
pjc50 · 5h ago
> even if you are money motivated, being on the winning team when winning the race has unfathomable upside
.. what sort of valuation are you expecting that's got an expected NPV of over $100m, or is this more a "you get to be in the bunker while the apocalypse happens around you" kind of benefit?
BoorishBears · 4h ago
$100M doesn't just get pulled out of thin air; it's a reflection of their current compensation. It's reasonable that their current TC is probably around 8 figures, with a good portion that will 10x on even the most miserable timelines where OpenAI manages to reach the promised land of superintelligence...
Also at that level of IC, you have to realize there's an immense value to having been a pivotal part of the team that accomplished a milestone as earth shattering as that would be.
-
For a sneak peek of what that's worth, look at Noam Shazeer: founded an AI chatbot app, fought his users on what they actually wanted, and let the product languish... then Google bought the flailing husk for $2.7 billion just so they could have him back.
tl;dr: once you're bought into the idea that someone will win this race, there's no way that the loser in the race is going to pay better than staying on the winning team does.
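The expected-value framing in this thread can be made concrete. A minimal sketch, where every number (win probabilities, equity values, cash amounts) is a hypothetical assumption for illustration, not data from the article:

```python
# Toy expected-value comparison: stay at the presumed front-runner vs.
# take a $100M signing bonus at a less competitive team.
# All figures below are made-up assumptions, purely illustrative.

def expected_value(cash_now: float, p_win: float, equity_if_win: float) -> float:
    """Guaranteed cash plus win-probability-weighted equity payoff."""
    return cash_now + p_win * equity_if_win

# Staying: ~8-figure cash over the period, equity worth $500M if the lab
# "wins" the race, at assumed 40% odds.
stay = expected_value(cash_now=40e6, p_win=0.40, equity_if_win=500e6)

# Leaving: $100M bonus up front, same equity upside but at assumed 5% odds.
leave = expected_value(cash_now=100e6, p_win=0.05, equity_if_win=500e6)

print(f"stay:  ${stay:,.0f}")   # stay:  $240,000,000
print(f"leave: ${leave:,.0f}")  # leave: $125,000,000
```

Under these assumptions the smaller cash package dominates, which is the commenter's point; flip the win probabilities and the $100M offer wins instead.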
StefanBatory · 6h ago
I think most of us work for money ;)
pjc50 · 6h ago
This isn't punk, nobody cares if you're a ""sellout"".
MattPalmer1086 · 1h ago
I believe the Sex Pistols were quite happy to take the man's money! Maybe hippies would have more scruples in that area.
andelink · 35m ago
Ehh. I think much less of people who “sellout” for like $450k TC. It’s so unnecessary at that level yet thousands of people do it. $100M is far more interesting
v5v3 · 7h ago
He said "none of our best people have left" which means some are leaving.
And OpenAI probably had to renegotiate with those who got a $100m offer, so their costs went up.
Suppose it is karma for Zuckerberg: Meta have abused privacy so much that many dislike them and won't work for them out of principle.
martin_a · 6h ago
> And OpenAi probably had to renegotiate with those with a $100m offer so their costs went up.
That sounds like the actual move here: exploding your competitor's cost structure because you're said to pay insane amounts of money to people willing to switch...
On the other hand, people talk. If Meta doesn't actually pay that money, word would probably get around...
highwaylights · 6h ago
Following this logic, if Meta had been offering this money and stops doing so going forward, this article is pretty good cover for reining those costs in (if they wanted to).
drexlspivey · 6h ago
They will just give them more equity which costs nothing
twoodfin · 6h ago
I am nearly certain that’s not how Zuckerberg thinks about equity.
You also have to publicly account for RSUs to the market just like any other expense.
v5v3 · 4h ago
This is an offer made 2 years ago to someone:
> Base salary: $250,000
> Stock: worth $1,500,000 over 4 years
> Total comp projected to cross $1M/year
> He said "none of our best people have left" which means some are leaving.
If you define "best" as "not willing to leave", the statement "none of our best people have left" is actually near to a tautology. :-)
Sol- · 6h ago
I think money and the promise of resources will convince enough qualified people to join Meta, but I guess it doesn't help their recruiting efforts that Zuck seems to have the most dystopian and anti-human AGI vision of all the company heads.
Of course we have good reasons to be cynical about Sam Altman or Anthropic's Dario Amodei, but at least their public statements and blog posts pretend to envision a positive future where humanity is empowered. They might bring about ruinous disruption that humanity won't recover from while trying to do that, but at least they claim to care.
What is Zuckerberg's vision? AI generated friends for which there is a "demand" (because their social networks pivoted away from connecting humans) and genAI advertising to more easily hack people's reward centers.
MaxPock · 5h ago
I think Dario has the most dystopian and anti-people AI vision.
staticman2 · 54m ago
I see no reason to believe anything Altman says, but food for thought:
Is it possible such a bonus, if it exists, would be contingent on Meta inventing AGI within a certain number of years, for some definition of AGI? Or possibly would have some other very ambitious performance metric for advancing the technology?
pjc50 · 6h ago
This is software developing a transfer market like footballers, isn't it? We've still got a long way to catch up with Ronaldo.
In both cases this is driven by "tournament wages": you can't replace Ronaldo with any number of cheaper footballers, because the size of your team is limited and the important metric is beating the other team.
It's also interesting to contrast this with the "AI will replace programmers" rhetoric. It sounds like the compensation curve is going to get steeper and steeper.
blagie · 5h ago
The curve is getting steeper, yes. That's not a contrast to the "AI will replace programmers" rhetoric.
Steeper means: higher at the top, lower at the bottom.
Right now, AI can do the job of a large percentage of programmers at the bottom better than those programmers can. Look up how a disruptive S-curve works. At the end, we may be left with one programmer overseeing an AI "improving" itself. Or perhaps zero. Or perhaps one per project. We don't know yet.
A good analogue is automation: mass-scale manufacturing jobs were replaced by a handful of higher-paid, higher-skilled jobs, and certain career classes disappeared entirely.
dachworker · 5h ago
The "I trained a trillion parameter model" club is a very small club.
lotsofpulp · 2h ago
> We've still got a long way to catch up with Ronaldo.
Pretty sure Alexandr Wang just blew Ronaldo out of the water.
Before that, the WhatsApp/Instagram founders.
wodenokoto · 3h ago
$100m for a staff member sounds crazy, but on the other hand, if they were hired before ChatGPT was released and still have stock vesting, you might need to compensate them $100m just for the stock they'd be walking away from.
Still, though, as far as I know that kind of hiring bonus is unheard of. Surely DeepSeek and Google have shown that the skills of OpenAI employees are not unique, so this must be part of an effort to cripple OpenAI by poaching its best employees.
oersted · 7h ago
They are world-class engineers, of course, but it's always been clear OpenAI's core advantage was simply access to massive amounts of capital without much expectation of a return on investment.
The ML methods they use have always been quite standard, they have been open about that. They just had the gall (or vision) to burn way more money than anyone else on scaling them up. The scale itself carries its own serious engineering challenges of course, but frankly they are not doing anything that any top-of-class CS post-grad couldn't replicate with enough budget.
It's certainly hard, but it's really not that special from an engineering standpoint. What is absolutely unprecedented is the social engineering and political acumen that allowed them to take such risks with so much money, walking that tightrope of mad ambition combined with good scientific discipline to make sure the money wouldn't be completely wasted, and the vision for what was required to make LLMs actually commercially useful (instruction tuning, "safety/censoring"...). But frankly, I really think most of the engineers themselves are fungible, and I say this as an engineer myself.
pu_pe · 7h ago
Raising the bar for salaries this high creates a huge moat for all these massive companies. Meta and OpenAI can afford to pay $100M for 10-20 top employees, but that would consume the entire initial funding round of a startup like Safe Superintelligence from Ilya Sutskever, who raised $2 billion.
PlunderBunny · 7h ago
Which is what these companies want [0]. So, if you don't have a moat, build one!
I think the real breakthroughs will come from some randos or some researchers. I'm not sure throwing huge amounts of money at something is always the solution; otherwise many diseases would have been dealt with already.
spookie · 4h ago
Yup, also somebody with a completely different perspective, not tainted by biases stemming from the wrong incentives.
cs02rm0 · 7h ago
The job market in software seems crazy to me at the moment. It's becoming all or nothing.
v5v3 · 7h ago
Only for the top 1% of AI talent. As it is a limited pool.
namblooc · 3h ago
I was never involved in doing ML myself, even through my CS studies. However, from the outside it looks... not that complicated? How do they justify these salaries? Where do they see it coming back to them in terms of revenue?
anshumankmr · 7h ago
Well, idk... recruiters from orgs have been really active in reaching out of late, anecdotally speaking, but I would not say I am in the top 1% of AI talent.
suyash · 4h ago
I can confirm first hand.
jsnell · 6h ago
I thought that the report that was being screenshotted a few weeks ago on the relative movements of staff between the top AI labs[0] would make for a good companion data point. Except now that I look at it, Meta didn't even make it to the graph :-/
Man did I get some pushback when I said this a week ago. People just really don't want to believe the sums involved here.
an0malous · 5h ago
Is there any reason to think he’s not just lying? His entire track record is riddled with dishonesty, about OpenAI’s mission, about the capabilities of their next AI model, about his own role and financial incentives.
yalogin · 3h ago
So are tens of millions a common sign-on bonus for individual contributors in the AI space?
jrsj · 5h ago
Altman really is a generational bullshit artist. Exaggerating the value of his talent while pretending he hasn't already lost a lot of his most valuable people (he has).
It makes sense he focuses on Meta in this interview -- his other competitors actually have taken some of his top talent and are producing better models than GPT now.
kgh1337 · 6h ago
Is there any provable confirmation around?
quirtenus · 6h ago
Sam Altman is a sociopath who seems to desire only power. He plays teams against each other at the same company. He backstabs people. He lies. He betrays people. He knows how to push levers and exploit relationships and normal human behavior.
I'm disappointed how many people here are accepting it so non-critically. It could be true, but for me, it's very difficult to believe. Are OpenAI staffers really telling Sam Altman what their offers are?
From Bayes' theorem, it is much simpler to assume that Sam is lying to burnish the reputation of his company, as he does every week. From a manipulation point of view, it's perfect: Meta won't contradict it, and nobody from OpenAI can contradict it. It hinders Meta's ability to negotiate because engineers will expect more. It makes OpenAI look great -- wow, everyone loves the company so much that they can't be bought off -- and of course he sneaks in a little revenge jab at the end; he just had to say that, of course, "all the good people stayed". He is disgustingly good at these double meanings: statements that appear innocuous but are actually not.
Being the truth engine for sites like reddit is much more valuable than money
d--b · 7h ago
Whatever Altman says can't be trusted.
lvl155 · 6h ago
Zuck is one weird dude. He has mediocre talent and tries to buy his way out of everything. Meta had AI but he was too stupid to see it. Instead he worked on his stupid VR crap for years. Now he is trying to spend 100x or even 1000x more just to stay relevant. Why would you go work for someone like that?
ramraj07 · 6h ago
What an idiot indeed, buying useless companies like whatsapp and Instagram. Where did that take the org. His stupidity shows clearly - the arrogance to think social media could be monetized. Look at that stupidly low sum of a few hundred billion in revenue. Laughable.
6LLvveMx2koXfwn · 6h ago
You answered your own question :) !
nikanj · 6h ago
Man, I wish I also had that sort of mediocre talent. What has he achieved in his career anyway?
Coffeewine · 6h ago
Plus when you have the kind of money that Zuck has, buying your way out of everything seems to work pretty well.
https://www.linkedin.com/posts/zhengyudian_jobsearch-founder...
https://www.theregister.com/2025/06/13/meta_offers_10m_ai_re...
"Up to"
[0] https://semianalysis.com/2023/05/04/google-we-have-no-moat-a...
[0] https://www.signalfire.com/blog/signalfire-state-of-talent-r...