I feel like there's a lot of half-truths in the article and some of the comments here.
1. The $100M is a huge number. Maybe there was one person, working on large-scale training or finetuning, who had an offer like this, which surely had a high base (say $1M+) plus a lot of stock and bonus clauses that over 4+ years could work out to a big number. But I don't believe that the average SWE or DE ("staffer") working on the OpenAI ChatGPT UI JavaScript would get this offer.
2. One of the comments here says "[Zuck] has mediocre talent". I worked at Facebook ~10 years ago, it was the highest concentration of talent I've ever seen in my life. I'm pretty sure it's still an impressive set of people overall.
Disclaimer: I don't work in the LLM space, not even in the US tech space anymore.
tyleo · 5h ago
Comments aimed at Zuck’s talent always seem jealous to me. An argument can be made that he lacks a good moral compass using specific public examples but I haven’t seen any similar evidence to argue a lack of talent.
I also know many folks who’ve worked at Meta. Almost all of them are talented despite many working there regretfully.
leosanchez · 6h ago
> "[Zuck] has mediocre talent"
I read it as him not being talented himself, not as a comment about the talent he employs.
No comments yet
poisonborz · 2h ago
It's incredible to me that the attitude that talent != moral values is this widespread. I know this was pre-Cambridge Analytica, but the writing was on the wall, and we see the same with each new tech wave.
Whenever I ask such people, they talk about the incredible perks, stock options, challenges. They do say they are overburdened though.
These are people who would be rich anyway, and could work anywhere, doing much more good.
spacemadness · 1h ago
A lot of the time, they're people who would have been on Wall Street in another era. Smart people focused on money.
benrapscallion · 6h ago
They did “pay” $14B to the CEO of ScaleAI. $100M sounds plausible, relatively.
StochasticLi · 4h ago
OpenAI has very few employees, so I think it's plausible for the right ones. Crazier things have happened.
freejazz · 1h ago
> it was the highest concentration of talent
What a waste of a generation
diziet · 8h ago
Whether this is true or not, it's a clever move to publicize. Anyone being poached by Meta from OpenAI now will feel entitled to ask for $100M bonuses, and will possibly feel underappreciated with only a $20M or $50M signing bonus.
gondo · 6h ago
This can backfire and work the other way around. Existing employees may try to renegotiate their compensation and threaten to leave.
PunchTornado · 8h ago
20 mil is peanuts. who would accept it?
randomcarbloke · 8h ago
I would, where do I sign
aleph_minus_one · 7h ago
If you have to ask this question, the offer is not for you. :-)
thr0waway001 · 2h ago
How much for a ZJ?
cess11 · 8h ago
It's like twelve life incomes in the US.
> 20000000 / (40 * 40000)
12.5
An obscene amount of wealth.
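Spelling out the quoted expression above (the 40-year working life and $40,000/year income are the commenter's round numbers, not data from the article):

```python
# The "life incomes" back-of-envelope from the comment above.
# Assumed round numbers: a 40-year working life at $40,000/year.
bonus = 20_000_000    # the $20M signing bonus being discussed
working_years = 40
annual_income = 40_000

lifetime_income = working_years * annual_income  # $1.6M per working life
lifetimes = bonus / lifetime_income
print(lifetimes)  # → 12.5
```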
Crosseye_Jack · 7h ago
> twelve life incomes in the US
Or 2 trips to the hospital
dan-robertson · 7h ago
Rich people usually have good insurance though.
postalrat · 3h ago
Insurance always takes more than it gives.
InitialLastName · 2h ago
The key thing is that insurance also gets monopsony power over what they pay providers, so they can pay less than the provider would nominally charge.
bloqs · 5h ago
people not completely out of touch with reality
kasperni · 8h ago
Yeah, and everyone will know you did it for the money.
willvarfar · 8h ago
Isn't pretty much everyone working at OpenAI already clearly motivated by money over principle? OpenAI had a very public departure from being for-good to being for-money last year...
gadders · 8h ago
Everyone works for money unless you are refusing to take your salary.
dan-robertson · 8h ago
Lots of people working for AI labs have other AI labs they could work for, so their decisions will be made based on differences of remuneration, expected work/role, location, and employer culture/mission.
The claim above is that OpenAI loses to other labs on most of the metrics (obviously depends on the person) and so many researchers have gone there based on higher compensation.
Retric · 7h ago
That's not what the phrase means. When you decide to take a vastly less lucrative offer, you're working for something other than money.
robertlagrant · 1h ago
How many people do that out of the working population?
triceratops · 55s ago
Arguably anyone who's working in something they're "passionate" about.
Retric · 18m ago
Millions take a noticeable pay cut; it suppresses wages in many fields.
It's one of the reasons so many CEOs hype up their impact. SpaceX would have needed far higher compensation if engineers weren't enthusiastic about space, etc.
matthewmacleod · 7h ago
Obviously this is not the case, and you're deliberately choosing to misunderstand the point.
latexr · 7h ago
> OpenAI had a very public departure from being for-good to being for-money last year...
Were they ever “for good”? With Sam “let’s scam people for their retinas in exchange for crypto” Altman as CEO? I sincerely question that.
There was never a shift, the mask just fell off and they stopped pretending as much.
willvarfar · 6h ago
It was originally called "open" and run as a not-for-profit and a lot of people joined - and even joined the board - on that understanding.
freejazz · 1h ago
I'm not sure that's an answer to the question of whether or not it was ever for good
52-6F-62 · 5h ago
It’s not like tech companies have a playbook for becoming “sticky” in peoples’ lives and businesses by bait and switch.
They still call it “open” by the way. Every other nonprofit is paying equivalent salaries and has published polemics about essentially world takeover, right?
freejazz · 1h ago
Who would have believed it in the first place? Not I.
BoorishBears · 7h ago
There are options other than money and virtue signaling for why you'd work a given job.
Some people might just like working with competent people, doing work near the forefront of their field, while still being in an environment where their work is shipped to a massively growing user base.
Even getting 1 of those 3 is not a guarantee in most jobs.
neilv · 6h ago
> There are options other than money and virtue signaling for why you'd work a given job.
Doing good normally isn't for virtue signaling.
BoorishBears · 6h ago
Working at an employer that says they're doing good isn't the same as actually doing good.
Especially when said employer is doing cartoonishly villainous stuff like bragging about how they'll need to build a doomsday bunker to protect their employees from the great evi... er, good that their ultimate goal would foist upon the wider world.
52-6F-62 · 5h ago
While your other comment stands, there is no separating yourself from the moral impetus of who you're working for.
If your boss is building a bomb to destroy a major city but you just want to work on hard technical problems and make good money at it, it doesn’t absolve you of your actions.
lm28469 · 7h ago
Are people sacrificing 40 hours of their lives every week to mega corps for anything other than money???
freejazz · 1h ago
40?!? That's not hardcore at all!
gadders · 8h ago
Imagine!! I would never live down the humiliation of getting a $100m signing bonus (I'd really like the opportunity to try though).
loose-cannon · 8h ago
As opposed to?
BoorishBears · 7h ago
I'm really confused by this comment section. Is no one considering the people they'll have to work with, the industry, the leadership, the customers, the nature of the work itself, the skillset you'll be exercising... literally anything other than TC when selecting a job?
I don't get why this is a point of contention, unless people think Meta is offering $100M to a React dev...
If they're writing up an offer with a $100M sign-on bonus, it's going to a person who is making comparable compensation by staying at OpenAI, and likely significantly more should OpenAI "win" at AI.
They're also people who have now been judged capable of influencing who wins at AI, at an individual level, by two major players in the space.
At that point, even if you are money-motivated, being on the winning team is extremely lucrative when winning the race has unfathomable upside. So it's still not worth taking an offer that puts you on a less competitive team.
(In fact it might backfire, since you probably do get some jaded folks who no longer believe in the upside at the end of the race, but will gladly let someone convert their nebulous OpenAI "PPUs" into cash and Meta stock while they coast.)
pjc50 · 6h ago
> even if you are money motivated, being on the winning team when winning the race has unfathomable upside
.. what sort of valuation are you expecting that's got an expected NPV of over $100m, or is this more a "you get to be in the bunker while the apocalypse happens around you" kind of benefit?
BoorishBears · 6h ago
$100M doesn't just get pulled out of thin air; it's a reflection of their current compensation. It's reasonable that their current TC is probably around 8 figures, with a good portion that will 10x on even the most miserable timelines where OpenAI manages to reach the promised land of superintelligence...
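A rough sketch of the expected-value argument here (every number below is a guess for illustration, not from the article): with 8-figure TC, mostly equity, and a 10x on the equity portion, staying comfortably clears $100M over a standard vesting period:

```python
# Hypothetical stay-vs-leave comparison; all inputs are invented.
current_tc = 10_000_000  # "8 figures" per the comment (assumed)
equity = 8_000_000       # assume ~80% of TC is equity
cash = current_tc - equity
upside = 10              # the "10x" scenario from the comment
years = 4                # a typical vesting horizon (assumed)

stay_value = years * (cash + equity * upside)
print(stay_value)  # 328000000 under these made-up inputs
```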
Also, at that level of IC, you have to realize there's immense value in having been a pivotal part of the team that accomplished a milestone as earth-shattering as that would be.
-
For a sneak peek of what that's worth, look at Noam Shazeer: founded an AI chatbot app, fought his users on what they actually wanted, and let the product languish... then Google bought the flailing husk for $2.7 billion just so they could have him back.
tl;dr: once you're bought into the idea that someone will win this race, there's no way that the loser in the race is going to pay better than staying on the winning team does.
StefanBatory · 7h ago
I think most of us work for money ;)
pjc50 · 7h ago
This isn't punk, nobody cares if you're a ""sellout"".
MattPalmer1086 · 2h ago
I believe the Sex Pistols were quite happy to take the man's money! Maybe hippies would have more scruples in that area.
andelink · 1h ago
Ehh. I think much less of people who “sellout” for like $450k TC. It’s so unnecessary at that level yet thousands of people do it. $100M is far more interesting
v5v3 · 8h ago
He said "none of our best people have left" which means some are leaving.
And OpenAI probably had to renegotiate with those who had a $100M offer, so their costs went up.
I suppose it is karma for Zuckerberg: Meta have abused privacy so much that many people dislike them and won't work for them out of principle.
martin_a · 7h ago
> And OpenAi probably had to renegotiate with those with a $100m offer so their costs went up.
That sounds like the actual move here: exploding your competitor's cost structure because you're said to pay insane amounts of money to people willing to switch...
On the other hand: people talk. If Meta does not actually pay that money, word would probably get around...
highwaylights · 7h ago
Following this logic, if Meta had been offering this money and stops doing so going forward, this article is pretty good cover for reining those costs in (if they wanted to).
drexlspivey · 7h ago
They will just give them more equity which costs nothing
twoodfin · 7h ago
I am nearly certain that’s not how Zuckerberg thinks about equity.
You also have to publicly account for RSUs to the market just like any other expense.
v5v3 · 6h ago
This is an offer made 2 years ago to someone:
> Base salary: $250,000
> Stock: $1,500,000 worth over 4 years
> Total comp projected to cross $1M/year
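Taking the quoted figures at face value, base plus evenly vested stock falls short of $1M/year at grant value, so the projection presumably assumes stock appreciation. A sketch (the figures are from the quote; the appreciation multiple is a guess):

```python
# Figures from the quoted offer; the appreciation multiple is assumed.
base = 250_000
stock_grant = 1_500_000          # grant-date value over 4 years
annual_vest = stock_grant // 4   # 375,000/year

print(base + annual_vest)        # 625000: short of $1M at grant value

# "Projected to cross $1M/year" only works if the stock appreciates;
# a hypothetical 2x over the vesting period gets there exactly:
appreciation = 2
print(base + annual_vest * appreciation)  # 1000000
```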
> He said "none of our best people have left" which means some are leaving.
If you define "best" as "not willing to leave", the statement "none of our best people have left" is actually near to a tautology. :-)
karmakaze · 1h ago
If I were one of the targeted, I'd probably take it as a learning opportunity. Doesn't Yann LeCun work there as the VP and Chief AI Scientist? I don't know many others doing much research other than monetizing LLMs. DeepMind by far would be my (uninformed) first choice.
As for ethics, Meta/FB is disliked but they seem pretty transparent compared to OpenAI [sic].
Sol- · 7h ago
I think money and the promise of resources will convince enough qualified people to join Meta, but I guess it doesn't help their recruiting efforts that Zuck seems to have the most dystopian and anti-human AGI vision of all the company heads.
Of course we have good reasons to be cynical about Sam Altman or Anthropic's Dario Amodei, but at least their public statements and blog posts pretend to envision a positive future where humanity is empowered. They might bring about ruinous disruption that humanity won't recover from while trying to do that, but at least they claim to care.
What is Zuckerberg's vision? AI generated friends for which there is a "demand" (because their social networks pivoted away from connecting humans) and genAI advertising to more easily hack people's reward centers.
MaxPock · 7h ago
I think Dario has the most dystopian and anti-people AI vision.
pjc50 · 7h ago
This is software developing a transfer market like footballers, isn't it? We've still got a long way to catch up with Ronaldo.
In both cases this is driven by "tournament wages": you can't replace Ronaldo with any number of cheaper footballers, because the size of your team is limited and the important metric is beating the other team.
It's also interesting to contrast this with the "AI will replace programmers" rhetoric. It sounds like the compensation curve is going to get steeper and steeper.
blagie · 7h ago
The curve is getting steeper, yes. That's not a contrast to the "AI will replace programmers" rhetoric.
Steeper means higher at the top and lower at the bottom.
Right now, AI can do the job of the bottom large percentage of programmers better than those programmers. Look up how a disruptive S-curve works. At the end, we may be left with one programmer overseeing an AI "improving" itself. Or perhaps zero. Or perhaps one per project. We don't know yet.
Good analogue is automation. Mass-scale manufacturing jobs were replaced by a handful of higher-paid, higher-skilled jobs. Certain career classes disappeared entirely.
dachworker · 6h ago
The "I trained a trillion parameter model" club is a very small club.
lotsofpulp · 3h ago
> We've still got a long way to catch up with Ronaldo.
Pretty sure Alexandr Wang just blew Ronaldo out of the water.
Still, though, as far as I know that kind of hiring bonus is unheard of. Surely DeepSeek and Google have shown that the skills of OpenAI employees are not unique, so this must be part of an effort to cripple OpenAI by poaching their best employees.
oersted · 8h ago
They are world-class engineers of course, but it's always been clear that OpenAI's core advantage was simply access to massive amounts of capital without much expectation of a return on investment.
The ML methods they use have always been quite standard; they have been open about that. They just had the gall (or vision) to burn way more money than anyone else on scaling them up. The scale itself carries its own serious engineering challenges of course, but frankly they are not doing anything that any top-of-class CS post-grad couldn't replicate with enough budget.
It's certainly hard, but it's really not that special from an engineering standpoint. What is absolutely unprecedented is the social engineering and political acumen that allowed them to take such risks with so much money, walking that tightrope of mad ambition combined with good scientific discipline to make sure the money wouldn't be completely wasted, and the vision for what was required to make LLMs actually commercially useful (instruction tuning, "safety/censoring"...). But frankly, I really think most of the engineers themselves are fungible, and I say this as an engineer myself.
pu_pe · 8h ago
Raising the bar for salaries this high creates a huge moat for all these massive companies. Meta and OpenAI can afford to pay $100M for 10-20 top employees, but that would consume the entire initial funding round for startups such as Safe Superintelligence from Ilya Sutskever, who raised $2 billion.
PlunderBunny · 8h ago
Which is what these companies want [0]. So, if you don't have a moat, build one!
I see no reason to believe anything Altman says, but food for thought:
Is it possible such a bonus, if it exists, would be contingent on Meta inventing AGI within a certain number of years, for some definition of AGI? Or possibly would have some other very ambitious performance metric for advancing the technology?
viking123 · 7h ago
I think the real breakthroughs will come from some randos or researchers; I'm not sure throwing huge amounts of money at something is always the solution, otherwise many diseases would have been dealt with already.
spookie · 5h ago
Yup, also somebody with a completely different perspective, not tainted by biases stemming from the wrong incentives.
wodenokoto · 4h ago
$100M for a staff member sounds crazy, but on the other hand, if they were hired before ChatGPT was released and still have stock options vesting, you might need to compensate them $100M just for the stock they'd lose.
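The retention math being gestured at can be sketched like this (every figure below is invented for illustration; none comes from the article):

```python
# Hypothetical buyout of unvested, appreciated equity.
grant_value = 4_000_000   # pre-ChatGPT grant, value at grant date (guess)
appreciation = 30         # company valuation growth since then (guess)
unvested_fraction = 0.5   # two of four years still vesting (guess)

# What the employee walks away from by leaving, and what a poacher
# would roughly need to match:
walk_away_cost = grant_value * appreciation * unvested_fraction
print(walk_away_cost)  # 60000000.0
```

With plausible-looking inputs, matching a hot lab's unvested equity alone lands in the tens of millions before any signing bonus is added.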
lucubratory · 6h ago
Man did I get some pushback when I said this a week ago. People just really don't want to believe the sums involved here.
cs02rm0 · 8h ago
The job market in software seems crazy to me at the moment. It's becoming all or nothing.
v5v3 · 8h ago
Only for the top 1% of AI talent. As it is a limited pool.
namblooc · 4h ago
I was never involved in doing ML myself, even through my CS studies. However, from the outside it looks... not that complicated? How do they justify these salaries? Where do they see it coming back to them in terms of revenue?
psb217 · 32m ago
Most of the people pursued in these "AI talent wars" are folks deeply involved in training or developing infrastructure for training LLMs at whatever level is currently state-of-the-art. Due to the resources required for projects that can provide this sort of experience, the pool of folks with this experience is limited to those with significant clout in orgs with money to burn on LLM projects. These people are expensive to hire, and can kind of run through a loop of jumping from company to company in an upward compensation spiral.
I.e., the skills aren't particularly complicated in principle, but the conditions needed to acquire them aren't widely available, so the pool of people with the skills is limited.
anshumankmr · 8h ago
Well, idk... recruiters from orgs have been really active in reaching out of late, anecdotally speaking, but I would not say I am in the top 1% of AI talent.
suyash · 5h ago
I can confirm first hand.
jsnell · 7h ago
I thought that the report that was being screenshotted a few weeks ago on the relative movements of staff between the top AI labs[0] would make for a good companion data point. Except now that I look at it, Meta didn't even make it to the graph :-/
Is there any reason to think he’s not just lying? His entire track record is riddled with dishonesty, about OpenAI’s mission, about the capabilities of their next AI model, about his own role and financial incentives.
jrsj · 6h ago
Altman really is a generational bullshit artist. Exaggerating the value of his talent while pretending he hasn't already lost a lot of his most valuable people (he has).
It makes sense he focuses on Meta in this interview -- his other competitors actually have taken some of his top talent and are producing better models than GPT now.
yalogin · 4h ago
So are tens of millions a common sign-on bonus for individual contributors in the AI space?
kgh1337 · 7h ago
Is there any provable confirmation of this around?
quirtenus · 7h ago
Sam Altman is a sociopath who seems to desire only power. He plays teams against each other at the same company. He backstabs people. He lies. He betrays people. He knows how to push levers and exploit relationships and normal human behavior.
I'm disappointed how many people here are accepting it so non-critically. It could be true, but for me, it's very difficult to believe. Are OpenAI staffers really telling Sam Altman what their offers are?
From Bayes' theorem, it is much simpler to assume Sam is lying to burnish the reputation of his company, as he does every week. From a manipulation point of view, it's perfect. Meta won't contradict it, and nobody from OpenAI can contradict it. It hinders Meta's ability to negotiate because engineers will expect more. It makes OpenAI look great -- wow, everyone loves the company so much that they can't be bought off -- and of course he sneaks in a little revenge jab at the end, he just had to say that, of course, "all the good people stayed". He is disgustingly good at these double meanings, statements that appear innocuous but are actually not.
Being the truth engine for sites like reddit is much more valuable than money
d--b · 8h ago
Whatever Altman says can't be trusted.
lvl155 · 8h ago
[flagged]
dang · 42m ago
Please don't cross into personal attack, fulminate, or call names on HN. All that is against the site guidelines and, more importantly, the intended spirit of HN.
You may not owe $billionaire-celebrity better, but you owe this community better if you're participating in it.
What an idiot indeed, buying useless companies like WhatsApp and Instagram. Where did that take the org? His stupidity shows clearly: the arrogance to think social media could be monetized. Look at that stupidly low sum of a few hundred billion in revenue. Laughable.
dang · 44m ago
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.
https://www.linkedin.com/posts/zhengyudian_jobsearch-founder...
https://www.theregister.com/2025/06/13/meta_offers_10m_ai_re...
Before that, the WhatsApp/Instagram founders.
"Up to"
[0] https://semianalysis.com/2023/05/04/google-we-have-no-moat-a...
[0] https://www.signalfire.com/blog/signalfire-state-of-talent-r...
https://news.ycombinator.com/newsguidelines.html