People are focused on the skills or experience these people must possess. Chances are good that, while they're certainly competitive, what really landed them these positions is connections and the ability to market themselves well.
unyttigfjelltol · 5h ago
I take this as a contrarian signal that Meta has hit serious roadblocks improving their AI despite massive data advantages and are throwing a bunch of "Hail Mary" desperation passes to achieve meaningful further progress.
aleph_minus_one · 3h ago
> I take this as a contrarian signal that Meta has hit serious roadblocks improving their AI despite massive data advantages
Just a thought:
Assuming that Meta's AI is actually good, could it rather be that having access to a massive amount of data does not bring that much business value (in this case, particularly for training AIs)?
Evidence for my hypothesis: if you want to gain deep knowledge of some complicated, specific scientific topic, you typically don't want to read a lot of shallow texts tangentially related to it, but rather the few breakthrough papers and books by the smartest minds who moved the state of the art in the area - or one of the few survey monographs by other highly capable people working in the area, who have a vast overview of how those deep research breakthroughs fit into the grander scheme of things.
master_crab · 2h ago
There’s been a lot of research on the necessity of singular geniuses. The general consensus (from studies on Nobel Prizes and simultaneous patent rates) is that advances tend to be moved by the research community as a whole.
You can get that technical or scientific context for a lot less than $250 million per head.
ignoramous · 3h ago
> Assuming that Meta's AI is actually good. Could it rather be that having access to a massive amount ...
Most would say that, vibe-wise, Llama 4 fell flat in the face of Qwen & friends.
BSOhealth · 5h ago
These figures apply to a very small number of people. The discussion leaves out that frontier AI is being developed by an incredibly small group of extremely smart people who have migrated between big tech, frontier AI labs, and elsewhere.
Yes, the figures are nuts. But compare them to F1 or soccer salaries for top athletes. A single big name can drive billions in that context at least, and much more in the context of AI. $50M-$100M/year, particularly when some or most is stock, is rational.
AIPedant · 5h ago
A very major difference is that top athletes bring in real tangible money via ticket / merch sales and sponsorships, whereas top AI researchers bring in pseudo-money via investor speculation. The AI money is far more likely to vanish.
brandall10 · 4h ago
It's best to look at this as expected value. A top AI researcher has the potential to bring in a lot more $$ than a top athlete, but of course there is a big risk factor on top of that.
AIPedant · 3h ago
The expected value is itself a random variable, there is always a chance you mischaracterized the underlying distribution. For sports stars the variance in the expected value is extremely small, even if the variance in the sample value is quite large - it might be hard to predict how an individual sports star will do, but there is enough data to get a sense of the overall distribution and identify potential outliers.
For AI researchers pursuing AGI, the uncertainty in the distribution itself is arguably even worse than the variance between samples - there's no past data whatsoever to build estimates from; it's all vibes.
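The distinction can be sketched with a toy simulation. The log-normal "career value" distribution and all of its numbers below are purely illustrative assumptions; the point is only that the samples are equally noisy in both cases, yet with lots of history the estimate of the expected value barely moves between trials, while with a handful of samples the estimate itself swings wildly.

```python
import random
import statistics

random.seed(0)

def career_value():
    # Hypothetical outcome of one "star" career, in $M.
    # Log-normal shape and parameters are illustrative assumptions.
    return random.lognormvariate(3.0, 1.0)

def estimate_mean(n):
    # Estimate the expected value from n observed careers.
    return sum(career_value() for _ in range(n)) / n

# Repeat the estimation many times to see how much the estimate ITSELF varies.
with_history = [estimate_mean(5000) for _ in range(50)]   # sports: lots of past data
without_history = [estimate_mean(5) for _ in range(50)]   # AGI researchers: almost none

print("spread of estimates, lots of data:  ", round(statistics.stdev(with_history), 2))
print("spread of estimates, almost no data:", round(statistics.stdev(without_history), 2))
```

Same underlying distribution, same per-sample noise; only the amount of history differs, and that is what makes the "sports" estimate stable.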
brandall10 · 2h ago
We’ve seen $T+ scale impacts from AI over the past few years.
You can argue the distribution is hard to pin down (hence my note on risk), but let’s not pretend there’s zero precedent.
If it turns out to be another winter at least it will have been a fucking blizzard.
AIPedant · 2h ago
The distribution is merely tricky to pin down when looking at overall AI spend, i.e. these "$T+ scale impacts."
But the distribution for individual researcher salaries really is pure guesswork. How does the datapoint of "Attention Is All You Need" fit into this distribution? The authors had very comfortable Google salaries but certainly not 9-figure contracts, and OpenAI and Anthropic (along with NVIDIA's elevated valuation) were founded on their work.
brandall10 · 2h ago
When "Attention Is All You Need" was published, the market as it stands today didn't exist. It's like comparing the pre-Jordan NBA to the post-Jordan NBA. Same game, different league.
I'd argue the top individual researchers figure into the overall AI spend. They are the people leading teams/labs and are a marketable asset in a number of ways. Extrapolate this further outward - why does Jony Ive deserve to be part of a $6B aquihire? Why does Mira Murati deserve to be leading a 5 month old company valued at $12B with only 50 employees? Neither contributed fundamental research leading to where we are today.
jgalt212 · 3h ago
If you imagine hard enough, you can expect anything. e.g. Extraordinary Popular Delusions and the Madness of Crowds
brandall10 · 3h ago
Sure, but the idea these hires could pay out big is within the realm of actual reality, even if AGI itself remains a pipe dream. It’s not like AI hasn’t already had a massive impact on global commerce and markets.
ojbyrne · 2h ago
My understanding is that the bulk of sports revenue comes from television contracts. There has been speculation that this could easily shrink in the future if the charges become more granular and non-sports-watching people stop subsidizing the sports-watching ones. That seems analogous to AI money.
ignoramous · 3h ago
Another major difference is that Big Tech is bigger than these global sporting institutions.
How much revenue does Google make in a day? £700m+.
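That figure roughly checks out as back-of-the-envelope arithmetic. The annual revenue (roughly Alphabet's reported 2024 full-year figure) and the exchange rate used below are assumptions for this sanity check, not sourced from the comment:

```python
# Back-of-the-envelope check of the "£700m+ a day" claim.
annual_revenue_usd = 350e9   # assumed: Alphabet's full-year revenue, ~2024
usd_per_gbp = 1.27           # assumed rough USD/GBP exchange rate

per_day_usd = annual_revenue_usd / 365
per_day_gbp = per_day_usd / usd_per_gbp

print(f"~${per_day_usd / 1e6:.0f}M/day, i.e. ~£{per_day_gbp / 1e6:.0f}M/day")
```

Which lands in the same ballpark as the £700M+ figure above.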
stocksinsmocks · 3h ago
It’s just a matter of taste, but I am pleased to see publicity on people with compensation packages that greatly exceed actors and athletes. It’s about time the nerds got some recognition. My hope is that researchers get the level of celebrity that they deserve and inspire young people to put their minds to building great things.
layer8 · 20m ago
The money these millions come from is already the result of nerds having gotten incredibly rich (i.e. big tech). The recognition is arguably yet to follow.
TrackerFF · 1h ago
Frontier AI that scales – these people all have extensive experience with developing systems that operate with hundreds of millions of users.
Don’t get me wrong, they are smart people - but so are thousands of other researchers you find in academia etc. - difference here is scale of the operation.
magic_man · 4h ago
Top athletes have stats to measure them by. For these researchers, I guess there are papers? But how do you know who did what on a paper with multiple authors? How do you figure out who is Jordan vs. Steve Kerr?
thefaux · 3h ago
Yeah, who knew that Kerr would have the more successful overall career in basketball?
8f2ab37a-ed6c · 2h ago
Is there anything one can do to get in on this? Did I have to be at Stanford getting a PhD 10 years ago, or can I somehow still get on the frontier now as a generic software engineer who's pretty good at learning things, and end up working at one of these labs? Or is it impossible to guess exactly what is going to be desirable a few years from now that might get you in the game at that caliber?
layer8 · 11m ago
If it were possible to guess, enough people would do it to drive the price down to reasonable levels. Unless maybe you believe you are among the top 50 or so people in the world able to do what it takes.
turnsout · 1h ago
I'll give you two completely different and conflicting opinions!
Bear case: No, there's nothing you can do. These are exceptionally rare hires driven by FOMO at the peak of AI froth. If any of these engineers are successful at creating AGI/superintelligence within five years, then the market for human AI engineers will essentially vanish overnight. If they are NOT successful at creating AGI within five years, the ultra high-end market for human AI engineers will also vanish, because companies will no longer trust that talent is the key.
Bull case: Yes, you should go all in and rebrand as a self-proclaimed AI genius. Don't focus on commanding $250M in compensation (although, at only 24, Matt Deitke has been doing AI/ML since high school). Instead, focus on optimizing or changing any random part of the transformer architecture and publishing an absolutely inscrutable paper about the results. Make a glossy startup page that makes some bold claims about how you'll utilize your research to change the game. If you're quick, you can ride the wave of FOMO and start leveling up. Although AGI will never happen, the opportunities will remain as we head into the "plateau of productivity."
master_crab · 1h ago
This is one of those comments that is enjoyably cynical…and conceivably accurate.
smokel · 6h ago
It's similar to how, near the end of a Monopoly game, a player might indiscriminately hand over a stash of $100 bills to acquire Mediterranean Avenue, even though the property is mortgaged.
mathgeek · 6h ago
Which analogy (or analogies) are you going for? The world is about to end, so money is essentially worthless? The players with all the money are going to move on to something else soon? The game ceased to be fun for anyone, so they all want to find other things to do?
I assume you are going for “there are no more useful resources to acquire so those with all the resources overpay just to feel like they own those last few they don’t yet own”.
mark_l_watson · 5h ago
I saw the 'forget about money, move on to other challenges' thing happen about 30 years ago. A childhood friend sold his company for about $300 million (a billion in today's devalued dollars?). My friend and his wife continued to live in the same house. The only things he did were to purchase eight houses for extended family members who didn't own their own homes, get his daughter expensive horseback riding lessons and a horse, and, as he put it, drink more expensive wine with his wife. He did continue to play "The Infinite Game" by staying in the tech industry - it seemed like he loved the game, and the money was only to help other people in his life.
cornfieldlabs · 4h ago
Is that friend Josh Kopelman?
smokel · 5h ago
I was going for irony, not analogy. Unfortunately, even though some incompetent fools think it is, life is not a game.
qgin · 5h ago
I think the idea is the end of the game is nearing (AGI) and specific dollar amounts mean less than the binary outcome of getting there first.
tough · 5h ago
If we get AGI and a post-scarcity age, what makes these people think that reaching AGI will make them kings?
Seems like governments will have something to say about who's able to run that AGI.
GPUs run in datacenters, which exist in countries.
ElevenLathe · 5h ago
Yes but countries are run by governments, which are composed of people, who can be bribed. If you believe that AI will make you the richest person in human history, you presumably can see that the problem of government can be solved with enough money.
walterbell · 27m ago
> the problem of government can be solved with enough money
Tokyo professor and former Beijing billionaire CEO Jack Ma may disagree.
mhb · 5h ago
Presumably they think that whatever chance they have of becoming kings if they get there first is greater than the chance if someone else does. In "we get AGI," "we" is doing all the work.
saubeidl · 5h ago
Capitalism is about to break. The revolution is coming.
Nevermark · 2h ago
If capitalism breaks it will be to the benefit of very few.
Granted, capitalism needs maintenance.
Externalities need to be consistently reflected, so capitalism can optimize real value creation, instead of profitable value destruction. It is a tool that can be very good at either.
Capitalism also needs to be protected from corrupted government by, ironically, shoring up the decentralization of power so critical for democracy, including protecting democracy from capitalism's big money.
(Democracy and capitalism complement each other, in good ways when both operating independently in terms of power, and supportively in terms of different roles. And, ironically, also complement each other when they each corrupt the other.)
jokoon · 5h ago
Meanwhile, I'm not sure that training myself in AI would increase my odds of getting a job.
betaby · 1h ago
Probably not. Definitely not if you live outside of the USA.
noobermin · 28m ago
When the crash hits, it will hit hard.
normie3000 · 5h ago
How can I get one of these jobs? I am currently an OK web dev.
beau_g · 3h ago
These $100mm+ hires are centering divs in flex boxes on the first try. They are simply not like you and me.
ramraj07 · 3h ago
These are not just people with credentials; they are literally some of the smartest people on earth. We normal people cannot and should not think we were just a few decisions away from being there.
lcnPylGDnU4H9OF · 5h ago
Get a PhD in a related field like math or computer science.
seanbarry · 5h ago
And have spent the last 15 years working on the cutting edge of AI research.
kergonath · 3h ago
That is unfortunately far from enough. The majority end up doing ok but nowhere near this much money.
lossolo · 2h ago
There are millions of people with PhDs in math or computer science, and none of them earn that kind of salary. Just like there are Usain Bolts and Michael Phelpses in the world of sports, there are similarly exceptional individuals in every field.
coderatlarge · 5h ago
actually applied math or statistics.
TheAceOfHearts · 5h ago
Wonder what their contracts look like. Are these people gonna be grinding their ass off at the Meta offices working crazy hours? Does Zucc have a strong vision, leadership, and management skills to actually push and enable these people to achieve maximum success? And if so, what does that form of success look like? So far the vision that Zucc has outlined has been rather underwhelming, but maybe the vision which he shares with insiders is different from his public persona.
I can't help but think that the structure of this kinda hints at there being a bit of a scam-y element, where a bunch of smart people are trying to pump some rich people for as much money as possible, with questionable chances of making it back. Imagine that the people on The List already had all the keys needed to build AGI if they put their knowledge together - what action do you think they would take?
walterbell · 4h ago
> Imagine.. had all the keys needed
.. that had already leaked and would later plummet in value.
dekhn · 1h ago
When I was a kid in the 80s I read the book "Hackers" and it describes the most successful people in the industry as having "Croesus" wealth: counted in the tens of millions of dollars.
fidotron · 6h ago
Good for those involved being offered such packages, but it really does raise the question of what exactly those offering them are so afraid of.
For example, Meta seem to be spending so much so they don't later have to fight a war against an external Facebook-as-chatbot style competitor, but it's hard to see how such a thing could emerge from the current social media landscape.
InterviewFrog · 5h ago
Here is the uncomfortable truth. Only a small group of people are capable of operating at an elite level. The talent pool is extremely small and the companies want the absolute best.
It is the same thing in sports. There will only ever be one Michael Jordan, one Lionel Messi, one Tiger Woods, one Magnus Carlsen. And they are paid a lot because they are worth it.
>> Meta seem to be spending so much so they don't later have to fight a war against an external Facebook-as-chatbot style competitor
Meta moved on from Facebook a while back. It has been years since I last logged into Facebook, and hardly anybody I know actually posts anything there. It's a relic of the past.
klabb3 · 2h ago
> Here is the uncomfortable truth. Only a small group of people are capable of operating at an elite level. […] It is the same thing in sports as well.
It’s not just uncomfortable but might not be true at all. Sports is practically the opposite type of skills: easy to measure, known rules, enormous amount of repetition. Research is unknown. A researcher that guarantees result is not doing research. (Coincidentally, the increasing rewards in academia for incrementalist result driven work is a big factor in the declining overall quality, imo.)
I think what's happening is kind of what happened on Wall Street. Those with a few documented successes got disproportionately more business, based in large part on initial conditions and timing.
Not to take away from AI researchers specifically, I’m sure they’re a smart bunch. But I see no reason to think they stand out against other academic fields.
Occam's razor says it's panic in the C-suites, and that they perceive it as an existential race. It's not important whether it actually is one; what matters is that that's how they feel. And they have such enormous amounts of cash that they're willing to place many risky bets at the same time. One of them is hiring/poaching the hottest names.
ofjcihen · 11m ago
While I don’t doubt that these people have great experience and skills what they really have that others don’t is connections and the ability to market themselves well.
TrackerFF · 56m ago
Hot fucking take - but if these 100 (or whatever small number is being thrown around these days) elite researchers disappeared overnight, the world would go on and hardly anyone would notice. New people in the field would catch up, and things would be back up to speed quickly enough.
It is not a question of exquisitely rare intellect, but rather the opportunity and funding/resources to prosper.
MichaelZuo · 5h ago
They just want the best, and they're afraid of having second-raters, B-players, etc., causing a bozo explosion. That seems like all the motivation that's needed.
Why would they need to fear a quasi-Facebook chatbot?
HarHarVeryFunny · 4h ago
Coming from Meta, I have to wonder if the reason for this isn't more down to Zuck's ego and history. He seems to have somewhat lost interest in Facebook, and was previously all-in on the Metaverse as the next big thing, which has failed to take off as a concept. Now he wants to go all-in on "super-intelligence" (which seems to lack ambition - why not "super-duper extra special intelligence"?), with his new vision being smart glasses as the universal AI interface. He can't seem to get past the notion that people want to wear tech on their heads and live in augmented reality.
Anyhow, with the Metaverse a flop, and having apparently self-assessed Meta's current LLM efforts as unsatisfactory, it seems Zuck may want to rescue his reputation by throwing money at his next big gamble to make it a winner. It seems a bit irrational given that other companies, and countries, have built SOTA LLMs without needing to throw NBA/NFL/rockstar money around.
turnsout · 1h ago
This rings true. Zuck wants to go down in the history books like Jobs—as a visionary who introduced technology that changed the world.
He's not there yet, and he knows it. Jobs gave us GUIs and smartphones. Facebook is not even in the same universe, and Instagram is just something he bought. He went all in on the metaverse, but the technology still needs at least 10-15 years to fully bake. In the meantime, there's AGI/super-intelligence. He needs to beat Sam Altman.
The sad thing is, even if he does beat Sam to AGI, Sam will still probably get the credit as the visionary.
lores · 5h ago
Just like in football, buying all the best players pretty much guarantees failure as egos and personal styles clash and take precedence over team achievement. The only reasons one would do that are fear, vanity, and stupidity, and those have to be more important than getting value for the extraordinary amounts of money invested.
HarHarVeryFunny · 4h ago
Yeah, pretty much agree.
The only case where this may have made sense - though more for an individual than a team - is Google's acqui-rehire of Noam Shazeer for $1B. He was one of the original creators of the transformer architecture, had made a number of architectural improvements while at Character.ai, and thus had a track record of being able to wring performance out of it, which at Google scale may be worth that kind of money.
dekhn · 1h ago
Noam was already one of Google's top AI researchers and a personal friend of Jeff Dean (head of Google AI, at least in title). He worked on some of Google's early (~2002) search systems and patented some of their most powerful technologies at the time - ones that were critical in making Google Search a popular, highly profitable product.
MichaelZuo · 2h ago
First-rate A-players are beyond petty ego clashes, practically by definition - otherwise they wouldn't be considered so highly (and would instead fall into the bozo category).
tempodox · 2h ago
To me it's just fascinating to see how much further you can pump up this hype bubble. My popcorn reserves need a refill.
firesteelrain · 6h ago
When will the bubble pop?
mattlondon · 5h ago
When more than 1 company has "AGI", or whatever we're calling it, and people realise it is not just a license to print money.
Some people are rightly pointing out that for quite a lot of things right now we probably already have AGI to a certain extent. Your average AI is way better than the average schmuck on the street in basically anything you can think of - maths, programming, writing poetry, world languages, music theory. Sure there are outliers where AI is not as good as a skilled practitioner in foo, but I think the AGI bar is about being "about as good as the average human" and not showing complete supremacy in every niche. So far the world has been disrupted sure, but not ended.
ASI of course is the next thing, but that's different.
Enginerrrd · 5h ago
I think the AI is only as good as the person wrangling it, a lot of the time. It's easy for really competent people to get an inflated sense of how good the AI is, in the same way that a junior engineer is often only as good as the senior leading them along and feeding them small chunks of work. When led with great foresight, careful calibration, and frequent feedback and mentorship, a mediocre junior engineer can be made to look pretty good too. But take away the competent senior and you're left pretty lacking.
I've gotten some great results out of LLMs, but that's often because the prompt was well crafted and numerous iterations were performed based on my expertise.
You couldn't get that out of the LLM without that person most of the time.
impossiblefork · 3h ago
Nah. The models are great, but the models can also write a story where characters who in the prompt are clearly specified as never having met are immediately addressing each other by name.
These models don't understand anything similar to reality and they can be confused by all sorts of things.
This can obviously be managed and people have achieved great things with them, including this IMO stuff, but the models are despite their capability very, very far from AGI. They've also got atrocious performance on things like IQ tests.
tempodox · 1h ago
“AGI” will be whatever state of the art we have at the time the money runs out. The investors will never admit that they built on sand but declare victory by any means necessary, even if it's hollow and meaningless.
mark_l_watson · 5h ago
Perhaps when society balances benefits of AI against energy and environmental costs? I have worked through two ‘AI winters’ when funding dried up. This might happen again.
I think a possible scenario is that we see huge open source advances in training and inference efficiency that ends up making some of the mega-investments in AI infrastructure look silly.
What will probably ‘save’ the mega-spending is (unfortunately!) the application of AI to the Forever Wars for profit.
chvid · 4h ago
When we start seeing down rounds on OpenAI. OpenAI is currently valued at $300B.
snowstormsun · 5h ago
2027
mr90210 · 5h ago
I think we'd need a major war or a pandemic of sorts, because we have become pretty good at keeping such bubbles inflated.
Whenever and however it comes, it’s going to be a bloodbath because we haven’t had a proper burst since 2008. I don’t count 2020.
nl · 5h ago
Eventually people might consider that just maybe... it's not a bubble...
DonsDiscountGas · 5h ago
2000 was a bubble, and yet the internet continued to eat the world after it popped. I expect we'll see something similar
impossiblefork · 3h ago
There is definitely a bubble, though. Tesla has a market cap 28 times larger than other well-run competitors', for example, and there are a bunch of other firms with similarly crazy numbers.
AI is great and it's the future, and a bunch of people will probably eventually turn it into very powerful systems able to solve industrially important maths and software development problems, but that doesn't mean they'll make huge money from it.
firesteelrain · 5h ago
I just don't think the industry is moatless. Where there is a moat, in my opinion, is air-gapped deployment, because few are pursuing it and not everyone wants their data in the cloud.
mhb · 5h ago
That's a lot of confidence that this is a bubble rather than an existential race. Maybe you're making bank betting that view?
firesteelrain · 3h ago
Not sure what you mean but if I was to invest I would have invested years ago in NVIDIA.
thefaux · 2h ago
Honestly, I think a lot of this is as much marketing as it is about actual value. This helps the industry narrative about how transformative the tech is. These inflated comp packages perfectly match the inflated claims around the tech. "See this tech is so incredible we are paying people 1 BILLION dollars!"
These types of comp packages also seem designed to create a kind of indentured servitude for the researchers. Instead of forming their own rival companies that might actually compete with facebook, facebook is trying to foreclose that possibility. The researchers get the money, but they are also giving up autonomy. Personally, no amount of money would induce me to work for Zuckerberg.
siva7 · 6h ago
Ok people, is this for real? Like, are these detached IC roles, or are these articles talking about executive roles filled by AI researchers?
Avicebron · 6h ago
As far as I know it's only one guy that got this offer, https://mattdeitke.com/ (aside from the others who had the mythical 100 million dollar poaching package).
flappyeagle · 6h ago
Source: I know people who have both accepted and declined 100M+ packages
They are IC roles for the most part
coderatlarge · 5h ago
they are executive roles in the sense that you are required to profitably allocate a scarce, perishable resource (GPU time) that is way more expensive than any regular engineer's time.
flappyeagle · 4h ago
Yeah, you could definitely look at it that way. They are IC roles in the sense that their job is to tell computers what to do, but maybe that's old-fashioned thinking at this point.
mr90210 · 5h ago
Are you aware of the terms of such offers?
I suppose those $100M are spread across years and potentially contingent upon achieving certain milestones.
flappyeagle · 4h ago
Even amongst the packages, there is a range. One example package was $100M guaranteed, up to $250M based on milestones and incentives over five years.
apwell23 · 5h ago
Murati was apparently offered $1B by Meta.
ramesh31 · 3h ago
There has to be more at play here. Was this some kind of acquihire? No 24-year-old in the history of 24-year-olds has been worth $250 million on the basis of intellectual merit alone. Even granting that he is some kind of one-off super genius, no single human is that smart or productive - worth the bankroll of a literal army of PhDs. He has to be bringing more to the table.
soulofmischief · 2h ago
Lol. I was leading development of a project that did everything his project Vy does and much more, before my company experienced a hostile takeover and I was squeezed out so they could pivot to a shitass AI sex-bot company that ultimately ran through our war chest and failed. That was back in 2022-2023.
Maybe I need to get one of these recruitment agents.
dmezzetti · 3h ago
As the margins shrink between the capabilities of each of these models, those who specialize in retrieval / context engineering will be the next frontier. Those who provide the most relevant information to a model will win the day.
ivape · 5h ago
At those prices they have to be hiring god-gifted talent. I can't imagine that being just a regular academic grinder with top grades. A-Rod made $250 million and it was considered huge news.
gosub100 · 5h ago
aww, they can't be sleazy CEO types who "make number go up" for 2-3 years and leave with a golden parachute?
Chances are good that while they’re competitive for sure, what they really have that landed them these positions is connections and the ability to market themselves well.
Just a thought:
Assuming that Meta's AI is actually good. Could it rather be that having access to a massive amount of data does not bring that much of a business value (in this case particularly for training AIs)?
Evidence for my hypothesis: if you want to gain a deep knowledge about some complicated specific scientific topic, you typically don't want to read a lot of shallow texts tangentially related to this topic, but the few breakthrough papers and books of the smartest mind who moved the state of art in the respective area. Or some of the few survey monographs of also highly smart people who work in the respective area who have a vast overview about how these deep research breakthroughs fit into the grander scheme of things.
You can get that technical or scientific context for a lot less than $250 million per head.
Most would say, vibe-wise Llama 4 fell flat in face of Qwen & friends.
Yes, the figures are nuts. But compare them to F1 or soccer salaries for top athletes. A single big name can drive billions in that context at least, and much more in the context of AI. $50M-$100M/year, particularly when some or most is stock, is rational.
For AI researchers pursuing AGI, this variance between distributions is arguably even worse than the distribution between samples - there's no past data whatsoever to build estimates, it's all vibes.
You can argue the distribution is hard to pin down (hence my note on risk), but let’s not pretend there’s zero precedent.
If it turns out to be another winter at least it will have been a fucking blizzard.
But the distribution for individual researcher salaries really is pure guesswork. How does the datapoint of "Attention Is All You Need?" fit in to this distribution? The authors had very comfortable Google salaries but certainly not 9-figure contracts. And OpenAI and Anthropic (along with NVIDIA's elevated valuation) are founded on their work.
I'd argue the top individual researchers figure into the overall AI spend. They are the people leading teams/labs and are a marketable asset in a number of ways. Extrapolate this further outward - why does Jony Ive deserve to be part of a $6B aquihire? Why does Mira Murati deserve to be leading a 5 month old company valued at $12B with only 50 employees? Neither contributed fundamental research leading to where we are today.
How much revenue does Google make in a day? £700m+.
Don’t get me wrong, they are smart people - but so are thousands of other researchers you find in academia etc. - difference here is scale of the operation.
Bear case: No, there's nothing you can do. These are exceptionally rare hires driven by FOMO at the peak of AI froth. If any of these engineers are successful at creating AGI/superintelligence within five years, then the market for human AI engineers will essentially vanish overnight. If they are NOT successful at creating AGI within five years, the ultra high-end market for human AI engineers will also vanish, because companies will no longer trust that talent is the key.
Bull case: Yes, you should go all in and rebrand as a self-proclaimed AI genius. Don't focus on commanding $250M in compensation (although only 24, Matt Deitke has been doing AI/ML since high school). Instead, focus on optimizing or changing any random part of the transformer architecture and publishing an absolutely inscrutable paper about the results. Make a glossy startup page that makes some bold claims about how you'll utilize your research to change the game. If you're quick, you can ride the wave of FOMO and start leveling up. Although AGI will never happen, the opportunities will remain as we head into the "plateau of productivity."
I assume you are going for “there are no more useful resources to acquire so those with all the resources overpay just to feel like they own those last few they don’t yet own”.
seems like governments will have a thing to say about who's able to run that AGI or not.
GPUs run in datacenters, which exist in countries.
Tokyo Professor and former Beijing Billionaire CEO Jack Ma, may disagree.
Granted, capitalism needs maintenance.
Externalities need to be consistently reflected, so capitalism can optimize real value creation, instead of profitable value destruction. It is a tool that can be very good at either.
Capitalism also needs to be protected from corrupted government by, ironically, shoring up the decentralization of power so critical for democracy, including protecting democracy from capitalism's big money.
(Democracy and capitalism complement each other, in good ways when both operating independently in terms of power, and supportively in terms of different roles. And, ironically, also complement each other when they each corrupt the other.)
I can't help but think that the structure of this kinda hints at there being a bit of a scam-y element, where a bunch of smart people are trying to pump some rich people out of as much money as possible, with questionable chances at making it back. Imagine that the people on The List had all the keys needed to build AGI already if they put their knowledge together, what action do you think they would take?
.. that had already leaked and would later plummet in value.
For example, Meta seem to be spending so much so they don't later have to fight a war against an external Facebook-as-chatbot style competitor, but it's hard to see how such a thing could emerge from the current social media landscape.
It is the same thing in sports as well. There will only ever be one Michael Jordan, one Lionel Messi, one Tiger Woods, one Magnus Carlsen. And they are paid a lot because they are worth it.
>> Meta seem to be spending so much so they don't later have to fight a war against an external Facebook-as-chatbot style competitor
Meta moved on from Facebook a while back. It has been years since I last logged into Facebook, and hardly anybody I know actually posts anything there. It's a relic of the past.
It’s not just uncomfortable but might not be true at all. Sports is practically the opposite type of skill: easy to measure, known rules, an enormous amount of repetition. Research is unknown. A researcher that guarantees results is not doing research. (Coincidentally, the increasing rewards in academia for incrementalist, result-driven work are a big factor in the declining overall quality, imo.)
I think what’s happening is kind of what happened on Wall Street. Those with a few documented successes got disproportionately more business, based in large part on initial conditions and timing.
Not to take away from AI researchers specifically, I’m sure they’re a smart bunch. But I see no reason to think they stand out against other academic fields.
Occam’s razor says it’s panic in the C-suites, and they perceive it as an existential race. It’s not important whether it actually is, but rather that’s how they feel. And they have such an enormous amount of cash that they’re willing to play many risky bets at the same time. One of them is to hire/poach the hottest names.
It is not a question of exquisitely rare intellect, but rather the opportunity and funding/resources to prosper.
Why would they fear a quasi-Facebook chatbot?
Anyhow, with the Metaverse as a flop, and apparently having self-assessed Meta's current LLM efforts as unsatisfactory, it seems Zuck may want to rescue his reputation by throwing money at it to try to make his next big gamble a winner. It seems a bit irrational given that other companies, and countries, have built SOTA LLMs without needing to throw NBA/NFL/rockstar money around.
He's not there yet, and he knows it. Jobs gave us GUIs and smartphones. Facebook is not even in the same universe, and Instagram is just something he bought. He went all in on the metaverse, but the technology still needs at least 10-15 years to fully bake. In the meantime, there's AGI/super-intelligence. He needs to beat Sam Altman.
The sad thing is, even if he does beat Sam to AGI, Sam will still probably get the credit as the visionary.
The only case where this may have made sense - but more for an individual rather than a team - is Google's aqui-rehire of Noam Shazeer for $1B. He was the original creator of the transformer architecture, had made a number of architectural improvements while at Character.ai, and thus had a track record of being able to wring performance out of it, which at Google-scale may be worth that kind of money.
Some people are rightly pointing out that for quite a lot of things right now we probably already have AGI to a certain extent. Your average AI is way better than the average schmuck on the street at basically anything you can think of - maths, programming, writing poetry, world languages, music theory. Sure, there are outliers where AI is not as good as a skilled practitioner in some field, but I think the AGI bar is about being "about as good as the average human" and not showing complete supremacy in every niche. So far the world has been disrupted, sure, but not ended.
ASI of course is the next thing, but that's different.
I've gotten some great results out of LLMs, but that's often because the prompt was well crafted, and numerous iterations were performed based on my expertise.
You couldn't get that out of the LLM without that person most of the time.
These models don't understand anything similar to reality and they can be confused by all sorts of things.
This can obviously be managed and people have achieved great things with them, including this IMO stuff, but the models are despite their capability very, very far from AGI. They've also got atrocious performance on things like IQ tests.
I think a possible scenario is that we see huge open source advances in training and inference efficiency that ends up making some of the mega-investments in AI infrastructure look silly.
What will probably ‘save’ the mega-spending is (unfortunately!) the application of AI to the Forever Wars for profit.
Whenever and however it comes, it’s going to be a bloodbath because we haven’t had a proper burst since 2008. I don’t count 2020.
AI is great and it's the future, and a bunch of people will probably eventually turn it into very powerful systems able to solve industrially important maths and software development problems, but that doesn't meant they'll make huge money from that.
These types of comp packages also seem designed to create a kind of indentured servitude for the researchers. Instead of forming their own rival companies that might actually compete with facebook, facebook is trying to foreclose that possibility. The researchers get the money, but they are also giving up autonomy. Personally, no amount of money would induce me to work for Zuckerberg.
They are IC roles for the most part
I suppose those $100M are spread across years and potentially contingent upon achieving certain milestones.
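A minimal sketch of how such a package might amortize. The headline total, four-year vest, milestone fraction, and hit probability below are all hypothetical assumptions, not reported terms:

```python
# Hypothetical: $250M headline package, 4-year vest,
# with some fraction contingent on milestones that may not pay out.
headline_total = 250e6
vest_years = 4
milestone_fraction = 0.5   # assumed share tied to milestones
milestone_hit_prob = 0.5   # assumed chance the milestones are met

guaranteed = headline_total * (1 - milestone_fraction)
expected_total = guaranteed + headline_total * milestone_fraction * milestone_hit_prob
per_year = expected_total / vest_years

print(f"expected ~${expected_total/1e6:.0f}M total, ~${per_year/1e6:.1f}M/yr")
```

The point of the sketch: under assumptions like these, the expected annual payout is a fraction of the headline number, which is how "$100M offers" and much smaller realized comp can both be true.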
Maybe I need to get one of these recruitment agents.
https://nypost.com/2025/08/01/business/meta-pays-250m-to-lur...