This post doesn't quite grasp why Meta made a 49% investment instead of an acquisition.
The path Meta chose avoided global regulatory review. The FTC, DOJ, etc., and their international counterparts could have chosen to review and block an outright acquisition. They have no authority to review a minority investment.
Scale shareholders received a financial outcome comparable to an acquisition, and also avoided the regulatory uncertainty that comes with government review.
It was win/win, and there's a chance for the residual Scale company to continue to build a successful business, further rewarding shareholders (of which Meta is now the largest). That's pure wildcard upside and was never the point of the original deal.
sigmoid10 · 1h ago
>They have no authority to review a minority investment.
That's just wrong. Partial acquisitions and minority shareholdings don't allow you to bypass antitrust investigations.
See 15 U.S.C. §18 for example. It is similar in the EU.
linotype · 1h ago
The current US administration allows you to bypass US antitrust regulations.
sigmoid10 · 1h ago
But not because of majority/minority reasons as the other comment implies. It would be utterly ridiculous if you could just buy 49% of each of your competitors without any possibility for the government to interfere.
linotype · 1h ago
Utterly ridiculous is where we’re at right now.
sigmoid10 · 1h ago
In the US perhaps, but elsewhere antitrust laws still have some meaning.
theahura · 1h ago
I agree that this is a possible reason. Meta wants to move fast and M&A is too slow for their tastes. I make the case in the article, though, that an actual acquisition doesn't really make sense for Meta's core business, but I agree it's possible.
I disagree that this is a win/win. Scale stock is still illiquid, and people who remain at Scale or hold Scale stock are now stuck with shares that are actually less valuable -- even though the market price has ostensibly gone up, the people who made the stock valuable are gone.
aprilthird2021 · 1h ago
Reportedly, the only reason they did this at all is that Wang asked to get a return for investors and employees. It was not Meta who wanted to own any part of Scale (which makes sense too; they are already users of Scale data and don't really need anything they'd get from owning Scale as a company/business).
duxup · 3h ago
I guess OpenAI got a good deal at $6.5B for Jony Ive.
>Supposedly, Llama 4 did perform well on benchmarks, but the user experience was so bad that people have accused Meta of cooking the books.
This is one of those things that I've noticed. I don't understand the benchmarks, and my usage certainly isn't as wide-ranging as the benchmarks, but I hear "OMG this AI and its benchmarks" and then I go use it and it's not any different for me ... or I get the usual wrong answers to things I've gotten wrong answers to before, and I shrug.
theahura · 3h ago
I think there's basically the benchmarks, and then "the benchmarks". The former is, like, actual stuff you can quantify and game. And the latter is the public response. The former is a leading indicator but it's not always accurate, and the latter is what actually matters. I knew Gemini 2.5 was better than Claude 3.7 based on the Reddit reaction lol
triknomeister · 1h ago
Any industry that cannot depend on artificial benchmarks, and instead has to depend on user benchmarks, is a labor-intensive industry. If it turns out that LLMs in their current form do depend a lot on manual user benchmarks, then they will always require human workers. Kind of like how AI actually means "Actually Indians."
niuzeta · 1h ago
Just out of curiosity, what were the reactions like from what you saw? I had the opposite take from Reddit, which proved to be incorrect. So I'm just curious how you read the reactions (more correctly than I did) vis-a-vis Reddit.
theahura · 1h ago
Idk just vibes. I haven't joined any subreddits except r/civ. The feed is inscrutable
duxup · 1h ago
Civ could sure use some AI…
duxup · 1h ago
I keep meaning to try Gemini for coding, but honestly they've all gotten reasonably good for my use cases, so I don't think I need to try a lot of new ones / new versions.
paxys · 1h ago
The Llama 4 benchmarks published by Meta were proven to be fake immediately after it launched.
dangus · 2h ago
In retrospect it’s quite amazing how Jony Ive managed to make a superstar career out of almost directly copying Dieter Rams all the time.
I can’t imagine what kind of value he could bring to OpenAI.
The dude ruined Apple laptops for 5 years; he really should be an industry castaway.
krackers · 2h ago
Not just hardware, he ruined UI design as well. Everything iOS 7+ (which has leaked into macOS) is basically his doing, or at least had to be approved by him. Trading Scott Forstall for Jony Ive was probably the biggest blunder Cook made, and it put Apple on its current trajectory of software and UI decline.
blackguardx · 1h ago
I learned to program in C in high school on the original iMacs, and thanks to OS 9's lack of true multitasking and Ive's penchant for removing buttons, anytime our code crashed or hit an infinite loop we had to reach around and unplug the computer from the wall. The single power button on the front was a soft button that required CPU cycles to operate.
There was also a tiny hole in the front that you could insert a paperclip to reset the machine, but unplugging from the wall was faster.
moffkalast · 1h ago
In retrospect it wasn't really that the models were that bad compared to the benchmarks; it was that Meta didn't work with inference engine providers ahead of time to integrate the new architecture, like other teams usually do, so it was even buggier than expected for the first few weeks.
The second problem that compounded it was that both Llama 4 models were too large for 99.9% of people to run, so the only opinion most people had of them was what the few who could load them were saying after release. And they weren't saying good things.
So after inference was fixed the reputation stuck because hardly anyone can even run these behemoths to see otherwise. Meta really messed up in all ways they possibly could, short of releasing the wrong checkpoint or something.
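For a rough sense of why almost nobody could run them: Meta reported roughly 109B total parameters for Scout and 400B for Maverick, and since they're MoE models every expert has to be resident in memory even though only ~17B parameters are active per token. A quick back-of-the-envelope (my numbers, weights only, ignoring KV cache and runtime overhead):

    # Approximate weight memory for the Llama 4 models.
    # Parameter counts are the publicly reported totals; treat them as rough.
    models = {"Llama 4 Scout": 109e9, "Llama 4 Maverick": 400e9}
    bytes_per_param = {"bf16": 2.0, "int8": 1.0, "int4": 0.5}

    for name, params in models.items():
        for dtype, nbytes in bytes_per_param.items():
            print(f"{name:>18} @ {dtype}: ~{params * nbytes / 1e9:,.0f} GB")

Even at 4-bit that's ~55 GB for Scout and ~200 GB for Maverick just for the weights, which is already beyond any single consumer GPU.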
vagabund · 1h ago
I'd push back on a couple things here.
The notion that Scale AI's data is of secondary value to Wang seems wrong: data-labeling in the era of agentic RL is more sophisticated than the pejorative view of outsourcing mechanical turk work at slave wages to third-world workers. It's about expert demonstrations and workflows, the shape of which is highly useful for deducing the sorts of RL environments frontier labs are using for post-training. This is likely the primary motivator.
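To make "the shape of which" concrete, a single record in that kind of dataset plausibly looks something like the sketch below. This is my own guess at a schema (the field names are made up, not anything Scale or a frontier lab has published), but it shows why the data is more than label-the-picture work:

    from dataclasses import dataclass, field

    # Hypothetical schema for one expert demonstration in an agentic RL
    # dataset: a task, the tool calls the expert made, what came back,
    # and a grading signal. Illustrative only.
    @dataclass
    class ToolStep:
        tool: str          # e.g. "python", "browser", "sql"
        arguments: str     # what the expert asked the tool to do
        observation: str   # what the environment returned

    @dataclass
    class Demonstration:
        task_prompt: str                              # problem given to the expert
        steps: list[ToolStep] = field(default_factory=list)
        final_answer: str = ""
        grade: float = 0.0                            # rubric score or preference label

    demo = Demonstration(
        task_prompt="Reconcile these two CSV exports and report mismatches.",
        steps=[ToolStep("python", "load both files with pandas and diff them", "12 rows differ")],
        final_answer="12 mismatched rows, listed below...",
        grade=0.9,
    )

Even stripped of the grading signal, the step structure alone tells you a lot about which tools and environments a lab is post-training against.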
> LLMs are pretty easy to make, lots of people know how to do it — you learn how in any CS program worth a damn.
This also doesn't cohere with my understanding. There are only a few hundred people in the world who can train competitive models at scale, and the process is laden with all sorts of technical tricks and trade secrets. It's what made the DeepSeek reports and results so surprising. I don't think the toy neural network one gets assigned to create in an undergrad course is a helpful comparison.
Relatedly, the idea that progress in ML is largely stochastic and so horizontal orgs are the only sensible structure seems like a weird conclusion to draw from the record. Saying Schmidhuber is a one-hit wonder, or that "The LLM paper was written basically entirely by folks for whom 'Attention is All You Need' is their singular claim to fame," neglects a long history of foundational contributions in the case of the former and misses the prolific contributions of Shazeer in the latter. Alec Radford is another notable omission as a consistent superstar researcher. To the point about organizational structure, OpenAI famously made concentrated bets, contra the decentralized experimentation of Google, and kicked off this whole race. DeepMind is significantly more hierarchical than Brain was, and from comments by Pichai, that seemed like part of the motivation for the merger.
theahura · 51m ago
- Could be wrong about Scale. I'm going off folks I know at client companies and at Scale itself.
- idk, I've trained a lot of models in my time. It's true that there's an arcane art to training LLMs, but it's wrong that this is somehow unlearnable. If I can do it out of undergrad with no prior training and 3 months of slamming my head into a wall, so can others. (Large LLMs are imo not that much different from small ones in terms of training complexity. Tools like torch and libraries like Megatron make these things much easier ofc -- a toy sketch of the kind of loop I mean is at the bottom of this comment.)
- there are a lot of fantastic researchers and I don't mean to disparage anyone, including anyone I didn't mention. Still, I stand by my beliefs on ML. Big changes in architecture, new learning techniques, and training tips and tricks come from a lot of people, all of whom are talking to each other in a very decentralized way.
My opinions are my own, ymmv
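To make the "toy sketch" point above concrete, here's roughly the loop I mean, in plain PyTorch. It's a deliberately tiny autoregressive LM trained on random tokens, not anything production-grade; the genuinely hard parts at frontier scale are distributed parallelism, data pipelines, and stability, which is what Megatron-style libraries exist for:

    import torch
    import torch.nn as nn

    # Toy autoregressive language model + training loop. Random tokens
    # stand in for a real tokenized corpus.
    vocab, d_model, seq_len = 256, 128, 64

    class TinyLM(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(vocab, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.blocks = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, vocab)

        def forward(self, x):
            # Causal mask: each position only attends to earlier positions.
            n = x.size(1)
            mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
            return self.head(self.blocks(self.embed(x), mask=mask))

    model = TinyLM()
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

    for step in range(100):
        batch = torch.randint(0, vocab, (8, seq_len + 1))   # fake token ids
        logits = model(batch[:, :-1])                       # predict next token
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, vocab), batch[:, 1:].reshape(-1)
        )
        opt.zero_grad()
        loss.backward()
        opt.step()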
blobbers · 1h ago
It's an interesting take for a guy whose company labels data.
Is it a hot dog? Yes, yes it is.
14 BILLIES!
sh34r · 1h ago
At least they could pivot Hotdog or Not into a dick-pic classifier, which was less trivial when that show was airing.
$14 billion for a glorified mechanical turk platform is bananas. Between this, the $6 billion Jony Ive acquisition, and a ChatGPT wrapper for doctors called Abridge having a $5 billion valuation, this AI fraud bubble is making pets.com look like a reasonable investment.
KaiserPro · 56m ago
Meta has a culture issue.
There are three centres of "AI" gravity: GenAI, FAIR & RL-R
FAIR is fucked; they've been passed about, from standalone, to RL-R, then to "production" under industrial dipshit Cox. A lot of people have left, or been kicked out. It was a powerhouse, and PSC (the 6-month performance charade) killed it.
GenAI was originally a nice, tight, productive team. Then the Facebook disease of doubling the team every 2 months took over. Instead of making good products and dealing with infra scaling issues, 80% of the staff are trying to figure out what they are supposed to be doing. Moreover, most of the leadership have no fucking clue how to do applied ML. Also, they don't know what the product will be. So the answer is A/B testing whatever coke dream Cox dreamt up that week.
RL-R has the future, but they are tied to Avatars, which is going to bomb. It'll bomb because it's run by a prick who wants perfect rather than deliverable. Moreover, splats perform way better than the dumbarse fully-ML end-to-end system they spent the last 15 billion trying to make.
Then there is the hand interaction org, which has burnt through not quite as much cash as Avatars, but relies on a wrist device that has to be so tight it feels like a fucking handcuff. That and they've not managed to deliver any working prototype at scale.
The display team promised too much and wildly underdelivered, meaning that Orion wasn't possible as a consumer product. Which lets the wrist team off the hook for not having a comfortable system.
Then there is the mapping team, who make research glasses that hoover up any and all personal information with wild abandon.
RL-R had lots of talent. But the "hire to fire" system means that you can't actually do any risky research unless you have the personal favour of your VP. Plus, even if you do perfect research, getting it to product is a nightmare.
paulpauper · 2h ago
Meta is so profitable with its ad network that it can spend tens of billions of dollars on money-losing projects and the stock still keeps going up, and the PE ratio is not that high either. Crazy.
nothrowaways · 2h ago
> Meta is falling behind in the AI race
How so?
analog31 · 1h ago
Same way companies fell behind during the original "dot com" era: By not burning money quickly enough.
Disclosure: I work for one of the companies that fell behind.
triknomeister · 1h ago
Was that the real reason companies failed? By not burning money, instead of burning through it too fast?
jnwatson · 1h ago
Their latest Llama is a couple steps behind the frontier. Google, Anthropic, and OpenAI are all well ahead.
They missed the boat on LLMs and have been playing catch up.
jemmyw · 44m ago
Why should they be in that market at all? I don't understand why we've got to this place where every big tech company needs to have its fingers in every pie or "they're falling behind". AI / LLMs look to be on their way to being infrastructure more than products in and of themselves. A patient huge company could build on top of what others are doing and wait for the market to settle, then buy for vertical integration.
ralusek · 1h ago
They're not even in the same ballpark as the SOTA models, and they're not even in the same ballpark as the SOTA open source models, which was supposed to be their niche.
But even if they were, it's not immediately clear how they plan to make any money from having an open source model. So far, their applications of their AI, i.e. fake AI characters within their social media, are some of the dumbest ideas I've ever heard of.
aprilthird2021 · 1h ago
DeepSeek is a solid competitor in their niche, the open-source LLM niche. Llama 4 didn't impress, so they are very vulnerable even in the free niche they carved out for themselves.
spiderfarmer · 2h ago
How much revenue do they make with it?
nradov · 2h ago
How would you even begin to calculate that? They use AI to make their products more valuable to advertisers, but there's no way to say how much their revenue would decline if they suddenly eliminated AI.
speedgoose · 2h ago
I’m not sure revenue is the best measurement of success.
DataDaemon · 1h ago
Does anyone remember the metaverse? How much money was burnt on that? And now it's AI. In the meantime the stock keeps going up and up.
dangus · 2h ago
The last two paragraphs sure contorted my face into an…interesting expression.
I’m confused at how Zuck has proven himself to be a particularly dynamic and capable CEO compared to peers. Facebook hasn’t had new product success outside of acquisitions in at least a decade. The fact that a newcomer like TikTok came and ate Instagram’s lunch is downright embarrassing. Meta Quest is a cash-bleeding joke of a side quest that Zuck thought justified changing the name of the company.
The kind of customer trustworthiness gap between Meta and competitors like Microsoft, Google, and Amazon is astounding, and I would consider it a major failure by Meta’s management that was entirely preventable. [1]
Microsoft runs their own social network (LinkedIn), and Google reads all your email and searches, yet they are somehow trusted more than twice as much as Meta. This trust gap actively prevents Meta from launching new products in areas that require trustworthiness.
Personally I don’t think Meta is spending $14B to hire a single guy; they’re spending $14B in hopes of having a stake in some other company that can make a successful new product, because by now they know that they can’t have success on their own.
[1] https://allaboutcookies.org/big-tech-trust
I'm a Zuck fan, and I think I could mount a compelling case for that take. But that was not the article to do so. And people can def disagree; there are good arguments on the other side.
paulpauper · 2h ago
> Facebook hasn’t had new product success outside of acquisitions in at least a decade.
You don't need a 'new trick' when the main business is so frigging profitable and scalable. There is this myth that companies need to keep innovating. At a certain point, when the competition is so weak, you don't.
Tobacco stocks have been among the best investments over the past century even though the product never changed. Or Walmart -- it's the same type of store, but scaled huge.
> The fact that a newcomer like TikTok came and ate Instagram’s lunch is downright embarrassing.
Not really. Instagram is more popular than ever and much more profitable. It's not all or nothing. Both sites can coexist and thrive, like Pepsi and Coca-Cola.
dangus · 1h ago
Sure, a lot of companies don’t need to keep innovating.
But still, TikTok passed Instagram active users. This would be like if Universal theme parks surpassed Disney Parks in revenue. Sure, they can both coexist, but it’s an embarrassing reflection of mismanagement when a company that is so far ahead is surpassed by a competitor with better management working with fewer resources.
And there are a lot of examples like IBM, Kodak, and General Electric where the company appears untouchable with no need to innovate until suddenly they really do need to innovate and it all collapses around them.
pavlov · 1h ago
> "TikTok passed Instagram active users. This would be like if Universal theme parks surpassed Disney Parks in revenue"
It's like if a Universal theme park surpassed Disneyland Paris in revenue.
Meta has other properties too.
aprilthird2021 · 1h ago
> Facebook hasn’t had new product success outside of acquisitions in at least a decade.
Wrong metric to evaluate dynamism and capability.
Also, the Ray-Ban Metas have been pretty successful. They consistently sell out.