This stuff is naive. There's a bunch of people who want a large income (wealth) disparity, and they will fight to preserve it unless you give them an equivalent station in the 'new world'.
But you will still need to sustain ex-workers if they can't get normal jobs, and those same people at the top will not tolerate the taxes required to sustain a basic standard of living for a much wider population. They already can't tolerate the idea of a much smaller population using food assistance or healthcare from the government.
That leads me to think this is not really a visionary statement, but just a signal that Mark isn't intentionally trying to bring about a new dystopia, and here's his proof. And if a dystopia happens to come about, you can't blame him because he had pure intentions; clearly it was everyone else who just didn't agree with him and it's their fault.
Maybe make Meta a not-for-profit and there might be some credibility here.
lvl155 · 22h ago
Last time I commented on Zuck on HN I got a warning. That said, this stuff confirms for the 100th time that he is out of touch with reality. Perhaps that’s why he wanted to make VR work so badly. I think he might try to change Co’s name again since Meta doesn’t fit the bill anymore. How about they buy Intel and reverse merge just for the name?
Mars008 · 3h ago
He needs to keep investors optimistic about Meta. Moonshot projects and fairy tales work if there is nothing else. Musk uses them too when he promotes Tesla.
bamboozled · 2h ago
I've only read about his remarks and it already feels like he's taking a leaf from the Elon Musk "keep investors hyped" playbook.
ta12653421 · 1h ago
this would then be "Mintel" :-D
benterix · 21h ago
> he might try to change Co’s name again since Meta doesn’t fit the bill anymore.
If so, the logical choice would be to change the name from "Meta" to "AGI".
lvl155 · 21h ago
I was thinking they buy Intel and say “we are going to make Intel super. SuperIntel.” Zuck seems to like the term superintelligence over AGI.
whamlastxmas · 21h ago
They’re two separate concepts
deepfriedchokes · 20h ago
I think perhaps it would be useful to completely ignore the nice words people use and just judge everyone based on their behavior.
From Zuckerberg’s behavior, since the beginning, it’s clear what he wants is power, and if you have the kind of mental health disorder where you believe you know better than everyone and deserve power over others, then that’s not dystopian at all.
Everything he says is PR virtue signaling. Judge the man on his actions.
lcnPylGDnU4H9OF · 15h ago
> completely ignore the nice words people use
Kind of an unrelated topic but I'm reminded of a video essay in which the creator talks about this. They put it very kindly, IMO:
> Rich and powerful people have quite a different attitude and approach to truth and lies and games compared to ordinary people.
Which sounds like a really nice way of saying that rich and powerful people are dishonest by ordinary standards.
https://youtu.be/m6lObdE3s10?t=245
>There's a bunch of people who want a large income (wealth) disparity
Apart from you of course, so I'm sure you'd be OK if the government taxed your higher-than-average tech wage till your take-home pay matched that of a train conductor or bus driver, like in Western Europe, and therefore fixed the wage gap you hate so much. Would you like that solution?
Caption this: It's only a problem when the people who earn more than me are greedy, but my greed is fine, it's OK for me to out-earn others because "I've earned it", not like Zuckerberg, he didn't earn it.
benterix · 21h ago
> the government would tax your higher than average tech wage till your take home pay would match that of a train conductor's or bus driver's, like in Western Europe
I live in Europe and earn ca. 6 times more than my friend who is a bus driver in the same city. We both have access to free education and, if we wish, also free healthcare, for which I am paying slightly more, but I really don't mind.
Gud · 20h ago
If you earn 6 times more than your friend who is a bus driver, you live in a place that has an unusually high income disparity for Europe.
fmobus · 19h ago
Tech worker here, I just looked at my total comp, and it's about 4.5x that of a local bus driver. L+1s are probably getting close to 6x.
Gud · 19h ago
What's your location if you don't mind me asking? And did you compare the bus drivers total comp to your total comp? :-)
benterix · 2h ago
I have a good job; most of my friends earn, say, 4x a bus driver's salary.
FirmwareBurner · 16h ago
That doesn't scan. In which Western European country is there that massive a difference?
Either you have a FANG wage or your friend has a poverty wage, because here's how it is in Austria: SW dev wage 3k net/month, bus driver 2.5k. There's no 6x difference here.
So you're proving my point that it works for you when income distribution is not egalitarian because you wouldn't be very happy if you earned the same as your friend.
benterix · 2h ago
My salary is quite good, my friends in the field earn a bit less, but the gap is still there.
To put things in perspective, according to this website[0], bus drivers earn ca. €20 per hour, within a quite limited margin. I don't know if this data reflects reality. However, the data for SWE show a much, much wider margin[1]. So it would make much more sense to compare the medians, and that gives only a 2x difference. A big gap still, but not as enormous as in my case.
[0] https://www.salaryexpert.com/salary/job/bus-driver/germany [1] https://www.levels.fyi/t/software-engineer/locations/germany
You're comparing apples to oranges here. The levels.fyi dataset heavily skews towards FAANGs and big tech, not the entire SW sector of a nation.
Granted, I was also guilty of that since I was comparing the salary of a tram driver for the local state public transport company where they get great benefits and the union always negotiates top salaries since they have a monopoly and can just increase the bus fare to their customers whenever they need a raise.
But my point still stands. Why bother going to uni to become an engineer in the competitive private sector, if you're gonna net only 500 Euros more than a driver or any other government-adjacent union job?
antfarm · 4h ago
I believe that in the long run, that solution would be preferable to the effects of income and wealth disparity on US society.
queenkjuul · 19h ago
I could comfortably live on a bus drivers wage here, obviously I'd prefer they make as much as tech workers, given their job is much harder; your solution is fine with me too, though.
hnthrow90348765 · 22h ago
Sure I would, as long as they tax billionaires even more and guarantee it. I do CRUD app development, I'm not even responsible for anything as potentially dangerous as a train. Superintelligence would very likely take my job anyway, so I won't get taxed for long.
blinkbat · 18h ago
there's a cap on the bracket around 600k. there are people who make many times this and their percentage owed does not go up. they are also uniquely positioned to avoid paying the comparatively low amount they owe. let's start there.
tanduv · 21h ago
perhaps we can create a sort of a bracket that scales based on the income?
DanHulton · 22h ago
Unless you are yourself a robber baron the likes of Zuck, you should look up this little concept called "class solidarity."
FirmwareBurner · 22h ago
I'm just pointing out the hypocrisy here of people seeing greed only in those with more income than them but never in themselves when they accept those generous big-tech, big-finance, big-ad-tech, big-4, big-pharma compensation packages from the evil robber barons they claim to hate. If you hate them so much why are you taking their blood money?
Also, there is no class solidarity the way you imagine it in your fantasy, because to the average person on the street putting the fries in the bag at McD, or stacking shelves at Walmart, or tearing down the roads with a jackhammer in the summer heat, the big-tech worker is closer to the robber baron Zuckerberg than they are to them. So when you get laid off from your big-tech job, they won't have solidarity for you; they might even break a smile as those spoiled, pampered tech workers are brought down from their Kombucha-sipping ivory towers.
Class solidarity, as seen applied in Europe, means bringing the income of tech workers in line with unskilled labor till everyone is equally lower-middle class, not making the super-wealthy robber barons contribute more to society, because no society does that, that's just fantasy. Look at the owner of IKEA's complex tax avoidance scheme: https://www.greens-efa.eu/legacy/fileadmin/dam/Documents/Stu... Do you think he has any class solidarity? He has more in common with Musk, Zuckerberg or Xi Jinping than with his average Swedish countrymen.
The more class solidarity you wish and vote for, the higher the tax burdens will be on skilled and ambitious middle-class workers and small businesses, not on Zuckerberg or the elites with inherited wealth. So be careful what you wish for. My country already went through communism once and everyone had enough of "class solidarity" for the next lifetime, but there are always some Westerners out there who cling to the idea that "this time it will be different". Sure buddy.
einszwei · 21h ago
> the big-tech worker is closer to the robber baron Zuckerberg
A big tech worker earning 200k is closer to a minimum wage worker earning 20k per year than Zuckerberg earning 20M per year with net worth of 200B
bigfishrunning · 19h ago
Yes, but from the point of view of the minimum wage worker, they don't identify with either the engineer or Zuck, so those two are equivalent.
somebehemoth · 17h ago
What now? How would you even begin to know this? It sounds like you are out of touch if you don't think low wage earners can tell the vast difference between normal people and billionaires.
queenkjuul · 19h ago
tech workers are in every literal way closer to construction workers than billionaires, and everyone i know knows it, even the construction workers.
Did you know my construction worker friend actually makes as much as i do? Amazing what class solidarity in the form of unions can achieve, eh?
queenkjuul · 19h ago
The owner of IKEA is literally in the same class as Zuck lmao what are you even on about
sshine · 19h ago
The founder of IKEA, Ingvar Kamprad, passed away in 2018.
IKEA is currently owned by a series of foundations.
On account of Ingvar Kamprad being dead, they're not really in the same class.
Before Ingvar Kamprad passed away, his estimated worth was $42.5B -- $58.7B.
Compared, Zuck's estimated worth is $221.2B -- $247B.
bigfishrunning · 19h ago
And he's got a scheme to avoid taxes, which I think is the takeaway here.
Voloskaya · 22h ago
> We'll need to be rigorous about mitigating these risks and careful about what we choose to open source.
Here we go, predictably pulling the oldest trick in the book, just two weeks after it was reported [1] that the Superintelligence leadership was discussing moving to closed source for their best models, not for any risk mitigation reason, but for competitive reasons.
Also,
> As recently as 200 years ago, 90% of people were farmers growing food to survive. Advances in technology have steadily freed much of humanity to focus less on subsistence and more on the pursuits we choose. At each step, people have used our newfound productivity to achieve more than was previously possible, pushing the frontiers of science and health, as well as spending more time on creativity, culture, relationships, and enjoying life.
Yea, about that... Sure, Mark can choose to just fly to his private Hawaiian island, or his Tahoe bunker, and mess around with the metaverse and AI and whatever he chooses. 99.9% of the population has an old regular job that they go to for subsistence. Michael from North Dakota has not been doing bookkeeping for SMEs because this was always the pursuit of his dreams.
I also see no reason at all to believe we spend more time on creativity, culture, relationships or enjoying life than before. Especially since that last point has been in free fall over the last 50 years by the look of every single mental well-being metric around.
[1]: https://www.nytimes.com/2025/07/14/technology/meta-superinte...
> Here we go, predictably pulling the oldest trick in the book, just two weeks after it was reported [1] that the Superintelligence leadership was discussing moving to closed source for their best models, not for any risk mitigation reason, but for competitive reasons.
That's not pulling a trick, that's doing precisely what Zuck said he would do. In April 2024, on Dwarkesh's podcast, Zuck said that models are a commodity right now, but that if models became the biggest differentiator, Meta would stop open sourcing them.
At the time he also said that the Model itself was probably not the most valuable part of an ultimate future product, but he was open to changing his mind on that too.
You can whine about that anyway, but he's not tricking anyone. He has always been frank about this!
Voloskaya · 18h ago
July 2024:
> Open Source AI is the Path Forward.
> Meta is committed to open source AI. I’ll outline why I believe open source is the best development stack for you, why open sourcing Llama is good for Meta, and why open source AI is good for the world and therefore a platform that will be around for the long term.
> We need to control our own destiny and not get locked into a closed vendor.
> We need to protect our data.
> We want to invest in the ecosystem that’s going to be the standard for the long term.
> There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives.
> I think it will be better to live in a world where AI is widely deployed so that larger actors can check the power of smaller bad actors [...] As long as everyone has access to similar generations of models – which open source promotes – then governments and institutions with more compute resources will be able to check bad actors with less compute.
> The bottom line is that open source AI represents the world’s best shot at harnessing this technology to create the greatest economic opportunity and security for everyone.
> I hope you’ll join us on this journey to bring the benefits of AI to everyone in the world.
> Mark Zuckerberg
Pulling the "Closed source for safety" card, once it makes economic sense for you, after having clearly outlined why you think open source is safer, and how you are "committed" to it "for the long term" and for the "good for the world", is mainly where my criticism is coming from. If he was upfront in the new blog post about closing source for competitive reason, I would still find it a distasteful bait and switch but much less so than trying to just put the safety sticker on it after having (correctly) trashed others for doing so.
> The improvement is slow for now, but undeniable. Developing superintelligence is now in sight.
Oh, is it now? So you know for a fact that intelligence comes from token prediction, do you, Mark?
Look, multi-bit screwdrivers have been improving steadily as well. I've got one that stores all its bits in the handle, and one with over three dozen bits in a handy carrying case! But they're never going to suddenly, magically become an ur-tool, capable of handling any task. They're just going to get better and better as screwdrivers.
(Well, they make a handy hammer in a pinch, but that's using them off-spec. The analogy probably fits here, too, though.)
My POINT, to be crystal clear, is that Mark is saying that A is getting better, so eventually it will turn into B. It's ludicrous on its face, and he deserves the ridicule he's getting in the comments here.
But I also want to go one step further and maybe turn the mirror around a bit. There's also an odd tendency here to do a very similar thing: to observe critical limitations that LLM tools have, that they have always had, and that are very likely baked into the technology and science powering these tools, and then to do the same thing as Mark, to just wave our hands and say "But I'm sure they'll figure it out/fix it/perfect it soon."
I dunno, I don't see it. I think we're all holding incredible screwdrivers here, which are very impressive. Some people are using them to drive nails, which, okay, sure. But acting like a screwdriver will suddenly turn into precision calipers (and a saw, and a level, and...) if we just keep adding on more bits, I think that's just silly.
tim333 · 13h ago
That's not really what he said.
cootsnuck · 6h ago
Isn't it though? He's provided zero evidence to suggest otherwise. So of course we are all going to assume he's talking about the current, popular, SOTA architectures still as the foundational piece.
jameskilton · 22h ago
Facebook / Meta and Mark in particular are an amazing case of someone who is, at least at this point, incapable of learning from past mistakes, or even recognizing the mistakes they have made and are continuing to make.
Facebook's mission of "connecting the world" turned out to be the absolute worst thing anyone should ever try to do. Humans are social creatures, yes, but every connection we make costs energy to maintain, and at a certain point (Dunbar's Number) we apply the minimal amount of energy and effort. With Internet anonymity, that means we are actually incapable of treating each other as people on the Internet, leading to the rise of toxicity and much, much worse.
Mark has never understood this, and as his fortune is built around not understanding this, he never will.
There is nothing good that will come from Meta's "superintelligence" and this vision is proof.
dinfinity · 18h ago
I don't think the "connecting the world" was the problem. IRC also has tons of toxicity and connects people all over the world.
The core problem is gamification of social interaction. The 'Like' button and everything like it for things people say or show is hands down the worst thing to happen on the internet. Everywhere they can, people whore for karma (unless they spend a lot of mental effort to fight back that urge). How primitive the related moderation systems are directly affects how much primitive shit gets rewarded and alas, most moderation systems are ridiculously primitive.
So, dopamine hits for saying primitive shit.
9rx · 21h ago
> that means we are actually incapable of treating each other as people on the Internet
Well, that's because there aren't people on the internet! I mean, yes, us technologists understand that there are often people pulling knobs and levers behind the scenes as an implementation detail, so technically they are there. But they are only implementation details, not what makes it what it is. If you replaced the implementation with another algorithm that functions just as well, nobody would notice. In that sense, it is just software.
> leading to the rise of toxicity and much, much worse.
It is not so much that it has led to anything different, but that those who used to be in the forest yelling at animals as if they were human moved into civilized areas when they started yelling at computers as if they were human. That has taken their mental disorders to where they are much more visible.
mrcwinn · 22h ago
>Personal superintelligence that knows us deeply, understands our goals, and can help us achieve.
>We believe the benefits of superintelligence should be shared with the world as broadly as possible.
So... ads.
ankit219 · 19h ago
That is the model everyone gravitates towards. OpenAI's Fidji also started with a note about how superintelligence is for everyone.
I think it would be back to income-based tiers though. You want more assistance, pay $200 per month. Even more, maybe $2000 (for companies). Then, if you don't want to pay, you get contextual ads (which would work here because LLMs can contextualize far better), and a lower quality of service.
saubeidl · 22h ago
Not just ads. Psyops, a propaganda machine unlike anything the world has ever seen. There's a reason Zuck and the US government are real cozy lately.
pbrum · 6h ago
What if superintelligence isn't even a thing? I was watching an interview with a Chinese-American specialist the other day (I'm sure it's been shared here on HN at some point) and she explained that in the Chinese AI community they don't operate under the assumption that something such as AGI or superintelligence exists, and therefore don't work toward that goal. I'm sure people in this community can comment on this to a much more informed extent than I, though.
fzzzy · 1h ago
Do we have narrow superintelligence in chess and go and jeopardy?
qprofyeh · 22h ago
Reads as RIP Metaverse to me.
Any time a CEO publishes such empty, wordy essays, it's probably earnings-reporting time. I can't shake the feeling it's a public sub-reply to one doubting investor, or a cluster of them, who have started to doubt the CEO's vision for the company, or find the lack of one on a certain topic concerning.
laweijfmvo · 22h ago
FWIW, today is Meta’s earnings report
lvl155 · 21h ago
That’s exactly why he published this. To justify his insane investment spend. I am not sure if investors will continue to give him a blank check. He’s not spending his money. He’s spending shareholder money to pursue personal projects and endeavors.
9rx · 21h ago
> I am not sure if investors will continue to give him a blank check.
What are they going to do, exactly? They explicitly invested in the company knowing that Zuckerberg would retain full control.
If they can show gross negligence there may be a legal avenue, but it would be pretty hard to justify chasing potentially profitable business ventures, even if they end up failing, as being negligence. Controversial business decisions are not negligence in the eye of the law.
Sure, they can sell their interest in the company — if someone else wants to buy it — but that just moves who the investor is around. That doesn't really change anything.
lvl155 · 21h ago
Well, very true. They let him spend $100B-plus on mediocre VR devices that haven't improved all that much under Zuck. I keep thinking: what if they had taken that $100B and invested it in Nvidia?
9rx · 20h ago
Hindsight is 20/20, I suppose. Not much they can do now, though, except sell their interest to someone else. But why would someone else want to buy their interest when that someone else could use the same money to buy a stake in Nvidia instead?
xnx · 15h ago
Looks like after-hours investors like what they see. Stock at all time high.
subpixel · 22h ago
"more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be."
Meanwhile I can't properly find items that are listed on FB marketplace.
orochimaaru · 22h ago
It’s a fluff piece for investors. I think the same may have been said for the metaverse.
aeon_ai · 22h ago
I read Careless People recently.
I don't think the author of that book is unbiased, and after some healthy debate with friends, I imagine there are a number of different perspectives on the facts. But it seems clear that, well before it was public knowledge outside of the company, there was clear visibility inside it of the harms being caused by the platform, and a willingness to ignore them.
Facebook (now Meta) turned human attention into a product. They optimized for engagement over wellbeing and knew that their platforms were amplifying division and did it anyway because the metrics looked good.
It's funny, because I aspire to many of the same things cited in this vision -- helping realize the best in each individual, giving them more freedom, and critically, helping them be wise in a world that very clearly would prefer them not to be.
But the vision is being pitched by the company that already knows too much about us and has consistently used that knowledge for extraction rather than empowerment.
twoodfin · 22h ago
> If trends continue, then you'd expect people to spend less time in productivity software, and more time creating and connecting.
Does the average American worker today spend a ton of time in productivity software?
I know, and Zuckerberg surely knows, that the impact on labor will be much more pervasive than that, so it seems like an odd way to frame the future.
reaperducer · 22h ago
Does the average American worker today spend a ton of time in productivity software?
"Average?" No. But many millions of people, yes.
The majority of people in my company spend their day tied to Microsoft Office.
Which brings its own problems when managers don't understand that building a computer program isn't the same speed, complexity, and skill level as making a PowerPoint presentation.
ctippett · 22h ago
I know your comparison to PowerPoint was probably not meant to be taken literally, but I'll just add that a good presentation takes just as much time, skill and effort as any creative endeavour (including programming).
K0balt · 22h ago
I’d love to see a PowerPoint presentation that has a million man-hours of work in it. Oh, never mind, I probably have.
But seriously, this comment can easily be true, and if it is, then it is an excellent example of a human endeavour that we invented to improve efficiency but that has become a bottomless sink of talent, effort, and cost directed away from generating any value whatsoever.
I have never seen a presentation that couldn't have been done just as well without the use of a computer, except to demonstrate things that are computer related.
Presentations are a great example of an activity that has become an end unto itself that delivers no value, and only serves as a kind of internal preening behaviour, signalling a person's value to the organisation without actually delivering any.
troyvit · 14h ago
I don't know if I can point you to a PowerPoint presentation that did have a million man-hours of work on it, but here's a story about one that certainly didn't:
Communication is communication, be it by PowerPoint or semaphore, and it takes talent to do it right.
> I have never seen a presentation that couldn’t have been done just as well without the use of a computer., except to demonstrate things that are computer related.
Re-reading that, I wonder what would've happened if the Boeing wonks in that meeting had just not brought a presentation. Maybe you're right.
K0balt · 14h ago
Oh crap. I can only imagine how devastating that must have been for the unfortunate person that made that presentation when that finding came out. If it wasn’t such a horrific situation, that sounds comically bad.
I was a contractor for the military for a few years a while back… the military runs on PowerPoint.
I’ve seen weeks poured into a power point presentation whose only point was that regular physical conditioning was beneficial to physical fitness. In the medical equipment maintenance shop.
I attended at least 100 meetings during those 8 years, and there wasn't a single presentation that couldn't have been replaced with a sheet or two of paper and a short lecture/discussion. Instead we got half-hour marvels of editing and animation, often replete with music and video, and I'll admit some of it was really a work of art.
But none of that contributed to the salient points. If anything, a good presentation was a distraction, leading to a ten minute discussion about the technique and tools used to make the presentation lol. There was also an underground market of enlisted presentation gurus that would make presentations in exchange for favors or even for pay, because impressive PowerPoint presentations were considered critical to career advancement for officers.
I often wondered what would have happened if IMD deleted PowerPoint off of all of the machines on the domain lol. Collapse? 10x productivity? Tires burning in the streets? Only one way to know for sure!
reaperducer · 22h ago
> more time creating
Considering that the most common use for "AI" is to take jobs away from creators like artists, musicians, illustrators, writers, and such, I find this statement hard to believe.
So far, all I've seen is AI taking money away from the least-paid workers (artists, et al.) and giving it to tech billionaires.
K0balt · 22h ago
But people have to keep creating to feed the AI! AI is extractive, not creative, so without people toiling away and adding actual creativity, current paradigm AI will become increasingly derivative and uninspiring… so the obvious answer is to put people into nutrient filled VR pods so they can imagine actually new things to power the AI hive-mind.
meindnoch · 22h ago
>and more time creating and connecting.
Creating what? AI slop?
HDThoreaun · 22h ago
I mean yea, Mark's vision here is that genAI creates a special personalized space in his metaverse for literally everyone in the world. Slop for the masses.
adrianbooth17 · 22h ago
Meta's business model requires:
- Maximum data extraction
- Behavioral modification for profit
- Attention capture and addiction maintenance
"Personal superintelligence" serves all three perfectly while appearing to do the opposite.
osti · 19h ago
Looks like no more open source models :(
"We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible."
aschobel · 22h ago
> We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible.
So maybe no more open source because of "safety"?
amradio1989 · 14h ago
Makes you wonder who owns the data that powers a "personal superintelligence that intimately knows all the details about you".
It also makes you wonder what they do with all of that information. But surely this is altruism.
tim333 · 13h ago
Just to take the other side of most of the comments here, I mostly agree with Zuck. Superintelligence is on its way and we will probably all have access, in the same way we can all access ChatGPT and the like. I'm not sure how the "intersection of technology and how people live" will go but it's not a bad thing to work on. I'm not sure Meta is the one to do it though - glad to have other people working on it too.
azinman2 · 22h ago
I don't know who will buy this puff piece of a utopian future brought to you by… Meta? A company filled with scandal after scandal and negative, world-changing behaviors that they don't stop? The ones that want to make money no matter what harm it does? Why would the future with even more powerful technology be any different?
demirbey05 · 22h ago
We haven't even reached AGI, and there's no sign of it with LLMs.
AIorNot · 23h ago
Is he renaming the company again?
steve1977 · 21h ago
From Meta to Super? The next step would be Hyper and we have the modifier keys of the space-cadet keyboard covered.
schmorptron · 22h ago
If the way they are explicitly starting to optimize LLM chats for engagement and friend-like behaviour is anything to go by, he'll just need to switch around some letters to call it Mate
throwmeoutplzdo · 22h ago
Or Meat, if they were completely honest about their users’ role in the equation </compulsory matrix reference>
khelavastr · 23h ago
How is "superintelligence" defined?
criddell · 22h ago
I define it as smarter than humans. The same way a dog will never understand L'Hôpital's rule, there are probably things that human beings will never understand but future AIs will. I'd call those AIs superintelligent.
AIPedant · 20h ago
It is worth emphasizing that dogs wouldn't understand formal calculus even if they were smarter than humans! They are not physiologically capable of doing formal mathematics:
- their eyesight is too poor to read
- their paws are not designed for fine manipulations so they cannot write or type
- their throats and mouths are not nearly as nimble as ours, so they cannot vocally communicate detailed information
Even if there was a Newton-level dog, they wouldn't be able to access the ideas of an earlier Euclid-level dog. Human knowledge is not just about our big brains; we've developed many physical features that make transmission of information far easier than in other species.
OTOH dogs do have a good intuitive "common-sense" understanding of arithmetic, geometry, and physics. It is the unique gift of humans that we can formalize and then extend this intuition, but this ability (and intelligence as a whole) relies on nonverbal common sense.
criddell · 20h ago
> It is worth emphasizing that dogs wouldn't understand formal calculus even if they were smarter than humans!
Exactly! If a dog invented a dog superintelligence and it discovered calculus, the dogs would never understand that discovery. I think a superintelligence we build will discover things we cannot understand.
AIPedant · 20h ago
No, you totally misunderstood my point. Formal calculus is not a physical fact of the universe, it is an abstract human tool that other species (including dogs) are not capable of using, just like they can't use hammers or drive cars. The problem is not a lack of intelligence, it's having the wrong body.
bigfishrunning · 17h ago
I misunderstood it too. Maybe you're superintelligent!
YetAnotherNick · 20h ago
All of this is true for Stephen Hawking. Of course one can argue that he couldn't have done it if not for other humans making technology for him.
AIPedant · 20h ago
The big difference with Stephen Hawking is that he was not born disabled, he became disabled during graduate school. Even in 2025, a human who is born blind and severely paralyzed (so they cannot speak or sign) will probably never learn calculus, regardless of innate ability. Perhaps in the medium term technology will improve.
That said, another major difference is psychology. Switching animals, it seems plausible to me that chimpanzees are theoretically capable of doing basic calculus as a matter of pattern-matching. But you can't force them to study it! Basic calculus is too tedious and high-effort to learn for a mere banana, you need something truly valuable like "guaranteed admission to the flagship state university" for human children to do it. But we don't have an equivalent offer for chimps. (Likewise an Isaac Newton-level dog might still find calculus exceptionally boring compared to chasing squirrels.)
tim333 · 13h ago
Also LLMs are lacking on the eye, paw and throat front but still do better in the math olympiad than dogs, or me for that matter.
Runsthroughit · 13h ago
OMG. Helen Keller. If examples help you people. Prob not.
AIPedant · 12h ago
Was Helen Keller severely paralyzed????
> a human who is born blind and severely paralyzed (so they cannot speak or sign)
alazoral · 21h ago
I think it’s too easy to get anthropocentric and 1D about intelligence. That dog understands things about the world, important things, that mere humans like Bernoulli could never even dream of.
criddell · 20h ago
It's anthropocentric because it's being built by humans. If a dog built a superintelligent AI, I would assume it would be canocentric.
lo_zamoyski · 22h ago
The first rule of marketing and hype is to never define your terms.
Instead, you insinuate and play into fantasy and wishful thinking.
wslh · 22h ago
Same question. I don't think there's a universal definition of superintelligence.
Just to add some food for thought:
Is superintelligence simply a very high IQ, a higher than top humans one? If so, we'd need a way to measure that, since existing IQ tests are designed for human intelligence. Or is superintelligence about scale/order-of-magnitude: many high-IQ minds working together? That would imply a different kind of threshold. But perhaps the key idea is that superintelligence is inherently uncapped, that is once we reach a level we consider "superintelligent" we can still imagine something even more advanced that fits the same label.
dupsik · 22h ago
>Over the last few months we have begun to see glimpses of our AI systems improving themselves. The improvement is slow for now, but undeniable.
Does anyone know what this is referring to?
Archonical · 9h ago
If I told you "I've begun to see glimpses of my sobriety", would you be convinced I'm committed to not being an alcoholic?
I don't think anyone knows what he is referring to. Maybe AlphaEvolve? Certainly not Llama.
tantalor · 22h ago
Hi Mark,
Can you put a date on this please?
Thanks,
tantalor
Henchman21 · 21h ago
Meta has shown us that their goals are antithetical to healthy societies.
The company should be broken up, its assets auctioned, its IP destroyed.
kelseyfrog · 21h ago
> If trends continue, then you'd expect people to spend less time in productivity software, and more time creating and connecting.
Sorry, but the Jevons Paradox[1] returns yet again.
If you make workers more efficient, then we won't be freed up to spend more time creating and connecting. There will be more work.
Creating more efficient steam engines didn't reduce coal consumption, it just made there be more steam engines. The second-order effects of efficiency don't work the way we think they work.
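A minimal numeric sketch of that rebound effect (illustrative numbers and an assumed constant-elasticity demand curve, not data from any real market):

    # Jevons-style rebound: better efficiency lowers the cost per unit of work,
    # demand for work rises, and total fuel use can go *up*, not down.
    def total_fuel_use(efficiency, elasticity=1.5):
        fuel_per_unit_work = 1.0 / efficiency                  # a more efficient engine burns less fuel per unit of work
        price_per_unit_work = fuel_per_unit_work               # treat fuel as the only cost, for simplicity
        work_demanded = price_per_unit_work ** (-elasticity)   # cheaper work -> more work demanded
        return work_demanded * fuel_per_unit_work

    print(total_fuel_use(1.0))  # baseline: 1.0
    print(total_fuel_use(2.0))  # ~1.41 -- doubling efficiency raises total fuel use when elasticity > 1

With an elasticity below 1 the same sketch shows total use falling, which is the point: whether extra efficiency frees people up or just creates more work depends on demand, not on the efficiency gain itself.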
just running on Meta's servers with Meta's software and Meta's tracking and algorithms.
> "This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole"
says the guy who spent most of the last 3 years laying people off.
there's just too much sliminess to dissect. I'll leave it at that.
one thing's for sure, these evil megacorps will use this tech in a dystopian and extractive way. nothing ever changes.
Barrin92 · 22h ago
Anthropic, to their credit, just published that pretty funny paper about how they couldn't get their chatbot to run a single vending machine profitably[1]. Admirable that Mark is so visionary he's skipped straight past Bodega-intelligence to super-intelligence.
This isn't measuring the same thing, and recent results are so extreme that they call into question whether the results would map to the real-world implementation Anthropic tried. Is it really the case that Grok 4 can manage a vending machine many times more profitably than a human, or is it exploiting some property of the simulated environment?
SpicyLemonZest · 22h ago
Anthropic also expects that superintelligence will be reached in the somewhat near future. The intuition is that there's just not that much distance between a chatbot that can manage a vending machine poorly and a chatbot that can manage it well.
AIPedant · 21h ago
Even if that intuition is correct and they can fix the vending machine with a bit more data and RLHF, I fail to see where the "super" comes in here. How the fk are they going to get superintelligent training data? A time machine?
SpicyLemonZest · 16h ago
You're getting at a deep point of disagreement - should we expect a modern or near-future LLM to be limited by the intelligence of the people who generated its training data? I don't think anyone claims to have a provably correct answer. There's one intuition that says yes (why should it be impossible to make new insights from data collected by people who didn't have those insights?) and another that says no (how can a statistical average of N people's most likely responses be smarter than any of those N?)
AIPedant · 15h ago
I think you are reading a little too much into my comment but I understand where you are coming from. My point is that even if you agree with this:
There’s just not that much distance between a chatbot that can manage a vending machine poorly and a chatbot that can manage it well.
it is a huge leap to conclude this:
There’s not much distance between a chatbot that is as intelligent as a human and a chatbot that is more intelligent than a human.
But that seems to be what Anthropic is assuming.
blibble · 22h ago
that intuition is wrong
frahs · 22h ago
[citation needed]
blibble · 20h ago
exactly, the burden of proof is on the person making the claim
so SpicyLemonZest, not me
Integrape · 17h ago
Never had a spicy lemon, so I'm already sceptical.
SpicyLemonZest · 16h ago
To be clear, I personally don't think that current models point the way towards superintelligence. But it does us no good to pretend that this is some absurd opinion from a guy who's looking for the next world revolution after the Metaverse didn't work out. Zuckerberg thinks superintelligence is close because a number of experts actively engaged in the field say that it's close. When you and I say it's not close, we're disagreeing not with crazy randos who can be dismissed out of hand, but with smart people who generally know what they're doing.
fredoliveira · 20h ago
In what ways?
Oras · 22h ago
The art of writing so many words without communicating anything useful.
K0balt · 21h ago
Written by meta.ai
But at least he’s trying to signal benevolence. People getting trapped into their projected image is a thing, so in this day and age I’m going to take this as a win.
colesantiago · 23h ago
I don't buy these Superintelligence claims from Meta of 'abundance' and 'humans are free to do other things' as they are tied to Wall St and earnings.
It is always abundance for the super rich, scarcity for those in jobs.
How can I be free to do my gardening whenever I want when the landlord is asking for $11K rent in my SF flat?
So eventually they will do the opposite of this 'vision' and put this super intelligence to replace jobs.
Also, what happened to the Metaverse that Meta invested hundreds of billions in, as per their namesake?
jazzyjackson · 22h ago
> Meta invested hundreds of billions as per their namesake
They bought a shit ton of GPUs before the LLM boom, which gave them a running start on training their own model. Zuck talks about it in an interview with Lex Fridman.
sorcerer-mar · 22h ago
> How can I be free to do my gardening whenever I want when the landlord is asking for $11K rent in my SF flat?
This is the fatal flaw. It's been recognized explicitly for at least 140 years that the price of land rent rises in lockstep with productivity increases, guaranteeing there is no "escape velocity" for the labor class regardless of how good technology gets.
9rx · 22h ago
Of course, that's why experts started pushing college/university as the way to find escape, with an understanding that people could use its research facilities to create capital instead of falling into the labor class.
But it was ultimately lost in translation. The layman heard: Go to college/university to become a more appealing laborer to employers. And thus nothing improved for the people; the promises of things like higher income never occurred — incomes have held stagnant.
FirmwareBurner · 22h ago
>people could use its research facilities to create capital
This is just survivorship bias. Of course most people choose the employment route since very few people are gonna become good researchers with valuable ideas, and even fewer of those have the ability to become successful business owners, being a good researcher is not enough.
And money doesn't just rain from the sky, you still need money to make money.
9rx · 22h ago
> This is just survivorship bias.
It was forward looking, so survivorship bias doesn't fit. But it may be fair to say that it was unrealistic to think that it was tenable. Reality can certainly give theory a good beating.
> And money doesn't just rain from the sky
If you are going to college to do anything but create capital, the money must be raining from the sky. It would be pretty hard to justify otherwise given that there has been no economic benefit from it. As before, the promise of higher incomes never materialized (obviously, they were promised on the idea of people using college to create capital) — incomes have held stagnant.
FirmwareBurner · 22h ago
>regardless of how good technology gets
Technology increases aren't there so you work fewer hours for the same pay, they're there so your business owner gets more money from you working the same hours.
If a machine gets invented that can do your job it's not like you can now go home and relax for the rest of your life and still keep receiving your pay cheques. This utopia doesn't exist.
sorcerer-mar · 22h ago
Yes but even this ignores the more important dynamic at play: rent rises to eat nearly all the productivity gains.
If you add technology to your workplace your wages should go up (not to eat all the gains of the technology, but a decent portion via wage competition), but then once your wages go up, the local rent goes up anyway.
gishglish · 22h ago
> How can I be free to do my gardening whenever I want when the landlord is asking for $11K rent in my SF flat?
You can work his fields in exchange for most of the harvest of course!
lo_zamoyski · 22h ago
Or join a purgatorial society in exchange for free grazing on his land!
jerojero · 22h ago
The metaverse is an important part for this vision of the future.
With the metaverse it won't matter that you live in a 3x3m cubicle because you will use your VR headset to pretend you live in a spacious and comfortable place.
That's how it was in snow crash anyway, where the term comes from.
nashashmi · 18h ago
Equipment and hardware used for basic things like farming, clothing, and housing can become weaponized, both to drown local industry in efficiently produced goods and as a carrot and stick to force countries to comply with globalist ideals (that seek to consolidate power into the hands of a few).
jonplackett · 22h ago
When prompt injection is solved I’ll believe super intelligence is possible.
raghavtoshniwal · 15h ago
Isn't that a bit like saying that if the common cold is solved, you will believe that human intelligence is possible?
Why isn't it superintelligent if there are security flaws in the system?
[Not taking a stance on whether superintelligence is possible, but your condition seems a bit arbitrary]
jonplackett · 1h ago
I guess what I mean is - if it can be easily derailed from its super intelligence - then it can’t be that intelligent. It’s just a tool, doing as it’s told.
koakuma-chan · 22h ago
Meta is yet to release a competitive model.
sifar · 18h ago
>> an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.
What has intelligence (let alone superintelligence), or the lack of it, got to do with the last two? All these discussions about AGI seem to have reduced what it means to be a human being to a token generator.
grimpy · 22h ago
why does this page have the same look as ssi.inc? feels kind of strange to copy that of all things?
bonoboTP · 21h ago
If you "view source", you'll see that the difference is day and night. ssi.inc is a simple html website. This Meta memo is a JS framework monster.
abxyz · 22h ago
It's a stylistic choice, it's a statement: content is all that matters. Brutalist websites have been around for a long time.
I'm quite aware of what it is. This isn't just "make a brutalist website", it is "make a website that looks like ssi when Meta has multiple platforms and means to publish the same message." It is much more intentional than "let's make it look like web 1".
sandspar · 9h ago
You can quibble about timelines and percentages and all that. But you gotta admit that this is a treat to watch. What a fun time to be alive.
laweijfmvo · 22h ago
Meta has a lot of users (for better or worse). AFAIK they give away all the current “AI” features for free (via ads) like all of their products, right? That must take an enormous amount of compute to be able to offer all of their users the ability to generate AI Slop at any time… there’s no way they can continue to do that with anything close to a “super intelligence”, right? Do they have that much faith in ads?
coldpie · 22h ago
Deeply embarrassing to see this coming from one of America's most powerful companies, especially after the metaverse faceplant from a few years back.
yodsanklai · 22h ago
Nothing is surprising in the current Zeitgeist...
techpineapple · 22h ago
Why embarrassing? Having one failure means you can’t try again?
coldpie · 22h ago
LLMs are not going to fundamentally reshape society. Productivity will go up a couple percent, the profits from that productivity will go to the richest people. Some jobs will be made redundant, some new jobs will be created. That's it. Same as it ever was.
FergusArgyll · 22h ago
That's your opinion. It may be correct but it's not embarrassing for a CEO to have a different opinion than you.
coldpie · 22h ago
It's embarrassing to me as an American that the people who are leading our society are so plainly in the loony bin.
queenkjuul · 19h ago
But his opinion, itself, is an embarrassing opinion for a person to have
croisillon · 22h ago
the source code horror of this motherducking webpage
alittletooraph2 · 22h ago
ngl that felt like reading a lot of fluff
polynomial · 21h ago
What phase of the AOL lifecycle is Meta currently in? (Seriously asking.)
lenerdenator · 22h ago
If there's one person I trust to raise a superintelligence, it's Mark Zuckerberg.
EDIT:
to clarify, this is sarcasm
akomtu · 21h ago
If Meta invents superhuman AI and Mark gets to own it, we'll effectively enter the era of Antichrist. Imagine a human as greedy, as capable and as hollow as Mark, but made 1000x smarter than anyone else on the planet? Corporations are kept in check by competition between them, they can't be too abusive, but this brittle balance will end if one of them creates AI.
lo_zamoyski · 22h ago
It is possible to hold the following two statements to be true without risk of contradiction:
1. LLMs and "AI" broadly can become a very useful and powerful technology that can have a transformative effect on industry and so on.
2. Talk of "superintelligence" is total horseshit.
AIPedant · 21h ago
This applies precisely to 1970s AI boosters getting way too excited about Prolog and Lisp.
techpineapple · 22h ago
I'm curious how many people are actually outside of this; most debates seem to boil down to "AI isn't bullshit, I've used it and I find it useful", and my argument as a skeptic isn't that it doesn't do the things people clearly see it does, nor that those use cases will improve, but that valuations and investment are at the levels of "solving scarcity".
shortrounddev2 · 22h ago
The effect of Meta products from 2006~2022 has been disastrous for humanity. Social media companies are the new big tobacco. Allowing them to play a leading role in the development of AGI would probably be fatal for democracy around the world. I would not trust a single word that comes out of Mark Zuckerberg's mouth
abxyz · 22h ago
“As recently as 200 years ago, 90% of people were farmers growing food to survive. Advances in technology have steadily freed much of humanity to focus less on subsistence and more on the pursuits we choose”
More people today are dying from starvation than people existed on earth 200 years ago. Celebrating our achievements in making shareholders rich is one thing, but to take credit for freeing the people. Yikes. Mark is more out of touch than seems possible.
tim333 · 13h ago
Because there are more people. Life expectancy etc is way up.
queenkjuul · 19h ago
Yeah he says stuff like that as if the vast majority of people even in his own country don't work continuously for their subsistence, to say nothing of the people in the less powerful countries his company exploits
But you will still need to sustain ex-workers if they can't get normal jobs, and those same people at the top will not tolerate the taxes required to sustain a basic level of living for much wider population. They already can't tolerate the idea of a much smaller population using food assistance or healthcare from the government.
That leads me to think this is not really a visionary statement, but just a signal that Mark isn't intentionally trying to bring about a new dystopia, and here's his proof. And if a dystopia happens to come about, you can't blame him because he had pure intentions; clearly it was everyone else who just didn't agree with him and it's their fault.
Maybe make Meta a not-for-profit and there might be some credibility here.
If so, the logical choice would be to change the name from "Meta" to "AGI".
No comments yet
From Zuckerberg’s behavior, since the beginning, it’s clear what he wants is power, and if you have the kind of mental health disorder where you believe you know better than everyone and deserve power over others, then that’s not dystopian at all.
Everything he says is PR virtue signaling. Judge the man on his actions.
Kind of an unrelated topic but I'm reminded of a video essay in which the creator talks about this. They put it very kindly, IMO:
> Rich and powerful people have quite a different attitude and approach to truth and lies and games compared to ordinary people.
Which sounds like a really nice way of saying that rich and powerful people are dishonest by ordinary standards.
https://youtu.be/m6lObdE3s10?t=245
Apart form you of course, so I'm sure you'd be ok if the government would tax your higher than average tech wage till your take home pay would match that of a train conductor's or bus driver's, like in Western Europe, and therefore fix the wage gap you hate so much. Would you like that solution?
Caption this: It's only a problem when the people who earn more than me are greedy, but my greed is fine, it's OK for me to out-earn others because "I've earned it", not like Zuckerberg, he didn't earn it.
I live in Europe and earn ca. 6 times more than my friend who is a bus driver in the same city. We both have access to free education and, if we wish, also free healthcare, for which I am paying slightly more, but I really don't mind.
Either you have a FANG wage or your friend has a poverty wage because here's how it's in Austria SW Dev wage 3k net/month, bus driver 2,5k. There's no 6x difference here.
So you're proving my point that it works for you when income distribution is not egalitarian because you wouldn't be very happy if you earned the same as your friend.
So you're proving my point that it works for you when income distribution is not egalitarian because you wouldn't be very happy if you earned the same as your friend.
To put things in perspective, according to this website[0], bus drivers earn ca. €20 per hour, within some quite limited margin. I don't know if this data reflects reality. However, the data for SWE show a much, much wider margin[1]. So it would make much more sense to compare the medians, and this gives only 2x difference. A big gap still, but not enormous as in my case.
[0] https://www.salaryexpert.com/salary/job/bus-driver/germany [1] https://www.levels.fyi/t/software-engineer/locations/germany
Granted, I was also guilty of that since I was comparing the salary of a tram driver for the local state public transport company where they get great benefits and the union always negotiates top salaries since they have a monopoly and can just increase the bus fare to their customers whenever they need a raise.
But my point still stands. Why bother going to uni to become and engineer in the competitive private sector, if you're gonna net only 500 Euros more than a driver or any other government annex union job?
Also, there is no class solidarity the way you imagine it in your fantasy, because to the average person on the street putting the fries in the bag ac McD, or stacking shelves at Walmart, or tearing down the roads with a jackhammer in the summer heat, the big-tech worker is closer to the robber baron Zuckerberg, than they are to them. So when you get laid off from your big-tech job, they won't have solidarity for you, they might even break a smile, as those spoiled pampered tech worker are brought down from their Kombucha sipping ivory towers.
Class solidarity, as seen applied in Europe, means bringing the income of tech workers in line with unskilled labor till everyone is equally lower-middle class, not touching the super wealthy robber barons to contribute more to society, because no society does that, that's just fantasy. Look at the owner of IKEA's complex tax avoidance scheme: https://www.greens-efa.eu/legacy/fileadmin/dam/Documents/Stu... Do you think he has any class solidarity? He has more in common with Musk, Zuckerberg or XiJinping than with his average Swedish countrymen.
The more class solidarity you wish and vote for, the higher the tax burden will be on skilled, ambitious middle-class workers and small businesses, not on Zuckerberg or the elites with inherited wealth. So be careful what you wish for. My country already went through communism once, and everyone had enough of "class solidarity" for the next lifetime, but there are always some Westerners out there clinging to the idea that "this time it will be different". Sure, buddy.
A big-tech worker earning $200k is closer to a minimum-wage worker earning $20k per year than to Zuckerberg, who earns $20M per year with a net worth of $200B.
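To make the scale of that concrete, a back-of-the-envelope calculation using the figures quoted above (quoted, not verified):

    # Rough comparison of the gaps mentioned in the comment above.
    minimum_wage = 20_000       # $/year
    big_tech     = 200_000      # $/year
    zuck_income  = 20_000_000   # $/year (quoted figure)
    zuck_wealth  = 200e9        # $ net worth (quoted figure)

    print(f"min wage -> big tech: {big_tech / minimum_wage:.0f}x")        # 10x
    print(f"big tech -> Zuck income: {zuck_income / big_tech:.0f}x")      # 100x
    print(f"40 years of big tech pay -> Zuck wealth: "
          f"{zuck_wealth / (big_tech * 40):,.0f}x")                       # 25,000x

On that scale the tech worker sits one order of magnitude above the minimum-wage worker and four to five below Zuckerberg, which is the point being made.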
Did you know my construction worker friend actually makes as much as I do? Amazing what class solidarity in the form of unions can achieve, eh?
IKEA is currently owned by a series of foundations.
On account of Ingvar Kamprad being dead, they're not really in the same class.
Before Ingvar Kamprad passed away, his estimated worth was $42.5B–$58.7B.
By comparison, Zuck's estimated worth is $221.2B–$247B.
Here we go, predictably pulling the oldest trick in the book, just two weeks after it was reported [1] that the Superintelligence leadership was discussing moving to closed source for their best models, not for any risk mitigation reason, but for competitive reasons.
Also,
> As recently as 200 years ago, 90% of people were farmers growing food to survive. Advances in technology have steadily freed much of humanity to focus less on subsistence and more on the pursuits we choose. At each step, people have used our newfound productivity to achieve more than was previously possible, pushing the frontiers of science and health, as well as spending more time on creativity, culture, relationships, and enjoying life.
Yeah, about that... Sure, Mark can choose to just fly to his private Hawaiian island or his Tahoe bunker and mess around with the metaverse and AI and whatever he chooses. 99.9% of the population has a plain old regular job that they go to for subsistence. Michael from North Dakota has not been doing bookkeeping for SMEs because it was always the pursuit of his dreams. I also see no reason at all to believe we spend more time on creativity, culture, relationships, or enjoying life than before. That last point especially has been in free fall over the last 50 years, by the look of every single mental well-being metric around.
[1]: https://www.nytimes.com/2025/07/14/technology/meta-superinte...
That's not pulling a trick; that's doing precisely what Zuck said he would do. In April 2024, on Dwarkesh's podcast, Zuck said that models are a commodity right now, but that if models became the biggest differentiator, Meta would stop open-sourcing them.
At the time he also said that the Model itself was probably not the most valuable part of an ultimate future product, but he was open to changing his mind on that too.
You can whine about that anyway, but he's not tricking anyone. He has always been frank about this!
> Open Source AI is the Path Forward.
> Meta is committed to open source AI. I’ll outline why I believe open source is the best development stack for you, why open sourcing Llama is good for Meta, and why open source AI is good for the world and therefore a platform that will be around for the long term.
> We need to control our own destiny and not get locked into a closed vendor.
> We need to protect our data.
> We want to invest in the ecosystem that’s going to be the standard for the long term.
> There is an ongoing debate about the safety of open source AI models, and my view is that open source AI will be safer than the alternatives.
> I think it will be better to live in a world where AI is widely deployed so that larger actors can check the power of smaller bad actors [...] As long as everyone has access to similar generations of models – which open source promotes – then governments and institutions with more compute resources will be able to check bad actors with less compute.
> The bottom line is that open source AI represents the world’s best shot at harnessing this technology to create the greatest economic opportunity and security for everyone.
> I hope you’ll join us on this journey to bring the benefits of AI to everyone in the world.
> Mark Zuckerberg
Pulling the "Closed source for safety" card, once it makes economic sense for you, after having clearly outlined why you think open source is safer, and how you are "committed" to it "for the long term" and for the "good for the world", is mainly where my criticism is coming from. If he was upfront in the new blog post about closing source for competitive reason, I would still find it a distasteful bait and switch but much less so than trying to just put the safety sticker on it after having (correctly) trashed others for doing so.
https://about.fb.com/news/2024/07/open-source-ai-is-the-path...
Oh, is it now? So you know for a fact that intelligence comes from token prediction, do you, Mark?
Look, multi-bit screwdrivers have been improving steadily as well. I've got one that stores all its bits in the handle, and one with over three dozen bits in a handy carrying case! But they're never going to suddenly, magically become an ur-tool, capable of handling any task. They're just going to get better and better as screwdrivers.
(Well, they make a handy hammer in a pinch, but that's using them off-spec. The analogy probably fits here, too, though.)
My POINT, to be crystal clear, is that Mark is saying that A is getting better, so eventually it will turn into B. It's ludicrous on its face, and he deserves the ridicule he's getting in the comments here.
But I also want to go one step further and maybe turn the mirror around a bit. There's also an odd tendency here to do a very similar thing: to observe critical limitations that LLM tools have, that they have always had, and that are very likely baked into the technology and science powering these tools, and then to do the same thing as Mark, to just wave our hands and say "But I'm sure they'll figure it out/fix it/perfect it soon."
I dunno, I don't see it. I think we're all holding incredible screwdrivers here, which are very impressive. Some people are using them to drive nails, which, okay, sure. But acting like a screwdriver will suddenly turn into precision calipers (and a saw, and a level, and...) if we just keep adding on more bits, I think that's just silly.
Facebook's mission of "connecting the world" turned out to be the absolute worst thing anyone should ever try to do. Humans are social creatures, yes, but every connection we make costs energy to maintain, and at a certain point (Dunbar's Number) we apply the minimal amount of energy and effort. With Internet anonymity, that means we are actually incapable of treating each other as people on the Internet, leading to the rise of toxicity and much, much worse.
Mark has never understood this, and as his fortune is built around not understanding this, he never will.
There is nothing good that will come from Meta's "superintelligence" and this vision is proof.
The core problem is gamification of social interaction. The 'Like' button and everything like it for things people say or show is hands down the worst thing to happen on the internet. Everywhere they can, people whore for karma (unless they spend a lot of mental effort to fight back that urge). How primitive the related moderation systems are directly affects how much primitive shit gets rewarded and alas, most moderation systems are ridiculously primitive.
So, dopamine hits for saying primitive shit.
Well, that's because there aren't people on the internet! I mean, yes, we technologists understand that there are often people pulling knobs and levers behind the scenes as an implementation detail, so technically they are there. But they are only implementation details, not what makes it what it is. If you replaced the implementation with another algorithm that functions just as well, nobody would notice. In that sense, it is just software.
> leading to the rise of toxicity and much, much worse.
It is not so much that it has led to anything different, but that those who used to be in the forest yelling at animals as if they were human moved into civilized areas when they started yelling at computers as if they were human. That has taken their mental disorders to where they are much more visible.
>We believe the benefits of superintelligence should be shared with the world as broadly as possible.
So... ads.
I think it would be back to income-based tiers, though. You want more assistance, you pay $200 per month. Even more, maybe $2,000 (for companies). Then, if you don't want to pay, you get contextual ads (which would work here because LLMs can contextualize far better) and a lower quality of service.
Any time a CEO publishes such an empty, wordy essay, it's probably earnings-reporting time. I can't shake the feeling it's a public sub-reply aimed at one doubting investor, or a cluster of them, who have started to doubt the CEO's vision for the company, or find the lack of one on a certain topic concerning.
What are they going to do, exactly? They explicitly invested in the company knowing that Zuckerberg would retain full control.
If they can show gross negligence there may be a legal avenue, but it would be pretty hard to justify chasing potentially profitable business ventures, even if they end up failing, as being negligence. Controversial business decisions are not negligence in the eye of the law.
Sure, they can sell their interest in the company — if someone else wants to buy it — but that just moves who the investor is around. That doesn't really change anything.
Meanwhile I can't properly find items that are listed on FB marketplace.
I don't think the author of that book is unbiased, and after some healthy debate with friends, I imagine there are a number of different perspectives on the facts. But it seems clear that, well before it was public knowledge outside of the company, there was clear visibility of, and ignorance of, the harms being caused by the platform inside of it.
Facebook (now Meta) turned human attention into a product. They optimized for engagement over wellbeing and knew that their platforms were amplifying division and did it anyway because the metrics looked good.
It's funny, because I aspire to many of the same things cited in this vision -- helping realize the best in each individual, giving them more freedom, and critically, helping them be wise in a world that very clearly would prefer them not to be.
But the vision is being pitched by the company that already knows too much about us and has consistently used that knowledge for extraction rather than empowerment.
Does the average American worker today spend a ton of time in productivity software?
I know and Zuckerberg surely knows the impact on labor will be much more pervasive than that, so it seems like an odd way to frame the future.
"Average?" No. But many millions of people, yes.
The majority of people in my company spend their day tied to Microsoft Office.
Which brings its own problems when managers don't understand that building a computer program isn't the same speed, complexity, and skill level as making a PowerPoint presentation.
But seriously, this comment can easily be true, and if it is, then it is an excellent example of a human endeavour that we invented to improve efficiency but that has become a bottomless sink of talent, effort, and cost directed away from generating any value whatsoever.
I have never seen a presentation that couldn't have been done just as well without the use of a computer, except to demonstrate things that are computer-related.
Presentations are a great example of an activity that has become an end unto itself that delivers no value, and only serves as a kind of internal preening behaviour, signalling a person's value to the organisation without actually delivering any.
https://mcdreeamiemusings.com/blog/2019/4/13/gsux1h6bnt8lqjd...
Communication is communication, be it by PowerPoint or semaphore, and it takes talent to do it right.
> I have never seen a presentation that couldn't have been done just as well without the use of a computer, except to demonstrate things that are computer-related.
Re-reading that, I wonder what would've happened if the Boeing wonks in that meeting had just not brought a presentation. Maybe you're right.
I was a contractor for the military for a few years a while back… the military runs on PowerPoint.
I've seen weeks poured into a PowerPoint presentation whose only point was that regular physical conditioning was beneficial to physical fitness. In the medical equipment maintenance shop.
I attended at least 100 meetings during those 8 years, and there wasn't a single presentation that couldn't have been replaced with a sheet or two of paper and a short lecture/discussion. Instead we got half-hour marvels of editing and animation, often replete with music and video, and I'll admit some of it was really a work of art.
But none of that contributed to the salient points. If anything, a good presentation was a distraction, leading to a ten-minute discussion about the technique and tools used to make the presentation, lol. There was also an underground market of enlisted presentation gurus who would make presentations in exchange for favors or even for pay, because impressive PowerPoint presentations were considered critical to career advancement for officers.
I often wondered what would have happened if IMD deleted PowerPoint off of all of the machines on the domain lol. Collapse? 10x productivity? Tires burning in the streets? Only one way to know for sure!
Considering that the most common use for "AI" is to take jobs away from creators like artists, musicians, illustrators, writers, and such, I find this statement hard to believe.
So far, all I've seen is AI taking money away from the least-paid workers (artists, et al.) and giving it to tech billionaires.
Creating what? AI slop?
- Maximum data extraction
- Behavioral modification for profit
- Attention capture and addiction maintenance
"Personal superintelligence" serves all three perfectly while appearing to do the opposite.
"We believe the benefits of superintelligence should be shared with the world as broadly as possible. That said, superintelligence will raise novel safety concerns. We'll need to be rigorous about mitigating these risks and careful about what we choose to open source. Still, we believe that building a free society requires that we aim to empower people as much as possible."
So maybe no more open source because of "safety"?
It also makes you wonder what they do with all of that information. But surely this is altruism.
- their eyesight is too poor to read
- their paws are not designed for fine manipulations so they cannot write or type
- their throats and mouths are not nearly as nimble as ours, so they cannot vocally communicate detailed information
Even if there were a Newton-level dog, it wouldn't be able to access the ideas of an earlier Euclid-level dog. Human knowledge is not just about our big brains; we've developed many physical features that make transmission of information far easier than for other species.
OTOH dogs do have a good intuitive "common-sense" understanding of arithmetic, geometry, and physics. It is the unique gift of humans that we can formalize and then extend this intuition, but this ability (and intelligence as a whole) relies on nonverbal common sense.
Exactly! If a dog invented a dog superintelligence and it discovered calculus, the dogs would never understand that discovery. I think a superintelligence we build will discover things we cannot understand.
That said, another major difference is psychology. Switching animals, it seems plausible to me that chimpanzees are theoretically capable of doing basic calculus as a matter of pattern-matching. But you can't force them to study it! Basic calculus is too tedious and high-effort to learn for a mere banana, you need something truly valuable like "guaranteed admission to the flagship state university" for human children to do it. But we don't have an equivalent offer for chimps. (Likewise an Isaac Newton - level dog might still find calculus exceptionally boring compared to chasing squirrels.)
Instead, you insinuate and play into fantasy and wishful thinking.
Just to add some food for thought: Is superintelligence simply a very high IQ, higher than that of the top humans? If so, we'd need a way to measure that, since existing IQ tests are designed for human intelligence. Or is superintelligence about scale and order of magnitude: many high-IQ minds working together? That would imply a different kind of threshold. But perhaps the key idea is that superintelligence is inherently uncapped, that is, once we reach a level we consider "superintelligent" we can still imagine something even more advanced that fits the same label.
Does anyone know what this is referring to?
I don't think anyone knows what he is referring to. Maybe AlphaEvolve? Certainly not Llama.
Can you put a date on this please?
Thanks, tantalor
The company should be broken up, its assets auctioned, its IP destroyed.
Sorry, but the Jevons paradox[1] returns yet again.
If you make workers more efficient, then we won't be freed up to spend more time creating and connecting. There will be more work.
Creating more efficient steam engines didn't reduce coal consumption; it just meant more steam engines. The second-order effects of efficiency don't work the way we think they work.
1. https://en.wikipedia.org/wiki/Jevons_paradox
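A toy illustration of the mechanism, with a made-up price elasticity of demand (nothing here is historical data):

    # Toy Jevons-paradox sketch: efficiency doubles, the effective cost of a
    # unit of work halves, but demand is elastic enough (elasticity > 1) that
    # total coal consumption still rises. The elasticity value is invented.
    efficiency_gain = 2.0                              # work per ton of coal doubles
    elasticity = 1.5                                   # assumed price elasticity of demand

    cost_per_unit_work = 1 / efficiency_gain           # 0.5x the old cost
    work_demanded = cost_per_unit_work ** -elasticity  # ~2.83x the old demand
    coal_consumed = work_demanded / efficiency_gain    # ~1.41x the old consumption

    print(f"work demanded: {work_demanded:.2f}x, coal burned: {coal_consumed:.2f}x")

With elasticity below 1 the paradox disappears, which is exactly why it's a second-order effect that depends on how demand responds rather than a law of nature.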
just running on Meta's servers with Meta's software and Meta's tracking and algorithms.
> "This is distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole"
says the guy who spent most of the last 3 years laying people off.
There's just too much sliminess to dissect. I'll leave it at that.
One thing's for sure: these evil megacorps will use this tech in a dystopian and extractive way. Nothing ever changes.
[1] https://www.anthropic.com/research/project-vend-1?ref=blog.m...
Models since then have been able to run it profitably. Incredible how fast things are progressing.
https://andonlabs.com/evals/vending-bench
so SpicyLemonZest, not me
But at least he’s trying to signal benevolence. People getting trapped into their projected image is a thing, so in this day and age I’m going to take this as a win.
It is always abundance for the super rich, scarcity for those in jobs.
How can I be free to do my gardening whenever I want when the landlord is asking $11K rent for my SF flat?
So eventually they will do the opposite of this 'vision' and put this super intelligence to replace jobs.
Also, what happened to the metaverse that Meta invested hundreds of billions in, as per its namesake?
They bought a shit ton of GPUs before the LLM boom, which gave them a running start on training their own model. Zuck talks about it in an interview with Lex Fridman.
This is the fatal flaw. It's been recognized explicitly for at least 140 years that the price of land rent rises in lockstep with productivity increases, guaranteeing there is no "escape velocity" for the labor class regardless of how good technology gets.
But it was ultimately lost in translation. The layman heard: go to college/university to become a more appealing laborer to employers. And thus nothing improved for the people; the promised benefits, like higher income, never materialized — incomes have held stagnant.
This is just survivorship bias. Of course most people choose the employment route, since very few people are going to become good researchers with valuable ideas, and even fewer of those have the ability to become successful business owners; being a good researcher is not enough.
And money doesn't just rain from the sky, you still need money to make money.
It was forward looking, so survivorship bias doesn't fit. But it may be fair to say that it was unrealistic to think that it was tenable. Reality can certainly give theory a good beating.
> And money doesn't just rain from the sky
If you are going to college to do anything but create capital, the money must be raining from the sky. It would be pretty hard to justify otherwise, given that there has been no economic benefit from it. As before, the promise of higher incomes never materialized (obviously, it was premised on the idea of people using college to create capital) — incomes have held stagnant.
Technology increases aren't there so you work fewer hours for the same pay; they're there so your business owner gets more money from you working the same hours.
If a machine gets invented that can do your job it's not like you can now go home and relax for the rest of your life and still keep receiving your pay cheques. This utopia doesn't exist.
If you add technology to your workplace your wages should go up (not to eat all the gains of the technology, but a decent portion via wage competition), but then once your wages go up, the local rent goes up anyway.
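A toy sketch of that dynamic, with entirely made-up numbers and the simplifying assumption that housing supply is fixed so landlords can capture most of any raise:

    # Illustrative only: productivity raises wages, rents absorb most of it.
    wage = 4_000.0                      # monthly net wage
    rent = 1_500.0                      # monthly rent
    disposable_before = wage - rent     # 2,500

    raise_from_tech = wage * 0.10               # 10% raise via wage competition
    rent_increase   = 0.8 * raise_from_tech     # assume landlords capture 80%
    disposable_after = (wage + raise_from_tech) - (rent + rent_increase)

    print(f"before: {disposable_before:.0f} left after rent")
    print(f"after:  {disposable_after:.0f} left after rent")  # only 80 of the 400 raise remains

The 80% capture rate is invented; the point is just that when land is the bottleneck, the rent line tends to move with the wage line.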
You can work his fields in exchange for most of the harvest of course!
With the metaverse it won't matter that you live in a 3x3m cubicle because you will use your VR headset to pretend you live in a spacious and comfortable place.
That's how it was in Snow Crash anyway, which is where the term comes from.
What has intelligence (let alone superintelligence), or the lack of it, got to do with the last two? All these discussions about AGI seem to have reduced what it means to be a human being to a token generator.
https://motherfuckingwebsite.com/
EDIT:
to clarify, this is sarcasm
1. LLMs and "AI" broadly can become a very useful and powerful technology that can have a transformative effect on industry and so on.
2. Talk of "superintelligence" is total horseshit.
More people are dying from starvation today than existed on earth 200 years ago. Celebrating our achievements in making shareholders rich is one thing, but taking credit for freeing the people? Yikes. Mark is more out of touch than seems possible.