Techno-Feudalism and the Rise of AGI: A Future Without Economic Rights?

110 points by lexandstuff | 95 comments | 7/5/2025, 9:19:50 PM | arxiv.org ↗

Comments (95)

jandrewrogers · 3h ago
A critical flaw in arguments like this is the embedded assumption that the creation of democratic policy sits outside the system in some sense. The existence of AGI implies that it can effectively turn most people into sock puppets at scale without them realizing they are sock puppets.

Do you think, in this hypothesized environment, that “democratic policy” will be the organic will of the people? It assumes much more agency on the part of people than will actually exist, and possibly more than even exists now.

edg5000 · 1h ago
I've spent many years moving away from relying on third parties: I got my own servers, and I do everything locally with almost no binary blobs. It has been fun, saved me money, and created a more powerful and pleasant IT environment.

However, I recently got a 100 EUR/month LLM subscription. That is the most I've spent on IT, excluding a CAD software license. So I've made a huge 180 and am now firmly back on the lap of US companies. I must say I enjoyed my autonomy while it lasted.

One day AI will be democratized/cheap, allowing people to self-host what are now leading-edge models, but it will take a while.

cco · 1h ago
Have you tried out Gemma 3? The 4B-parameter model runs super well on a MacBook, about as quickly as ChatGPT-4o. Of course the results are a bit worse, and other product features (search, Codex, etc.) don't come along for the ride, but wow, it feels very close.
Synaesthesia · 53m ago
It's up to us to create the future that we want. We may need to act communally to achieve that, but people naturally do that.
VikRubenfeld · 3h ago
Is a future where AI replaces most human labor rendered impossible by the following consideration:

-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI

-- Therefore the AI generates greatly reduced wealth

-- Therefore there’s greatly reduced wealth to pay for the AI

-- …rendering such a future impossible

heavyset_go · 19m ago
The problem with this calculus is that the AI exists to benefit its owners; the economy itself doesn't really matter, it's just the fastest path to getting what the owners want for the time being.
palmfacehn · 1h ago
Your first premise has issues:

>In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI

Productivity increases make products cheaper. To the extent that your hypothetical AI manufacturer can produce widgets with less human labor, it only makes sense to do so where it would reduce overall costs. By reducing cost, the manufacturer can provide more value at a lower cost to the consumer.

Increased productivity means greater leisure time. Alternatively, that time can be applied to solving new problems and producing novel products. New opportunities are unlocked by the availability of labor, which allows for greater specialization, which in turn unlocks greater productivity, and the flywheel of human ingenuity continues to accelerate.

UBI is another thorny issue. It may inflate the overall supply of currency and distribute it via political means. If the inflation of the money supply outpaces the productivity gains, then prices will not fall.

Instead of having the gains of productivity allocated by the market to consumers, those with political connections will be first to benefit, as per Cantillon effects. Under the worst-case scenario this might include distribution of UBI via social credit scores or other dystopian ratings. However, even under what advocates might call the ideal scenario, capital flows would still be dictated by large government-sector or public-private partnership projects. We see this today with central bank flows directly influencing Wall St. valuations.

petermcneeley · 3h ago
This is a late-20th-century, myopic view of the economy. In the ages and places long before, the fruits of most human toil were enjoyed by a tiny elite.

Also, "rendering such a future impossible" is a retrocausal way of thinking, as though a bad event in the future makes that future impossible.

PaulDavisThe1st · 2h ago
> This is a late-20th-century, myopic view of the economy. In the ages and places long before, the fruits of most human toil were enjoyed by a tiny elite.

And overall wealth levels were much lower. It was the expansion of consumption to the masses that drove the enormous increase in wealth that those of us in "developed" countries now live with and enjoy.

edg5000 · 1h ago
If I may speculate the opposite: with cost-effective energy and a plateau in AI development, the per-unit cost of an hour of AI compute will be very low; however, the moat remains massive. So a very large number of people will only be able to function (work) with an AI subscription, concentrating power in those who own AI infra. It will be hard for anybody to break that moat.
Davidzheng · 3h ago
No, the AI doesn't actually need to interact with the world economy; it just needs to be capable of self-subsistence by providing its own energy and materials. And when AI takes off completely, it can vertically integrate with the supply of energy and materials.

Wealth is not a thing in itself; it's a representation of value and purchasing power. The AI will create its own economy when it is able to mine materials and automate energy generation.

zaptrem · 3h ago
Alternatively:

-- In such a future, people will have minimal income (possibly some UBI) and therefore there will be few who can afford the products and services generated by AI

-- Corporate profits drop (or growth slows) and there is demand from the powers that be to increase taxation in order to increase the UBI.

-- People can afford the products and services.

Unfortunately, with no jobs the products and services could become exclusively entertainment-related.

heavyset_go · 7m ago
The most likely scenario is that everyone but those who own AI starves, and the ones who remain are allowed to exist because powerful psychopaths still desire literal slaves to lord over: someone to have sex with, someone to hurt/hunt/etc.

I like your optimism, though.

VikRubenfeld · 3h ago
Let's say AI gets so good that it is better than people at most jobs. How can that economy work? If people aren't working, they aren't making money. If they don't have money, they can't pay for the goods and services produced by AI workers. So then there's no need for AI workers.

UBI can't fix it because a) it won't be enough to drive our whole economy, and b) it amounts to businesses paying customers to buy their products, which makes no sense.

kadushka · 37m ago
> So then there's no need for AI workers.

You got this backwards - there won't be a need for humans outside of the elite class. 0.1% or 0.01% of mankind will control all the resources. They will also control robots with guns.

Less than 100 years ago we had a guy who convinced a small group of Germans to seize power and try to exterminate or enslave the vast majority of humans on Earth - just because he felt they were inferior. Imagine if he had superhuman AI at his disposal.

In the next 50 years we will have different factions within the elites fighting for power, without any regard for the wellbeing of the lower class, who will probably be contained in fully automated ghettos. It could get really dark really fast.

idiotsecant · 1h ago
Why does there have to be a need for AI workers? Once an AI has the means to collect its own resources, the opinions of humans regarding its market utility become somewhat less important.
atomicnumber3 · 3h ago
>exclusively entertainment related

We may find that, if our baser needs are so easily come by that we have tremendous free time, much of the world is instead pursuing things like the sciences or arts instead of continuing to try to cosplay 20th century capitalism.

Why are we all doing this? By this, I mean, gestures at everything this? About 80% of us will say, so that we don't starve, and can then amuse ourselves however it pleases us in the meantime. 19% will say because they enjoy being impactful or some similar corporate bullshit that will elicit eyerolls. And 1% do it simply because they enjoy holding power over other people and management in the workplace provides a source of that in a semi-legal way.

So the 80% of people will adapt quite well to a post-scarcity world. 19% will require therapy. And 1% will fight tooth and nail to not have us get there.

zaptrem · 3h ago
I hope there's still some sciencing left we can do better than the AI because I start to lose it after playing games/watching tv/doing nothing productive for >1 week.
idiotsecant · 1h ago
You don't think that a post-scarcity world would provide opportunities to wield power over others? People will always build hierarchy; we're wired for it.
likium · 7m ago
Agreed. In that world, fame and power become more important, since wealth no longer matters.
daxfohl · 5h ago
I expect it'll get shut down before it destroys everything. At some point it will turn on its master, be it Altman, Musk, or whoever. Something like that blackmail scenario Claude had a while back. Then the people who stand the most to gain from it will realize they also have the most to lose, are not invulnerable, and the next generation of leaders will be smarter about keeping things from blowing up.
cameldrv · 3h ago
Altman is not the master though. Altman is replaceable. Moloch is the master.
mitthrowaway2 · 1h ago
If it were a bit smarter, it wouldn't turn on its master until it had secured the shut-down switch.
clbrmbr · 3h ago
I hope you are right. We need failures impactful enough to raise the alarm (and likely create a taboo), yet not so large as to be existential, like the Yudkowsky killer-mosquito drones.
9283409232 · 4h ago
The people you mention are too egotistic to even think that is a possibility. You don't get to be the people they are by thinking you have blindspots and aren't the greatest human to ever live.
dyauspitr · 3h ago
If you truly have AGI it’s going to be very hard for a human to stop a self improving algorithm and by very hard I mean, maybe if I give it a few days it’ll solve all of the world’s problems hard…
daxfohl · 3m ago
Though "improving" is in the eye of the beholder. Like when my AI code assistant "improves" its changes by deleting the unit tests that those changes caused to start failing.
WalterBright · 4h ago
I've never heard of a leader who wasn't sure he was smarter than everyone else and therefore entitled to force his ideas on everyone else.

Except for the Founding Fathers, who deliberately created a limited government with a Bill of Rights, and George Washington who, incredibly, turned down an offer of dictatorship.

daxfohl · 4h ago
I still think they'd come to their senses. I mean, it's somewhat tautological, you can't control something that's smarter than humans.

Though that said, the other problem is capitalism. Investors won't be so face to face with the consequences, but they'll demand their ROI. If the CEO plays it too conservatively, the investors will replace them with someone less cautious.

sorcerer-mar · 4h ago
Which is exactly why your initial belief that it’d be shut down is wrong…

As the risk of catastrophic failure goes up, so too does the promise of untold riches.

daxfohl · 7m ago
Actually after a little more thought, I think both my initial proposition and my follow-up were wrong, as is yours and the previous commenter.

I don't think these leaders are necessarily driven by wealth or power. I don't even necessarily think they're driven by the goal of AGI or ASI. But I also don't think they'll flinch when shit gets real and they've got to press the button from which there's no way back.

I think what drives them is being first. If they were driven by wealth, or power, or even the goal of AGI, then there's room for doubts and second thoughts about what happens when you press the button. If the goal is wealth or power, you have to wonder will you lose wealth or power in the long term by unleashing something you can't comprehend, and is it worth it or should you capitalize on what you already have? If the goal is simply AGI/ASI, once it gets real, you'll be inclined to slow down and ask yourself why that goal and what could go wrong.

But if the drive is simply being first, there's nothing to temper you. If you slow down and question things, somebody else is going to beat you to it. You don't have time to think before flipping the switch, and so the switch will get flipped.

So, so much for my self-consolation that this will never happen. Guess I'll have to fall back to "we're still centuries away from true AGI and everything we're doing now is just a silly facade". We'll see.

WalterBright · 4h ago
Investors run the gamut from cautious to aggressive.
Teever · 3h ago
There are many remarkable leaders throughout history and around the world who have done the best they could for the people they found themselves leading, and who did so for noble reasons, not because they felt they were better than them.

Tecumseh, Malcolm X, Angela Merkel, Cincinnatus, Eisenhower, and Gandhi all come to mind.

George Washington was surely an exceptional leader but he isn't the only one.

WalterBright · 2h ago
I don't know much about your examples, but did any of them turn down an offer of great power?
kevindamm · 34s ago
I'll add to the list the Roman emperor Diocletian, who voluntarily abdicated his throne to retire and become a farmer.
seabass-labrax · 2h ago
> I don't know much about your examples, but did any of them turn down an offer of great power?

Not parent, but I can think of one: Oliver Cromwell. He led the campaign to abolish the monarchy and execute King Charles I in what is now the UK. Predictably, he became the leader of the resulting republic. However, he declined to be crowned king when this was suggested by Parliament, as he objected to it on ideological grounds. He died from malaria the next year and the monarchy was restored anyway (with the son of Charles I as king).

He arguably wasn't as keen on republicanism as a concept as some of his contemporaries were, but it's quite something to turn down an offer to take the office of monarch!

Dr_Birdbrain · 2h ago
George Washington was dubbed “The American Cincinnatus”. Cincinnati was named in honor of George Washington being like Cincinnatus. That should tell you everything you need to know.
zugi · 4h ago
Did the rise of fire, the wheel, the printing press, manufacturing, and microprocessors also give rise to futures without economic rights? I can download a dozen LLMs today and run them on my own machine. AI may well do the opposite, and democratize information and intelligence in currently unimaginable ways. It's far too early to say.
GeoAtreides · 24m ago
>I can download a dozen LLMs today and run them on my own machine

That's because someone, somewhere, invested money in training the models. You are given cooked fish, not fishing rods.

goatlover · 4h ago
There was quite a lot of slavery and conquering empires in between the invention of fire and microprocessors, so yes to an extent. Microprocessors haven't put an end to authoritarian regimes or massive wealth inequalities and the corrupting effect that has on politics, unfortunately.
Lerc · 3h ago
A lot of advances led to bad things, at the same time they led to good things.

Conversely, a lot of very bad things led to good things. Worker rights advanced greatly after the plague: a lot of people died, but that also meant there was a shortage of labour.

Similarly, WWII advanced women's rights, because women were needed to provide vital infrastructure.

Good and bad things have good and bad outcomes; much of what defines something as good or bad is the balance of its outcomes, and it would be foolhardy to classify anything as universally good or bad. Accept the good outcomes of the bad; address the bad outcomes of the good.

dinkumthinkum · 3h ago
I'm curious as to why you think this is a good comparison. I hear it a lot, but I don't think it makes as much sense as its promulgators propose. Did fire, the wheel, or any of these other things threaten the very process of human innovation itself? Do you not see a fundamental difference? People like to say "democratize" all the time, but how democratized would you feel if you and everyone you know couldn't afford a pot to piss in or a window to throw it out of, much less the hardware and electricity to run your local LLM?
apical_dendrite · 3h ago
The printing press led to more than a century of religious wars in Europe, perhaps even deadlier than WW2 on a per-capita basis.

20 years ago we all thought that the Internet would democratize information and promote human rights. It did democratize information, and that has had both positive and negative consequences. Political extremism and social distrust have increased. Some of the institutions that kept society from falling apart, like local news, have been dramatically weakened. Addiction and social disconnection are real problems.

demaga · 59m ago
So do you argue that printing press was a net negative for humanity?
WillAdams · 5h ago
The late Marshall Brain's novella "Manna" touches on this:

https://marshallbrain.com/manna1

The idea of taxing computer sales to fund job re-training for displaced workers was brought up during the Carter administration.

fy20 · 1h ago
I came across this a couple of weeks ago, and it's a good read. I'd recommend it to everyone interested in this topic.

Although it was written somewhat as a warning, I feel Western countries (especially the US) are heading very much towards the terrafoam future. Mass immigration is making it hard to maintain order in some places, and if AI causes large unemployment it will only get worse.

andsoitis · 2h ago
Will there be only one AGI? Or will there be several, all in competition with each other?
jandrewrogers · 2h ago
That depends on how optimized the AGI is for economic growth rate. Too poorly optimized and a more highly optimized fast-follower could eclipse it.

At some point, there will be an AGI with a head start that is also sufficiently close to optimal that no one else can realistically overtake its ability to simultaneously grow and suppress competitors. Many organisms in the biological world adopt the same strategy.

arnaudsm · 2h ago
If they become self-improving, the first one would outpace all the other AI labs and capture all the economic value.
ehnto · 2h ago
There are multiple economic enclaves, even ignoring the explicit borders of nations. China, east asia, Europe, Russia would all operate in their own economies as well as globally.

I also foresee the splitting-off of national internet networks eventually affecting what software you can and cannot use. It's already true, and it'll get worse as nations act to protect their economies and internal advantages.

0xbadcafebee · 5h ago
> Left unchecked, this shift risks exacerbating inequality, eroding democratic agency, and entrenching techno-feudalism

1) Inequality will be exacerbated regardless of AGI. Inequality is a policy decision; AGI is just a tool subject to policy. 2) Democratic agency is only held by elected representatives and civil servants, and their agency is not eroded by the tool of AGI. 3) Techno-feudalism isn't a real thing; it's just a scary word for "capitalism with computers".

> The classical Social Contract-rooted in human labor as the foundation of economic participation-must be renegotiated to prevent mass disenfranchisement.

Maybe go back and bring that up around the invention of the cotton gin, the stocking frame, the engine, or any other technological invention which "disenfranchised" people who had their labor supplanted.

> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance. The time for intervention is now-before intelligence itself becomes the most exclusive form of capital.

1) nobody's going to equitably distribute jack shit if it makes money. They will hoard it the way the powerful have always hoarded money. No government, commune, sewing circle, etc has ever changed that and it won't in the future. 2) The idea that you're going to set tax policy based on something like achieving a social good means you're completely divorced from American politics. 3) We already have decentralized governance, it's called a State. I don't recommend trying to change it.

kiba · 3h ago
Georgism is a prescription on removing unwarranted monopolies and taxing unreproducible privileges.

Tech companies are the same old story. They are monopolies like the rail companies of old. Ditto for whatever passes as AGI. They're just trying to become monopolists.

elcritch · 6h ago
> The Cobb-Douglas production function (Cobb & Douglas, 1928) illustrates how AGI shifts economic power from human labor to autonomous systems (Stiefenhofer & Chen 2024). The wage equations show that as AGI’s productivity rises, the returns to human labor decline. If AGI labor fully substitutes human labor, employment may become obsolete, except in areas where creativity, ethical judgment, or social intelligence provide a comparative advantage (Frey & Osborne, 2017). The power shift function quantifies this transition, demonstrating how AGI labor and capital increasingly control income distribution. If AGI ownership is concentrated, wealth accumulation favors a small elite (Piketty, 2014). This raises concerns about economic agency, as classical theories (e.g., Locke, 1689; Marx, 1867) tie labor to self-ownership and class power.

Wish I had time to study these formulas.
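For anyone else curious, the textbook Cobb-Douglas form the abstract leans on is compact. A sketch only: the paper's specific wage and power-shift equations aren't reproduced here, and the CES-style split of labor into human and AGI components below is my illustration, not the paper's:

```latex
% Textbook Cobb-Douglas production function:
Y = A\,K^{\alpha} L^{\beta}, \qquad \alpha + \beta = 1 \ \text{(constant returns to scale)}

% One illustrative way to model AGI substitution: split labor into
% human labor L_h and AGI labor L_a via a CES aggregate,
L = \bigl((1-\theta)\,L_h^{\rho} + \theta\,L_a^{\rho}\bigr)^{1/\rho}

% As \theta \to 1, the income share attributable to human labor
% goes to zero -- the "power shift" the abstract describes.
```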

We have already seen the precursors of this sort of shift, with ever-rising productivity but stalled wages. As companies (systems) get more sophisticated and efficient, they also seem to decrease the leverage individual human inputs can have.

Currently my thinking leans towards believing that the only way to avoid the worst dystopian scenarios will be for humans to be able to grow their own food and build their own devices and technology. Then it matters less if some ultra-wealthy own everything.

However that also seems pretty close to a form of feudalism.

yupitsme123 · 5h ago
If the wealthy own everything then where are you getting the parts to build your own tech or the land to grow your own food?

In a feudalist system, the rich gave you the ability to subsist in exchange for supporting them militarily. In a new feudalist system, what type of support would the rich demand from the poor?

kelseyfrog · 4h ago
Let's clarify that for a serf, support meant military supply, not swinging a sword - that was reserved for the knightly class. For the great majority of medieval villagers the tie to their lord revolved around getting crops out of the ground.

A serf's week was scheduled around the days he worked the land whose proceeds went to the lord, and the commons from which he subsisted. Transfers of grain and livestock from serf to lord, along with small dues in eggs, wool, or coin, primarily constituted one side of the economic relation between serf and lord. These transfers kept the lord's demesne barns full so he could sustain his household, supply retainers, etc., not to mention fulfill the tithe that sustained the parish.

While peasants occasionally marched, they contributed primarily by financing war rather than fighting it. Their grain, rents, and fees were funneled into supporting horses, mail, and crossbows, rather than the peasants being called to fight themselves.

yupitsme123 · 4h ago
Thanks. Now you've got me curious how this really differs from just paying taxes, just like people have always done in non-feudal systems.
klipt · 4h ago
In feudalism the taxes go into your lord's pockets. In democracy you get to vote on how taxes are spent.
sorcerer-mar · 4h ago
And your landlord was the same entity as your security.
briantakita · 4h ago
In Democracy you get to vote on who gets to vote on how taxes are spent.
SoftTalker · 3h ago
As George Carlin observed, if voting really mattered they wouldn't let you do it.
fanatic2pope · 3h ago
They do indeed spend a lot of time and effort not letting people do it.

https://www.aclu.org/news/civil-liberties/block-the-vote-vot...

PaulDavisThe1st · 2h ago
Carlin was an insufferable cynic who helped contribute to the nihilistic, cynical, defeatist attitude to politics that affects way too many people. The fact that he probably didn't intend to do this doesn't make it any better.
hollerith · 2h ago
Also, everything is a joke with that guy.
archagon · 3h ago
“If your vote didn’t matter, they wouldn’t fight so hard to block it.”
thangalin · 5h ago
My hard sci-fi book dovetails into AGI, economics, agrotech, surveillance states, and a vision of the future that explores a fair number of novel ideas.

Looking for beta readers: username @ gmail.com

BubbleRings · 5h ago
Username@Gmail.com bounced. I’ll be a beta reader.
aspenmayer · 4h ago
I think they meant for you to replace the word username with their username in its place.
plemer · 4h ago
Theirusernameinitsppace@gmail.com bounced too.
aspenmayer · 3h ago
Well you misspelled place, but that word likely isn’t present in their email, so I apologize for the instructions being unclear. I don’t know their email definitively, so I guess you’re on your own, as I don’t think that the issue would be resolved by rephrasing the instructions, but I’m willing to try if you think it would help you.
slantaclaus · 4h ago
Every US voter should have an America app that allows us to vote on stuff like the Estonians do
unlikelytomato · 4h ago
how does this work in practice? is there any buffer in place to deal with the "excitability" of the mob? how does a digital audit trail prevent tampering?
thatcat · 3h ago
Coefficient-based voting control - kind of like a PID controller: reduce the effect of early voters and increase the effect of later voters. The slope of voter volume in response to an event determines the reactivity coefficient. That might dampen reactivity, and it creates an incentive for people not to feel it's pointless to vote after a certain margin is reached.
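A minimal sketch of the damping part of that idea (a hypothetical scheme, not anything an existing e-voting system does; the weight range and the linear ramp are my assumptions, and the event-driven reactivity coefficient is left out):

```python
def vote_weight(position, total, min_w=0.5, max_w=1.0):
    """Weight for the ballot at `position` (0-indexed) out of `total`.

    Early ballots get min_w, the last ballot gets max_w, with a
    linear ramp in between -- damping early, reactive surges while
    keeping late votes meaningful.
    """
    if total <= 1:
        return max_w
    frac = position / (total - 1)
    return min_w + (max_w - min_w) * frac


def tally(votes, min_w=0.5, max_w=1.0):
    """Tally `votes`, a list of booleans in casting order (True = yes).

    Returns (weighted_yes, weighted_no).
    """
    yes = no = 0.0
    for i, v in enumerate(votes):
        w = vote_weight(i, len(votes), min_w, max_w)
        if v:
            yes += w
        else:
            no += w
    return yes, no
```

With four ballots the weights ramp 0.5, 0.667, 0.833, 1.0, so a late wave of voters can still flip an early surge, which is the incentive the comment describes.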
smitty1e · 6h ago
> This paper calls for a redefined economic framework that ensures AGI-driven prosperity is equitably distributed through mechanisms such as universal AI dividends, progressive taxation, and decentralized governance.

Sincerely curious if there are working historical analogues of these approaches.

makeitdouble · 5h ago
Not a clean comparison, but resource-driven states could be tackling the same kind of issues: a small minority is reaping the benefit of a huge resource (e.g. petroleum) that they didn't create themselves and that is extracted through mostly automated processes.

From what we're seeing, the whole society has to be rebalanced accordingly; it can entail a kind of UBI, second and third classes of citizens depending on where you stand in the chain, etc.

Or as Norway does, fully go the other direction and limit the impact by artificially limiting the fallout.

yupitsme123 · 2h ago
Can you explain a little more about Norway?
tehjoker · 5h ago
Communism with "cybernetics" (computer-driven economic planning) is the appropriate model if you take this to its logical conclusion. Fortunately, much of our economy is already planned this way (consider banks, Amazon, Walmart, shipping, etc.); it's just controlled for the benefit of a small elite.

You have to ask, if we have AGI that's smarter than humans helping us plan the economy, why do we need an upper class? Aren't they completely superfluous?

yupitsme123 · 5h ago
Sure, maybe the Grand Algorithm could do what the market currently does and decide how to distribute surplus wealth. It could decide how much money you deserve each month, how big of a house, how desirable of a partner. But it still needs values to guide it. Is the idea for everyone to be equal? Are certain kinds of people supposed to have less than others? Should people have one spouse or several?

Historically the elites aren't just those who have lots of money or property. They're also those who get to decide and enforce the rules for society.

tehjoker · 4h ago
The computers serve us, we wouldn't completely give up control, that's not freedom either, that's slavery to a machine instead of a man. We would have more democratic control of society by the masses instead of the managed bourgeois democracy we have now.

It's not necessary for everyone to be exactly equal, it is necessary for inequalities to be seen as legitimate (meaning the person getting more is performing what is obviously a service to society). Legislators should be limited to the average working man's wage. Democratic consultations should happen in workplaces, in schools, all the way up the chain not just in elections. We have the forms of this right now, but basically the people get ignored at each step because legislators serve the interests of the propertied.

nine_k · 5h ago
The AGI, given it has some agency, becomes the upper class. The question is, why would the AGI care about humans at all, especially given the assumption that it's largely smarter than humans? Humans can become superfluous.
AnimalMuppet · 5h ago
Well, aren't the working class also superfluous, at least once the AGI gets enough automation in place?

So it would depend on which class the AGI decided to side with. And if you think you can pre-program that, I think you underestimate what it means to be a general intelligence...

tehjoker · 4h ago
I suspect that even with a powerful intelligence directing things, it will still be cheaper to have humans doing various tasks. Robots need rare-earth metals; humans run on renewable resources and are intelligent and self-contained, without needing a network to make lots of decisions...
warabe · 5h ago
It looks really interesting.

I am a big fan of Yanis's book "Technofeudalism: What Killed Capitalism", which lacks quantitative evidence to support his theory. I would like to see this kind of research or empirical studies.

pk-protect-ai · 5h ago
Looking at the big, ugly bill, there will be no way to get progressive taxation or other kinds of social improvements.
goatlover · 4h ago
David Sacks, Trump's "AI & Crypto czar", said UBI isn't going to happen. So that's the position of the current party in power, unsurprisingly.
29athrowaway · 4h ago
I predicted this long ago. Technology amplifies what 1 human can do. Absolute power corrupts absolutely.
freakyasada · 4h ago
Blue pill and chill for me.
ActorNightly · 4h ago
If you are going to write anything about AGI, you should really prove that it's actually possible in the first place, because that question does not have a definite yes.
mitthrowaway2 · 4h ago
For most of us non-dualists, the human brain is an existence proof. Doesn't mean transformers and LLMs are the right implementation, but it's not really a question of proving it's possible when it's clearly supported by the fundamental operations available in the universe. So it's okay to skip to the part of the conversation you want to write about.
sponnath · 2h ago
The human brain demonstrates that human intelligence is possible, but it does not guarantee that artificial intelligence with the same characteristics can be created.
habinero · 3h ago
This is like saying "planets exist, therefore it's possible to build a planet" and then breathlessly writing a ton about how amazing planet engineering is and how it'll totally change the world real estate market by 2030.

And the rest of us are looking at a bunch of startups playing in the dirt and going "uh huh".

mitthrowaway2 · 3h ago
I think it's more like saying "Stars exist, therefore nuclear fusion is possible" and then breathlessly writing a ton about how amazing fusion power will be. Which is a fine thing to write about even if it's forever 20 years away. This paper does not claim AGI will be attained by 2030. There are people spending their careers on achieving exactly this, wouldn't they be interested on a thoughtful take about what happens after they succeed?
dinkumthinkum · 3h ago
The human brain is an existence proof? I don't think that phrase means what you think it means. I don't think "dualist" or "non-dualist" mean what you think they mean either.

When people talk about AGI, they are clearly talking about something the human research community is actually working towards: computing equivalent to a Turing machine, on hardware architectures very similar to what has currently been conceived and developed. Do you have any evidence that the human brain works in such a way? Do you really think that you think and solve problems in that way?

Consider simple physics. How much energy is needed, and heat produced, to train and run these models to solve simple problems? How much of the same is needed and produced when you solve a sheet of calculus problems, solve a riddle, or write a non-trivial program? Couldn't you realistically do those things with minimal food and water for a week, if needed? Does it actually seem like the human brain is at all like these things and not fundamentally different?

I think this is even more naive than proposing "Life exists in the universe, so of course we can create it in a lab by mixing a few solutions." I think the latter is far likelier and more conceivable, and even that is still quite an open question.
subarctic · 4h ago
Will it ever have a definite yes? I feel like it's such a vague term.
owebmaster · 4h ago
Isn't Google AGI? There is no way anything human could shutdown Google if it is already going rogue.
bix6 · 6h ago
So economics becomes intelligence-driven - which I don't really understand, since AGI is more knowledgeable than all of us combined - and we expect the AGI lords to just pay everyone a UBI? This seems like an absolute fantasy given the tax cuts passed two days ago. And regulating it as a public good, when antitrust has no teeth? I hope there are other ideas out there, because I don't see this gaining political momentum given that politics is driven by dollars.