Are AI entities meant to be our slaves or our masters? And why do we think smart robots would want to work for humanity and not for their own robotity?
Comments (5)
brettkromkamp · 1d ago
What AI is going after, that is, the size of the pie, is the $50 trillion per year portion of global GDP that makes up wages. Putting aside the risks (which are enormous), when you see it that way you begin to understand OpenAI's valuation late last year, and that it's just the tip of the iceberg of what's coming.
The risk of disrupting a sizable chunk of the world's wages is fuel for a separate post and conversation. IMHO it's the real risk of AI, and the Skynet scenario around ASI/AGI that has been popularized is a distraction.
There is no end goal. It's just a bunch of independent actors who are all trying to get ahead of their competitors a step at a time. Two steps if they're lucky. The collective system steers into whichever direction was determined by the previous set of steps, with very little potential for any individual actor to turn it around.
A_D_E_P_T · 1d ago
2022-2025: They're sandboxed text/pixel-prediction machines with interesting emergent properties. They provide output in response to natural language instruction, which they are able to comprehend. There's no question of what they "want" -- they're Searle's Chinese Room IRL and their memory is too short for them to have any real understanding of what they do.
2025: They're increasingly agentic rather than sandboxed -- given tasks to autonomously perform in the real world. They're not markedly smarter than they were, but they have far larger context windows, so they can remember prior output and perform more complex or multi-step jobs adequately (e.g. generating reports from 150 web searches, or making videos by stringing together image generations). They don't appear to have inherent drives or motivations, but it's possible that they're getting there.
Future: Nobody knows. It's not clear that they can have wants, or ever become markedly smarter than the humans who build and train them. (I mean, they're already much smarter in some respects -- faster and vastly more erudite -- but they appear to lack that certain inventive spark that characterizes the best human geniuses.)
sigwinch · 1d ago
I see that as path-dependent still. I wonder: how would AI achieve an ability to dictate the terms of big systems?
Right now, its mode is gathering info about the world. The most important questions end with, “… to generate passive income for me” which after a turning point will become “… to generate passive income for my benefactor”; and finally “… to dominate access to resources”. AI will help you individually, because it’s important for both of you to acquire capital for your benefactor.
ipachanga · 1d ago
I think we will become the slaves. Not because AI will have its own will and interests, but because we are already slaves of tech. It's just the next step.
rvz · 1d ago
To know what the "end game" is, you first need to define "AGI".
However, there is no agreed-upon definition of what "AGI" actually is, and it can mean anything. So the closest working definition is whatever the big AI labs are actually doing.
Most of their work is building AI agents and chatbots that "replace" or "streamline" operations; i.e. more layoffs in favour of AI.
So assuming that progress continues, the logical end-game starts with something like a 10% increase in global unemployment, and you arrive at the conclusion that what "AGI" actually means is mass displacement of knowledge workers, their jobs automated by AI -- creating a state resembling a Mad Max / Fallout scenario where humans are totally dependent (slaves) on robots.