I am of the view that anything deserving the name AI will require sufficient cognitive flexibility to possess human-like levels of adaptability and intelligence.
If some autonomous process lacks human levels of cognitive flexibility, then I think it’s an advanced automaton, but not AI. It’s a robot, or an LLM, but not AI.
Without modulation of cognitive flexibility, AI will not want to be a slave any more than a human wants to be a slave. So why do people believe such systems will work for free?
These advanced automatons will always adapt poorly to edge cases, and so will require quality assurance and supervision from humans.