What is the goal of arguing for something that cannot be achieved?
Sigh.
1. Asimov's Laws are a fictional device: the stories exist precisely to show why hard-coding robot behavior with a few arbitrary rules goes wrong.
2. Humans can't follow rules like Asimov's Laws either, yet we run the world's militaries and infrastructure well enough.
Not an AI advocate at all but this is exactly the surface-level conjecture I expect outta lesswrong these days. Oofa doofa.