No, we don't. OpenAI would probably like that, because they basically manufacture "leading AI experts" who can manipulate the narrative for them.
Ironically, all we need to defend against AI is the rule of law. If you use AI to make a product that can threaten someone's life, you deserve to be held accountable. That alone is enough to enforce the same best-practice testing and safety standards we enjoy in other domains, like roller coasters and highways.