In May, hundreds of leading figures in artificial intelligence issued a joint statement warning that the technology they had helped create poses an existential threat to humanity.
“Mitigating the risk of extinction from AI should be a global priority,” it said, “alongside other societal-scale risks such as pandemics and nuclear war.”