Opinion | The views expressed are the author's own and do not necessarily reflect those of the publisher.
A former OpenAI governance researcher puts the probability that AI will destroy or catastrophically harm humanity at around 70 percent, odds no one would accept for any major life event.
Yet companies like OpenAI, he argues, are aggressively pursuing advanced AI capabilities without a commensurate focus on the safety measures needed to mitigate those risks.
The researcher, Daniel Kokotajlo, joined OpenAI in 2022 and grew convinced that human-level AI could arrive by 2027 and that it was likely to cause catastrophic harm.
He urged OpenAI's leadership to prioritize safeguards over further advancing the technology, but felt his warnings went unheeded.
“OpenAI is really excited about building AGI,” Kokotajlo said, “and they are recklessly racing to be the first there.”
Fed up with what he saw as a lack of responsible action on safety, he resigned from OpenAI in April 2024, concerned that the company was proceeding without adequate precautions for managing the risks posed by advanced AI.
Kokotajlo said he had “lost confidence that OpenAI will behave responsibly.”
“The world isn’t ready, and we aren’t ready,” he wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”