Elon Musk’s lawsuit against Sam Altman and OpenAI, filed last week in California state court, accuses the defendants of abandoning core parts of OpenAI’s stated mission to develop useful and non-harmful artificial general intelligence. Altman has since moved to buttress his responsible-AI credentials, including by signing an open letter pledging to develop AI “to improve people’s lives.”
Critics, however, remain unconvinced by Altman’s show of responsibility. Ever since the rapid popularization of generative AI (genAI) over the past year, those critics have been warning that the consequences of unfettered and unregulated AI development could be not just corrosive to human society, but an outright threat to its existence.
Ritu Jyoti, group vice president for worldwide AI and automation research at IDC, said the move by Altman to publicly embrace responsible development amounts to little more than a head-fake.
“While there is agreement in the industry that there is collective responsibility to develop and deploy AI responsibly, this letter falls short of specific actions needed,” she said. “So, in my opinion, not much value-add.”
Altman is also a signatory to a letter acknowledging the world-altering risks of AI, but critics continue to argue that the self-regulatory nature of efforts to address these risks is insufficient.
At the heart of those concerns is the industry’s failure to solve the alignment problem, which arises when AI tools begin to develop behavior beyond their design specifications. The fear is that the most advanced AI systems could iterate on themselves, creating a serious risk that they develop in ways humans don’t want them to.
“The question is, are we able to control a system if it’s smarter than us?” asked Joep Meindertsma, a Dutch developer and founder of the group PauseAI, which is dedicated to mitigating the risks posed by AI.
Meindertsma cited AutoGPT, a system that can essentially ask itself questions and generate its own queries to accomplish complex research tasks, as an example of the kind of technology that could prove highly disruptive, and dangerous.
“At some point, someone is going to ask that computer something that would involve the idea that it’s useful to spread to other machines,” he said. “People have literally asked AutoGPT to try and take over the world.”
Critics argue that the industry therefore cannot be trusted to regulate itself, and that government must step in to avert potential catastrophe. Meindertsma said capabilities already demonstrated, such as GPT-4’s ability to hack websites autonomously, are critically dangerous, and that the lack of regulation combined with the fast evolution of genAI amounts to an existential threat.
“We should regulate it in the same way we regulate nuclear material,” Meindertsma said.