He’s confident that trait could be built into AI systems—but not certain.
“I think so,” Altman said when asked the question during an interview with Harvard Business School senior associate dean Debora Spar.
The question of an AI uprising was once reserved purely for the science fiction of Isaac Asimov or the action films of James Cameron. But since the rise of AI, it has become, if not a hot-button issue, then at least a topic of debate that warrants genuine consideration. What would once have been dismissed as the musings of a crank is now a genuine regulatory question.
OpenAI’s relationship with the government has been “fairly constructive,” Altman said. He added that a project as far-reaching and vast as developing AI should have been a government project.
“In a well-functioning society this would be a government project,” Altman said. “Given that it’s not happening, I think it’s better that it’s happening this way as an American project.”
The federal government has yet to make significant progress on AI safety legislation. An effort in California sought to pass a law that would have held AI developers liable for catastrophic events, such as their systems being used to develop weapons of mass destruction or to attack critical infrastructure. That bill passed the legislature but was vetoed by California Governor Gavin Newsom.
Some of the preeminent figures in AI have warned that ensuring it is fully aligned with the good of mankind is a critical question. Nobel laureate Geoffrey Hinton, known as the Godfather of AI, said he couldn't "see a path that guarantees safety." Tesla CEO Elon Musk has regularly warned AI could lead to humanity's extinction. Musk was instrumental in the founding of OpenAI, providing the nonprofit with significant funding at its outset, funding for which Altman remains "grateful," despite the fact that Musk is suing him.
Multiple organizations dedicated solely to this question have cropped up in recent years, among them the nonprofit Alignment Research Center and Safe Superintelligence, a startup founded by former OpenAI chief scientist Ilya Sutskever.
OpenAI did not respond to a request for comment.
AI as it is currently designed is well suited to alignment, Altman said. Because of that, he argued, it would be easier than it might seem to ensure AI does not harm humanity.
“One of the things that has worked surprisingly well…