Thursday, 25 April 2024
Business News

OpenAI CEO Sam Altman warns that other A.I. developers working on ChatGPT-like tools won’t put on safety limits—and the clock is ticking


OpenAI CEO Sam Altman believes artificial intelligence has incredible upside for society, but he also worries about how bad actors will use the technology. 

In an ABC News interview this week, he warned “there will be other people who don’t put some of the safety limits that we put on.” 

OpenAI released its A.I. chatbot ChatGPT to the public in late November, and this week it unveiled a more capable successor called GPT-4.

Other companies are racing to offer ChatGPT-like tools, giving OpenAI plenty of competition to worry about, despite the advantage of having Microsoft as a big investor. 

“It’s competitive out there,” OpenAI cofounder and chief scientist Ilya Sutskever told The Verge in an interview published this week. “GPT-4 is not easy to develop…there are many many companies who want to do the same thing, so from a competitive side, you can see this as a maturation of the field.”

While Sutskever was explaining OpenAI’s decision to reveal little about GPT-4’s inner workings, causing many to question whether the name “OpenAI” still made sense, his comments were also an acknowledgment of the slew of rivals nipping at OpenAI’s heels. 

Some of those rivals might be far less concerned than OpenAI is about putting guardrails on their equivalents of ChatGPT and GPT-4, Altman suggested.

“A thing that I do worry about is … we’re not going to be the only creator of this technology,” he said. “There will be other people who don’t put some of the safety limits that we put on it. Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”

OpenAI this week shared a “system card” document that outlines how its testers purposefully tried to get GPT-4 to offer up harmful information, such as how to make a dangerous chemical using basic ingredients and kitchen supplies, and how the company fixed the issues before the product’s launch.

Lest anyone doubt the malicious intent of bad actors turning to A.I., phone scammers are now using voice-cloning A.I. tools to sound like people’s relatives in desperate need of financial help—and successfully extracting money from victims.

“I’m particularly worried that these models could be used for large-scale disinformation,” Altman said. “Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.”

Considering he leads…

Read the full original article at Fortune…
