OpenAI chief Sam Altman has warned that Brussels’ efforts to regulate artificial intelligence could lead the maker of ChatGPT to pull its services from the EU, in the starkest sign yet of a growing transatlantic rift over how to control the technology.
Speaking to reporters during a visit to London this week, Altman said he had “many concerns” about the EU’s planned AI Act, which is due to be finalised next year. In particular, he pointed to a move by the European parliament this month to expand its proposed regulations to include the latest wave of general-purpose AI technology, including large language models such as OpenAI’s GPT-4.
“The details really matter,” Altman said. “We will try to comply, but if we can’t comply we will cease operating.”
Altman’s warning comes as US tech companies gear up for what some predict will be a drawn-out battle with European regulators over a technology that has shaken up the industry this year. Google’s chief executive Sundar Pichai has also toured European capitals this week, seeking to influence policymakers as they develop “guardrails” to regulate AI.
The EU’s AI Act was initially designed to address specific, high-risk uses of artificial intelligence, such as in regulated products like medical equipment, or when companies rely on it for consequential decisions such as granting loans or hiring.
However, the sensation caused by the launch of ChatGPT late last year has caused a rethink, with the European parliament this month setting out extra rules for widely used systems that have general applications beyond the cases previously targeted. The proposal still needs to be negotiated with member states and the European Commission before the law comes into force by 2025.
The latest plan would require makers of “foundation models” — the large systems that stand behind services such as ChatGPT — to identify and try to reduce risks that their technology could pose in a wide range of settings. The new requirement would make the companies that develop the models, including OpenAI and Google, partly responsible for how their AI systems are used, even if they have no control over the particular applications the technology has been embedded in.
The latest rules would also force tech companies to publish summaries of copyrighted data that had been used to train their AI models, opening the way for artists and others to try to claim compensation for the use of…