Regulating artificial intelligence is a 4D challenge

The writer is founder of Sifted, an FT-backed site about European start-ups

The leaders of the G7 nations addressed plenty of global concerns over sake-steamed Nomi oysters in Hiroshima last weekend: war in Ukraine, economic resilience, clean energy and food security among others. But they also threw one extra item into their parting swag bag of good intentions: the promotion of inclusive and trustworthy artificial intelligence. 

While recognising AI’s innovative potential, the leaders worried about the damage it might cause to public safety and human rights. Launching the Hiroshima AI process, the G7 commissioned a working group to analyse the impact of generative AI models, such as ChatGPT, and prime the leaders’ discussions by the end of this year.

The initial challenges will be how best to define AI, categorise its dangers and frame an appropriate response. Is regulation best left to existing national agencies? Or is the technology so consequential that it demands new international institutions? Do we need the modern-day equivalent of the International Atomic Energy Agency, founded in 1957 to promote the peaceful development of nuclear technology and deter its military use?

One can debate how effectively the UN body has fulfilled that mission. Besides, nuclear technology involves radioactive material and massive infrastructure that is physically easy to spot. AI, on the other hand, is comparatively cheap, invisible, pervasive and has infinite use cases. At the very least, it presents a four-dimensional challenge that must be addressed in more flexible ways. 

The first dimension is discrimination. Machine learning systems are designed to discriminate: to spot outliers in patterns. That’s useful for detecting cancerous cells in radiology scans. But it’s bad if black box systems trained on flawed data sets are used to hire and fire workers or authorise bank loans. Bias in, bias out, as they say. Banning these systems in unacceptably high-risk areas, as the EU’s forthcoming AI Act proposes, is one strict, precautionary approach. Creating independent, expert auditors might be a more adaptable way to go.

Second, disinformation. As the academic expert Gary Marcus warned the US Congress last week, generative AI might endanger democracy itself. Such models can generate plausible lies and counterfeit humans at lightning speed and industrial scale.

The onus should be on the technology companies themselves to watermark content and minimise…
