AI Seoul Summit: Global Efforts to Tame the Risks of Artificial Intelligence

The AI Seoul Summit, co-hosted by South Korea and the UK, aims to build upon the initial AI safety meeting held in the UK last year. It will bring together representatives from governments, tech companies, and organizations to address the risks and potential regulations for artificial intelligence.

Following the inaugural AI safety meeting in the UK last year, South Korea is set to host a mini-summit this week to discuss the risks and regulation of artificial intelligence (AI). The gathering in Seoul aims to continue the work started at the UK meeting, focusing on reining in the threats posed by cutting-edge AI systems.

The Seoul summit is part of a series of global efforts to establish guardrails for AI, a rapidly advancing technology that has the potential to transform society. However, it has also raised concerns about potential risks, from algorithmic bias in search results to existential threats to humanity.

At the UK summit in November, held at Bletchley Park, the former secret wartime codebreaking base, researchers, government leaders, tech executives, and civil society groups engaged in closed-door talks. Attendees included Tesla CEO Elon Musk and OpenAI CEO Sam Altman.

Delegates from over two dozen countries, including the US and China, signed the Bletchley Declaration, pledging to collaborate in addressing the potential "catastrophic" risks posed by AI advancements. In March, the UN General Assembly approved its first resolution on AI, supporting international efforts to ensure the technology benefits all nations, respects human rights, and is "safe, secure, and trustworthy."

Earlier this month, the US and China held their first high-level talks on AI in Geneva, discussing risk management and shared standards. US officials expressed concerns about China's alleged "misuse of AI," while Chinese representatives criticized US "restrictions and pressure" on the technology.

The Seoul summit, co-hosted by South Korea and the UK, will take place on May 21-22. On the first day, South Korean President Yoon Suk Yeol and UK Prime Minister Rishi Sunak will meet virtually, and leading AI companies will present updates on their efforts to ensure the safety of their AI models.

On day two, digital ministers will gather for an in-person meeting hosted by South Korean Science Minister Lee Jong-ho and Britain's Technology Secretary Michelle Donelan. The focus has expanded beyond extreme risks to include potential negative impacts of AI on energy use, workers, and the spread of misinformation.

Despite efforts like the Bletchley Declaration, progress in AI safety regulation has been slow. According to Lee Seong-yeob of Seoul's Korea University, reaching agreements among participants with different interests and levels of AI development will be challenging.

Nevertheless, developers of powerful AI systems are forming alliances to establish their own safety standards. Facebook parent Meta Platforms and Amazon have joined the Frontier Model Forum, a group founded by Anthropic, Google, Microsoft, and OpenAI.

An expert panel's interim report on AI safety identified a range of risks, including malicious use of the technology for fraud, disinformation, and the development of bioweapons. Malfunctioning AI systems could also perpetuate bias in areas such as healthcare and job recruitment, while AI-driven automation poses systemic risks to the labor market.

South Korea aims to establish itself as a leader in AI governance through the Seoul summit. However, critics question whether the country's AI infrastructure is advanced enough for such a role.

The Seoul summit is a significant step in the ongoing global effort to manage the risks of AI. It remains to be seen how effective these efforts will be in ensuring the safe and beneficial development of this transformative technology.