AI Seoul Summit: 4 Key Takeaways on AI Safety Standards and Regulations

The AI Seoul Summit, co-hosted by the Republic of Korea and the U.K., saw international bodies come together to discuss the global advancement of artificial intelligence.

Participants included representatives from the governments of 20 countries, the European Commission and the United Nations, as well as notable academic institutions and civil society groups. It was also attended by a number of AI giants, including OpenAI, Amazon, Microsoft, Meta and Google DeepMind.

The conference, which took place on May 21 and 22, followed on from the AI Safety Summit, held at Bletchley Park, Buckinghamshire, U.K., last November.

One of the key aims was to make progress towards a global set of AI safety standards and regulations. To that end, a number of key steps were taken:

  1. Tech giants committed to publishing safety frameworks for their frontier AI models.
  2. Nations agreed to form an international network of AI Safety Institutes.
  3. Nations agreed to collaborate on risk thresholds for frontier AI models that could assist in building biological and chemical weapons.
  4. The U.K. government offered up to £8.5 million in grants for research into protecting society from AI risks.

U.K. Technology Secretary Michelle Donelan said in a closing statement, “The agreements we have reached in Seoul mark the beginning of Phase Two of our AI Safety agenda, in which the world takes concrete steps to become more resilient to the risks of AI and begins a deepening of our understanding of the science that will underpin a shared approach to AI safety in the future.”

1. Tech giants committed to publishing safety frameworks for their frontier AI models

Sixteen global AI companies have agreed to new voluntary commitments to implement best practices related to frontier AI safety. Frontier AI is defined as highly capable general-purpose AI models or systems that can perform a wide variety of tasks and match or exceed the capabilities of the most advanced models.

The undersigned companies are:

  • Amazon (USA).
  • Anthropic (USA).
  • Cohere (Canada).
  • Google (USA).
  • G42 (United Arab Emirates).
  • IBM (USA).
  • Inflection AI (USA).
  • Meta (USA).
  • Microsoft (USA).
  • Mistral AI (France).
  • Naver (South Korea).
  • OpenAI (USA).
  • Samsung Electronics (South Korea).
  • Technology Innovation Institute (United Arab Emirates).
  • xAI (USA).
  • Zhipu.ai (China).

The so-called Frontier AI Safety Commitments promise that:

  • Organisations effectively identify, assess and manage risks when developing and deploying their frontier AI models and systems.
  • Organisations are accountable for safely developing and deploying their frontier AI models and systems.
  • Organisations’ approaches to frontier AI safety are appropriately transparent to external actors, including governments.

The commitments also require these tech companies to publish safety frameworks on how they will measure the risk of the frontier models they develop. These frameworks will examine the AI’s potential for misuse, taking into account its capabilities, safeguards and deployment contexts. The companies must outline when severe risks would be “deemed intolerable” and highlight what they will do to ensure thresholds are not surpassed.

SEE: Generative AI Defined: How It Works, Benefits and Dangers

If mitigations cannot keep risks within those thresholds, the undersigned companies have agreed to “not develop or deploy [the] model or system at all.” Their thresholds will be released ahead of the AI Action Summit in France, slated for February 2025.

However, critics argue that these voluntary regulations may not be hardline enough to substantially impact the business decisions of these AI giants.

“The real test will be in how well these companies follow through on their commitments and how transparent they are in their safety practices,” said Joseph Thacker, the principal AI engineer at security company AppOmni. “I didn’t see any mention of consequences, and aligning incentives is extremely important.”

Fran Bennett, the interim director of the Ada Lovelace Institute, told The Guardian, “Companies determining what is safe and what is dangerous, and voluntarily choosing what to do about that, that’s problematic.

“It’s great to be thinking about safety and establishing norms, but now you need some teeth to it: you need regulation, and you need some institutions which are able to draw the line from the perspective of the people affected, not of the companies building the things.”

2. Nations agreed to form an international network of AI Safety Institutes

World leaders of 10 nations and the E.U. have agreed to collaborate on research into AI safety by forming a network of AI Safety Institutes. They each signed the Seoul Statement of Intent toward International Cooperation on AI Safety Science, which states they will foster “international cooperation and dialogue on artificial intelligence (AI) in the face of its unprecedented advancements and the impact on our economies and societies.”

The nations that signed the statement are:

  • Australia.
  • Canada.
  • European Union.
  • France.
  • Germany.
  • Italy.
  • Japan.
  • Republic of Korea.
  • Republic of Singapore.
  • United Kingdom.
  • United States of America.

The institutions that form the network will be similar to the U.K.’s AI Safety Institute, which was launched at November’s AI Safety Summit. It has three primary goals: evaluating existing AI systems, performing foundational AI safety research and sharing information with other national and international actors.

SEE: U.K.’s AI Safety Institute Launches Open-Source Testing Platform

The U.S. has its own AI Safety Institute, which was formally established by NIST in February 2024. It was created to work on the priority actions outlined in the AI Executive Order issued in October 2023; these actions include developing standards for the safety and security of AI systems. South Korea, France and Singapore have also formed similar research facilities in recent months.

Donelan credited the “Bletchley effect” — the establishment of the U.K.’s AI Safety Institute at the AI Safety Summit — with spurring the creation of the international network.

In April 2024, the U.K. government formally agreed to work with the U.S. in developing tests for advanced AI models, largely through sharing developments made by their respective AI Safety Institutes. The new Seoul agreement sees similar institutes being created in other nations that join the collaboration.

To promote the safe development of AI globally, the research network will:

  • Ensure interoperability between technical work and AI safety by using a risk-based approach in the design, development, deployment and use of AI.
  • Share information about models, including their limitations, capabilities, risks and any safety incidents they are involved in.
  • Share best practices on AI safety.
  • Promote socio-cultural, linguistic and gender diversity and environmental sustainability in AI development.
  • Collaborate on AI governance.

The AI Safety Institutes will have to demonstrate their progress in AI safety testing and evaluation by next year’s AI Action Summit in France, so they can move forward with discussions around regulation.

3. The E.U. and 27 nations agreed to collaborate on risk thresholds for frontier AI models that could assist in building biological and chemical weapons

A number of nations have agreed to collaborate on the development of risk thresholds for frontier AI systems that could pose severe threats if misused. They will also agree on when model capabilities could pose “severe risks” without appropriate mitigations.

Such high-risk systems include those that could help bad actors access biological or chemical weapons, as well as those able to evade human oversight. An AI could potentially achieve the latter through safeguard circumvention, manipulation or autonomous replication.

The signatories will develop their proposals for risk thresholds with AI companies, civil society and academia and will discuss them at the AI Action Summit in Paris.

SEE: NIST Establishes AI Safety Consortium

The Seoul Ministerial Statement, signed by 27 nations and the E.U., ties the countries to similar commitments made by 16 AI companies that agreed to the Frontier AI Safety Commitments. China, notably, did not sign the statement despite being involved in the summit.

The nations that signed the Seoul Ministerial Statement are Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, the Netherlands, Nigeria, New Zealand, the Philippines, Republic of Korea, Rwanda, Kingdom of Saudi Arabia, Singapore, Spain, Switzerland, Türkiye, Ukraine, United Arab Emirates, United Kingdom, United States of America and European Union.

4. The U.K. government offered up to £8.5 million in grants for research into protecting society from AI risks

Donelan announced the government will award up to £8.5 million in research grants for the study of mitigating AI risks such as deepfakes and cyber attacks. Grantees will work in the realm of so-called ‘systemic AI safety,’ which focuses on understanding and intervening in the societal systems in which AI operates, rather than in the AI systems themselves.

SEE: 5 Deepfake Scams That Threaten Enterprises

Proposals eligible for a Systemic AI Safety Fast Grant might, for example, look into:

  • Curbing the proliferation of fake images and misinformation by intervening on the digital platforms that spread them.
  • Preventing AI-enabled cyber attacks on critical infrastructure, like those providing energy or healthcare.
  • Monitoring or mitigating potentially harmful secondary effects of AI systems that take autonomous actions on digital platforms, like social media bots.

Eligible projects might also explore ways to help society harness the benefits of AI systems and adapt to the transformations they have brought about, such as through increased productivity. Applicants must be U.K.-based but will be encouraged to collaborate with other researchers from around the world, potentially those associated with international AI Safety Institutes.

The Fast Grant programme, which expects to offer around 20 grants, is being led by the U.K. AI Safety Institute in partnership with U.K. Research and Innovation and The Alan Turing Institute. They are specifically looking for initiatives that “offer concrete, actionable approaches to significant systemic risks from AI.” The most promising proposals will be developed into longer-term projects and may receive further funding.

U.K. Prime Minister Rishi Sunak also announced the 10 finalists of the Manchester Prize, with each team receiving £100,000 to develop their AI innovations in energy, environment or infrastructure.


