
AI Summit in Seoul
Why AI Regulation Is So Difficult

South Korean President Yoon Suk Yeol, center, attends the virtual AI Summit in Seoul at the Blue House in Seoul, South Korea.

© picture alliance / ASSOCIATED PRESS | Uncredited

At the AI Summit in Seoul, an interim version of the international report assessing the risks of AI was presented. The paper, produced by 75 experts, shows that the development of AI is uncertain, even in the near term. This poses a major challenge for regulation.

International efforts to assess the risks of artificial intelligence and to regulate the technology are gaining momentum. At the virtual AI Summit in Seoul on May 20 and 21, more than two dozen countries agreed to develop common risk thresholds for the development and use of AI. Germany and the European Union were among the signatories of the declaration.

An interim version of the first "Scientific Report on the Safety of Advanced AI" was presented in Seoul. It is the first time that an international, independent committee has provided a detailed assessment of the potential dangers posed by the new technology.

Agreement on Risks, Disagreement on Developments

The report could later lay the scientific foundation for multilateral regulation and monitoring of artificial intelligence. At the previous AI summit in the United Kingdom, the first of its kind, 30 states as well as the UN and the EU decided to set up the scientific committee. Seventy-five renowned experts are working on the mammoth project.

The first interim report shows that broad agreement exists only on the description of the possible risks. The experts name known dangers such as disinformation through deepfakes, mass unemployment, and racist or ideologically biased outputs. They also do not rule out a loss of human control over artificial intelligence.

However, there is disagreement about how likely future risks are to materialize, and about when and how they could become acute. In many cases, the only consensus is that a universal consensus is far away. "The future of general-purpose AI is uncertain," the report says, "with a wide range of trajectories appearing possible even in the near future, including both very positive and very negative outcomes."

For example, there are major differences of opinion on how quickly the technology will continue to develop. Will continued scaling of computing power and refinement of existing techniques be enough to make rapid progress? Or are new scientific breakthroughs necessary to significantly improve the capabilities of general-purpose AI?

Recent progress has come mainly from increased use of resources, so-called scaling, rather than from scientific breakthroughs. The researchers project that, if current trends persist, some artificial intelligence models will be trained with 40 to 100 times as much computing power by the end of 2026 as in 2023.
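As a rough illustration (our reading, not a calculation from the report itself): treated as compound growth over the roughly three years from 2023 to the end of 2026, a 40- to 100-fold increase implies multiplying training compute by about 3.4 to 4.6 each year. A minimal back-of-the-envelope sketch in Python:

# Back-of-the-envelope illustration (not from the report): which constant
# annual growth factor compounds to the projected 40x-100x compute increase
# between 2023 and the end of 2026, read here as roughly three years?
def implied_annual_factor(total_factor: float, years: float) -> float:
    """Constant per-year multiplier that compounds to total_factor over years."""
    return total_factor ** (1 / years)

for total in (40, 100):
    print(f"{total}x over 3 years = about {implied_annual_factor(total, 3):.1f}x per year")
# Prints:
# 40x over 3 years = about 3.4x per year
# 100x over 3 years = about 4.6x per year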

AI Summit
© picture alliance / ASSOCIATED PRESS | Ahn Young-joon

Scientists Cannot Explain Some AI Decisions

However, it is questionable whether the necessary resources, such as energy and chips, will even be available for such future increases. It is also disputed whether merely scaling up computing power will be enough to drastically improve AI capabilities in the future, or whether the technology will reach its limits without fundamentally new approaches.

A major challenge is that even experts cannot yet fully understand how some AI models arrive at their conclusions. "Researchers are currently unable to provide human-understandable explanations of how general-purpose AI models and systems reach their results and make their decisions," the report says. "This makes it difficult to assess or predict their performance, reliability and the risk of potential dangers."

Given these uncertainties, further research into the technical and social risks of artificial intelligence seems essential. At the summit in Seoul, the participating states therefore agreed on further measures. Ten countries as well as the EU decided to set up an international network of AI safety institutes. The aim is to coordinate research efforts and establish common standards and testing methods. Australia, Canada, the European Union, France, Germany, Italy, Japan, the Republic of Korea, Singapore, the USA and the United Kingdom have agreed to this step.

In addition, 16 companies committed during the summit to publishing standards for evaluating their latest AI models. This includes examining the risk that malicious actors misuse the technology. The companies are also to define when serious risks count as "intolerable". In addition to American big-tech companies, the signatories include firms from South Korea and the Middle East, along with one from China. From the EU, the French AI company Mistral has signed on.

*Frederic Spohr is the head of the Korea office of the Friedrich Naumann Foundation for Freedom in Seoul.