Managing AI Risks in an Era of Rapid Progress

GS 2023-10-28

In a recent short paper, world-leading AI scientists and governance experts from the US, China, the EU, the UK, and other countries warn that rapid AI progress will pose societal-scale risks. Alongside their benefits, today’s AI systems already cause a wide array of harms, and tomorrow’s systems will be far more powerful as AI labs plan to scale them rapidly. Forthcoming AI systems will pose risks including rapid job displacement, automated misinformation, and the enabling of large-scale cyber and biological threats. A loss of control over such systems is also described as a genuine concern. AI’s growing risks demand a swift response, and the experts urge governments and leading AI labs to ensure responsible AI development.
The paper was initiated by Geoffrey Hinton and Yoshua Bengio, who in 2018 were both awarded the ACM A.M. Turing Award, widely regarded as the Nobel Prize of computing, for their contributions to AI research. The paper serves as a contribution to the discussion at the AI Safety Summit at Bletchley Park, hosted by the UK government on November 1–2, 2023. The event brings together AI researchers and government representatives from around the world, with invitations extended to Chinese and Russian delegates as well.

The field of artificial intelligence (AI) has advanced at a breathtaking pace. Today’s deep learning systems can generate software, create photorealistic scenes, and offer advice on intellectual matters. This rapid progress raises the possibility of general AI systems outperforming human capabilities in the near future. The paper underscores both the potential benefits and the risks of this development and emphasizes that safety and regulation must be prioritized alongside the advancement of AI in order to mitigate societal-scale risks. Without adequate protective measures, AI systems could exacerbate social inequalities, undermine stability, and enable large-scale criminal activities. Autonomous AI systems, unless carefully developed and controlled, could pursue undesirable objectives and present unprecedented challenges. The authors call for a substantial share of AI research funding to be allocated to safety and ethics, and they advocate the establishment of effective governmental oversight, stressing the need for regulatory mechanisms and standards to prevent irresponsible AI development and the delegation of societal tasks to AI systems under limited human supervision. In a rapidly evolving field, they call for prompt regulatory action, commitments from major AI corporations, and international cooperation to ensure that AI has a responsible and positive impact on society.

So, when we discuss new risks, we do not have to reinvent the wheel; we can build on work that has already been done. On the one hand, we have a new form of artificial intelligence that very few people, if anyone, fully understand. And we place it in the hands of users who understand it even less but expect significant economic benefits. It is not this new, complex technology alone that poses the problem; it is how it interacts with other complex systems about which we have only partial knowledge. In this regard, I concur with Martin Rees (Lord Rees of Ludlow), an astrophysicist and head of the Centre for the Study of Existential Risk at the University of Cambridge, who states: “I worry less about a superintelligent takeover than about the risk of excessive dependence on large-scale networked systems. Large-scale failures of power grids, the Internet, and the like could lead to catastrophic societal breakdowns.” What we need is global supervision and regulation to mitigate these risks.

Link to the paper:

Gerhard Schimpf, recipient of the ACM Presidential Award in 2016 and of the Albert Endres Award of the German Chapter of the ACM in 2024, holds a degree in Physics from the University of Karlsruhe. A former IBM development manager and self-employed consultant for international companies, he has been active in ACM for over four decades. He was a leading supporter of ACM Europe, serving on the first ACM Europe Council in 2009, and was instrumental in establishing ACM as one of the founding organizations of the Heidelberg Laureate Forum, an annual meeting that brings laureates of computer science and mathematics together with students. Gerhard Schimpf is a member of the German Chapter of the ACM (Chair 2008–2011) and of the Gesellschaft für Informatik.
