Managing AI Risks in an Era of Rapid Progress
GS 2023-10-28
In a recent short paper, leading AI scientists and governance experts from the US, China, the EU, the UK, and other countries warn that rapid AI progress will pose societal-scale risks. Alongside their benefits, today’s AI systems already cause a wide array of harms, and tomorrow’s systems will be far more powerful as AI labs scale them rapidly. The authors argue that forthcoming AI systems will pose risks including rapid job displacement, automated misinformation, and the enabling of large-scale cyber and biological threats; a loss of control over such systems is also described as a genuine concern. AI’s growing risks demand a swift response, and the experts urge governments and leading AI labs to ensure responsible AI development.
The paper was initiated by Geoffrey Hinton and Yoshua Bengio. Both received the ACM’s 2018 Turing Award, widely regarded as the unofficial Nobel Prize of computer science, for their contributions to AI research. The paper is intended as a contribution to the discussion at the AI Safety Summit at Bletchley Park, hosted by the British government on November 1–2, 2023. The event will bring together AI researchers and government representatives from around the world, with an invitation extended to a Chinese delegation as well.
The field of artificial intelligence (AI) has advanced at a breathtaking pace. Deep learning systems can already generate software, create photorealistic scenes, and offer advice on intellectual matters, and this rapid progress raises the possibility of general AI systems outperforming human capabilities in the near future. The paper underscores both the potential benefits and the risks of this development and argues that safety and regulation must be prioritized alongside the advancement of AI in order to mitigate societal-scale risks. Without adequate protective measures, AI systems could exacerbate social inequalities, undermine stability, and enable large-scale criminal activity; autonomous AI systems, unless carefully developed and controlled, could pursue undesirable objectives, presenting unprecedented challenges.

The authors call for a substantial share of AI research funding to be allocated to safety and ethics, and they advocate for effective governmental oversight. They stress the need for regulatory mechanisms and standards to prevent irresponsible AI development and the delegation of important societal tasks to AI systems operating with little human supervision. Given how rapidly the field is evolving, the paper calls for prompt regulatory action, binding commitments from major AI companies, and international cooperation to ensure that AI has a responsible and positive impact on society.
When we discuss new risks, then, we do not have to reinvent the wheel; we can build on work that has already been done. On the one hand, we have a new form of artificial intelligence that is understood by very few people, if anyone at all, and we place it in the hands of users who understand it even less but expect significant economic benefits from it. Yet it is not this new, complex technology alone that poses the problem; it is how it interacts with other complex systems about which we do have some knowledge. In this regard, I concur with Martin Rees (Lord Rees of Ludlow), the astrophysicist who heads the Centre for the Study of Existential Risk at the University of Cambridge. He states: “I worry less about a superintelligent takeover than about the risk of excessive dependence on large-scale networked systems. Large-scale failures of power grids, the Internet, and the like could lead to catastrophic societal breakdowns.” What we need is global supervision and regulation to mitigate these risks.
Link to the paper: https://humancompatible.ai/news/2023/10/24/managing-ai-risks-in-an-era-of-rapid-progress/