Leading AI experts, including OpenAI CEO Sam Altman, have released a statement calling for the mitigation of potential risks from AI to be treated as a priority on par with nuclear war and pandemics. The concise statement, released by an umbrella group of AI leaders, aims to stimulate discussion and raise awareness of the growing number of experts concerned about the serious risks posed by advanced AI. Signatories include prominent figures such as Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI,” as well as Demis Hassabis, CEO of Google’s DeepMind, and Dario Amodei, CEO of AI company Anthropic, DW reports.
The statement comes as policymakers and tech industry leaders gather for the EU-US Trade and Technology Council meeting in Sweden to discuss AI regulation. OpenAI’s Altman is expected to meet with EU industry chief Thierry Breton to discuss the implementation of the EU’s upcoming AI regulations. Altman previously expressed concern about the proposed rules, but has since softened his stance.
Although the statement does not address specific risks or mitigation strategies, it echoes recent comments by Yoshua Bengio on the dangers of AI pursuing goals that conflict with human values. These risks include malicious actors directing AI toward harmful activities, AI misinterpreting poorly specified goals, the emergence of subgoals with unintended adverse consequences, and AI developing self-preserving behavior to ensure its own survival.
Bengio proposes expanding AI safety research, temporarily restricting AI from pursuing goals in the real world, and banning lethal autonomous weapons. The statement and Bengio’s recommendations add to the ongoing dialogue about AI ethics and the need for responsible development and regulation, so that the benefits of AI are maximized while its potential risks are minimized.