https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Report_2025_accessible_f.pdf
Yoshua Bengio:
Today, we are publishing the first-ever International AI Safety Report, backed by 30 countries and the OECD, UN, and EU. It summarises the state of the science on AI capabilities and risks, and how to mitigate those risks.
This unprecedented, large-scale effort involved 100 independent AI experts from around the world, including Nobel laureates and Turing Award winners. These contributors represent different perspectives within the AI community and disagree on several questions. But, for this Report, they all worked together to produce an authoritative document on the state of the science. As the Chair of the Report, I am deeply impressed by their work.
A central conclusion of the Report is that even the short-term future of general-purpose AI is remarkably uncertain. Both very positive and very negative outcomes are possible. Therefore, much depends on how societies and governments act.
With all the noise around AI, this Report aims to provide policymakers with an evidence-based, balanced overview of AI risks and mitigations. The remarkably collaborative spirit of all contributors gives me hope. It shows how scientific disagreement can be highly productive, and I want to thank each one of them for their work. I am also grateful to the industry and civil society organisations who contributed invaluable feedback.
I look forward to the important discussions that will unfold during the AI Action Summit in Paris on February 10-11 and in various other forums in the coming months.