Report

Regulating Artificial Intelligence in a World of Uncertainty

October 31, 2024

Key Points

  • Risk management practice, on which current AI governance and regulation are based, differs substantially from managing in the face of uncertainty.
  • Generative pretrained AI applications are complex and dynamic, with uncertain outcomes, and they will be deployed in complex, dynamic human systems whose mechanisms are also poorly understood. The potential outcomes are therefore highly uncertain.
  • Regulation based on risk management cannot prevent harm arising from outcomes that cannot be known ex ante.
  • Some harm is inevitable as society learns about these new applications and their use contexts. Rules that are use-case specific rather than generic, combined with a practice of redressing harm when it occurs, offer a principled way to enable efficient development and deployment of AI applications.


Executive Summary

New and increasingly capable artificial intelligence applications are a fact of life. They promise great advances in human welfare, but they have also engendered fears of misalignment with human values and objectives, leading at best to harm to individuals and at worst to catastrophic societal outcomes, even threats to human survival. Consequently, considerable attention has been given to whether AI applications should be regulated and, if so, what form that regulation should take. In the EU and the US, the focus has been on using risk management processes to ensure safe development and deployment and to establish confidence in AI use.

Risk management processes and safety regimes draw on a long history of developing computer applications based on models of mathematical, scientific, and engineering precision; this approach is likely satisfactory for managing the risks of “good, old-fashioned” symbolic AI. The new generation of pretrained generative AIs (GAIs), however, is not well suited to governance and management through risk management processes, because these systems are built for continuous adaptation and open-ended variety rather than constraint and increasing precision. They will also interact with complex, dynamic human systems, producing great uncertainty. Managing uncertainty differs from managing risk, so GAIs require a different sort of regulatory framework.
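To make the distinction concrete, here is a stylized decision-theoretic sketch (the notation is illustrative, following Knight’s classic risk/uncertainty distinction, not the report’s own formalism). Under risk, a regulator knows the possible outcomes and their probabilities ex ante, so expected loss is well defined and can be constrained:

\[
\mathbb{E}[L] = \sum_{i=1}^{n} p_i \, \ell(o_i), \qquad \text{with outcomes } \{o_1,\dots,o_n\} \text{ and probabilities } \{p_1,\dots,p_n\} \text{ known ex ante.}
\]

Under Knightian uncertainty, by contrast, neither the outcome set nor the probabilities are knowable in advance, so this expectation cannot even be written down, and ex ante risk controls have nothing well defined to constrain.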

This report explores the distinction between risk and uncertainty in AI. It illustrates why existing risk management arrangements are insufficient to prevent truly unexpected harms from GAIs, and it argues that what is required is a set of arrangements for managing the consequences of harm when it arises, without chilling the incentives for innovative development and competitive deployment of GAIs. Arguably, insurance arrangements for managing outcome uncertainties provide a more constructive way forward than risk management regimes, which presume knowledge of outcomes that is simply not available.
