Leading artificial intelligence companies have signed up to a new round of voluntary commitments on AI safety, the UK and South Korean governments have announced.
The companies, which include tech giants Amazon, Google, Meta, and Microsoft as well as Sam Altman-led OpenAI, Elon Musk’s xAI, and Chinese developer Zhipu AI, will publish frameworks outlining how they will measure the risks of their “frontier” AI models.
The groups committed “not to develop or deploy a model at all” if severe risks could not be mitigated, the two governments said ahead of the opening of a global AI summit in Seoul on Tuesday.
The announcement builds on the so-called Bletchley Declaration made at the inaugural AI Safety Summit, hosted by UK Prime Minister Rishi Sunak at Bletchley Park in November.
“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” Sunak said in a statement. “It sets a precedent for global standards on AI safety that will unlock the benefits of this transformative technology.”
According to a communiqué outlining the agreement, the AI companies will “assess the risks posed by their frontier models or systems... including before deploying that model or system, and, as appropriate, before and during training.”
The companies will also set out the “thresholds at which severe risks posed by a model or system, unless adequately mitigated, would be deemed intolerable” and how such mitigations will be implemented.
“The field of AI safety is quickly evolving and we are particularly glad to endorse the commitments’ emphasis on refining approaches alongside the science,” said Anna Makanju, vice-president of global affairs at OpenAI.