The US and UK have signed a landmark agreement on artificial intelligence, as the allies become the first countries to formally cooperate on how to test and assess risks from emerging AI models.
The agreement, signed on Monday in Washington by UK Science Minister Michelle Donelan and US Commerce Secretary Gina Raimondo, lays out how the two governments will pool technical knowledge, information, and talent on AI safety.
The deal represents the first bilateral arrangement on AI safety in the world and comes as governments push for greater regulation of the existential risks from new technology, such as its use in damaging cyber attacks or designing bioweapons.
“The next year is when we’ve really got to act quickly because the next generation of [AI] models are coming out, which could be complete game-changers, and we don’t know the full capabilities that they will offer yet,” Donelan told the Financial Times.
The agreement will specifically enable the UK’s new AI Safety Institute (AISI), set up in November, and its US equivalent, which is yet to begin its work, to exchange expertise through secondments of researchers from both countries. The institutes will also work together on how to independently evaluate private AI models built by the likes of OpenAI and Google.
The partnership is modeled on one between the UK’s Government Communications Headquarters (GCHQ) and the US National Security Agency, which work together closely on matters related to intelligence and security.
“The fact that the United States, a great AI powerhouse, is signing this agreement with us, the United Kingdom, speaks volumes for how we are leading the way on AI safety,” Donelan said.
She added that since many of the most advanced AI companies were currently based in the US, the American government’s expertise was key to both understanding the risks of AI and to holding companies to their commitments.