So will those carefully assembled words lead to regulatory or legislative change? Charlotte Walker-Osborn, technology partner at the international law firm Morrison Foerster, says the declaration will "likely further drive some level of international legislative and governmental consensus around key tenets for regulating AI". For example, she cites core tenets such as transparency around when and how AI is being used, information on the data used to train systems, and a requirement for trustworthiness (covering everything from biased outcomes to deepfakes).

However, Walker-Osborn says a "truly uniform approach is unlikely" because of "varying approaches to regulation and governance in general" between countries. Nonetheless, the declaration is a landmark, if only because it recognises that AI cannot continue to develop without stronger oversight.

State of AI report

Sunak announced a "state of AI science" report at the summit, with the inaugural edition chaired by Yoshua Bengio, one of the three so-called "godfathers of AI", who won the ACM Turing award – the computer science equivalent of the Nobel prize – in 2018 for his work on artificial intelligence. The group writing the report will include leading AI academics and will be supported by an advisory panel drawn from the countries that attended the summit (so the US and China will be on it).

Bengio was a signatory of Tegmark's letter and also signed a statement in May warning that mitigating the risk of extinction from AI should be a global priority alongside pandemics and nuclear war. He takes the subject of AI safety seriously. The UK prime minister said the idea was inspired by the Intergovernmental Panel on Climate Change and was supported by the UN secretary-general, António Guterres, who attended the summit. However, it won't be a UN-hosted project; instead, the UK government-backed AI safety institute will host Bengio's office for the report.

International safety testing

A group of governments attending the summit and major AI firms agreed to collaborate on testing their AI models before and after their public release. The 11 government signatories included the EU, the US, the UK, Australia and Japan – but not China. The eight companies included Google, ChatGPT developer OpenAI, Microsoft, Amazon and Meta.

The UK has already agreed partnerships between its AI safety institute and its US counterpart (announced ahead of the summit last week), and with Singapore, to collaborate on safety testing.

This is a voluntary arrangement, and there is some scepticism about how much impact the Bletchley announcements will have if they are not underpinned by regulation. Sunak told reporters last week that he was not ready to legislate yet and that further testing of advanced models was needed first (although he added that "binding requirements" will probably be needed at some point).

It means that the White House's executive order on AI use, issued in the same week as the summit, and the European Union's forthcoming AI Act are further ahead of the UK in introducing new, binding regulation of the technology. "When it comes to how the model builders behave … the impending EU AI Act and President Biden's executive order are likely to have a larger impact," says Martha Bennett, a principal analyst at the research firm Forrester.

Others, nonetheless, are happy with how Bletchley has shaped the debate and brought disparate views together.
Prof Dame Muffy Calder, vice-principal and head of the college of science and engineering at the University of Glasgow, was worried that the summit would dwell too much on existential risk at the expense of "real and current issues". That fear, she believes, was assuaged. "The summit and declaration go beyond just the risks of 'frontier AI'," she says. "For example, issues like transparency, fairness, accountability, regulation, appropriate human oversight, and legal frameworks are all called out explicitly in the declaration. As is cooperation. This is great."