Opinion: What we talk about when we talk about AI

It’s been a big week for AI, so what should regulators conclude after the slick presentations and grand claims?

The decision to stage the first global AI Safety Summit at the UK’s Bletchley Park invited doubts about whether the event would be one of substance or surface. Much was made of the “iconic” setting of the country house that housed the Allied code-breaking operation during the Second World War, drawing once again on the umbilical connection with that conflict that the UK finds so difficult to cut.

The sense of looking back to look forward was one of many contradictions thrown up around the summit, but the very fact that a summit was being held at all was evidence that no one has all the answers. The focus, if anything, was on finding the right questions, because this was really about how we as a species handle AI. Or, in other words, regulate it.

What was produced was The Bletchley Declaration on AI Safety. Signed by the 28 countries from across the globe that attended, it is a world-first agreement that “AI should be designed, developed, deployed, and used, in a manner that is safe, in such a way as to be human-centric, trustworthy and responsible”.

It’s also very big on international cooperation, a sentiment somewhat at odds with post-Brexit UK rhetoric about taking back control and going it alone. There will have been wry smiles at those commitments to international cooperation from anyone still counting the cost of the UK’s departure from the EU’s Horizon science program in 2020 – re-entry took three years to negotiate.

International AI Safety Institute

The government’s frequent references to the UK ‘leading the world’ – repeated by Prime Minister Rishi Sunak at the Summit itself – also jarred with the rhetoric of vital international collaboration.

Of course, politics played its part, with Sunak’s need to present himself as the serious leader of a serious nation never far from the surface. The sense of nations jostling for position intensified when the US announced plans to set up its own institute to police AI – just after it had welcomed the UK’s plans to establish an International AI Safety Institute.

The Financial Times presented this as “US upstages Sunak with AI regulation plan”, quoting a tech company chief executive as saying the US did not want to “lose our commercial control to the UK”. But the New Statesman had a different take, writing that “it is not a disaster that the US has announced it will establish its own AI safety institute. Some suggested this was a snub to the British. On the contrary, greater attention to AI was the aim of the conference and the White House has said the institute will work closely with the British equivalent”.

UK, US but not China

The Summit also produced what is being called, by the Summit organisers at least, a “landmark” document on AI testing in which eight governments – including the UK and US, but not China – agree to test AI models before they are released. The companies signing up include Google DeepMind, Meta, Microsoft and OpenAI.

Under this agreement, access will be given on a voluntary basis. By contrast, on Monday US President Biden signed an executive order that places binding requirements on companies to hand certain safety information over to the government.

The rhetoric and positioning matter to regulators, who have to read the runes in order to help deliver on the preferred direction of travel. Here, again, there is mixed messaging. In remarks that brought to mind former UK PM Tony Blair’s infamous “A day like today is not a day for soundbites … but I feel the hand of history on our shoulder”, Sunak said: “It’s important not to be alarmist” before going on to say that “AI may pose a risk on a scale like pandemics and nuclear war” and that AI was “one of the biggest threats to humanity”.

Despite this, he was anxious to emphasise that “The UK’s answer is not to rush to regulate”, and that the Bletchley Declaration delivers on the light-touch pledges he has made. All clear?

Voice of labor

There was criticism of the invitation list being too narrowly focused on big tech executives and governments. More than 100 civil society groups, including global labor federations representing 248.5 million workers, wrote a letter calling the gathering a “missed opportunity” for excluding the groups most likely to be affected by AI.

This is not just a vested-interest lobby. A thoughtful opinion piece by Rana Foroohar in the FT put the case that “Workers could be the ones to regulate AI”, and the case for involving organized labor in formulating strategies to shape the world of work has recently been made by former US Secretary of Labor Robert Reich and by President Biden.

As Amanda Ballantyne, director of the US union federation AFL-CIO Technology Institute, said in the FT piece: “There is a long history of unions leveraging the knowledge of working people to make better rules around safety, privacy, health and human rights and so on.”

The substance of the Bletchley Summit and Declaration – and let’s not forget the landmark document – may be little more than ‘we need to deal with this’, but that is at least recognition that there is some catching up to do, and that just letting things develop may not be the best of ideas. In other words, regulation is needed. Attention will now turn to questions such as “how much?”, “for what purpose?”, and “by whom?”

A second AI Safety Summit event will be held in South Korea in six months’ time, followed by another in France a year from now.