UK think tank recommends cross-party deepfake regulation

UK tries to catch up on AI regulation.

The EU has taken the lead with its landmark AI Act, which aims to ensure safety and compliance with fundamental rights while boosting innovation. MEPs reached a provisional agreement on the regulation in December and formally approved it this month, with 523 votes in favour, 46 against, and 49 abstentions.

Now, politicians in the UK are proposing stricter rules, specifically a ban on apps and tools that allow users to “nudify” images of people to create deepfake pornography.

Labour Together, a think tank aligned with the UK’s opposition Labour Party, has put forward a policy paper with recommendations for how AI developers can mitigate the issue.

The policy paper calls on all major parties to make a cross-party pledge not to use deepfake technology or spread misinformation in their campaigning.

“I think regulation is a friend for AI in that sense: the government has to show the public that it’s got their back and will step in if tech is being used to hurt people,” Laurel Boxall, a tech policy adviser at Labour Together, told The Guardian. “We mustn’t forget that AI can help us; regulation ensures it doesn’t hurt us too.”

Awareness has grown over the past year that the UK is falling behind on AI regulation, despite the current Conservative government’s stated ambition to turn the country into an “AI superpower”.

A February poll by the charity CARE found that 80% of respondents wanted a government ban on tools that digitally undress images of women and children.

“Last year it was estimated that links advertising ‘nudification’ apps and websites increased by 2,400%. The content they create is extremely realistic. As well as still images, some platforms allow users to create new pornographic videos where subjects appear to do whatever the user asks,” said Louise Davies, CARE’s Director of Advocacy & Policy.

A high-profile incident in January highlighted the issue when deepfake pornographic images of Taylor Swift spread on X; one post was viewed 47 million times before it was removed.