Report shows AI advances fueling growing online human rights abuses

Freedom House’s new report shows governments are leveraging automated systems to strengthen information controls and spread disinformation.

Artificial intelligence (AI) has prompted more internet censorship and disinformation campaigns globally, according to the Freedom on the Net 2023 report released recently by Freedom House, a nonprofit and nonpartisan organization.

AI-based tools that can generate text, audio, and imagery have quickly grown more sophisticated, accessible, and easy to use, spurring a concerning escalation of disinformation tactics. Over the past year, the new technology was used in at least 16 countries to sow doubt, smear opponents, or influence public debate, the report authors state.

Some of these online campaigns have long been assisted by AI technology for distributing content via automated “bot” accounts on social media. These accounts have smeared activists into silence and propagated false narratives about electoral fraud to voters. And platform algorithms have promoted untrustworthy and incendiary information over reliable sources. But much of the damage is still being done by humans, Freedom House said.

The research found evidence of at least 47 countries in which pro-government commentators used deceitful or covert tactics to manipulate online information, double the number from a decade ago. An entire market of for-hire services has emerged to support state-backed content manipulation, the report said.

In the censorship realm, the world’s most technically advanced authoritarian governments have responded to innovations in AI chatbot technology by trying to make the applications strengthen their censorship systems.

Legal frameworks in at least 21 countries mandate or incentivize digital platforms to deploy machine learning to remove political, social, and religious speech the regime does not approve of, Freedom House said.

Spreading disinformation

The Russian private sector has played an ongoing role in spreading disinformation about the Kremlin’s full-scale invasion of Ukraine last year. An operation known as “Doppelgänger” mimicked German, American, Italian, British, and French media outlets to disseminate false and conspiratorial narratives about European sanctions and Ukrainian refugees, among other topics, Freedom House said.

Israel has a growing number of disinformation-for-hire companies. A 2023 investigation by Forbidden Stories, The Guardian, and Haaretz uncovered the work of an Israel-based firm known as Team Jorge, which reportedly uses an online platform that can automatically create text based on keywords and then mobilize a network of fake social media accounts to promote it.

Political actors have also worked to exploit the loyalty and trust that ostensibly nonpolitical influencers have cultivated among their social media followers. Ahead of Nigeria’s February 2023 election, influencers were paid to spread false narratives linking political candidates with militant or separatist groups.  

During Kenya’s August 2022 election, influencers gamed social media platforms’ trending functions to boost misleading political hashtags. In one example, the hashtag #ChebukatiCannotBeTrusted sought to undermine the country’s electoral body by suggesting that its leader supported one presidential candidate over the others.

Self-regulation and government guardrails

The report authors conclude that overreliance on self-regulation by the creators of AI technology is not the answer, but neither is halting technological development or censoring providers.

“Given the ways in which AI is already contributing to digital repression, a well-designed regulatory framework is urgently necessary to protect human rights in the digital age,” the authors state.

In the US, the report notes, the Biden administration has begun its development of AI governance with a push for industry self-regulation, including its Blueprint for an AI Bill of Rights and voluntary commitments from several prominent tech companies on AI safety and security issues. Further executive action has been promised in this arena.

But the authors once again counsel against overreliance on self-regulation, saying “to ensure that AI bolsters rather than harms internet freedom, members of Congress should work with civil society and the executive branch to craft bipartisan legislation that takes a rights-based approach to AI governance and transforms guiding principles into binding law.”