Fierce AI race raises privacy and security concerns

Competition between tech companies to profit from AI has undoubtedly pushed the boundaries of AI research, but it also raises concerns that privacy and security challenges are being overlooked.

In recent months generative AI, a form of AI capable of producing text, images and other content, has been the topic of conversation on many of our feeds. It has been hailed as a game-changing technological breakthrough, with TIME magazine describing it as “the most significant development since social media”.

Last year’s launch of ChatGPT was a watershed moment for the world of search. In a bid to capitalize on this flourishing AI-led search space, Google and Microsoft have been going back and forth, each striving to win the race to market.

Both companies recently announced AI-enabled solutions that have been in development behind the scenes for some time, with Microsoft’s Copilot and Google’s Bard aiming to revolutionize the way users perform their daily work by streamlining their workflows and enhancing their creativity.

While Microsoft’s recent experiment to expand its offering with an AI-powered search engine may not pose a significant risk to the continuing success of the company, Google has a lot more at stake, given its heavy reliance on the search engine market for revenue. Even a slight loss in market share could have grave consequences for the search engine giant.

The consequences for other large global tech players, such as Alibaba and Baidu for example, are also quite clear – by joining this contest they are striving to retain both a technological edge and, ultimately, relevance.

Increasing security concerns

A race to capitalize on the commercial potential of AI in a winner-takes-all market means that there is a significant risk of prioritizing innovation, revenue and growth over safety and security. At this stage, highly accessible Large Language Models (LLMs) allow users to shape these AI tools with their own data.

However, there is a real gap between what users put into these systems and their actual understanding of how the systems operate and use data. Even seemingly harmless actions, such as fine-tuning an LLM with a readily identifiable data set, can result in issues such as data leaks. These potential problems amplify the risks already present as the tools become more widely used.

Concerns about the potential for the distortion of truth, misinformation and outright dishonesty are already evident from the way in which ChatGPT, as the earliest arrival, is being utilised (or tested?) by actual users.

This is particularly worrying in the context of politics and security, where even minor inaccuracies or misinterpretations can often find an attuned and receptive human audience, quickly amplifying minor problems into crises with profound effects on the local and sometimes even the global community.

What data?

Copilot and Bard require full access to user documents to work effectively. They provide the sort of high-quality results that impress users and that users increasingly expect.

Despite this existential hunger for data, both Microsoft and Google have stated that user documents will not be used to train the LLMs. As Google affirms, “[…] private data is kept private, and not used in the broader foundational model training corpus”.

With these restrictions on the use of user data potentially in place, it is currently not clear how the models and tools will be trained in order to meet their potential.

Facilitating cybercrime

What is evident is that the widespread availability of the tools, including to criminals and others who will have no qualms about utilizing them as a potent weapon, will require improved practices within organizations. These will include updated cybersecurity training for employees, guidance on recognizing phishing emails, and the use of AI tools to detect and block suspicious activity.
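As a rough illustration of what the simplest form of such AI-assisted detection might look like, the sketch below (written in Python using the open-source scikit-learn library) trains a toy classifier on a handful of invented example messages. The messages and labels are purely hypothetical placeholders, and any real deployment would be far more sophisticated.

```python
# Illustrative sketch only: a toy classifier for flagging suspicious email text.
# The example messages and labels below are invented placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent: confirm your banking details to avoid account closure",
    "Attached is the agenda for Thursday's project meeting",
    "Thanks for the update, let's review the figures next week",
]
labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = legitimate

# TF-IDF features feeding a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

incoming = "Please verify your password now to keep your account active"
print(model.predict([incoming]))  # with this toy data, likely prints [1] (suspicious)
```

In practice, organizations would combine this kind of text signal with many others, such as sender reputation and link analysis, rather than relying on a single model.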

The better the language models understand writing styles and behaviours, the easier it is for scammers to produce convincing emails, letters and messages impersonating users. This entire nefarious industry could well be empowered by this emerging technology, making it increasingly difficult to spot what is legitimate in your inbox or voicemail and what is not. Is it not possible that this could ultimately lead to a total loss of trust in the digital ecosystem and a return to the analogue? It is a question well worth asking.

As the practice of uploading user data online grows to enable and refine AI solutions, the risk of data breaches that expose these extensive data sets also increases. To make matters worse, cybercriminals will now be able to take advantage of the new AI tool set to collate, query, extract and target relevant information – processes that would normally be too work-intensive, tedious and difficult for all but the most sophisticated attackers, such as state-sponsored hacking outfits.

Organizations might well be able to step up to the challenge reasonably swiftly, particularly given that the same tools will be available to them. The challenge of adjusting to a reality that includes AI will likely be far greater for individuals, in particular demographics such as the elderly and children, and for smaller companies or self-employed people who do not possess either the know-how or the money to neutralise potential threats.

Data sets, user responsibility and regulation

While software providers have a responsibility to develop ethical guidelines and implement cybersecurity measures to prevent unauthorized access, it is also essential for users to educate themselves on how to protect their data. This includes reviewing where and how their data is stored and used, and considering who the ultimate beneficiary of access to it might be. This is, no doubt, a tall order given the swift progress being made, and it needs to be seen alongside the lack of awareness of some and the naïve optimism of others.
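As a simple illustration of the kind of precaution users can take, the short sketch below (in Python, operating on a purely hypothetical note) masks obvious identifiers such as email addresses and phone-number-like strings before text is pasted into an external AI tool. Genuine data-protection controls would need to go much further than pattern matching.

```python
import re

# Illustrative sketch only: masking obvious identifiers before text is shared
# with an external AI tool. Real data-protection controls go much further.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[email removed]", text)
    text = PHONE.sub("[phone removed]", text)
    return text

# The note below is a made-up example, not real personal data.
prompt = "Summarise this note from jane.doe@example.com and call her on +44 20 7946 0123."
print(redact(prompt))
```

Running this prints the note with the address and number replaced by placeholders, a small but concrete example of reviewing what actually leaves your hands before it reaches a third-party service.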

Is there also a need for generative AI to be regulated by government? With the US Congress recently homing in on the potentially detrimental impact of social media companies such as Facebook and TikTok, the case for all tech solutions to be regulated grows.

Regulatory solutions sit on a spectrum, with proponents of innovation arguing for a lightweight approach while others demand outright bans on the technology as society ‘catches up’. Possible measures could include compulsory reporting requirements for incidents involving the misuse of generative AI, and severe penalties where that misuse actually harms real people.

Unintended consequences and deflated visions

The consequences of making such incredible technological advancements too swiftly are yet to be uncovered. But if progress here is anything like the rose-tinted way in which the Internet of Things (IoT) boom of the mid-2010s unfolded, where a race to create smart devices resulted in billions of insecure devices being connected to the internet and millions of people becoming victims of cybercrime, maybe we should be far more concerned than excited.

That initial sweeping IoT vision has turned out to be a far more modest reality a decade on, with progress far slower than the visionaries and dreamers expected. Perhaps the same will prove true of today’s AI?

Irrespective of the outcome, the ripple effect of these AI solutions on global working culture and practices is almost certainly going to be profound.

Whether the arrival of AI ultimately leads to a more connected, efficient, and productive future or simply poses so many challenges and ethical dilemmas that its development and use is straitjacketed to protect society, one thing is certain: AI is here to stay and will almost certainly transform the way we live and work for years to come.

Momina Zafar is an Analyst on Global Relay’s future leaders graduate program