Global agreement is first step in establishing AI security framework

Governments, tech companies and bodies from the Global South sign up to the adoption of a secure-by-design model.

Amid the noise around how far and how fast to take AI, work has continued on a set of guidelines aimed at providing practical assistance to those wanting to ensure the technology is developed safely and securely.

The guidelines, drawn up by the leading cyber security agencies in the US and UK with input from G7 nations, international agencies and bodies from the Global South, were published on November 27. The authors say they are “a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout”.

Guidelines for secure AI system development was drawn up primarily by the UK’s National Cyber Security Centre (NCSC), along with partners from the tech sector and with input from the US Cybersecurity and Infrastructure Security Agency (CISA).

“A more secure global cyber space will help us all to safely and confidently realise this technology’s wonderful opportunities,” said NCSC CEO Lindy Cameron. And CISA director Jen Easterly said: “The domestic and international unity in advancing secure-by-design principles and cultivating a resilient foundation for the safe development of AI systems worldwide could not come at a more important time in our shared technology evolution. This joint effort reaffirms our mission to protect critical infrastructure and reinforces the importance of cross-border collaboration in securing our digital future.”

The guidelines are intended “for providers of any systems that use artificial intelligence (AI), whether those systems have been created from scratch or built on top of tools and services provided by others”. And the introduction to the document goes on to say: “When the pace of development is high – as is the case with AI – security can often be a secondary consideration. Security must be a core requirement, not just in the development phase, but throughout the life cycle of the system.”

Four key areas are set out:

  • Secure design;
  • Secure development;
  • Secure deployment; and
  • Secure operation and maintenance.

Underpinning the recommendations throughout each area are three priorities:

  • taking ownership of security outcomes for customers;
  • embracing radical transparency and accountability; and
  • building organisational structure and leadership so that secure by design is a top business priority.

One key principle stressed throughout the document is that “providers of AI components should take responsibility for the security outcomes of users further down the supply chain”.

Secure design

System owners and business leaders are urged to maintain an awareness of relevant security threats and to be prepared to help risk owners make informed decisions. Guidance and training should be provided on the unique security risks AI systems face.

A holistic approach to risk management needs to be taken, which means understanding the risks to users, organisations and society at large if an AI component is compromised or behaves unexpectedly. There needs to be a recognition that threats may grow as systems develop, that the sensitivity and types of data within the system influence its value as a target, and that AI itself enables new attack vectors.

Systems need to be designed for security as well as functionality and performance. Threat modelling needs to take into account an extensive list of considerations, including user experience, ethical and legal requirements, and the application of appropriate restrictions to the actions a system can trigger.

Consideration needs to be given to security benefits and trade-offs when AI models are being selected, and decisions regularly reassessed.

Secure development

AI supply chain security needs to be assessed across the lifecycle of the system, and suppliers must adhere to the same standards as your own organisation. Hardware and software components not produced in-house must be acquired from verified commercial, open source and other third-party developers.

The value of your AI-related assets needs to be fully understood, both to your own organisation and to potential attackers. Logs must be treated as sensitive data and you must implement controls to protect their confidentiality, integrity and availability. You must be able to track, locate and authenticate assets.
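
The guidelines do not prescribe an implementation, but the intent can be illustrated. Below is a minimal Python sketch – not taken from the document itself – of an append-only log whose entries are chained together with HMACs, so modifying or deleting a record is detectable on verification. The class, key handling and record format are all hypothetical.

```python
import hashlib
import hmac
import json
import time

# Hypothetical sketch: in practice the key would come from a secret
# store, never from source code.
LOG_KEY = b"replace-with-key-from-your-secret-store"

class IntegrityLog:
    """Append-only log whose entries are chained with HMACs."""

    def __init__(self):
        self.entries = []
        self._prev_mac = b""

    def append(self, event: dict) -> None:
        record = json.dumps({"ts": time.time(), "event": event},
                            sort_keys=True).encode()
        # Each MAC covers the previous MAC, so reordering, editing or
        # deleting any entry breaks every MAC that follows it.
        mac = hmac.new(LOG_KEY, self._prev_mac + record,
                       hashlib.sha256).hexdigest()
        self.entries.append({"record": record.decode(), "mac": mac})
        self._prev_mac = mac.encode()

    def verify(self) -> bool:
        prev = b""
        for entry in self.entries:
            expected = hmac.new(LOG_KEY, prev + entry["record"].encode(),
                                hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, entry["mac"]):
                return False
            prev = expected.encode()
        return True

log = IntegrityLog()
log.append({"asset": "model-v3.bin", "action": "read", "user": "pipeline"})
assert log.verify()
```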

Processes must be put in place to manage what AI systems can access, and the content they produce.

All data, models and prompts must be documented, with security-relevant information included.
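
As one possible shape for such documentation – the field names below are illustrative, not taken from the guidelines – a record for each asset might capture provenance, a content hash for later authentication, a sensitivity classification and known limitations:

```python
from dataclasses import dataclass, asdict
import hashlib
import json

# Hypothetical documentation record for a dataset, model or prompt.
@dataclass
class AssetRecord:
    name: str
    asset_type: str         # "dataset" | "model" | "prompt"
    source: str             # provenance: supplier, URL or internal team
    sha256: str             # content hash, for later authentication
    sensitivity: str        # e.g. "public", "internal", "restricted"
    known_limitations: str  # security-relevant caveats

sample_content = b"example prompt template: answer politely, cite sources"
record = AssetRecord(
    name="support-bot-system-prompt",
    asset_type="prompt",
    source="internal prompt-engineering team",
    sha256=hashlib.sha256(sample_content).hexdigest(),
    sensitivity="internal",
    known_limitations="no jailbreak hardening; pair with output filtering",
)
print(json.dumps(asdict(record), indent=2))
```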

Technical debt – the gap between engineering decisions and best practice that opens up when short-term results are prioritised over longer-term benefits – must be carefully tracked and properly managed.

Secure deployment

Good infrastructure security principles must be applied throughout systems, including the application of appropriate access controls to APIs, models and data. This applies to R&D as well as deployment.
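
What “appropriate access controls” look like is left to implementers; as one sketch, the hypothetical Python decorator below gates each operation on a model API behind a per-key set of granted scopes. The key store, scope names and functions are all invented for illustration – in production, keys and permissions would live in a secrets manager or IAM system, not in code.

```python
import hmac
from functools import wraps

# Hypothetical mapping of API keys to granted scopes.
API_KEYS = {
    "team-a-key": {"model:query"},
    "ml-ops-key": {"model:query", "model:update", "data:read"},
}

def require_scope(scope: str):
    """Reject the call unless the caller's key carries the given scope."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(api_key, *args, **kwargs):
            granted = None
            for key, scopes in API_KEYS.items():
                # Constant-time comparison avoids timing side channels.
                if hmac.compare_digest(key, api_key):
                    granted = scopes
            if granted is None or scope not in granted:
                raise PermissionError(f"missing scope: {scope}")
            return fn(api_key, *args, **kwargs)
        return wrapper
    return decorator

@require_scope("model:update")
def update_model(api_key, weights_path):
    print(f"updating model from {weights_path}")

update_model("ml-ops-key", "weights-v4.bin")    # allowed
# update_model("team-a-key", "weights-v4.bin")  # raises PermissionError
```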

Continuous protection of your model must be applied, through the implementation of cyber security best practice and controls on the query interface. Cryptographic hashes and signatures of model files and datasets need to be computed and shared as soon as the model is trained.
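
A minimal sketch of that hashing step, assuming the artefacts are ordinary files on disk (the manifest format is illustrative; a detached signature, for example via GPG or sigstore, would additionally bind the manifest to its publisher):

```python
import hashlib
import json
import sys

def sha256_file(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # Usage: python make_manifest.py model-v4.safetensors train.jsonl
    # Publish the resulting manifest alongside the artefacts so that
    # downstream consumers can verify exactly what they received.
    manifest = {path: sha256_file(path) for path in sys.argv[1:]}
    print(json.dumps(manifest, indent=2))
```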

Incident management best practice must be applied, with critical resources stored in offline backups. High-quality audit logs and other security features must be provided to customers at no extra charge, in order to facilitate their incident response processes.

AI models must be released responsibly, with due consideration given to effective security evaluation and the provision of clarity to users about known limitations or potential failure modes – for example model input failures, performance bias failures, and robustness failures.

Making it easy for users to do the right thing should be central to every decision, and you need to provide clarity about the risks that they are responsible for.

Secure operation and maintenance

System behaviour must be monitored so that you are aware of sudden and gradual changes in behaviour that have an impact on security. Data inputs must be monitored and logged so that compliance obligations can be met.
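
One simple way to surface both kinds of change is to compare a short recent window of some behavioural metric against a longer baseline. The Python sketch below does this for an abstract per-request “score”; the window sizes and the 20% deviation threshold are arbitrary illustrative choices, not figures from the guidelines.

```python
import logging
import statistics
from collections import deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-monitor")

baseline = deque(maxlen=1000)  # long-run behaviour
recent = deque(maxlen=50)      # short recent window

def record_request(user_input: str, score: float) -> None:
    # Log every input to support compliance obligations.
    log.info("input=%r score=%.3f", user_input, score)
    baseline.append(score)
    recent.append(score)
    # Flag a drift once enough history has accumulated.
    if len(baseline) >= 200 and len(recent) == recent.maxlen:
        base_mean = statistics.fmean(baseline)
        if abs(statistics.fmean(recent) - base_mean) > 0.2 * abs(base_mean):
            log.warning("behaviour drift: recent mean deviates from baseline")
```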

Your design approach should be based on secure-by-design principles, and automated updates should be included in every product by default.

You should also take part in information-sharing communities and collaborate with industry, academia and governments to share best practice. You should be prepared to escalate issues to this wider community when necessary, and to take action to mitigate and remediate issues quickly and appropriately.

The guidelines have been endorsed by Amazon, Google, Microsoft and OpenAI, and representatives from 17 other countries – besides the Five Eyes intelligence alliance and the G7, these include Chile, Czechia, Estonia, Israel, Nigeria, Norway, Poland, Singapore and South Korea.