AI 'Secure by Default' Agreement Signed by 18 Countries

Government bodies from 18 countries, including the U.S., have agreed to AI security guidelines developed by the U.K.

The National Cyber Security Centre (NCSC) released its guidelines document on Sunday, Nov. 26, with input and support from other international security bodies, including the U.S. Cybersecurity and Infrastructure Security Agency (CISA). The guidelines are intended as a first step in defining a methodology that can serve as the basis for international standards for developing, deploying and using AI technology responsibly.

"This document recommends guidelines for providers of any systems that use AI, whether those systems have been created from scratch or built on top of tools and services provided by others," read the guidelines. "Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorised parties."

While the 20-page guidelines agreed upon by the 18 countries are non-binding and do not hold any penalties for non-compliance, this week's announcement signifies that major international bodies are willing to work together to address security and privacy issues surrounding the rapid growth of new technologies, like generative AI.

The guidelines present a framework for safer and more secure AI systems by focusing on the following four areas:

  • Secure design: Prioritizing security best practices from the earliest design stage. This includes modeling threats to systems under development, keeping developers apprised of the latest threats and balancing performance against overall security, according to the guidelines.
  • Secure development: This includes developing with supply chain security in mind, monitoring safeguards built in at creation and documenting best practices for new software and services.
  • Secure deployment: The documentation suggests creating protocols relevant to the deployment phase of an AI system's development life cycle. These include safeguarding infrastructure and models against potential compromise, threats or loss, establishing processes for incident management and ensuring responsible release practices.
  • Secure operation and maintenance: Finally, the guidelines highlight the importance of the operation and maintenance phase of the AI system development life cycle. They detail measures especially crucial post-deployment, such as logging and monitoring, managing updates and sharing information.

Now the real work begins, as the agreeing nations work toward practical, realistic ways to fulfill, and further strengthen, international guidelines and best practices.

"The largest challenge in terms of following the guidance, or more importantly, having the guidance followed, is the fact that it’s all still largely voluntary," said Chris Hughes, chief security advisor at Endor Labs and Cyber Innovation Fellow at CISA. "While the guidance lays out high level best practices and recommendations, there are a lot of things that 'should' be done, not 'must' be done. Unlike guidance from other entities such as the EU, this is not mandatory or binding, and suppliers and vendors can choose to follow the guidance -- or not. That may change, as the AI regulatory landscape continues to evolve and mature in the U.S., but that remains to be seen."

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
