AI Developer Guidelines
International Consensus on Responsible AI Development Establishes Strong Initial Standards
Initially Published December 12th, 2023 on Luxembourg Tech School's LinkedIn
A new set of AI safety guidelines has been released in the wake of the recent UK AI Safety Summit held in early November [1]. The Guidelines for Secure AI System Development represent a comprehensive set of best practices, published on November 27, 2023, by the UK National Cyber Security Centre (NCSC) in collaboration with the US Cybersecurity and Infrastructure Security Agency (CISA) [2].
These guidelines have been endorsed by several international cybersecurity agencies. European signatories include agencies in France (ANSSI), Germany (BSI), Italy (ACN), Norway (NCSC-NO), the Czech Republic (NUKIB), Estonia (RIA and NCSC-EE), and Poland (NASK and the Ministry of Digital Affairs). Beyond Europe, they have also been supported by agencies in Japan, South Korea, Australia, Canada, and Nigeria, among others. Furthermore, several major private AI organizations, including Google, Microsoft, and OpenAI, contributed to their development. With this level of international and private-sector consensus, the guidelines represent a widespread international standard for AI developers and publishers to uphold.
Guideline Structure
These guidelines are intended for a wide range of organizations, including those that develop, deploy, or otherwise use AI systems. They can inform the development of new AI systems as well as improve the security of existing ones. The framework they outline is comprehensive yet adaptable, enabling organizations to develop and deploy secure AI systems.
The guidelines’ recommendations are broken into four distinct sections, each corresponding to a phase of the AI development lifecycle. The guidelines identify these phases as the key areas in which AI systems may be compromised, and the recommendations for each phase are distinct because different considerations apply at different stages of development and deployment. The phases are as follows:
Secure Design: This involves considering security threats and mitigations early in the development process.
Secure Development: This involves implementing security best practices throughout the development lifecycle.
Secure Deployment: This involves deploying AI systems in a secure manner, considering the operating environment and potential threats.
Secure Operation and Maintenance: This involves monitoring and maintaining AI systems to ensure that they remain secure over time.
To illustrate, considerations made during design might include asking whether an AI model is an appropriate solution to a given task at all, or whether a situation would be better served by a simpler, more explainable model than by a more complex but potentially more capable one. By contrast, considerations made during the operation and maintenance phase focus on monitoring the system’s behavior, assessing new user input, and collecting and formalizing lessons learned.
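To make the operation-phase guidance more concrete, consider what basic input monitoring might look like in code. The sketch below is a minimal illustration under assumed conventions, not something prescribed by the guidelines: the length limit, the printability check, and the log format are all hypothetical choices a team might make for a deployed text-based model.

    import logging
    from collections import deque

    logging.basicConfig(filename="model_inputs.log", level=logging.INFO)

    MAX_INPUT_LENGTH = 2000           # hypothetical limit for this deployment
    recent_flags = deque(maxlen=100)  # rolling window of recently flagged inputs

    def screen_input(user_input: str) -> bool:
        """Apply simple heuristics before the input reaches the model."""
        suspicious = (
            len(user_input) > MAX_INPUT_LENGTH
            or not user_input.isprintable()
        )
        # Record every decision so operators can review trends over time.
        logging.info("len=%d flagged=%s", len(user_input), suspicious)
        if suspicious:
            recent_flags.append(user_input[:80])  # truncate before storing
        return not suspicious

In a real deployment, heuristics like these would be paired with model-level anomaly detection and alerting, but even a simple gate gives operators the kind of ongoing visibility the operation and maintenance phase calls for.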
This structure encourages a multi-stakeholder approach to responsible AI development and deployment, with different stakeholders bearing ultimate responsibility for different phases. Data scientists, for example, will select, clean, and format the data used to train AI models, but will not be involved in maintaining the deployed system’s infrastructure or cybersecurity.
By distributing responsibility, the guidelines help ensure that each phase is managed by the people most qualified to handle it. This also underscores the importance of interdisciplinary involvement in responsible AI development: as AI’s impact becomes increasingly widespread, it becomes increasingly necessary to ensure that it is guided by a diversity of expertise.
Strong Initial Standard
Although these guidelines are not legally binding, their endorsement by governmental agencies, research societies, and private companies alike affords them significant legitimacy. Regulators might look to them for inspiration when drafting legislation, while consumers and decision-makers might weigh a company’s adherence to the guidelines when deciding which AI systems to use.
Furthermore, these guidelines provide much more specific technical guidance on ensuring AI system safety than current legislative measures do. This guidance is still flexible enough to be adapted to individual use cases, but specific enough to be implemented by members of the tech community.
More Work to be Done
While these guidelines represent a valuable step in responsible AI development, there remains work to be done.
First and foremost, there is no legal requirement to adhere to these guidelines. Beyond this, there remains no comprehensive, internationally binding agreement on AI development and deployment. This is particularly significant given the AI research and development occurring in China, which has not endorsed the currently published guidelines. Because AI developments can proliferate rapidly across the globe via the internet, reaching internationally binding agreements is imperative.
Likewise, it remains to be seen how feasible it will be to rigorously implement and adhere to certain recommendations made in the guidelines. For instance, organizations are asked to monitor system inputs and to collect and share lessons learned with other AI developers, while also protecting the privacy of their models and users. Both obligations are reasonable, but they may conflict in practice. Moreover, certain recommendations may require models to be rebuilt entirely, as they call for full documentation of data sources, which may no longer be available.
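One way to ease this tension, at least in part, is to record derived signals rather than raw content. The sketch below is a hypothetical illustration, not a technique the guidelines mandate: it logs a salted hash and coarse metadata about each interaction, so recurring patterns can be counted and shared as lessons learned without exposing user data. The LOG_SALT variable and the record fields are assumptions made for this example.

    import hashlib
    import json
    import os
    import time

    # Per-deployment salt so hashes cannot be correlated across organizations.
    SALT = os.environ.get("LOG_SALT", "change-me").encode()

    def summarize_interaction(user_input: str, model_output: str) -> str:
        """Describe an interaction without storing raw user content."""
        digest = hashlib.sha256(SALT + user_input.encode()).hexdigest()
        record = {
            "timestamp": time.time(),
            "input_hash": digest,           # duplicates can be counted, not read
            "input_length": len(user_input),
            "output_length": len(model_output),
        }
        return json.dumps(record)

    print(summarize_interaction("example query", "example answer"))

Aggregated records of this kind could be shared with other developers as lessons learned while the raw inputs never leave the deployment boundary.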
Regardless, the Guidelines for Secure AI System Development are a significant step in the right direction. Their publication demonstrates a widespread international desire to ensure that AI is used responsibly and in service of societal good, and their endorsement by private companies is a reassuring indication that this sentiment is shared in the private sector. Using these guidelines, future research and development, deployment, and legislation can become more rigorous, specific, and secure. It will be necessary to keep a close eye on how this all continues to unfold in light of the guidelines, but it is clear that there is cause for optimism.
[1] https://www.aisafetysummit.gov.uk/
[2] https://www.ncsc.gov.uk/collection/guidelines-secure-ai-system-development