Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development (White Paper)

Policy Details

Last Action
Published online
Date of Last Action
Apr 16 2019
Date Introduced
Apr 16 2019
Publication Date
May 8 2019
Date Made Public
Apr 16 2019

SciPol Summary

This report, written at the Future of Humanity Institute at the University of Oxford, begins by recognizing the importance and benefits of international standardization of artificial intelligence (AI) systems (e.g., explainability, robustness, fail-safe design) along with their supporting technologies, data, and workforce. Specifically, it identifies how standardization can help increase the market efficiency and ethics of applied AI systems and their development. However, the report also recognizes that, while some entities have begun developing aspects of AI standardization, significant gaps remain in standardization policy and in participation from AI system stakeholders, gaps that the report seeks to address.

The fourth section of the report provides policy recommendations that address these gaps.

In the second section of the report, the authors include a primer discussing the necessity and scope of global participation from AI system stakeholders in AI system standardization. Regarding the scope of standardization, the report describes four kinds of standardization pertinent to AI systems:

  • Network-product standards: define the design of systems to ensure portability and interoperability.
  • Network-process standards: define best practices in the research and development of systems to ensure safety, efficiency, security, and similar design aspects.
  • Enforcement-product standards: define measures to ensure that AI systems are built so that their application minimizes externalities and direct risks to users and society.
  • Enforcement-process standards: define measures to ensure that AI systems are designed not to infringe on societal values and can be monitored for such infringements throughout a system’s application.

A main concern of the authors is that, in the absence of international standards for AI systems, there will be a proverbial “race to the bottom” in which countries forgo stringent safety, security, and ethical standards (among others) in an attempt to attract more of the AI systems industry.

The authors identify three main benefits of such international standards:

  1. Standards will help increase the safety and efficiency of AI systems by spreading lessons learned from around the globe and by helping identify specific goals and processes for AI research and development.
  2. Standards can also help build trust among participating entities and provide a means by which opposing perspectives can find consensus or negotiate research aims and methods. However, the report also recognizes that standards and the openness of information among stakeholders may present risks; as such, the degree of openness among participants must be worked out.
  3. Standards also help increase the portability and interoperability of AI systems and their supporting infrastructure, data, and workforce. This can encourage more efficient research and development of AI systems as systems’ application and scrutiny increase.

The report also identifies particular entities as relevant AI system stakeholders and participants in standardization; their ongoing contributions to AI system standardization are outlined in section 3 of the report.

SciPol Summary authored by