Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


Defense Innovation Board Final Report on AI Ethics Principles

November 5th, 2019 / in Announcements, research horizons, Research News, resources / by Helen Wright

Contributions to this post were provided by CCC Chair Mark D. Hill from the University of Wisconsin-Madison and CCC Executive Committee Member Nadya Bliss from Arizona State University.

The leadership of the Department of Defense (DoD) tasked the Defense Innovation Board (DIB) with proposing Artificial Intelligence (AI) Ethics Principles for DoD for the design, development, and deployment of AI for both combat and non-combat purposes. 

“The mission of the DIB is to provide the Secretary of Defense, Deputy Secretary of Defense, and other senior leaders across the Department with independent advice and recommendations on innovative means to address future challenges through the prism of three focus areas: people and culture, technology and capabilities, and practices and operations.”

As a result of this task, the DIB developed a set of AI Ethics Principles for the Department of Defense entitled “AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense.” The report has two parts: a short primary document and a richer supporting document that provides additional explanation and detail. See the DIB’s website to learn more.

The DIB proposes the following five principles and recommends that DoD set the goal that its use of AI systems is:

  1. Responsible. Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems. 
  2. Equitable. DoD should take deliberate steps to avoid unintended bias in the development and deployment of combat or non-combat AI systems that would inadvertently cause harm to persons. 
  3. Traceable. DoD’s AI engineering discipline should be sufficiently advanced such that technical experts possess an appropriate understanding of the technology, development processes, and operational methods of its AI systems, including transparent and auditable methodologies, data sources, and design procedure and documentation. 
  4. Reliable. DoD AI systems should have an explicit, well-defined domain of use, and the safety, security, and robustness of such systems should be tested and assured across their entire life cycle within that domain of use. 
  5. Governable. DoD AI systems should be designed and engineered to fulfill their intended function while possessing the ability to detect and avoid unintended harm or disruption, and for human or automated disengagement or deactivation of deployed systems that demonstrate unintended escalatory or other behavior.

In the course of developing these proposed AI Ethics Principles, the DIB also identified actions that can aid in their articulation and implementation. The following twelve recommendations support that effort:

  1. Formalize these principles via official DoD channels. 
  2. Establish a DoD-wide AI Steering Committee. 
  3. Cultivate and grow the field of AI engineering. 
  4. Enhance DoD training and workforce programs. 
  5. Invest in research on novel security aspects of AI. 
  6. Invest in research to bolster reproducibility. 
  7. Define reliability benchmarks. 
  8. Strengthen AI test and evaluation techniques. 
  9. Develop a risk management methodology. 
  10. Ensure proper implementation of AI ethics principles. 
  11. Expand research into understanding how to implement AI ethics principles. 
  12. Convene an annual conference on AI safety, security, and robustness. 

“Artificial Intelligence is at a similar inflection point to the one internet technology reached about 25 years ago – on the cusp of widespread adoption. At the time, the information technology community and emerging industries prioritized novel capability development and underestimated the importance of building security into the foundations of computer systems and networking,” said Nadya Bliss, CCC Executive Committee member and Executive Director of the Global Security Initiative at Arizona State University. “We have an opportunity not to repeat that mistake with AI systems, and incorporating ethics, explainability, and resilience to vulnerabilities into the design of such systems is a key step in this direction.”

These principles and recommendations echo concerns that the Computing Community Consortium (CCC) raises in its recently released Artificial Intelligence (AI) Roadmap, titled A 20-Year Community Roadmap for AI Research in the US. The AI Roadmap identifies many challenges that must be addressed, including AI ethics: “responsible uses of AI, incorporating human values, respecting privacy, universal access to AI technologies, addressing AI systems bias particularly for marginalized groups, the role of AI in profiling and behavior prediction, as well as algorithmic fairness, accountability, and transparency” must all be tackled so that ethics and related responsibility principles become central elements in the design and operation of AI systems. The roadmap, led by Yolanda Gil (University of Southern California and President of AAAI) and Bart Selman (Cornell University and President-Elect of AAAI), is the result of a year-long effort by the CCC and over 100 members of the research community.