Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


CSET Releases Reports to Help Organizations Implement Responsible AI

June 6th, 2023 / in AI, Announcements / by Maddy Hunter

With the rise of Artificial Intelligence (AI) and its increasingly ubiquitous role in society, the Biden administration, a multitude of government agencies, and nonprofits are turning their attention to the assurance and implementation of responsible AI practices. The Center for Security and Emerging Technology (CSET) is no exception and has contributed to the effort with three recent reports seeking to help organizations implement responsible AI.

A Matrix for Selecting Responsible AI Frameworks

By Mina Narayanan and Christian Schoeberl

Synopsis: Process frameworks provide a blueprint for organizations implementing responsible artificial intelligence (AI). A new issue brief by CSET’s Mina Narayanan and Christian Schoeberl presents a matrix that organizes approximately 40 process frameworks to help organizations select and apply frameworks that meet their needs. The matrix is organized around the users of a framework, namely people on development, production, and governance teams. To help these users select the frameworks that best serve their needs, the matrix further classifies frameworks by their respective areas of focus: an AI system’s components, an AI system’s lifecycle stages, or characteristics related to an AI system.

This new issue brief is part of CSET’s line of research focused on AI assessment. More information about anticipated future work is available in the research agenda “One Size Does Not Fit All,” found below.

One Size Does Not Fit All:
Assessment, Safety, and Trust for the Diverse Range of AI Products, Tools, Services, and Resources

By Heather Frase

Artificial intelligence is so diverse that no single one-size-fits-all assessment approach can be adequately applied to it. AI systems vary widely in their functionality, capabilities, and outputs. They are also created using different tools, data modalities, and resources, which adds to the diversity of approaches needed to assess them. Thus, a collection of approaches and processes is needed to cover the wide range of AI products, tools, services, and resources.

A Common Language for Responsible AI:
Evolving and Defining DOD Terms for Implementation

By Emelia Probasco

Policymakers, engineers, program managers, and operators need the bedrock of a common set of terms to instantiate responsible AI for the Department of Defense. Rather than creating a DOD-specific set of terms, this paper argues that the DOD could benefit from adopting, with only two exceptions, the key characteristics defined by the National Institute of Standards and Technology in its draft AI Risk Management Framework.

To read more about what the government is doing to support responsible AI, you can check out recent initiatives from the Biden Administration on the CCC Blog.

