Late last month, the Department of Commerce’s National Institute of Standards and Technology (“NIST”) released four draft publications regarding actions taken by the agency in response to President Biden’s executive order on AI (the “Order”; see our prior alert here),[1] which called for action within six months of its issuance. Adding to NIST’s mounting portfolio of AI-related guidance, these publications reflect months of research focused on identifying risks associated with the use of artificial intelligence (“AI”) systems and promoting the central goal of the Order: improving the safety, security and trustworthiness of AI. The four draft documents, further described below, are titled:
(1) The AI RMF Generative AI Profile;
(2) Secure Software Development Practices for Generative AI and Dual-Use Foundation Models;
(3) Reducing Risks Posed by Synthetic Content; and
(4) A Plan for Global Engagement on AI Standards.
In addition to the batch of draft documents, NIST introduced its pilot “NIST GenAI” program, a new evaluation series designed to assess generative AI (“GenAI”) technologies, primarily by establishing benchmark metrics and practices for distinguishing synthetic content from human-generated content in both text-to-text and text-to-image AI models. The program will evaluate GenAI tools through a series of challenge problems presented to both “generator”[2] teams (i.e., teams that produce AI-generated content) and “discriminator” teams (i.e., teams that classify content as either human-made or synthetic). The challenge problems are intended to evaluate the capabilities and limitations of AI tools and to use that information to “promote information integrity and guide the safe and responsible use of digital content,”[3] particularly by helping determine whether content was produced by a human or by an AI tool. The NIST GenAI evaluations will inform the work of the U.S. AI Safety Institute at NIST, and NIST is encouraging teams from academia, industry and other research centers to contribute to this research through the NIST GenAI platform.
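To make the generator/discriminator split concrete, the toy Python sketch below shows the general shape of a “discriminator” task in the text-to-text setting: given a text sample, return a score reflecting how likely the sample is to be machine-generated. The interface, field names and the sentence-length heuristic are our own illustrative assumptions, not NIST’s actual submission format or evaluation criteria.

```python
# Hypothetical sketch of a "discriminator" submission for a NIST GenAI-style
# text-to-text challenge. The Verdict type, classify() interface and the
# scoring heuristic are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class Verdict:
    sample_id: str
    ai_probability: float  # 0.0 = confidently human, 1.0 = confidently synthetic


def classify(sample_id: str, text: str) -> Verdict:
    """Toy heuristic: flag unusually uniform sentence lengths, a pattern
    sometimes associated with machine-generated prose. Real discriminators
    would use far more sophisticated statistical or learned signals."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    if len(sentences) < 2:
        return Verdict(sample_id, 0.5)  # not enough signal; stay neutral
    lengths = [len(s.split()) for s in sentences]
    mean = sum(lengths) / len(lengths)
    variance = sum((n - mean) ** 2 for n in lengths) / len(lengths)
    # Lower variance in sentence length -> higher synthetic score (toy rule).
    score = max(0.0, min(1.0, 1.0 - variance / 50.0))
    return Verdict(sample_id, score)


if __name__ == "__main__":
    print(classify("sample-001", "The cat sat. The dog ran. The bird flew."))
```

A real challenge submission would, of course, be scored against labeled generator output across many samples; the sketch is only meant to show the classify-and-score structure of the discriminator role.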
The AI RMF Generative AI Profile.[4] The “AI RMF Generative AI Profile,” a companion resource to the AI Risk Management Framework published by NIST in January 2023, aims to help organizations identify risks posed by GenAI technologies and decide how best to manage those risks in accordance with their business goals, legal and regulatory requirements and risk management priorities. The resource discusses risks that are unique to, or exacerbated by, the use of GenAI across the entire AI lifecycle, including, among others, eased access to chemical, biological, radiological or nuclear information, AI hallucinations, data privacy and information security breaches, and intellectual property infringement. For each of those (and other) risks, it proposes specific, actionable steps organizations can take to manage the risk. Organizations will find it helpful to review these action items when developing their AI policies or considering use cases for GenAI.
Secure Software Development Practices for Generative AI and Dual-Use Foundation Models.[5] The publication titled “Secure Software Development Practices for Generative AI and Dual-Use Foundation Models” is a companion resource to NIST’s Secure Software Development Framework published in February 2022. It augments the software development practices defined therein with considerations, recommendations, notes and references specific to GenAI tools and dual-use foundation models (as defined in the Order). The resource is intended to be used by AI model producers,[6] AI system producers[7] and AI system acquirers[8] to mitigate AI-specific risks both in AI model development and when incorporating and integrating AI models into other software.
Reducing Risks Posed by Synthetic Content.[9] The Order tasked NIST with providing a report on understanding the provenance and detection of synthetic content, a task reflected in NIST’s draft publication titled “Reducing Risks Posed by Synthetic Content.” The publication describes the potential harms posed by synthetic content, including the disproportionate negative effects synthetic content may have on public safety and democracy (most notably through the dissemination of mis- or disinformation), particularly for individuals and communities who face systemic and intersectional discrimination and bias, and the harms caused by synthetic child sexual abuse material and non-consensual intimate imagery of real individuals (i.e., “deepfake” pornography). The report surveys existing technical approaches to digital content transparency and examines current standards, tools and practices for authenticating content, tracking its provenance, and detecting and labeling synthetic content, such as digital watermarking and metadata recording, input and output data filtering, red-teaming, testing safeguards and other validation protocols. It also offers various use-case scenarios for considering which controls might be most effective and evaluates how those techniques may best be implemented. The report concludes that there is no “silver bullet to solve the issue of public trust in and safety concerns posed by digital content,” but offers NIST’s research to encourage understanding and lay the groundwork for further development and investigation into improved approaches to synthetic content provenance, detection, labeling and authentication.
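As one concrete illustration of the metadata-recording approach surveyed in the report, the sketch below attaches a minimal provenance record (a cryptographic hash plus origin fields) to a piece of content at creation time, so a downstream recipient can detect tampering and see that the content was disclosed as AI-generated. The record format and field names are our own simplified assumptions; production schemes (for example, C2PA manifests) are considerably richer and rely on digital signatures rather than bare hashes.

```python
# Minimal sketch of a provenance metadata record for synthetic content.
# Field names and the record format are illustrative assumptions only.

import hashlib
import json
from datetime import datetime, timezone


def make_provenance_record(content: bytes, generator: str) -> dict:
    """Issue a provenance record at creation time for a piece of content."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator,  # e.g., an identifier for the model or tool
        "created_at": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,       # disclosure that the content is AI-generated
    }


def verify(content: bytes, record: dict) -> bool:
    """Re-hash the content and compare; a mismatch means the content was
    modified after the record was issued (or the record belongs to other
    content)."""
    return hashlib.sha256(content).hexdigest() == record["sha256"]


if __name__ == "__main__":
    content = b"An AI-generated paragraph of text."
    record = make_provenance_record(content, generator="example-model-v1")
    print(json.dumps(record, indent=2))
    print("verified:", verify(content, record))        # True
    print("tampered:", verify(content + b"!", record))  # False
```

Even this bare-bones version illustrates the report’s point that provenance metadata travels alongside content: the record is only useful if intermediaries preserve it and consumers check it, which is why the report pairs metadata recording with detection and labeling techniques rather than relying on any single control.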
A Plan for Global Engagement on AI Standards.[10] The fourth draft publication announced by NIST, “A Plan for Global Engagement on AI Standards,” states at the outset that standards are critical to the advancement and adoption of novel and emerging technologies such as GenAI, where international players have so far diverged in their approaches and are looking for global alignment on AI-related standards, particularly in areas such as safety, interoperability and competition. The draft outlines a plan for global engagement in promoting and developing AI standards, calling for collaboration among key international stakeholders (such as governments, private industry players, academia, consumers and standards developing organizations) to coordinate the development of AI-related standards, continue foundational research on AI-related questions and promote information sharing. The draft invites feedback on topics that are ripe for AI standardization, including shared terminology for AI concepts (such as foundation models, model fine-tuning, AI red-teaming, open models and synthetic content) and cybersecurity risks distinct to AI technologies. It also identifies areas where more research is needed before standards can be developed, such as the energy consumption of AI models and incident response and recovery plans. NIST calls on the U.S. government to take the lead in driving this plan forward, including by increasing agencies’ capacity for standards participation, educating U.S. government staff and incorporating private sector feedback, working with allies to articulate shared principles for AI standards, and leveraging existing diplomatic relationships to promote exchanges between technical experts.
Although NIST’s draft publications together run to over 200 pages of guidelines and recommendations, the agency recognizes that its body of work on AI development and risk management is still at a foundational stage. The publications are not intended to be comprehensive; rather, they are offered to support future research and to give relevant stakeholders a shared vocabulary and set of concepts to work from. All four documents are open for public comment until June 2, 2024. Organizations can use these documents as helpful guidance when developing frameworks, playbooks and policies on the use of GenAI within their companies.
[1] Exec. Order No. 14110, 88 Fed. Reg. 75191 (Nov. 1, 2023).
[2] As of this writing, the NIST GenAI website only provides a description of “generator” participants and “discriminator” participants in the text-to-text context: https://ai-challenges.nist.gov/t2t
[3] https://www.commerce.gov/news/press-releases/2024/04/department-commerce-announces-new-actions-implement-president-bidens
[4] https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf
[5] https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-218A.ipd.pdf
[6] Described as “Organizations that are developing their own generative AI and dual-use foundation models”
[7] Described as “Organizations that are developing software that leverages a generative AI or dual-use foundation model”
[8] Described as “Organizations that are acquiring a product or service that utilizes one or more AI systems”
[9] https://airc.nist.gov/docs/NIST.AI.100-4.SyntheticContent.ipd.pdf
[10] https://airc.nist.gov/docs/NIST.AI.100-5.Global-Plan.ipd.pdf