AI Regulation in the US: NIST Framework as a Guide for Businesses
Doing Business with AI in the US: Better to Trust the NIST Framework Than Wait for Federal Legislation
Isabella V, 14 December 2024

In the United States, federal AI regulation is unlikely in the near term. With legislative momentum building at the state level instead, businesses and users are advised to rely on the NIST framework to manage AI risks.

Key Points:

  • Federal Legislative Stalemate: a unified AI law appears unlikely due to political divisions.
  • Focus on Voluntary Frameworks: NIST offers a model for managing AI risks.
  • Evolving State Regulations: California, New York, Texas, and other states are working on AI bills.
  • International Harmonization: the NIST framework can serve as a bridge between US regulations and the EU AI Act.

In the United States, federal legislation to regulate AI use in businesses seems unlikely, despite growing attention to technology governance. Chandler Morse, vice president of corporate affairs at Workday, said the narrow majority in Congress makes it unlikely that there will be any unified legislative action on this issue during the current presidential administration. However, states are emerging as leaders in the development of AI regulations, with California, New York, Colorado and Texas already engaged in specific initiatives.

In California, Senate Bill 1047, designed to ensure AI safety, was vetoed by Governor Gavin Newsom, who urged lawmakers to propose an improved version. Meanwhile, other bills, such as AB 2930 on automated decision-making, remain dormant, stalled largely by disputes over users' liability for managing AI-based products. This underscores that, despite the setbacks, states are determined to develop regulations, with California likely to reintroduce new bills soon.

In the meantime, to avoid operational gaps, companies are encouraged to turn to the AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST). Published in 2023, this voluntary framework offers practical guidance for identifying, assessing, and managing AI-related risks, promoting responsible and safe use. The framework, which builds on the previous success of NIST’s cybersecurity and privacy frameworks, is intended to serve as an interim operational standard, pending more robust regulation. Its importance is amplified by the need to harmonize with regulations already introduced in other regions, such as the European Union’s AI Act, to avoid regulatory discrepancies on a global scale.

Despite the framework’s voluntary approach, NIST is gaining a central role in US AI policy. Morse says engaging companies and institutions is key to determining the future directions of technology governance. This inclusive approach could foster a balance between innovation and security. Meanwhile, experts like Joel Meyer suggest that even with a change in administration, established initiatives like NIST’s AI Safety Institute could survive, continuing to provide a base of operations for AI governance.

The United States is at a unique moment, still in the early stages of regulation. Decisions made today, both at the state level and through the adoption of voluntary frameworks, will profoundly influence the future development of AI.