The regulatory landscape for artificial intelligence in the United States is rapidly evolving, moving from abstract principles to concrete legal and compliance obligations. For Chief Information Officers (CIOs) and Chief Information Security Officers (CISOs), aligning with these multifaceted regulations is no longer a future concern but a present-day imperative. This article provides an authoritative overview of the key federal and state-level AI regulations, including President Biden's Executive Order 14110, the NIST AI Risk Management Framework, and enforcement actions by the SEC and FTC. It also covers emerging state laws and looks ahead at potential future directives.
At the federal level, a multi-pronged approach is underway to govern the development and deployment of AI. This effort is characterized by a blend of broad policy directives, detailed risk management guidance, and aggressive enforcement of existing laws in the context of AI.
President Biden's Executive Order 14110, "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," issued in October 2023, represents a landmark in US AI policy. It establishes a comprehensive framework for governing AI, with a strong emphasis on safety, security, and fairness.
Key takeaways for management:
Cybersecurity at the Forefront: The order mandates the development of standards, tools, and best practices for AI safety and security. This includes protecting against AI-enabled cybersecurity threats and ensuring the security of AI systems themselves. The National Institute of Standards and Technology (NIST) is tasked with creating guidelines for red-teaming, model evaluation, and risk management.
Critical Infrastructure in Focus: The EO directs sector-specific agencies to assess and mitigate AI-related risks to critical infrastructure. CIOs and CISOs in sectors like energy, finance, and healthcare must anticipate and prepare for new reporting and compliance requirements.
Transparency and Watermarking: To combat AI-generated deception, the order calls for the development of standards for authenticating and watermarking AI-generated content. This will have significant implications for companies developing or using generative AI; a simplified sketch of the underlying provenance idea follows this list.
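To make the watermarking discussion concrete, the sketch below shows the basic idea behind content provenance: cryptographically binding metadata, including an AI-generated disclosure, to a piece of content so that tampering is detectable. This is a minimal Python illustration only; production systems would implement an emerging standard such as C2PA rather than this hand-rolled HMAC scheme, and the signing key and field names here are assumptions.

```python
import hashlib
import hmac
import json

# Hypothetical secret held by the content-generating service.
SIGNING_KEY = b"example-signing-key"

def attach_provenance(content: bytes, model_id: str) -> dict:
    """Bind provenance metadata to generated content via an HMAC signature."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "model_id": model_id,
        "generator": "ai",  # discloses that the content is AI-generated
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check that the content matches the record and the signature is intact."""
    claimed = dict(record)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(signature, expected)
        and claimed["content_sha256"] == hashlib.sha256(content).hexdigest()
    )

record = attach_provenance(b"An AI-generated paragraph.", model_id="demo-model-v1")
print(verify_provenance(b"An AI-generated paragraph.", record))  # True
print(verify_provenance(b"Tampered text.", record))              # False
```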
Federal agencies are not waiting for new laws to regulate AI. The Securities and Exchange Commission (SEC) and the Federal Trade Commission (FTC) are actively using their existing authorities to police the use of AI.
SEC Enforcement: The SEC is cracking down on "AI washing," where companies make exaggerated or false claims about their use of AI to attract investors. Enforcement actions have targeted companies for misrepresenting their AI capabilities in marketing materials and public filings. The message from the SEC is clear: AI-related claims must be accurate and substantiated.
FTC Enforcement: The FTC is focused on unfair and deceptive practices related to AI. This includes biased algorithms that result in discrimination, the use of AI for fraudulent purposes, and the insecure handling of personal data used to train AI models. The FTC has emphasized that there is no "AI exemption" from consumer protection and data security laws.
In addition to federal initiatives, a growing number of states are enacting their own AI-specific legislation, creating a complex compliance landscape for businesses operating nationwide.
The Colorado AI Act is a first-of-its-kind law that specifically targets algorithmic discrimination. It requires developers and deployers of "high-risk" AI systems to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. The law mandates impact assessments and transparency notices for these systems.
California continues to lead in technology regulation. The California Consumer Privacy Act (CCPA), as amended by the CPRA, already provides consumers with rights regarding automated decision-making. More recent legislative efforts focus on increasing transparency, requiring clear disclosure when consumers are interacting with AI and mandating the watermarking of AI-generated content.
Utah's AI Policy Act requires clear and conspicuous disclosure when individuals are interacting with generative AI. The law aims to prevent deception and ensure that consumers are aware when they are not communicating with a human.
The NIST AI Risk Management Framework (AI RMF), released as version 1.0 in January 2023, provides voluntary but highly influential guidance for managing the risks associated with AI. It is designed to be adaptable across sectors and applications and is quickly becoming a de facto standard for responsible AI governance.
Core Functions of the AI RMF:
Govern: Establishing a culture of risk management.
Map: Recognizing the context and potential impacts of AI systems.
Measure: Assessing and analyzing identified risks.
Manage: Prioritizing and acting on risks.
For CIOs and CISOs, the AI RMF offers a practical roadmap for integrating risk management into the entire AI lifecycle, from design and development to deployment and monitoring. Adopting this framework can not only mitigate risks but also demonstrate due care and a commitment to responsible AI practices.
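To ground the four functions, here is a minimal sketch of an AI risk register organized around them. Beyond the function names themselves, the fields, severity scale, and example entry are illustrative assumptions, not NIST-prescribed structures.

```python
from dataclasses import dataclass, field
from enum import Enum

class RmfFunction(Enum):
    GOVERN = "govern"
    MAP = "map"
    MEASURE = "measure"
    MANAGE = "manage"

@dataclass
class RiskEntry:
    system: str            # e.g., a deployed model or AI-enabled service
    description: str
    function: RmfFunction  # RMF function under which the activity falls
    severity: int          # 1 (low) .. 5 (critical); illustrative scale
    owner: str
    mitigations: list = field(default_factory=list)

@dataclass
class RiskRegister:
    entries: list = field(default_factory=list)

    def add(self, entry: RiskEntry) -> None:
        self.entries.append(entry)

    def open_by_severity(self, minimum: int) -> list:
        """Return entries at or above a severity threshold, worst first."""
        hits = [e for e in self.entries if e.severity >= minimum]
        return sorted(hits, key=lambda e: e.severity, reverse=True)

register = RiskRegister()
register.add(RiskEntry(
    system="resume-screening-model",
    description="Disparate selection rates across protected groups",
    function=RmfFunction.MEASURE,
    severity=4,
    owner="ml-platform-team",
    mitigations=["quarterly bias audit", "human review of rejections"],
))
for entry in register.open_by_severity(minimum=3):
    print(entry.system, entry.function.value, entry.severity)
```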
While the current regulatory landscape is shaped by the Biden administration, it is crucial to consider potential shifts in policy. For instance, some forward-looking analyses have discussed a hypothetical "Trump Executive Order" dated January 23, 2025. While not an official document, these discussions point to a potential focus on promoting US dominance in AI, reducing regulatory barriers perceived as hindering innovation, and scrutinizing AI for ideological bias. Although speculative, they underscore how quickly AI policy priorities can shift, and why governance programs should be built to withstand such changes.
For management, navigating this complex web of regulations requires a proactive and strategic approach. Here are key priorities for CIOs and CISOs:
Establish a Robust AI Governance Framework: Go beyond ad-hoc policies. Implement a comprehensive AI governance framework, leveraging the NIST AI RMF, that outlines clear roles, responsibilities, and processes for managing AI risks across the organization. This framework should be integrated with your existing cybersecurity and data governance programs.
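As one illustration of policy-as-code, the sketch below gates deployment of an AI system on a set of governance checkpoints. The checkpoint names and owning teams are hypothetical; the point is that governance requirements can be encoded and enforced programmatically rather than tracked ad hoc.

```python
# Hypothetical governance checkpoints each AI system must clear before launch,
# mapped to the team responsible for sign-off.
REQUIRED_CHECKPOINTS = {
    "impact_assessment": "risk-office",
    "security_review": "ciso-org",
    "privacy_review": "privacy-office",
    "model_card_published": "ml-platform-team",
}

def deployment_approved(completed: dict) -> tuple[bool, list]:
    """completed maps checkpoint name -> approving team (or is absent)."""
    missing = [
        name for name, owner in REQUIRED_CHECKPOINTS.items()
        if completed.get(name) != owner
    ]
    return (not missing, missing)

ok, missing = deployment_approved({
    "impact_assessment": "risk-office",
    "security_review": "ciso-org",
})
print(ok, missing)  # False ['privacy_review', 'model_card_published']
```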
Prioritize Transparency and Explainability: The "black box" nature of some AI models is a major regulatory concern. Invest in technologies and processes that enhance the transparency and explainability of your AI systems. Be prepared to explain how your models work, the data they are trained on, and the decisions they make.
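One widely used, model-agnostic explainability technique is permutation importance: measuring how much held-out performance degrades when each input feature is shuffled. The sketch below applies it with scikit-learn on synthetic data; your own models and features would substitute.

```python
# Minimal sketch of global feature importance via permutation importance.
# The dataset and model here are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade held-out accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f}")
```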
Conduct Rigorous Risk and Impact Assessments: For all high-risk AI systems, conduct thorough risk and impact assessments to identify and mitigate potential harms, including algorithmic bias and security vulnerabilities. These assessments should be a continuous process, not a one-time event.
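As a minimal example of what an impact assessment might quantify, the sketch below computes a demographic parity gap, the difference in positive-outcome rates across groups, on synthetic decision data. The 0.20 review threshold is illustrative; appropriate metrics and bounds depend on context and applicable law.

```python
# Illustrative fairness check: gap in approval rates between groups.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()
print(rates.to_dict())          # per-group approval rates
print(f"parity gap: {gap:.2f}")

# A governance policy might flag any gap above a threshold for human review.
THRESHOLD = 0.20  # illustrative; the right bound is context-dependent
if gap > THRESHOLD:
    print("Flag for fairness review")
```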
Strengthen Data Governance and Privacy: AI systems are data-hungry, making robust data governance and privacy practices more critical than ever. Ensure that the data used to train your models is accurate, representative, and collected and used in compliance with privacy regulations like the CCPA and GDPR.
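A simple, automatable data-governance check is comparing the composition of training data against a reference population. The sketch below does this for a hypothetical region column; the reference shares and the 10-point tolerance are assumptions.

```python
# Compare the demographic mix of training data to a reference population.
import pandas as pd

train = pd.DataFrame({"region": ["NE"] * 50 + ["SE"] * 30 + ["W"] * 20})
reference_shares = {"NE": 0.30, "SE": 0.35, "W": 0.35}  # e.g., customer base

observed = train["region"].value_counts(normalize=True)
for region, expected in reference_shares.items():
    drift = observed.get(region, 0.0) - expected
    status = "OK" if abs(drift) <= 0.10 else "UNDER/OVER-REPRESENTED"
    print(f"{region}: observed={observed.get(region, 0.0):.2f} "
          f"expected={expected:.2f} -> {status}")
```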
Scrutinize Third-Party AI Vendors: Your organization is responsible for the AI systems you deploy, even if they are developed by third-party vendors. Implement a stringent due diligence process for all AI vendors, assessing their security practices, data handling policies, and compliance with relevant regulations.
Develop an AI Incident Response Plan: Be prepared for AI-related incidents, such as model failures, security breaches, or the discovery of bias. Develop and test an AI-specific incident response plan that outlines how you will detect, respond to, and recover from such events.
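Detection is the first step of any incident response plan. One common drift trigger is the population stability index (PSI) between training-time and live input distributions; a rule of thumb treats PSI above 0.2 as significant, though that threshold is a convention, not a regulatory requirement. The sketch below is a minimal illustration on synthetic data.

```python
# Population stability index (PSI) as an AI incident-response trigger.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI over shared histogram bins; higher means more drift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)  # distribution seen at training time
live = rng.normal(0.8, 1.2, 10_000)      # shifted production traffic

score = psi(training, live)
print(f"PSI={score:.3f}")
if score > 0.2:
    print("Drift alert: open an AI incident and trigger the response plan")
```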
Foster a Culture of Responsible AI: AI governance is not just a technical issue; it's a cultural one. Foster a culture of responsible AI throughout the organization, from the data scientists building the models to the business units deploying them. This includes providing ongoing training on AI ethics, security, and compliance.
The era of unregulated AI is over. By taking a proactive and strategic approach to governance and risk management, CIOs and CISOs can not only ensure compliance with the evolving regulatory landscape but also build trust with customers, mitigate risks, and unlock the full potential of artificial intelligence for their organizations.
Connect with our subject matter experts to learn how we help organizations prepare their AI strategy.