In the United States, “AI” refers to a spectrum of software‑driven systems, often built on machine learning, that predict outcomes, generate content, or support decisions. U.S. policy treats AI as a socio‑technical system that must be governed throughout its lifecycle, anchored by the NIST AI Risk Management Framework, Executive Order 14110, and enforcement of existing laws by federal agencies.
A key distinction up front: there is no single federal AI regulator. Instead, governance is layered across voluntary standards and guidance, executive actions, and enforcement of existing laws by multiple agencies.
NIST’s organizing framework. The National Institute of Standards and Technology publishes the AI Risk Management Framework, a voluntary, sector‑agnostic guide to manage AI risks across the lifecycle. It emphasizes trustworthiness characteristics (e.g., validity, safety, transparency, privacy, and fairness) and outlines four core functions—Govern, Map, Measure, Manage—to structure practical risk controls.
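To make the four functions concrete, here is a minimal sketch of how a team might index its controls against them. The RmfRecord schema and its contents are our own illustration of the idea, not a structure NIST prescribes.

```python
from dataclasses import dataclass, field

# Hypothetical record aligning one AI system with the NIST AI RMF's
# four core functions. Field names and entries are illustrative.
@dataclass
class RmfRecord:
    system_name: str
    govern: list[str] = field(default_factory=list)   # policies, roles, accountability
    map: list[str] = field(default_factory=list)      # context, intended use, impacted groups
    measure: list[str] = field(default_factory=list)  # tests, metrics, evaluations
    manage: list[str] = field(default_factory=list)   # monitoring, response, remediation

record = RmfRecord(
    system_name="loan-underwriting-model",
    govern=["named accountable owner", "model risk policy v2"],
    map=["consumer credit decisions", "adverse-action notices required"],
    measure=["disparate-impact test (quarterly)", "accuracy on holdout set"],
    manage=["drift monitoring", "documented rollback plan"],
)
```

Even a simple index like this makes gaps visible: an empty list under any function is a prompt to ask what control should live there.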
Executive Order 14110. Signed on October 30, 2023, and published in the Federal Register that November, Executive Order 14110 directs federal agencies to advance AI safety and security standards, strengthen civil rights and consumer protections, and promote responsible innovation. It tasks agencies such as NIST with developing guidance, encourages transparency for rights‑ and safety‑impacting uses, and addresses distinct national security contexts.
OMB M‑24‑10 (for federal agencies). The U.S. Office of Management and Budget sets requirements for federal use of AI in its March 2024 memo, including inventories, risk assessments for rights‑ and safety‑impacting systems, and governance roles. See OMB Memorandum M‑24‑10 (PDF). While aimed at agencies, the memo’s themes—clear accountability, documentation, and vendor requirements—often influence private‑sector expectations and contracts.
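As one sketch of the memo’s inventory theme, a use‑case record might look like the following. The keys and example values are hypothetical; M‑24‑10 describes requirements, not a schema.

```python
# Illustrative AI use-case inventory entry in the spirit of M-24-10.
# All keys and values here are assumptions for demonstration.
inventory_entry = {
    "use_case": "resume screening assistant",
    "owner": "HR Technology, named accountable official",
    "rights_impacting": True,    # affects hiring decisions
    "safety_impacting": False,
    "risk_assessment": {
        "completed": True,
        "date": "2024-06-01",
        "findings": ["bias evaluation required before each model update"],
    },
    "vendor": {"name": "ExampleVendor", "contract_requires_documentation": True},
}

# A rights- or safety-impacting flag should trigger heightened
# practices: testing, monitoring, and human oversight.
if inventory_entry["rights_impacting"] or inventory_entry["safety_impacting"]:
    print(f"{inventory_entry['use_case']}: apply heightened risk controls")
```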
As of this writing, this triad of NIST guidance, the Executive Order, and OMB’s federal‑use policy provides a common vocabulary for lifecycle governance in the U.S.
Civil rights, consumer protection, fair lending, and competition laws apply to automated systems. In April 2023, four agencies affirmed this in a coordinated statement—there is no carve‑out for AI. See the joint statement by the FTC, DOJ, EEOC, and CFPB (PDF). For organizations, that means:
Practically, “we used AI” is not a defense. You need evidence of appropriate testing, oversight, and remediation when harms or errors occur.
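One lightweight way to build that evidence trail is an append‑only log of evaluations, overrides, and remediations. The log_evidence helper, its fields, and the file path below are illustrative assumptions, not a standard format.

```python
import datetime
import json

# Minimal sketch of an evidence log: every evaluation, human override,
# and remediation gets a timestamped, append-only record that can be
# produced on request. Fields and path are illustrative.
def log_evidence(event_type: str, detail: dict, path: str = "ai_evidence_log.jsonl") -> None:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "event_type": event_type,  # e.g., "evaluation", "override", "remediation"
        **detail,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_evidence("evaluation", {
    "model": "credit-scorer-v3",
    "test": "adverse-impact ratio",
    "result": 0.91,
    "threshold": 0.80,
    "passed": True,
})
```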
Beyond federal enforcement, a growing patchwork of state privacy laws affects AI data practices—especially around collecting, processing, and sharing personal information. California, Colorado, Virginia, Connecticut, Utah, and others establish consumer rights (like access, deletion, or certain opt‑outs) and require data protection assessments for higher‑risk processing in some cases.
Two practical notes for teams:
1. Map where personal information enters your AI systems (training data, prompts, vendor feeds) so you can honor access, deletion, and opt‑out requests end to end; a minimal sketch follows this list.
2. Where a state requires data protection assessments for higher‑risk processing, fold them into your existing AI review process rather than running them as a separate exercise.
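Here is a minimal sketch of the first note, assuming a simple request dispatcher. The handle_rights_request function and its request types are hypothetical, and the actual obligations vary by state and by your data practices.

```python
# Illustrative dispatcher for consumer rights requests under state
# privacy laws. Request types and actions are simplified assumptions.
def handle_rights_request(request_type: str, user_id: str) -> str:
    if request_type == "access":
        return f"compile personal data held for {user_id}, including AI training and inference records"
    if request_type == "delete":
        return f"delete {user_id}'s personal data and propagate the deletion to downstream AI datasets"
    if request_type == "opt_out":
        return f"suppress {user_id} from sale/sharing and from covered profiling"
    raise ValueError(f"unknown request type: {request_type}")

print(handle_rights_request("delete", "user-123"))
```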
Adoption is broadening across industries, with both predictive and generative AI in production. According to the 2025 AI Index Report from the Stanford Institute for Human‑Centered Artificial Intelligence (HAI), U.S. private AI investment reached $109.1 billion in 2024, while global corporate AI investment totaled $252.3 billion. The report also highlights the surge in generative AI funding and the integration of AI across enterprise functions. Numbers vary by methodology, but the direction is clear: significant, sustained investment and adoption.
Implication: The governance basics above are no longer optional “extras”—they’re core to building and buying AI responsibly at scale.
Use these steps as a concise, cross‑functional playbook. They reflect the themes in NIST’s framework, the Executive Order, and federal agency expectations.
1. Inventory your AI uses, including vendor‑supplied systems, and name an accountable owner for each.
2. Classify each use: does it affect individual rights (hiring, lending, housing) or safety?
3. Assess and document risks before deployment, with heightened review for rights‑ and safety‑impacting systems.
4. Test for validity, bias, robustness, and security, and keep the evidence.
5. Monitor systems in production, with clear escalation and remediation paths when harms or errors surface.
6. Flow the same expectations into vendor contracts: documentation, testing evidence, and incident reporting.
Is there a single federal AI regulator in the U.S.?
No. AI governance is multi‑agency. NIST provides voluntary standards; the Executive Branch sets policy direction (e.g., Executive Order 14110); OMB governs federal agency use; enforcement of existing laws comes from agencies like the FTC, EEOC, CFPB, DOJ, and sector regulators.
Does existing law already cover AI harms?
Yes. Consumer protection, civil rights, and competition laws apply to automated systems. See the coordinated FTC/DOJ/EEOC/CFPB statement (2023). Agencies have reinforced that “AI” is not an excuse for non‑compliance.
How do generative AI risks differ from traditional ML?
Many lifecycle risks overlap (data quality, bias, robustness, security). Generative AI adds content‑centric risks such as misinformation, intellectual property concerns, and prompt injection or data leakage. Mitigate with evaluations tailored to generation quality and safeguards like filtering, citation practices, and human review for high‑stakes outputs.
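As a rough illustration of the “filtering plus human review” idea, the safeguard function below flags high‑stakes prompts for human review and blocks outputs that match naive leakage patterns. The topic set and blocklist are toy assumptions; production systems typically use trained classifiers and richer policies.

```python
# Toy safeguard: route high-stakes outputs to human review and block
# obvious leakage. Topics and blocklist are illustrative only.
HIGH_STAKES_TOPICS = {"medical", "legal", "financial"}
BLOCKLIST = {"ssn", "password"}

def safeguard(prompt: str, output: str) -> dict:
    needs_review = any(t in prompt.lower() for t in HIGH_STAKES_TOPICS)
    leaked = [w for w in BLOCKLIST if w in output.lower()]
    return {
        "release": not leaked,          # block possible data leakage outright
        "human_review": needs_review,   # route high-stakes answers to a reviewer
        "flags": leaked,
    }

print(safeguard("Summarize this legal contract", "The contract says..."))
# -> {'release': True, 'human_review': True, 'flags': []}
```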
In U.S. practice, AI is best understood as a set of socio‑technical systems that demand continuous governance—not a single technology you can “set and forget.” If your organization can inventory AI uses, evaluate and monitor them with documented controls, and align with federal guidance while respecting state privacy obligations, you’ll be positioned to adopt AI responsibly and withstand regulatory scrutiny.