
    Artificial Intelligence (AI): A Plain‑English Overview for the United States

    Tony Yan · September 26, 2025 · 5 min read
    Abstract

    In the United States, “AI” refers to a spectrum of software-driven systems—often machine learning—that help predict, generate, or support decisions. U.S. policy treats AI as a socio‑technical system that must be governed throughout its lifecycle, anchored by the NIST AI Risk Management Framework, Executive Order 14110 (Federal Register), and agency enforcement.

    What AI Is—and What It Isn’t

    • What it is: Practical AI today includes machine learning (predictive and generative), rules‑based systems, and automated decision systems used in sectors like finance, health, retail, and government services. These systems learn patterns from data or follow structured logic to classify, rank, recommend, generate content, or flag anomalies.
    • What it isn’t: AI is not a single product or a guarantee of full autonomy. Most real‑world systems are narrow (task‑specific), not “general intelligence.” AI also doesn’t automatically remove human responsibility; organizations remain accountable for outcomes.

    Helpful distinctions:

    • Narrow vs. general AI (AGI): Nearly all business use cases are narrow.
    • Predictive vs. generative: Predictive models estimate outcomes; generative models create text, images, code, or audio.
    • Foundation models vs. applications: Foundation models are broad, pre‑trained systems adapted for specific tasks; applications wrap models with domain data, interfaces, and controls.
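
    A minimal sketch of the predictive/generative distinction in code, assuming scikit-learn for the predictive side and the Hugging Face transformers pipeline for the generative side (the data, model choice, and use case here are purely illustrative):

    ```python
    # Predictive: estimate an outcome for new inputs from patterns in labeled data.
    from sklearn.linear_model import LogisticRegression

    X = [[620, 0.42], [710, 0.18], [560, 0.55], [690, 0.25]]  # e.g., credit score, debt-to-income
    y = [0, 1, 0, 1]                                           # 1 = approved in historical data
    clf = LogisticRegression().fit(X, y)
    print(clf.predict([[700, 0.20]]))  # predicted class for a new applicant

    # Generative: create new content conditioned on a prompt.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # small illustrative model
    result = generator("Summarize our refund policy:", max_new_tokens=40)
    print(result[0]["generated_text"])
    ```

    Both are narrow, task-specific systems; neither removes the need for the oversight described below.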

    The U.S. Governance Map: How AI Is Framed and Managed

    There is no single federal AI regulator. Instead, governance is layered: voluntary standards and guidance, executive actions, and enforcement of existing laws by multiple agencies.

    • NIST’s organizing framework. The National Institute of Standards and Technology publishes the AI Risk Management Framework, a voluntary, sector‑agnostic guide to manage AI risks across the lifecycle. It emphasizes trustworthiness characteristics (e.g., validity, safety, transparency, privacy, and fairness) and outlines four core functions—Govern, Map, Measure, Manage—to structure practical risk controls.

    • Executive Order 14110. Signed on October 30, 2023, and published in the Federal Register that November, Executive Order 14110 (Federal Register) directs federal agencies to advance AI safety and security standards, strengthen civil rights and consumer protections, and promote responsible innovation. It tasks agencies like NIST with developing guidance, encourages transparency for rights‑ and safety‑impacting uses, and addresses distinct national security contexts.

    • OMB M‑24‑10 (for federal agencies). The U.S. Office of Management and Budget sets requirements for federal use of AI in its March 2024 memo, including inventories, risk assessments for rights‑ and safety‑impacting systems, and governance roles. See OMB Memorandum M‑24‑10 (PDF). While aimed at agencies, the memo’s themes—clear accountability, documentation, and vendor requirements—often influence private‑sector expectations and contracts.

    As of 2025‑09‑26, this triad—NIST guidance, the Executive Order, and OMB’s federal use policy—provides a common vocabulary for lifecycle governance in the U.S.
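
    One way teams turn that shared vocabulary into an internal artifact is a per-system checklist organized by the AI RMF's Govern, Map, Measure, and Manage functions. The sketch below is illustrative only; the field names and example controls are assumptions, not an official NIST or OMB template:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class RMFChecklist:
        """Illustrative per-system checklist organized by the NIST AI RMF functions."""
        system_name: str
        govern: list[str] = field(default_factory=lambda: [
            "executive owner assigned", "roles and policies documented"])
        map: list[str] = field(default_factory=lambda: [
            "intended use and affected populations described", "data provenance recorded"])
        measure: list[str] = field(default_factory=lambda: [
            "bias, robustness, and privacy evaluations completed", "performance thresholds set"])
        manage: list[str] = field(default_factory=lambda: [
            "production monitoring and incident response defined", "retirement plan documented"])

    checklist = RMFChecklist(system_name="claims-triage-model")
    print(checklist.govern)
    ```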

    Enforcement Posture: “No AI Exemption” Under Existing Law

    Civil rights, consumer protection, fair lending, and competition laws apply to automated systems. In April 2023, four agencies affirmed this in a coordinated statement—there is no carve‑out for AI. See the joint statement by the FTC, DOJ, EEOC, and CFPB (PDF). For organizations, that means:

    • Deceptive AI marketing claims can trigger enforcement (truth‑in‑advertising and unfair/deceptive practices).
    • Biased or discriminatory outcomes in hiring, housing, credit, or services can violate civil rights and fair lending laws.
    • Data misuse and inadequate security controls can breach privacy and consumer protection requirements.

    Practically, “we used AI” is not a defense. You need evidence of appropriate testing, oversight, and remediation when harms or errors occur.

    The State Privacy Backdrop

    Beyond federal enforcement, a growing patchwork of state privacy laws affects AI data practices—especially around collecting, processing, and sharing personal information. California, Colorado, Virginia, Connecticut, Utah, and others establish consumer rights (like access, deletion, or certain opt‑outs) and require data protection assessments for higher‑risk processing in some cases.

    Two practical notes for teams:

    • Treat automated decision‑making and profiling as potential high‑risk processing, particularly when outcomes can affect individuals’ rights or access to opportunities.
    • Track state rulemaking and attorney‑general guidance. Requirements evolve, and some provisions—such as specific rights related to automated decision‑making—are still being refined through regulations.

    Adoption and Economic Snapshot

    Adoption is broadening across industries, with both predictive and generative AI in production. According to the Stanford Human‑Centered AI Institute’s 2025 AI Index Report, U.S. private AI investment reached $109.1 billion in 2024, while global corporate AI investment totaled $252.3 billion. The report also highlights the surge of generative AI funding and the integration of AI across enterprise functions. Numbers vary by methodology, but the direction is clear: significant, sustained investment and adoption.

    Implication: The governance basics above are no longer optional “extras”—they’re core to building and buying AI responsibly at scale.

    Practical Steps to Adopt AI Safely (Lifecycle Checklist)

    Use these steps as a concise, cross‑functional playbook. They reflect the themes in NIST’s framework, the Executive Order, and federal agency expectations.

    1. Set up accountable governance
    • Assign executive ownership and define roles for product, data science, risk, legal, and security.
    • Maintain an AI use inventory, flagging systems that impact safety or rights (see the sketch after this checklist).
    2. Map the system and its context
    • Clarify the intended use, users, and affected populations; identify potential harms and benefits.
    • Document data provenance, quality, lineage, and consent posture.
    3. Measure and evaluate
    • Establish test, evaluation, verification, and validation (TEVV) procedures before and after deployment.
    • Evaluate for bias, robustness, privacy leakage, and security risks; set performance thresholds and guardrails.
    4. Manage risks in production
    • Implement human‑in‑the‑loop or human‑on‑the‑loop controls for high‑impact decisions.
    • Provide notices and explanations appropriate to the context; define escalation and appeal pathways.
    • Monitor drift and incidents; log decisions and outcomes for auditability.
    5. Document and disclose
    • Maintain model cards or system cards summarizing purpose, data, limitations, and evaluation results.
    • For vendors: require evidence of lifecycle controls (e.g., risk assessments, security attestations, and robust change management).
    6. Plan for change and retirement
    • Use change control for model updates; re‑validate when data, code, or context changes significantly.
    • Decommission systems safely, including data retention/disposal and mitigation of downstream dependencies.
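
    As a concrete illustration of step 1 (and the documentation themes in steps 2 through 5), here is a minimal sketch of an AI use inventory record. The field names and review logic are assumptions for illustration, not a schema mandated by NIST or OMB:

    ```python
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AIUseRecord:
        """Illustrative AI use inventory entry for governance tracking."""
        system_name: str
        owner: str
        intended_use: str
        rights_impacting: bool          # affects access to credit, jobs, housing, benefits, etc.
        safety_impacting: bool
        data_sources: list[str] = field(default_factory=list)
        last_evaluation: date | None = None
        evaluation_passed: bool | None = None

        def needs_review(self) -> bool:
            """Flag high-impact systems that lack a passing, documented evaluation."""
            high_impact = self.rights_impacting or self.safety_impacting
            return high_impact and self.evaluation_passed is not True

    record = AIUseRecord(
        system_name="resume-screening-model",
        owner="talent-acquisition",
        intended_use="rank applicants for recruiter review",
        rights_impacting=True,
        safety_impacting=False,
        data_sources=["historical applications"],
    )
    print(record.needs_review())  # True until an evaluation is documented and passes
    ```

    In practice this kind of record usually lives in a governance registry rather than code, but the fields map directly to the documentation the checklist calls for.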

    Sector Notes (Illustrative)

    • Health: AI/ML in medical devices is subject to lifecycle oversight, including expectations for change‑control plans and transparency. This raises the bar for documentation, monitoring, and post‑market learning.
    • Finance: Credit and fraud models intersect with fair lending, consumer protection, and model risk management; expect strong documentation and explainability expectations.
    • Public sector: Agencies must inventory and assess rights‑ and safety‑impacting AI, with transparency measures and leadership accountability.

    Frequently Asked Questions

    • Is there a single federal AI regulator in the U.S.?
      No. AI governance is multi‑agency. NIST provides voluntary standards; the Executive Branch sets policy direction (e.g., Executive Order 14110); OMB governs federal agency use; enforcement of existing laws comes from agencies like the FTC, EEOC, CFPB, DOJ, and sector regulators.

    • Does existing law already cover AI harms?
      Yes. Consumer protection, civil rights, and competition laws apply to automated systems. See the coordinated FTC/DOJ/EEOC/CFPB statement (2023). Agencies have reinforced that “AI” is not an excuse for non‑compliance.

    • How do generative AI risks differ from traditional ML?
      Many lifecycle risks overlap (data quality, bias, robustness, security). Generative AI adds content‑centric risks such as misinformation, intellectual property concerns, and prompt injection or data leakage. Mitigate with evaluations tailored to generation quality and safeguards like filtering, citation practices, and human review for high‑stakes outputs.
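
    As an illustration of the “filtering plus human review” safeguard mentioned above, here is a minimal post-generation gate. The patterns, routing logic, and example text are assumptions; real deployments would use more robust detection and policy-specific rules:

    ```python
    import re

    # Illustrative patterns a policy team might block outright (e.g., SSN-like strings).
    BLOCKED_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]

    def gate_output(text: str, high_stakes: bool) -> dict:
        """Route generated text: block risky content, send high-stakes output to human review."""
        if any(re.search(p, text) for p in BLOCKED_PATTERNS):
            return {"status": "blocked", "reason": "matched a blocked pattern"}
        if high_stakes:
            return {"status": "needs_human_review", "text": text}
        return {"status": "released", "text": text}

    # High-stakes example: a customer-facing decision explanation goes to a reviewer first.
    print(gate_output("Your claim was denied because ...", high_stakes=True))
    ```

    A rules-based gate like this complements, rather than replaces, the evaluation and human-review steps in the lifecycle checklist above.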

    Bottom Line

    In U.S. practice, AI is best understood as a set of socio‑technical systems that demand continuous governance—not a single technology you can “set and forget.” If your organization can inventory AI uses, evaluate and monitor them with documented controls, and align with federal guidance while respecting state privacy obligations, you’ll be positioned to adopt AI responsibly and withstand regulatory scrutiny.
