Welcome to Legal Lines Around AI, a six‑part blog series exploring how AI laws are taking shape in the United States and what those changes mean for businesses using, building, or relying on AI systems. Throughout this series, we’ll break down emerging legal requirements, highlight risk triggers, and offer practical guidance to help organizations navigate an increasingly complex and fragmented AI governance environment.
While AI regulation comes in many forms, this series focuses primarily on comprehensive, cross‑sector AI laws—not industry‑specific rules governing areas like health care, employment, or AI companion chatbots. Our goal is to help businesses understand the broad frameworks that are most likely to apply across operations, technologies, and use cases.
Before diving into specific obligations, it’s important to understand the legal backdrop shaping AI governance today.
A Shifting Federal Approach to AI
At the federal level, AI policy has undergone a dramatic shift in just a few years.
In October 2023, the Biden Administration issued Executive Order 14110, the most sweeping federal AI action to date. That order directed federal agencies to address AI‑related risks tied to safety, civil rights, consumer protection, privacy, and the government’s own use of AI systems.
That approach changed course in January 2025, when the Trump Administration rescinded Executive Order 14110 and adopted a more innovation‑first, deregulatory posture, emphasizing U.S. competitiveness in AI development. A follow‑up executive order issued in December 2025 reinforced that shift, signaling a preference for minimal regulatory burden and raising the possibility of future federal preemption of state AI laws, but stopping short of establishing a comprehensive federal AI statute.
Regulation Without a Federal AI Law
Even without a dedicated federal AI law, AI regulation hasn’t stopped.
Federal agencies have continued to apply existing authority to AI‑related practices. The Federal Trade Commission, for example, has made clear that it will use its authority over unfair or deceptive acts or practices to police AI claims, data practices, bias, and consumer harm. In practice, that means AI systems are already subject to scrutiny, even in the absence of new legislation.
States Step In—and Move Fast
With no comprehensive federal framework in place, states have become the primary drivers of AI regulation.
Early state regulation often came through comprehensive privacy laws, which introduced direct oversight of profiling, generally defined as the automated processing of personal data to evaluate or predict aspects of an identified or identifiable consumer, such as economic circumstances, health, preferences, behavior, reliability, location, or movements.
These laws impose heightened obligations when profiling is used to produce “legal or similarly significant effects,” including decisions affecting access to financial services, employment, housing, insurance, education, health care, criminal justice outcomes, or other essential goods and services. Those obligations may include consent requirements, opt‑out rights, and mandatory risk assessments. Colorado and California went a step further, adopting detailed automated decision‑making regulations under their respective privacy laws that add structure and specificity to these requirements.
Beginning in 2023, and accelerating through 2025, several states expanded beyond privacy laws, enacting both sector‑specific and cross‑sector AI legislation addressing automated decision‑making, algorithmic discrimination, deepfakes, consumer interactions, employment screening, children’s data, and broader consumer protection concerns. The result is a rapidly evolving and increasingly complex patchwork of state AI requirements.
Why “High‑Risk” AI Matters
Many of the most significant AI obligations turn on how an AI system is used and who is responsible for it.
States like Colorado and California distinguish between organizations that develop or substantially modify AI systems (such as a technology company that designs and trains an algorithmic résumé‑screening tool) and those that deploy AI systems (such as an employer that uses that tool to evaluate job applicants). In many cases, obligations are triggered only when an AI system is deemed “high risk.”
Colorado’s most comprehensive AI law applies broadly across entities, while California’s most comprehensive AI requirements apply to organizations that meet the definition of a “business” under the CCPA. Understanding where your organization fits, and whether your AI systems fall into a high‑risk category, is often the key to determining which legal requirements apply.
What Businesses Should Be Doing Now
Even as the legal landscape continues to shift, one takeaway is already clear: AI governance starts with visibility.
Businesses using AI should:
- Inventory where and how AI systems are used across the organization, and
- Assess whether those systems make, or materially influence, significant decisions about consumers, employees, or applicants.
That threshold often determines whether heightened legal obligations apply; the short sketch below illustrates one way to get started.
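To make the inventory step concrete, here is a minimal, hypothetical sketch in Python of how an organization might record each AI system and flag those touching the consequential-decision domains discussed above. The record fields, the domain list, and the flagging heuristic (`AISystemRecord`, `CONSEQUENTIAL_DOMAINS`, `may_be_high_risk`) are illustrative assumptions for internal triage, not a legal test drawn from any statute.

```python
from dataclasses import dataclass, field

# Illustrative assumption: domains that state AI and privacy laws tend to
# treat as producing "legal or similarly significant effects." The exact
# list varies by statute; consult counsel for the applicable law.
CONSEQUENTIAL_DOMAINS = {
    "financial services", "employment", "housing", "insurance",
    "education", "health care", "criminal justice", "essential services",
}

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI-system inventory."""
    name: str
    owner: str                       # team or business unit responsible
    role: str                        # "developer" or "deployer"
    purpose: str
    decision_domains: set[str] = field(default_factory=set)
    influences_significant_decisions: bool = False

def may_be_high_risk(record: AISystemRecord) -> bool:
    """Rough triage heuristic, not a legal determination.

    Flags a system for closer legal review when it materially influences
    significant decisions AND operates in a consequential domain.
    """
    return record.influences_significant_decisions and bool(
        record.decision_domains & CONSEQUENTIAL_DOMAINS
    )

# Example: the résumé-screening deployment described earlier.
resume_screener = AISystemRecord(
    name="resume-screening-tool",
    owner="HR",
    role="deployer",
    purpose="Rank incoming job applications",
    decision_domains={"employment"},
    influences_significant_decisions=True,
)
print(may_be_high_risk(resume_screener))  # True -> escalate for legal review
```

The point of a record like this is visibility, not automation: a flagged system still needs counsel to map it against the specific state laws that apply to the organization's role as developer or deployer.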
Coming Up…
In the next installment of Legal Lines Around AI, we’ll take a closer look at what constitutes a “high‑risk AI system” and why that distinction is critical for compliance planning.
* Dylan Shuster contributed to this post.