Under comprehensive AI laws, one question largely determines a company’s regulatory exposure: are you developing or deploying an AI system that qualifies as “high risk”?
In this second installment of Legal Lines Around AI, we take a closer look at how emerging AI laws draw that line, why the high‑risk designation matters, and how it can fundamentally reshape a business’s legal, operational, and governance approach to AI.
Developers vs. Deployers: Why the Distinction Matters
As discussed in the first post in this series, comprehensive AI laws draw a critical distinction between developers and deployers of AI systems.
- A developer is the entity that builds an AI system or intentionally and substantially modifies an existing one.
- A deployer is the entity that puts the AI system into use.
Not every system change turns a deployer into a developer. A substantial modification typically means a material change that affects an AI system’s outcomes, risks, or decision‑making impact. Routine maintenance, updates, or bug fixes generally do not rise to that level. This distinction matters because legal obligations and potential liability often differ significantly depending on an organization’s role.
What Is a “Significant Decision”?
The concept of high‑risk AI is grounded in the real‑world impact of an AI system’s decision‑making, not the sophistication of the technology itself.
Rather than regulating all AI systems uniformly, comprehensive AI laws focus heightened requirements on systems that make or play a substantial role in making significant decisions affecting individuals’ access to, or the cost or terms of:
- Financial and lending services
- Employment and workplace opportunities
- Housing
- Insurance
- Education
- Health care
- Legal services or criminal justice outcomes
- Essential goods and services
This risk‑based approach lets lawmakers concentrate regulatory scrutiny on the AI systems most likely to result in discrimination, unfair treatment, or privacy harm, while preserving flexibility for lower‑risk uses and avoiding unnecessary constraints on AI innovation.
Defining “High‑Risk” AI Systems
Most comprehensive AI laws define an AI system broadly as a machine‑based system designed to achieve explicit or implicit objectives by inferring from inputs how to generate outputs, such as predictions, recommendations, or decisions. An AI system becomes “high risk” when it makes, or is a substantial factor in making, significant decisions.
Importantly, an AI system does not lose its high‑risk status simply because a human remains “in the loop.” If an AI system’s outputs meaningfully influence a significant decision, heightened legal obligations may still apply, even where a human is involved. That said, certain requirements may be reduced or modified when there is meaningful human involvement in the decision‑making process. To qualify as meaningful, human involvement must reflect the exercise of independent judgment, not mere rubber‑stamping of AI outputs. In practice, this typically requires training decision‑makers to understand and critically assess AI recommendations, along with clearly documented authority to override, modify, or reject AI‑driven outcomes.
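For organizations looking to operationalize this standard, the sketch below shows one way an internal tool might record human review of an AI‑assisted decision. It is a minimal illustration in Python; the schema and field names are our own assumptions, not requirements drawn from any statute.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HumanReviewRecord:
    """Hypothetical record of human review of an AI-assisted decision.

    The schema is illustrative; no statute prescribes these fields.
    """
    decision_id: str
    ai_recommendation: str        # e.g., "approve" or "deny"
    reviewer_id: str
    reviewer_trained: bool        # reviewer completed training on assessing AI outputs
    final_decision: str           # may differ from the AI recommendation
    override_rationale: str = ""  # documented basis for departing from the AI output
    reviewed_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_meaningful_review(self) -> bool:
        """Rough proxy for meaningful involvement: the reviewer was trained,
        and any departure from the AI recommendation is documented."""
        if not self.reviewer_trained:
            return False
        if self.final_decision != self.ai_recommendation:
            return bool(self.override_rationale.strip())
        return True
```

A record like this helps demonstrate that reviewers exercised independent judgment rather than rubber‑stamping the system’s output.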
Classification Is the Critical First Step
From a risk‑management perspective, accurate classification of the AI system is essential.
Businesses should evaluate whether any AI systems used across the organization could be considered high risk. That analysis should include the following factors (a rough triage sketch follows the list):
- The nature of the decisions involved
- Whether AI outputs materially influence outcomes
- Who controls system design, training, and modification
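To make that triage concrete, here is a minimal Python sketch of how an internal AI‑inventory tool might flag systems for closer legal review. The decision‑area labels and screening logic are illustrative assumptions, not statutory definitions, and a flag here is a prompt for counsel review, not a legal conclusion.

```python
# Decision areas commonly treated as "significant" under comprehensive AI laws
# (labels are our own shorthand for the categories listed above).
SIGNIFICANT_DECISION_AREAS = {
    "financial_services", "employment", "housing", "insurance",
    "education", "health_care", "legal_services", "essential_goods",
}

def flag_for_high_risk_review(decision_area: str,
                              output_materially_influences_outcome: bool) -> bool:
    """Screening heuristic: True when a system touches a significant
    decision area AND its outputs materially influence the outcome."""
    return (decision_area in SIGNIFICANT_DECISION_AREAS
            and output_materially_influences_outcome)

# Example: an AI resume screener whose scores drive interview decisions.
print(flag_for_high_risk_review("employment", True))  # True -> escalate to counsel
```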
Misclassifying a high‑risk AI system as low risk may create regulatory exposure down the line, particularly as enforcement activity increases.
What Happens Once an AI System Is “High Risk”?
Once an AI system is classified as high risk, legal and operational expectations change.
Organizations should implement robust oversight measures, including:
- Documenting the system’s decision-making logic
- Regularly testing for accuracy, bias, and unintended impacts (see the bias‑testing sketch after this list)
- Clearly defining when and how human review applies
- Training employees on how to interpret and appropriately rely on AI outputs, including when human judgment should override the system
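As one concrete example of what regular bias testing can look like, the sketch below computes an adverse impact ratio, a screening metric drawn from U.S. employment‑selection guidance (the so‑called four‑fifths rule). The numbers are invented for illustration, and the appropriate metric and threshold will depend on the system and the governing law.

```python
def selection_rate(selected: int, total: int) -> float:
    """Share of applicants in a group who received the favorable outcome."""
    return selected / total if total else 0.0

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Ratio of the protected group's selection rate to the reference group's."""
    return protected_rate / reference_rate if reference_rate else 0.0

# Example: an AI-assisted hiring screen (figures are invented).
rate_reference = selection_rate(selected=30, total=100)  # 0.30
rate_protected = selection_rate(selected=18, total=100)  # 0.18
ratio = adverse_impact_ratio(rate_protected, rate_reference)  # 0.60

# The four-fifths rule treats a ratio below 0.8 as a common flag for
# potential disparate impact warranting further review.
if ratio < 0.8:
    print(f"Flag for review: adverse impact ratio {ratio:.2f} < 0.80")
```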
How Obligations Scale with Risk and Role
In addition to internal governance, applicable laws impose specific compliance obligations depending on whether an organization is acting as a developer or a deployer, and on whether the system is high risk. These obligations generally scale as follows:
| Role | Website Disclosures | Pre‑Interaction Disclosure | Adverse Post‑Use Notice | Consumer Rights | Risk Assessments | Prohibited Behaviors |
| --- | --- | --- | --- | --- | --- | --- |
| Developer of AI Systems | Yes | Yes | No | No | No | Yes |
| Deployer of AI Systems | No | Yes | No | No | No | Yes |
| Developer of High‑Risk AI Systems | Yes | Yes | No | No | No | Yes |
| Deployer of High‑Risk AI Systems | Yes | Yes | Yes | Yes | Yes | Yes |
For deployers of high‑risk AI systems, obligations often expand to include consumer rights, post‑decision notice requirements, and formal risk assessments. We will dive deeper into these requirements throughout the Legal Lines Around AI series.
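For teams tracking these duties programmatically, the table above can be encoded as a simple lookup. The sketch below is illustrative only; the role keys and obligation names are our own shorthand, not statutory terms.

```python
# The table above, expressed as a lookup that an internal compliance tracker
# might use. Role keys and obligation names are illustrative shorthand only.
OBLIGATIONS = {
    ("developer", "standard"): {
        "website_disclosures", "pre_interaction_disclosure", "prohibited_behaviors",
    },
    ("deployer", "standard"): {
        "pre_interaction_disclosure", "prohibited_behaviors",
    },
    ("developer", "high_risk"): {
        "website_disclosures", "pre_interaction_disclosure", "prohibited_behaviors",
    },
    ("deployer", "high_risk"): {
        "website_disclosures", "pre_interaction_disclosure", "adverse_post_use_notice",
        "consumer_rights", "risk_assessments", "prohibited_behaviors",
    },
}

def obligations_for(role: str, risk_level: str) -> set[str]:
    """Look up the obligation set for a given role and risk classification."""
    return OBLIGATIONS.get((role, risk_level), set())

# Example: deployers of high-risk systems carry the broadest obligations.
print(sorted(obligations_for("deployer", "high_risk")))
```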
Coming Up…
In the next installment of Legal Lines Around AI, we’ll take a closer look at disclosure and notice requirements, including what businesses must communicate before, during, and after AI systems are used to make decisions about consumers.