In the last two installments of Legal Lines Around AI, we examined how transparency and consumer rights work together to give individuals greater control over how businesses use high‑risk AI to make consequential decisions about them.
Those obligations are reinforced and operationalized through risk assessments, which have quickly become a centerpiece of AI governance. Because high‑risk AI systems can amplify risks such as bias, unfair treatment, and invasive profiling, state laws increasingly require businesses to assess those risks before deployment, document safeguards, and revisit their analysis as systems and use cases evolve.
State Laws Converge on Pre‑Deployment Risk Assessments
States are converging on a core rule: deployers of high‑risk AI systems must complete a documented risk assessment before deployment. This obligation applies regardless of whether the system is developed internally or obtained from a vendor or third‑party provider.
California requires risk assessments before processing personal information that presents a significant risk to consumers’ privacy, including inputting personal information into high‑risk AI systems, drawing automated employment‑related inferences, drawing inferences based on sensitive locations, and training high‑risk or biometric technologies.
Colorado requires deployers to complete a risk assessment before deploying any high‑risk AI system. Other state comprehensive privacy laws similarly mandate documented assessments when automated processing creates reasonably foreseeable risks, such as unfair or deceptive treatment, unlawful disparate impact, financial or physical injury, offensive intrusions into private affairs, or other substantial consumer harm.
Across these laws, the unifying thread is a risk‑based approach: regulators are less concerned with labels than with whether and how high-risk AI use meaningfully affects individuals’ rights or wellbeing.
What Regulators Expect in a High‑Risk AI Risk Assessment
A compliant high-risk AI risk assessment should tell a complete and defensible story: how the high-risk AI system works, the risks it creates, and why deploying it is justified given those risks.
At a minimum, assessments must:
- Clearly define the specific purpose of the high‑risk AI processing
- Describe the processing activity itself and identify the categories of personal information involved
- Explain the context of the processing, the organization’s relationship with affected consumers, and consumers’ reasonable expectations
Intended use cases must closely align with the stated purpose, with particular focus on whether the AI system is used to make or materially influence significant decisions.
Beyond these fundamentals, assessments are expected to address operational detail, risk analysis, and mitigation. This includes:
- How data is collected, used, retained, and disclosed
- What transparency measures and disclosures are provided
- How outputs are used in decision‑making
- The role of human oversight
- Whether third‑party tools or vendors are involved
Regulators also expect a thorough evaluation of foreseeable harms, including privacy, discrimination, financial, psychological, and constitutional risks, along with a documented explanation of how safeguards reduce those risks and why the benefits outweigh remaining concerns.
In practical terms, regulators expect risk assessments to address:
- Purpose and scope of AI‑driven processing, with specificity
- Data inputs and outputs, including sensitive or children’s data
- How the system works, including logic, assumptions, limitations, and training data
- Risks, such as discrimination, unfair treatment, privacy intrusion, or economic harm
- Safeguards and mitigation measures, including security, governance, and bias controls
- Transparency and oversight mechanisms, including consumer notices, monitoring, audits, and metrics
- Decision‑making accountability, identifying who approved deployment and why
Stakeholder involvement is mandatory: employees involved in the processing activity must also participate in AI risk assessments by providing operational details, such as how data is collected, used, and managed, so that the assessment reflects real-world system use.
Risk Assessments as Living Documents
Assessments should also reflect an ongoing process. Regulators increasingly expect evidence of post‑deployment monitoring, regular review, internal or external audits, and clear ownership within the organization.
High‑risk AI risk assessments are not one-time paperwork; they must be reviewed and updated throughout the system’s lifecycle. At a minimum, assessments should be revisited annually. More importantly, deployers must update assessments whenever there is a material change to the processing activity that could create new risks, increase existing risks, or weaken existing safeguards (e.g., changes to data types or sources, processing purpose, algorithms, vendors, software, or system outputs). These updates must be completed as soon as feasible and no later than 45 days after the material change.
Retention and regulatory access are equally critical. Deployers must retain all versions of risk assessments (original and updated) for as long as the processing continues or for five years after the assessment is completed, whichever is later. Organizations must also be prepared to produce these assessments to regulators on demand, typically within 30 days.
In California, deployers face an added layer of oversight: certain risk‑assessment details must be formally submitted to the California Privacy Protection Agency on a defined schedule, accompanied by an executive attestation signed under penalty of perjury.
Practical Considerations for Deployers of High‑Risk AI
As AI risk assessment requirements shift from theory to enforcement, businesses should focus on building processes that are durable, defensible, and scalable.
Deployers should consider the following steps:
- Inventory AI systems early and often. Maintain an up‑to‑date inventory of AI tools used for decision‑making, profiling, or inference, including vendor‑provided systems and internally developed tools.
- Align AI risk assessments with existing privacy governance. Integrate AI risk assessments into existing data protection assessment (DPA) or privacy risk assessment workflows rather than creating siloed processes.
- Engage operational stakeholders at the outset. Involve teams responsible for data collection, model operation, and business use cases to ensure assessments reflect actual system behavior, not theoretical design.
- Plan for change management. Establish triggers and internal processes to identify material changes to AI systems or data use and ensure assessments are updated within required timelines.
- Document human oversight and escalation paths. Be explicit about when and how humans review AI outputs, override decisions, or handle consumer challenges.
- Prepare for regulator access. Retain assessments in a centralized, review-ready format and assign ownership so the organization can respond quickly to regulatory requests.
- Do not rely solely on vendor assurances. Even when AI tools are third‑party, deployers remain accountable for understanding risks, evaluating safeguards, and documenting compliance.
In practice, regulators are less focused on whether a document exists and more on whether the assessment reflects real operational decision‑making and ongoing oversight. Businesses that invest now in practical, repeatable assessment processes will be far better positioned to adapt to evolving legal expectations.
Coming Up…
In the final installment of Legal Lines Around AI, we’ll look at how businesses operating across the U.S. can build an AI governance posture that is flexible enough to meet the most demanding regulations, while remaining practical to implement and scale.
*Dylan Shuster contributed to this post.