In our last Legal Lines Around AI post, we explored how emerging AI laws increasingly rely on disclosure obligations as a front‑line consumer protection tool.
But this is only the starting point. Across these same laws, disclosure requirements act as a gateway to a growing set of substantive consumer rights that attach once high-risk AI systems make significant decisions. In this post, we examine the consumer rights triggered by AI use and what those rights mean for businesses deploying AI systems in consequential decision‑making.
Comprehensive Privacy Laws Anchor Consumer Rights
Comprehensive state privacy laws provide the foundational consumer rights framework that already governs personal data and many AI systems used in significant decision‑making.
At a high level, these laws grant individuals the right to understand how their information is collected and used; to access, correct, and delete personal data; and to opt out of the sale of personal information and targeted advertising.
More importantly for AI governance, comprehensive privacy laws anchor two rights that directly apply to high-risk AI systems:
- The right to access information about the use of a high‑risk AI system; and
- The right to opt out of certain high‑risk AI uses.
Together, these rights give consumers meaningful control over how their personal information is used in AI‑driven decisions and place new operational demands on businesses.
Access Rights Shift from Data‑Centric to Decision‑Centric
Privacy and AI laws increasingly equip consumers with the right to access meaningful information about how businesses use their personal information in high-risk AI systems. However, while traditional access rights are largely data‑centric (i.e., what information the business holds and where it came from), AI‑specific access rights are decision‑centric.
Laws in states such as California and Colorado require deployers of high‑risk AI systems to provide meaningful explanations in response to a high-risk AI access request, including:
- Why a high-risk AI system was used in relation to a particular consumer
- How the system processed the consumer’s data to generate an output
- How that output influenced a significant decision, including the role of any human review
Under California law, if a business plans to reuse high-risk AI outputs for additional significant decisions, it also must explain how those outputs will be used and whether human review is involved.
Minnesota and Connecticut laws further expand access rights when high‑risk AI systems are used, giving consumers the right to question the outcome of an AI decision and be informed of what actions might have led to a different result, as well as what steps could be taken to influence future decisions. Consumers are also entitled to review the personal information used by the system. If the decision is determined to have been based on inaccurate personal information, consumers have the right to correct that information and to have the decision reevaluated using the updated data.
As shared previously, Colorado’s AI law imposes additional obligations when a high-risk AI system produces an adverse consumer decision. In those cases, the deployer must provide a statement of the principal reason or reasons for the decision, offer the consumer an opportunity to correct inaccurate personal information, and provide a mechanism to appeal the adverse decision through human review.
These expanded access rights are significant because they require businesses to translate technical system behavior into clear, consumer‑ready explanations and to maintain processes capable of changing outcomes based on new or corrected data.
Opt‑Out Rights as a Core Control on High‑Risk AI Use
Most comprehensive state privacy laws provide an opt‑out right for high-risk AI systems. While many states frame this right at a relatively high level, California and Colorado impose the most detailed requirements and conditions.
Under California regulations, businesses generally must honor consumer opt‑out requests for high‑risk AI uses, subject to limited exceptions. A deployer is not required to offer an opt‑out where:
- A clear and simple appeal process exists that includes human review with authority to overturn the AI‑driven decision
- The system is used solely to assess work performance and does not result in unlawful discrimination
- The system is used solely to allocate work or determine compensation, again provided it does not unlawfully discriminate
Colorado similarly grants consumers the right to opt out of high-risk AI systems, unless the processing involves “meaningful human involvement.” This is defined as a substantive, independent review of the data or AI output by a human with authority to change or influence the resulting decision.
When consumers opt out of high‑risk AI processing, deployers face specific response and process obligations. Businesses must:
- Provide confirmation that the opt‑out request was honored
- Allow opt‑outs by specific use case, provided a single, universal opt‑out option covering all uses is also available
Timing matters. If an opt‑out request is submitted before processing begins, the AI system must not be used. If the request comes after processing has started, the deployer must stop AI processing within 15 business days.
A deployer may deny a request it reasonably believes to be fraudulent, but it must explain the basis for the denial. Re‑consent is also tightly controlled: California generally prohibits re‑soliciting consent for the same high‑risk AI use for 12 months, while Colorado permits re‑consent only through a neutral and accessible interface accompanied by detailed disclosures explaining how the AI system works, how it affects decisions, and the potential consequences of renewed use.
Procedural Rules Add Operational Complexity
States also regulate how consumers may exercise access and opt‑out rights.
Most laws require multiple, accessible submission methods aligned with the business’s primary modes of interaction with the consumer. They also prescribe how businesses must respond, including confirmation that a request was received and processed, standardized timelines for compliance, and clear response formats designed to be understandable to consumers.
Verification rules apply to most rights, but some laws limit identity verification for opt‑out requests, reflecting the view that opting out should be friction‑free.
What Consumer Control Means for Businesses
Together, these obligations shift AI governance toward outcome-based accountability, where consumer rights can directly determine whether and how AI systems may be used to make significant decisions.
Businesses deploying high-risk AI systems must understand, document, and clearly explain how these AI systems function in practice, including how personal information is processed, how outputs influence decisions, and when human judgment meaningfully intervenes. This places operational pressure on businesses to align technical system design, internal governance, and consumer‑facing explanations, and to maintain processes that allow decisions to be reevaluated when data is corrected or challenged.
Opt‑out rights and related procedural requirements similarly require businesses to build enforceable controls over AI use, not just theoretical consumer choices. Companies must be able to stop or adjust high-risk AI processing within prescribed timelines, honor opt‑out requests across systems and downstream service providers, and carefully manage how and when consent may be reintroduced.
Coming Up…
Consumer rights are a powerful driver of AI accountability, but they are not the last piece of the puzzle.
In the next installment of Legal Lines Around AI, we’ll turn to risk assessments, including when they are required, what they must include, how often they must be updated, and how assessment results can be used to reduce claims that AI systems are unfair, discriminatory, or insufficiently governed.
*Dylan Shuster contributed to this post.