Disclosures are a central requirement across consumer protection laws, designed to promote transparency, fairness, and accountability.
In this third installment of Legal Lines Around AI, we examine how emerging AI laws are expanding disclosure requirements, why those obligations matter from a risk perspective, and how businesses can build AI governance programs that keep pace with evolving transparency expectations.
AI Disclosures Build on Familiar Consumer Protection Principles
AI disclosure obligations may feel new, but the underlying principles are not.
Comprehensive privacy laws, for example, require organizations to provide clear, upfront notices, including disclosures related to profiling, that generally explain what data is being collected, how it will be used, and what, if any, rights may be available to individuals. Telemarketing laws take transparency a step further by imposing real‑time disclosure requirements, such as identifying the caller and the nature of the call at the start of a conversation and again before completing a transaction.
Emerging AI laws follow a similar path, with some states applying these transparency principles across the lifecycle of an AI‑driven interaction.
Multi‑Stage Disclosure Across the AI Lifecycle
Rather than relying on a single notice, some states are implementing multi‑stage disclosure regimes that trigger obligations at defined checkpoints across the AI lifecycle and consumer interaction.
While the specific content and triggers vary by jurisdiction, the shared goal is consistent: to ensure individuals are not unknowingly subject to AI‑driven decision‑making that affects their rights, opportunities, or access to essential services, and to clearly signal when automation is shaping an interaction.
Common disclosure checkpoints include:
- Public website disclosures describing AI use at a high level
- Privacy policy disclosures explaining how personal information will be used in AI systems
- Pre‑use notices provided before an AI system is applied to an individual
- Real‑time interaction disclosures when consumers directly engage with AI
- Post‑use notices, in some jurisdictions, explaining how AI influenced a decision or outcome
Public Website Disclosures
Public‑facing disclosure obligations typically fall on developers of AI systems; that is, entities that build or intentionally and substantially modify AI systems.
Under California AI law, developers must publish a high‑level summary describing the training data used to develop an AI system. While the law stops short of requiring disclosure of raw datasets or sensitive technical details, it does mandate transparency regarding the nature, source, composition, and treatment of training data. This includes whether the data contains personal information, protected intellectual property, or synthetic elements; how and when it was collected and used; and how it supports the AI system’s intended purpose.
Colorado’s AI law goes further for high‑risk AI systems, requiring developers to disclose via a public use‑case inventory or their website the types of systems they build or substantially modify and how they identify and manage reasonably foreseeable risks of algorithmic discrimination. Developers must also provide deployers with detailed documentation covering intended and potentially harmful uses, training data summaries, known limitations, discrimination risks, evaluation methods and data governance practices, mitigation measures, and guidance for proper use.
Importantly, Colorado law also requires developers to report known or reasonably foreseeable risks of algorithmic discrimination to the Colorado Attorney General and to all known deployers or developers within 90 days of discovery.
Privacy Policy Disclosures
Comprehensive privacy laws increasingly require organizations meeting applicable thresholds to explain, in their privacy policies, when and how they use profiling that may produce legal or similarly significant effects on individuals.
Colorado’s privacy law sets the most onerous standards, requiring clear disclosures about:
- The logic behind such automated decision‑making
- The types of data used
- The purpose of the profiling
- The rights individuals have to understand, challenge, or opt out of decisions that meaningfully affect access to services, opportunities, or benefits
These requirements create pressure to align legal disclosures closely with how AI systems function in practice.
Real‑Time Interaction Disclosures
Several states now require businesses to notify individuals when they are interacting with AI—but each takes a different approach to when and how that disclosure must occur.
California, Colorado, Maine, and Utah generally require disclosure at the start of an interaction when AI is used for consumer engagement, but with notable variations:
- California treats disclosure as a safe harbor to its prohibition on deceptively presenting AI as human.
- Colorado waives disclosure when the AI nature of the interaction would be obvious to a reasonable person.
- Utah generally requires disclosure for most businesses only if a consumer asks, but individuals providing services in regulated occupations must prominently disclose when a person is interacting with AI in a high‑risk interaction.
- Maine requires disclosure only when necessary to avoid misleading a reasonable consumer into believing they are interacting with a human.
California’s telemarketing law adds another layer, requiring that, before an AI‑generated message is delivered, a live caller must state the nature of the call and the identity of the business, obtain the recipient’s consent to hear the prerecorded message, and clearly disclose that the message uses an artificial voice.
Pre‑Use Notices for High‑Risk AI
Both California and Colorado require pre‑use notices when a high‑risk AI system is used to make, or play a substantial role in making, a significant decision about an individual.
This notice must be delivered prominently at or before data collection, or before previously collected data is repurposed for use in a high‑risk AI system. Required disclosures must clearly explain:
- The specific purpose of the AI use
- The nature of the decision being made
- How to find additional information on the deployer’s website
- The consumer’s rights to access or opt out of the AI system
- The prohibition on retaliation
- How the system processes personal information, what outputs it generates, and how decisions will be made if the consumer chooses to opt out
Post‑Use Notices Following Adverse Decisions
Colorado’s AI law also requires post‑use notices when a high‑risk AI system is used to make, or meaningfully influence, a significant decision that results in an outcome adverse to the consumer.
The deployer must explain the principal reasons for the outcome, including how the AI system contributed to the decision, what types of data it processed, and where that data came from. Consumers must be given an opportunity to correct any inaccurate personal information the system relied on, along with a meaningful appeal mechanism that, when technically feasible, includes review by a human decision‑maker.
Managing Disclosure‑Related Risk
Outdated, inconsistent, or inaccurate disclosures can create significant compliance and enforcement risk.
To mitigate that risk, businesses should centralize ownership of AI disclosures and ensure that website statements, privacy notices, and consumer‑facing communications accurately reflect how AI systems operate in practice. Legal, compliance, and technical teams should collaborate to validate disclosures against detailed system analysis and real‑world disclosure and notification use cases, particularly when systems are updated or repurposed.
Treating Disclosures as Living Obligations
AI disclosures should not be treated as “set it and forget it” statements.
Establishing internal review processes tied to system changes, retraining events, or new deployment contexts can help ensure disclosures remain accurate and compliant over time. Using plain language, accessible formats, and consistent delivery methods can further reduce the risk that disclosures are misleading, insufficient, or inaccessible to affected individuals.
Coming Up…
Disclosure obligations are only one piece of the AI governance puzzle.
In the next installment of Legal Lines Around AI, we’ll examine consumer rights related to AI use, including opt‑out requests, access rights, appeal mechanisms, and human review requirements, and explore how those rights create significant operational challenges for businesses deploying AI systems.
*Dylan Shuster contributed to this post.