
Lack of AI Use Disclosure

Insufficient disclosure of the use of AI can have negative legal implications and create mistrust among those impacted by the system.

📋 Description

Lack of AI Use Disclosure refers to situations where organizations fail to inform users that they are interacting with or being evaluated by an AI system. This absence of transparency can violate legal and regulatory requirements in some jurisdictions and can also lead to user mistrust, especially when users expect human interaction. Without disclosure, individuals cannot provide informed consent, may misinterpret the authority or reliability of the system, and may have limited ability to seek recourse for errors or harms the system causes.

Disclosure is also essential for assigning responsibility and accountability for AI outputs. Effective disclosure includes communicating AI involvement during user interactions, labeling AI-generated content, and explaining when and how automated systems influence decisions. Failure to do so can result in regulatory penalties, erosion of consumer trust, and reputational damage.
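
In practice, teams often implement these disclosure practices directly in the application layer. The sketch below is a minimal, hypothetical illustration of the two practices named above, disclosing AI involvement during an interaction and labeling AI-generated content with machine-readable metadata. The function names, notice text, and metadata fields are assumptions for illustration, not a reference to any specific product or regulatory standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical disclosure notice shown before the first AI response in a session.
AI_DISCLOSURE_NOTICE = (
    "You are chatting with an automated AI assistant, not a human. "
    "Its responses may influence decisions about your request."
)

@dataclass
class LabeledResponse:
    """AI output paired with a machine-readable provenance label."""
    text: str
    metadata: dict = field(default_factory=dict)

def disclose_and_label(model_output: str, first_turn: bool) -> LabeledResponse:
    """Prepend a one-time disclosure and tag the output as AI-generated."""
    text = model_output
    if first_turn:
        text = f"{AI_DISCLOSURE_NOTICE}\n\n{model_output}"
    return LabeledResponse(
        text=text,
        metadata={
            "generated_by": "ai",             # label for downstream consumers
            "disclosure_shown": first_turn,   # audit trail for accountability
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    )

# Example: the first turn of a session carries the disclosure banner.
reply = disclose_and_label("Your claim is eligible for review.", first_turn=True)
print(reply.text)
print(reply.metadata)
```

Keeping the disclosure flag and timestamp in the response metadata, as in this sketch, also gives the organization an audit record of when and how AI involvement was communicated.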

🔍 Public Examples and Common Patterns

- AIID Incident 1043: Reddit Moderators Report Unauthorized AI Study Involving Fabricated Identities by Purported University of Zurich Researchers: Researchers purportedly affiliated with the University of Zurich reportedly deployed undisclosed AI-generated comments on Reddit's r/ChangeMyView to study persuasion by allegedly fabricating identities such as sexual assault survivors and racial minorities. The experiment reportedly involved unauthorized demographic profiling, emotional manipulation, and violations of subreddit and platform rules.

📐 External Framework Mapping

- IBM AI Risk Atlas: Non-disclosure
