AI Risk · Users

Unauthorized Use

Individuals within the organization can use the system for purposes that are out-of-scope for the system.

📋 Description

An AI system may be designed and deployed to accomplish specific tasks, yet end up used for activities outside its intended scope. It is important to set clear expectations for how and when AI systems should be used. Relying on a system for unintended tasks can produce inaccurate or biased outcomes. For instance, a background check tool intended to confirm basic information about a rental applicant should not be used to set rental prices. In addition, some systems require technical or specialized training to operate, such as AI tools used for medical diagnosis or by legal professionals to review contracts.

🔍 Public Examples and Common Patterns

Employees may have access to generative AI tools, such as ChatGPT, for approved applications, but may end up inputting sensitive or confidential data into these systems in violation of company policy.
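One common technical mitigation for this pattern is to screen prompts for sensitive data before they leave the organization. The sketch below is a minimal illustration, not Trustible's product or any specific vendor's API; the pattern names and regular expressions are hypothetical placeholders that a real policy would replace with far broader rules (and likely dedicated DLP tooling).

```python
import re

# Hypothetical patterns for data that company policy may bar from
# external generative AI tools; real policies would cover much more.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items()
            if pat.search(prompt)]

def is_allowed(prompt: str) -> bool:
    """Allow the prompt only if no sensitive pattern matches."""
    return not check_prompt(prompt)
```

In practice a gateway like this would log blocked prompts and route them to a reviewer rather than silently rejecting them, so that policy violations surface as governance data.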

📐 External Framework Mapping

- IBM Risk Atlas: Unauthorized use risk for AI

Cite this page

Trustible. "Unauthorized Use." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-risks/unauthorized-use/
