Insufficient User Training

Insufficient training can result in users misinterpreting system outputs or misunderstanding system limitations.

📋 Description

Insufficient user training can create risk through a variety of avenues:

- Over-reliance and deskilling
- Security vulnerabilities and data breaches
- Unrecognized bias and fairness issues

When AI is used in a high-risk setting (e.g., assisting with medical diagnosis), insufficient instruction may lead users to place undue confidence in system outputs. The impact of this risk varies by use case: in lower-stakes scenarios, insufficient training may instead lead to frustration and a lack of adoption.

Insufficient training can, however, escalate a low-risk use case into a high-risk one. For example, if a user does not understand that confidential information should not be included in a prompt to a generative AI system, there is a risk that the information could be disclosed. Similarly, without proper training on identifying and mitigating bias, users may not recognize or address these issues, leading to unfair or discriminatory outcomes.
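One common technical complement to training on the confidentiality risk above is a pre-submission prompt screen. The sketch below is illustrative only, assuming a simple regex-based filter; the pattern names and expressions are assumptions for demonstration, not an exhaustive or production-grade data loss prevention control.

```python
import re

# Illustrative patterns for data that should not appear in prompts sent to
# third-party generative AI services. These are assumptions for the sketch,
# not a complete catalog of confidential data types.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive-data patterns detected in the prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]

# Example: a prompt containing an email address and a Social Security number
prompt = "Summarize this contract for jane.doe@example.com, SSN 123-45-6789."
findings = screen_prompt(prompt)
if findings:
    print(f"Blocked: prompt appears to contain {', '.join(findings)}")
```

A filter like this does not replace user training; it catches a narrow set of well-formatted identifiers, while trained users remain the only defense against free-form confidential content such as unreleased financials or client names.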

🔍 Public Examples and Common Patterns

Insufficient user training commonly produces either over- or under-reliance on AI: users either treat the system as a faultless oracle or disregard it entirely, leading to error in the first case and inefficiency in the second. Users who lack an understanding of how AI works cannot judge when it is prone to error or where it is best applied.

🛡️ Recommended Mitigations

Cite this page
Trustible. "Insufficient User Training." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-risks/insufficient-user-training/
