Overreliance on AI
Overreliance on AI occurs when excessive trust is placed in an AI system, resulting in reduced human oversight.
📋 Description
Overreliance occurs when users accept incorrect AI outputs because of excessive trust in the system. It is not always obvious when users over-rely on AI, particularly when the system is usually more accurate than humans at a given task. However, all systems make mistakes or have vulnerabilities, and users need to be able to understand and identify these weaknesses.
Common examples of overreliance across sectors include:
- Doctors accepting an incorrect diagnosis recommendation without reviewing the details.
- Software engineers using AI-generated code without testing all conditions (see the sketch after this list).
- Lawyers accepting AI-generated briefs without checking that citations are valid (i.e. not hallucinated).
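To make the software-engineering example concrete, here is a minimal sketch of how untested AI-generated code can fail. The `average_rating` function, its bug, and the test are hypothetical, invented purely for illustration; they are not drawn from any real incident.

```python
# Hypothetical example of code an AI assistant might generate: it handles
# the common case correctly but crashes on an edge case the user never tested.

def average_rating(ratings):
    """Return the mean of a list of numeric ratings."""
    return sum(ratings) / len(ratings)  # fails on an empty list (division by zero)

# Even one edge-case check surfaces the problem before the code ships.
def test_average_rating():
    assert average_rating([4, 5]) == 4.5  # common case: passes
    try:
        average_rating([])                # edge case: raises ZeroDivisionError
        print("Edge case handled")
    except ZeroDivisionError:
        print("Edge case found: empty input is unhandled")

test_average_rating()
```

A reviewer who trusts the assistant and exercises only the happy path would ship the bug; a short test pass is exactly the kind of oversight step this risk calls for.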
In mitigating this risk, the overall goal should be to cultivate an appropriate level of reliance and trust in the system. Users should understand the system's strengths and weaknesses and keep a mental checklist of common problems (e.g., hallucinations in generated content). This is particularly important as policymakers call for greater human oversight, which makes users a key line of defense against AI failures.
🔍 Public Examples and Common Patterns
- AI Literacy in Clinical Settings: One study found that clinicians with low AI literacy were more likely to adhere to AI recommendations than those with higher AI literacy, selecting medical treatments that aligned with the AI's decisions seven times more often than their higher-literacy counterparts.