AI Risk · Bias and Fairness

Performance Gap Between Populations

Models may exhibit a performance gap between different populations.

📋 Description

Performance gaps between populations occur when AI models perform better for some user groups than for others, even when the system does not explicitly evaluate individuals. For example, facial recognition models have been shown to perform worse on darker-skinned individuals (Source). Similarly, AI-powered speech recognition models may hallucinate more when processing speech from individuals with speech impediments (Source). These disparities are often the result of groups being underrepresented or misrepresented in the training data. They can reinforce societal inequities and lead to adverse outcomes for vulnerable populations. Evaluating and mitigating such gaps is essential for ensuring fair and equitable AI deployment.

This risk is particularly important in systems used for public services, healthcare, safety screening, or communication tools. In these contexts, failure to perform equally well for all users undermines trust and could result in serious consequences, especially for groups already facing structural disadvantages.
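Evaluating this risk typically starts with disaggregated metrics: compute the same performance measure separately for each group and compare. The following is a minimal sketch of that idea; the group labels and predictions are illustrative placeholders, not data from any of the systems discussed on this page.

```python
# Hypothetical sketch: disaggregating accuracy by group to surface a
# performance gap. All data below is made up for illustration.
from collections import defaultdict

def per_group_accuracy(groups, y_true, y_pred):
    """Return accuracy per group and the largest pairwise accuracy gap."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for g, t, p in zip(groups, y_true, y_pred):
        total[g] += 1
        correct[g] += int(t == p)
    acc = {g: correct[g] / total[g] for g in total}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Toy example: the model is perfect for group "a" but not for group "b".
groups = ["a", "a", "a", "b", "b", "b", "b", "b"]
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
acc, gap = per_group_accuracy(groups, y_true, y_pred)
```

In practice the same disaggregation is applied to whichever metric matters for the system (word error rate for speech recognition, false match rate for face recognition), and a nonzero gap prompts investigation into data representation and model behavior for the disadvantaged group.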

🔍 Public Examples and Common Patterns

- AIID Incident 47: LinkedIn Search Prefers Male: An investigation by The Seattle Times in 2016 found gender bias in LinkedIn's search engine.
- AIID Incident 87: UK passport photo checker shows bias against dark-skinned women

📐 External Framework Mapping

- IBM Risk Atlas: Configuring fairness evaluations
- MIT AI Risk Repository: 1.3 – Unequal performance across groups
Cite this page

Trustible. "Performance Gap Between Populations." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-risks/performance-gaps-populations/
