
Misclassification and Model Errors

AI systems may produce outputs that are incorrect.

📋 Description

AI systems generate outputs based on statistical patterns in data, and those predictions may be incorrect. This risk applies to a wide range of model types, including classification, regression, and generative models, and can lead to tangible consequences depending on the domain.

For classification models, errors manifest as false positives or false negatives. In binary classification tasks like spam filtering, a false positive may block legitimate emails, while a false negative may miss harmful content. In multi-class or multi-label setups, misclassifying one category as another or omitting a correct label can affect user trust and downstream decisions.
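The trade-off between false positives and false negatives can be made concrete with a small confusion-matrix sketch. The labels and predictions below are invented for illustration; a real evaluation would use held-out data:

```python
# Hypothetical spam-filter outcomes: 1 = spam, 0 = legitimate.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0, 0, 0]

# Count each confusion-matrix cell.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # legitimate mail blocked
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # spam that slipped through
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

precision = tp / (tp + fp)  # of flagged mail, how much was actually spam
recall = tp / (tp + fn)     # of actual spam, how much was caught

print(tp, fp, fn, tn)       # 3 1 2 4
print(precision)            # 0.75
print(recall)               # 0.6
```

Which metric matters more depends on the deployment: a spam filter that blocks legitimate email (false positives) may be costlier to users than one that lets some spam through.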

Regression models introduce continuous error: predicted values may be significantly higher or lower than the ground truth, with implications that depend on domain tolerance levels (e.g., in loan scoring or demand forecasting). Errors in translation, object detection, and other structured outputs require more nuanced metrics to capture deviation from the ground truth. Evaluating these risks requires a robust, context-specific understanding of error types, impacts, and detectability.
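As a sketch, continuous regression error can be quantified with mean absolute error (MAE) and root-mean-square error (RMSE), with acceptability judged against a domain tolerance. The demand figures and the ±10-unit tolerance below are hypothetical:

```python
import math

# Hypothetical demand forecast vs. observed demand (units per day).
y_true = [100.0, 150.0, 80.0, 120.0]
y_pred = [110.0, 140.0, 95.0, 118.0]

errors = [p - t for p, t in zip(y_pred, y_true)]

mae = sum(abs(e) for e in errors) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

# Whether these errors are acceptable depends on domain tolerance;
# here we assume forecasts within +/- 10 units are usable for stocking decisions.
within_tolerance = sum(1 for e in errors if abs(e) <= 10) / len(errors)

print(mae)               # 9.25
print(within_tolerance)  # 0.75
```

Note that RMSE penalizes large deviations more heavily than MAE, which matters when occasional big misses are costlier than many small ones.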

🔍 Public Examples and Common Patterns

- AIID Incident 355: Uber Allegedly Wrongfully Accused Drivers of Fraud via Automated Systems: Uber was alleged in a lawsuit to have wrongfully accused its drivers in the UK and Portugal of fraudulent activity through automated systems, which resulted in their dismissal without a right to appeal.

- AIID Incident 466: AI-Generated-Text-Detection Tools Reported for High Error Rates: Tools developed to detect AI-generated text, such as AI Text Classifier and GPTZero, reportedly produced high rates of false positives and false negatives, including mistakenly flagging Shakespeare's works as AI-generated.

📐 External Framework Mapping

- IBM Risk Atlas: Poor model accuracy risk for AI

Cite this page

Trustible. "Misclassification and Model Errors." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-risks/misclassification/
