AI Risk · Performance

Output Inconsistency

Models can produce inconsistent results for the same or similar inputs.

📋 Description

Output inconsistency occurs when AI systems generate variable outputs for the same or similar inputs. This problem can manifest in three primary contexts:

- Sampling in Generative Models: Large Language Models (LLMs) and image generators typically sample from a probability distribution, so each run may produce a different output even for identical inputs. Some LLMs expose controls over sampling (such as a temperature setting or a random seed), but many API-access-only models (e.g., OpenAI's GPT models) cannot guarantee identical results across runs.
- Overfitting and Sensitivity: Models that are overly tuned to specific datasets may return drastically different outputs for small variations in input features, which can be problematic in decision-making systems.
- Version Drift: Different versions of the same model (e.g., a model re-trained every month with the same pipeline) can produce different results on the same input.
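The sampling behavior described above can be illustrated with a minimal sketch. The function below is a toy temperature-scaled softmax sampler, not any particular model's API: at temperature near zero it collapses to deterministic greedy decoding, while at higher temperatures repeated runs may diverge unless the random seed is fixed.

```python
import math
import random

def sample_next_token(logits, temperature, rng):
    """Sample a token index from logits via temperature-scaled softmax.

    Higher temperature flattens the distribution (more randomness);
    temperature <= 0 is treated as greedy argmax decoding.
    """
    if temperature <= 0:
        # Greedy decoding: deterministic, always the highest-logit token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Toy logits for a 3-token vocabulary (illustrative values).
logits = [2.0, 1.5, 0.5]

# Greedy decoding (temperature -> 0) is reproducible across runs.
greedy = [sample_next_token(logits, 0.0, random.Random()) for _ in range(5)]

# Sampling with a fixed seed is reproducible; without one it may vary.
seeded = [sample_next_token(logits, 1.0, random.Random(42)) for _ in range(5)]
```

This is why pinning a seed (where the API allows it) restores reproducibility for a fixed model version, but offers no protection against version drift.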

While some degree of output diversity can be beneficial (e.g., in creative applications), inconsistency often undermines reproducibility, user trust, and explainability in high-stakes settings like financial risk assessment or public benefits eligibility.
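One practical way to assess whether a deployed system is "consistent enough" for a high-stakes setting is to re-run the same input many times and measure how often the modal output appears. The sketch below assumes a hypothetical `generate` callable standing in for a real model call; `flaky_generate` is a made-up stand-in used only to exercise the metric.

```python
import random
from collections import Counter

def consistency_rate(generate, prompt, n_runs=10):
    """Fraction of runs that produce the modal (most common) output.

    `generate` is any callable mapping a prompt to text.
    A rate of 1.0 means the system was fully consistent on this prompt.
    """
    outputs = [generate(prompt) for _ in range(n_runs)]
    modal_count = Counter(outputs).most_common(1)[0][1]
    return modal_count / n_runs

# Hypothetical stand-in for a real model call, for illustration only:
def flaky_generate(prompt):
    return random.choice(["approve", "approve", "deny"])

rate = consistency_rate(flaky_generate, "Assess applicant risk", n_runs=50)
```

Exact string match is a deliberately strict criterion; for free-form generation a semantic-similarity comparison may be more appropriate, but for categorical decisions (approve/deny, eligible/ineligible) exact match is usually what matters.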
Cite this page

Trustible. "Output Inconsistency." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-risks/output-inconsistency/
