AI Risk · Performance
Unexpected Inputs
AI systems may be exposed to inputs outside of an expected range and need to have a planned failure mode.
📋 Description
Unexpected inputs occur when an AI system receives data it was not designed to process, whether from unvalidated user input, domain shift, or adversarial probing. These inputs can trigger misleading results, system crashes, or opaque behavior, especially when there is no fallback mechanism or warning to the user. Systems without built-in safeguards may silently produce incorrect results, undermining safety and trust. Designing for robustness requires a clear plan for how the system behaves when an input falls outside its intended distribution or formatting: rejecting invalid inputs, returning default or null values, issuing user-facing warnings, or logging the issue for audit and debugging. These contingencies should be supported by validation, testing, and documentation so the model can gracefully handle real-world variance.
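As a minimal sketch of these contingencies, the guard below wraps a hypothetical model behind input validation: invalid or out-of-range inputs are rejected, a null fallback value is returned, and the event is logged for audit. The function names, range bounds, and toy model are illustrative assumptions, not a prescribed implementation.

```python
import logging
import math

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("model_guard")

# Assumed expected range for a single numeric feature (illustrative only).
EXPECTED_MIN, EXPECTED_MAX = 0.0, 100.0
FALLBACK = None  # planned failure mode: return a null value instead of guessing

def guarded_predict(x, model):
    """Validate input before inference; fail gracefully on unexpected input."""
    # Reject non-numeric or non-finite inputs outright.
    if not isinstance(x, (int, float)) or not math.isfinite(x):
        logger.warning("Rejected invalid input: %r", x)
        return FALLBACK
    # Flag out-of-distribution values and log them for audit and debugging.
    if not (EXPECTED_MIN <= x <= EXPECTED_MAX):
        logger.warning("Input %s outside expected range [%s, %s]",
                       x, EXPECTED_MIN, EXPECTED_MAX)
        return FALLBACK
    return model(x)

# Toy stand-in for real model inference.
toy_model = lambda x: x * 2

print(guarded_predict(50.0, toy_model))   # in range -> 100.0
print(guarded_predict(500.0, toy_model))  # out of range -> None (logged)
print(guarded_predict("abc", toy_model))  # invalid type -> None (logged)
```

The key design choice is that the fallback behavior (here, returning `None` with a logged warning) is explicit and documented, rather than letting the model silently produce a confident answer on data it was never meant to see.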
🔍 Public Examples and Common Patterns
- Amazon Alexa Responding to Environmental Inputs - Amazon's voice assistant Alexa placed orders for items, including dollhouses and cookies, when triggered by a TV news broadcast discussing its "wake word" (i.e. the word that activates the device), causing unintended purchases across multiple households.