LLM inputs can be manipulated to get an output different from the system’s intended purpose. This behavior is sometimes referred to as jailbreaking.
Trustible's AI governance platform helps enterprises identify, assess, and mitigate AI risks like this one at scale.
Explore the Platform