
Harmful Code Generation

Code generated by LLMs may contain vulnerabilities.

📋 Description

Harmful code generation occurs when LLMs produce insecure or malicious code that may introduce critical vulnerabilities into software systems. This includes code that facilitates injection attacks, privilege escalation, or insecure storage and transmission of data. Because language models are trained on large, uncurated codebases, they may replicate insecure coding practices or fail to handle edge cases.
The risk is heightened in security-sensitive domains, such as authentication, financial processing, or cloud infrastructure management. Even seemingly benign code suggestions can propagate insecure dependencies or poor logic. AI-generated code should never be deployed to production without thorough verification.

Insecure code can lead to data breaches, unauthorized access, and other security incidents, potentially causing financial loss, reputational damage, and legal liabilities for organizations. Additionally, malicious actors might exploit LLMs to generate harmful scripts or malware, which can be disseminated widely with minimal effort. The widespread adoption of AI-generated code thus amplifies the potential impact of these risks, making it crucial for developers to critically evaluate and test all AI-generated code before integration into production systems.
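The injection risk described above can be made concrete with a minimal, hypothetical sketch (the table and function names are illustrative, not from any reported incident). Generated code frequently builds SQL strings by concatenation, a pattern that parameterized queries eliminate:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable pattern often seen in generated code: SQL built by string
    # interpolation. An input like "x' OR '1'='1" bypasses the filter.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver binds the value safely,
    # defeating the injection.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"
print(len(find_user_unsafe(conn, payload)))  # matches every row in the table
print(len(find_user_safe(conn, payload)))    # matches no rows
```

Both versions are syntactically valid and pass superficial review, which is precisely why generated code of this kind needs security-focused testing rather than a quick read-through.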

The risk of harmful code becomes more acute as LLMs evolve from autocomplete assistants into autonomous coding agents. Based on Sourcegraph's Levels of Code AI, the increasing levels of autonomy in coding agents can be classified as follows:

- Level 0 (No AI Assistance): The developer writes all code manually, without any AI involvement.
- Level 1 (Code Completion): The AI suggests individual lines or functions as the developer types.
- Level 2 (Code Creation): The AI generates entire modules or APIs from a single prompt.
- Level 3 (Supervised Automation): The developer gives a high-level objective, and the AI performs multiple steps to accomplish it, with some capability to validate its own work so it can iterate toward a solution.
- Level 4 (Full Automation): The AI handles complete software development cycles with minimal human oversight.

🔍 Public Examples and Common Patterns

- Lovable App Vulnerabilities: External researchers reviewed 1,645 web apps built with Lovable and featured on the company's site. Of those, 170 allowed anyone to access user data, including names, email addresses, financial information, and secret API keys for AI services that attackers could use to run up charges billed to Lovable's customers.
- Medium: "How a One-Hour Intro Call Saved a Client $17,000: When AI-Generated Code Meets Human Expertise"
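The access-control failure class behind incidents like the Lovable findings can be sketched as follows. This is a hypothetical illustration (the data, function names, and keys are invented, not taken from the reported apps): an endpoint returns any user's record, including secrets, without verifying that the requester owns it.

```python
# Hypothetical user store; the records and API keys are illustrative only.
USERS = {
    1: {"name": "alice", "email": "alice@example.com", "api_key": "sk-111"},
    2: {"name": "bob", "email": "bob@example.com", "api_key": "sk-222"},
}

def get_profile_unsafe(requested_id):
    # No ownership check: anyone who guesses an ID can read another
    # user's email and secret API key.
    return USERS.get(requested_id)

def get_profile_safe(requested_id, authenticated_id):
    # Enforce that callers may only read their own record, and never
    # return secrets such as API keys in the response body.
    if requested_id != authenticated_id:
        raise PermissionError("forbidden")
    record = USERS[requested_id]
    return {k: v for k, v in record.items() if k != "api_key"}
```

The unsafe version is the kind of code a generator happily produces because it "works" in a demo; the flaw only surfaces when an unauthenticated party requests someone else's ID.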

📐 External Framework Mapping

- OWASP LLM Top 10: LLM05:2025 - Improper Output Handling
- IBM Risk Atlas: Harmful code generation risk for AI
Cite this page

Trustible. "Harmful Code Generation." Trustible AI Governance Insights Center, 2026. https://trustible.ai/ai-risks/harmful-code-generation/
