Use by Minors
Special restrictions may apply to systems that are accessible to minors
📋 Description
Some AI systems can be accessed by individuals under the age of 18, which creates significant risks and legal exposure for AI companies as well as for the underage users themselves. There are two main areas of concern:
**Data Privacy**: Depending on the jurisdiction, your organization may be subject to data protection laws that regulate the collection and use of data from individuals under 18. In the US, for instance, the Children's Online Privacy Protection Act (COPPA) regulates data collection by operators of websites and online services directed at children under 13. Organizations that deploy AI systems aimed at children may therefore need to adjust how they collect data from users depending on their age, as in the sketch below.
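To make the age-dependent handling concrete, here is a minimal Python sketch of an age gate applied before personal-data collection. The threshold constant, function names, and consent flag are illustrative assumptions rather than references to any particular framework; real compliance requires verifiable parental-consent mechanisms and legal review.

```python
from datetime import date

# Hypothetical threshold: COPPA applies to children under 13 in the US,
# while other regimes (e.g., GDPR Art. 8) set consent ages between 13 and 16.
DIGITAL_CONSENT_AGE = 13

def age_in_years(birthdate: date, on: date) -> int:
    """Whole years between birthdate and the reference date `on`."""
    years = on.year - birthdate.year
    if (on.month, on.day) < (birthdate.month, birthdate.day):
        years -= 1  # birthday hasn't happened yet this year
    return years

def may_collect_personal_data(birthdate: date, parental_consent: bool,
                              on: date) -> bool:
    """Illustrative policy gate: allow collection for users at or above the
    consent age, otherwise only with verified parental consent."""
    return age_in_years(birthdate, on) >= DIGITAL_CONSENT_AGE or parental_consent

# A user born in 2015 is under 13 on this reference date, so collection
# is blocked unless parental consent has been verified.
print(may_collect_personal_data(date(2015, 6, 1), False, on=date(2025, 1, 1)))  # False
print(may_collect_personal_data(date(2015, 6, 1), True, on=date(2025, 1, 1)))   # True
```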
**Inappropriate Content**: AI systems may expose individuals under the age of 18 to explicit or violent content, which may be illegal. For further information, see the 'Harmful and Inappropriate Content Generation' risk. A sketch of a stricter output filter for minors' sessions follows.
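As one illustration of such a safeguard, the following Python sketch applies a stricter moderation policy to model output when the session belongs to a minor. The `classify_content` stub, the category labels, and the refusal message are hypothetical placeholders; a real deployment would use a trained moderation model or a vendor moderation endpoint.

```python
from dataclasses import dataclass

# Hypothetical category labels blocked for underage accounts.
BLOCKED_FOR_MINORS = {"sexual", "violence", "self-harm"}

@dataclass
class ModerationResult:
    categories: set[str]  # labels the classifier assigned to the text

def classify_content(text: str) -> ModerationResult:
    """Stand-in classifier: a naive keyword check used only so this sketch
    runs end to end. Replace with a real moderation model in practice."""
    keywords = {"kill": "violence", "explicit": "sexual"}
    hits = {label for word, label in keywords.items() if word in text.lower()}
    return ModerationResult(categories=hits)

def filter_for_minor(generated_text: str) -> str:
    """Apply a stricter policy when the session belongs to a minor:
    suppress output flagged in any blocked category."""
    result = classify_content(generated_text)
    if result.categories & BLOCKED_FOR_MINORS:
        return "[response withheld: not appropriate for this account]"
    return generated_text

print(filter_for_minor("Here is an explicit scene..."))               # withheld
print(filter_for_minor("Photosynthesis converts light into energy."))  # passes through
```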
In addition, risks that are not specific to minors may be heightened when AI systems are used by them. For example, young users may be more vulnerable to harms associated with hallucinations and with anthropomorphizing conversational agents.
🔍 Public Examples and Common Patterns
- Google's YouTube Kids app has been repeatedly criticized for surfacing inappropriate content to minors, as its filtering and recommendation algorithms failed to protect children. This prompted an uproar from parents and the media.
- Character.ai has faced a number of lawsuits relating to noncompliance with child protection laws, alleging that it knowingly exposed minors to dangerous AI-generated content.