Life After the Rite Aid Order: A Discussion on How the FTC is Shaking Up AI

AI incidents can have major implications for companies looking to develop and deploy AI. Organizations can suffer financial or reputational harm when an AI system takes harmful or unintended actions affecting people, underscoring the need for #AIgovernance.

The American drugstore chain Rite Aid was recently banned by the Federal Trade Commission (FTC) from using AI facial recognition after the retailer deployed the system without reasonable safeguards. The order will have far-reaching implications for facial recognition technology, and it signals the FTC's willingness to go big on regulating AI in the US.

Watch our LinkedIn Live, hosted on Wednesday, January 31 at 12:00 P.M. EST / 9:00 A.M. PST, for a deep dive into this topic.

Speakers 

◾ John Heflin – Director of Policy, Trustible

◾ Jon Leibowitz – former Chairman of the FTC and former Partner, Davis Polk & Wardwell LLP

◾ Maneesha Mithal – Partner, Privacy & Security, Wilson Sonsini Goodrich & Rosati

Agenda

◾ The FTC’s role in regulating AI

◾ The impact of the Rite Aid decision on American businesses 

◾ AI and consumer rights, notices, and complaints 

◾ Effects on AI vendors and suppliers 

Related Posts

When Zero Trust Meets AI Governance: The Future of Secure and Responsible AI

Artificial intelligence is rapidly reshaping the enterprise security landscape. From predictive analytics to generative assistants, AI now sits inside nearly every workflow that once belonged only to humans. For CIOs, CISOs, and information security leaders, especially in regulated industries and the public sector, this shift has created both an opportunity and a dilemma: how do you innovate with AI at speed while maintaining the same rigorous trust boundaries you’ve built around users, devices, and data?

AI Governance Meets AI Insurance: How Trustible and Armilla Are Advancing AI Risk Management

As enterprises race to deploy AI across critical operations, especially in highly regulated sectors like finance, healthcare, telecom, and manufacturing, they face a double-edged sword. AI promises unprecedented efficiency and insights, but it also introduces complex risks and uncertainties. Nearly 59% of large enterprises are already working with AI and planning to increase investment, yet only about 42% have actually deployed AI at scale. At the same time, incidents of AI failure and misuse are mounting: the Stanford AI Index noted a 26-fold increase in AI incidents since 2012, with over 140 AI-related lawsuits already pending in U.S. courts. These statistics underscore a growing reality: while AI's presence in the enterprise is accelerating, so too are the risks and scrutiny around its use.
