What is the “Perfect” AI Use Case Intake Process?

Recap from Trustible’s Panel at IAPP AI Governance Global North America 2025

Last week at the IAPP AI Governance Global North America conference in Boston, Trustible brought together AI governance leaders from Leidos and Nuix to explore a deceptively tactical but mission-critical question: What does the “perfect” AI intake process look like?

The lively session, moderated by Andrew Gamino-Cheong (CTO & Co-Founder of Trustible), examined the front door to AI governance: how organizations capture and review every AI use case, tool, or feature under consideration. Without a reliable intake process, organizations risk losing visibility into their AI landscape, undermining governance before it even begins.

But the plot twist? There’s no such thing as the perfect AI intake process; the only perfect process is the one that fits the unique needs and nuances of your organization.


Setting the Stage: Why Intake Matters

Gamino-Cheong opened by laying out the six tradeoffs that define intake design (sketched in code after the list):

  • Granularity: What exactly are you governing—tools, features, or use cases?
  • Heaviness: Is your process light-touch or burdensome?
  • Outcomes: Does intake produce a yes/no decision, mitigations, or a routing step?
  • Participation: Who fills it out, and who reviews it?
  • Implementation: Is it a spreadsheet, workflow tool, or governance platform?
  • Timeliness: Is it triggered at ideation or just before deployment?
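
To make those tradeoffs concrete, here is a minimal sketch of how a single intake record might be modeled. Everything in it is an illustrative assumption for this post — the names IntakeRequest, Granularity, and Outcome, and all of the fields, are invented — not Trustible’s actual schema:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional


class Granularity(Enum):
    """Granularity: what exactly is being governed."""
    TOOL = auto()      # e.g., a whole third-party product
    FEATURE = auto()   # e.g., one AI feature inside an existing product
    USE_CASE = auto()  # e.g., "summarize support tickets with an LLM"


class Outcome(Enum):
    """Outcomes: what the intake review can produce."""
    APPROVED = auto()
    REJECTED = auto()
    APPROVED_WITH_MITIGATIONS = auto()
    ROUTED_FOR_DEEPER_REVIEW = auto()


@dataclass
class IntakeRequest:
    """One AI tool, feature, or use case entering the front door."""
    title: str
    granularity: Granularity
    submitter: str                # participation: who fills it out
    reviewers: list[str]          # participation: who reviews it
    stage: str                    # timeliness: "ideation" or "pre-deployment"
    answers: dict[str, str] = field(default_factory=dict)  # heaviness: questionnaire depth
    outcome: Optional[Outcome] = None
```

Whether a record like this lives in a spreadsheet, a workflow tool, or a governance platform is the implementation tradeoff; the shape of the information stays roughly the same.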

“There’s no ‘perfect’ intake process,” Gamino-Cheong emphasized. “Only the one that’s right-sized for your organization’s size, role, and risk profile.”


Practitioner Journeys: From Messy Starts to Scalable Systems

Sophia Toomey, Program Manager, Leidos

With over 50,000 employees, Leidos faced an avalanche of AI tools and pilots. Toomey candidly described starting with messy spreadsheets and PowerPoints before moving toward a company-wide intake process. Her key lessons: simplify questions, meet contributors where they are, and position governance as risk reduction—not auditing.

“Give yourself grace if you’re still in Excel and PowerPoint,” she advised. “It’s trial and error—you’ll grow from there.”

Chris Stevenson, Head of AI Strategy & Operations, Nuix

At Nuix, Stevenson admitted he wasn’t “a process person” by nature, but the stakes of supporting regulators and investigators demanded rigor. His first attempts with shared Word docs collapsed under the pace of AI adoption. The breakthrough came from partnering early with legal and privacy leaders and, later, from automating the process for consistency through Trustible’s platform.

“AI governance demanded it. Partnering with legal was the game-changer,” he said.


Common Pitfalls and Culture Shifts

Both panelists stressed that intake is as much about culture as policy:

  • Education & Buy-In: Stevenson described mandatory HR-led AI training and even company hackathons. “AI is a cultural shift as much as a technological shift. HR became our secret weapon.”
  • Cross-Functional Teams: Toomey highlighted the importance of pulling in legal, ethics, IT, HR, and security early to avoid fragmented decision-making.
  • Reframing Governance: “Employees first saw us as auditors saying no,” Toomey said. “We reframed it as risk reduction, not blocking innovation.”

Key Takeaways for Organizations

The panel distilled several practical insights:

  • No perfect intake: It’s about fit for your org’s scale, risk, and role.
  • Start messy, iterate fast: Don’t wait for perfection—trial-and-error builds muscle.
  • Cross-functional collaboration: Intake is stronger when legal, privacy, security, and product are at the table.
  • Culture change is essential: Governance succeeds when employees see it as enabling, not obstructing.
  • Scalability requires tools: Spreadsheets may work early, but automation is critical for sustainability.
  • Risk triage matters: Intake should flag high-risk use cases for deeper governance review (see the sketch below).

Or as Stevenson warned: “Don’t underestimate how fast AI will creep in and overwhelm ad hoc systems.”
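
That last point, risk triage, is the easiest to sketch in code. The example below is a deliberately toy, standalone illustration: the keyword screen and the terms in it are invented stand-ins for a real risk taxonomy (for instance, the EU AI Act’s risk tiers):

```python
# Invented stand-in for a real risk taxonomy; a production screen would
# score requests against defined risk categories, not a keyword list.
HIGH_RISK_TERMS = {"biometric", "hiring", "medical", "credit", "children"}


def triage(title: str, answers: dict[str, str]) -> str:
    """Flag high-risk use cases for deeper governance review."""
    text = " ".join([title, *answers.values()]).lower()
    if any(term in text for term in HIGH_RISK_TERMS):
        return "route_for_deeper_review"
    return "standard_review"


# A hiring-related use case gets flagged for deeper review.
print(triage("Resume screening assistant", {"purpose": "rank hiring candidates"}))
# -> route_for_deeper_review
```

The point is the routing step itself: intake doesn’t need to answer every risk question, only to decide which requests deserve a closer look.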


How Trustible Fits In

Trustible’s mission is to make AI governance practical, scalable, and embedded into everyday workflows. Intake is where it all begins. By partnering with organizations like Nuix and Leidos, Trustible helps turn messy spreadsheets into structured, automated, and trackable systems that align with regulation while keeping innovation flowing.

As Gamino-Cheong summed it up: “The intake process is the front door to AI governance.” At Trustible, we’re building that front door and ensuring it stays open, usable, and effective for the organizations that have the most to gain, and the most to lose, from AI.


Interested in rethinking your AI intake process? Get in touch with our team to learn how Trustible can help you right-size governance for your organization.
