Building Trust in Enterprise AI: Insights from Trustible, Schellman, and Databricks

AI is rapidly reshaping the enterprise landscape, but organizations face growing pressure from regulators, stakeholders, and customers to ensure these systems are trustworthy, ethical, and well-governed. To help unpack this evolving space, Trustible, Schellman, and Databricks co-hosted a webinar on how AI governance frameworks, standards, and compliance practices can become strategic tools to accelerate AI adoption.

The conversation brought together leaders from across the ecosystem to explore how enterprises can balance AI innovation with accountability.

Setting the Stage: Why AI Governance Matters

The session opened with Andrew Gamino-Cheong (CTO & Co-Founder of Trustible), who framed the discussion around the convergence of regulation, standards, and enterprise adoption:

“There’s a big question about how to communicate what’s done in MLflow or Unity Catalog to legal teams, customers, and regulators. That’s where frameworks and standards really help—they create clarity in an increasingly complex environment.”

Andrew highlighted how regulations like the EU AI Act and standards such as ISO 42001 are pushing organizations to establish clear, auditable practices for managing AI risk.
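
The webinar stayed at the framework level, but to make the idea of “clear, auditable practices” a bit more concrete, here is a minimal sketch of how a team working in MLflow might record governance metadata directly on its experiment runs, so that audit evidence lives alongside the technical artifacts. The tag names are purely illustrative and not part of ISO 42001 or any official schema.

```python
import mlflow

# Hypothetical example: annotate a training run with governance metadata
with mlflow.start_run(run_name="credit-risk-model-v3") as run:
    # ... model training and metric logging would happen here ...

    # Attach governance context as searchable run tags (illustrative keys only)
    mlflow.set_tags({
        "governance.risk_tier": "high",            # e.g., an EU AI Act risk classification
        "governance.system_owner": "ml-platform",  # accountable team
        "governance.review_status": "pending",     # updated after compliance review
        "governance.standard": "ISO 42001",        # framework the evidence maps to
    })

    print(f"Governance tags recorded on run {run.info.run_id}")
```

Tags like these can later be queried (for example with mlflow.search_runs) when compiling documentation for legal teams, customers, or auditors.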

Databricks: Tackling Technical Governance Challenges

Next, David Wells, Specialist Solutions Architect at Databricks, introduced the Databricks AI Governance Framework. He noted common challenges enterprises face:

  • Unclear ownership of AI responsibilities across teams.
  • Fragmented nomenclature, with different groups speaking “different languages.”
  • Lack of unified standards, leaving organizations unsure what frameworks apply.
  • Missing methodologies for evaluating and scaling AI projects.

As Wells put it:

“These are symptoms of a governance problem. Customers are asking, ‘How do we apply standards? Where do they fit? And how do we make them actionable?’”

The Databricks framework is designed to help enterprises bridge these gaps by aligning data lake management, ML models, and governance requirements.
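
As with the MLflow sketch above, the following is illustrative rather than anything prescribed by the framework: in a Databricks environment, ownership and data classification can be made explicit at the data layer through Unity Catalog. The table name, principal, and tag keys are hypothetical, and the snippet assumes a notebook where a spark session is already available.

```python
# Hypothetical Unity Catalog table; adjust names to your own catalog/schema.
table = "main.risk.loan_features"

# Make accountability explicit by assigning an owning group.
spark.sql(f"ALTER TABLE {table} OWNER TO `data-governance-team`")

# Record classification and intended AI use case as tags on the table.
spark.sql(f"""
    ALTER TABLE {table}
    SET TAGS ('data_classification' = 'confidential',
              'ai_use_case' = 'credit-scoring')
""")
```

Metadata like this is what frameworks such as the one Databricks presented help translate into language that legal teams and regulators can act on.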

Schellman: Standards and Certification in Practice

Danny Manimbo, Principal at Schellman, brought deep expertise in auditing and certification. With over a decade in ISO compliance and assurance services, he explained how ISO 42001 and similar frameworks create roadmaps for operationalizing AI governance.

Manimbo emphasized the importance of connecting regulatory requirements to technical implementation:

“Frameworks are not just about checking boxes. They provide a common language between practitioners, legal teams, and regulators.”

Connecting the Dots: Regulation, Standards, and Customers

The second half of the webinar shifted to a broader discussion on what panelists are hearing from customers. A few recurring themes included:

  • Growing demand from boards and executives for AI risk visibility.
  • Concerns about global regulatory fragmentation (EU AI Act, NIST AI RMF, ISO, etc.).
  • The need for practical tools that scale governance without slowing innovation.

The consensus: frameworks like ISO 42001 are emerging as bridges between the technical and regulatory worlds.

Key Takeaways

  • Standards drive clarity – ISO 42001 and similar frameworks help align regulators, legal teams, and practitioners.
  • Governance is a shared responsibility – From data scientists to compliance officers, roles must be clearly defined.
  • Frameworks enable communication – They create a “common language” that makes AI risk visible and explainable.
  • Action is urgent – Regulations like the EU AI Act are here. Enterprises that start now will have a competitive advantage.

In conclusion, AI is moving fast, but trusted AI is achievable when frameworks and standards guide enterprise AI governance. By embracing these practices today, enterprises can innovate confidently while de-risking AI deployment.

You can watch the full webinar here.
