Building Trust in Enterprise AI: Insights from Trustible, Schellman, and Databricks

AI is rapidly reshaping the enterprise landscape, but organizations face growing pressure from regulators, stakeholders, and customers to ensure these systems are trustworthy, ethical, and well-governed. To help unpack this evolving space, Trustible, Schellman, and Databricks co-hosted a webinar on how AI governance frameworks, standards, and compliance practices can become strategic tools to accelerate AI adoption.

The conversation brought together leaders from across the ecosystem to explore how enterprises can balance AI innovation with accountability.

Setting the Stage: Why AI Governance Matters

The session opened with Andrew (CTO & Co-Founder of Trustible), who framed the discussion around the convergence of regulation, standards, and enterprise adoption:

“There’s a big question about how to communicate what’s done in MLflow or Unity Catalog to legal teams, customers, and regulators. That’s where frameworks and standards really help—they create clarity in an increasingly complex environment.”

Andrew highlighted how regulations like the EU AI Act and standards such as ISO 42001 are pushing organizations to establish clear, auditable practices for managing AI risk.

Databricks: Tackling Technical Governance Challenges

Next, David Wells, Specialist Solutions Architect at Databricks, introduced the Databricks AI Governance Framework. He noted common challenges enterprises face:

  • Unclear ownership of AI responsibilities across teams.
  • Fragmented nomenclature, with different groups speaking “different languages.”
  • Lack of unified standards, leaving organizations unsure what frameworks apply.
  • Missing methodologies for evaluating and scaling AI projects.

As Wells put it:

“These are symptoms of a governance problem. Customers are asking, ‘How do we apply standards? Where do they fit? And how do we make them actionable?’”

The Databricks framework is designed to help enterprises bridge these gaps by aligning data lake management, ML models, and governance requirements.

Schellman: Standards and Certification in Practice

Danny Manimbo, Principal at Schellman, brought deep expertise on auditing and certification. With over a decade in ISO compliance and assurance services, he explained how ISO 42001 and similar frameworks create roadmaps for operationalizing AI governance.

Manimbo emphasized the importance of connecting regulatory requirements to technical implementation:

“Frameworks are not just about checking boxes. They provide a common language between practitioners, legal teams, and regulators.”

Connecting the Dots: Regulation, Standards, and Customers

The second half of the webinar shifted to a broader discussion on what panelists are hearing from customers. A few recurring themes included:

  • Growing demand from boards and executives for AI risk visibility.
  • Concerns about global regulatory fragmentation (EU AI Act, NIST AI RMF, ISO, etc.).
  • The need for practical tools that scale governance without slowing innovation.

The consensus: frameworks like ISO 42001 are emerging as bridges between the technical and regulatory worlds.

Key Takeaways

  • Standards drive clarity – ISO 42001 and similar frameworks help align regulators, legal teams, and practitioners.
  • Governance is a shared responsibility – From data scientists to compliance officers, roles must be clearly defined.
  • Frameworks enable communication – They create a “common language” that makes AI risk visible and explainable.
  • Action is urgent – Regulations like the EU AI Act are here. Enterprises that start now will have a competitive advantage.

In conclusion, AI is moving fast, but trusted AI is achievable when frameworks and standards guide enterprise AI governance. By embracing these practices today, enterprises can innovate confidently while de-risking AI deployment.

You can watch the full webinar recap here.
