AI Governance at Scale: Trustible Becomes Official Databricks Technology Partner

At Trustible, we empower organizations to responsibly build, deploy, and monitor AI systems at scale. Today, we are excited to announce our partnership with Databricks, connecting our leading AI governance platform with their trusted data and AI lakehouse so that joint customers can rapidly implement responsible, compliant, and accountable AI.

We believe AI practitioners and innovators should focus on both maximizing the benefits of AI and minimizing its risks. Their expertise and knowledge of these systems will be essential for complying with emerging regulations, which require collecting and maintaining a corpus of documentary evidence. Our AI governance platform translates complex legal requirements and responsible AI frameworks into actionable steps, enabling collaboration between AI and legal/compliance teams.

Trustible’s integration with Databricks emerged from a pain point that our team experienced while previously leading AI/ML teams: how can we leverage information already stored in the Databricks Lakehouse to accelerate compliance with emerging regulations like the European Union’s AI Act? Moreover, how do we set up policies and processes that are prepared for future requirements such as external audits, post-market monitoring, and public disclosure requirements?

Emerging AI regulations like the EU AI Act will require extensive documentation and disclosure about underlying models. Key model attributes such as training objectives, accuracy metrics, and bias/fairness statistics must be provided to users and regulators to convey risks, limitations, and mitigation steps. It is best practice, and will soon be a regulatory requirement, to store these kinds of metrics and metadata in a model registry such as MLflow. In practice, organizations rarely have just one model, but rather a whole set of model variants and experiments with different hyperparameters. Ensuring that the models that reach production carry the required documentation avoids the costly task of retrofitting compliance post-deployment.
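To make this concrete, here is a minimal sketch, assuming a scikit-learn classifier, of how such attributes might be captured in MLflow at training time. The metric and tag names (`training_objective`, `demographic_parity_diff`) and the registered model name are illustrative placeholders, not a Trustible or regulatory standard.

```python
import mlflow
import mlflow.sklearn
from mlflow.models import infer_signature
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy training run; in practice this would be your real pipeline.
X, y = make_classification(n_samples=500, random_state=42)
model = LogisticRegression().fit(X, y)

with mlflow.start_run():
    # Record the attributes regulators may ask for alongside the model.
    mlflow.set_tag("training_objective", "binary classification of applicant risk")
    mlflow.log_metric("accuracy", model.score(X, y))  # accuracy on training data, illustrative
    mlflow.log_metric("demographic_parity_diff", 0.03)  # placeholder fairness statistic
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        signature=infer_signature(X, model.predict(X)),
        registered_model_name="credit_risk_classifier",  # hypothetical registry entry
    )
```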

That’s where Trustible’s integration with MLflow on Databricks comes in. Our platform generates regulatory model documentation by automatically mapping MLflow metrics and metadata to the required fields in Model Cards, tailoring reporting to legal and governance needs. This is just the beginning: going forward, we will extend integrations across the full machine learning lifecycle, enabling continuous monitoring, auditing, and transparency as regulations and customer needs evolve.
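While Trustible's mapping logic is its own, the general shape of the task can be sketched with the public MLflow client API: fetch the metrics and tags attached to a registered model version, then project them onto documentation fields. The model name, version, and Model Card field names below are hypothetical.

```python
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Look up a specific registered model version and its originating run.
version = client.get_model_version("credit_risk_classifier", "1")  # hypothetical model
run = client.get_run(version.run_id)

# Project MLflow metadata onto illustrative Model Card fields.
model_card = {
    "intended_use": run.data.tags.get("training_objective", "unspecified"),
    "performance": {"accuracy": run.data.metrics.get("accuracy")},
    "fairness": {"demographic_parity_diff": run.data.metrics.get("demographic_parity_diff")},
    "training_params": run.data.params,
}
print(model_card)
```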

Many proposed regulations have specific requirements for testing, internal or external audits, post-market monitoring processes, and enforced internal AI governance policies. Trustible can help Databricks customers say it, do it, and prove it. For example, Trustible will help organizations say what risks are associated with a particular AI use case, use integrations with Databricks to implement the technical risk mitigations, and then export an analysis notebook as a compliance artifact to prove that the evaluation is in place. As auditing, record keeping, and post-market monitoring requirements become clearer, Trustible can help Databricks customers identify what policies they need on their lakehouse, generate proof of enforcement, and connect auditors directly and securely through Delta Sharing.
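As an illustration of that last step, the open-source delta-sharing client lets a recipient read a shared table with nothing more than a credential profile issued by the data owner. The profile file and the share, schema, and table names below are hypothetical.

```python
import delta_sharing

# "auditor.share" is the credential profile the organization issues to the auditor.
# Format: <profile-file>#<share>.<schema>.<table>
table_url = "auditor.share#compliance_share.governance.model_evaluations"

# Load the shared compliance table as a pandas DataFrame, read-only.
evaluations = delta_sharing.load_as_pandas(table_url)
print(evaluations.head())
```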

The future of AI development will require visibility and collaboration among a broader set of stakeholders, including compliance teams, senior management, regulators, and the wider public. Trustible enables organizations to build trusted and accountable AI systems by connecting the needs and requirements of these stakeholder groups. We’re excited to be working with Databricks as a Technology Partner and look forward to helping organizations navigate the regulated future of AI.
