
The Trustible Story

Apr 19, 2023

4 min read


Hi there 👋! We’re Trustible – a software company dedicated to enabling organizations to adopt trustworthy & responsible AI. Here’s our story.

Artificial Intelligence (AI) is permeating every aspect of society. 🤖

AI is becoming a foundational tool in our everyday lives – from business applications to public services to consumer products. Recent advances in AI have dramatically accelerated its adoption across society – unquestionably changing the way humans interact with technology and basic services. Tools like Generative AI will make it easier than ever for businesses and governments to deploy AI within their organizations. Now, instead of requiring teams of ML experts with PhDs from leading institutions, access to cutting-edge, multi-purpose foundation models is simply an API call away.

Truthfully, we’re excited about AI. We think it has the potential to empower a more insightful, enriching, and productive society. That said, we are equally concerned about its potential harmful applications – from discrimination and wrongful prosecution, to unequal health care and national surveillance.

How did we get here? 🤔

The process for building AI has traditionally mimicked the software development process: ship fast and iterate. While this may be acceptable for some low-risk uses of AI, it does not work for higher-risk applications like cancer prediction, autonomous driving, or facial recognition. Once deployed into production, these systems can raise enormous concerns around safety, fairness, bias, accountability, and privacy – an issue that is particularly acute with Large Language Models (LLMs). Imagine, for a second, if we built buildings the way we build software. Our buildings would be collapsing every day in the name of ‘move fast and break things.’ High-risk AI must be treated like the other regulated products and tools we use in our everyday lives.


The conversation around Responsible AI is accelerating. 🗣️

Organizations developing or deploying AI need to shift toward deliberate design and operational management of AI systems that are ethical, transparent, and accountable to the stakeholders they interact with.

But companies looking to deploy AI face a myriad of fraught ethical and legal questions. Is our AI biased or discriminatory? Can we explain how our AI reached a conclusion? Should our AI automate decision-making? If our AI breaks the law or lies, who is liable? Where and how are we collecting the source data?

With great power comes great responsibility. Despite good intentions, organizations deploying AI often lack the enterprise tools and skills needed to build Responsible AI practices at scale. Moreover, they don't feel prepared to meet the requirements of emerging AI regulations.


Governments are stepping in to regulate AI. 🏛️

Governments around the world are taking notice and drafting regulations that will impose strict governance requirements on AI systems.


127 countries have enacted legislation containing the term ‘artificial intelligence.’ The most notable effort is the European Union’s AI Act, which will regulate AI systems and require conformity assessments for high-risk AI use cases. We expect this regulation to pass in 2023 and become the global standard for AI regulation, much like GDPR became the standard for data privacy.

In the United States, NIST has released its AI Risk Management Framework, which will likely inform U.S. regulation, and the White House has released its Blueprint for an AI Bill of Rights, which serves as a guide for institutions looking to develop internal AI policies. Federal and state legislation will play a large role as well, in addition to agency-specific regulations. Every organization developing or procuring AI will need to understand how these rules differ from one another, what internal controls must be put in place, and how to prove compliance across jurisdictions.

That’s where we come in. ✋

Trust in AI systems, and in the organizations deploying them, is the single most important factor driving successful adoption of AI. Many of the challenges we’ve outlined require interdisciplinary solutions – they are as much technical and business problems as they are socio-technical, political, and humanitarian ones. But there is a critical role for a technology solution that accelerates Responsible AI priorities and scales governance programs.

With a mission of maximizing trust, reducing risk, and increasing transparency, Trustible empowers organizations to confidently adopt AI technologies in a rapidly evolving regulatory environment. Even our name is a portmanteau of the words trustworthy and responsible.

The Trustible AI Governance platform integrates with existing AI/ML platforms and helps organizations define the AI policies they need, implement and enforce Responsible AI practices, and generate evidence to prove compliance with emerging AI regulatory frameworks and prepare for AI audits.


Trustible equips compliance and data science leaders with the workflows, checklists, documentation tools, and reporting capabilities necessary to succeed in this rapidly evolving regulatory environment. Building trust and deploying AI ethically goes beyond good governance and regulatory compliance, but both are a critical part of the Responsible AI ecosystem of solutions.


Much like the urgency our planet faces in combating climate change, we feel we must act with the same resolve to ensure the future of Artificial Intelligence is fair, reliable, and secure. That’s why we’ve decided to join the likes of Patagonia, Warby Parker, and Lemonade and incorporate as a Benefit Corporation – a corporate structure that allows us to both grow our business and have a positive impact on society.


As founders, we’ve spent nearly a decade at the intersection of policy and technology, having been early employees and executives at FiscalNote from its early-stage startup days all the way through IPO and beyond. Headquartered in Washington, DC, we fundamentally understand the role that regulations play in directing the capital markets, the public & private sectors, and civil society. We also believe that enterprise software is the most productive tool introduced into the business economy over the last 50 years. So we’re excited to bring these things together and put our dent in the universe. Come join us on our journey.


Read the press release about our emergence from stealth here.

- Gerald & Andrew

✅Certified Human Written: this post was not written (or edited!) by generative AI


