Anastassia Kornilova is the Director of Machine Learning at Trustible. Anastassia translates research into actionable insights and uses AI to accelerate compliance with regulations. Her notable projects include the Trustible Model Ratings and the AI Policy Analyzer. Previously, she worked at Snorkel AI developing large-scale machine learning systems, and at FiscalNote developing NLP […]
Navigating The AI Regulatory Minefield: State And Local Themes From Recent Legislation
This article was originally published on Forbes. The complex regulatory landscape for artificial intelligence (AI) has become a pressing challenge for businesses. Governments are approaching AI through the same piecemeal lens as other emerging technologies, such as autonomous vehicles and ride-sharing, and even data privacy. In the absence of a […]
Trustible Becomes Official Implementation Partner for the Databricks AI Governance Framework (DAGF)
Despite the explosive growth of AI, most enterprises remain unprepared to manage the very real risks that come with its adoption. While the opportunities are vast—from smarter products to more efficient operations—the path to realizing AI’s full potential is fraught with challenges around performance, cybersecurity, privacy, ethics, and legal compliance. Without a strong AI governance […]
Trustible’s Perspective: The AI Moratorium would have been bad for AI adoption
In the early hours of July 1, 2025, the Senate overwhelmingly voted to strip the proposed federal moratorium on state and local AI laws from the Republicans' reconciliation bill. The moratorium went through several rewrites in an attempt to salvage it, though ultimately 99 Senators supported removing it from the final legislative package. While the political […]
Understanding the Data in AI
Data governance is a key component of responsible AI governance, and it features prominently in every emerging AI regulation and standard. However, “data” is not a monolithic concept within AI systems. From the massive datasets collected for training large language models (LLMs) to the user feedback loops that refine and improve outputs, multiple “data streams” flow through any modern AI application.
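As a loose illustration of that idea, the sketch below separates a few of the data streams an LLM-backed application might track for governance purposes. It is not taken from the full post; the stream categories, field names, and inventory entries are assumptions chosen only to show how distinct streams can be inventoried separately.

```python
from dataclasses import dataclass
from enum import Enum


class DataStream(Enum):
    """Hypothetical categories of data flowing through an AI application."""
    PRETRAINING = "pretraining"          # large corpora used to train the base model
    FINE_TUNING = "fine_tuning"          # curated examples used to adapt the model
    INFERENCE_INPUT = "inference_input"  # prompts or records submitted by users
    MODEL_OUTPUT = "model_output"        # generations returned to users
    FEEDBACK = "feedback"                # ratings and corrections used to refine outputs


@dataclass
class DataAsset:
    """A single dataset tracked in an (assumed) governance inventory."""
    name: str
    stream: DataStream
    contains_personal_data: bool
    retention_days: int


inventory = [
    DataAsset("support-chat-logs", DataStream.INFERENCE_INPUT, True, 90),
    DataAsset("thumbs-up-down-ratings", DataStream.FEEDBACK, False, 365),
]

# Governance questions differ by stream; one simple check is which assets hold personal data.
flagged = [asset.name for asset in inventory if asset.contains_personal_data]
print(flagged)
```

The point of splitting streams this way is that obligations (retention, consent, documentation) rarely apply uniformly across training data, user inputs, outputs, and feedback.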
Navigating AI Vendor Risk: 10 Questions for your Vendor Due Diligence Process
AI is everywhere, but the race to add AI from vendors has embedded unknown risks into your supply chain. Knowing what type of AI your suppliers use is difficult enough, let alone knowing how to ensure your due diligence adequately addresses the unique risks it may pose. Yet, customers and regulators are increasingly probing into […]
What is AI Monitoring?
When many technical practitioners hear the term monitoring, they often think of internal monitoring of the AI system.
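For readers less familiar with that framing, here is a minimal sketch of what such internal monitoring often looks like in practice. The wrapper, metric names, and threshold are illustrative assumptions, not taken from the post.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_monitor")


def monitored_predict(model, features):
    """Wrap a prediction call with basic internal monitoring: latency and output logging."""
    start = time.perf_counter()
    prediction = model(features)
    latency_ms = (time.perf_counter() - start) * 1000
    logger.info("prediction=%s latency_ms=%.1f", prediction, latency_ms)
    if latency_ms > 500:  # illustrative latency threshold
        logger.warning("latency above threshold")
    return prediction


# Example with a stand-in "model":
result = monitored_predict(lambda x: sum(x) > 1.0, [0.4, 0.9])
```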
Understanding AI Stakeholders with Trustible’s AI Stakeholder Taxonomy
Trustible developed an AI Stakeholder Taxonomy that can help organizations easily identify stakeholders as part of the impact assessment process for their high-risk use cases.
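The taxonomy itself is described in the full post. Purely as an illustration of how a stakeholder taxonomy might be encoded for use in impact assessments, here is an assumed sketch; the group names, relationship labels, and impacts are placeholders, not Trustible's actual taxonomy.

```python
from dataclasses import dataclass, field


@dataclass
class Stakeholder:
    """One stakeholder group considered in an impact assessment (illustrative fields)."""
    name: str
    relationship: str                      # e.g. "operator", "subject", "affected third party"
    potential_impacts: list[str] = field(default_factory=list)


use_case_stakeholders = [
    Stakeholder("Loan applicants", "subject", ["unfair denial", "privacy exposure"]),
    Stakeholder("Underwriting staff", "operator", ["automation bias", "workload shift"]),
    Stakeholder("Regulators", "oversight", ["audit access", "reporting obligations"]),
]

# A simple check an assessment template might run: every group needs at least one impact noted.
missing = [s.name for s in use_case_stakeholders if not s.potential_impacts]
assert not missing, f"No impacts recorded for: {missing}"
```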
ML Deployment Patterns & Associated AI Governance Challenges
As the deployment of AI becomes pervasive, many teams from across your organization need to get involved in AI Governance, not only the data scientists and engineers. With increasing government regulation and reputational risk, it is essential that all stakeholders work with a consistent framework for categorizing different patterns of AI deployment. This blog post offers one high-level framework for categorizing AI deployment patterns and discusses some of the AI Governance challenges associated with each pattern.
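The post lays out its own framework; as a rough illustration of what a consistent categorization of deployment patterns can look like when written down, here is an assumed sketch. The pattern names and the governance challenges mapped to them are placeholders, not the post's framework.

```python
from enum import Enum


class DeploymentPattern(Enum):
    """Illustrative, assumed categories of how an AI capability reaches production."""
    THIRD_PARTY_API = "third_party_api"      # calling a vendor-hosted model
    FINE_TUNED_VENDOR_MODEL = "fine_tuned"   # adapting a vendor model on your own data
    IN_HOUSE_MODEL = "in_house"              # training and hosting your own model
    EMBEDDED_IN_PRODUCT = "embedded"         # AI features bundled inside purchased software


# Example governance questions per pattern (placeholders for discussion, not Trustible's list):
GOVERNANCE_CHALLENGES = {
    DeploymentPattern.THIRD_PARTY_API: ["vendor due diligence", "data sent outside the org"],
    DeploymentPattern.FINE_TUNED_VENDOR_MODEL: ["training-data provenance", "shared accountability"],
    DeploymentPattern.IN_HOUSE_MODEL: ["model documentation", "ongoing monitoring"],
    DeploymentPattern.EMBEDDED_IN_PRODUCT: ["discovering where AI is used", "contractual controls"],
}

for pattern, challenges in GOVERNANCE_CHALLENGES.items():
    print(pattern.value, "->", ", ".join(challenges))
```

A shared vocabulary like this lets legal, procurement, and engineering teams agree on which controls apply to which deployments without re-litigating the categories each time.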
Everything you need to know about the NY DFS Insurance Circular Letter No. 7
On July 11, 2024, the New York Department of Financial Services (NY DFS) released its final circular letter on the use of external consumer data and information sources (ECDIS), AI systems, and other predictive models in underwriting and pricing insurance policies and annuity contracts. A circular letter is not a regulation per se, but rather a formalized interpretation of existing laws and regulations by the NY DFS. The finalized guidance comes after the NY DFS sought input on its proposed circular letter, which was published in January 2024.