A Pragmatic Blueprint for AI Regulation

An AI startup’s proposal for fair, pro-growth, pro-AI, non-partisan AI regulation

AI is one of the most transformative technologies of the century, with the potential to accelerate scientific research, improve healthcare outcomes, and help small businesses compete with larger enterprises. The United States currently leads the world in AI development. Yet despite this leadership, a significant gap has emerged between AI’s potential and its actual adoption. Many businesses remain on the sidelines, uncertain whether AI tools are reliable enough to deploy, unclear on their legal exposure, and unsure which vendors they can trust.

This adoption gap is the central challenge facing American AI policy today. It poses a direct risk to national competitiveness. China and other nations are investing heavily in AI deployment across their economies, and they will not wait for American businesses to build confidence. If the United States cannot translate its technological leadership into widespread adoption, that leadership will erode. There is also a domestic economic risk. Billions of dollars have flowed into AI companies on the expectation of transformative returns. If adoption stalls and revenue growth disappoints, a bubble correction could devastate the very industry the United States is counting on to maintain its edge.

Closing this gap requires trust. And trust requires a regulatory environment that establishes clear rules without stifling innovation. At Trustible, we define AI governance as the combination of processes, policies, and evaluations that manage and mitigate the risks of AI. Done well, governance does not slow adoption. It accelerates adoption by giving businesses the confidence to invest and deploy. Critically, trust cannot be mandated. Attempting to force AI on skeptical businesses, workers, or consumers will generate backlash. Sustainable adoption requires bringing stakeholders along willingly and building genuine confidence in the systems being deployed.

Right now, policymakers are not hitting the mark. The AI policy landscape is fragmented and uncertain. The rollout of the European Union’s AI Act has been marked by repeated debates over timing and simplification. State laws in the United States face constant threat of federal preemption. High-profile lawsuits are working through courts with judges applying old frameworks to new problems. Meanwhile, the proposals on the table tend toward extremes: some are too heavy, imposing compliance burdens only the largest firms can absorb; others are too light, gesturing at concerns without creating real accountability.

The loudest voices in the debate have crowded out the reasonable middle. AI doomers treat the technology as an existential threat demanding precautionary restrictions. AI optimists dismiss concerns about harm as obstacles to progress. Neither camp addresses what most businesses actually need: a stable, predictable environment where they can adopt AI with confidence.

We call ourselves AI pragmatists. We believe AI will be genuinely transformative, but that transformation does not have to be catastrophic or ungoverned. We are not interested in hypothetical extinction scenarios, nor do we believe that market forces alone will solve every problem. Pragmatism means focusing on the actual barriers to adoption, the real harms that have materialized, and the practical compromises that can align incentives across the value chain.

At its core, good regulation allocates risk appropriately. It places accountability on those best positioned to manage it while protecting those who lack the information to protect themselves. No one wants to fly on an unregulated plane or receive care from an unlicensed professional. Thoughtful regulatory frameworks build trust in industries, and that trust allows markets to function and grow.

This paper offers policymakers a pragmatic framework built around five core positions: a shared liability model that distributes accountability across model providers, deployers, and end users; a balanced approach to copyright that protects creators while enabling beneficial AI development; principles for protecting children while building AI literacy; content provenance systems that help distinguish authentic from synthetic content; and information-sharing mechanisms that reduce uncertainty across the ecosystem. Each position reflects insights from our direct experience helping companies govern AI systems in practice, and each is designed to create conditions where responsible actors can thrive.
