What Does the Global Pause on AI Laws Mean for AI Governance?

Informational blog overview

The global AI regulatory landscape has taken a completely new direction in just one year. The US previously led the way on AI safety, working with like-minded countries to build a responsible AI ecosystem. Since January 2025, however, the Trump Administration has swiftly shifted the narrative, focusing on AI innovation and pausing previous initiatives that may “unduly burden” AI development. The Administration’s tone has also thrown cold water on Congressional efforts to regulate AI, despite bipartisan efforts in the House and Senate to explore AI-related laws.

Governments across the globe appear to have adopted this approach, both to curry favor with the Trump Administration and to build their own national or regional AI ecosystems. The new approach prioritizes AI innovation and views legislative efforts as a barrier to reaping AI’s benefits. The AI Action Summit in Paris earlier this year underscored the move towards a more tech-friendly environment. Meanwhile, countries like China are taking notice and pursuing an agenda aimed at shaping the global AI regulatory landscape. The emerging divide ushers in a new phase of the AI arms race, one defined by unbridled innovation versus cautious growth.

In this blog post, we explore some of the changes in the regulatory landscape, what this means for the AI ecosystem, and what we expect to happen going forward. 

What is Happening Globally?

Below are a few examples of countries that are taking a second look at comprehensive AI laws:

  • Australia. In the lead-up to the May 2025 federal election, the Labor Party campaigned on passing a comprehensive AI law; however, recent reports indicate that those efforts have been abandoned.
  • Canada. The previous Liberal government, under Prime Minister Justin Trudeau, introduced a comprehensive AI law called the Artificial Intelligence and Data Act (AIDA). The AIDA died when the Canadian Parliament was dissolved ahead of the April 2025 federal election. The re-elected Liberal government does not intend to reintroduce the AIDA but does plan to take some action on AI and copyright.
  • China. The national government has not passed a national AI law; however, it has leveraged its massive bureaucracy to impose some restrictions and guardrails on AI. These rules include disclosure requirements for AI-generated content and oversight of recommendation algorithms. The Chinese government also released its AI Action Plan, which outlines how it intends to influence global AI innovation and standards.
  • Chile. The Chamber of Deputies introduced a comprehensive AI bill modeled on the EU AI Act. Legislators have come under pressure from industry over the bill in its current form.
  • EU. While the EU AI Act continues to take effect, there has been a push by industry and some EU policymakers to “Stop the Clock” on implementation. 
  • UK. The Labour government’s AI Opportunities Action Plan outlined plans to regulate AI used in critical sectors, but those efforts have been delayed until at least May 2026.

What Does This Mean?

When the EU passed the EU AI Act, many policy experts expected the new law to spark copycat legislation, as the GDPR did. However, the 2024 US presidential election transformed the global regulatory landscape by chilling most AI legislation. It is plausible that a less burdensome regulatory landscape could spark AI investment in the Global South, where there is opportunity for growth. Policymakers are also considering industry-specific AI rules to address certain higher-risk use cases while giving other industries room to breathe and experiment.

Unfortunately for the private sector, trust in the technology is not keeping pace with AI adoption, and that gap is hindering uptake. Consumers and organizations want some form of assurance that the technology is safe and that they are using it properly. Polls in the US and Canada have shown that society writ large wants more regulatory oversight of AI technologies; yet the lack of political will for comprehensive laws addressing issues like basic AI literacy programs and public disclosures will only widen that gap.

What’s Next

The pullback on AI regulations will leave a void that will likely be filled by industry and standards-setting bodies. While these stakeholders have produced useful frameworks in the past (e.g., the PCI DSS cybersecurity standard or ISO frameworks), relying solely on non-governmental entities is risky. For instance, industry-led efforts may prioritize issues that carry reputational or legal risk over consumer protection concerns with AI tools. Moreover, standards-setting bodies may lack the expertise to help operationalize their standards, relying on industry partnerships for actual implementation.

We also expect to see more interest in regulating AI at the state, provincial, and municipal levels. In the US, states are continuing to explore legislation for high-risk use cases and oversight of frontier model companies. Provincial governments in Canada, such as those in British Columbia and Ontario, are also working on AI-related laws.

Regardless, the lack of national regulatory frameworks or standards continues to inject uncertainty into the AI ecosystem. We discussed how the proposed US federal moratorium on state and local AI laws harmed AI adoption, and the same sentiment holds in countries that are pulling back on AI legislation. Moreover, without concrete rules of the road for AI, organizations will have fewer insights into how their systems operate, which can exacerbate the risks those systems pose. For instance, without a requirement to conduct impact assessments for certain high-risk systems, organizations cannot understand the negative impacts on those systems’ stakeholders.

Where Trustible Can Help

Trustible is the only native AI governance platform providing near real-time, actionable intelligence and practitioner context on global AI policy shifts, regulations, and the latest frameworks, standards, and models. These insights, infused into the Trustible platform, give organizations the tools to align with responsible AI frameworks and standards, such as the NIST AI Risk Management Framework and ISO standards. In addition, we provide regulatory tracking at the local, state/provincial, national, and international levels, so that when a regulatory change impacts your business operations, you have the resources, context, and suggested actions to keep your organization moving forward with AI.

Regardless of where the regulatory landscape goes, Trustible can help your organization align with existing standards and frameworks to accelerate responsible AI adoption. Connect with us, and let’s get AI governance done.
