Should the EU “Stop the Clock” on the AI Act?

The European Union (EU) AI Act entered into force in August 2024, after years of negotiations (and some drama). Since then, implementation has been somewhat bumpy. The initial set of obligations for general-purpose AI (GPAI) providers took effect in August 2025, but the voluntary Code of Practice faced multiple drafting delays; the finalized version was released with less than a month to go before GPAI providers needed to comply with the law.

It was during the GPAI Code of Practice drafting process that the “Stop the Clock” movement began gaining momentum. Stop the Clock is fairly straightforward: proponents say that the European Commission should pause implementation of the AI Act, or delay certain obligations, until the law can be re-evaluated and potentially revised. Over the summer, support for Stop the Clock grew among industry and policymakers across the EU, including the Swedish Prime Minister. Former European Central Bank President Mario Draghi (author of the much-discussed Draghi Report) recently gave the movement a major boost by backing a pause.

The European Commission initially threw cold water on the idea, but it is not completely dead. The Commission is expected to evaluate options for a pause at an upcoming AI Board meeting in October, primarily because of implementation delays at the national level. In this blog post, we analyze the risks and benefits of pausing the AI Act’s implementation, as well as where the debate leaves AI governance professionals.

The Risks of Stopping the Clock

In theory, stopping the EU AI Act from being fully implemented does not seem difficult. The European Commission could announce tomorrow that it plans to delay the Act and follow up with the necessary legislative fixes to achieve that goal. However, there are several considerations to keep in mind should that happen. 

A Pause Will Hurt Companies That Are Implementing Their Compliance Frameworks

The EU AI Act is massive and complex, with 113 articles, 13 annexes, and 180 recitals. Once it officially passed, companies began drawing up their compliance plans because they knew it would take years to decipher and implement the law’s requirements. This is especially true for highly regulated industries with higher-risk use cases (e.g., healthcare, finance, or criminal justice). August 2026 marks another major compliance deadline, and companies are scrambling to figure out what needs to be in place by then. Pausing the law would inject considerable uncertainty for these companies. An indefinite delay would mean resources were spent only to have the proverbial football pulled away right before the kick. And if the European Commission chose to go back to the drawing board, companies would be left in compliance limbo until a new agreement is struck.

An EU Patchwork of AI Law

There has been some criticism that the EU AI Act tries to accomplish too much and that a sectoral approach would be more effective. Yes, the AI Act is a complicated, comprehensive law, but the alternative is far worse. The European Commission pursued the AI Act in part because the absence of a Union-wide law would have left the door open to 27 different national AI laws. If there is anxiety in the U.S. over the patchwork of state laws, imagine the challenges of 27 national AI laws. Stopping the clock on the AI Act may prompt certain Member States to consider implementing their own laws. While it is still possible for Member States to pass national AI laws (Italy being a good example), the AI Act is a bulwark against a patchwork of European national laws.

A Geopolitical Vacuum Forms on AI Guardrails

The U.S. has dramatically pulled back from its role as a global leader in AI safety under the Trump Administration. The U.S. maintains some interest in security standards but has shown little interest in addressing broader AI risks. In fact, the push to eliminate “woke AI” may cause frontier model providers to loosen existing AI guardrails to avoid being labeled “woke.” The absence of U.S. leadership on AI safety leaves a large geopolitical gap that China is more than happy to fill, threatening respect for human rights and democratic values. As it stands, the EU AI Act is the best defense against those threats because the Act prohibits certain AI use cases and imposes AI safety requirements throughout the value chain. Pausing or delaying the AI Act would cede almost all influence over the AI ecosystem to China.

The Upsides of Pausing the AI Act

Stopping the clock is not all doom and gloom. It is important to understand that there are some benefits to a potential pause.

Policymakers Get Room to Breathe and Revise

The AI Act took more than three years to finalize, and the current iteration does not account for some of the massive shifts in AI technology and policy over that period. For instance, copyright issues and worker displacement have become far larger concerns than they were in 2021, when the European Commission first proposed the AI Act. Moreover, the shift toward agents and agentic AI is disrupting existing AI governance models. Delaying the implementation timeline gives policymakers space to amend the law and address some of these new concerns, while also considering how to ease the compliance burdens posed by the existing requirements.

Avoids Concerns with Non-Compliance from Major Tech Companies

Big tech does not love the EU AI Act for some fairly obvious reasons, chief among them the hefty compliance burdens. EU tech laws have pushed companies to pull back on product launches (like Apple Intelligence), and rumors have swirled that companies may boycott compliance with the EU AI Act. We have already seen some resistance to the voluntary Code of Practice for GPAI, with Meta refusing to sign altogether and xAI agreeing only to the security chapter. These cracks, while not large, may expose a deeper desire to test the limits of not complying with the law. If that is the case, a temporary pause would allow the EU to save some face and work on a broader solution to mitigate non-compliance by big tech companies.

Unshackles the EU AI Ecosystem

The European tech industry is woefully underdeveloped, and there are real concerns that the AI Act will further harm an EU-based AI ecosystem. Yes, Mistral agreed to the GPAI Code of Practice, but that does not mean the company is thrilled about the law writ large. We have also seen a push by countries like France to be more pro-innovation and focus on tech investments rather than regulations. A pause would give the EU an opportunity to build a tech industry guided by both innovation and European values.

Presses Reset with the Trump Administration

It is no secret that the Trump Administration does not like the EU AI Act. There is no better evidence of that than Vice President JD Vance declaring at the Paris AI Action Summit that he was not there “to talk about AI safety” but “to talk about AI opportunity.” The Administration continues to shun efforts that appear to regulate AI, which includes entertaining sanctions against the EU over its tech laws. Pausing the AI Act could reset the relationship between the EU and the Trump Administration on AI, which could lead to cooperation on specific AI initiatives like security standards.

Where Does This Leave AI Governance Professionals?

The key takeaway for AI governance professionals is that there is a lot of talk but little action. There are prominent voices arguing to stop the clock, but the European Commission has not made a final decision on whether to implement a pause. This means that if you are subject to the EU AI Act, it is better to be safe than sorry. The next major milestone is less than a year away (August 2, 2026). Companies should continue working on their compliance programs, which means understanding what type of entity they are under the law (e.g., deployer or provider), creating AI inventories and assessing whether their use cases qualify as high-risk, and updating their AI literacy programs to capture changes across the AI policy landscape.

Our Final Thoughts

There is a lot of drumbeating around AI innovation (primarily from the U.S.), and it is causing other countries to reconsider their own AI regulatory efforts. On the flip side, it is not wrong for the EU to be concerned about the effects of new technologies on its citizens and to want an approach that tries to address those issues. This is especially true in an era where isolationism is on the rise. While stopping the clock may not be a viable option, there is still time to issue implementation guidance and standards. If the European Commission wants to press forward while also scratching the itch of those who want a pause, it could use existing forums to produce guidance that softens some of the harder edges of the law.
