What the Trump Administration’s AI Action Plan Means for Enterprises

The Trump Administration released “Winning the AI Race: America’s AI Action Plan” (AI Action Plan) on July 23, 2025. The AI Action Plan was published in accordance with the January 2025 Removing Barriers to American Leadership in AI Executive Order. The AI Action Plan proposes approximately 90 policy recommendations across three thematic pillars: Pillar I addresses AI innovation; Pillar II focuses on AI infrastructure; and Pillar III centers on national security and international engagement.

While the policy recommendations are limited to leveraging resources across the federal government, several of them could directly impact companies that develop, deploy, or use AI. Across the various policy recommendations that could influence the private sector, three general themes emerge:

  • The Trump Administration is injecting some level of uncertainty into the AI regulatory ecosystem by upending established rules and standards, which can leave organizations confused about their own obligations.
  • Some of the AI Action Plan’s high-level AI priorities contradict other administration actions or priorities (e.g., talent pipeline and energy).
  • Where the federal government is interested in standards development, there may be cascading obligations for organizations that contract with the federal government. Standards development may also add some clarity on issues such as AI security and incidents.

Below we analyze several of the policy recommendations that could impact the private sector and offer insights into why each of the recommendations matters for these companies. It is important to note that this is not an exhaustive list, and additional policy recommendations may be applicable depending on the nature and scope of your particular industry and business.

Pillar I: Accelerate AI Innovation

Remove Red Tape and Onerous Regulation

The policy recommendations take a top-down approach to shaping the regulatory environment and would inject further uncertainty into the regulatory landscape. One proposed policy recommendation would impose a “shadow” moratorium on state AI rules by limiting federal funding to states whose AI rules “may hinder the effectiveness” of that funding. Trustible previously discussed the negative consequences of a federal moratorium on state AI rules. Another policy recommendation calls on the Federal Trade Commission to review all previously initiated investigations that advance burdensome AI liability theories (or other liability theories that impact AI), as well as to review any AI-related enforcement actions.

Why does this matter: Companies are trying to navigate certain rules and regulations for AI from states, as well as how federal agencies treat AI. Implementing these recommendations would inject uncertainty into the regulatory landscape and make it difficult for AI governance professionals to understand their compliance obligations.

Ensure that Frontier AI Protects Free Speech and American Values

The policy recommendations seek to rewrite federal guidelines and standards. One of the policy recommendations suggests that NIST revisit the AI Risk Management Framework (AI RMF) to remove references to Diversity, Equity, and Inclusion (DEI) and climate change.

Why does this matter: This recommendation would upend a previously settled standard because it is not clear what will qualify as a “DEI” reference (there are only two explicit references to DEI). Organizations that adhere to the NIST AI RMF may also need to update their internal policies and procedures to align with the proposed updates. Moreover, changes to the NIST AI RMF could disrupt how organizations recruit AI talent, as the two DEI references are workforce-related.

A second recommendation proposes updating federal procurement guidelines to ensure that the federal government awards contracts to LLM developers who supply systems that are “objective and free from top-down ideological bias.”

Why does this matter: Changes to procurement guidance will not just impact frontier model providers. Organizations that contract with the federal government will need to make sure that the LLMs underlying their AI products and services align with government procurement guidance. The recommendation does not specify the criteria that would be used to assess the underlying objective, which adds further confusion as to which models may be acceptable for use.

Enable AI Adoption

The policy recommendations are aimed at improving AI adoption within the private sector. Specifically, there is a recommendation for sector-specific stakeholder engagement (e.g., healthcare, energy, and agriculture) to “accelerate the development and adoption of national standards for AI systems” and measure realized AI productivity in those sectors.

Why does this matter: This is an opportunity for organizations to influence the standards setting process by providing the government with concrete examples of successes and pain points with AI adoption. Organizations can advocate for clearer AI guidance and standards in certain industry sectors, especially those where AI use may be higher risk (e.g., healthcare and financial services). 

Empower American Workers in the Age of AI

The Trump Administration previously focused on AI-related workforce development issues, though within the context of developing an AI-literate workforce. These recommendations focus on how the federal government can encourage AI reskilling or upskilling for the current workforce. For instance, there is a policy recommendation to update IRS guidance to clarify that “many AI literacy and AI skill development programs may qualify as eligible educational assistance.”

Why does this matter: Organizations should think about how existing AI training programs may be eligible for tax-free reimbursement, or develop AI training programs in anticipation of potential tax benefits.

Build an AI Evaluations Ecosystem

The recommendations address how AI systems are evaluated for performance and reliability. Specifically, there is a recommendation for the federal government to conduct evaluations of its own AI systems, as well as to convene stakeholders to develop guidance on building AI evaluations.

Why does this matter: Evaluations being conducted by federal agencies can impact organizations that contract with those agencies, as they would need to conduct and document similar evaluations on AI tools that support work done for the respective agencies. 

Protect Commercial and Government AI Innovations

This is one of the many sections that focuses on AI security. There is a policy recommendation that proposes greater collaboration between intelligence agencies and AI developers to protect AI technologies from security risks.

Why does this matter: The federal government could draft AI security guidance as part of a broader partnership with organizations that develop AI technologies. Organizations that contract with the government would need to implement these security procedures for their AI systems.

Pillar II: Build American AI Infrastructure

Create Streamlined Permitting for Data Centers, Semiconductor Manufacturing Facilities, and Energy Infrastructure while Guaranteeing Security

The Trump Administration has prioritized tapping into new energy sources, in part to support energy demands from AI infrastructure. President Trump issued an Executive Order in tandem with the AI Action Plan specifically aimed at permitting AI data center infrastructure. This section of the Action Plan provides additional recommendations on how the Administration can support developing AI infrastructure, which focus on environmental reviews and expanding ways to provide additional energy sources.

Why does this matter: The recommendations’ underlying push to rapidly expand energy sources, coupled with the administration’s views on climate change, conflicts with sustainability efforts aimed at improving energy efficiency. Organizations that have tracked AI-related environmental impacts or sustainability metrics will likely have to cease or curtail such activities if they contract with the federal government.

Train a Skilled Workforce for AI Infrastructure

The recommendations focus on increasing AI educational and training opportunities for college students. It fits within the Trump Administration’s broader AI workforce development efforts.  

Why does this matter: Organizations may find it difficult to develop well-rounded talent pipelines with constraints on activities that appear like DEI initiatives or that rely on international workers. Moreover, recent reforms to student loan programs may decrease the talent pool due to rising higher education costs and limited loan options.

Bolster Critical Infrastructure Cybersecurity

The policy recommendations further the Trump Administration’s AI security efforts. One recommendation proposes that the Department of Homeland Security maintain guidance for private sector critical infrastructure entities on “remediating and responding to AI-specific vulnerabilities and threats” and improving information sharing on known vulnerabilities. 

Why does this matter: Implementing this recommendation can help organizations have a clearer understanding of the threat landscape as it relates to AI technologies. Organizations that deliver or support critical infrastructure could be required to implement AI incident response guidance and share information with the federal government related to their AI incidents.  

Promote Secure-By-Design AI Technologies and Applications

These are additional AI security recommendations, which include continued standards development to secure AI systems.

Why does this matter: Security standards imposed on federal agencies will reverberate into organizations that contract with those agencies. Organizations will need to implement or adhere to similar AI security standards.    

Promote Mature Federal Capacity for AI Incident Response

These recommendations propose developing incident response standards for the federal government and encourage interagency information sharing on AI vulnerabilities. 

Why does this matter: As with security standards, incident response standards imposed on federal agencies will also apply to private sector organizations that contract with those agencies. Developing these standards can also help organizations understand how to practically identify and remediate AI incidents.

Pillar III: Lead in International AI Diplomacy and Security

Ensure that the U.S. Government is at the Forefront of Evaluating National Security Risks in Frontier Models

The recommendations focus on addressing national security risks from frontier models, which includes setting standards to mitigate those risks. 

Why does this matter: While the recommendations are specific to frontier models, organizations that contract with the federal government will need to have greater supply chain transparency. Specifically, organizations will need insights into the data sources used to train or fine-tune the underlying models, the hardware and software components involved, and the corporate ownership of third- and fourth-party providers.

What’s Next?

The AI Action Plan is the most comprehensive vision for AI policy released in the second Trump Administration. It also articulates a strong break from previous AI policy initiatives under the Biden and first Trump Administrations, which emphasized AI safety and risk management. Both issues are notably absent from the AI Action Plan, along with data-related concerns (i.e., copyright issues). It is notable that the AI Action Plan dedicates only 3 of its 23 pages to international engagement on AI-related issues. Conversely, China’s recently released Action Plan on Global Governance of Artificial Intelligence leans into international engagement and influence to address AI concerns and opportunities.

As with other actions from the Executive Branch, the AI Action Plan has a limited effect in the absence of legislation from Congress. There is also an outstanding question about the Trump Administration’s ability to implement these recommendations. The political situation may change after November 2026, should Democrats win back a congressional chamber. The recent federal workforce layoffs may also slow federal agencies’ ability to execute these recommendations if agencies lack adequate in-house expertise.

Even if the recommendations are not fully realized, the general tone and tenor of the AI Action Plan sends a powerful message throughout the AI ecosystem. For instance, the Trump Administration has been clear about targeting “woke AI,” and industry could proactively respond by removing certain guardrails to improve its chances of securing government contracts. States may also think twice before passing additional AI-related legislation for fear that they may be overlooked for federal grants. The bottom line is that, while the Executive Branch may not get everything on its AI Action Plan wishlist, it can still shape the AI ecosystem.
