Publication

Is the AI Action Plan Positioning the U.S. for "AI Dominance"?

Jul 29, 2025

Since assuming office in January 2025, the Trump administration has sought to transform the U.S.'s approach to, and regulation of, Artificial Intelligence (AI). From signing an executive order entitled "Removing Barriers to American Leadership in Artificial Intelligence," to allocating billions of dollars to develop military and government AI programs through "The One Big Beautiful Bill Act," and unveiling a trio of executive orders aimed at turning the U.S. into an AI export powerhouse, the Trump administration has advanced an ambitious regulatory philosophy with respect to AI.

On July 23, 2025, the White House deepened its commitment to this philosophy by debuting its long-awaited AI Action Plan (the Plan). The Plan, which aims to "usher in a new golden age of human flourishing, economic competitiveness, and national security for the American people," presents numerous opportunities and constraints for companies in the AI sector. Companies should consider the Plan's proposed policy actions and potential implementation issues as they forecast and develop future business opportunities, assess the impact on organizational management, and secure competitive advantages.

Key Elements of the Plan

The Plan introduces three pillars to "achieve [U.S.] global dominance" in AI: (1) accelerating AI innovation by removing regulatory barriers; (2) building AI infrastructure to support AI at scale; and (3) positioning the U.S. as a global leader in AI diplomacy and security. Together, these pillars treat AI development as both a critical national security imperative and a vital source of economic stability. Combined with the Biden administration's emphasis on onshoring the semiconductor industry in the U.S. via the CHIPS Act and its restrictions on transferring technical "know-how" to competitors (including China), the Plan's pillars provide a roadmap to try to maximize domestic competitive advantages.

Pillar I: Accelerating AI Innovation

Pillar I aims to "create the conditions where private-sector-led innovation can flourish." It seeks to accomplish this goal by directing a government-wide effort to identify and remove federal regulations and policies that could hinder AI innovation and adoption. At a high level, Pillar I directs the federal government to invest in AI-enabled science, support next-generation manufacturing, and drive adoption of AI within the Department of Defense (DoD). Because the DoD is the largest government agency, there is precedent for innovation that begins there to spread to other federal agencies and then down to state agencies.

To remove “red tape and onerous regulation,” Pillar I recommends five distinct policy actions for the Office of Science and Technology Policy (OSTP), the Office of Management and Budget (OMB), the Federal Communications Commission (FCC), and the Federal Trade Commission (FTC):

  1. Launch a Request for Information about current federal regulations that obstruct AI innovation;
  2. Identify, revise, and repeal regulations, memoranda, guidance documents, and other orders that "unnecessarily hinder AI development or deployment";
  3. Limit funding to states whose AI regulatory regimes may hinder the development and deployment of AI;
  4. Evaluate whether states’ AI regulations interfere with the FCC’s ability to carry out its obligations; and
  5. Review all FTC investigations commenced by the Biden administration to ensure they do not unduly burden AI innovation.

Pillar I also seeks to accelerate AI innovation by ensuring that AI "protects free speech and American values." To that end, the Plan mandates that the National Institute of Standards and Technology (NIST) review its AI Risk Management Framework to eliminate references to "misinformation, Diversity, Equity, and Inclusion, and climate change." This includes requiring an update to federal procurement guidelines to ensure that government contracts go only to AI developers whose systems are "objective and free from top-down ideological bias." The timing of this directive is consistent with the ongoing overhaul of federal procurement regulations intended to expedite agencies' purchases of goods and services at lower cost than historically. However, it is unclear how and to what extent Pillar I will impact ongoing government programs, such as DoD's Cybersecurity Maturity Model Certification (CMMC) 2.0 program. CMMC 2.0 requires defense contractors to affirm compliance with a variety of significant security requirements, and the role of AI-powered platforms in accelerating such compliance remains uncertain.

Finally, Pillar I advocates for the adoption of open-source and open-weight AI models across the U.S. It recommends that NIST, OSTP, and the National Science Foundation's (NSF) National AI Research Resource (NAIRR) partner with industry to develop financial markets for compute resources. Pillar I also instructs OSTP to publish a new National AI Research and Development Strategic Plan to guide federal AI research investments.

Pillar II: Building AI Infrastructure

Pillar II seeks to build out the U.S.’s energy capacity and eliminate cumbersome permitting processes for data centers, semiconductor manufacturing facilities, and energy infrastructure. To that end, Pillar II recommends seven policy actions:

  1. Establish categorical exclusions under the National Environmental Policy Act to fast-track the construction of data centers;
  2. Expand the use of the FAST-41 process to cover all data center and energy projects;
  3. Consider developing a nationwide Clean Water Act Section 404 permit for data centers;
  4. Eliminate regulations under the Clean Air Act and Clean Water Act that hinder data center construction;
  5. Make federal lands available for data center construction;
  6. Ensure that the domestic AI computing stack is built on U.S. products; and
  7. Utilize AI itself to accelerate environmental reviews.

To facilitate the development of a robust electric grid, Pillar II urges the U.S. to prevent premature decommissioning of power plants, optimize existing resources, and prioritize next-generation nuclear and geothermal energy. Pillar II also emphasizes the need to "bring semiconductor manufacturing back to U.S. soil." Pillar II may benefit from the thorough due-diligence foundation created by the CHIPS Act, which the semiconductor industry and its supporting supply chains underwent to qualify for government funding opportunities. As part of the CHIPS Act process, local governments were required to invest in the semiconductor opportunity via incentives and support. Under Pillar II, a similar process could be used to facilitate changes to utility usage and access requirements.

Lastly, Pillar II calls for developing high-security data centers for military and intelligence community usage. It also instructs the Department of Labor (DOL) to train the U.S. workforce by creating a national initiative to identify high-priority occupations essential to the U.S.’s growing AI infrastructure. 

Pillar III: Leading in International AI Diplomacy and Security

Pillar III extends the U.S.’s AI strategy beyond its borders. It aims to promote global adoption of U.S. AI systems while simultaneously preventing U.S. adversaries from “free-riding on [America’s] innovation and investment.”

To export U.S. AI to allies and partners, Pillar III calls for the Department of Commerce (DOC) to establish and operationalize an aggressive export strategy. Specifically, Pillar III instructs DOC to gather proposals from industry for full-stack AI export packages and then work with the Department of State (DOS), the U.S. Trade and Development Agency, and others to strike deals focused on exporting hardware, models, software, and AI applications simultaneously.

In conjunction with this export strategy, Pillar III advocates strengthening export controls on sensitive technologies. It does so by instructing DOC to establish U.S. export controls on semiconductor manufacturing sub-systems and increase end-use monitoring in countries where there is a high risk of diversion of U.S. AI systems.

Finally, Pillar III emphasizes the need to encourage partners and allies to follow U.S. export controls. Pillar III envisions an ongoing role for the U.S. in international norm-setting by calling on the DOS and DOC to leverage the U.S.’s position in international diplomatic and standard-setting bodies to “advocate for international AI governance approaches that promote innovation, reflect American values, and counter authoritarian influence.” 

However, as with previous U.S. export controls imposed on the semiconductor industry and the flow of technical data, compliance may remain difficult. Because significant portions of industry supply chains and development run through China, companies — especially foreign companies or foreign affiliates of U.S. companies — may struggle to comply with some of the more onerous export controls that already exist.

Key Takeaways of the Plan

The Plan outlines a series of policy recommendations that reflect the Trump administration's scientific, economic, and international priorities. Compared to Biden-era AI policies, the Plan focuses less on safety and responsibility and more on growth and development. But this shift does not entirely abandon security and trustworthiness in the context of AI regulation: the Plan includes significant discussion about building out an AI "evaluations ecosystem" to assess and address national security risks and AI-specific vulnerabilities. It also encourages the U.S. government to evaluate frontier models for potential misuse in developing chemical, biological, radiological, nuclear, or explosive (CBRNE) weapons.

The Plan embraces public-private partnership as a primary method of advancing the Trump administration’s AI priorities. It urges the federal government to create industry-driven training partnerships, convene industry groups to develop national skill frameworks and competency models, and work with cloud service providers to codify priority access to computing resources in the event of a national emergency. The Plan also invites further feedback from industry on regulations that are overly burdensome with respect to AI innovation.

Much about the Plan's implementation remains unknown. It is unclear how companies will operationalize the Plan's recommendations while complying with the One Big Beautiful Bill Act's restrictions on the involvement of prohibited foreign entities in federally supported AI projects. It is also unclear how the Plan will treat the use of intellectual property and copyrighted material in the development of AI models. Many of these issues will likely be addressed in updated federal acquisition regulations, guidelines, and export controls, and in many cases proposed updates will be subject to public comment. Companies interested in these opportunities should consider taking advantage of the public comment periods to influence the scope of any developments around the Plan's pillars. Despite these uncertainties, the Plan highlights numerous significant opportunities for private sector AI leaders to deepen their collaboration with the U.S. government and extend their influence abroad.

*Any opinions expressed are those of the authors, and not necessarily the firm or their colleagues.

About Snell & Wilmer

Founded in 1938, Snell & Wilmer is a full-service business law firm with more than 500 attorneys practicing in 17 locations throughout the United States and in Mexico, including Los Angeles, Orange County, Palo Alto and San Diego, California; Phoenix and Tucson, Arizona; Denver, Colorado; Washington, D.C.; Boise, Idaho; Las Vegas and Reno-Tahoe, Nevada; Albuquerque, New Mexico; Portland, Oregon; Dallas, Texas; Salt Lake City, Utah; Seattle, Washington; and Los Cabos, Mexico. The firm represents clients ranging from large, publicly traded corporations to small businesses, individuals and entrepreneurs. For more information, visit swlaw.com.

©2025 Snell & Wilmer L.L.P. All rights reserved. The purpose of this publication is to provide readers with information on current topics of general interest and nothing herein shall be construed to create, offer, or memorialize the existence of an attorney-client relationship. The content should not be considered legal advice or opinion, because it may not apply to the specific facts of a particular matter. As guidance in areas is constantly changing and evolving, you should consider checking for updated guidance, or consult with legal counsel, before making any decisions.

Media Contact

Olivia Nguyen-Quang

Associate Director of Communications
media@swlaw.com 714.427.7490