These varying approaches are shaped by competition among great powers in an increasingly multipolar world and will have important implications for peace and security. The question is whether varying perspectives on AI governance are mutually exclusive or can be balanced to advance national interests while ensuring international security.
The Great Power Approach
One approach highlighted at the summit was a great power competition perspective. This view espouses a winner-take-all outlook, best captured by Russian President Vladimir Putin in 2017 when he emphasized AI’s emerging critical importance: "Artificial intelligence is the future, not only for Russia but for all humankind ... Whoever becomes the leader in this sphere will become the ruler of the world." Eight years later, the strategic significance that countries place on AI supremacy has grown exponentially. Today, we are seeing a 21st century AI arms race.
The U.S. has maintained an assertive AI approach across recent administrations. Vice President J.D. Vance, representing the U.S. at the summit, criticized Europe's stringent regulations, warning that excessive oversight could stifle innovation. He has previously described the new administration's overall strategy as "all gas, no brakes," which reflects the U.S. commitment to rapid and unrestrained advancement in AI.
A recent Trump administration executive order revokes previous policies perceived as hindrances to innovation, and it mandates an action plan within 180 days to sustain U.S. AI leadership, focusing on economic competitiveness and national security. Moreover, the U.S. has launched the "Stargate" project — a joint venture involving OpenAI, SoftBank and Oracle — aiming to invest up to $500 billion in AI infrastructure. The initiative plans to construct data centers across the country, starting with a significant facility in Texas, to bolster AI capabilities and secure American leadership in the field.
Aside from the recent efforts to deregulate some aspects of AI development, U.S. AI policy has largely been consistent in its investments and its competitive stance on the world stage. One of the earlier moves in this direction was the 2022 CHIPS and Science Act, which allocated substantial funding to bolster domestic semiconductor manufacturing and reduce reliance on Chinese technology. The act included a $12 billion investment in onshoring Taiwan Semiconductor Manufacturing Company production to Phoenix, Arizona, aimed at securing the U.S. supply chain for advanced semiconductors — an essential component for AI development. Taiwan currently produces over 90% of the world’s most sophisticated chips, leaving the global AI and defense industries vulnerable to disruption, particularly given China’s threats to invade Taiwan by 2027.
The Biden administration also implemented multiple rounds of stringent export controls to impede China's advancement in AI. This includes comprehensive measures taken by the U.S. Department of Commerce's Bureau of Industry and Security to restrict China's access to advanced technologies, including AI chips, cloud services, AI model weights and semiconductor manufacturing items. These measures are designed to safeguard U.S. national security by preventing the transfer of critical AI technologies that could enhance China's military and surveillance capabilities.
It should be noted, however, that despite these restrictions, China's AI sector has demonstrated resilience and innovation. DeepSeek, a Chinese AI startup, sent shockwaves through markets and policy circles with its development of an advanced AI model that rivals leading U.S. counterparts while operating at significantly lower costs. This breakthrough raises serious national security concerns for the United States, as it underscores China's accelerating AI capabilities, potentially outpacing U.S. advancements and challenging efforts to maintain technological superiority in critical sectors, including defense, cybersecurity and economic competitiveness.
Ultimately, the U.S. and U.K. declined to sign the summit’s final declaration, a decision consistent with their hands-off approach to AI governance. It is unsurprising given Vance’s advocacy for an unrestrained, pro-innovation strategy and the broader Whiggish optimism that has historically characterized the Anglosphere’s approach to technology. Even though this market-driven sentiment is widely held, the U.S. is still debating AI policies, as decisions of such global significance should be shaped through robust debate in Congress to structure the laws and institutions that will help protect U.S. interests.
Europe’s Cautious Approach
At the summit, French President Emmanuel Macron promised to “cut red tape on tech,” but the overall approach from Europe remains cautious. Following a number of statist controls targeting the tech sector, the European Union's Artificial Intelligence Act, enacted in August 2024, represented the world's first comprehensive legal framework for AI. It classifies AI applications into risk categories — unacceptable, high, limited and minimal — and imposes strict regulations on high-risk applications and outright bans on those deemed unacceptable, including applications that could contribute to the commercial development of advanced weapons capabilities. The act mandates rigorous compliance requirements, including data quality standards, transparency, human oversight and accountability measures, to ensure that AI systems uphold fundamental rights and safety standards.
The EU has also enacted several other key regulations that have implications for AI and the broader tech sector, including the Digital Services Act and Digital Markets Act to enhance transparency, content moderation, and fair competition in digital platforms, as well as the Cyber Resilience Act to strengthen cybersecurity in AI-powered systems. Additionally, updates to the sweeping General Data Protection Regulation have addressed AI-specific concerns, further reinforcing the EU’s precautionary approach to AI governance.
This more cautious approach is shared by many countries and was a focal point at the 2024 United Nations Summit of the Future, held in conjunction with the launch of the 79th session of the United Nations General Assembly. During the summit, world leaders adopted the Pact for the Future, which includes the Global Digital Compact — the first worldwide agreement on the regulation of artificial intelligence. The compact emphasizes the “ethical use of AI,” advocating for frameworks that ensure accountability in AI development and deployment. Concerns highlighted during the summit encompassed the potential for AI to exacerbate social unrest, infringe on human rights and disrupt labor markets, all issues connected to violent conflict around the world.
In Paris, Indian Prime Minister Narendra Modi emphasized the need for governance frameworks to ensure AI serves its intended purpose while unlocking economic opportunities, particularly for nations in the Global South. India's approach balances innovation and oversight, positioning AI as a driver of inclusive growth in health care, agriculture, education and digital governance. Similarly, the African Union has embraced AI’s transformative potential while advocating for responsible governance, adopting its Continental Artificial Intelligence Strategy to harness AI for development and investment, while mitigating risks.
Shaping Peace and Security
As Thucydides famously stated, “The strong do what they can, and the weak suffer what they must.” The AI race, like all technological revolutions before, will be shaped by those with the resources and the strategic will to dominate. However, without international cooperation, AI could become a force that threatens global security.
The rapid acceleration of disruptive technologies is not merely another global security challenge alongside nuclear instability or transnational terrorism — it is a cross-cutting force multiplier, capable of exacerbating all other threats at an unprecedented pace and scale. Within this landscape, AI is the ultimate force multiplier, evolving faster than the world’s ability to fully grasp the consequences.
The AI Action Summit in Paris underscored the stark divide between global approaches: an unrestrained race for AI supremacy, championed by the U.S. and China, versus a cautious, regulatory path, led by Europe and backed by multilateral institutions like the United Nations. These competing visions set the stage for an unpredictable future — one in which AI will not only determine national power but could fundamentally reshape the international order itself.
As history has shown, these approaches are not mutually exclusive. Just as the Cold War was defined by an accelerating nuclear arms race — paralleled by the simultaneous pursuit of arms control treaties and deterrence strategies — the AI era will require a dual approach. While leading nations will inevitably seek to push the frontiers of AI for strategic advantage, there is a simultaneous and urgent need for robust governance frameworks to prevent catastrophic consequences. Much like the nuclear non-proliferation treaties and strategic stability dialogues between the U.S. and the Soviet Union sought to contain the worst excesses of the atomic age, an AI governance framework must seek to balance strategic interests with global stability.
The U.S. should lead in the AI sector with unmatched innovation and industry strength, but smart policies can be the shield that ensures new technologies serve and do not threaten U.S. interests. As Clausewitz observed, the character of war changes, but its nature remains constant. Much about AI remains unknown, but one certainty is that bad actors will exploit it to cause violence and seek harm — and without strong guardrails, the American people could be exposed to threats that could have been anticipated.
PHOTO: A Trainium 2 system at an Amazon facility in Austin, Texas, Nov. 25, 2024. Amazon and several start-ups are beginning to offer credible alternatives to Nvidia’s market-dominating AI chips. (Spencer Lowell/The New York Times)
The views expressed in this publication are those of the author(s).