Commentary
21.09.2025

How "middle powers" can avoid being left behind in frontier AI

In this guest contribution, Anton Leicht of the Carnegie Endowment’s Technology and International Affairs team, and a former visiting researcher with the Centre for Digital Governance, discusses how “middle powers” can avoid being left behind in frontier AI as great powers like the United States and China move to capitalise on AI as a strategic technology.

Technical progress in artificial intelligence (AI) is rapid and enduring. Great powers like the United States and China are moving to capitalise on AI as a strategic technology: boosting their leading developers, constraining and leveraging access to their AI products, and deploying AI for national security applications. AI middle powers – countries like Germany and France – need to find a strategy to deal with this trend.  

These middle powers are in strategic trouble and risk being left behind on both the economic benefits and the security applications of frontier capabilities. While they might not catch up, they still have options: they can find sectors that benefit from rapid AI progress, equip them for that progress, and leverage them as a comparative advantage in negotiations with great powers. 

Where we are 

AI middle powers – that is, most advanced economies excluding the US and China – are in a tough spot: 

  • Their participation in AI-powered economic growth is contingent on finding a profitable niche in the AI market. 

  • Their access to leading AI systems depends on successful foreign policy. 

  • They’re just as exposed to AI-powered threats to safety and the job market, with much less regulatory or technical recourse.  

Currently, most middle powers, such as Germany, France, India, Japan or South Korea, lack clear plans to avoid being left behind. Many involved in frontier AI – the cutting edge of AI development – expect AI capabilities to continue their rapid advance over the next decade, driven mostly by private companies in the US and China and supported by their national governments. This may make AI systems meaningful drivers of growth and scientific progress, as well as central offensive and defensive tools for national security. 

What middle powers need 

A more AI-centric world economy and its huge impact on the global strategic order poses two questions to a middle power: What is its economic contribution – its market niche – in an AI-dominated economy, and what is its strategic leverage in an AI-driven world order? 

An economic contribution helps create a business incentive for US- or China-built AI services to spread within a middle power’s economy. Without a strong, profitable sector that uses AI, frontier AI companies may quickly find it unprofitable to offer their full suite of products there. This is especially true if there are costs to operating in that country, such as regulatory compliance or localisation. The business incentive could simply be the country’s consumer market, if big enough, but that is highly contingent on two factors. First, it presumes a very specific model of AI diffusion in which business-to-consumer AI products drive profits – if revenue from AI doesn’t scale with consumers but instead concentrates in highly automated US firms or in national security applications, consumer market size matters less. Second, this model depends on growth in developers’ home markets being slow enough to make entering foreign consumer markets worthwhile despite regulatory barriers. If growth at home is fast enough to create competitive demand for all computational supply, the additional benefit of new markets might be much less attractive, and the regulatory leverage those markets provide correspondingly lower.  

Middle powers also need to establish their advantages beyond economic ties to ensure they aren’t entirely at the geopolitical mercy of great AI powers. Economic incentives are fine and sufficient as long as AI remains a predominantly civilian technology, where people can trade AI goods and leverage comparative advantage in a way that ensures their spread is mutually beneficial.  

But that’s only part of the story. Even with few obvious security applications so far, trends point to increasing securitisation. This might lead to a world where countries need access to near-frontier capabilities that put them far enough ahead of the AI tech curve to stave off AI-powered aggression like cyberattacks or engineered biological weapons. But these capabilities are safeguarded by the state-run projects of great AI powers, and those projects might not be too keen on sharing. Recent policy writing from the US calls into question the proliferation of advanced AI capabilities even to the Five Eyes intelligence alliance (US, UK, Canada, Australia and New Zealand). This restrictive stance should send shivers down many middle powers’ spines, as they might be even further down the list of export recipients. Even close US allies would do well to remember how much wrangling it took for France and the UK to gain access to nuclear technology. 

In that setting, a middle power might need to make a strategic contribution to convince allied great powers to provide access to leading security-relevant AI models and participation in the compute supply chain – and that contribution should be compelling enough that great powers can’t unilaterally leverage the middle powers’ dependency on their models and compute. 

Here’s what they should do  

To start, middle powers should not get ambitious about building frontier AI models themselves. For almost all middle powers, the infrastructural gap to the US and China is huge: in energy, compute, talent, and competition, the two countries’ advantage is at an all-time high. At the same time, middle powers face a shaky economic situation, related investment limitations, and a comparative lack of venture capital. If catching up were an option, it would obviously be advisable – but currently, it does not seem realistic. 

Given the recent release of leading open-source models, such as those from Chinese AI developer DeepSeek, it might seem appealing to count on simply using open-source models – even more so now that the US government has repeatedly encouraged its domestic developers to open-source models, too. But that notion is risky. Fast-following approaches like DeepSeek’s tend to be easier when the underlying technology is new, and become harder over time. Top-tier AI capabilities are less likely to spread through open source as securitisation increases, and the shift to inference scaling as a way of boosting AI capability places large infrastructural requirements even on the use of open-source models. 

That makes the infrastructure question tangential to the broader strategic angle. Depending on factors such as available capital, energy prices, and access to US exports, middle powers might push for different thresholds of sovereignty short of training their own frontier models: the ability to service only security-relevant inference demand; to service all economic inference demand; or to service specialised training and fine-tuning demands that make models applicable to security-relevant contexts. These strategies will vary – and none of them will make a middle power a sovereign player in AI in its own right. The only potential path to sovereignty through infrastructure lies in becoming an ‘AI oil state’ of sorts, a path reserved for energy-rich countries with huge sovereign wealth funds like the UAE, Saudi Arabia, or perhaps Norway.  

What options remain?  

So what should middle powers do instead? When intelligence becomes cheap, bottlenecks become more important: if cognitive labour no longer constrains the economy because AI can do it, other parts of the economy matter more and more. That means, first of all, that middle powers should not rely too heavily on knowledge-economy services that AI is likely to replace, or move to the great powers at low cost, rather than enhance – otherwise they risk losing economic and strategic relevance as AI makes their contributions obsolete.  

There are plenty of alternatives. First, some countries play a role in making AI itself possible. They control strategic elements of the compute supply chain, from raw materials to lithography – the Netherlands with ASML and Taiwan with TSMC are the obvious examples. Second, allowing AI to interact with the real world is another bottleneck. The smartest systems will only have limited effects if they can’t be used to make things. In a narrow sense, that means robotics might matter a lot – global leaders like South Korea and Japan could do well on this. But it also means industrial capacity writ large is relevant. Many avenues to the purported economic and social benefits of AI run through physically building things following AI instructions and innovations, from biotech molecules to new materials. An automated and digitised manufacturing sector, as in Germany, or a strong biotech industry, as in India, could become a much more important share of the global economy. And third, for AI systems to carry out complex tasks, they need high-quality training data from a broad range of domains. Novel and privileged sources of data, especially around industrial and strategic applications, could become a prized resource. Israel, Singapore, and modern sub-sectors of most industrial economies could do well on this. 

The middle powers that control these bottlenecks can use them – to gain concessions from great powers that control leading AI systems and to find a viable economic model for the future. But to use this leverage well, they need to bet on these bottleneck sectors today, support them, and equip them to fit tomorrow’s strategic purpose. 

A lot of people in AI will tell you that advanced AI will boost US- and China-based capacities in all these areas, stripping middle powers of their niches. The long history of comparative advantage and the starting positions suggest that’s not a given – especially while both great powers are focused on an AI race. 


Photo by Conny Schneider on Unsplash.