Imagine if the U.S. Federal Reserve based its monetary policy on cryptocurrency’s speculative hype, or the Defense Department bet its manufacturing future on the 2010s excitement about 3D printing that never panned out. As detailed in a memorandum on artificial intelligence released on Oct. 24, President Joe Biden’s administration was beginning to run a similar risk by staking the lion’s share of the United States’ AI strategy on uncertain projections about the progress of large-scale frontier models, like those that power ChatGPT.

As President-elect Donald Trump’s incoming tech czars craft a new AI agenda, they have the opportunity to be both more ambitious and more risk averse: turbocharging the progress of frontier models and accelerating alternative uses of the technology, specifically for national security, in equal measure. Such a diversified approach would better account for the inherent uncertainty in AI development. It would also put the United States on firmer footing to expand its lead over China in the most transformative technology in a generation.


AI’s investors, technologists, and policymakers are divided into two camps about the future of the technology. One camp believes that the future of AI is frontier models—general-purpose AI systems like the ones that power ChatGPT, able to solve problems across a variety of fields. Proponents of frontier models, many of them at industry giants like OpenAI and Anthropic, believe that with sufficient computing power these models could surpass human intellect, revolutionizing science and technology. Opinions vary on the ultimate potential of “superintelligent” frontier models, but frontier lab CEOs have predicted that their chatbots will soon transcend the intellect of Nobel Prize winners and deliver “unimaginable” prosperity.

Critics believe that while frontier models may prove useful for a variety of tasks, current methods for building them intrinsically lack the sophistication needed to match, let alone supersede, human intelligence. In this view, general-purpose frontier models are only one avenue among many for AI growth. This camp believes that narrow models—which solve problems in specific domains but do not aim to “think” in a general sense—will play an equal or greater role in the AI revolution. An example is the specialized AlphaFold model, which slashed the time it takes researchers to predict protein structures from months to minutes.

The disagreements about AI progress are so fundamental and held with such conviction that they have evoked comparisons to a “religious schism” among technologists. Meanwhile, media and investor bullishness has ebbed and flowed, with investors most recently growing skeptical of frontier AI’s potential—either concerned about deficiencies in the methods available to advance the models or wary that their payoff may not justify the massive investment they require. Even industry leader OpenAI is reportedly lowering its expectations for frontier models’ near-term progress.

But despite uncertainty around the future potential of generally intelligent AI, the Biden administration’s de facto approach to it ultimately became “rooted in the premise that capabilities generated by … frontier AI are poised to shape geopolitical, military, and intelligence competition,” as one senior administration official admitted. Indeed, the Biden administration’s recent AI memorandum explicitly reiterates frontier model proponents’ theory that expanding computational capabilities will create revolutionary models—likely so transformative that they will necessitate an entire redesign of the U.S. government’s informational and organizational infrastructure to cope.

The actions that the Biden administration advanced in the memorandum were, as a result, heavily focused on frontier AI development, with a raft of measures dedicated to accelerating the massive computing infrastructure ecosystem needed to develop future models—including streamlined permitting, approvals, and incentives for private actors to build gargantuan new data centers. Narrow models, by contrast, require far less energy and computing power than their frontier counterparts. Another prominent theme of the memorandum is the development of extensive testing mechanisms—some classified—for frontier models’ hypothetical future cyber, nuclear, and radiological capabilities of concern. And while some of the broader provisions in the memorandum may also be relevant to nonfrontier AI, the comparative lack of specific focus on furthering narrow AI systems is conspicuous.

For those highly optimistic about frontier AI’s potential, this is undoubtedly the most sensible strategy. Why orchestrate a panoply of narrow models for specific subfields when a handful of generally intelligent frontier models may eclipse their capabilities altogether? Given the seemingly boundless potential of such models, the race to superintelligence would arguably supersede all other pursuits of national competitiveness.

But what if, as other AI experts warn, frontier AI fails to live up to expectations? By focusing on generalized AI above all else, the Biden administration’s strategy would risk sidelining narrow applications that could more effectively address critical national security needs—and are already doing so.

So far, AI’s ongoing warfighting revolution in logistics, predictive maintenance, and drone autonomy is largely unrelated to frontier models. And it is not obvious that many of the most strategic capabilities on the horizon will have anything to do with them: AI-powered hypersonic modeling, autonomous drone swarming, and the materials science research critical to developing new weapons are all most likely to be powered by narrow-purpose AI models, to which the infrastructure provisions the memorandum advances are mostly irrelevant.

While the U.S. government and industry fixate on an uncertain, moonshot approach to ultra-sophisticated frontier models, many of their counterparts in China have naturally started focusing on more practical use cases of the technology. This is in part because of the Biden administration’s campaign to cut China off from the advanced chips it needs to build frontier AI models—another manifestation of the administration’s big bet on this particular form of AI development. As much as Beijing might also like to lead in the development of frontier models, this approach has given it every incentive to focus instead on routing the United States in narrower models and practical use cases.

The result may not be favorable to the United States. If frontier models don’t prove decisive, Chinese expertise in more specific, mundane AI applications—such as drones—could ultimately matter more on the battlefield. China already outperforms the United States in many areas of AI-powered image recognition—crucial for advanced weapons targeting—and history suggests that diffusing a technology into specific applications is often more important than pioneering fundamental innovations, such as pushing the bounds of frontier models. As important as an edge in innovation might be, the Chinese tech sector’s vaunted ability to aggressively commercialize technology could win the day.

From this perspective, it is puzzling that the Biden administration began to focus on frontier model development—where the United States already enjoys a lead over China—to the exclusion of other AI areas where Sino-American competition is tighter. If the U.S. government and leading tech companies continue to fixate on a resource-intensive race to AI superintelligence that could prove illusory, China may meanwhile assemble an arsenal of small, practical, and potentially decisive AI systems.

To be clear, this is not to say that any one of the government’s efforts to expand the United States’ lead in frontier models for national security is a bad idea—it is just that a more balanced approach is needed. As demonstrated by the crypto, 3D printing, and AI hype cycles of the past, technological progress is simply unpredictable. Frontier models may reach their projected potential soon, much later than expected, or not at all.

The same is true for narrow models in a variety of fields—which ones will yield breakthroughs, and on what timescale, is ultimately unpredictable. The most sensible approach, then, is to focus on a diverse, broad set of efforts that balances support for the industry’s big frontier moonshots with a focus on smaller, varied projects.


There are indications that Trump’s administration intends to chart a new course on AI and accelerate frontier AI development even more aggressively—a worthy goal, given the technology’s potentially transformative effects. Voices inside and outside the government—such as Trump supporter and technologist Elon Musk—are likely to keep pushing for accelerated frontier progress. But whatever Trump’s frontier AI policy ends up looking like, the administration would be remiss not to couple these efforts with equal emphasis on transformative narrow AI prospects, an approach resonant with the broader view of the technology championed in Trump’s first term. Indeed, in the years before ChatGPT launched frontier AI into the limelight, Trump’s actions on AI promoted a diverse range of the technology’s applications across scientific and industrial disciplines. A similarly diverse approach is needed again, especially for national security.

Specifically, the Defense Department should identify critical subfields in areas like biology, materials science, and aeronautics that are primed for narrow, specialized AI disruptions—and prioritize their development with the same urgency as frontier model progress. The Energy Department’s Frontiers in AI for Science, Security, and Technology (FASST) initiative, which seeks out application-specific uses for AI, offers a viable model of what this could look like.

In equal measure to accelerating frontier data center growth, decision-makers should promote the development and deployment of smaller AI-enabled systems. Such systems are already an indispensable feature of the United States’ AI national security strategy in its competition with China; they include vast numbers of cheap AI-powered drones, next-generation biotech capabilities, and democracy-friendly alternatives to China’s AI-fueled techno-authoritarian exports, such as privacy-preserving smart city tools and 6G telecommunications infrastructure.

A more even assessment of frontier and nonfrontier AI competition should also recalibrate attention to chip production. The much-discussed U.S. export restrictions on advanced chips cover hardware that is essential to frontier models, but China has meanwhile expanded its dominance in legacy chips, which are essential for narrower AI applications ranging from industrial automation to internet-connected devices. That dominance is a growing strategic vulnerability for the United States. Friendshoring defense-critical legacy chip production in countries like India and Mexico should be at least as high a priority as blocking China’s access to frontier chips.

Big bets on risky moonshots might work for Silicon Valley’s venture capitalists, who can afford to miss the mark occasionally. But amid fierce competition with China over the future of the world’s most transformative technology, the U.S. government needs a more diversified AI portfolio to ensure that regardless of how the future of AI develops, the United States will retain the lead.