OpenAI Urges U.S. Ban on Chinese AI Rival DeepSeek, Citing National Security Threats

OpenAI has formally urged the U.S. government to ban Chinese AI company DeepSeek from use in federal, military, and intelligence operations, branding it a “state-controlled” and “state-subsidized” threat. The call, issued in a striking policy proposal to the White House, warns that technologies like DeepSeek—while powerful—pose serious risks to U.S. national security and the global balance of AI innovation. At the heart of the concern? The race toward artificial general intelligence (AGI), and fears that authoritarian regimes could weaponize it.

In a move that feels ripped straight from a techno-thriller, OpenAI has sent a formal request to the U.S. Office of Science and Technology Policy urging the government to ban DeepSeek, a Chinese AI company that’s been making waves in the global AI scene.

Not mincing words, OpenAI’s proposal describes DeepSeek as both “state-controlled” and “state-subsidized”—terms that suggest more than just economic rivalry. It’s a warning. And it’s loud.

The letter, publicly available on OpenAI’s website and signed by Chris Lehane, the company’s Vice President of Global Affairs, doesn’t stop at calling out DeepSeek by name. It goes further—pushing for a broader restriction on technologies linked to the People’s Republic of China, including banning hardware like Huawei’s Ascend chips and any models deemed to “violate user privacy” or introduce potential vectors for intellectual property theft.

One line in particular lands hard:

“As America’s world-leading AI sector approaches artificial general intelligence (AGI)… the Trump Administration’s new AI Action Plan can ensure that American-led AI built on democratic principles continues to prevail over CCP-built autocratic, authoritarian AI.”

This is not just about code. It’s about ideology, geopolitics, and the kind of future we want to build.

The DeepSeek Disruption

DeepSeek made headlines recently for one simple reason: it’s powerful—and cheap. Its DeepSeek-R1 model shocked the AI community by matching the reasoning capabilities of OpenAI’s o1 model, while undercutting it on cost and offering a free-to-use browser version.

That seismic move briefly rocked the stock market, dragging down shares of AI-focused firms before the panic subsided. But the message was clear: DeepSeek isn’t playing small.

Still, the leap has raised eyebrows. How did it happen so quickly? Could DeepSeek have trained on OpenAI’s proprietary outputs in violation of usage terms? There’s no definitive proof—but some peculiar glitches raise red flags. Users have reported instances where DeepSeek’s chatbot mistakenly identified itself as ChatGPT. Coincidence? Or something more?


The AGI Arms Race

At its core, OpenAI’s appeal reflects a broader anxiety: the global sprint toward artificial general intelligence. AGI, the holy grail of AI, represents systems capable of reasoning and adapting like a human mind—only faster, smarter, and vastly more scalable.

OpenAI has always been transparent about its mission to steer AGI development in a direction aligned with democratic values. But the stakes are escalating, fast.

Citing China’s ambition to lead the world in AI by 2030, OpenAI’s letter draws a line in the sand. It warns that models like DeepSeek could, under pressure from the Chinese Communist Party (CCP), be repurposed to manipulate outcomes in critical infrastructure, skew political narratives, or worse—cause intentional harm.

And while there’s no conclusive evidence that DeepSeek is actively engaged in government-led interference, there are notable signs of censorship. The model reportedly refuses to answer questions on politically sensitive topics in China, such as the 1989 Tiananmen Square protests—a telltale sign of CCP-aligned moderation.

More Than Machines

Underneath the policy jargon and corporate posturing, one truth echoes throughout the letter: this is about more than machines. It’s about who controls the future of intelligence itself.

OpenAI CEO Sam Altman has said it plainly—AGI could usher in a new age of prosperity. But only, he argues, if it remains open, fair, and free from authoritarian manipulation.

The company also took a swipe at what it calls “overly burdensome state laws,” claiming that too much regulation at home could ironically slow down the very innovation the U.S. needs to stay ahead.

“We must ensure that people have freedom of intelligence… protected from both autocratic powers that would take people’s freedoms away, and layers of laws and bureaucracy that would prevent our realizing them.”

It’s a bold statement. And it raises a bigger question:
In the age of intelligent machines, who gets to write the rules?