Highlights:
- Chinese open-source AI models have surged to nearly 30% of global usage, up from just over 1% a year earlier.
- Major U.S. tech leaders and startups are adopting Chinese models due to lower costs, fast iteration, and strong performance.
- Nvidia CEO Jensen Huang warns that China’s rapid infrastructure growth and energy capacity give it a structural advantage in AI scaling.

A new set of industry reports shows that Chinese open-source large language models (LLMs) are reshaping the global AI landscape. Once considered secondary players, Chinese models have now reached almost 30% of worldwide usage and are gaining traction among developers, startups, and major companies in the U.S.
Explosive Growth in Global Adoption
Chinese open-source models grew from just 1.2% of global usage in late 2024 to nearly 30% in 2025, according to a study covering 100 trillion tokens. While Western proprietary systems such as OpenAI’s GPT-4o and GPT-5 still dominate with about 70% market share, Chinese models now match the usage of all other non-Chinese open-source models combined.
Key drivers include Alibaba’s Qwen series, DeepSeek V3, and Moonshot AI’s Kimi K2, all of which have seen rapid update cycles and widespread adoption among developers worldwide.
Strong Endorsements From Silicon Valley
Several high-profile U.S. technology leaders are publicly shifting toward Chinese LLMs. Airbnb CEO Brian Chesky said the company prefers Alibaba’s Qwen because it is “fast and cheap.” Investor Chamath Palihapitiya also moved key workflows to Moonshot’s Kimi K2, calling it more performant than alternatives.
On platforms such as Hugging Face and OpenRouter, Chinese models now dominate trending charts and usage rankings, replacing many U.S. systems.
Why Startups Prefer Chinese AI
Cost is a major factor. Chinese models often operate at one-fifth the price of Western models. MiniMax’s M2, for instance, offers performance similar to Claude Sonnet 4.5 at just 8% of the cost.
Open-weight licensing is another benefit, allowing companies to access model weights, fine-tune systems, and build custom versions more easily than with closed U.S. models. Alibaba reports more than 170,000 derivative models built on Qwen alone.
Geopolitical Concerns and Regulatory Tensions
The surge of Chinese AI tools has sparked debate in Washington. U.S. authorities warn of potential security risks associated with foreign AI systems, placing firms such as Zhipu AI on trade blacklists. Analysts say the rise of Chinese open-source AI could challenge long-standing U.S. dominance if current trends continue.
Despite these concerns, a recent global survey found that 82% of users would adopt Chinese models if they were hosted outside China, indicating broad openness to using these systems.
China’s Infrastructure Advantage Raises Alarms
Nvidia CEO Jensen Huang recently warned that China’s ability to build AI infrastructure “at unbelievable speed” is a strategic advantage. He noted that China has double the energy capacity of the U.S., with much faster growth.
Massive nationwide data-center construction, supported by low-cost energy and government initiatives, is accelerating the country’s AI expansion.
Rapidly Shifting AI Balance
With competitive performance, lower prices, and fast iteration, Chinese models are becoming central to global AI development. Industry experts say the momentum marks a shift toward a two-superpower AI race, with China emerging as a formidable peer to the United States.