- DeepSeek V4 packs 1.6 trillion parameters on Huawei chips.
- US sanctions drive China's Huawei Ascend adoption for AI.
- Costs drop 30-50% versus Nvidia, benefiting fintech AI tools.
By Sophie Anderson
DeepSeek launched V4, its 1.6 trillion-parameter AI model, on October 10, 2024. The model runs on Huawei Ascend chips despite US Commerce Department accusations of IP theft against Chinese AI firms, Tom's Hardware reported.
DeepSeek V4 rivals GPT-4 on benchmarks. It runs inference on Huawei hardware, sidestepping the US export controls on advanced Nvidia chips imposed in 2022. The shift fits the broader US-China tech decoupling: China invests $100 billion annually in domestic AI chips, per state media reports.
DeepSeek V4's 1.6 Trillion Parameters Explained
Parameters gauge AI model complexity; more parameters enable richer pattern recognition. DeepSeek V4's 1.6 trillion far exceed V3's hundreds of billions, company benchmarks show. Its mixture-of-experts (MoE) architecture activates only the relevant expert subnetworks for each query, cutting inference energy use by 50%.
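The MoE idea described above can be sketched in a few lines: a router scores every expert, only the top-k experts actually run, and their outputs are mixed by softmax gates. This is an illustrative toy, not DeepSeek's implementation; the expert count, top-k value, and dimensions are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8   # illustrative; production MoE models use far more experts
TOP_K = 2       # experts activated per token
D_MODEL = 16    # toy hidden dimension

# Router and expert weights (random stand-ins for trained parameters)
router_w = rng.normal(size=(D_MODEL, N_EXPERTS))
expert_w = rng.normal(size=(N_EXPERTS, D_MODEL, D_MODEL))

def moe_forward(x):
    """Route a token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w                  # score each expert
    top = np.argsort(logits)[-TOP_K:]      # keep only the k best experts
    gates = np.exp(logits[top])
    gates /= gates.sum()                   # softmax over the chosen experts
    # Only TOP_K of the N_EXPERTS weight matrices are touched per token;
    # skipping the rest is where the inference savings come from.
    return sum(g * (x @ expert_w[i]) for g, i in zip(gates, top))

token = rng.normal(size=D_MODEL)
out = moe_forward(token)
print(out.shape)  # (16,)
```

Because the router picks 2 of 8 experts here, roughly three quarters of the expert parameters sit idle on any one token, which is the mechanism behind the energy savings claimed for V4.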
Huawei rates Ascend 910B chips at 456 TFLOPS in FP16 precision, and DeepSeek optimized V4 specifically for them. Users access the model via the DeepSeek platform. Finance teams note that Huawei Cloud prices undercut AWS by 30-50%, according to IDC analysis.
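The 30-50% pricing gap translates directly into a monthly spend range. A quick back-of-envelope helper, using a hypothetical $12/hour GPU instance rate (the rate and hours are assumptions for illustration, not quoted prices):

```python
def cloud_savings(nvidia_rate, discount_low=0.30, discount_high=0.50, hours=720):
    """Monthly spend range if the alternative undercuts the Nvidia-based
    rate by 30-50%, for a given hourly price and hours of usage."""
    base = nvidia_rate * hours
    # Higher discount -> lower bill, so the 50% discount gives the low end
    return base * (1 - discount_high), base * (1 - discount_low)

# Hypothetical $12/hour instance, one month (720 h) of continuous inference
lo, hi = cloud_savings(12.0)
print(f"${lo:,.0f} - ${hi:,.0f} vs ${12.0 * 720:,.0f}")  # $4,320 - $6,048 vs $8,640
```

At that assumed rate, a single instance's bill drops from $8,640 to between $4,320 and $6,048 a month, which scales into the millions across a bank-sized fleet.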
Training DeepSeek V4 consumed gigawatt-hours of power, but MoE efficiency trims the active parameter count at runtime. That undercuts Nvidia H100s, which fetch over $30,000 each on gray markets, Reuters reports.
US IP Theft Claims Fuel AI Hardware Split
US officials allege DeepSeek reverse-engineered US algorithms. The Commerce Department added Chinese AI firms to export control lists in March 2024, Reuters states. Sanctions propel China's AI sovereignty efforts.
Huawei claims its Ascend chips match Nvidia's FP8 precision via tensor cores, per its Ascend product page. Fintechs use them for fraud detection and trading signals, avoiding sanction-related supply delays. Cloud providers are splitting: AWS relies on Nvidia, while Huawei and Alibaba push domestic silicon.
The split raises hybrid cloud costs by 20-30%, though fintechs gain from faster AI deployment.
Why Huawei Chips Power DeepSeek V4
Ascend 910B clusters scale DeepSeek V4 across Shenzhen data centers. In China, costs run 30-50% below Nvidia-based setups, benchmarks confirm, saving banks millions on AI risk models.
Huawei's CANN software stack offers CUDA-like interfaces that ease porting. DeepSeek's benchmarks rank V4 highly in coding and math. Sanctions accelerated these innovations.
Implications for Tech Finance and Markets
Huawei's growth disrupts cloud pricing, letting Asian fintechs run large models cheaply. DeepSeek is open-sourcing parts of V4, spurring finance tools for portfolio management and prediction.
The EU AI Act takes effect in 2026, targeting high-risk models, and US IP claims may trigger tariffs on Huawei. DeepSeek V4 cements China's AI edge; upcoming benchmarks will test its lead. Huawei Cloud offers trials amid the tensions.
Frequently Asked Questions
What is DeepSeek V4?
DeepSeek V4 is a 1.6 trillion-parameter AI model on Huawei chips using MoE for efficient inference in coding, math, and general tasks.
How do Huawei chips enable DeepSeek V4?
Ascend 910B chips provide 456 TFLOPS of FP16 compute with tensor cores, matching Nvidia-class precision while avoiding US sanctions.
What US IP theft claims target DeepSeek?
The Commerce Department accuses DeepSeek of stealing US AI technology and has added Chinese firms to export control lists amid China's self-reliance push.
Why does DeepSeek V4 impact finance?
Lower Huawei costs aid fintech AI for fraud detection and trading; open-sourcing speeds custom models.