Particle.news

DeepSeek Releases V4 Preview With 1M-Token Context and Huawei Support

The launch signals a push toward a lower-cost, China-centric AI stack that could narrow the gap with top U.S. models.

Overview

  • DeepSeek released the open-source V4 preview Friday in Pro and Flash versions, both able to handle one million tokens in a single session.
  • V4-Pro totals 1.6 trillion parameters with 49 billion active per task, while V4-Flash has 284 billion parameters with 13 billion active; both use a mixture-of-experts design to cut compute and are text-only for now.
  • Company tests say V4 leads open models in reasoning and coding yet trails frontier systems by roughly three to six months, with claims still awaiting broad independent verification.
  • Huawei said its Ascend 950 supernode clusters fully support V4 and contributed chips for part of V4-Flash’s training, highlighting a shift from Nvidia hardware and China’s drive for AI self-sufficiency under U.S. export limits.
  • DeepSeek set rock-bottom prices that undercut U.S. rivals and noted that Pro capacity is constrained by limited high-end compute. Meanwhile, U.S. officials escalated allegations of large-scale model distillation by Chinese entities, keeping regulatory scrutiny high.
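
The compute savings behind the mixture-of-experts figures above come from routing each input through only a few expert sub-networks, so the parameters actually used stay far below the total. A minimal sketch, assuming nothing about DeepSeek's actual routing beyond the reported totals (the gating function and expert count below are hypothetical):

```python
# Toy illustration of MoE sparse activation. Only the total/active parameter
# counts come from the article; the routing logic is a generic top-k sketch,
# not DeepSeek's implementation.

def active_fraction(total_params: float, active_params: float) -> float:
    """Share of parameters actually used per forward pass in an MoE model."""
    return active_params / total_params

# Reported figures: V4-Pro uses ~3.1% of its weights per task, V4-Flash ~4.6%.
v4_pro = active_fraction(1.6e12, 49e9)
v4_flash = active_fraction(284e9, 13e9)

def route_top_k(gate_scores: list[float], k: int = 2) -> list[int]:
    """Pick the k highest-scoring experts for a token (toy top-k gating)."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return sorted(ranked[:k])

# A token's gate scores over 8 hypothetical experts: only experts 1 and 3 run.
print(route_top_k([0.1, 0.9, 0.2, 0.7, 0.05, 0.3, 0.6, 0.15], k=2))  # [1, 3]
```

Because only the selected experts execute, a 1.6-trillion-parameter model can serve requests at roughly the per-query cost of a ~49-billion-parameter dense model.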