Particle.news

DeepSeek Releases V4 Preview With 1M-Token Context

The preview tests whether an open model with a very long memory running on Chinese chips can match top U.S. systems.

Overview

  • DeepSeek released a preview of its V4 model Friday, offering Pro and Flash versions with a one-million-token context window that can handle book-length inputs in a single session.
  • V4‑Pro uses 1.6 trillion parameters and V4‑Flash uses 284 billion, with Flash positioned as the faster, lower-cost choice for wider deployment.
  • The company says V4‑Pro leads other open models on world‑knowledge tests and trails Google’s Gemini 3.1‑Pro only slightly, with independent evaluations still to come.
  • Huawei said its “Ascend Supernode” clusters built on Ascend 950 chips support V4, showing DeepSeek’s shift toward domestic compute as U.S. export rules limit access to Nvidia and AMD hardware.
  • The launch lands as Washington alleges industrial‑scale model distillation by China and as Anthropic and OpenAI make similar claims, allegations that Chinese officials and DeepSeek have rejected.