Particle.news

AI Model Race Accelerates Into Lunar New Year: DeepSeek Lifts Context To 1M Tokens As GLM‑5 Clues Emerge

Company shakeups reflect pressure from a compressed Lunar New Year release cycle.

Overview

  • An anonymous OpenRouter model labeled Pony Alpha is widely believed by developers to be a GLM‑5 test build, showing strong coding and reasoning with roughly a 200K‑token context, though no vendor has confirmed it.
  • Open‑source sleuthing in a vLLM GitHub pull request maps GLM‑5 to DeepSeek V3/V3.2 components such as sparse attention (DSA) and multi‑token prediction, with inferred specs near 745B parameters and about 202K max context.
  • Zhipu AI’s Hong Kong shares jumped about 60% over two days following the GLM‑5/Pony Alpha chatter and code‑based architecture findings.
  • DeepSeek pushed a gray (staged rollout) update that expands its context window from around 128K to about 1M tokens; users describe a noticeably colder, more abrupt chat style, and a V4 release is rumored for mid‑February.
  • ByteDance is reported to be planning a Feb. 14 rollout for Doubao 2.0 plus Seedance 2.0 and Seedream 5.0 Preview upgrades, and xAI publicly posted an all‑hands video confirming a restructuring with departures.