Alibaba released Qwen3.5-397B-A17B, an open-weight vision-language model built on a sparse mixture-of-experts (MoE) architecture that activates only 17B of its 397B parameters per query. It benchmarks near or above OpenAI’s GPT-5.2 and Google’s Gemini 3 Pro on agentic search, document recognition, and instruction following, while being 60% cheaper and 8x better at handling large workloads than its predecessor. The trend is clear: Chinese labs are closing the gap fast, and the race is shifting from raw model size toward efficiency and cost. Via The Rundown AI.
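
For context on the 397B/17B split: in a sparse MoE layer, a small router picks a few "expert" sub-networks per token, so only a fraction of the total parameters do any work on a given query. The sketch below is a minimal, hypothetical illustration of that routing idea in PyTorch; the dimensions, expert counts, and class names are made up for clarity and do not reflect Qwen's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Toy mixture-of-experts layer: a router selects top-k experts per token,
    so most of the layer's parameters stay idle on any single query."""
    def __init__(self, d_model=64, n_experts=16, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # scores each expert per token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):  # x: (tokens, d_model)
        gate_logits = self.router(x)
        weights, idx = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # renormalize over the chosen experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):        # send each token to its selected experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

x = torch.randn(8, 64)
layer = SparseMoE()
print(layer(x).shape)  # (8, 64): 16 experts exist, but only 2 ran per token
```

The same principle, scaled up, is how a 397B-parameter model can serve a query at roughly the cost of a 17B dense one, which is a large part of the efficiency story the blurb points to.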