Chinese startup DeepSeek has unveiled DeepSeek-V3.2-Exp, an experimental upgrade to its V3.1-Terminus model, designed to boost efficiency and halve the cost of running the model. The launch, announced Monday on Hugging Face, introduces DeepSeek Sparse Attention (DSA), a mechanism intended to improve the model's handling of long documents and conversations.
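For context on what "sparse attention" means in practice, the sketch below shows a generic top-k variant in which each query token attends only to its highest-scoring keys rather than to the full sequence. It is an illustrative toy, not DeepSeek's DSA implementation; the function name, the top_k parameter, and the toy dimensions are placeholders, and this version still computes every score before masking, whereas production kernels avoid scoring most pairs to realize the actual speedup.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(q, k, v, top_k=4):
    """Generic sketch: each query attends only to its top_k
    highest-scoring keys instead of every key in the sequence."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (n_q, n_k) full score matrix
    # Threshold at each query's k-th largest score and mask the rest
    # (ties at the threshold may let a few extra keys through).
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)
    weights = softmax(masked, axis=-1)                 # sparse attention weights
    return weights @ v

rng = np.random.default_rng(0)
n, d = 16, 8                                           # toy sequence length and head dim
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))
out = topk_sparse_attention(q, k, v, top_k=4)
print(out.shape)                                       # (16, 8)
```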
Analysts say the advance could make powerful AI tools more accessible to smaller firms and researchers. “It’s significant because it should make the model faster and more cost-effective,” said Nick Patience of The Futurum Group.
However, experts warn that sparse attention could compromise reliability by filtering out important data. Investor Ekaterina Almasque, for example, pointed to risks for AI safety and inclusivity.
Despite concerns, DeepSeek says V3.2-Exp matches its predecessor in performance and runs seamlessly on Chinese-made chips. By open-sourcing its code, the company hopes to spur innovation, though critics argue its edge may be difficult to defend. The release underscores efficiency as a new battleground in the ongoing U.S.-China AI race.