What’s new in DeepSeek’s latest model: DeepSeek-V3.2-Exp

The latest experimental model from Chinese AI firm DeepSeek promises to boost efficiency and improve the AI's ability to handle large amounts of information at a fraction of the cost, but questions remain about how effective, and how safe, the architecture is.

DeepSeek sent Silicon Valley into a frenzy when it released its first model, R1, last year, showing that it was possible to train large language models (LLMs) quickly, on less powerful chips, using fewer resources.
The company released DeepSeek-V3.2-Exp on Monday, an experimental version of its current model, DeepSeek-V3.1-Terminus, built with the aim of making AI systems more efficient, according to a post on the AI forum Hugging Face.
"DeepSeek V3.2 continues the focus on efficiency, cost reduction and open-source sharing," said Adina Yakefu, Chinese community lead at Hugging Face. "The big improvement is a new feature called DSA (DeepSeek Sparse Attention), which makes the AI better at handling long documents and conversations."
"This is significant because it should make the model faster and more cost-effective to use without a noticeable drop in performance," said Nick Patience, vice president and AI practice lead at The Futurum Group. "This makes powerful AI more accessible to developers, researchers and smaller companies, potentially leading to a wave of new and innovative applications."
The pros and cons of sparse attention
An AI model makes decisions based on its training data and new information, such as a prompt. Say an airline wants to find the best route from A to B: while there are many options, not all are feasible. By filtering out the less viable routes, you dramatically cut down the time, fuel and, ultimately, money needed for the trip. That is exactly what sparse attention does. Unlike other models, which crunch all the data available to them, a sparse attention model considers only the data it deems important, given the task at hand.
"Basically, you cut out the things that you don't think are important," Yakefu said.
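The idea can be sketched in a few lines of Python. This is a toy, hypothetical illustration of top-k sparse attention, not DeepSeek's actual DSA implementation (which is not detailed here): the query scores every token, then only the k highest-scoring tokens take part in the weighted sum, so the rest cost nothing downstream.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(query, keys, values, top_k=None):
    """Toy dot-product attention over small embedding lists.

    With top_k set, only the top_k highest-scoring keys enter the
    softmax and the weighted sum -- a crude stand-in for sparse
    attention, which skips tokens it scores as unimportant.
    """
    scores = [dot(query, k) for k in keys]
    keep = list(range(len(keys)))
    if top_k is not None and top_k < len(keys):
        # "Cut out the things you don't think are important":
        # keep only the indices of the top_k highest scores.
        keep = sorted(keep, key=lambda i: scores[i], reverse=True)[:top_k]
    weights = softmax([scores[i] for i in keep])
    out = [0.0] * len(values[0])
    for w, i in zip(weights, keep):
        for d in range(len(out)):
            out[d] += w * values[i][d]
    return out

# A 4-token toy sequence with 2-dimensional embeddings.
keys = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
values = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
query = [1.0, 0.0]  # the query resembles the first two tokens

full = attention(query, keys, values)            # all 4 tokens weighted
sparse = attention(query, keys, values, top_k=2)  # only the 2 best tokens
```

In the sparse call, the two low-scoring tokens never enter the weighted sum; at long context lengths, skipping most tokens this way is where the speed and cost savings come from.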
Sparse attention is a boon for efficiency, and with fewer resources to go around, it is needed to scale AI. But one concern is how reliable such models will be, given the lack of oversight over how and why they discard information.
"The reality is, they [sparse attention models] have lost a lot of nuance," said Ekaterina Almasque, a venture capitalist and early backer of Dataiku, Darktrace and Graphcore. "And then the real question is, did they have the right mechanism to exclude not-important data, or is there a mechanism excluding really important data, and then the outcome will be much less relevant?"

This could be especially problematic for AI safety and inclusiveness, the investor added, potentially making such models not "the most optimal or the safest" AI models to use compared with competitors or traditional architectures.
For its part, DeepSeek says the experimental model performs on par with its V3.1-Terminus. Despite speculation about an AI bubble, the technology remains at the center of geopolitical competition, with the U.S. and China vying for the winning spot.

Yakefu noted that DeepSeek's models work with Chinese-made chips such as Ascend and Cambricon, meaning they can run locally on domestic hardware without extra setup.
DeepSeek also shared the actual programming code and tools needed to use the experimental model. "This means other people can learn from it and build their own improvements," Yakefu said.
For Almasque, however, this means the technology may not be defensible. "The approach is not super new," she said, noting the industry has been "talking about sparse models since 2015" and that DeepSeek cannot patent its technology because it is open source. DeepSeek's competitive edge, then, must lie in how it decides which information to include, she said.
The company itself acknowledges that V3.2-Exp is an "intermediate step toward our next-generation architecture," per the Hugging Face post.
As Patience put it, "this is DeepSeek's value proposition all over: efficiency is becoming as important as raw power."

"DeepSeek is playing the long game to keep the community invested in their progress," Yakefu said. "People will always go for what is cheap, reliable and effective."
