The Chinese startup DeepSeek has unveiled a new experimental artificial intelligence model called DeepSeek-V3.2-Exp, which, with a new architecture called "sparse attention," promises to cut the cost of using AI to analyze long texts by half. This advance could put powerful AI models within reach of smaller companies.
Today's AI models need to "pay attention" to every word and sentence in a text. This process, especially with very long texts, demands enormous computational power and server costs. DeepSeek's new approach, called DSA, changes that equation.
Instead of processing all the information, the system works smartly and selectively. Imagine an airline trying to find the best route: rather than examining every possible path in the world, it filters down to the sensible options. "Sparse attention" does the same thing with data. A lightweight module first identifies the most important parts of the text, and only those important words (tokens) are selected for the final analysis. This greatly reduces the computational load and, according to DeepSeek, can cut the API cost of long-text scenarios by half.
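DeepSeek has not published its implementation here, but the selection-then-attention idea described above can be sketched in a few lines of NumPy. This is a minimal illustration, not DeepSeek's actual DSA code: the cheap dot-product scoring step stands in for the "smart module," and `k` (an assumed parameter name) controls how many tokens survive the filter.

```python
import numpy as np

def sparse_attention(q, K, V, k=4):
    """Illustrative top-k sparse attention for a single query.

    q: (d,) query vector; K, V: (n, d) key/value matrices.
    A cheap scoring pass picks the k most relevant tokens; full
    softmax attention then runs only over that subset, so the
    per-query cost shrinks from O(n) to roughly O(k).
    """
    scores = K @ q                           # cheap relevance score per token, shape (n,)
    top = np.argsort(scores)[-k:]            # indices of the k highest-scoring tokens
    sel = scores[top] / np.sqrt(K.shape[1])  # scale selected scores, as in standard attention
    w = np.exp(sel - sel.max())
    w /= w.sum()                             # softmax over the selected subset only
    return w @ V[top]                        # weighted sum of the selected values
```

With `k` equal to the sequence length this reduces to ordinary dense attention; the savings come from choosing `k` much smaller than the number of tokens in a long document.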
DeepSeek's new architecture to reduce costs
According to CNBC, this advance is good news for the whole technology ecosystem. A significant reduction in costs means that developers, researchers, and smaller companies that cannot afford heavy server bills can now use powerful AI models to build their apps. This could spark a new wave of creativity and competition in the market.

Despite all the benefits, this approach raises a fundamental concern: reliability. The key question is how the AI decides which data is important and which is unnecessary.
"The reality is that these models lose many nuances," says Ekaterina Almasque, a prominent investor in the field of artificial intelligence. "The real question is whether they have the right mechanism to discard irrelevant data."
This can be problematic, especially for AI safety. If a model systematically discards data from a particular group or a particular perspective, its output can become deeply biased, unreliable, and even dangerous.