The Chinese startup DeepSeek has unveiled a new experimental artificial intelligence model called DeepSeek-V3.2-Exp, whose new architecture, "sparse attention," promises to cut the cost of using AI to analyze long texts by half. This advance could put powerful AI models within reach of smaller companies.
Today's artificial intelligence models need to "pay attention" to every word and sentence in a text. This process, especially with very long texts, demands enormous computational power and drives up server costs. DeepSeek's new approach, called DSA (DeepSeek Sparse Attention), changes this equation.
Instead of processing all the information, the system operates smartly and selectively. Imagine an airline looking for the best route: instead of examining every possible path in the world, it filters down to the logical options. Sparse attention does exactly the same thing with data. The system first identifies the most important parts of the text with a lightweight module, then selects only those important words (or tokens) for the final analysis. This greatly reduces the computational load, and according to DeepSeek, it can cut the cost of API calls in long-text scenarios by half.
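The two-stage idea described above, a cheap scoring pass that picks out the most relevant tokens, followed by full attention over only that subset, can be sketched in a few lines. This is a simplified illustration of the general technique, not DeepSeek's actual DSA implementation; the function name, shapes, and the dot-product scoring heuristic are assumptions for the example.

```python
import numpy as np

def sparse_attention(q, K, V, k=4):
    """Toy single-query sparse attention.

    Stage 1: a cheap scoring pass ranks all tokens by relevance to the query.
    Stage 2: softmax attention runs only over the top-k selected tokens,
    so the expensive step touches k tokens instead of the full sequence.
    Illustrative sketch only -- not DeepSeek's production kernel.
    """
    # Stage 1: cheap relevance scores for every token (one dot product each).
    scores = K @ q                          # shape: (seq_len,)
    top = np.argsort(scores)[-k:]           # indices of the k highest-scoring tokens

    # Stage 2: scaled softmax attention restricted to the selected tokens.
    sel = scores[top] / np.sqrt(q.shape[0])
    w = np.exp(sel - sel.max())
    w /= w.sum()
    return w @ V[top]                       # weighted sum of the selected values

rng = np.random.default_rng(0)
q = rng.standard_normal(8)                  # one query vector, dim 8
K = rng.standard_normal((32, 8))            # 32 key vectors
V = rng.standard_normal((32, 8))            # 32 value vectors
out = sparse_attention(q, K, V, k=4)        # attends to only 4 of 32 tokens
```

The cost saving comes from stage 2: with sequence length `n` and `k` selected tokens, the expensive attention step scales with `k` rather than `n`, which is why the benefit grows with text length.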
DeepSeek's new cost-cutting architecture
According to CNBC, this advance is great news for the entire technology ecosystem. A significant cost reduction means that developers, researchers, and smaller companies that cannot afford heavy server bills can now use powerful AI models to build their apps. This could spark a new wave of creativity and market competition.
Despite all the benefits, this approach raises a fundamental concern: reliability. The key question is how the AI decides which data is important and which is dispensable.
"The reality is that these models lose a lot of nuance," says Ekaterina Almasque, a prominent investor in the field of artificial intelligence. "The real question is whether they have the right mechanism to discard unimportant data."
This can be especially problematic for AI safety. If a model systematically discards data from a particular group or a particular perspective, its output can become deeply biased, unreliable, and even dangerous.