It should also be pretty obvious that this follows the usual Chinese MO of using massive state subsidies to undercut international competition with impossibly low dumping prices. We are seeing this in all sorts of sectors.
Until this has been independently verified, I have my doubts. This wouldn't be the first time China has vastly exaggerated its technological capabilities.
DeepSeek seems to have done a clever thing w.r.t. training data, by having the model train on data that was emitted by other LLMs (as far as I've heard). That acts as a sort of "quality pass", filtering out a lot of the definitely bogus data. That probably leads to a smaller model, and thus fewer training hours.
Google engineers put out a paper on this technique recently as well.
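For illustration, here's a toy sketch of that idea: generate candidate training data from a teacher LLM, then keep only what survives a quality filter. Everything here is hypothetical (the teacher is a stub and the filter is a crude heuristic), not DeepSeek's actual pipeline:

```python
# Toy sketch of "train on LLM-emitted data" with a quality pass.
# `teacher_generate` is a stand-in for calls to an existing LLM;
# the filter is a deliberately simple heuristic, for illustration only.

def teacher_generate(prompt: str) -> str:
    """Stand-in for a call to a teacher LLM (would be an inference API in practice)."""
    canned = {
        "What is 2 + 2?": "2 + 2 = 4.",
        "Name a prime number.": "7 is a prime number.",
        "Explain gravity briefly.": "",  # teacher produced an empty answer
    }
    return canned.get(prompt, "I don't know.")

def passes_quality_filter(answer: str) -> bool:
    """Toy 'quality pass': drop empty, too-short, or non-committal outputs."""
    if len(answer) < 10:
        return False
    if "I don't know" in answer:
        return False
    return True

def build_synthetic_dataset(prompts: list[str]) -> list[tuple[str, str]]:
    """Keep only (prompt, answer) pairs that survive the filter."""
    dataset = []
    for prompt in prompts:
        answer = teacher_generate(prompt)
        if passes_quality_filter(answer):
            dataset.append((prompt, answer))
    return dataset

if __name__ == "__main__":
    prompts = ["What is 2 + 2?", "Name a prime number.", "Explain gravity briefly."]
    for prompt, answer in build_synthetic_dataset(prompts):
        print(f"{prompt} -> {answer}")
```

The point is just that filtering the teacher's output lets you train on a smaller, cleaner dataset than raw web scrapes, which is where the claimed savings in training hours would come from.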
In this case, DeepSeek is announcing the training time for their LLM, which Wall Street is extrapolating costs from. No state aid involved.
The article mentions the cost of tokens to end users, not training time.
Ah, OK, I didn't catch that. Other articles were discussing V3's training using only 2.8M GPU hours.
https://www.ft.com/content/c82933fe-be28-463b-8336-d71a2ff5bbbf
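For context, the extrapolation being discussed is simple arithmetic: reported GPU hours times an assumed rental rate. The 2.8M figure comes from the comment above; the $2/GPU-hour rate is just an illustrative assumption, not a confirmed number:

```python
# Back-of-the-envelope cost extrapolation from reported training compute.
gpu_hours = 2.8e6            # reported GPU hours for V3 (from the comment above)
rate_usd_per_gpu_hour = 2.0  # assumed cloud rental rate, illustrative only

estimated_cost = gpu_hours * rate_usd_per_gpu_hour
print(f"Estimated training cost: ${estimated_cost:,.0f}")  # ~$5,600,000
```

Note this only covers the final training run at rental prices; it says nothing about hardware purchases, failed runs, or research costs.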