You figure that out and the VCs will be shovelling money into your face.
I suspect the "it ain't training costs/hardware" bit is a bit exaggerated, since it ignores all the prior work that DeepSeek was built on top of.
But, if all else fails, there are always the tried-and-true approaches: regulatory capture, industry entrenchment, using your VC bucks to be the last one able to wait out the costs the incumbents do face until they fold, etc.
> I suspect the "it ain't training costs/hardware" bit is a bit exaggerated, since it ignores all the prior work that DeepSeek was built on top of.
How does it ignore it? The success of DeepSeek proves that training costs/hardware are definitely NOT a barrier to entry that protects OpenAI from competition. If anyone can train their own model off ChatGPT for a fraction of what it cost to train ChatGPT and get similar results, then how is that a barrier?
Can anyone do that, though? You need the tokens and the pipelines to feed them to the matmul mincers. Quoting only the dollar equivalent of GPU time is disingenuous at best.
That’s not to say they lie about everything; obviously the thing works amazingly well. But the cost is understated by 10x or more, which is still not bad at all, I guess? Just not mind-blowing.
Even if the real figure is 10x the headline, that's easy to counter: $50M can be invested by almost anyone. There are thousands of entities (incl. governments, even regional ones) that could easily bring such capital.