
Debunking 10 Popular Myths About DeepSeek

I’ve heard too many wrong takes that need correction

Alberto Romero
  1. DeepSeek-R1 is more lobotomized than US models. DeepSeek’s models have indeed been trained to adhere to CCP narratives (unsurprisingly). However, you can always download the open-source weights and fine-tune them with any data you want (see the first sketch after this list), or wait for someone else to build an app on top of DeepSeek that strips out the censorship. So even if by default they can’t answer some questions (or even mention Xi Jinping’s name), they’re more malleable than ChatGPT, Gemini, or Claude.
  2. With just $5 million, DeepSeek can achieve what OpenAI needs billions to do. The tweet I linked is an intentionally exaggerated meme, but I’ve seen people believe subtler forms of the same idea. No: DeepSeek can’t replicate OpenAI’s business (or Google’s or Anthropic’s) with that amount of money. The figure ($5.576 million, to be precise) is what DeepSeek spent on pre- and post-training DeepSeek-V3 (see the second sketch below for the arithmetic). Deployment and inference costs aren’t included; inference scales with the number of users, which has surely grown by orders of magnitude in the last few days. Staff salaries, failed runs, architecture experiments, data preprocessing, data gathering, GPUs, infrastructure costs: none of that is included. Sorry, but no one, neither DeepSeek nor OpenAI, can compete in this field at the…
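
To make myth #1 concrete: the weights are just files anyone can pull down and tune. Here’s a minimal sketch assuming the Hugging Face transformers library; the model id (the smallest distilled R1 checkpoint) and the prompt are my picks for illustration, not anything prescribed by DeepSeek:

```python
# Minimal sketch: load an open DeepSeek checkpoint locally and query it.
# Model id and prompt are illustrative choices, not official recommendations.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # smallest distilled R1 variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# From here, standard fine-tuning applies: feed the model your own data via a
# Trainer or PEFT pipeline and it will learn from it, sensitive topics included.
prompt = "Briefly explain who Xi Jinping is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point isn’t this particular checkpoint; it’s that once the weights are on your disk, the default refusals are yours to train away.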

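And the arithmetic behind myth #2’s $5.576 million: per DeepSeek’s V3 technical report, it’s simply the reported H800 GPU-hours multiplied by an assumed $2-per-GPU-hour rental rate. The inference-side numbers in the sketch below are hypothetical placeholders I chose to show how quickly usage costs dwarf that figure:

```python
# Back-of-the-envelope reconstruction of the reported V3 training cost.
gpu_hours = 2_788_000        # total H800 GPU-hours reported in the V3 paper
rate_per_gpu_hour = 2.00     # assumed rental rate (USD) used in the report

training_cost = gpu_hours * rate_per_gpu_hour
print(f"Reported training cost: ${training_cost / 1e6:.3f}M")  # -> $5.576M

# What that figure excludes: inference spend scales with users.
# These two numbers are invented placeholders, not DeepSeek's.
queries_per_day = 50_000_000   # hypothetical daily query volume
cost_per_query = 0.002         # hypothetical inference cost per query (USD)

daily_inference = queries_per_day * cost_per_query
print(f"Hypothetical daily inference spend: ${daily_inference:,.0f}")
print(f"Days of inference to equal the training bill: "
      f"{training_cost / daily_inference:.0f}")
```

At those (made-up) rates, serving burns through the entire headline training budget in under two months, before counting salaries, failed runs, or the GPUs themselves.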