DeepSeek may have used Google’s Gemini to train its latest model
Chinese AI lab DeepSeek has released an updated version of its R1 reasoning model that performs well on a number of math and coding benchmarks. Some AI researchers speculate that at least a portion of its training data came from Google's Gemini family of AI models.
Last week, Chinese lab DeepSeek released an updated version of its R1 reasoning AI model that performs well on a number of math and coding benchmarks. Sam Paech, a Melbourne-based developer who creates “emotional intelligence” evaluations for AI, published what he claims is evidence that DeepSeek’s latest model was trained on outputs from Gemini. Earlier this year, OpenAI told the Financial Times it had found evidence linking DeepSeek to the use of distillation, a technique for training AI models on the outputs of larger, more capable ones.