The latest model update from Chinese artificial intelligence (AI) start-up DeepSeek has matched the coding performance of industry heavyweights Google and Anthropic, according to the latest results from WebDev Arena, a real-time AI coding competition.
The updated version of DeepSeek-R1 tied for first place with Google’s Gemini-2.5 and Anthropic’s Claude Opus 4 on the WebDev Arena leaderboard, which evaluates large language models (LLMs) on their ability to solve coding tasks quickly and accurately. The Hangzhou-based company’s R1 scored 1,408.84, in line with Opus 4’s 1,405.51 and Gemini-2.5’s 1,433.16.
Scores on the leaderboard are determined by human evaluators, who compare the quality of the models’ output. DeepSeek’s reasoning model has consistently performed close to leading models in various benchmark tests since it was unveiled in January, despite significantly lower training costs.
DeepSeek quietly updated R1 in late May, marking its first revision since its high-profile debut. The start-up released R1-0528 on the open-source AI developer community Hugging Face, calling it a “minor upgrade” and offering no details on the changes. It later said the updated model had improved in reasoning and creative writing capabilities, with a 50 per cent reduction in hallucinations – instances where AI generates misleading information with little factual basis.
The R1 update attracted attention from the developer community amid widespread anticipation for DeepSeek’s next-generation reasoning model, R2. The company has said little about when it might release its big follow-up.
DeepSeek’s decision to release its high-performance models as open source has set it apart from many leading players. Allowing developers to use and modify the code as they see fit drove rapid early adoption, and in China it pushed tech giants such as Baidu to embrace the open-source approach.