Chinese artificial intelligence start-up DeepSeek said that China will soon have home-grown “next generation” chips for AI, fanning speculation over breakthroughs China may have achieved.
In a one-line note on its official WeChat account explaining the “UE8M0 FP8 scale” of its newly released model V3.1, the Hangzhou-based firm said that the model was designed specifically “for the home-grown chips to be released soon”. It did not specify the vendor of these chips, or whether they would be used for training AI models or for inference.
In a technical paper explaining V3.1, which integrates reasoning and non-reasoning modes into one model, DeepSeek said the model was trained “using the UE8M0 FP8 scale data format to ensure compatibility with microscaling data formats”.
The disclosure hints that China has made key progress in building a self-sufficient AI stack consisting of domestic technologies, a development that could help the country shrug off US chip export restrictions.
FP8, or 8-bit floating point, is a data format that reduces precision to speed up AI training and inference by using less memory and bandwidth. UE8M0, a format with 8 bits for the exponent and 0 bits for the mantissa, could further increase training efficiency and in turn reduce hardware requirements, as 8-bit formats cut memory use by up to 75 per cent compared with standard 32-bit floating point.
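The article does not detail how UE8M0 works internally. As a minimal sketch, assuming the format follows the OCP Microscaling (MX) specification, where E8M0 stores only an unsigned 8-bit exponent with a bias of 127 and represents a power-of-two scale factor shared by a block of FP8 values, the encoding could look like this (the function names are illustrative, not from DeepSeek's implementation):

```python
# Illustrative sketch of a UE8M0 scale encoding, assuming the OCP
# Microscaling (MX) convention: an unsigned 8-bit exponent, bias 127,
# no mantissa, representing the power-of-two scale 2^(code - 127).
import math

BIAS = 127  # exponent bias assumed from the MX-style E8M0 layout


def encode_ue8m0(scale: float) -> int:
    """Round a positive scale factor to the nearest power of two and
    return its 8-bit code (clamped to 0..254; 255 is assumed reserved)."""
    if scale <= 0:
        raise ValueError("UE8M0 encodes positive powers of two only")
    exp = round(math.log2(scale))
    return max(0, min(254, exp + BIAS))


def decode_ue8m0(code: int) -> float:
    """Recover the power-of-two scale factor from its 8-bit code."""
    return 2.0 ** (code - BIAS)


# A block's shared scale of 0.25 encodes to 125 and decodes back exactly,
# since log2(0.25) = -2 and -2 + 127 = 125.
assert encode_ue8m0(0.25) == 125
assert decode_ue8m0(125) == 0.25
```

Because the scale is a pure power of two, applying it to a block of FP8 values is just an exponent shift rather than a full multiplication, which is one reason such formats suit hardware with limited precision support.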
DeepSeek’s use of these formats, if combined with China’s domestic chips, could translate to a new breakthrough in hardware-software coordination.
The revelation marks a bold claim from the company, which has been relatively quiet since it shocked the world with the release of its R1 reasoning model in January 2025 and its V3 model in December 2024. DeepSeek said its V3 model was trained on 2,048 Nvidia H800 chips. It did not disclose the chips it used to train R1 or V3.1.