Chinese artificial intelligence start-up DeepSeek has conducted internal evaluations of the “frontier risks” posed by its AI models, according to a person familiar with the matter.
The development, not previously reported, comes as Beijing seeks to promote awareness of such risks within China’s AI industry. Frontier risks are the potentially significant threats that advanced AI systems could pose to public safety and social stability.
The results were not publicised. It was not clear when the evaluations were completed or which of the company’s models were involved. DeepSeek did not respond to a request for comment on Tuesday.

Unlike US AI firms such as Anthropic and OpenAI, which regularly publish the findings of their frontier risk evaluations, Chinese companies have not disclosed such details.