In a document published on Monday, the Hangzhou-based start-up said it “has always prioritised AI security” and that it made the disclosure to help people use its models, at a time when Beijing is ramping up oversight of the industry.
The company said data in the pre-training stage was “mainly” collected from publicly available online information as well as authorised third-party data, and that it had no intention of collecting personal data.
DeepSeek said it applied automated filters to remove raw data containing “hate speech, pornography, violence, spam and potentially infringing contents”. It also applied algorithmic detection, combined with human review, to identify “inherent statistical biases in large-scale data sets” and mitigate their impact on model values.
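DeepSeek has not published its filtering pipeline, so as a rough illustration only, the Python sketch below shows what automated screening of raw pre-training text can look like; the category names and blocklist patterns are hypothetical stand-ins, not the company's actual rules.

```python
# Minimal sketch of automated pre-training data filtering.
# BLOCK_PATTERNS is a hypothetical blocklist, not DeepSeek's.

import re

BLOCK_PATTERNS = {
    "spam": re.compile(r"(?i)\b(buy now|click here|free money)\b"),
    "violence": re.compile(r"(?i)\b(placeholder violent phrase)\b"),
}

def passes_filters(document: str) -> bool:
    """Return True if the document matches none of the blocked patterns."""
    return not any(p.search(document) for p in BLOCK_PATTERNS.values())

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents that pass every automated filter."""
    return [doc for doc in documents if passes_filters(doc)]

if __name__ == "__main__":
    corpus = ["A clean article about the weather.", "Buy now!!! Free money!!!"]
    print(filter_corpus(corpus))  # only the first document survives
```

In practice such keyword rules are usually just a first pass, followed by trained classifiers and, as DeepSeek describes for bias detection, human review.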
The company, founded by computer scientist Liang Wenfeng, said it was committed to reducing the “hallucinations” of its models through research and techniques such as retrieval-augmented generation, but added that hallucination remained an “unavoidable” problem.
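The document does not detail DeepSeek's retrieval-augmented generation (RAG) setup. As a general sketch of the technique, the code below retrieves the passages most similar to a query and grounds the model's prompt in them; the toy bag-of-words retriever and the prompt format are assumptions for illustration, not DeepSeek's stack.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve relevant passages, then condition the model's answer on them.

from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding', a stand-in for a real encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, passages: list[str], k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    q = embed(query)
    return sorted(passages, key=lambda p: cosine(q, embed(p)), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Ground the model on retrieved text to curb unsupported answers."""
    context = "\n".join(f"- {p}" for p in retrieve(query, passages))
    return f"Answer using only the context below.\nContext:\n{context}\nQuestion: {query}"

if __name__ == "__main__":
    docs = ["DeepSeek is based in Hangzhou.", "RAG grounds answers in retrieved text."]
    print(build_prompt("Where is DeepSeek based?", docs))
```

The idea is that answering from retrieved source text, rather than from the model's internal parameters alone, reduces (but does not eliminate) fabricated claims.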
“AI is still in its early stages and the technology is still immature … at this stage, we cannot guarantee that our models will not produce hallucinations,” it said, reminding users to seek professional advice when necessary and emphasising that its models predicted answers based on user prompts rather than retrieving them.
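That distinction between predicting and retrieving is the root of hallucination: a language model samples each next token from a learned probability distribution, so nothing guarantees the output matches a stored fact. The toy bigram model below, a hypothetical illustration rather than anything DeepSeek described, makes this concrete.

```python
# Minimal sketch of why a language model "predicts" rather than "retrieves":
# each next token is drawn from learned probabilities, not looked up as a fact.

import random

# Hypothetical bigram probabilities, standing in for a trained model.
NEXT_TOKEN_PROBS = {
    "the": {"capital": 0.6, "answer": 0.4},
    "capital": {"is": 1.0},
    "is": {"Paris": 0.7, "Lyon": 0.3},  # plausible tokens, correctness not guaranteed
}

def generate(token: str, steps: int = 3, seed: int = 0) -> list[str]:
    """Sample a continuation token by token from the predicted distribution."""
    rng = random.Random(seed)
    out = [token]
    for _ in range(steps):
        probs = NEXT_TOKEN_PROBS.get(out[-1])
        if not probs:
            break
        tokens, weights = zip(*probs.items())
        out.append(rng.choices(tokens, weights=weights)[0])
    return out

if __name__ == "__main__":
    # The sampled continuation varies with the seed and may be factually wrong.
    print(" ".join(generate("the")))
```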