“This is the first time a real quantum computer has been used to fine-tune a large language model in a practical setting. It shows that current quantum hardware can begin to support real-world AI training tasks,” said Chen Zhaoyun, a researcher at the Institute of Artificial Intelligence under the Hefei Comprehensive National Science Centre.
The improved AI model also reportedly delivered better results on specific tasks. When trained on mental health conversation data, it made 15 per cent fewer mistakes, and in a maths problem-solving test its accuracy rose from 68 per cent to 82 per cent, according to Science and Technology Daily.
Fine-tuning is a key step in customising general AI models such as DeepSeek or Qwen for specialised applications, such as analysing medical data. Traditionally, this process relies on powerful servers and faces multiple challenges, including limited scalability and high energy consumption.
Quantum computing, by contrast, brings unique advantages. By leveraging quantum principles such as superposition – a particle holding multiple possible states at once – and entanglement, which links particles so that the state of one is correlated with the state of another, quantum computers can represent vast combinations of parameters simultaneously, potentially making AI training faster and more efficient.
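To make the superposition and entanglement ideas concrete, here is a minimal classical simulation using state vectors. This is purely illustrative: the variable names and the NumPy-based simulation are this sketch's own, and it does not reflect the researchers' actual quantum fine-tuning pipeline, only the textbook principles the article describes.

```python
import numpy as np

# A single qubit in equal superposition: equal amplitudes for |0> and |1>.
plus = np.array([1.0, 1.0]) / np.sqrt(2)

# n qubits in superposition span 2**n basis states at once, which is
# the sense in which a quantum register "holds" many parameter
# combinations simultaneously.
n = 3
state = plus
for _ in range(n - 1):
    state = np.kron(state, plus)
print(len(state))  # 2**3 = 8 amplitudes

# A Bell pair illustrates entanglement: the joint state cannot be
# factored into two independent single-qubit states.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
probs = np.abs(bell) ** 2
print(probs)  # measurement yields 00 or 11, each with probability 0.5
```

The exponential growth of the state vector (2**n amplitudes for n qubits) is what limits classical simulation and motivates running such workloads on real quantum hardware.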