Chinese start-up Moonshot AI has released a new open-source artificial intelligence (AI) model, called Kimi K2, that is touted to excel in frontier knowledge, maths, coding and general agentic tasks, as the company looks to maintain an edge against rivals such as DeepSeek.
Beijing-based Moonshot said Kimi K2 was developed with a mixture-of-experts (MoE) architecture and boasts 1 trillion total parameters, of which 32 billion are "activated" – the subset of parameters actually engaged for any given input – according to the firm's blog post on Friday.
MoE is a machine-learning approach that divides an AI model into separate sub-networks, or experts – each focused on a subset of the input data – to jointly perform a task. This is said to greatly reduce computation costs during pre-training and achieve faster performance during inference time.
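The routing idea described above can be sketched in a few lines of Python. This is a minimal toy illustration of top-k expert routing in general, not Kimi K2's actual implementation; all sizes, names and the gating scheme here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions - real MoE models like Kimi K2 are vastly larger.
D, H, N_EXPERTS, TOP_K = 8, 16, 4, 2

# Each "expert" is a small two-layer MLP; only TOP_K of them run per token,
# which is why activated parameters are far fewer than total parameters.
experts = [
    (rng.standard_normal((D, H)) * 0.1, rng.standard_normal((H, D)) * 0.1)
    for _ in range(N_EXPERTS)
]
router = rng.standard_normal((D, N_EXPERTS)) * 0.1  # gating network weights

def moe_forward(x):
    """Route a batch of token vectors (B, D) through their top-k experts."""
    logits = x @ router                            # (B, N_EXPERTS) gating scores
    top = np.argsort(logits, axis=1)[:, -TOP_K:]   # indices of chosen experts
    sel = np.take_along_axis(logits, top, axis=1)
    # Softmax over only the selected experts' scores to get mixing weights.
    gate = np.exp(sel - sel.max(axis=1, keepdims=True))
    gate /= gate.sum(axis=1, keepdims=True)
    out = np.zeros_like(x)
    for b in range(x.shape[0]):
        for k in range(TOP_K):
            w1, w2 = experts[top[b, k]]
            out[b] += gate[b, k] * (np.maximum(x[b] @ w1, 0) @ w2)
    return out

tokens = rng.standard_normal((3, D))
y = moe_forward(tokens)
print(y.shape)  # each token's output mixes only its TOP_K chosen experts
```

Because each token touches only TOP_K of the N_EXPERTS sub-networks, compute per token scales with the activated parameters rather than the total parameter count – the property the company cites for cheaper pre-training and faster inference.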
Moonshot said it open-sourced two versions of Kimi K2. The foundation model, Kimi-K2-Base, was optimised for researchers and builders who want full control for fine-tuning and custom solutions. By contrast, Kimi-K2-Instruct was post-trained for drop-in, general-purpose chat and agentic AI experiences.
Kimi K2 is now freely available via Moonshot's web and mobile applications.
Moonshot’s latest AI model reflects a broader trend in the industry towards open-source development, which has enabled developers – from start-ups like DeepSeek to larger tech firms such as Baidu and Alibaba Cloud – to improve efficiency and attain broader adoption of their AI products.
The open-source approach gives public access to a program's source code, allowing third-party software developers to modify or share its design, fix bugs or scale up its capabilities.