China’s open source AI ecosystem continues to gain global attention as new models challenge long-standing leaders in reasoning, coding, and multimodal benchmarks. Independent evaluations show several Chinese systems now ranking alongside top open source models worldwide, highlighting a shift in the balance of AI innovation.
Moonshot AI has joined this momentum with the release of Kimi K2.5, a natively multimodal open source model designed to handle text, images and video. The company said the model was trained on roughly 15 trillion mixed tokens, enabling complex reasoning and advanced coding workflows.
Kimi K2.5 uses a mixture-of-experts architecture with about one trillion total parameters. Only a portion of these parameters activate per request, which lowers compute demands while maintaining performance. Moonshot AI also emphasized support for agent-based workflows, including swarms of up to 100 coordinated agents working on a single task.
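The compute savings come from routing: a small gating network scores all experts but forwards each token through only the highest-scoring few. The sketch below illustrates that idea with toy sizes and a standard top-k router; it is a minimal illustration of the general technique, not Kimi K2.5's actual architecture or configuration.

```python
# Minimal sketch of mixture-of-experts (MoE) top-k routing.
# All sizes are toy values for illustration, not Kimi K2.5's real setup.
import math
import random

random.seed(0)

NUM_EXPERTS = 8    # total experts in the layer (toy value)
TOP_K = 2          # experts actually run per token (toy value)
DIM = 4            # hidden dimension (toy value)

# Router: one score vector per expert. Each expert: a small dense layer.
router = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(NUM_EXPERTS)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(token):
    # Score every expert, but execute only the top-k of them.
    scores = [sum(w * x for w, x in zip(router[e], token))
              for e in range(NUM_EXPERTS)]
    top = sorted(range(NUM_EXPERTS), key=lambda e: scores[e], reverse=True)[:TOP_K]
    gates = softmax([scores[e] for e in top])
    out = [0.0] * DIM
    for gate, e in zip(gates, top):
        for i in range(DIM):
            out[i] += gate * sum(experts[e][i][j] * token[j]
                                 for j in range(DIM))
    return out, top

output, active = moe_forward([1.0, 0.5, -0.3, 0.8])
print(f"active experts: {sorted(active)} of {NUM_EXPERTS}")
```

Because only `TOP_K` of `NUM_EXPERTS` expert weight matrices are touched per token, the per-request FLOPs scale with the active fraction rather than the full parameter count, which is how a model with roughly a trillion total parameters can serve requests at a fraction of that cost.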
Alongside the model, Moonshot AI introduced Kimi Code, an open source coding agent built for modern developer tools. The company said it integrates directly with popular editors and supports multimodal inputs beyond text.
## Key Highlights
- Multimodal reasoning across text, images and video
- Optimized for multi-agent and coding workflows
- Reduced compute through selective expert activation
## Quick Overview
| Component | Description |
| --- | --- |
| Kimi K2.5 | Open source multimodal mixture-of-experts model |
| Kimi Code | Coding agent for integrated development environments |
| Training scale | ~15 trillion mixed tokens |
| Agent support | Up to 100 coordinated agents |
Moonshot AI, founded by former Google and Meta researcher Yang Zhilin, is backed by Alibaba Group and HongShan. The company has raised significant funding, underscoring growing investor confidence in China's open source AI trajectory.