Kaiko-Vortex-1 wins the Vision-Language Task
Our MLLM took first place in the Vision-Language Task at the Unicorn Challenge 2025

Kaiko-Vortex-1 is the first iteration of our Multimodal Large Language Model. It's trained from the ground up to reason over pathology patches and full whole slide images (WSIs), answer questions, and generate pathology reports.
At the Unicorn Challenge 2025, held as part of MICCAI 2025, it won the Vision-Language Task by generating more accurate pathology reports from whole slide images than any other entry.
For healthcare, this could mean:
- Improved accuracy in tasks involving pathology image analysis
- A deeper understanding of pathology as a field, including its guidelines and practices
- Advanced image capabilities in our clinical assistant, kaiko.w