Qwen2.5-VL-32B: Smarter and Lighter


At the end of January this year, we launched the Qwen2.5-VL series of models, which received widespread attention and positive feedback from the community. Building on the Qwen2.5-VL series, we continued to optimize the model with reinforcement learning and have open-sourced a new VL model at the popular 32B parameter scale under the Apache 2.0 license: Qwen2.5-VL-32B-Instruct.

In extensive benchmarking against state-of-the-art (SoTA) models of comparable scale, Qwen2.5-VL-32B-Instruct demonstrates superiority over strong baselines such as Mistral-Small-3.1-24B and Gemma-3-27B-IT, even surpassing the larger Qwen2-VL-72B-Instruct. Notably, it achieves significant advantages on multimodal benchmarks that focus on complex, multi-step reasoning, such as MMMU, MMMU-Pro, and MathVista.

