Cohere claims its new Aya Vision AI model is best-in-class
Cohere for AI, Cohere's nonprofit research lab, has released an 'open' multimodal AI model, Aya Vision, which the lab claims is best-in-class.
“While AI has made significant progress, there is still a big gap in how well models perform across different languages — one that becomes even more noticeable in multimodal tasks that involve both text and images,” Cohere wrote in a blog post. Together with Aya Vision, Cohere also released a new benchmark suite, AyaVisionBench, designed to probe a model’s skills in “vision-language” tasks like identifying differences between two images and converting screenshots to code. “[T]he dataset serves as a robust benchmark for evaluating vision-language models in multilingual and real-world settings,” Cohere researchers wrote in a post on Hugging Face.
Read the full article on TechCrunch.