Cohere claims its new Aya Vision AI model is best-in-class


Cohere for AI, Cohere's nonprofit research lab, has released an "open" multimodal AI model, Aya Vision, which the lab claims is best-in-class.

“While AI has made significant progress, there is still a big gap in how well models perform across different languages — one that becomes even more noticeable in multimodal tasks that involve both text and images,” Cohere wrote in a blog post. Together with Aya Vision, Cohere also released a new benchmark suite, AyaVisionBench, designed to probe a model’s skills in “vision-language” tasks like identifying differences between two images and converting screenshots to code. “[T]he dataset serves as a robust benchmark for evaluating vision-language models in multilingual and real-world settings,” Cohere researchers wrote in a post on Hugging Face.

Or read this on TechCrunch

