Google releases PaliGemma, its first Gemma vision-language multimodal open model


Google has released PaliGemma, a new Gemma model variant that gives developers access to lightweight, open vision-language capabilities.

Join us in returning to NYC on June 5th to collaborate with executive leaders in exploring comprehensive methods for auditing AI models regarding bias, performance, and ethical compliance across diverse organizations. Web and mobile apps are perhaps the more conventional use cases for PaliGemma, but it’s feasible that the model could be incorporated into wearables such as sunglasses that would compete against the Ray-Ban Meta Smart Glasses, or into devices similar to the Rabbit r1 or Humane AI Pin.

Or read this on VentureBeat

Read more on:

Google

language

Gemma

Related news:

Google unveils its Gemma 2 series, with a 27B parameter model that can run on a single TPU

Google’s AI Studio adds adjustable video frame extraction, context caching

Google’s Gemini Nano comes to the Chrome desktop client