
ChatGPT has EQ now


OpenAI just had a release day, with a live demo and a blog post with many more video demos of their new flagship model, GPT-4o. At a high level, the new model is faster than GPT-4-turbo and is natively multi-modal (rather than the multi-modality being achieved externally by connecting separate models together).

When I was working on self-driving cars, that was also the dream architecture: one big model that takes in all of the sensors as inputs (sound, visual, lidar, radar) and makes decisions directly. This release marks a change: OpenAI claims the model has now been trained directly on multi-modal data, which is why it can analyze and react to video and sound significantly faster and more accurately. They spend a lot of time on demos that showcase this new strength: real-time translation, tutoring on math problems, and reacting to what people are wearing or what's going on in the scene.
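To make the architectural difference concrete, here is a minimal Python sketch. Every function in it is a hypothetical stand-in (none of this is OpenAI's actual API); it only illustrates why a cascaded pipeline loses information at the text bottleneck while an end-to-end model can keep it:

def transcribe(audio: bytes) -> str:
    # Hypothetical stand-in for a separate speech-to-text model.
    # Tone, pauses, and speaker identity are discarded at this step.
    return "hello there"

def llm_reply(text: str) -> str:
    # Hypothetical stand-in for a text-only language model.
    return f"echo: {text}"

def synthesize(text: str) -> bytes:
    # Hypothetical stand-in for a separate text-to-speech model.
    return text.encode()

def cascaded_assistant(audio: bytes) -> bytes:
    # Old approach: chain three specialist models. Each hop adds
    # latency, and everything non-textual is lost in the middle.
    return synthesize(llm_reply(transcribe(audio)))

def multimodal_assistant(audio: bytes) -> bytes:
    # New approach, as claimed in the release: one model trained on
    # audio, vision, and text together consumes raw audio and emits
    # audio directly, so prosody and emotion can survive end to end.
    return b"<audio generated in a single forward pass>"  # stub

if __name__ == "__main__":
    print(cascaded_assistant(b"<mic input>"))
    print(multimodal_assistant(b"<mic input>"))

The cascaded version is how voice interaction worked before: several models glued together, which is exactly the "connecting models together" design this release moves away from.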
