What OpenAI’s new GPT-4o model means for developers


OpenAI’s new model was trained from the ground up to be multimodal, and it is at once faster, cheaper, and more powerful than its predecessors

“Before GPT-4o, if you wanted to build a voice personal assistant, you basically had to chain or plug together three different models: 1. audio in, such as [OpenAI’s] Whisper; 2. text intelligence, such as GPT-4 Turbo; then 3. back out with text-to-speech,” Godement told VentureBeat.

A 128,000-token context window is equivalent to roughly 300 pages of text from a book, according to OpenAI, so developers and their end users can still count on GPT-4o handling a tremendous amount of input, though it is substantially less than some rivals offer.
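The three-model chain Godement describes is easy to picture in code. Below is a minimal sketch of that pre-GPT-4o pipeline using OpenAI’s Python SDK; the model identifiers are OpenAI’s published ones, but the file names, prompt content, and voice choice are illustrative assumptions, not details from the article:

```python
# Sketch of the three-model voice-assistant pipeline described by Godement,
# as it worked before GPT-4o. Assumes the `openai` Python SDK (v1.x) and an
# OPENAI_API_KEY in the environment; "question.wav" is a hypothetical input.
from openai import OpenAI

client = OpenAI()

# 1. Audio in: transcribe the user's speech to text with Whisper.
with open("question.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2. Text intelligence: generate a reply with GPT-4 Turbo.
reply = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": transcript.text}],
)

# 3. Back out: synthesize the reply as speech with a text-to-speech model.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply.choices[0].message.content,
)
speech.write_to_file("answer.mp3")
```

Each hop in that chain adds latency and cost, which is the overhead a single natively multimodal model like GPT-4o is meant to eliminate.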

Read the full story on VentureBeat

Read more on: OpenAI, developers, new GPT-4o model

Related news:

Apple touts stopping $1.8BN in App Store fraud last year in latest pitch to developers

OpenAI’s ChatGPT announcement: Watch the GPT-4o reveal and demo here

Elon Musk calls OpenAI's ChatGPT-4o demo 'cringe'