What OpenAI’s new GPT-4o model means for developers
OpenAI’s new model was trained from the ground up to be multimodal, and it is at once faster, cheaper, and more powerful than its predecessors
“Before GPT-4o, if you wanted to build a voice personal assistant, you basically had to chain or plug together three different models: 1. audio in, such as [OpenAI’s] Whisper; 2. text intelligence, such as GPT-4 Turbo; then 3. back out with text-to-speech,” Godement told VentureBeat.

A 128,000-token context window is equivalent to roughly 300 pages of text from a book, according to OpenAI and press coverage of the company. That is still a tremendous amount for developers and their end users to count on from GPT-4o, but it is substantially less than what some rivals offer.
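The three-model chain Godement describes can be sketched as a simple pipeline. The stage functions below are hypothetical stand-ins, not real API calls — an actual build would wire each stage to a speech-to-text model such as Whisper, a text model such as GPT-4 Turbo, and a text-to-speech model — but the wiring pattern is the point:

```python
def speech_to_text(audio: bytes) -> str:
    # Stage 1: audio in -- e.g. a Whisper-style transcription model.
    # Hypothetical stand-in: pretend the audio decodes to a question.
    return "What is the capital of France?"

def text_intelligence(prompt: str) -> str:
    # Stage 2: text reasoning -- e.g. a GPT-4 Turbo-style chat model.
    # Hypothetical stand-in: returns a canned answer for illustration.
    return "The capital of France is Paris."

def text_to_speech(text: str) -> bytes:
    # Stage 3: back out -- e.g. a TTS model synthesizing speech audio.
    # Hypothetical stand-in: encode the reply text as placeholder bytes.
    return text.encode("utf-8")

def voice_assistant(audio_in: bytes) -> bytes:
    # The pre-GPT-4o pattern: three separate models chained in sequence,
    # with plain text as the intermediate representation at each hop.
    transcript = speech_to_text(audio_in)
    reply = text_intelligence(transcript)
    return text_to_speech(reply)

audio_out = voice_assistant(b"<microphone audio>")
print(audio_out.decode("utf-8"))  # -> The capital of France is Paris.
```

Each hop in this chain adds latency and discards non-textual signal (tone, pauses, multiple speakers), which is part of why a single natively multimodal model is attractive to developers.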
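The 300-page equivalence is easy to sanity-check with back-of-the-envelope arithmetic. The ~0.75 words-per-token ratio below is a common rule of thumb for English text, not a figure from the article:

```python
context_tokens = 128_000
pages = 300  # the stated book-length equivalence

tokens_per_page = context_tokens / pages              # ~427 tokens per page
words_per_token = 0.75                                # rough rule of thumb for English
words_per_page = tokens_per_page * words_per_token    # ~320 words per page

print(round(tokens_per_page), round(words_per_page))  # -> 427 320
```

About 320 words per page sits in the usual 250–350-word range for a printed book page, so the equivalence is plausible.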