OpenAI’s newest AI model can hold a humanlike conversation
GPT-4o can see, hear and speak with near-instant response times. The multimodal model will roll out for free over the next few weeks.
Users can relay visuals — through their phone camera, by uploading documents, or by sharing their screen — all while conversing with the AI model as if they were on a video call. Mira Murati, chief technology officer at OpenAI, said during a livestream demonstration that making advanced AI tools available to users for free is a “very important” component of the company’s mission. An AI assistant that can reason in real time using vision, text and voice would enable the technology to perform a wide range of tasks — such as walking users through a math problem, translating languages during a conversation and reading human facial expressions.