Chinese researchers unveil LLaVA-o1 to challenge OpenAI’s o1 model
LLaVA-o1 breaks down the answer into multiple reasoning components and uses inference-time scaling to optimize each stage.
While OpenAI has not released much detail about the underlying mechanism of o1, its results point to promising directions for improving the reasoning abilities of foundation models. "Notably, it is the structured output design of LLaVA-o1 that makes this approach feasible, enabling efficient and accurate verification at each stage," the researchers write. Despite being trained on only 100,000 examples, LLaVA-o1 showed significant performance improvements over the base Llama model, with an average benchmark score increase of 6.9%.
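The stage-wise approach described above can be sketched as a simple loop: at each reasoning stage, generate several candidate outputs, score them, and carry only the best one forward into the next stage. This is a minimal illustrative sketch, not the paper's actual implementation; the stage names, the `generate_candidates` sampler, and the `score` verifier are all hypothetical stand-ins for the real model calls.

```python
# Hypothetical sketch of stage-level inference-time scaling:
# each stage samples candidates, a verifier picks the best,
# and only the winner feeds into the next stage's context.

STAGES = ["summary", "caption", "reasoning", "conclusion"]  # assumed stage names

def generate_candidates(stage, context, n=4):
    # Stand-in for sampling n candidate outputs from the model for this stage.
    return [f"{stage}-candidate-{i}" for i in range(n)]

def score(candidate, context):
    # Stand-in for a verifier that rates how well a candidate fits the context.
    # Here it is a trivial deterministic heuristic for demonstration only.
    return len(candidate)

def stage_level_search(question, n=4):
    context = [question]
    for stage in STAGES:
        candidates = generate_candidates(stage, context, n)
        best = max(candidates, key=lambda c: score(c, context))
        context.append(best)  # only the winning candidate is kept per stage
    return context[1:]  # one selected output per stage

result = stage_level_search("What is shown in the image?")
print(result)
```

The key design point is that verification happens per stage rather than once over a full answer, which is what the structured output format makes tractable.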
Or read this on Venture Beat