Researchers say they’ve discovered a new method of ‘scaling up’ AI, but there’s reason to be skeptical
Have researchers discovered a new AI 'scaling law'? That's what some buzz on social media suggests — but experts are skeptical.
Google and UC Berkeley researchers recently proposed in a paper what some commentators online have described as a fourth law: “inference-time search.”

“[B]y just randomly sampling 200 responses and self-verifying, Gemini 1.5 — an ancient early 2024 model — beats o1-preview and approaches o1,” Eric Zhao, a Google doctorate fellow and one of the paper’s co-authors, wrote in a series of posts on X.

Matthew Guzdial, an AI researcher and assistant professor at the University of Alberta, told TechCrunch that the approach works best when there’s a good “evaluation function” — in other words, when the best answer to a question can be easily ascertained.
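The idea can be sketched in a few lines: sample many candidate responses, then use a verification step to pick one that checks out. The sketch below is an illustration only, not the paper's actual procedure — `sample_response` and `verify` are hypothetical stand-ins, with a toy task (finding an integer root of x² − 5x + 6) chosen because, per Guzdial's point, its evaluation function is trivial: plug the candidate back in.

```python
import random

def sample_response(rng):
    # Hypothetical stand-in for one sampled model response:
    # a guessed integer root of x^2 - 5x + 6 = 0.
    return rng.randint(-10, 10)

def verify(candidate):
    # Easy "evaluation function": substitute the candidate back
    # into the equation and check it holds.
    return candidate ** 2 - 5 * candidate + 6 == 0

def inference_time_search(n_samples=200, seed=0):
    # Sample up to n_samples candidates; return the first that
    # passes self-verification, or None if none do.
    rng = random.Random(seed)
    for _ in range(n_samples):
        candidate = sample_response(rng)
        if verify(candidate):
            return candidate
    return None

print(inference_time_search())
```

With 200 samples the search almost always lands on one of the equation's roots (2 or 3) — but only because verification here is cheap and unambiguous, which is exactly the condition Guzdial says most general queries lack.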