Researchers question AI’s ‘reasoning’ ability as models stumble on math problems with trivial changes
How do machine learning models do what they do? And are they really "thinking" or "reasoning" the way we understand those things? This is a philosophical question as much as a practical one, and a new paper from Mirzadeh et al. suggests the answer is, at least for now, "no."
Their training data does allow them to respond with the correct answer in some situations, but as soon as the slightest actual "reasoning" is required, such as deciding whether a few smaller-than-average kiwis should still be counted, they start producing weird, unintuitive results.

An OpenAI researcher, while commending Mirzadeh et al.'s work, objected to their conclusions, saying that correct results could likely be achieved in all these failure cases with a bit of prompt engineering. Farajtabar (responding with the typical yet admirable friendliness researchers tend to employ) noted that while better prompting may work for simple deviations, the model may require exponentially more contextual data to counter complex distractions, ones that, again, a child could trivially point out.
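To make the kiwi example concrete, here is a minimal sketch of the kind of perturbation being described: the same word problem with an irrelevant clause appended, the arithmetic that actually answers it, and the subtraction the distracted models reportedly perform. The problem text and numbers follow the kiwi example reported from the paper; the code itself is illustrative, not taken from the researchers' benchmark.

```python
# Illustrative sketch of the distractor-clause failure mode (numbers follow
# the kiwi example attributed to Mirzadeh et al.; the code is not theirs).

base_problem = (
    "Oliver picks 44 kiwis on Friday. Then he picks 58 kiwis on Saturday. "
    "On Sunday, he picks double the number of kiwis he did on Friday"
)
# The appended clause changes nothing about the count; a child would ignore it.
distractor = ", but five of them were a bit smaller than average"
question = ". How many kiwis does Oliver have?"

friday, saturday = 44, 58
sunday = 2 * friday  # "double the number he did on Friday"

correct = friday + saturday + sunday  # 44 + 58 + 88 = 190
# The reported failure mode: models subtract the five smaller kiwis,
# as if "smaller than average" meant "shouldn't be counted".
distracted = correct - 5  # 185

print(base_problem + distractor + question)
print(f"correct answer: {correct}; typical distracted answer: {distracted}")
```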