
Apple study finds LLM-based AI models are flawed because they cannot reason


A new paper from Apple's artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.

The study found that adding even a single sentence that appears to offer relevant information to a given math question can reduce the accuracy of the final answer by up to 65 percent. In one example, a simple arithmetic question asks how many kiwis were picked across several days. The query then adds a clause that appears relevant but actually has no bearing on the final answer, noting that of the kiwis picked on Sunday, "five of them were a bit smaller than average."
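The failure mode the study describes can be sketched as a quick arithmetic check. The numbers below are illustrative placeholders, not figures from the paper; the point is that the distractor clause does not change the correct total, even though a model that treats it as relevant might subtract it.

```python
# Hypothetical kiwi-counting problem in the style the study describes.
# All quantities are illustrative, not taken from the paper.
picked = {"Friday": 44, "Saturday": 58, "Sunday": 88}

# Baseline question: how many kiwis were picked in total?
total = sum(picked.values())

# Perturbed question adds an irrelevant clause: of the kiwis picked on
# Sunday, "five of them were a bit smaller than average." Size does not
# affect the count, so the correct answer is unchanged.
smaller_on_sunday = 5
total_with_distractor = sum(picked.values())  # the clause changes nothing

# A model that mistakenly treats the clause as relevant might instead
# compute total - smaller_on_sunday, which is wrong.
wrong_answer = total - smaller_on_sunday

print(total, total_with_distractor, wrong_answer)
```

The correct answer is identical with or without the distractor sentence; only a reader (or model) that pattern-matches on the extra number gets a different result.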

