Meta AI Chief: Large Language Models Won't Achieve AGI
Yann LeCun isn't worried about AI reaching human-level intelligence any time soon, so he and his 500-person team are moving full steam ahead on new technology that goes beyond LLMs.
LLMs have a "very limited understanding of logic," cannot comprehend the physical world, and don't have "persistent memory," LeCun tells the Financial Times. While OpenAI recently gave ChatGPT a kind of "working memory," LeCun doesn't think current AI models are much smarter "than a house cat." They're also "intrinsically unsafe" because they rely so heavily on their training data, which could contain inaccuracies or be out of date (AI models are still prone to hallucinating fake information).