Colorless green DNNs sleep furiously in an unexplainable fantasy
The misguided hype surrounding deep learning, and in turn artificial intelligence (AI), does not seem to be subsiding. In fact, with the advent of Large Language Models (LLMs) and the products built on them, such as GPT-4, Sora, Copilot, and Llama 3, many have been upping the ante, and we have started to hear declarations such as “AGI (artificial general intelligence) is near” or that “kids should stop learning to code because AI programming is here.” In this blurred atmosphere, foundational research that shows the “in-theory” limitations of LLMs (or, for that matter, of the overall deep neural network architecture) is completely ignored.
I will discuss here (i) some of the misguided claims being made; and (ii) some of the results regarding the “in-theory” limitations of deep neural networks (DNNs), results that are stubbornly brushed aside or outright ignored. My hope, as in several other posts and blogs I have written on this subject, is to bring some sanity to the discussion and to start thinking seriously about alternatives that could help in producing AI that is explainable, reliable, and scalable. The bottom line is this: while the data-driven, bottom-up strategy of reverse-engineering language at scale (which is what LLMs are) has resulted in impressive performance in text generation, extrapolating from this to suggest that AGI is near is a fantasy, and it, too, requires immediate attention from the computer science police department (CSPD).