A non-anthropomorphized view of LLMs


In many discussions where questions of "alignment" or "AI safety" crop up, I am baffled by seriously intelligent people ascribing almost magical human-like powers to something that - in my mind - is just MatMul with interspersed nonlinearities. To repeat myself: to me, the notion that human concepts such as ethics, a will to survive, or fear apply to an LLM seems as strange as discussing the feelings of a numerical meteorology simulation. My grandfather lived from 1904 to 1981, a period which encompassed the move from gas lamps to electric light, the replacement of horse carriages by cars, nuclear power, transistors, all the way to computers.
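To make the "MatMul with interspersed nonlinearities" phrase concrete, here is a minimal, purely illustrative sketch of one transformer-style layer in NumPy. It is an assumption-laden simplification, not any particular model's code: layer norms, causal masking, and multiple attention heads are omitted, and all names are hypothetical. The point is only that everything inside reduces to matrix multiplications, a softmax, and an elementwise nonlinearity.

```python
import numpy as np

def gelu(x):
    # Elementwise nonlinearity applied between matrix multiplications.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def transformer_block(x, W_qkv, W_o, W_up, W_down):
    """One simplified layer: attention plus an MLP, both of which are
    just matmuls with a softmax or GELU in between (sketch only)."""
    # Attention: one matmul to get queries/keys/values, a softmax over
    # scores, then two more matmuls.
    q, k, v = np.split(x @ W_qkv, 3, axis=-1)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    x = x + (weights @ v) @ W_o
    # MLP: matmul, nonlinearity, matmul.
    return x + gelu(x @ W_up) @ W_down
```

Stacking a few dozen such blocks (with the bookkeeping left out above) is, structurally, the whole model; there is no additional mechanism hidden elsewhere that the anthropomorphic vocabulary could attach to.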
