Problems in AI alignment: A scale model


After trying too hard for too long to make sense of what bothers me about the AI alignment conversation, I have settled, in true Millennial fashion, on a meme:

One could observe that we would also like to steer the development of other things, like automobile transportation, or social media, or pharmaceuticals, or school curricula, “toward a person or group’s intended goals, preferences, or ethical principles.” This framing comes from the terminology of evolution: dinosaurs didn’t just decide to start growing wings and flying; Nature selected birds to fill the new ecological niches of the Jurassic period. In defense, a Selection-denier could argue that there is no progress to be made in directing the “sum total of the wills of the masses” toward a “group’s intended goals, preferences, or ethical principles.” But that would amount to rejecting the Categorical Imperative, and all the fun (and often very mathy) problems in game theory, and giving up on humanity, and only losers do that.
