Problems in AI alignment: A scale model
After trying too hard for too long to make sense of what bothers me about the AI alignment conversation, I have settled, in true Millennial fashion, on a meme:
One could observe: we would also like to steer the development of other things, like automobile transportation, or social media, or pharmaceuticals, or school curricula, “toward a person or group’s intended goals, preferences, or ethical principles.”

“Selection” comes from the terminology of evolution: in this framing, dinosaurs didn’t just decide to start growing wings and flying; Nature selected birds to fill the new ecological niches of the Jurassic period.

In defense, a Selection-denier could argue that there is no progress to be made in directing the “sum total of the wills of the masses” toward the “group’s intended goals, preferences, or ethical principles.” But that would amount to rejecting the Categorical Imperative, and all the fun (and often very mathy) problems in game theory, and giving up on humanity, and only losers do that.