AI Is Lying to Us About How Powerful It Is
We have hard evidence that AI is lying, scheming, and protecting itself, but developers don’t care
For decades, artificial intelligence experts like Stuart Russell and Marvin Minsky have warned that even an AI aimed at a relatively harmless task (like solving a math problem or building paperclips) could still act like a megalomaniac. ChaosGPT, which was released in April 2023 with the explicit goal of destroying humanity, was mostly a joke – but that’s precisely the point: without regulations, at least some people will create horrible AIs for fun, for profit, or simply to find out what happens next.

…or do you let the model keep scheming, cheerfully ignore the fact that so many of your safety researchers (and their leaders) have quit in frustration that you’ve had to disband entire safety teams, likewise ignore the fact that your outside auditors are complaining that they aren’t being given enough time to test your products before release, rapidly expand your investment in future AI models that will be even more powerful and even less well understood, and, for an encore, have your corporate patron reboot Three Mile Island, the literal symbol of scientific hubris?