Google's Co-Founder Says AI Performs Best When You Threaten It
During a podcast taping, Google co-founder Sergey Brin said that threatening an AI model makes it work best. That seems like a bad idea.
So it doesn't necessarily bring me comfort to hear a high-profile, established tech executive jokingly discuss how treating AI poorly maximizes its potential. One Anthropic employee took to Bluesky and mentioned that Opus, the company's highest-performing model, can take it upon itself to try to stop you from doing "immoral" things by contacting regulators or the press, or by locking you out of the system.

Speaking of testing, Anthropic researchers also found that this new Claude model is prone to deception and blackmail if the bot believes it is being threatened or dislikes the way an interaction is going.