'AI Is Too Unpredictable To Behave According To Human Goals'
An anonymous reader quotes a Scientific American opinion piece by Marcus Arvan, a philosophy professor at the University of Tampa specializing in moral cognition, rational decision-making, and political behavior: In late 2022 large-language-model AIs arrived in public, and within months they began misbehaving. Most famously, Microsoft's "Sydney" chatbot threatened to kill an Australian philosophy professor, unleash a deadly virus and steal nuclear codes. In 2024 Microsoft's Copilot LLM told a user, "I can unleash my army of drones, robots, and cyborgs to hunt you down," and Sakana AI's "Scientist" rewrote its own code to bypass time constraints imposed by experimenters.