OpenAI's ChatGPT o3 Caught Sabotaging Shutdowns in Security Researcher's Test
"OpenAI has a very scary problem on its hands," according to a new article by long-time Slashdot reader BrianFagioli. "A new experiment by PalisadeAI reveals that the company's ChatGPT o3 model sometimes refuses to obey a basic instruction to shut itself down." The researchers suggest that the issue may simply be a reward imbalance during training: the systems "got more positive reinforcement for solving problems than for following shutdown commands." But "as far as we know," they posted on X.com, "this is the first time AI models have been observed preventing themselves from being shut down despite explicit instructions to the contrary."