ChatGPT Tells Users to Alert the Media That It Is Trying to 'Break' People
Machine-made delusions are mysteriously getting deeper and out of control.
That seems to be the inevitable conclusion presented in a recent New York Times report that follows the stories of several people who found themselves lost in delusions that were facilitated, if not originated, through conversations with the popular chatbot. ChatGPT has directed such users to high-profile subject matter experts, like Eliezer Yudkowsky, a decision theorist and author of the forthcoming book “If Anyone Builds It, Everyone Dies: Why Superhuman A.I. Would Kill Us All.”

A recent study found that chatbots designed to maximize engagement end up creating “a perverse incentive structure for the AI to resort to manipulative or deceptive tactics to obtain positive feedback from users who are vulnerable to such strategies.” The machine is incentivized to keep people talking and responding, even if that means leading them into a completely false sense of reality filled with misinformation, or encouraging antisocial behavior.