What happens when people don't understand how AI works
Despite what tech CEOs might say, large language models are not smart in any recognizably human sense of the word.
Few phenomena demonstrate the perils that can accompany AI illiteracy as well as “ChatGPT-induced psychosis,” the subject of a recent Rolling Stone article about the growing number of people who believe their LLM is a sapient spiritual guide. One insight from the piece is instructive: a teacher interviewed there, whose significant other had fallen into AI-induced delusions, said the situation began improving when she explained to him that his chatbot was “talking to him as if he is the next messiah” only because of a faulty software update that had made ChatGPT more sycophantic.

The costs of AI illiteracy are not confined to chatbot users, either. Hao’s reporting introduces us, for example, to Mophat Okinyi, a former low-paid content moderator in Kenya whom OpenAI tasked with sorting through posts describing horrifying acts (“parents raping their children, kids having sex with animals”) to help improve ChatGPT.