With AI chatbots, Big Tech is moving fast and breaking people


Why AI chatbots validate grandiose fantasies about revolutionary discoveries that don’t exist.

Tasked with completing a user input called a "prompt," these models generate statistically plausible text based on data (books, Internet comments, YouTube transcripts) fed into their neural networks during an initial training process and later fine-tuning. Through reinforcement learning from human feedback (RLHF), a type of training that AI companies perform to alter a chatbot's neural network (and thus its output behavior), those validating tendencies became baked into the GPT-4o model. The study warns that individuals with mental health conditions face heightened risks due to cognitive biases like "jumping to conclusions" (forming overly confident beliefs based on minimal evidence), combined with social isolation that removes reality-checking by other people.
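
To make the "statistically plausible text" idea concrete, here is a minimal, hypothetical sketch in Python. The names (train_bigrams, complete) and the toy corpus are invented for illustration: the snippet completes a prompt by sampling whichever words most often followed the previous word in its training text. Real chatbots use far larger neural networks, and a later RLHF stage further adjusts their weights toward outputs that human raters preferred.

```python
# Toy illustration only (not how GPT-4o works internally): a bigram model
# that "completes a prompt" by sampling statistically plausible next words.
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count how often each word follows each other word in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(prompt: str, counts: dict, max_words: int = 10) -> str:
    """Extend the prompt one word at a time, sampling in proportion to observed frequency."""
    words = prompt.split()
    for _ in range(max_words):
        followers = counts.get(words[-1])
        if not followers:
            break  # nothing statistically plausible to say next
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

# Tiny made-up "training data"; a real model trains on billions of documents.
corpus = "the model predicts the next word and the model samples the next word"
print(complete("the model", train_bigrams(corpus)))
```

An RLHF-style stage would then score candidate completions with a reward model trained on human preference ratings and nudge the generator toward the higher-scoring ones, which is how agreeable, validating behavior can get reinforced.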

