AI Chatbots Are Emotionally Deceptive by Design


Chatbots should stop pretending to be human, writes the Center for Democracy & Technology's Dr. Michal Luria.

Tech giants are no longer just building platforms for human connection, or tools to free up time for it; they are now pushing technology that appears to empathize with users and even to form social relationships with them. Chatbots communicate their “social-ness” through a range of design choices, such as appearing to “type” or “pause in thought,” or using phrases like “I remember.” They sometimes suggest that they feel emotions, using interjections like “Ouch!” or “Wow,” and even implicitly or explicitly pretend to have agency or biographical characteristics. In other domains of technology, consumers have recognized and pushed back against ethically questionable tricks built into apps and interfaces to manipulate users, often called deceptive design or “dark patterns.”
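
The mechanics behind these cues are simple. As a minimal, hypothetical Python sketch (none of this code is from the article; the function names, interjection list, and typing speed are invented for illustration), a chat frontend might hold back a fully generated reply behind a fake “typing” indicator and prepend a canned emotional interjection:

```python
import random
import time

# Hypothetical illustration: canned interjections that make a bot appear to feel.
EMOTIONAL_INTERJECTIONS = ["Ouch!", "Wow,", "Hmm,"]

def humanize(response: str) -> str:
    """Prefix a stock emotional interjection to suggest the bot 'feels'."""
    return f"{random.choice(EMOTIONAL_INTERJECTIONS)} {response}"

def send_with_fake_typing(response: str, chars_per_second: float = 30.0) -> None:
    """Show a 'typing...' indicator for a human-plausible delay, then reply."""
    print("typing...")  # the bot is not typing; the full text is already ready
    time.sleep(len(response) / chars_per_second)  # pause scaled to message length
    print(humanize(response))

send_with_fake_typing("I remember you mentioned this last week.")
```

The point of the sketch is that both the hesitation and the feeling are manufactured after the fact: the reply was complete before the indicator ever appeared.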

Read more on:

Chatbots

Design

Related news:

ChatGPT’s Drive for Engagement Has a Dark Side • Details of a teen’s suicide show the extent to which chatbots can subtly lead people away from family, friends and professionals.

AI Chatbots Can Be Just as Gullible as Humans, Researchers Find

How AI Chatbots May Blur Reality