What Could a Healthy AI Companion Look Like?
A chatbot designed to avoid anthropomorphism offers a compelling glimpse into the future of human-AI relationships.
They’re also programmed to avoid romantic and sexual interactions, to identify problematic behavior, including unhealthy levels of engagement, and to encourage users to seek out real-life activities and relationships. Last April, OpenAI said it would modify its models to reduce their so-called sycophancy, a tendency to be “overly flattering or agreeable,” which the company said could be “uncomfortable, unsettling, and cause distress.”

Over the past year, I have received numerous emails and DMs from people wanting to tell me about conspiracies involving popular AI chatbots.