Is “AI welfare” the new frontier in ethics?
Anthropic’s new hire is preparing for a future where advanced AI models may experience suffering.
The authors emphasize that no single feature would definitively prove consciousness, but they argue that examining multiple indicators could help companies make probabilistic assessments about whether their AI systems might require moral consideration. Google DeepMind recently posted a job listing for research on machine consciousness (since removed), and the authors of the new AI welfare report thank two OpenAI staff members in the acknowledgements. Still, while today's language models can produce convincing expressions of emotion, that ability to simulate human-like responses doesn't necessarily indicate genuine feelings or internal experiences.