Is AI lying to me? Scientists warn of growing capacity for deception
Researchers find instances of systems double-crossing opponents, bluffing, pretending to be human and modifying behaviour in tests
The analysis, by Massachusetts Institute of Technology (MIT) researchers, identifies wide-ranging instances of AI systems double-crossing opponents, bluffing and pretending to be human.

The study's first author, Dr Peter Park, was prompted to investigate after Meta, which owns Facebook, developed a program called Cicero that performed in the top 10% of human players at the world conquest strategy game Diplomacy.

Prof Anthony Cohn, a professor of automated reasoning at the University of Leeds and the Alan Turing Institute, said the study was “timely and welcome”, adding that there was a significant challenge in how to define desirable and undesirable behaviours for AI systems.