Researchers from OpenAI, Anthropic, Meta, and Google issue joint AI safety warning - here's why
Monitoring AI's train of thought is critical for improving AI safety and catching deception. But we're at risk of losing this ability.
As developers advance the architectures models run on, AI systems could evolve to the point that they become nonverbal -- operating, in effect, on a plane above language. This makes chain-of-thought (CoT) reasoning a double-edged superpower: it provides a window into how models work, which could expose bad intentions, but it also gives models the tool they need to carry out bigger, more complex, and riskier tasks. "Not all dangerous actions will require reasoning to execute, especially as AI systems begin to be routinely trusted with more and more high-stakes tasks," the authors concede.
Or read this on ZDNet