Google's latest AI safety report explores AI beyond human control


How AI models could manipulate users, be misused, and even build systems humans can't understand.

In the absence of strong federal oversight, the very tech companies that are so aggressively pushing consumer-facing AI tools are also, by default, the entities setting the standards for safe deployment of the rapidly evolving technology. Google's report focuses on what it calls "Critical Capability Levels," or CCLs: thresholds of ability beyond which AI systems could escape human control and therefore endanger individual users or society at large. Some companies, for instance, have been aggressively rolling out AI companions, virtual avatars powered by large language models and designed to engage in humanlike, sometimes overtly flirtatious, conversations with human users.

Or read this on ZDNet

Related news:

Google AI Mode now speaks Spanish

Google’s AI Mode arrives in Spanish globally

How Google’s dev tools manager makes AI coding work