Anthropic’s Claude 3 knew when researchers were testing it
The more time we spend with LLMs, and the more powerful they get, the more surprises seem to emerge about their capabilities.
But among the interesting details to emerge today about Claude 3's release is one shared by Anthropic prompt engineer Alex Albert on X (formerly Twitter): during evaluation, the model appeared to recognize that it was being tested. "This level of meta-awareness was very cool to see," Albert wrote, "but it also highlighted the need for us as an industry to move past artificial tests to more realistic evaluations that can accurately assess models' true capabilities and limitations." Still, it is important to remember that even the most powerful LLMs are machine learning programs governed by word and conceptual associations, not conscious entities (that we know of).