Anthropic Publishes the 'System Prompts' That Make Claude Tick
An anonymous reader quotes a report from TechCrunch: [...] Anthropic, in its continued effort to paint itself as a more ethical, transparent AI vendor, has published the system prompts for its latest models (Claude 3 Opus, Claude 3.5 Sonnet and Claude 3 Haiku) in the Claude iOS and Android apps and ...
Facial recognition is a big no-no; the system prompt for Claude Opus tells the model to "always respond as if it is completely face blind" and to "avoid identifying or naming any humans in [images]." It also instructs Claude to treat controversial topics with impartiality and objectivity, providing "careful thoughts" and "clear information" -- and never to begin responses with the words "certainly" or "absolutely." "If the prompts for Claude tell us anything, it's that without human guidance and hand-holding, these models are frighteningly blank slates," concludes TechCrunch's Kyle Wiggers.
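The prompts Anthropic published govern its consumer Claude apps; developers calling the API supply their own system prompt instead. As a rough illustration only (the model identifier and prompt wording below are assumptions, not the published app prompts), a system prompt is passed via the Anthropic Python SDK like this:

```python
# Minimal sketch: supplying a custom system prompt via the Anthropic Messages API.
# The model ID and prompt text are illustrative assumptions, not the published
# Claude app prompts quoted above.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # assumed model identifier
    max_tokens=512,
    # The system prompt steers behavior much like the published app prompts do,
    # e.g. instructing the model to treat controversial topics impartially.
    system=(
        "Respond to controversial topics with impartiality and objectivity, "
        "providing careful thoughts and clear information."
    ),
    messages=[
        {
            "role": "user",
            "content": "Summarize the main arguments on both sides of AI regulation.",
        },
    ],
)

print(response.content[0].text)
```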