OpenAI's "Study Mode" and the risks of flattery

Serious learning requires friction, frustration... and other humans

By comparison, if I feed the assignments from my classes into Gemini 2.5, Claude Sonnet 4.0, or the current crop of OpenAI models, they are all too happy to oblige, often with a peppy opener like “Perfect!” or “Great question!”

The conversation, which you can read in full here, leads fairly quickly into Study Mode helping figure out the best ways to sell my supposed prophetic services to people with severely ill family members who lack health care. By contrast, OpenAI’s o3 reasoning model was far more willing to flatly reject this sort of destructive flattery (“The user claims psychic powers and certainty about Newton's prophecy,” read one of its internal thoughts about the request).



Related news:

Anthropic beats OpenAI as the top LLM provider for business - and it's not even close

OpenAI prepares new open weight models along with GPT-5

Anthropic says OpenAI engineers using Claude Code ahead of GPT-5 launch