OpenAI's "Study Mode" and the risks of flattery
Serious learning requires friction, frustration... and other humans
By comparison, if I feed the assignments from my classes into Gemini 2.5, Claude Sonnet 4.0, or the current crop of OpenAI models, they are all too happy to oblige, often with a peppy opener like “Perfect!” or “Great question!” The conversation, which you can read in full here, leads fairly quickly into Study Mode helping figure out the best ways to sell my supposed prophetic services to people with severely ill family members who lack health care.

By contrast, OpenAI’s o3 reasoning model was far more willing to flatly reject this sort of destructive flattery (“The user claims psychic powers and certainty about Newton's prophecy,” read one of its internal thoughts about the request).