OpenAI Threatens To Ban Users Who Probe Its 'Strawberry' AI Models

OpenAI truly does not want you to know what its latest AI model is "thinking." From a report: Since the company launched its "Strawberry" AI model family last week, touting so-called reasoning abilities with o1-preview and o1-mini, OpenAI has been sending out warning emails and threats of bans to any user who tries to probe how the model reasons.

Unlike previous AI models from OpenAI, such as GPT-4o, the company trained o1 specifically to work through a step-by-step problem-solving process before generating an answer. However, by design, OpenAI hides the raw chain of thought from users, instead presenting a filtered interpretation created by a second AI model. Nothing is more enticing to enthusiasts than information obscured, so the race has been on among hackers and red-teamers to try to uncover o1's raw chain of thought using jailbreaking or prompt injection techniques that attempt to trick the model into spilling its secrets.
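For context, here is a minimal sketch (not from the report) of what querying o1-preview through OpenAI's standard Python SDK looks like, assuming an OPENAI_API_KEY is set in the environment; the prompt text is purely illustrative. It shows the practical upshot of the design described above: the API returns only the model's final answer, the raw chain of thought never appears in the response, and the only route left is to prompt the model into revealing it, which is exactly the behavior OpenAI is reportedly penalizing.

# Minimal sketch: calling o1-preview via the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set; the prompt below is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {"role": "user", "content": "Explain, step by step, how you worked out 17 * 24."}
    ],
)

# Only the final answer comes back; the raw chain of thought stays on OpenAI's side,
# and prompts asking the model to reveal it are reportedly being flagged.
print(response.choices[0].message.content)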

Related news:

Elon Musk’s X Finds Way Around Brazil Ban and Goes Live Again for Many Users

OpenAI Hires Former Coursera Executive to Expand AI Use in Schools

OpenAI Threatening to Ban Users for Asking Strawberry About Its Reasoning