This researcher turned OpenAI’s open weights model gpt-oss-20b into a non-reasoning ‘base’ model with less alignment, more freedom

Morris found it could also reproduce verbatim passages from copyrighted works, including three out of six book excerpts he tried.

The gpt-oss models OpenAI released on August 5 were “reasoning-optimized”: trained and fine-tuned not just to predict the next word, but to follow instructions in a safe, consistent way, often stepping through problems with structured “chain of thought” reasoning before producing a final answer.

Rather than trying to jailbreak the model with clever prompts, an approach Morris said proved ineffective in his early experiments, he took a different tack after a conversation with John Schulman, an OpenAI co-founder, former Anthropic researcher, and current chief scientist at Thinking Machines. The resulting base model no longer defaults to explaining its reasoning step by step and will produce a wider range of responses, including instructions OpenAI’s aligned model would refuse to give, such as building a weapon, listing profanity, or planning illegal activities.

Or read this on Venture Beat
