OpenAI's new open-source model is basically Phi-5
OpenAI just released its first-ever open-source large language models, called gpt-oss-120b and gpt-oss-20b. You can talk to them here. Are they good models…
But since you’re “teaching for the test”, you should expect to do worse than other language models that are trained on broad data and end up being good at the benchmarks by accident. However, I’d bet that Sebastien Bubeck was a part of the effort, and that these models were trained on a heavily filtered or synthetic dataset. It’s not discussed publicly very often, but the main use case for fine-tuning small language models is erotic role-play, and there’s serious demand for it.