OpenAI’s New o1 Model: A Quantum Leap or Just a Pricey Strawberry?
OpenAI’s new reasoning model, o1, outshines its predecessor, solving 83% of problems on a qualifying exam for the International Mathematics Olympiad. Trained with a new algorithm and dataset, o1 delivers more accurate answers, though it takes longer to “think” before responding. It also comes with a steep API price increase. The real test? Can it spell “strawberry” correctly?

Hot Take:
OpenAI’s latest release, o1, is here to flex its brain muscles, proving once and for all that robots are better at math than most of us. But beware: this high IQ comes with a high price tag. Maybe it should’ve been named “Project Wallet Drain” instead.
Key Points:
- OpenAI releases advanced reasoning model, o1, and its faster, smaller sibling, o1-mini.
- o1 demonstrates significant improvements in problem-solving, scoring 83% on a qualifying exam for the International Mathematics Olympiad.
- Utilizes new training algorithms and datasets, showing fewer hallucinations but still not perfect.
- Available to ChatGPT Plus and Team subscribers, with Enterprise and Edu access coming soon; free-tier users will get o1-mini eventually.
- API access to o1 comes with a steep price increase compared to its predecessor, GPT-4o (see the usage sketch after this list).
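
For readers curious what calling the new model actually looks like, here is a minimal sketch using the OpenAI Python SDK (openai>=1.0). The model names "o1-preview" and "o1-mini" are those announced at launch and may have changed since; the prompt is just a nod to the strawberry joke above, and the cost remark in the comments is based on the announcement, not a quoted rate.

```python
# Minimal sketch: calling the o1 family via the OpenAI Python SDK.
# Assumes the launch-era model names "o1-preview" / "o1-mini" and that
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-preview",  # or "o1-mini" for the faster, cheaper sibling
    messages=[
        {
            "role": "user",
            "content": "How many times does the letter 'r' appear in 'strawberry'?",
        }
    ],
)

print(response.choices[0].message.content)

# The usage object shows token counts; o1's hidden reasoning is billed as
# output tokens, which is part of why it costs more per call than GPT-4o.
print(response.usage)
```

In short: the call shape is the same as for GPT-4o, but the bill, and the pause while the model reasons, are not.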