OpenAI today officially announced o3-mini, its latest small reasoning model. Optimized for STEM fields (science, mathematics, and programming), it delivers strong logical reasoning while keeping cost and latency low. Compared with its predecessor o1-mini, o3-mini responds faster, answers more accurately, and makes 39% fewer major errors, making it one of the most competitive lightweight AI models available today.
o3-mini launches today, accessible through ChatGPT (including the Plus, Team, and Pro plans) and the OpenAI API, with Enterprise access following in February. Most notably, this is the first time a reasoning model has been made available to free users, who can try it in ChatGPT by selecting the “Reason” mode or regenerating a response.
A full-scale upgrade! 5 ways o3-mini outperforms o1-mini.
o3-mini is OpenAI’s first small reasoning model to support popular developer features, including:
Function Calling - Seamless Integration of AI and Applications
Structured Outputs - Generate schema-conforming JSON and other structured data such as tables
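To make the two features above concrete, here is a minimal sketch of what a Chat Completions request using o3-mini’s function calling and Structured Outputs might look like. The tool name (`get_weather`), its parameters, and the response schema are illustrative assumptions for this example, not details from the announcement; only the request payload is constructed here, with no network call made.

```python
import json

# Hypothetical tool definition for function calling: describes a function
# the model may choose to invoke. "get_weather" is an illustrative name.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Structured Outputs: ask the model to return JSON that strictly matches
# a schema, so downstream code can parse it without guesswork.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "weather_report",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temperature_c": {"type": "number"},
            },
            "required": ["city", "temperature_c"],
            "additionalProperties": False,
        },
    },
}

# The assembled request body, as it would be sent to the API.
request_body = {
    "model": "o3-mini",
    "messages": [{"role": "user", "content": "What's the weather in Taipei?"}],
    "tools": [weather_tool],
    "response_format": response_format,
}

print(json.dumps(request_body, indent=2))
```

In practice, the model would either emit a `tool_calls` entry naming `get_weather` with arguments matching the tool’s parameter schema, or return a final answer as JSON conforming to `weather_report`.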