OpenAI launches o3-mini for free: its most powerful compact reasoning model, upgraded for low cost and high efficiency!

OpenAI today officially announced o3-mini, its latest small reasoning model. Optimized for STEM fields (science, mathematics, and programming), it delivers strong logical reasoning while keeping cost and latency low. Compared to its predecessor o1-mini, o3-mini responds faster, answers more accurately, and reduces major errors by 39%, making it one of the most competitive lightweight AI models available today.

o3-mini launches today, accessible through ChatGPT (including the Plus, Team, and Pro plans) and the OpenAI API, with enterprise availability to follow in February. Notably, this is the first time a reasoning model has been offered to free users: anyone can try it by selecting the "Reason" mode in ChatGPT or by regenerating a response.

A full upgrade: 5 ways o3-mini is stronger than o1-mini

  1. Supports key developer features, ready for production use

o3-mini is OpenAI's first small reasoning model to support popular developer features, including:

Function Calling - seamlessly connect the model to external applications

Structured Outputs - generate responses in structured formats such as JSON or tables
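To illustrate what function calling looks like in practice, here is a minimal sketch of the kind of request payload a developer would send. The parameter names follow the OpenAI Chat Completions API's `tools` format; the network call itself is omitted so the example stays self-contained, and `get_weather` is a hypothetical function used purely for illustration.

```python
import json

def build_request(user_message: str) -> dict:
    """Build a chat-completion payload that exposes one callable function.

    The model can then respond with a tool call (the function name plus
    JSON arguments) instead of plain text, which the application executes.
    """
    return {
        "model": "o3-mini",
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    # Hypothetical function for illustration only.
                    "name": "get_weather",
                    "description": "Look up the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
    }

payload = build_request("What's the weather in Taipei?")
print(json.dumps(payload["tools"][0]["function"]["name"]))
```

In a real integration, this payload would be sent via the OpenAI SDK (e.g. `client.chat.completions.create(**payload)`), and the application would run the named function with the arguments the model returns.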
