OpenAI Introduces o3-pro: The New AI Model Optimized for Reasoning and Precision
OpenAI introduced o3-pro, an evolution of the o3 model, now accessible to Pro/Team users via ChatGPT and API. It offers enhanced step-by-step reasoning, access to advanced tools, improved performance on STEM benchmarks, and variable token pricing.
Key Points:
- Available immediately to ChatGPT Pro and Team users; API access starts today; Enterprise/Edu follow next week.
- API Pricing: $20 per million input tokens and $80 per million output tokens (a rough cost sketch follows this list).
- Benchmarks: Beats Gemini 2.5 Pro on AIME 2024 and Claude 4 Opus on GPQA Diamond.
- Tool Support: web search, file analysis, visual input, Python execution, and memory; temporary chats and image generation are currently unavailable.
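To give a rough sense of what the listed pricing means in practice, here is a minimal sketch that estimates the cost of a single o3-pro request from token counts, assuming the $20/$80 per-million rates above; the token figures in the example are illustrative only.

```python
# Rough cost estimate for an o3-pro API call, based on the listed rates:
# $20 per million input tokens, $80 per million output tokens.
# The token counts in the example are illustrative, not measured values.

INPUT_RATE_USD_PER_MTOK = 20.0
OUTPUT_RATE_USD_PER_MTOK = 80.0

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (
        input_tokens / 1_000_000 * INPUT_RATE_USD_PER_MTOK
        + output_tokens / 1_000_000 * OUTPUT_RATE_USD_PER_MTOK
    )

# Example: a 5,000-token prompt with a 2,000-token answer
print(f"${estimate_cost(5_000, 2_000):.4f}")  # $0.10 input + $0.16 output = $0.26
```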
OpenAI launched o3-pro, an advanced version of the o3 reasoning model, available immediately to ChatGPT Pro and Team users, with API release starting today and Enterprise and Edu access planned for next week. At $20 per million input tokens and $80 per million output tokens, the model delivers stronger step-by-step reasoning, with expert evaluators consistently preferring its answers for clarity, accuracy, and ease of use. In AIME 2024 testing, o3-pro outperformed Google’s Gemini 2.5 Pro, and on GPQA Diamond it beat Anthropic’s Claude 4 Opus on doctoral-level questions.

The model integrates tools for web search, file analysis, visual comprehension, Python execution, and custom memory, but some features are temporarily suspended: temporary chats are disabled, image generation is not supported, and Canvas is not available. Choosing o3-pro means prioritizing reliability on critical tasks: it accepts longer response times than o1-pro in exchange for precision and consistency, evaluated with strict criteria such as “4/4 reliability.” OpenAI’s decision to keep o3-pro as a premium model while cutting the price of the standard o3 by 80% underscores its strategy of balancing accessibility and high performance.
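As a rough illustration of the API access described above, the sketch below shows what a request might look like through OpenAI’s official Python SDK, assuming the model is exposed via the Responses API under the model name "o3-pro" and that an API key is set in the environment; consult OpenAI’s API reference for the authoritative interface.

```python
# Minimal sketch of calling o3-pro through the OpenAI Python SDK.
# Assumes the model is available via the Responses API under the name "o3-pro"
# and that OPENAI_API_KEY is set in the environment; check the official
# API reference for the definitive interface and parameters.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-pro",
    input="Walk through the reasoning step by step: how many prime numbers lie between 100 and 130?",
)

print(response.output_text)  # the model's final answer text
```

Since the article notes that o3-pro trades response time for precision, requests can take noticeably longer than with the standard o3, so a generous client-side timeout is worth configuring.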
o3-pro enriches OpenAI’s reasoning model ecosystem, with a focus on robustness and integrated tools.