Just hours after announcing a big price cut for its o3 reasoning model, OpenAI made o3-pro, an even more powerful version, available to developers.
o3-pro is “designed to think longer and provide the most reliable responses,” and has access to many more software tool integrations than its predecessor, making it potentially appealing to enterprises and developers searching for high levels of detail and accuracy.
However, the model is also slower than many developers are accustomed to, in part because of the tool access that OpenAI says makes it more accurate.
“Because o3-pro has access to tools, responses typically take longer than o1-pro to complete. We recommend using it for challenging questions where reliability matters more than speed, and waiting a few minutes is worth the tradeoff,” the company said in an email to reporters.

But how much longer? We asked OpenAI how much slower o3-pro is than o3, on average, at producing responses, and will update this story when we receive a reply from the company.
On X, Hyperbolic Labs co-founder and CTO Yuchen Jin posted several screenshots of his o3-pro usage showing it took 3 minutes and $80 worth of tokens to respond to the phrase, “Hi, I’m Sam Altman.”
Bindu Reddy, CEO of Abacus AI, also said o3-pro took 2 minutes to respond to “hey there.”
How to use OpenAI o3-pro
Developers can access o3-pro through the OpenAI API, and it is also available to Pro and Team users of ChatGPT. The new model replaces o1-pro in the model picker for paying ChatGPT users.
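For developers who want to try it, a minimal sketch of an API call is below, assuming the model is exposed under the name "o3-pro" through OpenAI's Responses API in the official Python SDK; check OpenAI's documentation for the exact model identifier and availability.

```python
# Minimal sketch (not from OpenAI's announcement): calling o3-pro via the
# OpenAI Python SDK's Responses API. Model name and endpoint are assumptions
# based on the launch details; verify against the official docs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o3-pro",
    input="Summarize the tradeoffs of using a slower, tool-using reasoning model.",
)

# The Responses API aggregates the model's final answer into output_text.
print(response.output_text)
```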
OpenAI said o3-pro “has access to tools that make ChatGPT useful,” such as searching the web, analyzing files, reasoning about visual inputs, running Python, and personalizing responses.
The model, though, is pricey, which may give some enterprise developers pause. Based on OpenAI’s pricing page, o3-pro costs $20 per million input tokens and $80 per million output tokens, compared to o3 itself, which is now down to $2 and $8 per million tokens, a tenth of the price.
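To put those rates in perspective, here is a back-of-the-envelope estimate, a sketch assuming the per-million-token pricing above and an illustrative request size:

```python
# Rough cost comparison at the published rates (USD per million tokens):
# o3-pro: $20 input / $80 output; o3: $2 input / $8 output.
def estimate_cost(input_tokens: int, output_tokens: int,
                  in_rate: float, out_rate: float) -> float:
    """Return the USD cost for one request given per-million-token rates."""
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# Hypothetical request: a 5,000-token prompt with a 2,000-token answer.
print(f"o3-pro: ${estimate_cost(5_000, 2_000, 20, 80):.2f}")  # about $0.26
print(f"o3:     ${estimate_cost(5_000, 2_000, 2, 8):.2f}")    # about $0.03
```

At these rates, the same request costs roughly ten times more on o3-pro than on the discounted o3, which is the tradeoff developers will be weighing against the reliability gains.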
A more comprehensive model
OpenAI launched o3 and o4-mini in April, expanding its “o-series” of models that rely on reasoning and can “think with images.” The new model, o3-pro, uses the same underlying model as o3.
Evaluations conducted by OpenAI showed that o3-pro can often outperform the base model. Expert reviewers ranked o3-pro higher in domains such as science, education, programming, business, and writing help. The company said o3-pro is more effective, more comprehensive, and follows instructions better.
Reasoning models have become a new battleground for model providers, with competitors like Google, Anthropic, and xAI, as well as rivals from China such as DeepSeek, coming out with their own models designed to think through responses.
Currently, o3-pro is not able to generate images, and OpenAI has disabled temporary chats to resolve a technical issue. ChatGPT’s expanded workspace feature Canvas is also not yet accessible using o3-pro.
Some early users say o3-pro has performed remarkably well, but it is still early days, and the high cost of running the model may deter some developers from experimenting with it.
See some initial reactions below:
As Ben Hylak, a former Apple Vision Pro interface designer and co-founder of the AI observability startup Raindrop, wrote in a blog post about his early access to o3-pro: “It’s noticeably better at discerning what its environment is; accurately communicating what tools it has access to, when to ask questions about the outside world (rather than pretending it has the information/access), and choosing the right tool for the job.” OpenAI co-founder and CEO Sam Altman also highlighted Hylak’s blog post in an X post.
The launch also comes at a time when OpenAI said it has reached three million business users, with enterprise users surging 50% since February.