OpenAI says Flex processing will offer lower inference costs in exchange for slower response times.
OpenAI recommends that developers increase the timeout duration for lengthy prompts
OpenAI introduced a new service tier for developers on Thursday via its application programming interface (API). Dubbed Flex processing, it halves AI usage costs for developers compared with standard pricing. However, the lower prices come at the cost of slower response times and occasional resource unavailability. The new API feature is currently available in beta for select reasoning-focused large language models (LLMs). The San Francisco-based AI firm said the service tier can be useful for non-production and non-priority tasks.
On its support page, the AI firm detailed the service tier. Flex processing is currently available in beta for the Chat Completions and Responses APIs, and works with the o3 and o4-mini AI models. Developers can set the service tier parameter to flex in an API request to activate the new mode.
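As a minimal sketch of what such a request might look like with the official openai Python SDK, the parameters below opt a Chat Completions call into the Flex tier; the prompt text and the OPENAI_FLEX_DEMO environment flag are illustrative assumptions, and the live call is guarded so the sketch runs without the SDK or an API key:

```python
import os

def build_flex_request(prompt: str) -> dict:
    # Keyword arguments for a Chat Completions call on the Flex tier.
    # "service_tier": "flex" is what opts the request into the cheaper,
    # slower processing mode described in the article.
    return {
        "model": "o3",
        "service_tier": "flex",
        "messages": [{"role": "user", "content": prompt}],
    }

params = build_flex_request("Summarize last night's batch-job logs.")

# Actually sending the request needs the openai package and an API key;
# guarded behind an explicit flag so this sketch is safe to run as-is.
if os.environ.get("OPENAI_FLEX_DEMO") == "1":
    from openai import OpenAI  # pip install openai
    client = OpenAI()
    response = client.chat.completions.create(**params)
    print(response.choices[0].message.content)
```

The same `service_tier` parameter applies to the Responses API; everything else about the request stays the same as a standard-tier call.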
One downside of the cheaper API pricing is that processing times will be significantly higher. OpenAI says developers opting for Flex processing should expect slower response times and occasional resource unavailability. Users may also face API request timeouts if a prompt is lengthy or a request is complex. As per the AI firm, the mode can be helpful for non-production or low-priority tasks such as model evaluations, data enrichment, and asynchronous workloads.
Notably, OpenAI highlights that developers can avoid timeout errors by increasing the default timeout. By default, these APIs time out after 10 minutes, but with Flex processing, lengthy and complex prompts can take longer than that. The company suggests that increasing the timeout will reduce the chances of getting an error.
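One way this might look in practice, assuming the openai Python SDK (which accepts a per-client timeout): widen the client timeout beyond the 10-minute default. The 15-minute figure here is an illustrative choice, not an OpenAI recommendation, and the SDK import is guarded so the sketch runs without it installed:

```python
# Flex requests can exceed the default 10-minute timeout, so widen it.
DEFAULT_TIMEOUT_SECONDS = 10 * 60   # the default the article describes
FLEX_TIMEOUT_SECONDS = 15 * 60      # illustrative; tune for your workload

def flex_client_kwargs() -> dict:
    # Keyword arguments for constructing a client with a longer timeout,
    # kept separate so they can be reused or inspected.
    return {"timeout": float(FLEX_TIMEOUT_SECONDS)}

# With the openai SDK (guarded so this sketch runs without it installed):
try:
    from openai import OpenAI
    client = OpenAI(**flex_client_kwargs())
except ImportError:
    client = None
```

Raising the timeout only gives a slow request more room to finish; it does not speed the request up, so very long jobs may still be better suited to asynchronous workflows.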
Additionally, Flex processing might sometimes lack the resources to handle developers' requests, returning a "429 Resource Unavailable" error code instead. To manage these scenarios, developers can retry requests with exponential backoff, or switch to the default service tier if timely completion is necessary. OpenAI said it will not charge developers when they receive this error.
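The retry pattern OpenAI points to can be sketched generically with the standard library; the exception class and the flaky demo call below are stand-ins, not part of any OpenAI SDK:

```python
import random
import time

class ResourceUnavailableError(Exception):
    """Stand-in for the 429 'Resource Unavailable' error Flex can return."""

def retry_with_backoff(call, max_retries=5, base_delay=1.0):
    # Retry `call` on ResourceUnavailableError, doubling the wait each
    # attempt and adding jitter so concurrent clients don't retry in
    # lockstep. Re-raises if every attempt fails.
    for attempt in range(max_retries):
        try:
            return call()
        except ResourceUnavailableError:
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)

# Demo: a fake call that fails twice before succeeding.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ResourceUnavailableError("429: resource unavailable")
    return "ok"

result = retry_with_backoff(flaky_call, base_delay=0.01)  # "ok" on try 3
```

In a real integration, the `except` clause would catch the SDK's rate-limit exception, and a fallback to the default service tier could replace the final re-raise when timely completion matters.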
Currently, the o3 AI model costs $10 (roughly Rs. 854) per million input tokens and $40 (roughly Rs. 3,418) per million output tokens in the standard mode. Flex processing brings the input cost down to $5 (roughly Rs. 427) and the output cost to $20 (roughly Rs. 1,709). Similarly, the new service tier charges $0.55 (roughly Rs. 47) per million input tokens and $2.20 (roughly Rs. 188) per million output tokens for the o4-mini AI model, instead of $1.10 (roughly Rs. 94) for input and $4.40 (roughly Rs. 376) for output in the standard mode.
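To see what the halved rates mean for a concrete workload, the arithmetic below uses only the USD per-million-token prices quoted above; the 2M-input/0.5M-output job size is an invented example:

```python
# Per-million-token prices from the article, as (input, output) in USD.
STANDARD = {"o3": (10.00, 40.00), "o4-mini": (1.10, 4.40)}
FLEX = {"o3": (5.00, 20.00), "o4-mini": (0.55, 2.20)}

def cost_usd(prices, model, input_tokens, output_tokens):
    # Prices are quoted per million tokens, so scale counts accordingly.
    inp, out = prices[model]
    return (input_tokens / 1e6) * inp + (output_tokens / 1e6) * out

# Example: a batch job pushing 2M input and 0.5M output tokens through o3.
standard_cost = cost_usd(STANDARD, "o3", 2_000_000, 500_000)  # $40.00
flex_cost = cost_usd(FLEX, "o3", 2_000_000, 500_000)          # $20.00
```

Because every rate in the Flex tier is exactly half its standard counterpart, the saving is 50 percent regardless of the input/output token mix.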