OpenAI CEO Sam Altman said that with GPT-5.2, the focus was on coding and reasoning, which came at the expense of writing quality.
Photo Credit: Unsplash/Levart_Photographer
Sam Altman took questions on AI coding, agents, scientific collaboration, new features, and more
OpenAI CEO Sam Altman admitted that ChatGPT's writing performance has recently regressed, and attributed it to selective training of the latest artificial intelligence (AI) model. The company hosted its inaugural town hall on Tuesday, inviting AI builders from across the ecosystem to ask questions and share feedback on its products and services. During the conversation, Altman tackled a wide range of questions about the AI giant's go-to-market strategy, future products, the cost-versus-performance trade-off, and ChatGPT's growing use as a scientific collaborator.
The company's first town hall was streamed live on YouTube. The 50-minute session covered a diverse range of topics and concerns raised by developers and employees of the company. One of the biggest highlights was the admission that the latest model, GPT-5.2, delivers poorer writing capability than GPT-4.5, while significantly improving on coding, reasoning, and mathematics.
Answering a question from Ben Hylak, co-founder of AI startup Raindrop, about the decline in GPT-5.2's writing performance, Altman said, “I think we just screwed that up. We will make future versions of GPT-5.x hopefully much better at writing than 4.5 was. We did decide, and I think for good reason, to put most of our effort in 5.2 into making it super good at intelligence, reasoning, coding, engineering, that kind of thing. And we have limited bandwidth here, and sometimes we focus on one thing and neglect another.”
OpenAI's stance makes commercial sense. The company has been targeting revenue growth, and while subscription revenue from individual users has a lower ceiling due to fixed rates and price sensitivity, enterprises are willing to spend more under token-based pricing. Capabilities valued by companies and developers were therefore prioritised while developing GPT-5.2, and writing and conversation took a hit as a result. Altman does promise that this will be fixed in future releases.
Altman also addressed the Jevons paradox in AI. The paradox holds that increased efficiency in the use of a resource lowers its cost, which in turn drives demand and total consumption higher, so greater efficiency does not necessarily reduce overall resource use. In the AI space, some experts have argued that if AI makes coding faster and cheaper, the resulting rise in demand could create new jobs rather than eliminate them.
“I think what it means to be an engineer is going to super change. There will be probably far more people and creating far more value and capturing more value that are getting computers to do what they want. Getting computers to do what other people want, figuring out ways to make these useful experiences for others. But the shape of that job and the amount of time you spend like typing code or debugging code or a bunch of other things is going to very much change,” Altman said.