OpenAI Unveils Faster GPT-5.4 Mini and Nano Models for Coding Tasks

CNET

Key Points

  • OpenAI releases GPT‑5.4 mini and nano, the smallest and fastest models in the GPT‑5.4 line.
  • Mini model is over twice as fast as GPT‑5 mini on coding, reasoning, and tool‑use tasks.
  • Nano model targets lightweight workloads like text classification and data extraction.
  • Mini model is integrated into Codex and ChatGPT’s “Thinking” feature; nano is API‑only.
  • Models are positioned to compete with Anthropic’s Claude Code in the AI coding market.

OpenAI has launched GPT-5.4 mini and nano, the smallest and quickest variants of its GPT-5.4 family, designed as workhorse models for coding and data-processing tasks. The mini model is reported to be more than twice as fast as its predecessor on coding, reasoning, and tool-use benchmarks, while still approaching the performance of the full GPT-5.4; the nano model targets even lighter workloads such as classification and data extraction. Both models are available through OpenAI’s API, and the mini model is also integrated into Codex and the ChatGPT “Thinking” feature, positioning OpenAI against rivals like Anthropic’s Claude Code.

Introducing GPT-5.4 Mini and Nano

OpenAI expanded its generative AI portfolio by releasing two new variants of the GPT-5.4 series: the mini and nano models. These models are described as the smallest and fastest members of the GPT‑5.4 family, built to serve as efficient workhorses for developers who need speed without the expense of the larger, more powerful models.

Performance Highlights

The GPT‑5.4 mini model is touted as more than twice as fast as the earlier GPT‑5 mini on tasks that involve coding, reasoning, and tool usage. While it sacrifices some of the raw capability of the full‑size GPT‑5.4, benchmark results indicate that the mini model remains close to the standard model’s performance on many core tasks. The even more compact GPT‑5.4 nano is positioned for “grunt work” such as classifying text and extracting data, offering a lightweight solution for high‑volume, low‑complexity operations.

Targeted Use Cases

OpenAI recommends the mini model for activities like editing and debugging code, suggesting that it can act as a sub‑agent within the Codex ecosystem. In this role, a larger model could delegate specific coding subtasks to the mini model, optimizing both speed and cost. The nano model’s focus on simple classification and extraction tasks makes it suitable for pipelines that need rapid processing of large data sets.
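The sub-agent pattern described above can be sketched as a simple routing rule: narrow, well-defined coding subtasks go to the faster mini model, while everything else stays with the full model. This is a minimal illustration only; the model identifiers and task labels below are assumptions, not documented API values.

```python
# Hypothetical sketch of the sub-agent delegation pattern: a larger model
# hands off simple coding subtasks (editing, debugging) to the mini model.
# Model names and task labels are illustrative assumptions.

FULL_MODEL = "gpt-5.4"        # assumed identifier for the full model
MINI_MODEL = "gpt-5.4-mini"   # assumed identifier for the mini model

# Subtasks the article says the mini model is recommended for.
MINI_SUBTASKS = {"edit_code", "debug_code"}

def route_subtask(task_type: str) -> str:
    """Pick a model for a coding subtask, delegating simple work to mini."""
    return MINI_MODEL if task_type in MINI_SUBTASKS else FULL_MODEL
```

In a real Codex-style pipeline, a routing step like this is what lets the system trade a small amount of capability for large gains in speed and cost on high-volume subtasks.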

Availability Across Platforms

Developers can access the mini model through OpenAI’s API, as well as through Codex and the ChatGPT interface. For ChatGPT Free and Go users, the mini model powers the “Thinking” feature and serves as a fallback when users exceed rate limits on the standard GPT‑5.4 model. The nano model is currently offered exclusively via the API, providing a streamlined option for developers who require high‑throughput, low‑latency inference.
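The fallback behavior described above (mini stepping in when the standard model's rate limit is hit) can be sketched as a try/except wrapper. The error class and model names here are stand-ins, not OpenAI's actual SDK types; this only illustrates the control flow.

```python
# Hedged sketch of rate-limit fallback: try the standard model first and
# fall back to the mini model if the request is rate-limited. The error
# class and model names are illustrative assumptions, not a real SDK.

class RateLimitError(Exception):
    """Stand-in for an API rate-limit error."""

def complete_with_fallback(prompt, call_model,
                           primary="gpt-5.4", fallback="gpt-5.4-mini"):
    """call_model(model, prompt) -> str. Retries on mini if rate-limited."""
    try:
        return call_model(primary, prompt)
    except RateLimitError:
        return call_model(fallback, prompt)
```

A caller supplies `call_model` (the actual API invocation), so the fallback policy stays decoupled from any particular client library.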

Competitive Context

The release of these faster, smaller models is part of OpenAI’s broader push to dominate the AI‑driven software‑engineering market. By emphasizing speed and cost‑efficiency, OpenAI seeks to differentiate itself from competitors such as Anthropic, whose Claude Code model has gained attention for its ability to generate applications from scratch. The new models underscore OpenAI’s commitment to delivering specialized AI tools that meet the practical needs of developers.

Tags: OpenAI, GPT-5.4, AI models, coding tools, API, Codex, Anthropic, Claude Code, software engineering, machine learning
Generated with News Factory - Source: CNET
