## Recommended models

### gpt-5.4

Flagship frontier model for professional work that brings the industry-leading coding capabilities of GPT-5.3-Codex together with stronger reasoning, tool use, and agentic workflows.

`codex -m gpt-5.4`

Available in: Codex CLI & SDK, Codex app & IDE extension, Codex Cloud, ChatGPT Credits, and API Access.
### gpt-5.4-mini

Fast, efficient mini model for responsive coding tasks and subagents.

`codex -m gpt-5.4-mini`

Available in: Codex CLI & SDK, Codex app & IDE extension, Codex Cloud, ChatGPT Credits, and API Access.
### gpt-5.3-codex

Industry-leading coding model for complex software engineering. Its coding capabilities now also power GPT-5.4.

`codex -m gpt-5.3-codex`

For most tasks in Codex, start with `gpt-5.4`. It combines strong coding, reasoning, native computer use, and broader professional workflows in one model. Use `gpt-5.4-mini` when you want a faster, lower-cost option for lighter coding tasks or subagents. The `gpt-5.3-codex-spark` model remains available in research preview for ChatGPT Pro subscribers and is optimized for near-instant, text-only iteration.
## Alternative models

### gpt-5.2

Previous general-purpose model for coding and agentic tasks across industries and domains. Succeeded by GPT-5.4.

`codex -m gpt-5.2`

The Codex CLI and IDE extension use the same `config.toml` [configuration file](https://developers.openai.com/codex/config-basic). To specify a model, add a `model` entry to your configuration file. If you don't specify a model, the Codex app, CLI, or IDE extension defaults to a recommended model.

```toml
model = "gpt-5.4"
```
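A slightly fuller `config.toml` sketch. This assumes your Codex CLI version supports named profiles (`[profiles.*]` tables selected with the `--profile` flag); treat the profile syntax as an assumption and confirm it against `codex --help` and the configuration docs linked above:

```toml
# Default model for every Codex session
model = "gpt-5.4"

# Hypothetical profile for lighter tasks and subagents;
# select it with: codex --profile mini
[profiles.mini]
model = "gpt-5.4-mini"
```

With a setup like this, plain `codex` uses `gpt-5.4`, while `codex --profile mini` switches to the mini model without editing the file each time.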
### Choosing a different local model temporarily

To start a new Codex CLI thread with a specific model, or to specify the model for `codex exec`, you can use the `--model`/`-m` flag:

```bash
codex -m gpt-5.4
```

### Choosing your model for cloud tasks