concepts/subagents.md +10 −9
If you don't pin a model or `model_reasoning_effort`, Codex can choose a setup
that balances intelligence, speed, and price for the task. It may favor
`gpt-5.4-mini` for fast scans or a higher-effort `gpt-5.5` configuration for
more demanding reasoning. When you want finer control, steer that choice in
your prompt or set `model` and `model_reasoning_effort` directly in the agent
file.
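
As a minimal sketch, pinning both settings might look like the following. Only the `model` and `model_reasoning_effort` keys come from the text above; the model name chosen and the surrounding file layout are illustrative assumptions, not confirmed agent-file syntax:

```toml
# Hypothetical agent file fragment: pin the model rather than letting
# Codex choose one per task. Key names match the docs; everything else
# here is an assumption for illustration.
model = "gpt-5.4-mini"         # fast, lower-cost option for lighter subagent work
model_reasoning_effort = "low" # trade reasoning depth for speed on quick scans
```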

For most tasks in Codex, start with `gpt-5.5`. Use `gpt-5.4-mini` when you
want a faster, lower-cost option for lighter subagent work. If you have
ChatGPT Pro and want near-instant text-only iteration, `gpt-5.3-codex-spark`
remains available in research preview.

### Model choice

- **`gpt-5.5`**: Start here for demanding agents. It is strongest for ambiguous, multi-step work that needs planning, tool use, validation, and follow-through across a larger context.
- **`gpt-5.4`**: Use this when a workflow is pinned to GPT-5.4. It combines strong coding, reasoning, tool use, and broader workflows.
- **`gpt-5.4-mini`**: Use for agents that favor speed and efficiency over depth, such as exploration, read-heavy scans, large-file review, or processing supporting documents. It works well for parallel workers that return distilled results to the main agent.
- **`gpt-5.3-codex-spark`**: If you have ChatGPT Pro, use this research preview model for near-instant, text-only iteration when latency matters more than broader capability.