
Codex Models

<ModelDetails client:load name="gpt-5.4" slug="gpt-5.4" wallpaperUrl="/images/api/models/gpt-5.4.jpg" description="Flagship frontier model for professional work that brings the industry-leading coding capabilities of GPT-5.3-Codex together with stronger reasoning, tool use, and agentic workflows." data={{ features: [ { title: "Capability", value: "", icons: [ "openai.SparklesFilled", "openai.SparklesFilled", "openai.SparklesFilled", "openai.SparklesFilled", "openai.SparklesFilled", ], }, { title: "Speed", value: "", icons: ["openai.Flash", "openai.Flash", "openai.Flash"], }, { title: "Codex CLI & SDK", value: true, }, { title: "Codex app & IDE extension", value: true }, { title: "Codex Cloud", value: false, }, { title: "ChatGPT Credits", value: true }, { title: "API Access", value: true }, ], }} />

<ModelDetails client:load name="gpt-5.4-mini" slug="gpt-5.4-mini" wallpaperUrl="/images/api/models/gpt-5-mini.jpg" description="Fast, efficient mini model for responsive coding tasks and subagents." data={{ features: [ { title: "Capability", value: "", icons: [ "openai.SparklesFilled", "openai.SparklesFilled", "openai.SparklesFilled", ], }, { title: "Speed", value: "", icons: ["openai.Flash", "openai.Flash", "openai.Flash", "openai.Flash"], }, { title: "Codex CLI & SDK", value: true, }, { title: "Codex app & IDE extension", value: true }, { title: "Codex Cloud", value: false, }, { title: "ChatGPT Credits", value: true }, { title: "API Access", value: true }, ], }} />

<ModelDetails client:load name="gpt-5.3-codex" slug="gpt-5.3-codex" wallpaperUrl="/images/codex/codex-wallpaper-1.webp" description="Industry-leading coding model for complex software engineering. Its coding capabilities now also power GPT-5.4." data={{ features: [ { title: "Capability", value: "", icons: [ "openai.SparklesFilled", "openai.SparklesFilled", "openai.SparklesFilled", "openai.SparklesFilled", "openai.SparklesFilled", ], }, { title: "Speed", value: "", icons: ["openai.Flash", "openai.Flash", "openai.Flash"], }, { title: "Codex CLI & SDK", value: true, }, { title: "Codex app & IDE extension", value: true }, { title: "Codex Cloud", value: true, }, { title: "ChatGPT Credits", value: true }, { title: "API Access", value: true }, ], }} />

<ModelDetails client:load name="gpt-5.3-codex-spark" slug="gpt-5.3-codex-spark" wallpaperUrl="/images/codex/codex-wallpaper-2.webp" description="Text-only research preview model optimized for near-instant, real-time coding iteration. Available to ChatGPT Pro users." data={{ features: [ { title: "Capability", value: "", icons: [ "openai.SparklesFilled", "openai.SparklesFilled", "openai.SparklesFilled", ], }, { title: "Speed", value: "", icons: [ "openai.Flash", "openai.Flash", "openai.Flash", "openai.Flash", "openai.Flash", ], }, { title: "Codex CLI & SDK", value: true, }, { title: "Codex app & IDE extension", value: true }, { title: "Codex Cloud", value: false, }, { title: "ChatGPT Credits", value: false }, { title: "API Access", value: false }, ], }} />

For most tasks in Codex, start with gpt-5.5 when it appears in your model picker. It is strongest for complex coding, computer use, knowledge work, and research workflows. GPT-5.5 is currently available in Codex when you sign in with ChatGPT; it isn't available with API-key authentication. During the rollout, continue using gpt-5.4 if gpt-5.5 is not yet available. Use gpt-5.4-mini when you want a faster, lower-cost option for lighter coding tasks or subagents. The gpt-5.3-codex-spark model is available in research preview for ChatGPT Pro subscribers and is optimized for near-instant, real-time coding iteration.

Alternative models

When you sign in with ChatGPT, Codex works best with the models listed above.

You can also point Codex at any model and provider that supports either the Chat Completions or Responses APIs to fit your specific use case.

Support for the Chat Completions API is deprecated and will be removed in future releases of Codex.
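As a sketch of what pointing Codex at another provider can look like in config.toml (the provider name, base URL, and environment-variable key below are illustrative placeholders, not a real endpoint; check your provider's documentation for actual values):

```toml
# Illustrative only: route Codex to a hypothetical third-party provider.
model = "some-model-name"
model_provider = "example-provider"

[model_providers.example-provider]
name = "Example Provider"
base_url = "https://api.example.com/v1"
env_key = "EXAMPLE_API_KEY"  # environment variable holding the API key
wire_api = "responses"       # "chat" also works, but is deprecated
```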

Configuring models

Configure your default local model

The Codex CLI and IDE extension share the same config.toml configuration file. To specify a model, add a model entry to that file. If you don't set one, the Codex app, CLI, and IDE extension default to a recommended model.

model = "gpt-5.5"

If gpt-5.5 isn't available in your account yet, use gpt-5.4.
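Putting this together, a minimal config.toml might look like the following (assuming the default location of ~/.codex/config.toml; the fallback comment restates the rollout guidance above):

```toml
# ~/.codex/config.toml
# Default model for the Codex CLI and IDE extension.
model = "gpt-5.5"  # fall back to "gpt-5.4" if gpt-5.5 isn't available yet
```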

Choosing a different local model temporarily

In the Codex CLI, you can use the /model command during an active thread to change the model. In the IDE extension, you can use the model selector below the input box to choose your model.

To start a new Codex CLI thread with a specific model, or to specify the model for codex exec, use the --model/-m flag:

codex -m gpt-5.5
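The same flag applies to non-interactive runs with codex exec; a sketch (the prompt text is purely illustrative):

```shell
# Run a one-off, non-interactive task with an explicit model.
codex exec -m gpt-5.4 "summarize the open TODOs in this repository"
```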

Choosing your model for cloud tasks

Currently, you can't change the default model for Codex cloud tasks.