
agent-approvals-security.md +253 −0 added


# Agent approvals & security

Codex helps protect your code and data and reduces the risk of misuse.

This page covers how to operate Codex safely, including sandboxing, approvals, and network access. If you are looking for Codex Security, the product for scanning connected GitHub repositories, see [Codex Security](https://developers.openai.com/codex/security).

By default, the agent runs with network access turned off. Locally, Codex uses an OS-enforced sandbox that limits what it can touch (typically to the current workspace), plus an approval policy that controls when it must stop and ask you before acting.

For a high-level explanation of how sandboxing works across the Codex app, IDE extension, and CLI, see [Sandboxing](https://developers.openai.com/codex/concepts/sandboxing).

## Sandbox and approvals

Codex security controls come from two layers that work together:

- **Sandbox mode**: What Codex can do technically (for example, where it can write and whether it can reach the network) when it executes model-generated commands.
- **Approval policy**: When Codex must ask you before it executes an action (for example, leaving the sandbox, using the network, or running commands outside a trusted set).
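These two layers correspond to two top-level keys in `config.toml`. A minimal sketch, using values that appear elsewhere on this page:

```toml
# Layer 1: what Codex can technically do
sandbox_mode = "workspace-write"

# Layer 2: when Codex must stop and ask you
approval_policy = "on-request"
```

The equivalent CLI flags are `--sandbox` and `--ask-for-approval`.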

Codex uses different sandbox modes depending on where you run it:

- **Codex cloud**: Runs in isolated, OpenAI-managed containers, preventing access to your host system or unrelated data. Cloud uses a two-phase runtime model: setup runs before the agent phase and can access the network to install specified dependencies; the agent phase then runs offline by default unless you enable internet access for that environment. Secrets configured for cloud environments are available only during setup and are removed before the agent phase starts.
- **Codex CLI / IDE extension**: OS-level mechanisms enforce sandbox policies. Defaults include no network access and write permissions limited to the active workspace. You can configure the sandbox, approval policy, and network settings based on your risk tolerance.

In the `Auto` preset (for example, `--full-auto`), Codex can read files, make edits, and run commands in the working directory automatically.

Codex asks for approval to edit files outside the workspace or to run commands that require network access. If you want to chat or plan without making changes, switch to `read-only` mode with the `/permissions` command.

Codex can also elicit approval for app (connector) tool calls that advertise side effects, even when the action isn't a shell command or file change. Destructive app/MCP tool calls always require approval when the tool advertises a destructive annotation, even if it also advertises other hints (for example, read-only hints).

## Network access [Elevated Risk](https://help.openai.com/articles/20001061)

For Codex cloud, see [agent internet access](https://developers.openai.com/codex/cloud/internet-access) to enable full internet access or a domain allow list.

For the Codex app, CLI, or IDE extension, the default `workspace-write` sandbox mode keeps network access turned off unless you enable it in your configuration:

```toml
[sandbox_workspace_write]
network_access = true
```

You can also control the [web search tool](https://platform.openai.com/docs/guides/tools-web-search) without granting full network access to spawned commands. By default, Codex uses a web search cache: an OpenAI-maintained index of web results, so cached mode returns pre-indexed results instead of fetching live pages. This reduces exposure to prompt injection from arbitrary live content, but you should still treat web results as untrusted. If you are using `--yolo` or another [full access sandbox setting](#common-sandbox-and-approval-combinations), web search defaults to live results. Use `--search` or set `web_search = "live"` to allow live browsing, or set it to `"disabled"` to turn the tool off:

```toml
web_search = "cached" # default
# web_search = "disabled"
# web_search = "live" # same as --search
```

Use caution when enabling network access or web search in Codex. Prompt injection can cause the agent to fetch and follow untrusted instructions.

## Defaults and recommendations

- On launch, Codex detects whether the folder is version-controlled and recommends:
  - Version-controlled folders: `Auto` (workspace write + on-request approvals)
  - Non-version-controlled folders: `read-only`
- Depending on your setup, Codex may also start in `read-only` until you explicitly trust the working directory (for example, via an onboarding prompt or `/permissions`).
- The workspace includes the current directory and temporary directories like `/tmp`. Use the `/status` command to see which directories are in the workspace.
- To accept the defaults, run `codex`.
- You can set these explicitly:
  - `codex --sandbox workspace-write --ask-for-approval on-request`
  - `codex --sandbox read-only --ask-for-approval on-request`

### Protected paths in writable roots

In the default `workspace-write` sandbox policy, writable roots still include protected paths:

- `<writable_root>/.git` is protected as read-only whether it appears as a directory or a file.
- If `<writable_root>/.git` is a pointer file (`gitdir: ...`), the resolved Git directory path is also protected as read-only.
- `<writable_root>/.agents` is protected as read-only when it exists as a directory.
- `<writable_root>/.codex` is protected as read-only when it exists as a directory.
- Protection is recursive, so everything under those paths is read-only.

### Run without approval prompts

You can disable approval prompts with `--ask-for-approval never` or its shorthand, `-a never`.

This option works with all `--sandbox` modes, so you still control Codex's level of autonomy. Codex makes a best effort within the constraints you set.

If you need Codex to read files, make edits, and run commands with network access without approval prompts, use `--sandbox danger-full-access` (or the `--dangerously-bypass-approvals-and-sandbox` flag). Use caution before doing so.
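If you always run Codex this way inside an environment that provides its own isolation, the same behavior can be captured in `config.toml`. A sketch, assuming `danger-full-access` is accepted as a `sandbox_mode` value like the other `--sandbox` mode names:

```toml
# Caution: no approval prompts and an unrestricted sandbox.
# Only use inside an environment that provides its own isolation.
sandbox_mode = "danger-full-access"
approval_policy = "never"
```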

For a middle ground, `approval_policy = { reject = { ... } }` lets you auto-reject specific approval prompt categories (sandbox escalation, execpolicy-rule prompts, or MCP elicitations) while keeping other prompts interactive.
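For example, to auto-reject sandbox-escalation prompts while keeping execpolicy-rule prompts and MCP elicitations interactive:

```toml
# Auto-reject sandbox escalations; other approval prompts stay interactive.
approval_policy = { reject = { sandbox_approval = true, rules = false, mcp_elicitations = false } }
```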

### Common sandbox and approval combinations

| Intent | Flags | Effect |
| --- | --- | --- |
| Auto (preset) | *no flags needed* or `--full-auto` | Codex can read files, make edits, and run commands in the workspace; approval is required to edit outside the workspace or to access the network. |
| Safe read-only browsing | `--sandbox read-only --ask-for-approval on-request` | Codex can read files and answer questions; approval is required to make edits, run commands, or access the network. |
| Read-only, non-interactive (CI) | `--sandbox read-only --ask-for-approval never` | Codex can only read files and never asks for approval. |
| Edit automatically, but approve untrusted commands | `--sandbox workspace-write --ask-for-approval untrusted` | Codex can read and edit files but asks for approval before running untrusted commands. |
| Dangerous full access | `--dangerously-bypass-approvals-and-sandbox` (alias: `--yolo`) | [Elevated Risk](https://help.openai.com/articles/20001061) No sandbox and no approvals *(not recommended)*. |

`--full-auto` is a convenience alias for `--sandbox workspace-write --ask-for-approval on-request`.

With `--ask-for-approval untrusted`, Codex runs only known-safe read operations automatically. Commands that can mutate state or trigger external execution paths (for example, destructive Git operations or Git output/config-override flags) require approval.

#### Configuration in `config.toml`

For the broader configuration workflow, see [Config basics](https://developers.openai.com/codex/config-basic), [Advanced Config](https://developers.openai.com/codex/config-advanced#approval-policies-and-sandbox-modes), and the [Configuration Reference](https://developers.openai.com/codex/config-reference).

```toml
# Always ask for approval mode
approval_policy = "untrusted"
sandbox_mode = "read-only"
allow_login_shell = false # optional hardening: disallow login shells for shell-based tools

# Optional: allow network in workspace-write mode
[sandbox_workspace_write]
network_access = true

# Optional: granular approval prompt auto-rejection
# approval_policy = { reject = { sandbox_approval = true, rules = false, mcp_elicitations = false } }
```

You can also save presets as profiles, then select them with `codex --profile <name>`:

```toml
[profiles.full_auto]
approval_policy = "on-request"
sandbox_mode = "workspace-write"

[profiles.readonly_quiet]
approval_policy = "never"
sandbox_mode = "read-only"
```
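Profiles can capture any of the combinations from the table above. For example, a hypothetical profile (the name is illustrative) for "edit automatically, but approve untrusted commands":

```toml
[profiles.untrusted_commands]
approval_policy = "untrusted"
sandbox_mode = "workspace-write"
```

Select it with `codex --profile untrusted_commands`.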

### Test the sandbox locally

To see what happens when a command runs under the Codex sandbox, use these Codex CLI commands:

```bash
# macOS
codex sandbox macos [--full-auto] [--log-denials] [COMMAND]...
# Linux
codex sandbox linux [--full-auto] [COMMAND]...
```

The `sandbox` command is also available as `codex debug`, and the platform helpers have aliases (for example, `codex sandbox seatbelt` and `codex sandbox landlock`).

## OS-level sandbox

Codex enforces the sandbox differently depending on your OS:

- **macOS** uses Seatbelt policies and runs commands with `sandbox-exec`, passing a profile (`-p`) that corresponds to the `--sandbox` mode you selected. When restricted read access enables platform defaults, Codex appends a curated macOS platform policy (instead of broadly allowing `/System`) to preserve common tool compatibility.
- **Linux** uses `Landlock` plus `seccomp` by default. You can opt into the alternative Linux sandbox pipeline with `features.use_linux_sandbox_bwrap = true` (or `-c use_linux_sandbox_bwrap=true`). In managed proxy mode, the bwrap pipeline routes egress through a proxy-only bridge and fails closed if it cannot build valid loopback proxy routes; Landlock-only flows do not use that bridge behavior.
- **Windows** uses the Linux sandbox implementation when running in [Windows Subsystem for Linux (WSL)](https://developers.openai.com/codex/windows#windows-subsystem-for-linux). When running natively on Windows, Codex uses a [Windows sandbox](https://developers.openai.com/codex/windows#windows-sandbox) implementation.
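The Linux bwrap opt-in can also be made persistent in `config.toml` (the `-c` form sets the same key for a single invocation):

```toml
[features]
use_linux_sandbox_bwrap = true
```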

If you use the Codex IDE extension on Windows, it supports WSL directly. Set the following in your VS Code settings to keep the agent inside WSL whenever it's available:

```json
{
  "chatgpt.runCodexInWindowsSubsystemForLinux": true
}
```

This ensures the IDE extension inherits Linux sandbox semantics for commands, approvals, and filesystem access even when the host OS is Windows. Learn more in the [Windows setup guide](https://developers.openai.com/codex/windows).

When running natively on Windows, configure the native sandbox mode in `config.toml`:

```toml
[windows]
sandbox = "unelevated" # or "elevated"
```

See the [Windows setup guide](https://developers.openai.com/codex/windows#windows-sandbox) for details.

When you run Linux in a containerized environment such as Docker, the sandbox may not work if the host or container configuration doesn't support the required `Landlock` and `seccomp` features.

In that case, configure your Docker container to provide the isolation you need, then run `codex` with `--sandbox danger-full-access` (or the `--dangerously-bypass-approvals-and-sandbox` flag) inside the container.

## Version control

Codex works best with a version control workflow:

- Work on a feature branch and keep `git status` clean before delegating. This keeps Codex patches easier to isolate and revert.
- Prefer patch-based workflows (for example, `git diff`/`git apply`) over editing tracked files directly. Commit frequently so you can roll back in small increments.
- Treat Codex suggestions like any other PR: run targeted verification, review diffs, and document decisions in commit messages for auditing.

## Monitoring and telemetry

Codex supports opt-in monitoring via OpenTelemetry (OTel) to help teams audit usage, investigate issues, and meet compliance requirements without weakening local security defaults. Telemetry is off by default; enable it explicitly in your configuration.

### Overview

- Codex turns off OTel export by default to keep local runs self-contained.
- When enabled, Codex emits structured log events covering conversations, API requests, SSE/WebSocket stream activity, user prompts (redacted by default), tool approval decisions, and tool results.
- Codex tags exported events with `service.name` (originator), CLI version, and an environment label to separate dev/staging/prod traffic.

### Enable OTel (opt-in)

Add an `[otel]` block to your Codex configuration (typically `~/.codex/config.toml`), choosing an exporter and whether to log prompt text:

```toml
[otel]
environment = "staging"  # dev | staging | prod
exporter = "none"        # none | otlp-http | otlp-grpc
log_user_prompt = false  # redact prompt text unless policy allows
```

- `exporter = "none"` leaves instrumentation active but doesn't send data anywhere.
- To send events to your own collector, pick one of:

```toml
[otel]
exporter = { otlp-http = {
  endpoint = "https://otel.example.com/v1/logs",
  protocol = "binary",
  headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}
```

```toml
[otel]
exporter = { otlp-grpc = {
  endpoint = "https://otel.example.com:4317",
  headers = { "x-otlp-meta" = "abc123" }
}}
```

Codex batches events and flushes them on shutdown. Codex exports only telemetry produced by its OTel module.

### Event categories

Representative event types include:

- `codex.conversation_starts` (model, reasoning settings, sandbox/approval policy)
- `codex.api_request` (attempt, status/success, duration, and error details)
- `codex.sse_event` (stream event kind, success/failure, duration, plus token counts on `response.completed`)
- `codex.websocket_request` and `codex.websocket_event` (request duration plus per-message kind/success/error)
- `codex.user_prompt` (length; content redacted unless explicitly enabled)
- `codex.tool_decision` (approved/denied, source: configuration vs. user)
- `codex.tool_result` (duration, success, output snippet)

Associated OTel metrics (counter plus duration-histogram pairs) include `codex.api_request`, `codex.sse_event`, `codex.websocket.request`, `codex.websocket.event`, and `codex.tool.call` (with corresponding `.duration_ms` instruments).

For the full event catalog and configuration reference, see the [Codex configuration documentation on GitHub](https://github.com/openai/codex/blob/main/docs/config.md#otel).

### Security and privacy guidance

- Keep `log_user_prompt = false` unless policy explicitly permits storing prompt contents. Prompts can include source code and sensitive data.
- Route telemetry only to collectors you control; apply retention limits and access controls aligned with your compliance requirements.
- Treat tool arguments and outputs as sensitive. Favor redaction at the collector or SIEM when possible.
- Review local data retention settings (for example, `history.persistence` / `history.max_bytes`) if you don't want Codex to save session transcripts under `CODEX_HOME`. See [Advanced Config](https://developers.openai.com/codex/config-advanced#history-persistence) and the [Configuration Reference](https://developers.openai.com/codex/config-reference).
- If you run the CLI with network access turned off, OTel export can't reach your collector. To export, allow network access in `workspace-write` mode for the OTel endpoint, or export from Codex cloud with the collector domain on your approved list.
- Review events periodically for approval/sandbox changes and unexpected tool executions.
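One way to satisfy the networking requirement in a local CLI setup is to pair the OTel exporter with workspace network access. A sketch (the endpoint is the placeholder used earlier on this page); note that `network_access = true` opens the network to spawned commands generally, so weigh the prompt-injection caution above:

```toml
[otel]
environment = "prod"
exporter = { otlp-http = { endpoint = "https://otel.example.com/v1/logs", protocol = "binary" } }

[sandbox_workspace_write]
network_access = true
```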

OTel is optional and designed to complement, not replace, the sandbox and approval protections described above.

## Managed configuration

Enterprise admins can configure Codex security settings for their workspace in [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration). See that page for setup and policy details.

ambassadors.md +2 −0

[Apply Today](https://openai.com/form/codex-ambassadors)

[Upcoming Meetups](https://developers.openai.com/codex/community/meetups)

![Codex Ambassadors leading a community workshop](/images/codex/ambassadors/ambassadors-18.jpg) ![Builders collaborating during a Codex Ambassador event](/images/codex/ambassadors/ambassadors-25.jpg)

Ambassadors run hands-on meetups, workshops, and community sessions

app.md +7 −11

# Codex app

The Codex app is a focused desktop experience for working on Codex threads in parallel, with built-in worktree support, automations, and Git functionality.

ChatGPT Plus, Pro, Business, Edu, and Enterprise plans include Codex. Learn more about [what's included](https://developers.openai.com/codex/pricing).

![Codex app for Windows showing a project sidebar, active thread, and review pane](/images/codex/windows/codex-windows-light.webp)

![Codex app window with a project sidebar, active thread, and review pane](/images/codex/app/app-screenshot-light.webp)

## Getting started

1. Download and install the Codex app

   Download the Codex app for Windows or macOS.

   [Download for macOS](https://persistent.oaistatic.com/codex-app-prod/Codex.dmg)

   [Get notified for Linux](https://openai.com/form/codex-app/)

2. Open Codex and sign in

   Once you have downloaded and installed the Codex app, open it and sign in with your ChatGPT account or an OpenAI API key.

   You can ask Codex anything about the project or your computer in general. Here are some examples:

   - Tell me about this project
   - Build a classic Snake game in this repo.
   - Find and fix bugs in my codebase with minimal, high-confidence changes.

   If you need more inspiration, check out the [explore section](https://developers.openai.com/codex/explore).

---

Need help? Visit the [troubleshooting guide](https://developers.openai.com/codex/app/troubleshooting).

app-server.md +282 −29

# Codex App Server

Codex app-server is the interface Codex uses to power rich clients (for example, the Codex VS Code extension). Use it when you want a deep integration inside your own product: authentication, conversation history, approvals, and streamed agent events. The app-server implementation is open source in the Codex GitHub repository ([openai/codex/codex-rs/app-server](https://github.com/openai/codex/tree/main/codex-rs/app-server)). See the [Open Source](https://developers.openai.com/codex/open-source) page for the full list of open-source Codex components.

If you are automating jobs or running Codex in CI, use the

- **Start (or resume) a thread**: Call `thread/start` for a new conversation, `thread/resume` to continue an existing one, or `thread/fork` to branch history into a new thread id.
- **Begin a turn**: Call `turn/start` with the target `threadId` and user input. Optional fields override model, personality, `cwd`, sandbox policy, and more.
- **Steer an active turn**: Call `turn/steer` to append user input to the currently in-flight turn without creating a new turn.
- **Stream events**: After `turn/start`, keep reading notifications on stdout: `thread/archived`, `thread/unarchived`, `item/started`, `item/completed`, `item/agentMessage/delta`, tool progress, and other updates.
- **Finish the turn**: The server emits `turn/completed` with final status when the model finishes or after a `turn/interrupt` cancellation.

## Initialization

- `thread/start` - create a new thread; emits `thread/started` and automatically subscribes you to turn/item events for that thread.
- `thread/resume` - reopen an existing thread by id so later `turn/start` calls append to it.
- `thread/fork` - fork a thread into a new thread id by copying stored history; emits `thread/started` for the new thread.
- `thread/read` - read a stored thread by id without resuming it; set `includeTurns` to return full turn history. Returned `thread` objects include runtime `status`.
- `thread/list` - page through stored thread logs; supports cursor-based pagination plus `modelProviders`, `sourceKinds`, `archived`, and `cwd` filters. Returned `thread` objects include runtime `status`.
- `thread/loaded/list` - list the thread ids currently loaded in memory.
- `thread/archive` - move a thread's log file into the archived directory; returns `{}` on success and emits `thread/archived`.
- `thread/unsubscribe` - unsubscribe this connection from thread turn/item events. If this was the last subscriber, the server unloads the thread and emits `thread/closed`.
- `thread/unarchive` - restore an archived thread rollout back into the active sessions directory; returns the restored `thread` and emits `thread/unarchived`.
- `thread/status/changed` - notification emitted when a loaded thread's runtime `status` changes.
- `thread/compact/start` - trigger conversation history compaction for a thread; returns `{}` immediately while progress streams via `turn/*` and `item/*` notifications.
- `thread/rollback` - drop the last N turns from the in-memory context and persist a rollback marker; returns the updated `thread`.
- `turn/start` - add user input to a thread and begin Codex generation; responds with the initial `turn` and streams events. For `collaborationMode`, `settings.developer_instructions: null` means "use built-in instructions for the selected mode."

- `tool/requestUserInput` - prompt the user with 1-3 short questions for a tool call (experimental); questions can set `isOther` for a free-form option.
- `config/mcpServer/reload` - reload MCP server configuration from disk and queue a refresh for loaded threads.
- `mcpServerStatus/list` - list MCP servers, tools, resources, and auth status (cursor + limit pagination).
- `windowsSandbox/setupStart` - start Windows sandbox setup for `elevated` or `unelevated` mode; returns quickly and later emits `windowsSandbox/setupCompleted`.
- `feedback/upload` - submit a feedback report (classification + optional reason/logs + conversation id, plus optional `extraLogFiles` attachments).
- `config/read` - fetch the effective configuration on disk after resolving configuration layering.
- `externalAgentConfig/detect` - detect migratable external-agent artifacts with `includeHome` and optional `cwds`; each detected item includes `cwd` (`null` for home).
- `externalAgentConfig/import` - apply selected external-agent migration items by passing explicit `migrationItems` with `cwd` (`null` for home).
- `config/value/write` - write a single configuration key/value to the user's `config.toml` on disk.
- `config/batchWrite` - apply configuration edits atomically to the user's `config.toml` on disk.

232- `configRequirements/read` - fetch requirements from `requirements.toml` and/or MDM, including allow-lists and residency requirements (or `null` if you havent set any up).235- `configRequirements/read` - fetch requirements from `requirements.toml` and/or MDM, including allow-lists, pinned `featureRequirements`, and residency/network requirements (or `null` if you haven't set any up).

## Models


```json
{ "method": "model/list", "id": 6, "params": { "limit": 20, "includeHidden": false } }
{ "id": 6, "result": {
  "data": [{
    "id": "gpt-5.4",
    "model": "gpt-5.4",
    "displayName": "GPT-5.4",
    "hidden": false,
    "defaultReasoningEffort": "medium",
    "supportedReasoningEfforts": [{
      "reasoningEffort": "low",
      "description": "Lower latency"
    }],
    "inputModalities": ["text", "image"],
```


Each model entry can include:

- `supportedReasoningEfforts` - supported effort options for the model.
- `defaultReasoningEffort` - suggested default effort for clients.
- `upgrade` - optional recommended upgrade model id for migration prompts in clients.
- `upgradeInfo` - optional upgrade metadata for migration prompts in clients.
- `hidden` - whether the model is hidden from the default picker list.
- `inputModalities` - supported input types for the model (for example `text`, `image`).
- `supportsPersonality` - whether the model supports personality-specific instructions such as `/personality`.
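As a sketch of how a client might consume a `model/list` response, the helper below picks the first model not marked `hidden` together with its suggested default effort. `pick_default_model` and the sample data are illustrative assumptions, not part of the app-server API.

```python
def pick_default_model(models):
    """Return (id, defaultReasoningEffort) for the first non-hidden model entry."""
    for entry in models:
        if not entry.get("hidden", False):
            return entry["id"], entry.get("defaultReasoningEffort")
    return None, None

# Sample data shaped like the documented model-entry fields.
models = [
    {"id": "internal-preview", "hidden": True},
    {"id": "gpt-5.4", "hidden": False, "defaultReasoningEffort": "medium"},
]
choice = pick_default_model(models)  # -> ('gpt-5.4', 'medium')
```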


- `thread/list` supports cursor pagination plus `modelProviders`, `sourceKinds`, `archived`, and `cwd` filtering.
- `thread/loaded/list` returns the thread IDs currently in memory.
- `thread/archive` moves the thread's persisted JSONL log into the archived directory.
- `thread/unsubscribe` unsubscribes the current connection from a loaded thread and can trigger `thread/closed`.
- `thread/unarchive` restores an archived thread rollout back into the active sessions directory.
- `thread/compact/start` triggers compaction and returns `{}` immediately.
- `thread/rollback` drops the last N turns from the in-memory context and records a rollback marker in the thread's persisted JSONL log.


```json
  "cwd": "/Users/me/project",
  "approvalPolicy": "never",
  "sandbox": "workspaceWrite",
  "personality": "friendly",
  "serviceName": "my_app_server_client"
} }
{ "id": 10, "result": {
  "thread": {
    "id": "thr_123",
    "preview": "",
    "ephemeral": false,
    "modelProvider": "openai",
    "createdAt": 1730910000
  }
} }
{ "method": "thread/started", "params": { "thread": { "id": "thr_123" } } }
```

`serviceName` is optional. Set it when you want app-server to tag thread-level metrics with your integration's service name.

To continue a stored session, call `thread/resume` with the `thread.id` you recorded earlier. The response shape matches `thread/start`. You can also pass the same configuration overrides supported by `thread/start`, such as `personality`:

```json
{ "method": "thread/resume", "id": 11, "params": {
  "threadId": "thr_123",
  "personality": "friendly"
} }
{ "id": 11, "result": { "thread": { "id": "thr_123", "name": "Bug bash notes", "ephemeral": false } } }
```

Resuming a thread doesn't update `thread.updatedAt` (or the rollout file's modified time) by itself. The timestamp updates when you start a turn.


```json
{ "method": "thread/started", "params": { "thread": { "id": "thr_456" } } }
```

When a user-facing thread title has been set, app-server hydrates `thread.name` on `thread/list`, `thread/read`, `thread/resume`, `thread/unarchive`, and `thread/rollback` responses. `thread/start` and `thread/fork` may omit `name` (or return `null`) until a title is set later.

### Read a stored thread (without resuming)

Use `thread/read` when you want stored thread data but don't want to resume the thread or subscribe to its events.

- `includeTurns` - when `true`, the response includes the thread's turns; when `false` or omitted, you get the thread summary only.
- Returned `thread` objects include runtime `status` (`notLoaded`, `idle`, `systemError`, or `active` with `activeFlags`).

```json
{ "method": "thread/read", "id": 19, "params": { "threadId": "thr_123", "includeTurns": true } }
{ "id": 19, "result": { "thread": { "id": "thr_123", "name": "Bug bash notes", "ephemeral": false, "status": { "type": "notLoaded" }, "turns": [] } } }
```

Unlike `thread/resume`, `thread/read` doesn't load the thread into memory or emit `thread/started`.


```json
} }
{ "id": 20, "result": {
  "data": [
    { "id": "thr_a", "preview": "Create a TUI", "ephemeral": false, "modelProvider": "openai", "createdAt": 1730831111, "updatedAt": 1730831111, "name": "TUI prototype", "status": { "type": "notLoaded" } },
    { "id": "thr_b", "preview": "Fix tests", "ephemeral": true, "modelProvider": "openai", "createdAt": 1730750000, "updatedAt": 1730750000, "status": { "type": "notLoaded" } }
  ],
  "nextCursor": "opaque-token-or-null"
} }
```

When `nextCursor` is `null`, you have reached the final page.
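A client can drain every page by following `nextCursor` until it comes back `null`. This sketch assumes a `send_request(method, params)` transport helper and a `cursor` request parameter; both are illustrative stand-ins, not documented names.

```python
def list_all_threads(send_request, page_size=25):
    """Collect every thread summary by following nextCursor to the last page."""
    threads, cursor = [], None
    while True:
        params = {"limit": page_size}
        if cursor is not None:
            params["cursor"] = cursor  # assumed request-parameter name
        result = send_request("thread/list", params)
        threads.extend(result["data"])
        cursor = result.get("nextCursor")
        if cursor is None:          # null nextCursor -> final page reached
            return threads

# Fake two-page transport, for illustration only.
pages = iter([
    {"data": [{"id": "thr_a"}], "nextCursor": "page-2"},
    {"data": [{"id": "thr_b"}], "nextCursor": None},
])
def fake_send(method, params):
    return next(pages)

all_threads = list_all_threads(fake_send)
```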

### Track thread status changes

`thread/status/changed` is emitted whenever a loaded thread's runtime status changes. The payload includes `threadId` and the new `status`.

```json
{
  "method": "thread/status/changed",
  "params": {
    "threadId": "thr_123",
    "status": { "type": "active", "activeFlags": ["waitingOnApproval"] }
  }
}
```

### List loaded threads

`thread/loaded/list` returns thread IDs currently loaded in memory.

```json
{ "id": 21, "result": { "data": ["thr_123", "thr_456"] } }
```

### Unsubscribe from a loaded thread

`thread/unsubscribe` removes the current connection's subscription to a thread. The response status is one of:

- `unsubscribed` when the connection was subscribed and is now removed.
- `notSubscribed` when the connection was not subscribed to that thread.
- `notLoaded` when the thread is not loaded.

If this was the last subscriber, the server unloads the thread and emits a `thread/status/changed` transition to `notLoaded` plus `thread/closed`.

```json
{ "method": "thread/unsubscribe", "id": 22, "params": { "threadId": "thr_123" } }
{ "id": 22, "result": { "status": "unsubscribed" } }
{ "method": "thread/status/changed", "params": {
  "threadId": "thr_123",
  "status": { "type": "notLoaded" }
} }
{ "method": "thread/closed", "params": { "threadId": "thr_123" } }
```

### Archive a thread

Use `thread/archive` to move the persisted thread log (stored as a JSONL file on disk) into the archived sessions directory.

```json
{ "method": "thread/archive", "id": 22, "params": { "threadId": "thr_b" } }
{ "id": 22, "result": {} }
{ "method": "thread/archived", "params": { "threadId": "thr_b" } }
```

Archived threads won't appear in future calls to `thread/list` unless you pass `archived: true`.


```json
{ "method": "thread/unarchive", "id": 24, "params": { "threadId": "thr_b" } }
{ "id": 24, "result": { "thread": { "id": "thr_b", "name": "Bug bash notes" } } }
{ "method": "thread/unarchived", "params": { "threadId": "thr_b" } }
```

### Trigger thread compaction


```json
{ "id": 25, "result": {} }
```

### Roll back recent turns

Use `thread/rollback` to remove the last `numTurns` entries from the in-memory context and persist a rollback marker in the rollout log. The returned `thread` includes `turns` populated after the rollback.

```json
{ "method": "thread/rollback", "id": 26, "params": { "threadId": "thr_b", "numTurns": 1 } }
{ "id": 26, "result": { "thread": { "id": "thr_b", "name": "Bug bash notes", "ephemeral": false } } }
```

## Turns

The `input` field accepts a list of items:


On macOS, `includePlatformDefaults: true` appends a curated platform-default Seatbelt policy for restricted-read sessions. This improves tool compatibility without broadly allowing all of `/System`.

Examples:


- `sandboxPolicy` accepts the same shape used by `turn/start` (for example, `dangerFullAccess`, `readOnly`, `workspaceWrite`, `externalSandbox`).
- When omitted, `timeoutMs` falls back to the server default.

### Read admin requirements (`configRequirements/read`)

Use `configRequirements/read` to inspect the effective admin requirements loaded from `requirements.toml` and/or MDM.

```json
{ "method": "configRequirements/read", "id": 52, "params": {} }
{ "id": 52, "result": {
  "requirements": {
    "allowedApprovalPolicies": ["onRequest", "unlessTrusted"],
    "allowedSandboxModes": ["readOnly", "workspaceWrite"],
    "featureRequirements": {
      "personality": true,
      "unified_exec": false
    },
    "network": {
      "enabled": true,
      "allowedDomains": ["api.openai.com"],
      "allowUnixSockets": ["/tmp/example.sock"],
      "dangerouslyAllowAllUnixSockets": false
    }
  }
} }
```

`result.requirements` is `null` when no requirements are configured. See the docs on [`requirements.toml`](https://developers.openai.com/codex/config-reference#requirementstoml) for details on supported keys and values.
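Before sending `thread/start` overrides, a client might pre-check them against the fetched requirements so the user gets an early error instead of a server rejection. The `check_allowed` helper below is a hypothetical client-side convenience built on the documented field names.

```python
def check_allowed(requirements, approval_policy, sandbox_mode):
    """Return True if the requested settings satisfy the admin requirements."""
    if requirements is None:                 # no requirements configured
        return True
    policies = requirements.get("allowedApprovalPolicies")
    sandboxes = requirements.get("allowedSandboxModes")
    return ((policies is None or approval_policy in policies) and
            (sandboxes is None or sandbox_mode in sandboxes))

reqs = {
    "allowedApprovalPolicies": ["onRequest", "unlessTrusted"],
    "allowedSandboxModes": ["readOnly", "workspaceWrite"],
}
ok = check_allowed(reqs, "onRequest", "workspaceWrite")       # allowed
blocked = check_allowed(reqs, "never", "dangerFullAccess")    # rejected
```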

### Windows sandbox setup (`windowsSandbox/setupStart`)

Custom Windows clients can trigger sandbox setup asynchronously instead of blocking on startup checks.

```json
{ "method": "windowsSandbox/setupStart", "id": 53, "params": { "mode": "elevated" } }
{ "id": 53, "result": { "started": true } }
```

App-server starts setup in the background and later emits a completion notification:

```json
{
  "method": "windowsSandbox/setupCompleted",
  "params": { "mode": "elevated", "success": true, "error": null }
}
```

Modes:

- `elevated` - run the elevated Windows sandbox setup path.
- `unelevated` - run the legacy setup/preflight path.

## Events

Event notifications are the server-initiated stream for thread lifecycles, turn lifecycles, and the items within them. After you start or resume a thread, keep reading the active transport stream for `thread/started`, `thread/archived`, `thread/unarchived`, `thread/closed`, `thread/status/changed`, `turn/*`, `item/*`, and `serverRequest/resolved` notifications.
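Because notifications carry no `id`, a client typically routes them by method prefix. The dispatcher below is a sketch of one such client-side convention; the handler registry and its keys are assumptions, not part of the protocol.

```python
def dispatch(notification, handlers):
    """Route a server notification to the first handler matching its method prefix."""
    method = notification["method"]
    for prefix, handler in handlers.items():
        if method == prefix or method.startswith(prefix + "/"):
            return handler(notification.get("params", {}))
    return None  # unknown notifications are safe to ignore

seen = []
handlers = {
    "thread": lambda p: seen.append(("thread", p.get("threadId"))),
    "item": lambda p: seen.append(("item", p.get("itemId"))),
}
dispatch({"method": "thread/status/changed",
          "params": {"threadId": "thr_123", "status": {"type": "idle"}}}, handlers)
```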

### Notification opt-out



- `fuzzyFileSearch/sessionUpdated` - `{ sessionId, query, files }` with the current matches for the active query.
- `fuzzyFileSearch/sessionCompleted` - `{ sessionId }` once indexing and matching for that query completes.

### Windows sandbox setup events

- `windowsSandbox/setupCompleted` - `{ mode, success, error }` emitted after a `windowsSandbox/setupStart` request finishes.

789 

679### Turn events790### Turn events

680 791 

681- `turn/started` - `{ turn }` with the turn id, empty `items`, and `status: "inProgress"`.792- `turn/started` - `{ turn }` with the turn id, empty `items`, and `status: "inProgress"`.


`ThreadItem` is the tagged union carried in turn responses and `item/*` notifications. Common item types include:

- `userMessage` - `{id, content}` where `content` is a list of user inputs (`text`, `image`, or `localImage`).
- `agentMessage` - `{id, text, phase?}` containing the accumulated agent reply. When present, `phase` uses Responses API wire values (`commentary`, `final_answer`).
- `plan` - `{id, text}` containing proposed plan text in plan mode. Treat the final `plan` item from `item/completed` as authoritative.
- `reasoning` - `{id, summary, content}` where `summary` holds streamed reasoning summaries and `content` holds raw reasoning blocks.
- `commandExecution` - `{id, command, cwd, status, commandActions, aggregatedOutput?, exitCode?, durationMs?}`.
- `fileChange` - `{id, changes, status}` describing proposed edits; `changes` list `{path, kind, diff}`.
- `mcpToolCall` - `{id, server, tool, status, arguments, result?, error?}`.
- `dynamicToolCall` - `{id, tool, arguments, status, contentItems?, success?, durationMs?}` for client-executed dynamic tool invocations.
- `collabToolCall` - `{id, tool, status, senderThreadId, receiverThreadId?, newThreadId?, prompt?, agentStatus?}`.
- `webSearch` - `{id, query, action?}` for web search requests issued by the agent.
- `imageView` - `{id, path}` emitted when the agent invokes the image viewer tool.


Order of messages:

1. `item/started` shows the pending `commandExecution` item with `command`, `cwd`, and other fields.
2. `item/commandExecution/requestApproval` includes `itemId`, `threadId`, `turnId`, optional `reason`, optional `command`, optional `cwd`, optional `commandActions`, optional `proposedExecpolicyAmendment`, optional `networkApprovalContext`, and optional `availableDecisions`. When `initialize.params.capabilities.experimentalApi = true`, the payload can also include experimental `additionalPermissions` describing requested per-command sandbox access. Any filesystem paths inside `additionalPermissions` are absolute on the wire.
3. Client responds with one of the command execution approval decisions above.
4. `serverRequest/resolved` confirms that the pending request has been answered or cleared.
5. `item/completed` returns the final `commandExecution` item with `status: completed | failed | declined`.

When `networkApprovalContext` is present, the prompt is for managed network access (not a general shell-command approval). The current v2 schema exposes the target `host` and `protocol`; clients should render a network-specific prompt and not rely on `command` being a user-meaningful shell command preview.

Codex groups concurrent network approval prompts by destination (`host`, protocol, and port). The app-server may therefore send one prompt that unblocks multiple queued requests to the same destination, while different ports on the same host are treated separately.
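A client-side approval handler can branch on `networkApprovalContext` to render the right kind of prompt. In this sketch, the decision strings and the auto-approve host list are illustrative assumptions; consult the approval-decision enum your client negotiated (or `availableDecisions` when present) for the real values.

```python
def choose_decision(params, auto_approve_hosts=()):
    """Pick an approval decision for item/commandExecution/requestApproval."""
    net = params.get("networkApprovalContext")
    if net is not None:
        # Network prompt: decide on host/protocol, not on `command`.
        return "approve" if net.get("host") in auto_approve_hosts else "decline"
    # Shell-command prompt: default-deny in this sketch; real clients ask the user.
    return "decline"

request = {
    "itemId": "item_1", "threadId": "thr_123", "turnId": "turn_9",
    "networkApprovalContext": {"host": "api.openai.com", "protocol": "https"},
}
decision = choose_decision(request, auto_approve_hosts={"api.openai.com"})
```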

### File change approvals

1. `item/started` emits a `fileChange` item with proposed `changes` and `status: "inProgress"`.
2. `item/fileChange/requestApproval` includes `itemId`, `threadId`, `turnId`, optional `reason`, and optional `grantRoot`.
3. Client responds with one of the file change approval decisions above.
4. `serverRequest/resolved` confirms that the pending request has been answered or cleared.
5. `item/completed` returns the final `fileChange` item with `status: completed | failed | declined`.

### `tool/requestUserInput`

When the client responds to `item/tool/requestUserInput`, app-server emits `serverRequest/resolved` with `{ threadId, requestId }`. If the pending request is cleared by turn start, turn completion, or turn interruption before the client answers, the server emits the same notification for that cleanup.

### Dynamic tool calls (experimental)

`dynamicTools` on `thread/start` and the corresponding `item/tool/call` request or response flow are experimental APIs.

When a dynamic tool is invoked during a turn, app-server emits:

1. `item/started` with `item.type = "dynamicToolCall"`, `status = "inProgress"`, plus `tool` and `arguments`.
2. `item/tool/call` as a server request to the client.
3. The client response payload with returned content items.
4. `item/completed` with `item.type = "dynamicToolCall"`, the final `status`, and any returned `contentItems` or `success` value.
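A client answering the `item/tool/call` server request might look like the sketch below. The tool registry and the exact response shape (`success`, `contentItems`) are assumptions modeled on the `dynamicToolCall` item fields, not the canonical wire schema.

```python
def handle_tool_call(params, tools):
    """Execute a client-side dynamic tool and build an item/tool/call response."""
    tool = tools.get(params["tool"])
    if tool is None:
        # Unknown tool: report failure with no content.
        return {"success": False, "contentItems": []}
    output = tool(params.get("arguments", {}))
    return {"success": True,
            "contentItems": [{"type": "text", "text": output}]}

# Hypothetical registry of tools this client agreed to execute.
tools = {"echo": lambda args: args.get("message", "")}
response = handle_tool_call({"tool": "echo", "arguments": {"message": "hi"}}, tools)
```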

### MCP tool-call approvals (apps)

App (connector) tool calls can also require approval. When an app tool call has side effects, the server may elicit approval with `tool/requestUserInput` and options such as **Accept**, **Decline**, and **Cancel**. Destructive tool annotations always trigger approval even when the tool also advertises less-privileged hints. If the user declines or cancels, the related `mcpToolCall` item completes with an error instead of running the tool.

## Skills


## Apps (connectors)

Use `app/list` to fetch available apps. In the CLI/TUI, `/apps` is the user-facing picker; in custom clients, call `app/list` directly. Each entry includes both `isAccessible` (available to the user) and `isEnabled` (enabled in `config.toml`) so clients can distinguish install/access from local enabled state. App entries can also include optional `branding`, `appMetadata`, and `labels` fields.

```json
{ "method": "app/list", "id": 50, "params": {


      "name": "Demo App",
      "description": "Example connector for documentation.",
      "logoUrl": "https://example.com/demo-app.png",
      "logoUrlDark": null,
      "distributionChannel": null,
      "branding": null,
      "appMetadata": null,
      "labels": null,
      "installUrl": "https://chatgpt.com/apps/demo-app/demo-app",
      "isAccessible": true,
      "isEnabled": true


      "name": "Demo App",
      "description": "Example connector for documentation.",
      "logoUrl": "https://example.com/demo-app.png",
      "logoUrlDark": null,
      "distributionChannel": null,
      "branding": null,
      "appMetadata": null,
      "labels": null,
      "installUrl": "https://chatgpt.com/apps/demo-app/demo-app",
      "isAccessible": true,
      "isEnabled": true


}
```

### Config RPC examples for app settings

Use `config/read`, `config/value/write`, and `config/batchWrite` to inspect or update app controls in `config.toml`.

Read the effective app config shape (including `_default` and per-tool overrides):

```json
{ "method": "config/read", "id": 60, "params": { "includeLayers": false } }
{ "id": 60, "result": {
  "config": {
    "apps": {
      "_default": {
        "enabled": true,
        "destructive_enabled": true,
        "open_world_enabled": true
      },
      "google_drive": {
        "enabled": true,
        "destructive_enabled": false,
        "default_tools_approval_mode": "prompt",
        "tools": {
          "files/delete": { "enabled": false, "approval_mode": "approve" }
        }
      }
    }
  }
} }
```

Update a single app setting:

```json
{
  "method": "config/value/write",
  "id": 61,
  "params": {
    "keyPath": "apps.google_drive.default_tools_approval_mode",
    "value": "prompt",
    "mergeStrategy": "replace"
  }
}
```

Apply multiple app edits atomically:

```json
{
  "method": "config/batchWrite",
  "id": 62,
  "params": {
    "edits": [
      {
        "keyPath": "apps._default.destructive_enabled",
        "value": false,
        "mergeStrategy": "upsert"
      },
      {
        "keyPath": "apps.google_drive.tools.files/delete.approval_mode",
        "value": "approve",
        "mergeStrategy": "upsert"
      }
    ]
  }
}
```

### Detect and import external agent config

Use `externalAgentConfig/detect` to discover migratable external-agent artifacts, then pass the selected entries to `externalAgentConfig/import`.

Detection example:

```json
{ "method": "externalAgentConfig/detect", "id": 63, "params": {
  "includeHome": true,
  "cwds": ["/Users/me/project"]
} }
{ "id": 63, "result": {
  "items": [
    {
      "itemType": "AGENTS_MD",
      "description": "Import /Users/me/project/CLAUDE.md to /Users/me/project/AGENTS.md.",
      "cwd": "/Users/me/project"
    },
    {
      "itemType": "SKILLS",
      "description": "Copy skill folders from /Users/me/.claude/skills to /Users/me/.agents/skills.",
      "cwd": null
    }
  ]
} }
```

Import example:

```json
{ "method": "externalAgentConfig/import", "id": 64, "params": {
  "migrationItems": [
    {
      "itemType": "AGENTS_MD",
      "description": "Import /Users/me/project/CLAUDE.md to /Users/me/project/AGENTS.md.",
      "cwd": "/Users/me/project"
    }
  ]
} }
{ "id": 64, "result": {} }
```

Supported `itemType` values are `AGENTS_MD`, `CONFIG`, `SKILLS`, and `MCP_SERVER_CONFIG`. Detection returns only items that still have work to do. For example, AGENTS migration is skipped when `AGENTS.md` already exists and is non-empty, and skill imports do not overwrite existing skill directories.
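The detect-then-import handshake can be chained client-side by forwarding only the items the user selected. As above, `send_request` is a stand-in for your JSON-RPC transport; the filtering helper is an illustrative convention.

```python
def import_detected(send_request, item_types, include_home=True, cwds=()):
    """Detect migratable artifacts and import the ones whose itemType was selected."""
    detected = send_request("externalAgentConfig/detect",
                            {"includeHome": include_home, "cwds": list(cwds)})
    selected = [item for item in detected["items"]
                if item["itemType"] in item_types]
    if selected:
        # Pass the detected entries back verbatim as explicit migrationItems.
        send_request("externalAgentConfig/import", {"migrationItems": selected})
    return selected

# Fake transport recording calls, for illustration only.
calls = []
def fake_send(method, params):
    calls.append(method)
    if method == "externalAgentConfig/detect":
        return {"items": [
            {"itemType": "AGENTS_MD", "cwd": "/Users/me/project",
             "description": "Import CLAUDE.md"},
            {"itemType": "SKILLS", "cwd": None,
             "description": "Copy skill folders"},
        ]}
    return {}

picked = import_detected(fake_send, {"AGENTS_MD"})
```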

## Auth endpoints

The JSON-RPC auth/account surface exposes request/response methods plus server-initiated notifications (no `id`). Use these to determine auth state, start or cancel logins, logout, and inspect ChatGPT rate limits.


# Automations

Automate recurring tasks in the background. Codex adds findings to the inbox, or automatically archives the task if there's nothing to report. You can combine automations with [skills](https://developers.openai.com/codex/skills) for more complex tasks.

Automations run locally in the Codex app. The app needs to be running, and the


checkout. In non-version-controlled projects, automations run directly in the
project directory.

![Automation creation form with schedule and prompt fields](/images/codex/app/create-automation-light.webp)

## Managing tasks


If you are in a managed environment, admins can restrict these behaviors using
admin-enforced requirements. For example, they can disallow `approval_policy = "never"` or constrain allowed sandbox modes. See
[Admin-enforced requirements (`requirements.toml`)](https://developers.openai.com/codex/enterprise/managed-configuration#admin-enforced-requirements-requirementstoml).

Automations use `approval_policy = "never"` when your organization policy
allows it. If `approval_policy = "never"` is disallowed by admin requirements,


```markdown
Check my commits from the last 24h and submit a $recent-code-bugfix.
```


# Codex app commands

Use these commands and keyboard shortcuts to navigate the Codex app.

## Keyboard shortcuts


- [Features](https://developers.openai.com/codex/app/features)
- [Settings](https://developers.openai.com/codex/app/settings)


# Codex app features

The Codex app is a focused desktop experience for working on Codex threads in parallel,
with built-in worktree support, automations, and Git functionality.


session in a specific directory.

If you work in a single repository with two or more apps or packages, split
distinct projects into separate app projects so the [sandbox](https://developers.openai.com/codex/agent-approvals-security)
only includes the files for that project.

![Codex app showing multiple projects in the sidebar and threads in the main pane](/images/codex/app/multitask-light.webp)

## Skills support


IDE Extension. You can also view and explore new skills that your team has
created across your different projects by clicking Skills in the sidebar.

![Skills picker showing available skills in the Codex app](/images/codex/app/skill-selector-light.webp)

## Automations


such as evaluating errors in your telemetry and submitting fixes or creating reports on recent
codebase changes.

![Automation creation form with schedule and prompt fields](/images/codex/app/create-automation-light.webp)

## Modes


For the full glossary and concepts, explore the [concepts section](https://developers.openai.com/codex/prompting).

![New thread composer with Local, Worktree, and Cloud mode options](/images/codex/app/modes-light.webp)

## Built-in Git tools


For more advanced Git tasks, use the [integrated terminal](#integrated-terminal).

![Git diff and commit panel with a commit message field](/images/codex/app/git-commit-light.webp)

## Worktree support


[Learn more about using worktrees in the Codex app.](https://developers.openai.com/codex/app/worktrees)

![Worktree thread view showing branch actions and worktree details](/images/codex/app/worktree-light.webp)

## Integrated terminal


Note that <kbd>Cmd</kbd>+<kbd>K</kbd> opens the command palette in the Codex
app. It doesn't clear the terminal. To clear the terminal use <kbd>Ctrl</kbd>+<kbd>L</kbd>.

![Integrated terminal drawer open beneath a Codex thread](/images/codex/app/integrated-terminal-light.webp)

## Native Windows sandbox

On Windows, Codex can run natively in PowerShell with a native Windows sandbox
instead of requiring WSL or a virtual machine. This lets you stay in
Windows-native workflows while keeping bounded permissions in place.

[Learn more about Windows setup and sandboxing](https://developers.openai.com/codex/app/windows).

![Codex app Windows sandbox setup prompt above the message composer](/images/codex/windows/windows-sandbox-setup.webp)

## Voice dictation

Use your voice to prompt Codex. Hold <kbd>Ctrl</kbd>+<kbd>M</kbd> while the composer is visible and start talking. Your voice will be transcribed. Edit the transcribed prompt or hit send to have Codex start work.

![Voice dictation indicator in the composer with a transcribed prompt](/images/codex/app/voice-dictation-light.webp)

## Floating pop-out window


You can also toggle the pop-out window to stay on top when you want it to remain
visible across your workflow.

![Pop-out window preview in light mode](/images/codex/app/popover-light.webp)

---


opening separate projects or using worktrees rather than asking Codex to roam
outside the project root.

For a high-level overview, see [Sandboxing](https://developers.openai.com/codex/concepts/sandboxing). For
configuration details, see the
[agent approvals & security documentation](https://developers.openai.com/codex/agent-approvals-security).

## MCP support


Codex ships with a first-party web search tool. For local tasks in the Codex IDE Extension, Codex
enables web search by default and serves results from a web search cache. If you configure your
sandbox for [full access](https://developers.openai.com/codex/agent-approvals-security), web search defaults to live results. See
[Config basics](https://developers.openai.com/codex/config-basic) to disable web search or switch to live results that fetch the
most recent data.


- [Automations](https://developers.openai.com/codex/app/automations)
- [Local environments](https://developers.openai.com/codex/app/local-environments)
- [Worktrees](https://developers.openai.com/codex/app/worktrees)


# Local environments

Local environments let you configure setup steps for worktrees as well as common actions for a project.

You configure your local environments through the [Codex app settings](codex://settings) pane. You can check the generated file into your project's Git repository to share with others.


Actions save you from repeatedly typing common commands, such as triggering a build for your project or starting a development server. For one-off quick debugging you can use the integrated terminal directly.

![Project actions list shown in Codex app settings](/images/codex/app/actions-light.webp)

For example, for a Node.js project you might create a "Run" action that contains the following script:

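A minimal sketch of what such a script might contain, assuming a standard `package.json` that defines a `dev` script (the commands are illustrative, not a prescribed setup):

```shell
# Install dependencies, then start the development server
npm install
npm run dev
```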


If the commands for your action are platform-specific, define platform-specific scripts for macOS, Windows, and Linux.

To make each action easy to identify, assign it an icon.


# Review

The review pane helps you understand what Codex changed, give targeted feedback, and decide what to keep.

It only works for projects that live inside a Git repository. If your project


If you use `/review` to run a code review, comments will show up directly
inline in the review pane.

![Inline code review comments displayed in the review pane](/images/codex/app/inline-code-review-light.webp)

## Staging and reverting files


Git can represent both staged and unstaged changes in the same file. When that
happens, it can look like the pane is showing “the same file twice” across
staged and unstaged views. That's normal Git behavior.
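You can reproduce the effect with plain Git outside the app; the sketch below (paths and names are throwaway examples) stages one edit to a file and then makes a second, unstaged edit, so the same file shows up in both views:

```shell
# Set up a scratch repository
cd "$(mktemp -d)"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init

echo "one" > file.txt
git add file.txt          # first change: staged
echo "two" >> file.txt    # second change: unstaged

git diff --staged --stat  # file.txt appears here…
git diff --stat           # …and here, in the same working tree
```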


# Codex app settings

Use the settings panel to tune how the Codex app behaves, how it opens files,
and how it connects to tools. Open [**Settings**](codex://settings) from the app menu or
press <kbd>Cmd</kbd>+<kbd>,</kbd>.


Codex agents in the app inherit the same configuration as the IDE and CLI extension.
Use the in-app controls for common settings, or edit `config.toml` for advanced
options. See [Codex security](https://developers.openai.com/codex/agent-approvals-security) and
[config basics](https://developers.openai.com/codex/config-basic) for more detail.

## Git


The **Archived threads** section lists archived chats with dates and project
context. Use **Unarchive** to restore a thread.


# Troubleshooting

## Frequently Asked Questions

### Files appear in the side panel that Codex didn't edit


### Only some threads appear in the sidebar

The sidebar allows filtering of threads depending on the state of a project. If
you're missing threads, click the filter icon next to the **Threads** label and
switch to Chronological. If you still don't see the thread, open
[Settings](codex://settings) and check the archived chats or archived threads
section.

### Code doesn't run on a worktree


**Fonts aren't rendering correctly**

Codex uses the same font for the review pane, integrated terminal, and any other code displayed inside the app. You can configure the font inside the [Settings](codex://settings) pane as **Code font**.


# Windows

The [Codex app for Windows](https://apps.microsoft.com/detail/9plm9xgg6vks?hl=en-US&gl=US) gives you one interface for
working across projects, running parallel agent threads, and reviewing results.
It runs natively on Windows using PowerShell and the
[Windows sandbox](https://developers.openai.com/codex/windows#windows-sandbox), or you can configure it to
run in [Windows Subsystem for Linux (WSL)](#windows-subsystem-for-linux-wsl).

![Codex app for Windows showing a project sidebar, active thread, and review pane](/images/codex/windows/codex-windows-light.webp)

## Download and update the Codex app

Download the Codex app from the
[Microsoft Store](https://apps.microsoft.com/detail/9plm9xgg6vks?hl=en-US&gl=US).

Then follow the [quickstart](https://developers.openai.com/codex/quickstart?setup=app) to get started.

To update the app, open the Microsoft Store, go to **Downloads**, and click
**Check for updates**. The Store installs the latest version afterward.

For enterprises, administrators can deploy the app with Microsoft Store app
distribution through enterprise management tools.

If you prefer a command-line install path, or need an alternative to opening
the Microsoft Store UI, run:

```powershell
winget install Codex -s msstore
```

## Customize for your dev setup

### Preferred editor

Choose a default app for **Open**, such as Visual Studio, VS Code, or another
editor. You can override that choice per project. If you already picked a
different app from the **Open** menu for a project, that project-specific
choice takes precedence.

![Codex app settings showing the default Open In app on Windows](/images/codex/windows/open-in-windows-light.webp)

### Integrated terminal

You can also choose the default integrated terminal. Depending on what you have
installed, options include:

- PowerShell
- Command Prompt
- Git Bash
- WSL

This change applies only to new terminal sessions. If you already have an
integrated terminal open, restart the app or start a new thread before
expecting the new default terminal to appear.

![Codex app settings showing the integrated terminal selection on Windows](/images/codex/windows/integrated-shell-light.webp)

## Windows Subsystem for Linux (WSL)

By default, the Codex app uses the Windows-native agent. That means the agent
runs commands in PowerShell. The app can still work with projects that live in
Windows Subsystem for Linux (WSL) by using the `wsl` CLI when needed.

If you want to add a project from the WSL filesystem, click **Add new project**
or press <kbd>Ctrl</kbd>+<kbd>O</kbd>, then type `\\wsl$\` into the File
Explorer window. From there, choose your Linux distribution and the folder you
want to open.

If you plan to keep using the Windows-native agent, prefer storing projects on
your Windows filesystem and accessing them from WSL through
`/mnt/<drive>/...`. This setup is more reliable than opening projects
directly from the WSL filesystem.

If you want the agent itself to run in WSL, open **[Settings](codex://settings)**,
switch the agent from Windows native to WSL, and **restart the app**. The
change doesn't take effect until you restart. Your projects should remain in
place after restart.

![Codex app settings showing the agent selector with Windows native and WSL options](/images/codex/windows/wsl-select-light.webp)

You configure the integrated terminal independently from the agent. See
[Customize for your dev setup](#customize-for-your-dev-setup) for the
terminal options. You can keep the agent in WSL and still use PowerShell in the
terminal, or use WSL for both, depending on your workflow.

## Useful developer tools

Codex works best when a few common developer tools are already installed:

- **Git**: Powers the review panel in the Codex app and lets you inspect or
  revert changes.
- **Node.js**: A common tool that the agent uses to perform tasks more
  efficiently.
- **Python**: A common tool that the agent uses to perform tasks more
  efficiently.
- **.NET SDK**: Useful when you want to build native Windows apps.
- **GitHub CLI**: Powers GitHub-specific functionality in the Codex app.

Install them with the default Windows package manager `winget` by pasting this
into the [integrated terminal](https://developers.openai.com/codex/app/features#integrated-terminal) or
asking Codex to install them:

```powershell
winget install --id Git.Git
winget install --id OpenJS.NodeJS.LTS
winget install --id Python.Python.3.14
winget install --id Microsoft.DotNet.SDK.10
winget install --id GitHub.cli
```

After installing GitHub CLI, run `gh auth login` to enable GitHub features in
the app.

If you need a different Python or .NET version, change the package IDs to the
version you want.

## Troubleshooting and FAQ

### Run commands with elevated permissions

If you need Codex to run commands with elevated permissions, start the Codex app
itself as an administrator. After installation, open the Start menu, find
Codex, and choose Run as administrator. The Codex agent inherits that
permission level.

### PowerShell execution policy blocks commands

If you have never used tools such as Node.js or `npm` in PowerShell before, the
Codex agent or integrated terminal may hit execution policy errors.

This can also happen if Codex creates PowerShell scripts for you. In that case,
you may need a less restrictive execution policy before PowerShell will run
them.

An error may look something like this:

```text
npm.ps1 cannot be loaded because running scripts is disabled on this system.
```

A common fix is to set the execution policy to `RemoteSigned`:

```powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
```

For details and other options, check Microsoft's
[execution policy guide](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies)
before changing the policy.

### Local environment scripts on Windows

If your [local environment](https://developers.openai.com/codex/app/local-environments) uses cross-platform
commands such as `npm` scripts, you can keep one shared setup script or
set of actions for every platform.

If you need Windows-specific behavior, create Windows-specific setup scripts or
Windows-specific actions.

Actions run in the environment used by your integrated terminal. See
[Customize for your dev setup](#customize-for-your-dev-setup).

Local setup scripts run in the agent environment: WSL if the agent uses WSL,
and PowerShell otherwise.

### Share config, auth, and sessions with WSL

The Windows app uses the same Codex home directory as native Codex on Windows:
`%USERPROFILE%\.codex`.

If you also run the Codex CLI inside WSL, the CLI uses the Linux home
directory by default, so it does not automatically share configuration, cached
auth, or session history with the Windows app.

To share them, use one of these approaches:

- Sync WSL `~/.codex` with `%USERPROFILE%\.codex` on your file system.
- Point WSL at the Windows Codex home directory by setting `CODEX_HOME`:

```bash
export CODEX_HOME=/mnt/c/Users/<windows-user>/.codex
```

If you want that setting in every shell, add it to your WSL shell profile, such
as `~/.bashrc` or `~/.zshrc`.
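For example, to persist the export for Bash (`<windows-user>` is a placeholder you'd replace with your Windows username):

```shell
# Append the export to the Bash profile so every new WSL shell picks it up
echo 'export CODEX_HOME=/mnt/c/Users/<windows-user>/.codex' >> ~/.bashrc
```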

### Git features are unavailable

If you don't have Git installed natively on Windows, the app can't use some
features. Install it with `winget install Git.Git` from PowerShell or `cmd.exe`.

### Git isn't detected for projects opened from `\\wsl$`

For now, if you want to use the Windows-native agent with a project that is
also accessible from WSL, the most reliable workaround is to store the project
on the native Windows drive and access it in WSL through `/mnt/<drive>/...`.

### Cmder is not listed in the open dialog

If Cmder is installed but doesn’t show in Codex’s open dialog, add it to the
Windows Start Menu: right-click Cmder and choose **Add to Start**, then restart
Codex or reboot.


# Worktrees

In the Codex app, worktrees let Codex run multiple independent tasks in the same project without interfering with each other. For Git repositories, [automations](https://developers.openai.com/codex/app/automations) run on dedicated background worktrees so they don't conflict with your ongoing work. In non-version-controlled projects, automations run directly in the project directory. You can also start threads on a worktree manually, and use Handoff to move a thread between Local and Worktree.

## What's a worktree


12 10 

13- **Local checkout**: The repository that you created. Sometimes just referred to as **Local** in the Codex app.11- **Local checkout**: The repository that you created. Sometimes just referred to as **Local** in the Codex app.

14- **Worktree**: A [Git worktree](https://git-scm.com/docs/git-worktree) that was created from your local checkout in the Codex app.12- **Worktree**: A [Git worktree](https://git-scm.com/docs/git-worktree) that was created from your local checkout in the Codex app.

13- **Handoff**: The flow that moves a thread between Local and Worktree. Codex handles the Git operations required to move your work safely between them.

15 14 

16## Why use a worktree15## Why use a worktree

17 16 

181. Work in parallel with Codex without breaking each other as you work.171. Work in parallel with Codex without disturbing your current Local setup.

192. Start a thread unrelated to your current work182. Queue up background work while you stay focused on the foreground.

20 - Staging area to queue up work you want Codex to start but aren’t ready to test yet.193. Move a thread into Local later when you're ready to inspect, test, or collaborate more directly.

21 20 

22## Getting started21## Getting started

23 22 


333. Submit your prompt323. Submit your prompt

34 33 

35 Submit your task and Codex will create a Git worktree based on the branch you selected. By default, Codex works in a ["detached HEAD"](https://git-scm.com/docs/git-checkout#_detached_head).34 Submit your task and Codex will create a Git worktree based on the branch you selected. By default, Codex works in a ["detached HEAD"](https://git-scm.com/docs/git-checkout#_detached_head).

364. Verify your changes354. Choose where to keep working

36 

37 When you’re ready, you can either keep working directly on the worktree or hand the thread off to your local checkout. Handing off to or from local will move your thread *and* code so you can continue in the other checkout.

37 38 

38 When you’re ready, follow one of the paths [below](#verifying-and-pushing-workflow-changes)39## Working between Local and Worktree

39 based on your project and flow.

40 40 

41## Verifying and pushing workflow changes41Worktrees look and feel much like your local checkout. The difference is where they fit into your flow. You can think of Local as the foreground and Worktree as the background. Handoff lets you move a thread between them.

42 42 

Under the hood, Handoff handles the Git operations required to move work between two checkouts safely. This matters because **Git only allows a branch to be checked out in one place at a time**. If you check out a branch on a worktree, you **can't** check it out in your local checkout at the same time, and vice versa.
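
You can see this rule with plain Git in a throwaway repository (a sketch of standard Git behavior, not anything Codex-specific; the paths are illustrative):

```shell
# Create a minimal repo with one commit on "main".
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=you@example.com -c user.name=you \
  commit -q --allow-empty -m "init"

# Trying to check out "main" in a second worktree fails, because the branch
# is already checked out in the main working copy.
git -C "$repo" worktree add "$repo-wt" main 2>&1 | grep -i "already"
```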

In practice, there are two common paths:

1. [Work exclusively on the worktree](#option-1-working-on-the-worktree). This path works best when you can verify changes directly on the worktree, for example because you have dependencies and tools installed using a [local environment setup script](https://developers.openai.com/codex/app/local-environments).
2. [Hand the thread off to Local](#option-2-handing-a-thread-off-to-local). Use this when you want to bring the thread into the foreground, for example because you want to inspect changes in your usual IDE or can run only one instance of your app.

### Option 1: Working on the worktree

You can open your IDE to the worktree using the "Open" button in the header, use the integrated terminal, or anything else that you need to do from the worktree directory.

![Worktree thread view with branch controls and worktree details](/images/codex/app/worktree-light.webp) ![Worktree thread view with branch controls and worktree details](/images/codex/app/worktree-dark.webp)

Remember, if you create a branch on a worktree, you can't check it out in any other worktree, including your local checkout.

### Option 2: Handing a thread off to Local

If you want to bring a thread into the foreground, click **Hand off** in the header of your thread and move it to **Local**.

This path works well when you want to read the changes in your usual IDE window, run your existing development server, or validate the work in the same environment you already use day to day.

Codex handles the Git steps required to move the thread safely between the worktree and your local checkout.

Each thread keeps the same associated worktree over time. If you hand the thread back to a worktree later, Codex returns it to that same background environment so you can pick up where you left off.

![Handoff dialog moving a thread from a worktree to Local](/images/codex/app/handoff-light.webp)

You can also go the other direction. If you're already working in Local and want to free up the foreground, use **Hand off** to move the thread to a worktree. This is useful when you want Codex to keep working in the background while you switch your attention back to something else locally.

Since Handoff uses Git operations, any files that are part of your `.gitignore` file won't move with the thread.
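
To see which files would stay behind, you can ask Git directly. A minimal sketch in a throwaway repository (the file names are illustrative):

```shell
# Minimal repo with an ignore rule and an ignored file.
repo=$(mktemp -d)
git -C "$repo" init -q
printf '.env\nnode_modules/\n' > "$repo/.gitignore"
touch "$repo/.env"

# check-ignore prints paths that Git ignores; these won't follow a Handoff.
git -C "$repo" check-ignore .env
```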

## Advanced details

### Codex-managed and permanent worktrees

By default, threads use a Codex-managed worktree. These are meant to feel lightweight and disposable. A Codex-managed worktree is typically dedicated to one thread, and Codex returns that thread to the same worktree if you hand it back there later.

If you want a long-lived environment, create a permanent worktree from the three-dot menu on a project in the sidebar. This creates a new permanent worktree as its own project. Permanent worktrees are not automatically deleted, and you can start multiple threads from the same worktree.

### How Codex manages worktrees for you

Codex creates worktrees in `$CODEX_HOME/worktrees`. The starting commit will be the `HEAD` commit of the branch selected when you start your thread. If you chose a branch with local changes, the uncommitted changes will be applied to the worktree as well. The worktree will *not* be checked out as a branch. It will be in a [detached HEAD](https://git-scm.com/docs/git-checkout#_detached_head) state. This lets Codex create several worktrees without polluting your branches.
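
The detached-worktree mechanism can be sketched with plain Git (an illustration only, not Codex's actual implementation; the temporary paths stand in for `$CODEX_HOME/worktrees`):

```shell
# Minimal repo with one commit on "main".
repo=$(mktemp -d)
git -C "$repo" init -q -b main
git -C "$repo" -c user.email=you@example.com -c user.name=you \
  commit -q --allow-empty -m "init"

# Add a worktree at main's HEAD commit without checking out the branch.
wt=$(mktemp -d)/wt
git -C "$repo" worktree add --detach "$wt" main

# The worktree has no current branch: HEAD is detached.
git -C "$wt" status --branch --short
```

Because the worktree is detached, `main` stays free to be checked out in your local checkout.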

### Branch limitations

To resolve this, you would need to check out another branch instead of `feature/a` on the worktree.

If you plan on checking out the branch locally, use Handoff to move the thread into Local instead of trying to keep the same branch checked out in both places at once.

Why this limitation exists

Worktrees can take up a lot of disk space. Each one has its own set of repository files, dependencies, build caches, etc. As a result, the Codex app tries to keep the number of worktrees to a reasonable limit.

By default, Codex keeps your most recent 15 Codex-managed worktrees. You can change this limit or turn off automatic deletion in settings if you prefer to manage disk usage yourself.

Codex tries to avoid deleting worktrees that are still important. Codex-managed worktrees won't be deleted automatically if:

- A pinned conversation is tied to it
- The thread is still in progress
- The worktree is a permanent worktree

Codex-managed worktrees are deleted automatically when:

- You archive the associated thread
- Codex needs to delete older worktrees to stay within your configured limit

Before deleting a Codex-managed worktree, Codex saves a snapshot of the work on it. If you open a conversation after its worktree was deleted, you'll see the option to restore it.

## Frequently asked questions

  Not today. Codex creates worktrees under `$CODEX_HOME/worktrees` so it can
  manage them consistently.

Can I move a thread between Local and Worktree?

  Yes. Use **Hand off** in the thread header to move a thread between your local
  checkout and a worktree. Codex handles the Git operations needed to move the
  thread safely between environments. If you hand a thread back to a worktree
  later, Codex returns it to the same associated worktree.

What happens to threads if a worktree is deleted?

  Threads can remain in your history even if the underlying worktree directory
  is deleted. For Codex-managed worktrees, Codex saves a snapshot before
  deleting the worktree and offers to restore it if you reopen the associated
  thread. Permanent worktrees are not automatically deleted when you archive
  their threads.


auth.md

# Authentication

## OpenAI authentication

Codex supports two ways to sign in when using OpenAI models:

Codex cloud requires signing in with ChatGPT. The Codex CLI and IDE extension support both sign-in methods.

Your sign-in method also determines which admin controls and data-handling policies apply.

- With sign in with ChatGPT, Codex usage follows your ChatGPT workspace permissions, RBAC, and ChatGPT Enterprise retention and residency settings.
- With an API key, usage follows your API organization's retention and data-sharing settings instead.

For the CLI, Sign in with ChatGPT is the default authentication path when no valid session is available.

### Sign in with ChatGPT

When you sign in with ChatGPT from the Codex app, CLI, or IDE extension, Codex opens a browser window for you to complete the login flow. After you sign in, the browser returns an access token to the CLI or IDE extension.

OpenAI bills API key usage through your OpenAI Platform account at standard API rates. See the [API pricing page](https://openai.com/api/pricing/).

Features that rely on ChatGPT credits, such as [fast mode](https://developers.openai.com/codex/speed), are
available only when you sign in with ChatGPT. If you sign in with an API key,
Codex uses standard API pricing instead.

We recommend API key authentication for programmatic Codex CLI workflows (for example, CI/CD jobs). Don't expose Codex execution in untrusted or public environments.

## Secure your Codex cloud account

Codex cloud interacts directly with your codebase, so it needs stronger security than many other ChatGPT features. Enable multi-factor authentication (MFA).

Codex caches login details locally in a plaintext file at `~/.codex/auth.json` or in your OS-specific credential store.

For sign in with ChatGPT sessions, Codex refreshes tokens automatically during use before they expire, so active sessions usually continue without requiring another browser login.

## Credential storage

Use `cli_auth_credentials_store` to control where the Codex CLI stores cached credentials:

If the active credentials don't match the configured restrictions, Codex logs the user out and exits.

These settings are commonly applied via managed configuration rather than per-user setup. See [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration).

## Login on headless devices

```bash
docker cp ~/.codex/auth.json MY_CONTAINER:"$CONTAINER_HOME/.codex/auth.json"
```

For a more advanced version of this same pattern on trusted CI/CD runners, see
[Maintain Codex account auth in CI/CD (advanced)](https://developers.openai.com/codex/auth/ci-cd-auth).
That guide explains how to let Codex refresh `auth.json` during normal runs and
then keep the updated file for the next job. API keys are still the recommended
default for automation.

### Fallback: Forward the localhost callback over SSH

If you can forward ports between your local machine and the remote host, you can use the standard browser-based flow by tunneling Codex's local callback server (default `localhost:1455`).

auth/ci-cd-auth.md (new file)

# Maintain Codex account auth in CI/CD (advanced)

This guide shows how to keep ChatGPT-managed Codex auth working on a trusted
CI/CD runner without calling the OAuth token endpoint yourself.

The right way to authenticate automation is with an API key. Use this guide
only if you specifically need to run the workflow as your Codex account.

The pattern is:

1. Create `auth.json` once on a trusted machine with `codex login`.
2. Put that file on the runner.
3. Run Codex normally.
4. Let Codex refresh the session when it becomes stale.
5. Keep the refreshed `auth.json` for the next run.

This is an advanced workflow for enterprise and other trusted private
automation. API keys are still the recommended option for most CI/CD jobs.

Treat `~/.codex/auth.json` like a password: it contains access tokens. Don't
commit it, paste it into tickets, or share it in chat. Do not use this
workflow for public or open-source repositories.

## Why this works

Codex already knows how to refresh a ChatGPT-managed session.

As of the current open-source client:

- Codex loads the local auth cache from `auth.json`
- if `last_refresh` is older than about 8 days, Codex refreshes the token
  bundle before the run continues
- after a successful refresh, Codex writes the new tokens and a new
  `last_refresh` back to `auth.json`
- if a request gets a `401`, Codex also has a built-in refresh-and-retry path

That means the supported CI/CD strategy is not "call the refresh API yourself."
It is "run Codex and persist the updated `auth.json`."

## When to use this

Use this guide only when all of the following are true:

- you need ChatGPT-managed Codex auth rather than an API key
- `codex login` cannot run on the remote runner
- the runner is trusted private infrastructure
- you can preserve the refreshed `auth.json` between runs
- only one machine or serialized job stream will use a given `auth.json` copy

This guide applies to Codex-managed ChatGPT auth (`auth_mode: "chatgpt"`).

It does not apply to:

- API key auth
- external-token host integrations (`auth_mode: "chatgptAuthTokens"`)
- generic OAuth clients outside Codex

If your credentials are stored in the OS keyring, switch to file-backed storage
first. See [Credential storage](https://developers.openai.com/codex/auth#credential-storage).

## Seed `auth.json` once

On a trusted machine where browser login is possible:

1. Configure Codex to store credentials in a file:

```toml
cli_auth_credentials_store = "file"
```

2. Run:

```bash
codex login
```

3. Verify the file looks like managed ChatGPT auth:

```bash
AUTH_FILE="${CODEX_HOME:-$HOME/.codex}/auth.json"

jq '{
  auth_mode,
  has_tokens: (.tokens != null),
  has_refresh_token: ((.tokens.refresh_token // "") != ""),
  last_refresh
}' "$AUTH_FILE"
```

Continue only if:

- `auth_mode` is `"chatgpt"`
- `has_refresh_token` is `true`

Then place the contents of `auth.json` into your CI/CD secret manager or copy
it to a trusted persistent runner.

## Recommended pattern: GitHub Actions on a self-hosted runner

The simplest fully automated setup is a self-hosted GitHub Actions runner with a
persistent `CODEX_HOME`.

Why this pattern works well:

- the runner can keep `auth.json` on disk between jobs
- Codex can refresh the file in place
- later jobs automatically pick up the refreshed tokens
- you only need the original secret for bootstrap or reseeding

The critical detail is to seed `auth.json` only if it is missing. If you
rewrite the file from the original secret on every run, you throw away the
refreshed tokens that Codex just wrote.

Example scheduled workflow:

```yaml
name: Keep Codex auth fresh

on:
  schedule:
    - cron: "0 9 * * 1"
  workflow_dispatch:

jobs:
  keep-codex-auth-fresh:
    runs-on: self-hosted
    steps:
      - name: Bootstrap auth.json if needed
        shell: bash
        env:
          CODEX_AUTH_JSON: ${{ secrets.CODEX_AUTH_JSON }}
        run: |
          CODEX_HOME="${CODEX_HOME:-$HOME/.codex}"
          mkdir -p "$CODEX_HOME"
          chmod 700 "$CODEX_HOME"

          if [ ! -f "$CODEX_HOME/auth.json" ]; then
            printf '%s' "$CODEX_AUTH_JSON" > "$CODEX_HOME/auth.json"
            chmod 600 "$CODEX_HOME/auth.json"
          fi

      - name: Run Codex
        shell: bash
        run: |
          codex exec --json "Reply with the single word OK." >/dev/null
```

What this does:

- the first run seeds `auth.json`
- later runs reuse the same file
- once the cached session is old enough, Codex refreshes it during the normal
  `codex exec` step
- the refreshed file remains on disk for the next workflow run

A weekly schedule is usually enough because Codex treats the session as stale
after roughly 8 days in the current open-source client.

## Ephemeral runners: restore, run Codex, persist the updated file

If you use GitHub-hosted runners, GitLab shared runners, or any other ephemeral
environment, the runner filesystem disappears after each job. In that setup,
you need a round-trip:

1. restore the current `auth.json` from secure storage
2. run Codex
3. write the updated `auth.json` back to secure storage

Generic GitHub Actions shape:

```yaml
name: Run Codex with managed auth

on:
  workflow_dispatch:

jobs:
  codex-job:
    runs-on: ubuntu-latest
    steps:
      - name: Restore auth.json
        shell: bash
        run: |
          CODEX_HOME="${CODEX_HOME:-$HOME/.codex}"
          mkdir -p "$CODEX_HOME"
          chmod 700 "$CODEX_HOME"

          # Replace this with your secret manager or secure storage command.
          my-secret-cli read codex-auth-json > "$CODEX_HOME/auth.json"
          chmod 600 "$CODEX_HOME/auth.json"

      - name: Run Codex
        shell: bash
        run: |
          codex exec --json "summarize the failing tests"

      - name: Persist refreshed auth.json
        if: always()
        shell: bash
        run: |
          # Shell variables don't carry across steps, so derive the path
          # again before writing the file back.
          CODEX_HOME="${CODEX_HOME:-$HOME/.codex}"

          # Replace this with your secret manager or secure storage command.
          my-secret-cli write codex-auth-json < "$CODEX_HOME/auth.json"
```

The key requirement is that the write-back step stores the refreshed file that
Codex produced during the run, not the original seed.

## You do not need a separate refresh command

Any normal Codex run can refresh the session.

That means you have two good options:

- let your existing CI/CD Codex job refresh the file naturally
- add a lightweight scheduled maintenance job, like the GitHub Actions example
  above, if your real jobs do not run often enough

The first Codex run after the session becomes stale is the one that refreshes
`auth.json`.
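
If you want visibility into how close a session is to that window, you can inspect `last_refresh` yourself. A sketch using `jq` and GNU `date` (the sample file below is fabricated for illustration; point `AUTH_FILE` at your real `auth.json` instead):

```shell
# Fabricated sample auth.json for illustration; in practice use
# "${CODEX_HOME:-$HOME/.codex}/auth.json".
AUTH_FILE=$(mktemp)
printf '{"last_refresh":"%s"}\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" > "$AUTH_FILE"

last=$(jq -r '.last_refresh // empty' "$AUTH_FILE")
age_days=$(( ( $(date -u +%s) - $(date -u -d "$last" +%s) ) / 86400 ))

if [ "$age_days" -lt 8 ]; then
  echo "session refreshed ${age_days}d ago; within the ~8-day window"
else
  echo "session is ${age_days}d old; run a Codex job or reseed"
fi
```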

## Operational rules that matter

- Use one `auth.json` per runner or per serialized workflow stream.
- Do not share the same file across concurrent jobs or multiple machines.
- Do not overwrite a persistent runner's refreshed file from the original seed
  on every run.
- Do not store `auth.json` in the repository, logs, or public artifact storage.
- Reseed from a trusted machine if built-in refresh stops working.

## What to do when refresh stops working

This flow reduces manual work, but it does not guarantee the same session lasts
forever.

Reseed the runner with a fresh `auth.json` if:

- Codex starts returning `401` and the runner can no longer refresh
- the refresh token was revoked or expired
- another machine or concurrent job rotated the token first
- your secure-storage round trip failed and an old file was restored

To reseed:

1. Run `codex login` on a trusted machine.
2. Replace the stored CI/CD copy of `auth.json`.
3. Let the next runner job continue using Codex's built-in refresh flow.

## Verify that the runner is maintaining the session

Check that the runner still has managed auth tokens and that `last_refresh`
exists:

```bash
AUTH_FILE="${CODEX_HOME:-$HOME/.codex}/auth.json"

jq '{
  auth_mode,
  last_refresh,
  has_access_token: ((.tokens.access_token // "") != ""),
  has_id_token: ((.tokens.id_token // "") != ""),
  has_refresh_token: ((.tokens.refresh_token // "") != "")
}' "$AUTH_FILE"
```

If your runner is persistent, you should see the same file continue to exist
between runs. If your runner is ephemeral, confirm that your write-back step is
storing the updated file from the last job.

## Source references

If you want to verify this behavior in the open-source client:

- [`codex-rs/core/src/auth.rs`](https://github.com/openai/codex/blob/main/codex-rs/core/src/auth.rs)
  covers stale-token detection, automatic refresh, refresh-on-401 recovery, and
  persistence of refreshed tokens
- [`codex-rs/core/src/auth/storage.rs`](https://github.com/openai/codex/blob/main/codex-rs/core/src/auth/storage.rs)
  covers file-backed `auth.json` storage

cli.md

# Codex CLI

Codex CLI is OpenAI's coding agent that you can run locally from your terminal. It can read, change, and run code on your machine in the selected directory.
It's [open source](https://github.com/openai/codex) and built in Rust for speed and efficiency.

Run `codex` to start an interactive terminal UI (TUI) session.](https://developers.openai.com/codex/cli/features#running-in-interactive-mode)[### Control model and reasoning

Use `/model` to switch between GPT-5.4, GPT-5.3-Codex, and other available models, or adjust reasoning levels.](https://developers.openai.com/codex/cli/features#models-reasoning)[### Image inputs

Attach screenshots or design specs so Codex reads them alongside your prompt.](https://developers.openai.com/codex/cli/features#image-inputs)[### Run local code review

Give Codex access to additional third-party tools and context with Model Context Protocol (MCP).](https://developers.openai.com/codex/mcp)[### Approval modes

Choose the approval mode that matches your comfort level before Codex edits or runs commands.](https://developers.openai.com/codex/cli/features#approval-modes)


cli/features.md +13 −12


# Codex CLI features

Codex supports workflows beyond chat. Use this guide to learn what each one unlocks and when to use it.

## Running in interactive mode


- Send prompts, code snippets, or screenshots (see [image inputs](#image-inputs)) directly into the composer.
- Watch Codex explain its plan before making a change, and approve or reject steps inline.
- Read syntax-highlighted markdown code blocks and diffs in the TUI, then use `/theme` to preview and save a preferred color theme.
- Use `/clear` to wipe the terminal and start a fresh chat, or press <kbd>Ctrl</kbd>+<kbd>L</kbd> to clear the screen without starting a new conversation.
- Use `/copy` to copy the latest completed Codex output. If a turn is still running, Codex copies the most recent finished output instead of in-progress text.
- Navigate draft history in the composer with <kbd>Up</kbd>/<kbd>Down</kbd>; Codex restores prior draft text and image placeholders.
- Press <kbd>Ctrl</kbd>+<kbd>C</kbd> or use `/exit` to close the interactive session when you're done.


## Models and reasoning

For most tasks in Codex, `gpt-5.4` is the recommended model. It brings the industry-leading coding capabilities of `gpt-5.3-codex` to OpenAI's flagship frontier model, combining frontier coding performance with stronger reasoning, native computer use, and broader professional workflows. For extra fast tasks, ChatGPT Pro subscribers have access to the GPT-5.3-Codex-Spark model in research preview.

Switch models mid-session with the `/model` command, or specify one when launching the CLI.

```bash
codex --model gpt-5.4
```

[Learn more about the models available in Codex](https://developers.openai.com/codex/models).
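
To persist this choice instead of passing `--model` on each launch, set the default in `config.toml` (Config basics covers persisting defaults like the model; a minimal sketch):

```toml
# ~/.codex/config.toml — default model for new sessions
model = "gpt-5.4"
```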


Codex accepts common formats such as PNG and JPEG. Use comma-separated filenames for two or more images, and combine them with text instructions to add context.

## Syntax highlighting and themes

The TUI syntax-highlights fenced markdown code blocks and file diffs so code is easier to scan during reviews and debugging.

Use `/theme` to open the theme picker, preview themes live, and save your selection to `tui.theme` in `~/.codex/config.toml`. You can also add custom `.tmTheme` files under `$CODEX_HOME/themes` and select them in the picker.
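
Because `/theme` saves to an ordinary config key, you can also set the selection by hand (a sketch; the theme name is a placeholder for one listed in the picker):

```toml
# ~/.codex/config.toml
[tui]
theme = "zenburn"  # placeholder name; use any theme the /theme picker lists
```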

## Running local code review

Type `/review` in the CLI to open Codex's review presets. The CLI launches a dedicated reviewer that reads the diff you select and reports prioritized, actionable findings without touching your working tree. By default it uses the current session model; set `review_model` in `config.toml` to override.
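
Overriding the reviewer model is a one-line `config.toml` change (a sketch; the model name here is illustrative):

```toml
# ~/.codex/config.toml
review_model = "gpt-5.3-codex"  # illustrative; otherwise /review uses the current session model
```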


## Web search

Codex ships with a first-party web search tool. For local tasks in the Codex CLI, Codex enables web search by default and serves results from a web search cache. The cache is an OpenAI-maintained index of web results, so cached mode returns pre-indexed results instead of fetching live pages. This reduces exposure to prompt injection from arbitrary live content, but you should still treat web results as untrusted. If you are using `--yolo` or another [full access sandbox setting](https://developers.openai.com/codex/agent-approvals-security), web search defaults to live results. To fetch the most recent data, pass `--search` for a single run or set `web_search = "live"` in [Config basics](https://developers.openai.com/codex/config-basic). You can also set `web_search = "disabled"` to turn the tool off.

You'll see `web_search` items in the transcript or `codex exec --json` output whenever Codex looks something up.
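
Based on the values named above, the persistent setting is a single key in `config.toml` (cached mode is the default, so it needs no entry):

```toml
# ~/.codex/config.toml — web search mode
web_search = "live"       # always fetch live results (equivalent to passing --search per run)
# web_search = "disabled" # turn the web search tool off entirely
```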

- Launch Codex from any directory using `codex --cd <path>` to set the working root without running `cd` first. The active path appears in the TUI header.
- Expose more writable roots with `--add-dir` (for example, `codex --cd apps/frontend --add-dir ../backend --add-dir ../shared`) when you need to coordinate changes across more than one project.
- Make sure your environment is already set up before launching Codex so it doesn't spend tokens probing what to activate. For example, source your Python virtual environment (or other language environments), start any required daemons, and export the environment variables you expect to use ahead of time.



# Command line options

## How to read this reference

This page catalogs every documented Codex CLI command and flag. Use the interactive tables to search by key or description. Each section indicates whether the option is stable or experimental and calls out risky combinations.


- [Config basics](https://developers.openai.com/codex/config-basic): persist defaults like the model and provider.
- [Advanced Config](https://developers.openai.com/codex/config-advanced): profiles, providers, sandbox tuning, and integrations.
- [AGENTS.md](https://developers.openai.com/codex/guides/agents-md): conceptual overview of Codex agent capabilities and best practices.



# Slash commands in Codex CLI

Slash commands give you fast, keyboard-first control over Codex. Type `/` in the composer to open the slash popup, choose a command, and Codex will perform actions such as switching models, adjusting permissions, or summarizing long conversations without leaving the terminal.

This guide shows you how to:

- Find the right built-in slash command for a task
- Steer an active session with commands like `/model`, `/personality`, `/permissions`, `/experimental`, `/agent`, and `/status`

## Built-in slash commands

Codex ships with the following commands. Open the slash popup and start typing the command name to filter the list.

| Command | Purpose | When to use it |
| ------- | ------- | -------------- |
| [`/sandbox-add-read-dir`](#grant-sandbox-read-access-with-sandbox-add-read-dir) | Grant sandbox read access to an extra directory (Windows only). | Unblock commands that need to read an absolute directory path outside the current readable roots. |
| [`/agent`](#switch-agent-threads-with-agent) | Switch the active agent thread. | Inspect or continue work in a spawned sub-agent thread. |
| [`/apps`](#browse-apps-with-apps) | Browse apps (connectors) and insert them into your prompt. | Attach an app as `$app-slug` before asking Codex to use it. |
| [`/clear`](#clear-the-terminal-and-start-a-new-chat-with-clear) | Clear the terminal and start a fresh chat. | Reset the visible UI and conversation together when you want a clean slate. |
| [`/compact`](#keep-transcripts-lean-with-compact) | Summarize the visible conversation to free tokens. | Use after long runs so Codex retains key points without blowing the context window. |
| [`/copy`](#copy-the-latest-response-with-copy) | Copy the latest completed Codex output. | Grab the latest finished response or plan text without manually selecting it. |
| [`/diff`](#review-changes-with-diff) | Show the Git diff, including files Git isn't tracking yet. | Review Codex's edits before you commit or run tests. |
| [`/exit`](#exit-the-cli-with-quit-or-exit) | Exit the CLI (same as `/quit`). | Alternative spelling; both commands exit the session. |
| [`/experimental`](#toggle-experimental-features-with-experimental) | Toggle experimental features. | Enable optional features such as sub-agents from the CLI. |
| [`/debug-config`](#inspect-config-layers-with-debug-config) | Print config layer and requirements diagnostics. | Debug precedence and policy requirements, including experimental network constraints. |
| [`/statusline`](#configure-footer-items-with-statusline) | Configure TUI status-line fields interactively. | Pick and reorder footer items (model/context/limits/git/tokens/session) and persist in config.toml. |

`/quit` and `/exit` both exit the CLI. Use them only after you have saved or committed any important work.

The `/approvals` command still works as an alias, but it no longer appears in the slash popup list.


1. In an active conversation, type `/personality` and press Enter.
2. Choose a style from the popup.

Expected: Codex confirms the new style in the transcript and uses it for later responses in the thread.

Codex supports `friendly`, `pragmatic`, and `none` personalities. Use `none` to disable personality instructions.

If the active model doesn't support personality-specific instructions, Codex hides this command.

### Switch to plan mode with `/plan`

1. Type `/plan` and press Enter to switch the active conversation into plan mode.
2. Optional: provide inline prompt text (for example, `/plan Propose a migration plan for this service`).
3. You can paste content or attach images while using inline `/plan` arguments.


Expected: Codex saves your feature choices to config and applies them on restart.

### Clear the terminal and start a new chat with `/clear`

1. Type `/clear` and press Enter.

Expected: Codex clears the terminal, resets the visible transcript, and starts a fresh chat in the same CLI session.

Unlike <kbd>Ctrl</kbd>+<kbd>L</kbd>, `/clear` starts a new conversation.

<kbd>Ctrl</kbd>+<kbd>L</kbd> only clears the terminal view and keeps the current chat. Codex disables both actions while a task is in progress.

### Update permissions with `/permissions`

1. Type `/permissions` and press Enter.
2. Select the approval preset that matches your comfort level, for example `Auto` for hands-off runs or `Read Only` to review edits.

Expected: Codex announces the updated policy. Future actions respect the updated approval mode until you change it again.

### Copy the latest response with `/copy`

1. Type `/copy` and press Enter.

Expected: Codex copies the latest completed Codex output to your clipboard.

If a turn is still running, `/copy` uses the latest completed output instead of the in-progress response. The command is unavailable before the first completed Codex output and immediately after a rollback.

### Grant sandbox read access with `/sandbox-add-read-dir`

1. Type `/sandbox-add-read-dir C:\absolute\directory\path` and press Enter.
2. Confirm the path is an existing absolute directory.

Expected: Codex refreshes the Windows sandbox policy and grants read access to that directory for later commands that run in the sandbox.

### Inspect the session with `/status`

1. In any conversation, type `/status`.
2. Review the output for the active model, approval policy, writable roots, and current token usage.

Expected: You see a summary like what `codex status` prints in the shell, confirming Codex is operating where you expect.

### Inspect config layers with `/debug-config`

1. Type `/debug-config`.
2. Review the output for config layer order (lowest precedence first), on/off state, and policy sources.

Expected: Codex prints layer diagnostics plus policy details such as `allowed_approval_policies`, `allowed_sandbox_modes`, `mcp_servers`, `rules`, `enforce_residency`, and `experimental_network` when configured.

Use this output to debug why an effective setting differs from `config.toml`.

1. Type `/statusline`.
2. Use the picker to toggle and reorder items, then confirm.

Expected: The footer status line updates immediately and persists to `tui.status_line` in `config.toml`.

Available status-line items include model, model+reasoning, context stats, rate limits, git branch, token counters, session id, current directory/project root, and Codex version.
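
The picker writes an ordinary config value, so a hand-edited version looks roughly like this (a sketch; the item identifiers shown are illustrative, and the `/statusline` picker displays the valid names):

```toml
# ~/.codex/config.toml
[tui]
status_line = ["model", "git-branch", "context"]  # illustrative item names
```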

### Check background terminals with `/ps`

1. Type `/ps`.
2. Review the list of background terminals and their status.

Expected: Codex shows each background terminal's command plus up to three recent, non-empty output lines so you can gauge progress at a glance.

Background terminals appear when `unified_exec` is in use; otherwise, the list may be empty.


1. After a long exchange, type `/compact`.
2. Confirm when Codex offers to summarize the conversation so far.

Expected: Codex replaces earlier turns with a concise summary, freeing context while keeping critical details.

### Review changes with `/diff`

1. Type `/diff` to inspect the Git diff.
2. Scroll through the output inside the CLI to review edits and added files.

Expected: Codex shows changes you've staged, changes you haven't staged yet, and files Git hasn't started tracking, so you can decide what to keep.

### Highlight files with `/mention`


1. Type `/new` and press Enter.

Expected: Codex starts a fresh conversation in the same CLI session, so you can switch tasks without leaving your terminal.

Unlike `/clear`, `/new` does not clear the current terminal view first.

### Resume a saved conversation with `/resume`

1. Type `/resume` and press Enter.
2. Choose the session you want from the saved-session picker.

Expected: Codex reloads the selected conversation's transcript so you can pick up where you left off, keeping the original history intact.

### Fork the current conversation with `/fork`

1. Type `/fork` and press Enter.

Expected: Codex clones the current conversation into a new thread with a fresh ID, leaving the original transcript untouched so you can explore an alternative approach in parallel.

If you need to fork a saved session instead of the current one, run `codex fork` in your terminal to open the session picker.

### Generate `AGENTS.md` with `/init`

1. Run `/init` in the directory where you want Codex to look for persistent instructions.
2. Review the generated `AGENTS.md`, then edit it to match your repository conventions.

Expected: Codex creates an `AGENTS.md` scaffold you can refine and commit for future sessions.

### Ask for a working tree review with `/review`

1. Type `/review`.
2. Follow up with `/diff` if you want to inspect the exact file changes.

Expected: Codex summarizes issues it finds in your working tree, focusing on behavior changes and missing tests. It uses the current session model unless you set `review_model` in `config.toml`.

### List MCP tools with `/mcp`


1. Type `/apps`.
2. Pick an app from the list.

Expected: Codex inserts the app mention into the composer as `$app-slug`, so you can immediately ask Codex to use it.

### Switch agent threads with `/agent`

1. Type `/agent` and press Enter.
2. Select the thread you want from the picker.

Expected: Codex switches the active thread so you can inspect or continue that agent's work.

### Send feedback with `/feedback`

1. Type `/feedback` and press Enter.
2. Follow the prompts to include logs or diagnostics.

Expected: Codex collects the requested diagnostics and submits them to the maintainers.

### Sign out with `/logout`


1. Type `/quit` (or `/exit`) and press Enter.

Expected: Codex exits immediately. Save or commit any important work first.


cloud.md +0 −6


# Codex web

Codex is OpenAI's coding agent that can read, edit, and run code. It helps you build faster, fix bugs, and understand unfamiliar code. With Codex cloud, Codex can work on tasks in the background (including in parallel) using its own cloud environment.

## Codex web setup


Tag `@codex` on issues and pull requests to spin up tasks and propose changes directly from GitHub.](https://developers.openai.com/codex/integrations/github)[### Control internet access

Decide whether Codex can reach the public internet from cloud environments, and when to enable it.](https://developers.openai.com/codex/cloud/internet-access)



# Cloud environments

Use environments to control what Codex installs and runs during cloud tasks. For example, you can add dependencies, install tools like linters and formatters, and set environment variables.

Configure environments in [Codex settings](https://chatgpt.com/codex/settings/environments).


Internet access is available during the setup script phase to install dependencies. During the agent phase, internet access is off by default, but you can configure limited or unrestricted access. See [agent internet access](https://developers.openai.com/codex/cloud/internet-access).

Environments run behind an HTTP/HTTPS network proxy for security and abuse prevention purposes. All outbound internet traffic passes through this proxy.

[Previous

Overview](https://developers.openai.com/codex/cloud)[Next

Internet Access](https://developers.openai.com/codex/cloud/internet-access)

# Agent internet access

Control internet access for Codex cloud tasks

By default, Codex blocks internet access during the agent phase. Setup scripts still run with internet access so you can install dependencies. You can enable agent internet access per environment when you need it.

## Risks of agent internet access


visualstudio.com
yarnpkg.com
```

[Previous

Environments](https://developers.openai.com/codex/cloud/environments)

codex.md +7 −7

# Codex

One agent for everywhere you code

![Codex app showing a project sidebar, thread list, and review pane](/images/codex/app/codex-app-basic-light.webp) ![Codex app showing a project sidebar, thread list, and review pane](/images/codex/app/codex-app-basic-dark.webp)

Codex is OpenAI's coding agent for software development. ChatGPT Plus, Pro, Business, Edu, and Enterprise plans include Codex. It can help you:


 Learn more](https://developers.openai.com/codex/explore) [### Community

Explore Codex Ambassadors and upcoming community meetups by location.

 See community](https://developers.openai.com/codex/community/meetups) [### Codex for OSS

Apply or nominate maintainers for API credits, ChatGPT Pro with Codex, and selective Codex Security access.

 Learn more](https://developers.openai.com/codex/community/codex-for-oss)

codex-for-oss-terms.md +47 −0 added

# Codex for Open Source Program Terms

These Program Terms govern the Codex for Open Source program (the “Program”) offered by OpenAI OpCo, LLC and its affiliates (“OpenAI,” “we,” “our,” or “us”). By submitting an application to the Program or accepting any Program benefit, you agree to these Program Terms.

These Program Terms supplement, and do not replace, the OpenAI Terms of Use, Privacy Policy, applicable service terms, and OpenAI policies that govern your use of ChatGPT, Codex, the API, and any other OpenAI services made available through the Program. If there is a conflict, these Program Terms control only with respect to the Program.

## 1. Program Overview

The Program is designed to support maintainers of important open-source software. Approved applicants may receive one or more of the following benefits, as determined by OpenAI in its sole discretion: (i) a limited-duration ChatGPT Pro benefit that includes Codex access; (ii) API credits for eligible open-source maintainer workflows; and (iii) conditional access to Codex Security for qualified repositories or maintainers. Availability, duration, scope, and timing of any benefit may vary by applicant, repository, or use case.

## 2. Eligibility and Applications

To be considered for the Program, applicants must have a valid ChatGPT account and provide accurate and complete information about themselves, their repositories, and their role in maintaining or administering those repositories. OpenAI may consider factors such as repository usage, ecosystem importance, evidence of active maintenance, role or permissions, and Program capacity. Submission of an application does not guarantee selection, funding, or access.

## 3. Selection and Verification

OpenAI may approve or deny applications in its sole discretion. OpenAI may request additional information to verify identity, repository affiliation, maintainer status, or repository control, and may condition any benefit on successful verification. OpenAI's decisions are final.

## 4. Benefits

Unless OpenAI states otherwise in writing, Program benefits are personal, limited, non-transferable, and have no cash value. Program benefits may not be sold, assigned, sublicensed, exchanged, or shared. If OpenAI provides a redemption code, invitation, or activation flow, the recipient must follow the applicable redemption instructions and any additional redemption terms communicated by OpenAI. Benefits may expire if they are not redeemed or activated within the period specified by OpenAI.

## 5. Additional Conditions for Codex Security and API Credits

Codex Security access and API credits are optional, additional Program benefits and may require separate review, additional eligibility checks, and/or additional terms. OpenAI may limit Codex Security access to repositories that the applicant owns, maintains, or is otherwise authorized to administer.

Applicants may not use the Program, including Codex Security, to scan, probe, test, or review repositories, systems, or codebases that they do not own or lack permission to review. OpenAI may require proof of control or authorization before granting or continuing such access and may limit or revoke access at any time if authorization is unclear or no longer valid.

## 6. Fraud, Abuse, and Revocation

OpenAI may reject, suspend, or revoke any Program benefit for any reason in its sole discretion, including without limitation if it reasonably believes that an applicant or recipient: (i) provided false, misleading, or incomplete information; (ii) used multiple identities or accounts to obtain more than one benefit; (iii) transferred, resold, or shared a benefit; (iv) violated OpenAI's terms or policies; (v) used the Program in a harmful, abusive, fraudulent, or unauthorized manner; or (vi) otherwise created legal, security, reputational, or operational risk for OpenAI or others.

## 7. Submission Similarity; No Exclusivity; No Confidentiality

The applicant acknowledges that OpenAI may currently or in the future develop, receive, review, fund, support, or work with ideas, projects, repositories, workflows, or proposals that are similar or identical to the applicant's submission. Nothing in these Program Terms prevents OpenAI from independently developing, funding, or supporting any such similar or identical work.

The applicant further acknowledges that OpenAI assumes no obligation of exclusivity with respect to any submission and that any decision to select, fund, or support a project or maintainer is made in OpenAI's sole discretion.

Except as described in OpenAI's privacy policy or as required by law, applicants should not submit confidential information in connection with the Program, and OpenAI has no duty to treat application materials as confidential.

## 8. Program Changes

OpenAI may modify, pause, limit, or discontinue the Program, its eligibility criteria, or any Program benefit at any time. OpenAI may also update these Program Terms from time to time. Continued participation in the Program after an update constitutes acceptance of the revised Program Terms.

## 9. Taxes and Local Restrictions

Recipients are responsible for any taxes, reporting obligations, or local legal requirements that may apply to receipt or use of Program benefits. The Program is void where prohibited or restricted by law.

# Codex for Open Source

Open-source maintainers do critical work, often behind the scenes, to keep the software ecosystem healthy. Over the past year, the Codex Open Source Fund ($1 million) has supported projects that need API credits, including teams using Codex to power GitHub pull request workflows. OpenAI is grateful to the maintainers who keep that work moving.

The fund now supports eligible maintainers by offering six months of ChatGPT Pro with Codex and conditional access to Codex Security for core maintainers with write access. Developers should code in the tools they prefer, whether that’s Codex, [OpenCode](https://github.com/anomalyco/opencode), [Cline](https://github.com/cline/cline), [pi](https://github.com/badlogic/pi-mono/tree/main/packages/coding-agent), [OpenClaw](https://github.com/openclaw/openclaw), or something else, and this program supports that work.

## What the program includes

- Six months of ChatGPT Pro with Codex for day-to-day coding, triage, review, and maintainer workflows
- Conditional access to Codex Security for repositories that need deeper security coverage
- API credits through the Codex Open Source Fund for projects that use Codex in pull request review, maintainer automation, release workflows, or other core OSS work

Given GPT-5.4’s capabilities, the team reviews Codex Security access case by case to ensure these workflows get the care and diligence they require.

If you’re a core maintainer or run a widely used public project, apply. If your project doesn’t fit the criteria but plays an important role in the ecosystem, apply anyway and explain why.

By submitting an application, you agree to the [Codex for Open Source Program Terms](https://developers.openai.com/codex/codex-for-oss-terms).

[Apply today!](https://openai.com/form/codex-for-oss/)

community/meetups.md +17 −0 added

# Codex Meetups

![Stylized city cover for Orlando](https://developers.openai.com/codex/meetups/orlando.webp)

### Orlando

Orlando, FL, USA

Upcoming: March 12, 2026

Hosted by [Leonard](https://www.linkedin.com/in/lgofman/), [Michael](https://www.linkedin.com/in/michael-rusudev/), and [Carlos](https://www.linkedin.com/in/cataladev/)

[Register now](https://luma.com/39y2dvwx) [Share city](https://developers.openai.com/codex/community/meetups?city=Orlando)

concepts/customization.md +150 −0 added

# Customization

Customization is how you make Codex work the way your team works.

In Codex, customization comes from a few layers that work together:

- **Project guidance (`AGENTS.md`)** for persistent instructions
- **Skills** for reusable workflows and domain expertise
- **[MCP](https://developers.openai.com/codex/mcp)** for access to external tools and shared systems
- **[Multi-agents](https://developers.openai.com/codex/concepts/multi-agents)** for delegating work to specialized sub-agents

These are complementary, not competing. `AGENTS.md` shapes behavior, skills package repeatable processes, and [MCP](https://developers.openai.com/codex/mcp) connects Codex to systems outside the local workspace.

## AGENTS.md guidance

`AGENTS.md` gives Codex durable project guidance that travels with your repository and applies before the agent starts work. Keep it small.

Use it for the rules you want Codex to follow every time in a repo, such as:

- Build and test commands
- Review expectations
- Repo-specific conventions
- Directory-specific instructions

When the agent makes incorrect assumptions about your codebase, ask it to update `AGENTS.md` so the correction persists. Treat the file as a feedback loop.

**Updating `AGENTS.md`:** Start with only the instructions that matter. Codify recurring review feedback, put guidance in the closest directory where it applies, and tell the agent to update `AGENTS.md` when you correct something so future sessions inherit the fix.

### When to update `AGENTS.md`

- **Repeated mistakes**: If the agent makes the same mistake repeatedly, add a rule.
- **Too much reading**: If it finds the right files but reads too many documents, add routing guidance (which directories and files to prioritize).
- **Recurring PR feedback**: If you leave the same feedback more than once, codify it.
- **In GitHub**: In a pull request comment, tag `@codex` with a request (for example, `@codex add this to AGENTS.md`) to delegate the update to a cloud task.
- **Automate drift checks**: Use [automations](https://developers.openai.com/codex/app/automations) to run recurring checks (for example, daily) that look for guidance gaps and suggest what to add to `AGENTS.md`.

Pair `AGENTS.md` with infrastructure that enforces those rules: pre-commit hooks, linters, and type checkers catch issues before you see them, so the system gets smarter about preventing recurring mistakes.

Codex can load guidance from multiple locations: a global file in your Codex home directory (for you as a developer) and repo-specific files that teams can check in. Files closer to the working directory take precedence. Use the global file to shape how Codex communicates with you (for example, review style, verbosity, and defaults), and keep repo files focused on team and codebase rules.

- `~/.codex/`
  - `AGENTS.md`: global (for you as a developer)
- `repo-root/`
  - `AGENTS.md`: repo-specific (for your team)
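For example, a small repo-level `AGENTS.md` covering the categories above might look like this (the commands and conventions are illustrative, not prescribed):

```md
# AGENTS.md

## Build and test
- Install dependencies with `pnpm install`; run `pnpm test` before finishing a task.

## Conventions
- Use TypeScript strict mode; avoid `any`.
- New API endpoints need an integration test.

## Review expectations
- Keep changes focused; split refactors from behavior changes.
```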

[Custom instructions with AGENTS.md](https://developers.openai.com/codex/guides/agents-md)

## Skills

Skills give Codex reusable capabilities for repeatable workflows. They are often the best fit for these workflows because they support richer instructions, scripts, and references while staying reusable across tasks. Skills are loaded and visible to the agent (at least their metadata), so Codex can discover and choose them implicitly. This keeps rich workflows available without bloating context up front.

A skill is typically a `SKILL.md` file plus optional scripts, references, and assets.

- `my-skill/`
  - `SKILL.md`: required (instructions and metadata)
  - `scripts/`: optional executable code
  - `references/`: optional documentation
  - `assets/`: optional templates and resources

The skill directory can include a `scripts/` folder with CLI scripts that Codex invokes as part of the workflow (for example, to seed data or run validations). When the workflow needs external systems (issue trackers, design tools, docs servers), pair the skill with [MCP](https://developers.openai.com/codex/mcp).

Example `SKILL.md`:

```md
---
name: commit
description: Stage and commit changes in semantic groups. Use when the user wants to commit, organize commits, or clean up a branch before pushing.
---

1. Do not run `git add .`. Stage files in logical groups by purpose.
2. Group into separate commits: feat → test → docs → refactor → chore.
3. Write concise commit messages that match the change scope.
4. Keep each commit focused and reviewable.
```
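A `scripts/` helper for a skill like this can be an ordinary CLI script. The sketch below is hypothetical (the file name and the `type: summary` message convention are assumed from the skill's instructions):

```shell
#!/bin/sh
# Hypothetical scripts/check_commit_msg.sh for the commit skill:
# succeeds when a commit message matches the "type: summary" convention.
check_commit_msg() {
  case "$1" in
    feat:\ ?*|fix:\ ?*|docs:\ ?*|test:\ ?*|refactor:\ ?*|chore:\ ?*) return 0 ;;
    *) return 1 ;;
  esac
}

# When run directly with an argument, report the result.
if [ "$#" -gt 0 ]; then
  if check_commit_msg "$1"; then
    echo "ok"
  else
    echo "bad commit message: $1" >&2
    exit 1
  fi
fi
```

Codex can run a helper like this during the workflow instead of reasoning about the convention from scratch each time.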

Use skills for:

- Repeatable workflows (release steps, review routines, docs updates)
- Team-specific expertise
- Procedures that need examples, references, or helper scripts

Skills can be global (in your user directory, for you as a developer) or repo-specific (checked into `.agents/skills`, for your team). Put repo skills in `.agents/skills` when the workflow applies to that project; use your user directory for skills you want across all repos.

| Layer  | Global                 | Repo                                    |
| :----- | :--------------------- | :-------------------------------------- |
| AGENTS | `~/.codex/AGENTS.md`   | `AGENTS.md` in repo root or nested dirs |
| Skills | `$HOME/.agents/skills` | `.agents/skills` in repo                |

Codex uses progressive disclosure for skills:

- It starts with metadata (`name`, `description`) for discovery
- It loads `SKILL.md` only when a skill is chosen
- It reads references or runs scripts only when needed

Skills can be invoked explicitly, and Codex can also choose them implicitly when the task matches the skill description. Clear skill descriptions improve triggering reliability.

[Agent Skills](https://developers.openai.com/codex/skills)

## MCP

MCP (Model Context Protocol) is the standard way to connect Codex to external tools and context providers. It’s especially useful for remotely hosted systems such as Figma, Linear, Jira, GitHub, or internal knowledge services your team depends on.

Use MCP when Codex needs capabilities that live outside the local repo, such as issue trackers, design tools, browsers, or shared documentation systems.

A useful mental model:

- **Host**: Codex
- **Client**: the MCP connection inside Codex
- **Server**: the external tool or context provider

MCP servers can expose:

- **Tools** (actions)
- **Resources** (readable data)
- **Prompts** (reusable prompt templates)

This separation helps you reason about trust and capability boundaries. Some servers mainly provide context, while others expose powerful actions.
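One common way to wire this up is to register MCP servers in `config.toml`. The sketch below assumes the `mcp_servers` table documented in the Configuration Reference; the server name, launch command, and environment variable are illustrative:

```toml
# ~/.codex/config.toml
[mcp_servers.docs]
command = "npx"
args = ["-y", "example-docs-mcp-server"]  # hypothetical stdio MCP server
env = { DOCS_API_TOKEN = "..." }
```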

In practice, MCP is often most useful when paired with skills:

- A skill defines the workflow and names the MCP tools to use

[Model Context Protocol](https://developers.openai.com/codex/mcp)

## Multi-agents

You can create different agents with different roles and prompt them to use tools differently. For example, one agent might run specific testing commands and configurations, while another has MCP servers that fetch production logs for debugging. Each sub-agent stays focused and uses the right tools for its job.

[Multi-agents concepts](https://developers.openai.com/codex/concepts/multi-agents)

## Skills + MCP together

Skills plus MCP is where it all comes together: skills define repeatable workflows, and MCP connects them to external tools and systems. If a skill depends on MCP, declare that dependency in `agents/openai.yaml` so Codex can install and wire it automatically (see [Agent Skills](https://developers.openai.com/codex/skills)).

## Next step

Build in this order:

1. [Custom instructions with AGENTS.md](https://developers.openai.com/codex/guides/agents-md) so Codex follows your repo conventions. Add pre-commit hooks and linters to enforce those rules.
2. [Skills](https://developers.openai.com/codex/skills) so you never have the same conversation twice. Skills can include a `scripts/` directory with CLI scripts or pair with [MCP](https://developers.openai.com/codex/mcp) for external systems.
3. [MCP](https://developers.openai.com/codex/mcp) when workflows need external systems (Linear, Jira, docs servers, design tools).
4. [Multi-agents](https://developers.openai.com/codex/multi-agent) when you’re ready to delegate noisy or specialized tasks to sub-agents.

# Cyber Safety

Cybersecurity safeguards and trusted access for Codex users

[GPT-5.3-Codex](https://openai.com/index/introducing-gpt-5-3-codex/) is the first model we are treating as High cybersecurity capability under our [Preparedness Framework](https://cdn.openai.com/pdf/18a02b5d-6b67-4cec-ab64-68cdfbddebcd/preparedness-framework-v2.pdf), which requires additional safeguards. These safeguards include training the model to refuse clearly malicious requests like stealing credentials.

In addition to safety training, automated classifier-based monitors detect signals of suspicious cyber activity and route high-risk traffic to a less cyber-capable model (GPT-5.2). We expect a very small portion of traffic to be affected by these mitigations, and are working to refine our policies, classifiers, and in-product notifications.

concepts/multi-agents.md +53 −0 added

# Multi-agents

Codex can run multi-agent workflows by spawning specialized agents in parallel and collecting their results in one response.

This page explains the core concepts and tradeoffs. For setup, agent configuration, and examples, see [Multi-agents](https://developers.openai.com/codex/multi-agent).

## Why multi-agent workflows help

Even with large context windows, models have limits. If you flood the main conversation (where you’re defining requirements, constraints, and decisions) with noisy intermediate output such as exploration notes, test logs, stack traces, and command output, the session can become less reliable over time.

This is often described as:

- **Context pollution**: useful information gets buried under noisy intermediate output.
- **Context rot**: performance degrades as the conversation fills up with less relevant details.

For background, see Chroma’s writeup on [context rot](https://research.trychroma.com/context-rot).

Multi-agent workflows help by moving noisy work off the main thread:

- Keep the **main agent** focused on requirements, decisions, and final outputs.
- Run specialized **sub-agents** in parallel for exploration, tests, or log analysis.
- Return **summaries** from sub-agents instead of raw intermediate output.

As a starting point, use parallel agents for tasks that mostly read (exploration, tests, triage, and summarization). Be more careful with parallel write-heavy workflows, because multiple agents editing code at once can create conflicts and increase coordination overhead.

## Core terms

Codex uses a few related terms in multi-agent workflows:

- **Multi-agent**: A workflow where Codex runs multiple agents in parallel and combines their results.
- **Sub-agent**: A delegated agent that Codex starts to handle a specific task.
- **Agent thread**: The CLI thread for an agent, which you can inspect and switch between with `/agent`.

## Choosing models and reasoning

Different agents benefit from different model and reasoning settings.

`gpt-5.3-codex-spark` is available in research preview for ChatGPT Pro subscribers. See [Models](https://developers.openai.com/codex/models) for current availability. If you’re using Codex via the API, use GPT-5.2-Codex today.

### Model choice

- **`gpt-5.3-codex`**: Use for agents that need stronger reasoning, such as code review, security analysis, multi-step implementation, or tasks with ambiguous requirements. The main agent and agents that propose or apply edits usually fit here.
- **`gpt-5.3-codex-spark`**: Use for agents that prioritize speed over depth, such as exploration, read-heavy scans, or quick summarization tasks. Spark works well for parallel workers that return distilled results to the main agent.

### Reasoning effort (`model_reasoning_effort`)

- **`high`**: Use when an agent needs to trace complex logic, validate assumptions, or work through edge cases (for example, reviewer or security-focused agents).
- **`medium`**: A balanced default for most agents.
- **`low`**: Use when the task is straightforward and speed matters most.

Higher reasoning effort increases response time and token usage, but it can improve quality for complex work. For details, see [Models](https://developers.openai.com/codex/models), [Config basics](https://developers.openai.com/codex/config-basic), and [Configuration Reference](https://developers.openai.com/codex/config-reference).
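These per-agent choices can be captured as named profiles in `config.toml` and selected when you start a session. The keys (`model`, `model_reasoning_effort`) are documented in the Configuration Reference; the profile names below are illustrative:

```toml
# ~/.codex/config.toml (profile names are illustrative)
[profiles.reviewer]
model = "gpt-5.3-codex"
model_reasoning_effort = "high"

[profiles.explorer]
model = "gpt-5.3-codex-spark"
model_reasoning_effort = "low"
```

Picking the matching profile per run keeps fast exploration agents separate from slower, high-effort review agents.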

concepts/sandboxing.md +102 −0 added

# Sandboxing

Sandboxing is the boundary that lets Codex act autonomously without giving it unrestricted access to your machine. When Codex runs local commands in the **Codex app**, **IDE extension**, or **CLI**, those commands run inside a constrained environment instead of running with full access by default.

That environment defines what Codex can do on its own, such as which files it can modify and whether commands can use the network. When a task stays inside those boundaries, Codex can keep moving without stopping for confirmation. When it needs to go beyond them, Codex falls back to the approval flow.

Sandboxing and approvals are different controls that work together. The sandbox defines technical boundaries. The approval policy decides when Codex must stop and ask before crossing them.

## What the sandbox does

The sandbox applies to spawned commands, not just to Codex's built-in file operations. If Codex runs tools like `git`, package managers, or test runners, those commands inherit the same sandbox boundaries.

Codex uses platform-native enforcement on each OS. The implementation differs between macOS, Linux, WSL, and native Windows, but the idea is the same across surfaces: give the agent a bounded place to work so routine tasks can run autonomously inside clear limits.

## Why it matters

Sandboxing reduces approval fatigue. Instead of asking you to confirm every low-risk command, Codex can read files, make edits, and run routine project commands within the boundary you already approved.

It also gives you a clearer trust model for agentic work. You are not just trusting the agent's intentions; you are trusting that the agent is operating inside enforced limits. That makes it easier to let Codex work independently while still knowing when it will stop and ask for help.

## How you control it

Most people start with the permissions controls in the product.

In the Codex app and IDE, you choose a mode from the permissions selector under the composer or chat input. That selector lets you rely on Codex's default permissions, switch to full access, or use your custom configuration.

![Codex app permissions selector showing Default permissions, Full access, and Custom (config.toml)](/images/codex/app/permissions-selector-light.webp)

In the CLI, use [`/permissions`](https://developers.openai.com/codex/cli/slash-commands#update-permissions-with-permissions) to switch modes during a session.

## Configure defaults

If you want Codex to start with the same behavior every time, use a custom configuration. Codex stores those defaults in `config.toml`, its local settings file. [Config basics](https://developers.openai.com/codex/config-basic) explains how it works, and the [Configuration reference](https://developers.openai.com/codex/config-reference) documents the exact keys for `sandbox_mode`, `approval_policy`, and `sandbox_workspace_write.writable_roots`. Use those settings to decide how much autonomy Codex gets by default, which directories it can write to, and when it should pause for approval.

At a high level, the common sandbox modes are:

- `read-only`: Codex can inspect files, but it cannot edit files or run commands without approval.
- `workspace-write`: Codex can read files, edit within the workspace, and run routine local commands inside that boundary. This is the default low-friction mode for local work.
- `danger-full-access`: Codex runs without sandbox restrictions. This removes the filesystem and network boundaries and should be used only when you want Codex to act with full access.

The common approval policies are:

- `untrusted`: Codex asks before running commands that are not in its trusted set.
- `on-request`: Codex works inside the sandbox by default and asks when it needs to go beyond that boundary.
- `never`: Codex does not stop for approval prompts.

Full access means using `sandbox_mode = "danger-full-access"` together with `approval_policy = "never"`. By contrast, `--full-auto` is the lower-risk local automation preset: `sandbox_mode = "workspace-write"` and `approval_policy = "on-request"`.
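Putting those pieces together, a `config.toml` that keeps the sandbox on while widening the writable area might look like the sketch below (the extra path is illustrative; the keys come from the Configuration reference):

```toml
# ~/.codex/config.toml
sandbox_mode = "workspace-write"
approval_policy = "on-request"

[sandbox_workspace_write]
# Extra directory Codex may modify, beyond the current workspace
writable_roots = ["/Users/me/scratch"]
```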

If you need Codex to work across more than one directory, writable roots let you extend the places it can modify without removing the sandbox entirely. If you need a broader or narrower trust boundary, adjust the default sandbox mode and approval policy instead of relying on ad hoc exceptions.

When a workflow needs a specific exception, use [rules](https://developers.openai.com/codex/rules). Rules let you allow, prompt, or forbid command prefixes outside the sandbox, which is often a better fit than broadly expanding access. For a higher-level overview of approvals and sandbox behavior in the app, see [Codex app features](https://developers.openai.com/codex/app/features#approvals-and-sandboxing), and for the IDE-specific settings entry points, see [Codex IDE extension settings](https://developers.openai.com/codex/ide/settings).

Platform details live in the platform-specific docs. For native Windows setup, behavior, and troubleshooting, see [Windows](https://developers.openai.com/codex/windows). For admin requirements and organization-level constraints on sandboxing and approvals, see [Agent approvals & security](https://developers.openai.com/codex/agent-approvals-security).

# Advanced Configuration

More advanced configuration options for Codex local clients

Use these options when you need more control over providers, policies, and integrations. For a quick start, see [Config basics](https://developers.openai.com/codex/config-basic).

For background on project guidance, reusable capabilities, custom slash commands, multi-agent workflows, and integrations, see [Customization](https://developers.openai.com/codex/concepts/customization). For configuration keys, see [Configuration Reference](https://developers.openai.com/codex/config-reference).

## Profiles

Profiles let you save named sets of configuration values and switch between them from the CLI.


```toml
model = "gpt-5-codex"
approval_policy = "on-request"
model_catalog_json = "/Users/me/.codex/model-catalogs/default.json"

[profiles.deep-review]
model = "gpt-5-pro"
model_reasoning_effort = "high"
approval_policy = "never"
model_catalog_json = "/Users/me/.codex/model-catalogs/deep-review.json"

[profiles.lightweight]
model = "gpt-4.1"
```

To make a profile the default, add `profile = "deep-review"` at the top level of `config.toml`. Codex loads that profile unless you override it on the command line.

Profiles can also override `model_catalog_json`. When both the top level and the selected profile set `model_catalog_json`, Codex prefers the profile value.

## One-off overrides from the CLI

In addition to editing `~/.codex/config.toml`, you can override configuration for a single run from the CLI:


```shell
# Dedicated flag
codex --model gpt-5.4

# Generic key/value override (value is TOML, not JSON)
codex --config model='"gpt-5.4"'
codex --config sandbox_workspace_write.network_access=true
codex --config 'shell_environment_policy.include_only=["PATH","HOME"]'
```


## Approval policies and sandbox modes

Pick approval strictness (affects when Codex pauses) and sandbox level (affects file/network access).

For operational details that are easy to miss while editing `config.toml`, see [Common sandbox and approval combinations](https://developers.openai.com/codex/agent-approvals-security#common-sandbox-and-approval-combinations), [Protected paths in writable roots](https://developers.openai.com/codex/agent-approvals-security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/agent-approvals-security#network-access).

You can also use a granular reject policy (`approval_policy = { reject = { ... } }`) to auto-reject only selected prompt categories, such as sandbox approvals, `execpolicy` rule prompts, or MCP input requests (`mcp_elicitations`), while keeping other prompts interactive.
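For instance, here is a sketch of a reject policy that silences sandbox escalation prompts and MCP input requests while leaving rule prompts interactive; the field names follow the `approval_policy.reject.*` keys in the Configuration Reference:

```toml
# Auto-reject sandbox escalations and MCP input requests;
# execpolicy `prompt` rules still ask interactively.
approval_policy = { reject = { sandbox_approval = true, rules = false, mcp_elicitations = true } }
```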

```toml
approval_policy = "untrusted" # Other options: on-request, never, or { reject = { ... } }
sandbox_mode = "workspace-write"
allow_login_shell = false # Optional hardening: disallow login shells for shell tools

[sandbox_workspace_write]
exclude_tmpdir_env_var = false # Allow $TMPDIR

network_access = false # Opt in to outbound network
```

Need the complete key list (including profile-scoped overrides and requirements constraints)? See [Configuration Reference](https://developers.openai.com/codex/config-reference) and [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration).

In workspace-write mode, some environments keep `.git/` and `.codex/` read-only even when the rest of the workspace is writable. This is why commands like `git commit` may still require approval to run outside the


| `codex.tool.call` | counter | `tool`, `success` | Tool invocation count by tool name and success/failure. |
| `codex.tool.call.duration_ms` | histogram | `tool`, `success` | Tool execution duration in milliseconds by tool name and outcome. |

For more security and privacy guidance around telemetry, see [Security](https://developers.openai.com/codex/agent-approvals-security#monitoring-and-telemetry).

### Metrics

config-basic.md +20 −9


# Config basics

Codex reads configuration details from more than one location. Your personal defaults live in `~/.codex/config.toml`, and you can add project overrides with `.codex/config.toml` files. For security, Codex loads project config files only when you trust the project.
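For example, a trusted project could ship a project-scoped override alongside your personal defaults; the keys are from this page and the values are illustrative:

```toml
# <project>/.codex/config.toml — loaded only once the project is trusted;
# these keys override your personal defaults in ~/.codex/config.toml.
approval_policy = "on-request"
sandbox_mode = "workspace-write"
```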

## Codex configuration file


The CLI and IDE extension share the same configuration layers. You can use them to:

- Set the default model and provider.
- Configure [approval policies and sandbox settings](https://developers.openai.com/codex/agent-approvals-security#sandbox-and-approvals).
- Configure [MCP servers](https://developers.openai.com/codex/mcp).

## Configuration precedence


On managed machines, your organization may also enforce constraints via `requirements.toml` (for example, disallowing `approval_policy = "never"` or `sandbox_mode = "danger-full-access"`). See [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration) and [Admin-enforced requirements](https://developers.openai.com/codex/enterprise/managed-configuration#admin-enforced-requirements-requirementstoml).

## Common configuration options


Choose the model Codex uses by default in the CLI and IDE.

```toml
model = "gpt-5.4"
```

#### Approval prompts


```toml
approval_policy = "on-request"
```

For behavior differences between `untrusted`, `on-request`, and `never`, see [Run without approval prompts](https://developers.openai.com/codex/agent-approvals-security#run-without-approval-prompts) and [Common sandbox and approval combinations](https://developers.openai.com/codex/agent-approvals-security#common-sandbox-and-approval-combinations).

#### Sandbox level

Adjust how much filesystem and network access Codex has while executing commands.


```toml
sandbox_mode = "workspace-write"
```

For mode-by-mode behavior (including protected `.git`/`.codex` paths and network defaults), see [Sandbox and approvals](https://developers.openai.com/codex/agent-approvals-security#sandbox-and-approvals), [Protected paths in writable roots](https://developers.openai.com/codex/agent-approvals-security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/agent-approvals-security#network-access).

#### Windows sandbox mode

When running Codex natively on Windows, set the native sandbox mode to `elevated` in the `windows` table. Use `unelevated` only if you don't have administrator permissions or if elevated setup fails.

```toml
[windows]
sandbox = "elevated" # Recommended
# sandbox = "unelevated" # Fallback if admin permissions/setup are unavailable
```

#### Web search mode

Codex enables web search by default for local tasks and serves results from a web search cache. The cache is an OpenAI-maintained index of web results, so cached mode returns pre-indexed results instead of fetching live pages. This reduces exposure to prompt injection from arbitrary live content, but you should still treat web results as untrusted. If you are using `--yolo` or another [full access sandbox setting](https://developers.openai.com/codex/agent-approvals-security#common-sandbox-and-approval-combinations), web search defaults to live results. Choose a mode with `web_search`:

- `"cached"` (default) serves results from the web search cache.
- `"live"` fetches the most recent data from the web (same as `--search`).


| Key | Default | Maturity | Description |
| -------------------- | :-------------------: | ------------ | ---------------------------------------------------------------------------------------- |
| `apps` | false | Experimental | Enable ChatGPT Apps/connectors support |
| `apps_mcp_gateway` | false | Experimental | Route Apps MCP calls through `https://api.openai.com/v1/connectors/mcp/` instead of legacy routing |
| `collaboration_modes` | true | Stable | Enable collaboration modes such as plan mode |
| `multi_agent` | false | Experimental | Enable multi-agent collaboration tools |
| `personality` | true | Stable | Enable personality selection controls |
| `remote_models` | false | Experimental | Refresh remote model list before showing readiness |

config-reference.md +1177 −342


# Configuration Reference

Use this page as a searchable reference for Codex configuration files. For conceptual guidance and examples, start with [Config basics](https://developers.openai.com/codex/config-basic) and [Advanced Config](https://developers.openai.com/codex/config-advanced).

## `config.toml`

User-level configuration lives in `~/.codex/config.toml`. You can also add project-scoped overrides in `.codex/config.toml` files. Codex loads project-scoped config files only when you trust the project.

For sandbox and approval keys (`approval_policy`, `sandbox_mode`, and `sandbox_workspace_write.*`), pair this reference with [Sandbox and approvals](https://developers.openai.com/codex/agent-approvals-security#sandbox-and-approvals), [Protected paths in writable roots](https://developers.openai.com/codex/agent-approvals-security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/agent-approvals-security#network-access).

| Key | Type / Values | Details |
| --- | --- | --- |

| `agents.<name>.config_file` | `string (path)` | Path to a TOML config layer for that role; relative paths resolve from the config file that declares the role. |
| `agents.<name>.description` | `string` | Role guidance shown to Codex when choosing and spawning that agent type. |
| `agents.<name>.nickname_candidates` | `array<string>` | Optional pool of display nicknames for spawned agents in that role. |
| `agents.job_max_runtime_seconds` | `number` | Default per-worker timeout for `spawn_agents_on_csv` jobs. When unset, the tool falls back to 1800 seconds per worker. |
| `agents.max_depth` | `number` | Maximum nesting depth allowed for spawned agent threads (root sessions start at depth 0; default: 1). |
| `agents.max_threads` | `number` | Maximum number of agent threads that can be open concurrently. Defaults to `6` when unset. |
| `allow_login_shell` | `boolean` | Allow shell-based tools to use login-shell semantics. Defaults to `true`; when `false`, `login = true` requests are rejected and omitted `login` defaults to non-login shells. |
| `analytics.enabled` | `boolean` | Enable or disable analytics for this machine/profile. When unset, the client default applies. |
| `approval_policy` | `untrusted \| on-request \| never \| { reject = { sandbox_approval = bool, rules = bool, mcp_elicitations = bool } }` | Controls when Codex pauses for approval before executing commands. You can also use `approval_policy = { reject = { ... } }` to auto-reject specific prompt categories while keeping other prompts interactive. `on-failure` is deprecated; use `on-request` for interactive runs or `never` for non-interactive runs. |
| `approval_policy.reject.mcp_elicitations` | `boolean` | When `true`, MCP elicitation prompts are auto-rejected instead of shown to the user. |
| `approval_policy.reject.rules` | `boolean` | When `true`, approvals triggered by execpolicy `prompt` rules are auto-rejected. |
| `approval_policy.reject.sandbox_approval` | `boolean` | When `true`, sandbox escalation approval prompts are auto-rejected. |
| `apps._default.destructive_enabled` | `boolean` | Default allow/deny for app tools with `destructive_hint = true`. |
| `apps._default.enabled` | `boolean` | Default app enabled state for all apps unless overridden per app. |
| `apps._default.open_world_enabled` | `boolean` | Default allow/deny for app tools with `open_world_hint = true`. |
| `apps.<id>.default_tools_approval_mode` | `auto \| prompt \| approve` | Default approval behavior for tools in this app unless a per-tool override exists. |
| `apps.<id>.default_tools_enabled` | `boolean` | Default enabled state for tools in this app unless a per-tool override exists. |
| `apps.<id>.destructive_enabled` | `boolean` | Allow or block tools in this app that advertise `destructive_hint = true`. |
| `apps.<id>.enabled` | `boolean` | Enable or disable a specific app/connector by id (default: true). |
| `apps.<id>.open_world_enabled` | `boolean` | Allow or block tools in this app that advertise `open_world_hint = true`. |
| `apps.<id>.tools.<tool>.approval_mode` | `auto \| prompt \| approve` | Per-tool approval behavior override for a single app tool. |
| `apps.<id>.tools.<tool>.enabled` | `boolean` | Per-tool enabled override for an app tool (for example `repos/list`). |
| `background_terminal_max_timeout` | `number` | Maximum poll window in milliseconds for empty `write_stdin` polls (background terminal polling). Default: `300000` (5 minutes). Replaces the older `background_terminal_timeout` key. |
| `chatgpt_base_url` | `string` | Override the base URL used during the ChatGPT login flow. |
| `check_for_update_on_startup` | `boolean` | Check for Codex updates on startup (set to false only when updates are centrally managed). |
| `cli_auth_credentials_store` | `file \| keyring \| auto` | Control where the CLI stores cached credentials (file-based auth.json vs OS keychain). |
| `commit_attribution` | `string` | Override the commit co-author trailer text. Set an empty string to disable automatic attribution. |
| `compact_prompt` | `string` | Inline override for the history compaction prompt. |
| `developer_instructions` | `string` | Additional developer instructions injected into the session (optional). |
| `disable_paste_burst` | `boolean` | Disable burst-paste detection in the TUI. |
| `experimental_compact_prompt_file` | `string (path)` | Load the compaction prompt override from a file (experimental). |
| `experimental_use_unified_exec_tool` | `boolean` | Legacy name for enabling unified exec; prefer `[features].unified_exec` or `codex --enable unified_exec`. |
| `features.apps` | `boolean` | Enable ChatGPT Apps/connectors support (experimental). |
| `features.apps_mcp_gateway` | `boolean` | Route Apps MCP calls through the OpenAI connectors MCP gateway (`https://api.openai.com/v1/connectors/mcp/`) instead of legacy routing (experimental). |
| `features.artifact` | `boolean` | Enable native artifact tools such as slides and spreadsheets (under development). |
| `features.child_agents_md` | `boolean` | Append AGENTS.md scope/precedence guidance even when no AGENTS.md is present (experimental). |
| `features.collaboration_modes` | `boolean` | Legacy toggle for collaboration modes. Plan and default modes are available in current builds without setting this key. |
| `features.default_mode_request_user_input` | `boolean` | Allow `request_user_input` in default collaboration mode (under development; off by default). |
| `features.elevated_windows_sandbox` | `boolean` | Legacy toggle for an earlier elevated Windows sandbox rollout. Current builds do not use it. |
| `features.enable_request_compression` | `boolean` | Compress streaming request bodies with zstd when supported (stable; on by default). |
| `features.experimental_windows_sandbox` | `boolean` | Legacy toggle for an earlier Windows sandbox rollout. Current builds do not use it. |
| `features.fast_mode` | `boolean` | Enable Fast mode selection and the `service_tier = "fast"` path (stable; on by default). |
| `features.image_detail_original` | `boolean` | Allow image outputs with `detail = "original"` on supported models (under development). |
| `features.image_generation` | `boolean` | Enable the built-in image generation tool (under development). |
| `features.multi_agent` | `boolean` | Enable multi-agent collaboration tools (`spawn_agent`, `send_input`, `resume_agent`, `wait`, `close_agent`, and `spawn_agents_on_csv`) (experimental; off by default). |
| `features.personality` | `boolean` | Enable personality selection controls (stable; on by default). |
| `features.powershell_utf8` | `boolean` | Force PowerShell UTF-8 output. Enabled by default on Windows and off elsewhere. |
| `features.prevent_idle_sleep` | `boolean` | Prevent the machine from sleeping while a turn is actively running (experimental; off by default). |
| `features.remote_models` | `boolean` | Legacy toggle for an older remote-model readiness flow. Current builds do not use it. |
| `features.request_rule` | `boolean` | Legacy toggle for Smart approvals. Current builds include this behavior by default, so most users can leave this unset. |
| `features.responses_websockets` | `boolean` | Prefer the Responses API WebSocket transport for supported providers (under development). |
| `features.responses_websockets_v2` | `boolean` | Enable Responses API WebSocket v2 mode (under development). |
| `features.runtime_metrics` | `boolean` | Show runtime metrics summary in TUI turn separators (experimental). |
| `features.search_tool` | `boolean` | Legacy toggle for an older Apps discovery flow. Current builds do not use it. |
| `features.shell_snapshot` | `boolean` | Snapshot shell environment to speed up repeated commands (stable; on by default). |
| `features.shell_tool` | `boolean` | Enable the default `shell` tool for running commands (stable; on by default). |
| `features.skill_env_var_dependency_prompt` | `boolean` | Prompt for missing skill environment-variable dependencies (under development). |
| `features.skill_mcp_dependency_install` | `boolean` | Allow prompting and installing missing MCP dependencies for skills (stable; on by default). |
| `features.sqlite` | `boolean` | Enable SQLite-backed state persistence (stable; on by default). |
| `features.steer` | `boolean` | Legacy toggle from an earlier Enter/Tab steering rollout. Current builds always use the current steering behavior. |
| `features.undo` | `boolean` | Enable undo support (stable; off by default). |
| `features.unified_exec` | `boolean` | Use the unified PTY-backed exec tool (stable; enabled by default except on Windows). |
| `features.use_linux_sandbox_bwrap` | `boolean` | Use the bubblewrap-based Linux sandbox pipeline (experimental; off by default). |
| `features.web_search` | `boolean` | Deprecated legacy toggle; prefer the top-level `web_search` setting. |
| `features.web_search_cached` | `boolean` | Deprecated legacy toggle. When `web_search` is unset, true maps to `web_search = "cached"`. |


| `hide_agent_reasoning` | `boolean` | Suppress reasoning events in both the TUI and `codex exec` output. |
| `history.max_bytes` | `number` | If set, caps the history file size in bytes by dropping oldest entries. |
| `history.persistence` | `save-all \| none` | Control whether Codex saves session transcripts to history.jsonl. |
| `instructions` | `string` | Reserved for future use; prefer `model_instructions_file` or `AGENTS.md`. |
| `log_dir` | `string (path)` | Directory where Codex writes log files (for example `codex-tui.log`); defaults to `$CODEX_HOME/log`. |
| `mcp_oauth_callback_port` | `integer` | Optional fixed port for the local HTTP callback server used during MCP OAuth login. When unset, Codex binds to an ephemeral port chosen by the OS. |
| `mcp_oauth_callback_url` | `string` | Optional redirect URI override for MCP OAuth login (for example, a devbox ingress URL). `mcp_oauth_callback_port` still controls the callback listener port. |
| `mcp_oauth_credentials_store` | `auto \| file \| keyring` | Preferred store for MCP OAuth credentials. |
| `mcp_servers.<id>.args` | `array<string>` | Arguments passed to the MCP stdio server command. |
| `mcp_servers.<id>.bearer_token_env_var` | `string` | Environment variable sourcing the bearer token for an MCP HTTP server. |


69| `mcp_servers.<id>.env_http_headers` | `map<string,string>` | HTTP headers populated from environment variables for an MCP HTTP server. |99| `mcp_servers.<id>.env_http_headers` | `map<string,string>` | HTTP headers populated from environment variables for an MCP HTTP server. |

70| `mcp_servers.<id>.env_vars` | `array<string>` | Additional environment variables to whitelist for an MCP stdio server. |100| `mcp_servers.<id>.env_vars` | `array<string>` | Additional environment variables to whitelist for an MCP stdio server. |

71| `mcp_servers.<id>.http_headers` | `map<string,string>` | Static HTTP headers included with each MCP HTTP request. |101| `mcp_servers.<id>.http_headers` | `map<string,string>` | Static HTTP headers included with each MCP HTTP request. |

102| `mcp_servers.<id>.oauth_resource` | `string` | Optional RFC 8707 OAuth resource parameter to include during MCP login. |

72| `mcp_servers.<id>.required` | `boolean` | When true, fail startup/resume if this enabled MCP server cannot initialize. |103| `mcp_servers.<id>.required` | `boolean` | When true, fail startup/resume if this enabled MCP server cannot initialize. |

104| `mcp_servers.<id>.scopes` | `array<string>` | OAuth scopes to request when authenticating to that MCP server. |

73| `mcp_servers.<id>.startup_timeout_ms` | `number` | Alias for `startup_timeout_sec` in milliseconds. |105| `mcp_servers.<id>.startup_timeout_ms` | `number` | Alias for `startup_timeout_sec` in milliseconds. |

74| `mcp_servers.<id>.startup_timeout_sec` | `number` | Override the default 10s startup timeout for an MCP server. |106| `mcp_servers.<id>.startup_timeout_sec` | `number` | Override the default 10s startup timeout for an MCP server. |

75| `mcp_servers.<id>.tool_timeout_sec` | `number` | Override the default 60s per-tool timeout for an MCP server. |107| `mcp_servers.<id>.tool_timeout_sec` | `number` | Override the default 60s per-tool timeout for an MCP server. |

76| `mcp_servers.<id>.url` | `string` | Endpoint for an MCP streamable HTTP server. |108| `mcp_servers.<id>.url` | `string` | Endpoint for an MCP streamable HTTP server. |
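A minimal sketch combining these keys, with one stdio server and one streamable HTTP server (the ids `docs` and `issues`, the `docs-mcp` binary, and all endpoint/header values are hypothetical; `command` for stdio servers is assumed from the broader `mcp_servers` schema):

```toml
# Stdio server: Codex launches the process and speaks MCP over stdin/stdout.
[mcp_servers.docs]
command = "docs-mcp"              # assumed launcher binary
args = ["--root", "./docs"]
env_vars = ["DOCS_TOKEN"]         # extra env vars to whitelist for the child
required = true                    # fail startup/resume if it cannot initialize
startup_timeout_sec = 20           # override the 10s default
tool_timeout_sec = 120             # override the 60s per-tool default

# Streamable HTTP server with OAuth metadata.
[mcp_servers.issues]
url = "https://mcp.example.com/issues"
bearer_token_env_var = "ISSUES_TOKEN"
http_headers = { "X-Team" = "platform" }
oauth_resource = "https://mcp.example.com"   # RFC 8707 resource parameter
scopes = ["read", "write"]
```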

| `model` | `string` | Model to use (e.g., `gpt-5-codex`). |
| `model_auto_compact_token_limit` | `number` | Token threshold that triggers automatic history compaction (unset uses model defaults). |
| `model_catalog_json` | `string (path)` | Optional path to a JSON model catalog loaded on startup. Profile-level `profiles.<name>.model_catalog_json` can override this per profile. |
| `model_context_window` | `number` | Context window tokens available to the active model. |
| `model_instructions_file` | `string (path)` | Replacement for built-in instructions instead of `AGENTS.md`. |
| `model_provider` | `string` | Provider id from `model_providers` (default: `openai`). |
| `model_providers.<id>.requires_openai_auth` | `boolean` | The provider uses OpenAI authentication (defaults to false). |
| `model_providers.<id>.stream_idle_timeout_ms` | `number` | Idle timeout for SSE streams in milliseconds (default: 300000). |
| `model_providers.<id>.stream_max_retries` | `number` | Retry count for SSE streaming interruptions (default: 5). |
| `model_providers.<id>.supports_websockets` | `boolean` | Whether that provider supports the Responses API WebSocket transport. |
| `model_providers.<id>.wire_api` | `responses` | Protocol used by the provider. `responses` is the only supported value, and it is the default when omitted. |
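A custom provider entry might be sketched as follows (the provider id `corp` is hypothetical; endpoint/auth fields not listed in this section are omitted rather than guessed):

```toml
model_provider = "corp"            # select the provider defined below

[model_providers.corp]             # hypothetical provider id
wire_api = "responses"             # only supported value; also the default
requires_openai_auth = false
stream_idle_timeout_ms = 300000    # SSE idle timeout (the default)
stream_max_retries = 5             # SSE retry count (the default)
supports_websockets = false
```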

| `model_reasoning_effort` | `minimal | low | medium | high | xhigh` | Adjust reasoning effort for supported models (Responses API only; `xhigh` is model-dependent). |
| `model_reasoning_summary` | `auto | concise | detailed | none` | Select reasoning summary detail or disable summaries entirely. |
| `model_supports_reasoning_summaries` | `boolean` | Force Codex to send or not send reasoning metadata. |
| `model_verbosity` | `low | medium | high` | Optional GPT-5 Responses API verbosity override; when unset, the selected model/preset default is used. |
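The model-tuning keys above combine like so (the specific values chosen are illustrative):

```toml
model = "gpt-5-codex"
model_reasoning_effort = "high"       # minimal | low | medium | high | xhigh
model_reasoning_summary = "concise"   # or "none" to disable summaries
model_verbosity = "low"               # leave unset to use the model/preset default
```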

| `notice.hide_full_access_warning` | `boolean` | Track acknowledgement of the full access warning prompt. |
| `notice.hide_gpt-5.1-codex-max_migration_prompt` | `boolean` | Track acknowledgement of the gpt-5.1-codex-max migration prompt. |
| `notice.hide_gpt5_1_migration_prompt` | `boolean` | Track acknowledgement of the GPT-5.1 migration prompt. |


| `otel.exporter.<id>.tls.client-certificate` | `string` | Client certificate path for OTEL exporter TLS. |
| `otel.exporter.<id>.tls.client-private-key` | `string` | Client private key path for OTEL exporter TLS. |
| `otel.log_user_prompt` | `boolean` | Opt in to exporting raw user prompts with OpenTelemetry logs. |
| `otel.metrics_exporter` | `none | statsig | otlp-http | otlp-grpc` | Select the OpenTelemetry metrics exporter (defaults to `statsig`). |
| `otel.trace_exporter` | `none | otlp-http | otlp-grpc` | Select the OpenTelemetry trace exporter and provide any endpoint metadata. |
| `otel.trace_exporter.<id>.endpoint` | `string` | Trace exporter endpoint for OTEL traces. |
| `otel.trace_exporter.<id>.headers` | `map<string,string>` | Static headers included with OTEL trace exporter requests. |
| `otel.trace_exporter.<id>.tls.ca-certificate` | `string` | CA certificate path for OTEL trace exporter TLS. |
| `otel.trace_exporter.<id>.tls.client-certificate` | `string` | Client certificate path for OTEL trace exporter TLS. |
| `otel.trace_exporter.<id>.tls.client-private-key` | `string` | Client private key path for OTEL trace exporter TLS. |
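One plausible reading of the `otel.trace_exporter.<id>.*` keys is an inline table keyed by exporter id; the nesting shape and endpoint below are assumptions, not confirmed by this section:

```toml
[otel]
log_user_prompt = false          # opt in explicitly before exporting raw prompts
metrics_exporter = "statsig"     # the default; or none | otlp-http | otlp-grpc
# Exporter id selects the transport; endpoint/headers ride along (assumed shape).
trace_exporter = { otlp-http = { endpoint = "https://otel.example.com/v1/traces" } }
```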

| `permissions.network.admin_url` | `string` | Admin endpoint for the managed network proxy. |
| `permissions.network.allow_local_binding` | `boolean` | Permit local bind/listen operations through the managed proxy. |
| `permissions.network.allow_unix_sockets` | `array<string>` | Allowlist of Unix socket paths permitted through the managed proxy. |
| `permissions.network.allow_upstream_proxy` | `boolean` | Allow the managed proxy to chain to another upstream proxy. |
| `permissions.network.allowed_domains` | `array<string>` | Allowlist of domains permitted through the managed proxy. |
| `permissions.network.dangerously_allow_all_unix_sockets` | `boolean` | Allow the proxy to use arbitrary Unix sockets instead of the default restricted set. |
| `permissions.network.dangerously_allow_non_loopback_admin` | `boolean` | Permit non-loopback bind addresses for the managed proxy admin listener. |
| `permissions.network.dangerously_allow_non_loopback_proxy` | `boolean` | Permit non-loopback bind addresses for the managed proxy listener. |
| `permissions.network.denied_domains` | `array<string>` | Denylist of domains blocked by the managed proxy. |
| `permissions.network.enable_socks5` | `boolean` | Expose a SOCKS5 listener from the managed network proxy. |
| `permissions.network.enable_socks5_udp` | `boolean` | Allow UDP over the SOCKS5 listener when enabled. |
| `permissions.network.enabled` | `boolean` | Enable the managed network proxy configuration for subprocesses. |
| `permissions.network.mode` | `limited | full` | Network proxy mode used for subprocess traffic. |
| `permissions.network.proxy_url` | `string` | HTTP proxy endpoint used by the managed network proxy. |
| `permissions.network.socks_url` | `string` | SOCKS5 proxy endpoint used by the managed network proxy. |
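A restrictive managed-proxy setup could be sketched as (the domain lists are hypothetical examples):

```toml
[permissions.network]
enabled = true
mode = "limited"                  # or "full"
allowed_domains = ["github.com", "pypi.org"]       # hypothetical allowlist
denied_domains = ["metadata.internal.example"]     # hypothetical denylist
allow_local_binding = false
enable_socks5 = false
```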

| `personality` | `none | friendly | pragmatic` | Default communication style for models that advertise `supportsPersonality`; can be overridden per thread/turn or via `/personality`. |
| `plan_mode_reasoning_effort` | `none | minimal | low | medium | high | xhigh` | Plan-mode-specific reasoning override. When unset, Plan mode uses its built-in preset default. |
| `profile` | `string` | Default profile applied at startup (equivalent to `--profile`). |
| `profiles.<name>.*` | `various` | Profile-scoped overrides for any of the supported configuration keys. |

| `profiles.<name>.analytics.enabled` | `boolean` | Profile-scoped analytics enablement override. |
| `profiles.<name>.experimental_use_unified_exec_tool` | `boolean` | Legacy name for enabling unified exec; prefer `[features].unified_exec`. |
| `profiles.<name>.model_catalog_json` | `string (path)` | Profile-scoped model catalog JSON path override (applied on startup only; overrides the top-level `model_catalog_json` for that profile). |
| `profiles.<name>.model_instructions_file` | `string (path)` | Profile-scoped replacement for the built-in instruction file. |
| `profiles.<name>.oss_provider` | `lmstudio | ollama` | Profile-scoped OSS provider for `--oss` sessions. |
| `profiles.<name>.personality` | `none | friendly | pragmatic` | Profile-scoped communication style override for supported models. |
| `profiles.<name>.plan_mode_reasoning_effort` | `none | minimal | low | medium | high | xhigh` | Profile-scoped Plan-mode reasoning override. |
| `profiles.<name>.service_tier` | `flex | fast` | Profile-scoped service tier preference for new turns. |
| `profiles.<name>.tools_view_image` | `boolean` | Enable or disable the `view_image` tool in that profile. |
| `profiles.<name>.web_search` | `disabled | cached | live` | Profile-scoped web search mode override (default: `"cached"`). |
| `profiles.<name>.windows.sandbox` | `unelevated | elevated` | Profile-scoped Windows sandbox mode override. |
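Profiles bundle these overrides under one name; a sketch with a hypothetical `deep-review` profile (the instruction-file path and values are illustrative):

```toml
profile = "deep-review"            # applied at startup, same as --profile

[profiles.deep-review]             # hypothetical profile name
model_instructions_file = "~/.codex/review-instructions.md"
personality = "pragmatic"
plan_mode_reasoning_effort = "high"
service_tier = "flex"
web_search = "live"
tools_view_image = true
```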

| `project_doc_fallback_filenames` | `array<string>` | Additional filenames to try when `AGENTS.md` is missing. |
| `project_doc_max_bytes` | `number` | Maximum bytes read from `AGENTS.md` when building project instructions. |
| `project_root_markers` | `array<string>` | List of project root marker filenames; used when searching parent directories for the project root. |


| `sandbox_workspace_write.exclude_tmpdir_env_var` | `boolean` | Exclude `$TMPDIR` from writable roots in workspace-write mode. |
| `sandbox_workspace_write.network_access` | `boolean` | Allow outbound network access inside the workspace-write sandbox. |
| `sandbox_workspace_write.writable_roots` | `array<string>` | Additional writable roots when `sandbox_mode = "workspace-write"`. |
| `service_tier` | `flex | fast` | Preferred service tier for new turns. `fast` is honored only when the `features.fast_mode` gate is enabled. |
| `shell_environment_policy.exclude` | `array<string>` | Glob patterns for removing environment variables after the defaults. |
| `shell_environment_policy.experimental_use_profile` | `boolean` | Use the user shell profile when spawning subprocesses. |
| `shell_environment_policy.ignore_default_excludes` | `boolean` | Keep variables containing KEY/SECRET/TOKEN before other filters run. |
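The sandbox and environment-policy tables are often tuned together; a sketch (the extra writable root and glob patterns are hypothetical):

```toml
[sandbox_workspace_write]
network_access = false
exclude_tmpdir_env_var = false
writable_roots = ["/var/cache/myapp"]    # hypothetical extra writable root

[shell_environment_policy]
ignore_default_excludes = false          # keep stripping KEY/SECRET/TOKEN vars
exclude = ["AWS_*", "AZURE_*"]           # hypothetical glob patterns
```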


| `skills.config` | `array<object>` | Per-skill enablement overrides stored in config.toml. |
| `skills.config.<index>.enabled` | `boolean` | Enable or disable the referenced skill. |
| `skills.config.<index>.path` | `string (path)` | Path to a skill folder containing `SKILL.md`. |
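Because `skills.config` is an array of objects, TOML's array-of-tables syntax fits naturally (the skill paths below are hypothetical):

```toml
[[skills.config]]
path = "~/.codex/skills/release-notes"   # folder containing SKILL.md (hypothetical)
enabled = true

[[skills.config]]
path = "~/.codex/skills/legacy-skill"    # hypothetical; disabled but kept on disk
enabled = false
```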

| `sqlite_home` | `string (path)` | Directory where Codex stores the SQLite-backed state DB used by agent jobs and other resumable runtime state. |
| `suppress_unstable_features_warning` | `boolean` | Suppress the warning that appears when under-development feature flags are enabled. |
| `tool_output_token_limit` | `number` | Token budget for storing individual tool/function outputs in history. |
| `tools.view_image` | `boolean` | Enable the local-image attachment tool `view_image`. |

| `tools.web_search` | `boolean` | Deprecated legacy toggle for web search; prefer the top-level `web_search` setting. |
| `tui` | `table` | TUI-specific options such as enabling inline desktop notifications. |
| `tui.alternate_screen` | `auto | always | never` | Control alternate screen usage for the TUI (default: auto; auto skips it in Zellij to preserve scrollback). |
| `tui.animations` | `boolean` | Enable terminal animations (welcome screen, shimmer, spinner) (default: true). |
| `tui.model_availability_nux.<model>` | `integer` | Internal startup-tooltip state keyed by model slug. |
| `tui.notification_method` | `auto | osc9 | bel` | Notification method for unfocused terminal notifications (default: auto). |
| `tui.notifications` | `boolean | array<string>` | Enable TUI notifications; optionally restrict to specific event types. |
| `tui.show_tooltips` | `boolean` | Show onboarding tooltips in the TUI welcome screen (default: true). |
| `tui.status_line` | `array<string> | null` | Ordered list of TUI footer status-line item identifiers. `null` disables the status line. |
| `tui.theme` | `string` | Syntax-highlighting theme override (kebab-case theme name). |
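A `[tui]` block combining the display options above (the event name in `notifications` and the theme name are hypothetical examples):

```toml
[tui]
alternate_screen = "auto"        # default; skips the alt screen in Zellij
animations = true
notifications = ["agent-turn-complete"]   # hypothetical event-type filter
notification_method = "auto"
show_tooltips = true
theme = "gruvbox-dark"           # hypothetical kebab-case theme name
```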

| `web_search` | `disabled | cached | live` | Web search mode (default: `"cached"`; cached uses an OpenAI-maintained index and does not fetch live pages; if you use `--yolo` or another full access sandbox setting, it defaults to `"live"`). Use `"live"` to fetch the most recent data from the web, or `"disabled"` to remove the tool. |
| `windows_wsl_setup_acknowledged` | `boolean` | Track Windows onboarding acknowledgement (Windows only). |
| `windows.sandbox` | `unelevated | elevated` | Windows-only native sandbox mode when running Codex natively on Windows. |


| `agents.<name>.nickname_candidates` | `array<string>` | Optional pool of display nicknames for spawned agents in that role. |
| `agents.job_max_runtime_seconds` | `number` | Default per-worker timeout for `spawn_agents_on_csv` jobs. When unset, the tool falls back to 1800 seconds per worker. |
| `agents.max_depth` | `number` | Maximum nesting depth allowed for spawned agent threads (root sessions start at depth 0; default: 1). |
| `agents.max_threads` | `number` | Maximum number of agent threads that can be open concurrently. Defaults to `6` when unset. |
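The `agents` table mixes top-level limits with per-role sub-tables; a sketch (the role name `researcher` and the nicknames are hypothetical):

```toml
[agents]
max_threads = 8                   # default is 6
max_depth = 2                     # default is 1; root sessions start at depth 0
job_max_runtime_seconds = 3600    # default is 1800 per worker

[agents.researcher]               # hypothetical role name
nickname_candidates = ["Ada", "Grace", "Edsger"]
```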

| `allow_login_shell` | `boolean` | Allow shell-based tools to use login-shell semantics. Defaults to `true`; when `false`, `login = true` requests are rejected and omitted `login` defaults to non-login shells. |
| `analytics.enabled` | `boolean` | Enable or disable analytics for this machine/profile. When unset, the client default applies. |

| `approval_policy` | `untrusted | on-request | never | { reject = { sandbox_approval = bool, rules = bool, mcp_elicitations = bool } }` | Controls when Codex pauses for approval before executing commands. You can also use `approval_policy = { reject = { ... } }` to auto-reject specific prompt categories while keeping other prompts interactive. `on-failure` is deprecated; use `on-request` for interactive runs or `never` for non-interactive runs. |
| `approval_policy.reject.mcp_elicitations` | `boolean` | When `true`, MCP elicitation prompts are auto-rejected instead of shown to the user. |
| `approval_policy.reject.rules` | `boolean` | When `true`, approvals triggered by execpolicy `prompt` rules are auto-rejected. |
| `approval_policy.reject.sandbox_approval` | `boolean` | When `true`, sandbox escalation approval prompts are auto-rejected. |
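The two forms of `approval_policy` shown above are mutually exclusive in a single config; a sketch of each (only one assignment may be active at a time):

```toml
# Simple string form:
# approval_policy = "on-request"

# Structured form: auto-reject selected prompt categories while keeping
# every other prompt interactive.
approval_policy = { reject = { sandbox_approval = true, rules = false, mcp_elicitations = true } }
```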

| `apps._default.destructive_enabled` | `boolean` | Default allow/deny for app tools with `destructive_hint = true`. |
| `apps._default.enabled` | `boolean` | Default app enabled state for all apps unless overridden per app. |
| `apps._default.open_world_enabled` | `boolean` | Default allow/deny for app tools with `open_world_hint = true`. |
| `apps.<id>.default_tools_approval_mode` | `auto | prompt | approve` | Default approval behavior for tools in this app unless a per-tool override exists. |
| `apps.<id>.default_tools_enabled` | `boolean` | Default enabled state for tools in this app unless a per-tool override exists. |
| `apps.<id>.destructive_enabled` | `boolean` | Allow or block tools in this app that advertise `destructive_hint = true`. |
| `apps.<id>.enabled` | `boolean` | Enable or disable a specific app/connector by id (default: true). |
| `apps.<id>.open_world_enabled` | `boolean` | Allow or block tools in this app that advertise `open_world_hint = true`. |
| `apps.<id>.tools.<tool>.approval_mode` | `auto | prompt | approve` | Per-tool approval behavior override for a single app tool. |
| `apps.<id>.tools.<tool>.enabled` | `boolean` | Per-tool enabled override for an app tool (for example `repos/list`). |
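The `apps` keys layer from `_default` down to per-tool overrides; a sketch (the app id `github` is hypothetical; the tool name `repos/list` is the example the table itself gives and needs quoting because of the slash):

```toml
[apps._default]
enabled = true
destructive_enabled = false       # block destructive_hint tools by default
open_world_enabled = false        # block open_world_hint tools by default

[apps.github]                     # hypothetical app id
enabled = true
default_tools_enabled = true
default_tools_approval_mode = "prompt"   # auto | prompt | approve

[apps.github.tools."repos/list"]  # per-tool override wins over app defaults
enabled = true
approval_mode = "auto"
```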

| `background_terminal_max_timeout` | `number` | Maximum poll window in milliseconds for empty `write_stdin` polls (background terminal polling). Default: `300000` (5 minutes). Replaces the older `background_terminal_timeout` key. |
| `chatgpt_base_url` | `string` | Override the base URL used during the ChatGPT login flow. |

| `check_for_update_on_startup` | `boolean` | Check for Codex updates on startup (set to false only when updates are centrally managed). |
| `cli_auth_credentials_store` | `file | keyring | auto` | Control where the CLI stores cached credentials (file-based auth.json vs OS keychain). |
| `commit_attribution` | `string` | Override the commit co-author trailer text. Set an empty string to disable automatic attribution. |
| `compact_prompt` | `string` | Inline override for the history compaction prompt. |
| `developer_instructions` | `string` | Additional developer instructions injected into the session (optional). |

| `disable_paste_burst` | `boolean` | Disable burst-paste detection in the TUI. |
| `experimental_compact_prompt_file` | `string (path)` | Load the compaction prompt override from a file (experimental). |
| `experimental_use_unified_exec_tool` | `boolean` | Legacy name for enabling unified exec; prefer `[features].unified_exec` or `codex --enable unified_exec`. |

| `features.apps` | `boolean` | Enable ChatGPT Apps/connectors support (experimental). |
| `features.apps_mcp_gateway` | `boolean` | Route Apps MCP calls through the OpenAI connectors MCP gateway (`https://api.openai.com/v1/connectors/mcp/`) instead of legacy routing (experimental). |
| `features.artifact` | `boolean` | Enable native artifact tools such as slides and spreadsheets (under development). |
| `features.child_agents_md` | `boolean` | Append AGENTS.md scope/precedence guidance even when no AGENTS.md is present (experimental). |
| `features.collaboration_modes` | `boolean` | Legacy toggle for collaboration modes. Plan and default modes are available in current builds without setting this key. |
| `features.default_mode_request_user_input` | `boolean` | Allow `request_user_input` in default collaboration mode (under development; off by default). |

620 682 

621Key683Key

622 684 

623`forced_chatgpt_workspace_id`685`features.elevated_windows_sandbox`

624 686 

625Type / Values687Type / Values

626 688 

627`string (uuid)`689`boolean`

628 690 

629Details691Details

630 692 

631Limit ChatGPT logins to a specific workspace identifier.693Legacy toggle for an earlier elevated Windows sandbox rollout. Current builds do not use it.

632 694 

633Key695Key

634 696 

635`forced_login_method`697`features.enable_request_compression`

636 698 

637Type / Values699Type / Values

638 700 

639`chatgpt | api`701`boolean`

640 702 

641Details703Details

642 704 

643Restrict Codex to a specific authentication method.705Compress streaming request bodies with zstd when supported (stable; on by default).

644 706 

645Key707Key

646 708 

647`hide_agent_reasoning`709`features.experimental_windows_sandbox`

648 710 

649Type / Values711Type / Values

650 712 


652 714 

653Details715Details

654 716 

655Suppress reasoning events in both the TUI and `codex exec` output.717Legacy toggle for an earlier Windows sandbox rollout. Current builds do not use it.

656 718 

657Key719Key

658 720 

659`history.max_bytes`721`features.fast_mode`

660 722 

661Type / Values723Type / Values

662 724 

663`number`725`boolean`

664 726 

665Details727Details

666 728 

667If set, caps the history file size in bytes by dropping oldest entries.729Enable Fast mode selection and the `service_tier = "fast"` path (stable; on by default).

668 730 

669Key731Key

670 732 

671`history.persistence`733`features.image_detail_original`

672 734 

673Type / Values735Type / Values

674 736 

675`save-all | none`737`boolean`

676 738 

677Details739Details

678 740 

679Control whether Codex saves session transcripts to history.jsonl.741Allow image outputs with `detail = "original"` on supported models (under development).

680 742 

681Key743Key

682 744 

683`include_apply_patch_tool`745`features.image_generation`

684 746 

685Type / Values747Type / Values

686 748 


688 750 

689Details751Details

690 752 

691Legacy name for enabling freeform apply\_patch; prefer `[features].apply_patch_freeform`.753Enable the built-in image generation tool (under development).

692 754 

693Key755Key

694 756 

695`instructions`757`features.multi_agent`

696 758 

697Type / Values759Type / Values

698 760 

699`string`761`boolean`

700 762 

701Details763Details

702 764 

703Reserved for future use; prefer `model_instructions_file` or `AGENTS.md`.765Enable multi-agent collaboration tools (`spawn_agent`, `send_input`, `resume_agent`, `wait`, `close_agent`, and `spawn_agents_on_csv`) (experimental; off by default).

704 766 

705Key767Key

706 768 

707`log_dir`769`features.personality`

708 770 

709Type / Values771Type / Values

710 772 

711`string (path)`773`boolean`

712 774 

713Details775Details

714 776 

715Directory where Codex writes log files (for example `codex-tui.log`); defaults to `$CODEX_HOME/log`.777Enable personality selection controls (stable; on by default).

716 778 

717Key779Key

718 780 

719`mcp_oauth_callback_port`781`features.powershell_utf8`

720 782 

721Type / Values783Type / Values

722 784 

723`integer`785`boolean`

724 786 

725Details787Details

726 788 

727Optional fixed port for the local HTTP callback server used during MCP OAuth login. When unset, Codex binds to an ephemeral port chosen by the OS.789Force PowerShell UTF-8 output. Enabled by default on Windows and off elsewhere.

728 790 

729Key791Key

730 792 

731`mcp_oauth_credentials_store`793`features.prevent_idle_sleep`

732 794 

733Type / Values795Type / Values

734 796 

735`auto | file | keyring`797`boolean`

736 798 

737Details799Details

738 800 

739Preferred store for MCP OAuth credentials.801Prevent the machine from sleeping while a turn is actively running (experimental; off by default).

740 802 

741Key803Key

742 804 

743`mcp_servers.<id>.args`805`features.remote_models`

744 806 

745Type / Values807Type / Values

746 808 

747`array<string>`809`boolean`

748 810 

749Details811Details

750 812 

751Arguments passed to the MCP stdio server command.813Legacy toggle for an older remote-model readiness flow. Current builds do not use it.

752 814 

753Key815Key

754 816 

755`mcp_servers.<id>.bearer_token_env_var`817`features.request_rule`

756 818 

757Type / Values819Type / Values

758 820 

759`string`821`boolean`

760 822 

761Details823Details

762 824 

763Environment variable sourcing the bearer token for an MCP HTTP server.825Legacy toggle for Smart approvals. Current builds include this behavior by default, so most users can leave this unset.

764 826 

765Key827Key

766 828 

767`mcp_servers.<id>.command`829`features.responses_websockets`

768 830 

769Type / Values831Type / Values

770 832 

771`string`833`boolean`

772 834 

773Details835Details

774 836 

775Launcher command for an MCP stdio server.837Prefer the Responses API WebSocket transport for supported providers (under development).

776 838 

777Key839Key

778 840 

779`mcp_servers.<id>.cwd`841`features.responses_websockets_v2`

780 842 

781Type / Values843Type / Values

782 844 

783`string`845`boolean`

784 846 

785Details847Details

786 848 

787Working directory for the MCP stdio server process.849Enable Responses API WebSocket v2 mode (under development).

788 850 

789Key851Key

790 852 

791`mcp_servers.<id>.disabled_tools`853`features.runtime_metrics`

792 854 

793Type / Values855Type / Values

794 856 

795`array<string>`857`boolean`

796 858 

797Details859Details

798 860 

799Deny list applied after `enabled_tools` for the MCP server.861Show runtime metrics summary in TUI turn separators (experimental).

800 862 

801Key863Key

802 864 

803`mcp_servers.<id>.enabled`865`features.search_tool`

804 866 

805Type / Values867Type / Values

806 868 


808 870 

809Details871Details

810 872 

811Disable an MCP server without removing its configuration.873Legacy toggle for an older Apps discovery flow. Current builds do not use it.

812 874 

813Key875Key

814 876 

815`mcp_servers.<id>.enabled_tools`877`features.shell_snapshot`

816 878 

817Type / Values879Type / Values

818 880 

819`array<string>`881`boolean`

820 882 

821Details883Details

822 884 

823Allow list of tool names exposed by the MCP server.885Snapshot shell environment to speed up repeated commands (stable; on by default).

824 886 

825Key887Key

826 888 

827`mcp_servers.<id>.env`889`features.shell_tool`

828 890 

829Type / Values891Type / Values

830 892 

831`map<string,string>`893`boolean`

832 894 

833Details895Details

834 896 

835Environment variables forwarded to the MCP stdio server.897Enable the default `shell` tool for running commands (stable; on by default).

836 898 

837Key899Key

838 900 

839`mcp_servers.<id>.env_http_headers`901`features.skill_env_var_dependency_prompt`

840 902 

841Type / Values903Type / Values

842 904 

843`map<string,string>`905`boolean`

844 906 

845Details907Details

846 908 

847HTTP headers populated from environment variables for an MCP HTTP server.909Prompt for missing skill environment-variable dependencies (under development).

848 910 

849Key911Key

850 912 

851`mcp_servers.<id>.env_vars`913`features.skill_mcp_dependency_install`

852 914 

853Type / Values915Type / Values

854 916 

855`array<string>`917`boolean`

856 918 

857Details919Details

858 920 

859Additional environment variables to whitelist for an MCP stdio server.921Allow prompting and installing missing MCP dependencies for skills (stable; on by default).

860 922 

861Key923Key

862 924 

863`mcp_servers.<id>.http_headers`925`features.sqlite`

864 926 

865Type / Values927Type / Values

866 928 

867`map<string,string>`929`boolean`

868 930 

869Details931Details

870 932 

871Static HTTP headers included with each MCP HTTP request.933Enable SQLite-backed state persistence (stable; on by default).

872 934 

873Key935Key

874 936 

875`mcp_servers.<id>.required`937`features.steer`

876 938 

877Type / Values939Type / Values

878 940 


880 942 

881Details943Details

882 944 

883When true, fail startup/resume if this enabled MCP server cannot initialize.945Legacy toggle from an earlier Enter/Tab steering rollout. Current builds always use the current steering behavior.

884 946 

885Key947Key

886 948 

887`mcp_servers.<id>.startup_timeout_ms`949`features.undo`

888 950 

889Type / Values951Type / Values

890 952 

891`number`953`boolean`

892 954 

893Details955Details

894 956 

895Alias for `startup_timeout_sec` in milliseconds.957Enable undo support (stable; off by default).

896 958 

897Key959Key

898 960 

899`mcp_servers.<id>.startup_timeout_sec`961`features.unified_exec`

900 962 

901Type / Values963Type / Values

902 964 

903`number`965`boolean`

904 966 

905Details967Details

906 968 

907Override the default 10s startup timeout for an MCP server.969Use the unified PTY-backed exec tool (stable; enabled by default except on Windows).

908 970 

909Key971Key

910 972 

911`mcp_servers.<id>.tool_timeout_sec`973`features.use_linux_sandbox_bwrap`

912 974 

913Type / Values975Type / Values

914 976 

915`number`977`boolean`

916 978 

917Details979Details

918 980 

919Override the default 60s per-tool timeout for an MCP server.981Use the bubblewrap-based Linux sandbox pipeline (experimental; off by default).

920 982 

921Key983Key

922 984 

923`mcp_servers.<id>.url`985`features.web_search`

924 986 

925Type / Values987Type / Values

926 988 

927`string`989`boolean`

928 990 

929Details991Details

930 992 

931Endpoint for an MCP streamable HTTP server.993Deprecated legacy toggle; prefer the top-level `web_search` setting.

932 994 

933Key995Key

934 996 

935`model`997`features.web_search_cached`

936 998 

937Type / Values999Type / Values

938 1000 

939`string`1001`boolean`

940 1002 

941Details1003Details

942 1004 

943Model to use (e.g., `gpt-5-codex`).1005Deprecated legacy toggle. When `web_search` is unset, true maps to `web_search = "cached"`.

944 1006 

945Key1007Key

946 1008 

947`model_auto_compact_token_limit`1009`features.web_search_request`

948 1010 

949Type / Values1011Type / Values

950 1012 

951`number`1013`boolean`

952 1014 

953Details1015Details

954 1016 

955Token threshold that triggers automatic history compaction (unset uses model defaults).1017Deprecated legacy toggle. When `web_search` is unset, true maps to `web_search = "live"`.

956 1018 
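Taken together, the `features.*` keys live under a single `[features]` table in `config.toml`. A minimal sketch; the key names come from the reference above, but the chosen values are illustrative only:

```toml
# Illustrative values only; each key is documented in the reference above.
[features]
unified_exec = true     # PTY-backed exec tool
shell_snapshot = true   # snapshot shell environment for faster repeated commands
multi_agent = false     # experimental multi-agent collaboration tools
web_search = false      # deprecated here; prefer the top-level `web_search` setting
```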

| Key | Type / Values | Details |
| --- | --- | --- |
| `feedback.enabled` | `boolean` | Enable feedback submission via `/feedback` across Codex surfaces (default: true). |
| `file_opener` | `vscode \| vscode-insiders \| windsurf \| cursor \| none` | URI scheme used to open citations from Codex output (default: `vscode`). |
| `forced_chatgpt_workspace_id` | `string (uuid)` | Limit ChatGPT logins to a specific workspace identifier. |
| `forced_login_method` | `chatgpt \| api` | Restrict Codex to a specific authentication method. |
| `hide_agent_reasoning` | `boolean` | Suppress reasoning events in both the TUI and `codex exec` output. |
| `history.max_bytes` | `number` | If set, caps the history file size in bytes by dropping oldest entries. |
| `history.persistence` | `save-all \| none` | Control whether Codex saves session transcripts to history.jsonl. |
| `include_apply_patch_tool` | `boolean` | Legacy name for enabling freeform apply_patch; prefer `[features].apply_patch_freeform`. |
| `instructions` | `string` | Reserved for future use; prefer `model_instructions_file` or `AGENTS.md`. |
| `log_dir` | `string (path)` | Directory where Codex writes log files (for example `codex-tui.log`); defaults to `$CODEX_HOME/log`. |
| `mcp_oauth_callback_port` | `integer` | Optional fixed port for the local HTTP callback server used during MCP OAuth login. When unset, Codex binds to an ephemeral port chosen by the OS. |
| `mcp_oauth_callback_url` | `string` | Optional redirect URI override for MCP OAuth login (for example, a devbox ingress URL). `mcp_oauth_callback_port` still controls the callback listener port. |
| `mcp_oauth_credentials_store` | `auto \| file \| keyring` | Preferred store for MCP OAuth credentials. |
| `mcp_servers.<id>.args` | `array<string>` | Arguments passed to the MCP stdio server command. |
| `mcp_servers.<id>.bearer_token_env_var` | `string` | Environment variable sourcing the bearer token for an MCP HTTP server. |
| `mcp_servers.<id>.command` | `string` | Launcher command for an MCP stdio server. |
| `mcp_servers.<id>.cwd` | `string` | Working directory for the MCP stdio server process. |
| `mcp_servers.<id>.disabled_tools` | `array<string>` | Deny list applied after `enabled_tools` for the MCP server. |
| `mcp_servers.<id>.enabled` | `boolean` | Disable an MCP server without removing its configuration. |
| `mcp_servers.<id>.enabled_tools` | `array<string>` | Allow list of tool names exposed by the MCP server. |
| `mcp_servers.<id>.env` | `map<string,string>` | Environment variables forwarded to the MCP stdio server. |
| `mcp_servers.<id>.env_http_headers` | `map<string,string>` | HTTP headers populated from environment variables for an MCP HTTP server. |
| `mcp_servers.<id>.env_vars` | `array<string>` | Additional environment variables to whitelist for an MCP stdio server. |
| `mcp_servers.<id>.http_headers` | `map<string,string>` | Static HTTP headers included with each MCP HTTP request. |
| `mcp_servers.<id>.oauth_resource` | `string` | Optional RFC 8707 OAuth resource parameter to include during MCP login. |
| `mcp_servers.<id>.required` | `boolean` | When true, fail startup/resume if this enabled MCP server cannot initialize. |
| `mcp_servers.<id>.scopes` | `array<string>` | OAuth scopes to request when authenticating to that MCP server. |
| `mcp_servers.<id>.startup_timeout_ms` | `number` | Alias for `startup_timeout_sec` in milliseconds. |
| `mcp_servers.<id>.startup_timeout_sec` | `number` | Override the default 10s startup timeout for an MCP server. |
| `mcp_servers.<id>.tool_timeout_sec` | `number` | Override the default 60s per-tool timeout for an MCP server. |
| `mcp_servers.<id>.url` | `string` | Endpoint for an MCP streamable HTTP server. |
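The `mcp_servers.<id>.*` keys configure one table per server. A sketch showing one stdio server and one streamable HTTP server; the server names, package, and values are illustrative, not real endpoints:

```toml
# Illustrative server definitions; keys are documented in the reference above.
[mcp_servers.docs]                 # a stdio server
command = "npx"
args = ["-y", "@example/docs-mcp"] # hypothetical package
env = { "DOCS_ROOT" = "/srv/docs" }
startup_timeout_sec = 20           # raise the 10s default for a slow launcher

[mcp_servers.issues]               # a streamable HTTP server
url = "https://example.com/mcp"
bearer_token_env_var = "ISSUES_TOKEN"
enabled_tools = ["search", "read"] # allow list; disabled_tools is applied after
```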

| Key | Type / Values | Details |
| --- | --- | --- |
| `model` | `string` | Model to use (e.g., `gpt-5-codex`). |
| `model_auto_compact_token_limit` | `number` | Token threshold that triggers automatic history compaction (unset uses model defaults). |
| `model_catalog_json` | `string (path)` | Optional path to a JSON model catalog loaded on startup. Profile-level `profiles.<name>.model_catalog_json` can override this per profile. |
| `model_context_window` | `number` | Context window tokens available to the active model. |
| `model_instructions_file` | `string (path)` | Replacement for built-in instructions instead of `AGENTS.md`. |
| `model_provider` | `string` | Provider id from `model_providers` (default: `openai`). |
| `model_providers.<id>.base_url` | `string` | API base URL for the model provider. |
| `model_providers.<id>.env_http_headers` | `map<string,string>` | HTTP headers populated from environment variables when present. |
| `model_providers.<id>.env_key` | `string` | Environment variable supplying the provider API key. |
| `model_providers.<id>.env_key_instructions` | `string` | Optional setup guidance for the provider API key. |
| `model_providers.<id>.experimental_bearer_token` | `string` | Direct bearer token for the provider (discouraged; use `env_key`). |
| `model_providers.<id>.http_headers` | `map<string,string>` | Static HTTP headers added to provider requests. |
| `model_providers.<id>.name` | `string` | Display name for a custom model provider. |
| `model_providers.<id>.query_params` | `map<string,string>` | Extra query parameters appended to provider requests. |
| `model_providers.<id>.request_max_retries` | `number` | Retry count for HTTP requests to the provider (default: 4). |
| `model_providers.<id>.requires_openai_auth` | `boolean` | The provider uses OpenAI authentication (defaults to false). |
| `model_providers.<id>.stream_idle_timeout_ms` | `number` | Idle timeout for SSE streams in milliseconds (default: 300000). |
| `model_providers.<id>.stream_max_retries` | `number` | Retry count for SSE streaming interruptions (default: 5). |
| `model_providers.<id>.supports_websockets` | `boolean` | Whether the provider supports the Responses API WebSocket transport. |
| `model_providers.<id>.wire_api` | `responses` | Protocol used by the provider. `responses` is the only supported value, and it is the default when omitted. |
| `model_reasoning_effort` | `minimal \| low \| medium \| high \| xhigh` | Adjust reasoning effort for supported models (Responses API only; `xhigh` is model-dependent). |
| `model_reasoning_summary` | `auto \| concise \| detailed \| none` | Select reasoning summary detail or disable summaries entirely. |
| `model_supports_reasoning_summaries` | `boolean` | Force Codex to send or not send reasoning metadata. |
| `model_verbosity` | `low \| medium \| high` | Optional GPT-5 Responses API verbosity override; when unset, the selected model/preset default is used. |
| `notice.hide_full_access_warning` | `boolean` | Track acknowledgement of the full access warning prompt. |
| `notice.hide_gpt-5.1-codex-max_migration_prompt` | `boolean` | Track acknowledgement of the gpt-5.1-codex-max migration prompt. |
| `notice.hide_gpt5_1_migration_prompt` | `boolean` | Track acknowledgement of the GPT-5.1 migration prompt. |
| `notice.hide_rate_limit_model_nudge` | `boolean` | Track opt-out of the rate limit model switch reminder. |
| `notice.hide_world_writable_warning` | `boolean` | Track acknowledgement of the Windows world-writable directories warning. |
| `notice.model_migrations` | `map<string,string>` | Track acknowledged model migrations as old->new mappings. |
| `notify` | `array<string>` | Command invoked for notifications; receives a JSON payload from Codex. |
| `oss_provider` | `lmstudio \| ollama` | Default local provider used when running with `--oss` (defaults to prompting if unset). |
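The `model`, `model_provider`, and `model_providers.<id>.*` keys combine as follows. A hedged sketch for a hypothetical OpenAI-compatible proxy; the provider id, URL, and env var name are made up for illustration:

```toml
# Illustrative provider definition; keys are documented in the reference above.
model = "gpt-5-codex"
model_provider = "myproxy"          # must match a [model_providers.<id>] table

[model_providers.myproxy]
name = "My proxy"                   # display name
base_url = "https://proxy.example.com/v1"
env_key = "MYPROXY_API_KEY"         # env var supplying the API key
wire_api = "responses"              # the only supported value (and the default)
request_max_retries = 4             # matches the documented default
```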

1763Key

1764 

1765`otel.environment`

1766 

1767Type / Values

1768 

1769`string`

1770 

1771Details

1772 

1773Environment tag applied to emitted OpenTelemetry events (default: `dev`).

1774 

1775Key

1776 

1777`otel.exporter`

1778 

1779Type / Values

1780 

1781`none | otlp-http | otlp-grpc`

1782 

1783Details

1784 

1785Select the OpenTelemetry exporter and provide any endpoint metadata.

1786 

1787Key

1788 

1789`otel.exporter.<id>.endpoint`

1790 

1791Type / Values

1792 

1793`string`

1794 

1795Details

1796 

1797Exporter endpoint for OTEL logs.

1798 

1799Key

1800 

1801`otel.exporter.<id>.headers`

1802 

1803Type / Values

1804 

1805`map<string,string>`

1806 

1807Details

1808 

1809Static headers included with OTEL exporter requests.

1810 

1811Key

1812 

1813`otel.exporter.<id>.protocol`

1814 

1815Type / Values

1816 

1817`binary | json`

1818 

1819Details

1820 

1821Protocol used by the OTLP/HTTP exporter.

1822 

1823Key

1824 

1825`otel.exporter.<id>.tls.ca-certificate`

1826 

1827Type / Values

1828 

1829`string`

1830 

1831Details

1832 

1833CA certificate path for OTEL exporter TLS.

1834 

1835Key

1836 

1837`otel.exporter.<id>.tls.client-certificate`

1116 1838 

1117Type / Values1839Type / Values

1118 1840 

1119`number`1841`string`

1120 1842 

1121Details1843Details

1122 1844 

1123Idle timeout for SSE streams in milliseconds (default: 300000).1845Client certificate path for OTEL exporter TLS.

1124 1846 

1125Key1847Key

1126 1848 

1127`model_providers.<id>.stream_max_retries`1849`otel.exporter.<id>.tls.client-private-key`

1128 1850 

1129Type / Values1851Type / Values

1130 1852 

1131`number`1853`string`

1132 1854 

1133Details1855Details

1134 1856 

1135Retry count for SSE streaming interruptions (default: 5).1857Client private key path for OTEL exporter TLS.

1136 1858 

1137Key1859Key

1138 1860 

1139`model_providers.<id>.wire_api`1861`otel.log_user_prompt`

1140 1862 

1141Type / Values1863Type / Values

1142 1864 

1143`chat | responses`1865`boolean`

1144 1866 

1145Details1867Details

1146 1868 

1147Protocol used by the provider (defaults to `chat` if omitted).1869Opt in to exporting raw user prompts with OpenTelemetry logs.

1148 1870 

1149Key1871Key

1150 1872 

1151`model_reasoning_effort`1873`otel.metrics_exporter`

1152 1874 

1153Type / Values1875Type / Values

1154 1876 

1155`minimal | low | medium | high | xhigh`1877`none | statsig | otlp-http | otlp-grpc`

1156 1878 

Details

Select the OpenTelemetry metrics exporter (defaults to `statsig`).

Key

`otel.trace_exporter`

Type / Values

`none | otlp-http | otlp-grpc`

Details

Select the OpenTelemetry trace exporter and provide any endpoint metadata.

Key

`otel.trace_exporter.<id>.endpoint`

Type / Values

`string`

Details

Endpoint for the OTEL trace exporter.

Key

`otel.trace_exporter.<id>.headers`

Type / Values

`map<string,string>`

Details

Static headers included with OTEL trace exporter requests.

Key

`otel.trace_exporter.<id>.protocol`

Type / Values

`binary | json`

Details

Protocol used by the OTLP/HTTP trace exporter.

Key

`otel.trace_exporter.<id>.tls.ca-certificate`

Type / Values

`string`

Details

CA certificate path for OTEL trace exporter TLS.

Key

`otel.trace_exporter.<id>.tls.client-certificate`

Type / Values

`string`

Details

Client certificate path for OTEL trace exporter TLS.

Key

`otel.trace_exporter.<id>.tls.client-private-key`

Type / Values

`string`

Details

Client private key path for OTEL trace exporter TLS.
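As a rough illustration, the trace-exporter keys above could combine like this. This is a hedged sketch only: the exact table nesting is schema-dependent, and the endpoint, header value, and certificate paths below are illustrative placeholders, not defaults.

```toml
# Illustrative sketch; verify the exact shape against the config.toml JSON schema.
[otel.trace_exporter.otlp-http]
endpoint = "https://collector.example.internal:4318/v1/traces" # placeholder
protocol = "binary" # binary | json
headers = { "x-api-key" = "replace-me" } # static headers sent with each request

[otel.trace_exporter.otlp-http.tls]
ca-certificate = "/etc/ssl/certs/internal-ca.pem" # placeholder paths
client-certificate = "/etc/ssl/certs/codex-client.pem"
client-private-key = "/etc/ssl/private/codex-client.key"
```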

Key

`permissions.network.admin_url`

Type / Values

`string`

Details

Admin endpoint for the managed network proxy.

Key

`permissions.network.allow_local_binding`

Type / Values

`boolean`

Details

Permit local bind/listen operations through the managed proxy.

Key

`permissions.network.allow_unix_sockets`

Type / Values

`array<string>`

Details

Allowlist of Unix socket paths permitted through the managed proxy.

Key

`permissions.network.allow_upstream_proxy`

Type / Values

`boolean`

Details

Allow the managed proxy to chain to another upstream proxy.

Key

`permissions.network.allowed_domains`

Type / Values

`array<string>`

Details

Allowlist of domains permitted through the managed proxy.

Key

`permissions.network.dangerously_allow_all_unix_sockets`

Type / Values

`boolean`

Details

Allow the proxy to use arbitrary Unix sockets instead of the default restricted set.

Key

`permissions.network.dangerously_allow_non_loopback_admin`

Type / Values

`boolean`

Details

Permit non-loopback bind addresses for the managed proxy admin listener.

Key

`permissions.network.dangerously_allow_non_loopback_proxy`

Type / Values

`boolean`

Details

Permit non-loopback bind addresses for the managed proxy listener.

Key

`permissions.network.denied_domains`

Type / Values

`array<string>`

Details

Denylist of domains blocked by the managed proxy.

Key

`permissions.network.enable_socks5`

Type / Values

`boolean`

Details

Expose a SOCKS5 listener from the managed network proxy.

Key

`permissions.network.enable_socks5_udp`

Type / Values

`boolean`

Details

Allow UDP over the SOCKS5 listener when enabled.

Key

`permissions.network.enabled`

Type / Values

`boolean`

Details

Enable the managed network proxy configuration for subprocesses.

Key

`permissions.network.mode`

Type / Values

`limited | full`

Details

Network proxy mode used for subprocess traffic.

Key

`permissions.network.proxy_url`

Type / Values

`string`

Details

HTTP proxy endpoint used by the managed network proxy.

Key

`permissions.network.socks_url`

Type / Values

`string`

Details

SOCKS5 proxy endpoint used by the managed network proxy.
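Taken together, the `permissions.network.*` keys describe a managed proxy for subprocess traffic. A minimal sketch of a restrictive setup, with illustrative domain names:

```toml
[permissions.network]
enabled = true
mode = "limited" # limited | full
allowed_domains = ["github.com", "pypi.org"] # illustrative allowlist
denied_domains = ["tracking.example"] # illustrative denylist
allow_local_binding = false
enable_socks5 = false
```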

Key

`personality`

Type / Values

`none | friendly | pragmatic`

Details

Default communication style for models that advertise `supportsPersonality`; can be overridden per thread/turn or via `/personality`.

Key

`plan_mode_reasoning_effort`

Type / Values

`none | minimal | low | medium | high | xhigh`

Details

Plan-mode-specific reasoning override. When unset, Plan mode uses its built-in preset default.

Key

`profile`

Type / Values

`string`

Details

Default profile applied at startup (equivalent to `--profile`).

Key

`profiles.<name>.*`

Type / Values

`various`

Details

Profile-scoped overrides for any of the supported configuration keys.

Key

`profiles.<name>.analytics.enabled`

Type / Values

`boolean`

Details

Profile-scoped analytics enablement override.

Key

`profiles.<name>.experimental_use_unified_exec_tool`

Type / Values

`boolean`

Details

Legacy name for enabling unified exec; prefer `[features].unified_exec`.

Key

`profiles.<name>.model_catalog_json`

Type / Values

`string (path)`

Details

Profile-scoped model catalog JSON path override (applied on startup only; overrides the top-level `model_catalog_json` for that profile).

Key

`profiles.<name>.model_instructions_file`

Type / Values

`string (path)`

Details

Profile-scoped replacement for the built-in instruction file.

Key

`profiles.<name>.oss_provider`

Type / Values

`lmstudio | ollama`

Details

Profile-scoped OSS provider for `--oss` sessions.

Key

`profiles.<name>.personality`

Type / Values

`none | friendly | pragmatic`

Details

Profile-scoped communication style override for supported models.

Key

`profiles.<name>.plan_mode_reasoning_effort`

Type / Values

`none | minimal | low | medium | high | xhigh`

Details

Profile-scoped Plan-mode reasoning override.

Key

`profiles.<name>.service_tier`

Type / Values

`flex | fast`

Details

Profile-scoped service tier preference for new turns.

Key

`profiles.<name>.tools_view_image`

Type / Values

`boolean`

Details

Enable or disable the `view_image` tool in that profile.
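The `profiles.<name>.*` keys mirror their top-level counterparts, so a named preset can bundle several overrides. A minimal sketch (the profile name `deep-work` is an illustrative placeholder):

```toml
profile = "deep-work" # applied at startup, same as --profile

[profiles.deep-work]
personality = "pragmatic" # none | friendly | pragmatic
plan_mode_reasoning_effort = "high"
service_tier = "flex"
tools_view_image = true
```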

Key

`profiles.<name>.windows.sandbox`

Type / Values

`unelevated | elevated`

Details

Profile-scoped Windows sandbox mode override.

Key

`project_doc_fallback_filenames`

Type / Values



Key

`service_tier`

Type / Values

`flex | fast`

Details

Preferred service tier for new turns. `fast` is honored only when the `features.fast_mode` gate is enabled.

Key

`shell_environment_policy.exclude`

Type / Values



Key

`sqlite_home`

Type / Values

`string (path)`

Details

Directory where Codex stores the SQLite-backed state DB used by agent jobs and other resumable runtime state.

Key

`suppress_unstable_features_warning`

Type / Values



Key

`tools.view_image`

Type / Values

`boolean`

Details

Enable the local-image attachment tool `view_image`.

Key

`tools.web_search`

Type / Values



Key

`tui.model_availability_nux.<model>`

Type / Values

`integer`

Details

Internal startup-tooltip state keyed by model slug.

Key

`tui.notification_method`

Type / Values



Key

`tui.theme`

Type / Values

`string`

Details

Syntax-highlighting theme override (kebab-case theme name).

Key

`web_search`

Type / Values



Track Windows onboarding acknowledgement (Windows only).

Key

`windows.sandbox`

Type / Values

`unelevated | elevated`

Details

Native sandbox mode used when running Codex directly on Windows (Windows only).

You can find the latest JSON schema for `config.toml` [here](https://developers.openai.com/codex/config-schema.json).



## `requirements.toml`

`requirements.toml` is an admin-enforced configuration file that constrains security-sensitive settings users can't override. For details, locations, and examples, see [Admin-enforced requirements](https://developers.openai.com/codex/enterprise/managed-configuration#admin-enforced-requirements-requirementstoml).

For ChatGPT Business and Enterprise users, Codex can also apply cloud-fetched requirements. See the security page for precedence details.

Use `[features]` in `requirements.toml` to pin feature flags by the same canonical keys that `config.toml` uses. Omitted keys remain unconstrained.

| Key | Type / Values | Details |
| --- | --- | --- |
| `allowed_approval_policies` | `array<string>` | Allowed values for `approval_policy` (for example `untrusted`, `on-request`, `never`, and `reject`). |
| `allowed_sandbox_modes` | `array<string>` | Allowed values for `sandbox_mode`. |
| `allowed_web_search_modes` | `array<string>` | Allowed values for `web_search` (`disabled`, `cached`, `live`). `disabled` is always allowed; an empty list effectively allows only `disabled`. |
| `features` | `table` | Pinned feature values keyed by the canonical names from `config.toml`'s `[features]` table. |
| `features.<name>` | `boolean` | Require a specific canonical feature key to stay enabled or disabled. |
| `mcp_servers` | `table` | Allowlist of MCP servers that may be enabled. Both the server name (`<id>`) and its identity must match for the MCP server to be enabled. Any configured MCP server not in the allowlist (or with a mismatched identity) is disabled. |
| `mcp_servers.<id>.identity` | `table` | Identity rule for a single MCP server. Set either `command` (stdio) or `url` (streamable HTTP). |
| `mcp_servers.<id>.identity.command` | `string` | Allow an MCP stdio server when its `mcp_servers.<id>.command` matches this command. |
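For example, an admin could constrain approval and sandbox settings and pin an MCP server identity like this (a hedged sketch; the server id `docs`, its command, and the commented feature key are illustrative placeholders):

```toml
# requirements.toml (admin-enforced; users cannot override these)
allowed_approval_policies = ["untrusted", "on-request"]
allowed_sandbox_modes = ["read-only"]
allowed_web_search_modes = ["disabled", "cached"]

# [features]
# some_feature = true # pin a canonical feature key; omitted keys stay unconstrained

[mcp_servers.docs.identity]
command = "docs-mcp-server" # stdio server allowed only when its command matches
```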



Details

Allowed values for `approval_policy` (for example `untrusted`, `on-request`, `never`, and `reject`).

Key

`features`

Type / Values

`table`

Details

Pinned feature values keyed by the canonical names from `config.toml`'s `[features]` table.

Key

`features.<name>`

Type / Values

`boolean`

Details

Require a specific canonical feature key to stay enabled or disabled.

Key

`mcp_servers`

Type / Values

config-sample.md +188 −109


# Sample Configuration

Use this example configuration as a starting point. It includes most keys Codex reads from `config.toml`, along with default behaviors, recommended values where helpful, and short notes.

For explanations and guidance, see:

- [Config basics](https://developers.openai.com/codex/config-basic)
- [Advanced Config](https://developers.openai.com/codex/config-advanced)
- [Config Reference](https://developers.openai.com/codex/config-reference)
- [Sandbox and approvals](https://developers.openai.com/codex/agent-approvals-security#sandbox-and-approvals)
- [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration)

Use the snippet below as a reference. Copy only the keys and sections you need into `~/.codex/config.toml` (or into a project-scoped `.codex/config.toml`), then adjust values for your setup.

```toml
# Codex example configuration (config.toml)
#
# This file lists the main keys Codex reads from config.toml, along with default
# behaviors, recommended examples, and concise explanations. Adjust as needed.
#
# Notes
# - Root keys must appear before tables in TOML.


# Core Model Selection
################################################################################

# Primary model used by Codex. Recommended example for most users: "gpt-5.4".
model = "gpt-5.4"

# Communication style for supported models. Allowed values: none | friendly | pragmatic
# personality = "pragmatic"

# Optional model override for /review. Default: unset (uses current session model).
# review_model = "gpt-5.4"

# Provider id selected from [model_providers]. Default: "openai".
model_provider = "openai"


# Default OSS provider for --oss sessions. When unset, Codex prompts. Default: unset.
# oss_provider = "ollama"

# Preferred service tier. `fast` is honored only when enabled in [features].
# service_tier = "flex" # fast | flex

# Optional manual model metadata. When unset, Codex uses model or preset defaults.
# model_context_window = 128000 # tokens; default: auto for model
# model_auto_compact_token_limit = 64000 # tokens; unset uses model defaults
# tool_output_token_limit = 12000 # tokens stored per tool output
# model_catalog_json = "/absolute/path/to/models.json" # optional startup-only model catalog override
# background_terminal_max_timeout = 300000 # ms; max empty write_stdin poll window (default 5m)
# log_dir = "/absolute/path/to/codex-logs" # directory for Codex logs; default: "$CODEX_HOME/log"
# sqlite_home = "/absolute/path/to/codex-state" # optional SQLite-backed runtime state directory

################################################################################
# Reasoning & Verbosity (Responses API capable models)
################################################################################

# Reasoning effort: minimal | low | medium | high | xhigh
# model_reasoning_effort = "medium"

# Optional override used when Codex runs in plan mode: none | minimal | low | medium | high | xhigh
# plan_mode_reasoning_effort = "high"

# Reasoning summary: auto | concise | detailed | none
# model_reasoning_summary = "auto"

# Text verbosity for GPT-5 family (Responses API): low | medium | high
# model_verbosity = "medium"

# Force enable or disable reasoning summaries for current model.
# model_supports_reasoning_summaries = true

################################################################################


# Additional user instructions are injected before AGENTS.md. Default: unset.
# developer_instructions = ""

# Inline override for the history compaction prompt. Default: unset.
# compact_prompt = ""

# Override the default commit co-author trailer. Set to "" to disable it.
# commit_attribution = "Jane Doe <jane@example.com>"

# Override built-in base instructions with a file path. Default: unset.
# model_instructions_file = "/absolute/or/relative/path/to/instructions.txt"

# Load the compact prompt override from a file. Default: unset.
# experimental_compact_prompt_file = "/absolute/or/relative/path/to/compact_prompt.txt"

################################################################################
# Notifications
################################################################################

# External notifier program (argv array). When unset: disabled.
# notify = ["notify-send", "Codex"]

################################################################################
# Approval & Sandbox


# - untrusted: only known-safe read-only commands auto-run; others prompt
# - on-request: model decides when to ask (default)
# - never: never prompt (risky)
# - { reject = { ... } }: auto-reject selected prompt categories
approval_policy = "on-request"
# Example granular auto-reject policy:
# approval_policy = { reject = { sandbox_approval = true, rules = false, mcp_elicitations = false } }

# Allow login-shell semantics for shell-based tools when they request `login = true`.
# Default: true. Set false to force non-login shells and reject explicit login-shell requests.
allow_login_shell = true

# Filesystem/network sandbox policy for tool calls:
# - read-only (default)


# Where to persist CLI login credentials: file (default) | keyring | auto
cli_auth_credentials_store = "file"

# Base URL for ChatGPT auth flow (not OpenAI API).
chatgpt_base_url = "https://chatgpt.com/backend-api/"

# Restrict ChatGPT login to a specific workspace id. Default: unset.
# forced_chatgpt_workspace_id = "00000000-0000-0000-0000-000000000000"

# Force login mechanism when Codex would normally auto-select. Default: unset.
# Allowed values: chatgpt | api


# Preferred store for MCP OAuth credentials: auto (default) | file | keyring
mcp_oauth_credentials_store = "auto"

# Optional fixed port for MCP OAuth callback: 1-65535. Default: unset.
# mcp_oauth_callback_port = 4321
# Optional redirect URI override for MCP OAuth login (for example, remote devbox ingress).
# Custom callback paths are supported. `mcp_oauth_callback_port` still controls the listener port.
# mcp_oauth_callback_url = "https://devbox.example.internal/callback"

################################################################################
# Project Documentation Controls


# If you use --yolo or another full access sandbox setting, web search defaults to live.
web_search = "cached"

# Active profile name. When unset, no profile is applied.
# profile = "default"

# Suppress the warning shown when under-development feature flags are enabled.
# suppress_unstable_features_warning = true

################################################################################
# Agents (multi-agent roles and limits)
################################################################################

[agents]
# Maximum concurrently open agent threads. Default: 6
# max_threads = 6
# Maximum nested spawn depth. Root session starts at depth 0. Default: 1
# max_depth = 1
# Default timeout per worker for spawn_agents_on_csv jobs. When unset, the tool defaults to 1800 seconds.
# job_max_runtime_seconds = 1800

# [agents.reviewer]
# description = "Find correctness, security, and test risks in code."
# config_file = "./agents/reviewer.toml" # relative to the config.toml that defines it
# nickname_candidates = ["Athena", "Ada"]

################################################################################
# Skills (per-skill overrides)


200 225 

201# Disable or re-enable a specific skill without deleting it.226# Disable or re-enable a specific skill without deleting it.

202[[skills.config]]227[[skills.config]]

203# path = "/path/to/skill"228# path = "/path/to/skill/SKILL.md"

204# enabled = false229# enabled = false

################################################################################
# Sandbox settings (tables)
################################################################################

[shell_environment_policy]
# inherit: all (default) | core | none
inherit = "all"
# Skip default excludes for names containing KEY/SECRET/TOKEN (case-insensitive). Default: false
ignore_default_excludes = false
# Case-insensitive glob patterns to remove (e.g., "AWS_*", "AZURE_*"). Default: []
exclude = []
# Explicit key/value overrides (always win). Default: {}

# Experimental: run via user shell profile. Default: false
experimental_use_profile = false

################################################################################
# Managed network proxy settings
################################################################################

[permissions.network]
# enabled = true
# proxy_url = "http://127.0.0.1:43128"
# admin_url = "http://127.0.0.1:43129"
# enable_socks5 = false
# socks_url = "http://127.0.0.1:43130"
# enable_socks5_udp = false
# allow_upstream_proxy = false
# dangerously_allow_non_loopback_proxy = false
# dangerously_allow_non_loopback_admin = false
# dangerously_allow_all_unix_sockets = false
# mode = "limited" # limited | full
# allowed_domains = ["api.openai.com"]
# denied_domains = ["example.com"]
# allow_unix_sockets = ["/var/run/docker.sock"]
# allow_local_binding = false

################################################################################
# History (table)
################################################################################

[history]
# save-all (default) | none
persistence = "save-all"
# Maximum bytes for history file; oldest entries are trimmed when exceeded. Example: 5242880
# max_bytes = 5242880

################################################################################
# UI, Notifications, and Misc (tables)
################################################################################

# Control alternate screen usage (auto skips it in Zellij to preserve scrollback).
# alternate_screen = "auto"

# Ordered list of footer status-line item IDs. When unset, Codex uses:
# ["model-with-reasoning", "context-remaining", "current-dir"].
# Set to [] to hide the footer.
# status_line = ["model", "context-remaining", "git-branch"]

# Syntax-highlighting theme (kebab-case). Use /theme in the TUI to preview and save.
# You can also add custom .tmTheme files under $CODEX_HOME/themes.
# theme = "catppuccin-mocha"

# Internal tooltip state keyed by model slug. Usually managed by Codex.
# [tui.model_availability_nux]
# "gpt-5.4" = 1

# Enable or disable analytics for this machine. When unset, Codex uses its default behavior.
[analytics]
enabled = true

# Control whether users can submit feedback from `/feedback`. Default: true
[feedback]
enabled = true

# "hide_gpt-5.1-codex-max_migration_prompt" = true
# model_migrations = { "gpt-4.1" = "gpt-5.1" }

################################################################################
# Centralized Feature Flags (preferred)
################################################################################

[features]
# Leave this table empty to accept defaults. Set explicit booleans to opt in/out.
# shell_tool = true
# apps = false
# apps_mcp_gateway = false
# unified_exec = false
# shell_snapshot = false
# multi_agent = false
# personality = true
# use_linux_sandbox_bwrap = false
# runtime_metrics = true
# powershell_utf8 = true
# child_agents_md = false
# sqlite = true
# fast_mode = true
# enable_request_compression = true
# image_generation = false
# skill_mcp_dependency_install = true
# skill_env_var_dependency_prompt = false
# default_mode_request_user_input = false
# artifact = false
# prevent_idle_sleep = false
# responses_websockets = false
# responses_websockets_v2 = false
# image_detail_original = false

################################################################################
# Define MCP servers under this table. Leave empty to disable.
################################################################################

# tool_timeout_sec = 60.0 # optional; default 60.0 seconds
# enabled_tools = ["search", "summarize"] # optional allow-list
# disabled_tools = ["slow-tool"] # optional deny-list (applied after allow-list)
# scopes = ["read:docs"] # optional OAuth scopes
# oauth_resource = "https://docs.example.com/" # optional OAuth resource

# --- Example: Streamable HTTP transport ---
# [mcp_servers.github]

# startup_timeout_sec = 10.0 # optional
# tool_timeout_sec = 60.0 # optional
# enabled_tools = ["list_issues"] # optional allow-list
# disabled_tools = ["delete_issue"] # optional deny-list
# scopes = ["repo"] # optional OAuth scopes

################################################################################
# Model Providers
################################################################################

# Built-ins include:
# - openai
# - ollama
# - lmstudio

[model_providers]

# [model_providers.openaidr]
# name = "OpenAI Data Residency"
# base_url = "https://us.api.openai.com/v1" # example with 'us' domain prefix
# wire_api = "responses" # only supported value
# # requires_openai_auth = true # built-in OpenAI defaults to true
# # request_max_retries = 4 # default 4; max 100
# # stream_max_retries = 5 # default 5; max 100
# # stream_idle_timeout_ms = 300000 # default 300_000 (5m)
# # supports_websockets = true # optional
# # experimental_bearer_token = "sk-example" # optional dev-only direct bearer token
# # http_headers = { "X-Example" = "value" }
# # env_http_headers = { "OpenAI-Organization" = "OPENAI_ORGANIZATION", "OpenAI-Project" = "OPENAI_PROJECT" }

# --- Example: Azure/OpenAI-compatible provider ---
# [model_providers.azure]
# name = "Azure"
# base_url = "https://YOUR_PROJECT_NAME.openai.azure.com/openai"
# wire_api = "responses"
# query_params = { api-version = "2025-04-01-preview" }
# env_key = "AZURE_OPENAI_API_KEY"
# env_key_instructions = "Set AZURE_OPENAI_API_KEY in your environment"
# # supports_websockets = false

# --- Example: Local OSS (e.g., Ollama-compatible) ---
# [model_providers.ollama]
# name = "Ollama"
# base_url = "http://localhost:11434/v1"
# wire_api = "responses"

################################################################################
# Apps / Connectors
################################################################################

# Optional per-app controls.
[apps]
# [_default] applies to all apps unless overridden per app.
# [apps._default]
# enabled = true
# destructive_enabled = true
# open_world_enabled = true
#
# [apps.google_drive]
# enabled = false
# destructive_enabled = false # block destructive-hint tools for this app
# default_tools_enabled = true
# default_tools_approval_mode = "prompt" # auto | prompt | approve
#
# [apps.google_drive.tools."files/delete"]
# enabled = false
# approval_mode = "approve"

################################################################################
# Profiles (named presets)
################################################################################

[profiles]

# [profiles.default]
# model = "gpt-5.4"
# model_provider = "openai"
# approval_policy = "on-request"
# sandbox_mode = "read-only"
# service_tier = "flex"
# oss_provider = "ollama"
# model_reasoning_effort = "medium"
# plan_mode_reasoning_effort = "high"
# model_reasoning_summary = "auto"
# model_verbosity = "medium"
# personality = "pragmatic" # or "friendly" or "none"
# chatgpt_base_url = "https://chatgpt.com/backend-api/"
# model_catalog_json = "./models.json"
# model_instructions_file = "/absolute/or/relative/path/to/instructions.txt"
# experimental_compact_prompt_file = "./compact_prompt.txt"
# tools_view_image = true
# features = { unified_exec = false }

################################################################################
# Projects (trust levels)
################################################################################

[projects]
# Mark specific worktrees as trusted or untrusted.
# [projects."/absolute/path/to/project"]
# trust_level = "trusted" # or "untrusted"

################################################################################
# Tools
################################################################################

[tools]
# view_image = true

################################################################################
# OpenTelemetry (OTEL) - disabled by default
################################################################################

exporter = "none"
# Trace exporter: none (default) | otlp-http | otlp-grpc
trace_exporter = "none"
# Metrics exporter: none | statsig | otlp-http | otlp-grpc
metrics_exporter = "statsig"

# Example OTLP/HTTP exporter configuration
# [otel.exporter."otlp-http"]

# [otel.exporter."otlp-http".headers]
# "x-otlp-api-key" = "${OTLP_TOKEN}"

# [otel.exporter."otlp-http".tls]
# ca-certificate = "certs/otel-ca.pem"
# client-certificate = "/etc/codex/certs/client.pem"
# client-private-key = "/etc/codex/certs/client-key.pem"

# Example OTLP/gRPC trace exporter configuration
# [otel.trace_exporter."otlp-grpc"]
# endpoint = "https://otel.example.com:4317"
# headers = { "x-otlp-meta" = "abc123" }

################################################################################
# Windows
################################################################################

[windows]
# Native Windows sandbox mode (Windows only): unelevated | elevated
sandbox = "unelevated"
```


# Custom Prompts

Custom prompts are deprecated. Use [skills](https://developers.openai.com/codex/skills) for reusable instructions that Codex can invoke explicitly or implicitly.
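When migrating a prompt to a skill, you can also switch an individual skill off without deleting it, using the `[[skills.config]]` table from the config reference above (the path is a placeholder):

```toml
# Disable a specific skill without deleting it (placeholder path)
[[skills.config]]
path = "/path/to/skill/SKILL.md"
enabled = false
```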


# Admin Setup

This guide is for ChatGPT Enterprise admins who want to set up Codex for their workspace.

Use this page as the step-by-step rollout guide. It focuses on setup order and decision points. For policy, configuration, and monitoring details, use the linked pages: [Authentication](https://developers.openai.com/codex/auth), [Agent approvals & security](https://developers.openai.com/codex/agent-approvals-security), [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration), and [Governance](https://developers.openai.com/codex/enterprise/governance).

## Enterprise-grade security and privacy

Codex supports ChatGPT Enterprise security features, including:

- No training on enterprise data
- Zero data retention for the App, CLI, and IDE (code remains in the developer environment)
- Residency and retention that follow ChatGPT Enterprise policies
- Granular user access controls
- Data encryption at rest (AES-256) and in transit (TLS 1.2+)

For security controls and runtime protections, see [Agent approvals & security](https://developers.openai.com/codex/agent-approvals-security). Refer to [Zero Data Retention (ZDR)](https://platform.openai.com/docs/guides/your-data#zero-data-retention) for more details.

## Local vs. cloud setup

Codex operates in two environments: local and cloud.

1. **Codex local** includes the Codex app, CLI, and IDE extension. The agent runs on the developer’s computer in a sandbox.
2. **Codex cloud** includes hosted Codex features (including Codex cloud, iOS, Code Review, and tasks created by the [Slack integration](https://developers.openai.com/codex/integrations/slack) or [Linear integration](https://developers.openai.com/codex/integrations/linear)). The agent runs remotely in a hosted container with your codebase.

You can enable local, cloud, or both, and control access with workspace settings and role-based access control (RBAC).

## Step 0: Owners and rollout decision

Ensure you have the following owners:

- Workspace owner with access to ChatGPT Enterprise
- IT management owner for managed configuration
- Governance owner for analytics and compliance review

Decide on a rollout scope:

- Codex local only (Codex app, CLI, and IDE extension)
- Codex cloud only (Codex web, GitHub code review)
- Both local and cloud

Review [authentication](https://developers.openai.com/codex/auth) before rollout:

- Codex local supports ChatGPT sign-in or API keys. Confirm MFA/SSO requirements and any managed login restrictions on that page.
- Codex cloud requires ChatGPT sign-in.

## Step 1: Enable workspace toggles

Turn on only the Codex features you plan to roll out in this phase.

Go to [Workspace Settings > Settings and Permissions](https://chatgpt.com/admin/settings).

### Codex local

Turn on **Allow members to use Codex Local**.

This enables use of the Codex app, CLI, and IDE extension for allowed users.

If this toggle is off, users who attempt to use the Codex app, CLI, or IDE will see the following error: “403 - Unauthorized. Contact your ChatGPT administrator for access.”

#### Enable device code authentication for Codex CLI

Allow developers to sign in with device codes when using Codex CLI in a non-interactive environment. More details are in [authentication](https://developers.openai.com/codex/auth/).

![Codex local toggle](/images/codex/enterprise/local-toggle-config.png)

### Codex cloud

### Prerequisites

Start by turning on the ChatGPT GitHub Connector in the Codex section of [Workspace Settings > Settings and Permissions](https://chatgpt.com/admin/settings).

To enable Codex cloud for your workspace, turn on **Allow members to use Codex cloud**. Once enabled, users can access Codex directly from the left-hand navigation panel in ChatGPT.

Note that it may take up to 10 minutes for Codex to appear in ChatGPT.

#### Allow members to administer Codex

Allows users to view overall Codex [workspace analytics](https://chatgpt.com/codex/settings/analytics), access [cloud-managed requirements](https://chatgpt.com/codex/settings/managed-configs), and manage cloud environments (edit and delete). Codex cloud access is not required for this.

#### Enable Codex Slack app to post answers on task completion

When enabled, Codex posts its full answer back to Slack when the task completes. Otherwise, Codex posts only a link to the task.

To learn more, see [Codex in Slack](https://developers.openai.com/codex/integrations/slack).

#### Enable Codex agent to access the internet

By default, Codex cloud agents have no internet access during runtime to help protect against security and safety risks like prompt injection.

This setting enables users to use an allowlist for common software dependency domains, add more domains and trusted sites, and specify allowed HTTP methods.

For security implications of internet access and runtime controls, see [Agent approvals & security](https://developers.openai.com/codex/agent-approvals-security).

![Codex cloud toggle](/images/codex/enterprise/cloud-toggle-config.png)

## Step 2: Set up custom roles (RBAC)

Use RBAC to control which users or groups can access Codex local and Codex cloud.

### What RBAC lets you do

Workspace Owners can use RBAC in ChatGPT admin settings to:

- Set a default role for users who are not assigned any custom role
- Create custom roles with granular permissions
- Assign one or more custom roles to Groups (including SCIM-synced groups)
- Manage roles centrally from the Custom Roles tab

Users can inherit multiple roles, and permissions resolve to the maximum allowed across those roles.

### Important behavior to plan for

Users in any custom role group do not use the workspace default permissions.

If you are gradually rolling out Codex, one suggestion is to have a “Codex Users” group and a second “Codex Admin” group that has the “Allow members to administer Codex” toggle enabled.

For RBAC setup details and the full permission model, see the [OpenAI RBAC Help Center article](https://help.openai.com/en/articles/11750701-rbac).

## Step 3: Configure Codex local managed settings

For Codex local, set an admin-approved baseline for local behavior before broader rollout.

### Use managed configuration for two different goals

- **Requirements** (`requirements.toml`): Admin-enforced constraints users cannot override
- **Managed defaults** (`managed_config.toml`): Starting values applied when Codex launches
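As a shape-only sketch of that split (the keys below are illustrative assumptions, not the real schema; see [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration) for the supported keys):

```toml
# requirements.toml — enforced; user config cannot override (illustrative keys)
# allowed_approval_policies = ["on-request"]
# allowed_sandbox_modes = ["read-only", "workspace-write"]

# managed_config.toml — defaults applied when Codex launches; users can adjust
# approval_policy = "on-request"
# sandbox_mode = "read-only"
```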

97 138 

98As an admin, you can allow users to enable agent internet access in their environments. To enable it, turn on **Allow Codex agent to access the internet**.139### Team Config

99 140 

100When this setting is on, users can use an allow list for common software dependency domains, add more domains and trusted sites, and specify allowed HTTP methods.141Teams who want to standardize Codex across an organization can use Team Config to share defaults, rules, and skills without duplicating setup on every local configuration.

101 142 

| Type | Path | Use it to |
| ---- | ---- | --------- |
| [Config basics](https://developers.openai.com/codex/config-basic) | `config.toml` | Set defaults for sandbox mode, approvals, model, reasoning effort, and more. |
| [Rules](https://developers.openai.com/codex/rules) | `rules/` | Control which commands Codex can run outside the sandbox. |
| [Skills](https://developers.openai.com/codex/skills) | `skills/` | Make shared skills available to your team. |

For locations and precedence, see [Config basics](https://developers.openai.com/codex/config-basic#configuration-precedence).
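To make the table concrete, a shared `config.toml` starting point might look like the sketch below. The approval and sandbox keys appear in this document's managed-configuration examples; the reasoning-effort key name is an assumption to verify against Config basics.

```toml
# Hypothetical team defaults distributed via Team Config
approval_policy = "on-request"
sandbox_mode = "workspace-write"
model_reasoning_effort = "medium"  # key name assumed; check Config basics
```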

### Recommended first decisions for local rollout

Define a baseline for your pilot:

- Approval policy posture
- Sandbox mode posture
- Web search posture
- MCP / connectors policy
- Local logging and telemetry posture
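The security-sensitive parts of that baseline map onto the admin-enforced keys documented under Managed configuration. For example, a pilot that blocks full-access sandboxes and live web search:

```toml
# Sketch of a pilot baseline in requirements.toml
allowed_approval_policies = ["untrusted", "on-request"]
allowed_sandbox_modes = ["read-only", "workspace-write"]
allowed_web_search_modes = ["cached"]  # "disabled" remains implicitly allowed
```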

For exact keys, precedence, MDM deployment, and examples, see [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration) and [Agent approvals & security](https://developers.openai.com/codex/agent-approvals-security).

If you plan to restrict login method or workspace for local clients, see the admin-managed authentication restrictions in [Authentication](https://developers.openai.com/codex/auth).

## Step 4: Configure Codex cloud usage (if enabled)

This step covers repository and environment setup after the Codex cloud workspace toggle is enabled.

### Connect Codex cloud to repositories

1. Navigate to [Codex](https://chatgpt.com/codex) and select **Get started**
2. Select **Connect to GitHub** to install the ChatGPT GitHub Connector if you haven't already connected GitHub to ChatGPT
3. Install or authorize the ChatGPT GitHub Connector
4. Choose an installation target for the ChatGPT Connector (typically your main organization)
5. Allow the repositories you want to connect to Codex

For more, see [Cloud environments](https://developers.openai.com/codex/cloud/environments).

Codex uses short-lived, least-privilege GitHub App installation tokens for each operation and respects the user's existing GitHub repository permissions and branch protection rules.

### Configure IP addresses (as needed)

If your network policy requires connector or IP allow lists, configure them with these [egress IP ranges](https://openai.com/chatgpt-agents.json).

These IP ranges can change. Consider checking them automatically and updating your allow list based on the latest values.
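One way to automate that check is to pull the published JSON and collect anything that parses as a CIDR block. The schema of the file is not documented here, so this sketch walks the whole document instead of assuming field names; the `sample` payload shape below is hypothetical.

```python
import ipaddress


def extract_cidrs(node):
    """Collect every string in a nested JSON-like structure that parses as a CIDR block."""
    found = []
    if isinstance(node, dict):
        for value in node.values():
            found.extend(extract_cidrs(value))
    elif isinstance(node, list):
        for item in node:
            found.extend(extract_cidrs(item))
    elif isinstance(node, str) and "/" in node:
        try:
            ipaddress.ip_network(node, strict=False)
            found.append(node)
        except ValueError:
            pass  # string with a slash that is not a network
    return found


# Hypothetical payload shape -- the real file's schema may differ.
sample = {"prefixes": [{"ipv4Prefix": "192.0.2.0/24"}, {"ipv6Prefix": "2001:db8::/32"}]}
print(extract_cidrs(sample))
```

In a scheduled job you would fetch `https://openai.com/chatgpt-agents.json` (for example with `urllib.request.urlopen`), run `extract_cidrs` over the parsed body, and diff the result against your current allow list.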

### Enable code review with Codex cloud

To allow Codex to perform code reviews on GitHub, go to [Settings → Code review](https://chatgpt.com/codex/settings/code-review).

Code review can be configured at the repository level. Users can also enable auto review for their PRs and choose when Codex automatically triggers a review. For more details, see the [GitHub](https://developers.openai.com/codex/integrations/github) integration page.

See also the integration docs for [Slack](https://developers.openai.com/codex/integrations/slack), [GitHub](https://developers.openai.com/codex/integrations/github), and [Linear](https://developers.openai.com/codex/integrations/linear).

## Step 5: Set up governance and observability

Codex gives enterprise teams several options for visibility into adoption and impact. Set up governance early so your team can monitor adoption, investigate issues, and support compliance workflows.

Codex governance typically uses:

- Analytics Dashboard for quick, self-serve visibility
- Analytics API for programmatic reporting and BI integration
- Compliance API for audit and investigation workflows

### Recommended minimum setup

- Assign an owner for adoption reporting
- Assign an owner for audit and compliance review
- Define a review cadence
- Decide what success looks like

For details and examples, see [Governance](https://developers.openai.com/codex/enterprise/governance).

## Step 6: Confirm and validate setup

### What to verify

- Users can sign in to Codex local (ChatGPT or API key)
- (If enabled) Users can sign in to Codex cloud (ChatGPT sign-in required)
- MFA and SSO requirements match your enterprise security policy
- RBAC and workspace toggles produce the expected access behavior
- Managed configuration is applied for users
- Governance data is visible for admins

For authentication options and enterprise login restrictions, see [Authentication](https://developers.openai.com/codex/auth).

Once your team is confident in the setup, you can roll Codex out to additional teams and organizations.


# Governance

# Governance and Observability

Codex gives enterprise teams visibility into adoption and impact, plus the auditability needed for security and compliance programs. Use the self-serve dashboard for day-to-day tracking, the Analytics API for programmatic reporting, and the Compliance API to export detailed logs into your governance stack.


The Compliance API gives enterprises a way to export logs and metadata for Codex activity so you can connect that data to your existing audit, monitoring, and security workflows. It is designed for use with tools like eDiscovery, DLP, SIEM, or other compliance systems.

For Codex usage authenticated through ChatGPT, Compliance API exports provide audit records for Codex activity and can be used in investigations and compliance workflows. These audit logs are retained for up to 30 days. API-key-authenticated Codex usage follows your API organization settings and is not included in Compliance API exports.

### What you can export

#### Activity logs


# Managed configuration

Enterprise admins can control local Codex behavior in two ways:

- **Requirements**: admin-enforced constraints that users can't override.
- **Managed defaults**: starting values applied when Codex launches. Users can still change settings during a session; Codex reapplies managed defaults the next time it starts.

## Admin-enforced requirements (requirements.toml)

Requirements constrain security-sensitive settings (approval policy, sandbox mode, web search mode, and optionally which MCP servers can be enabled). When resolving configuration (for example from `config.toml`, profiles, or CLI config overrides), if a value conflicts with an enforced requirement, Codex falls back to a requirements-compatible value and notifies the user. If an `mcp_servers` allowlist is configured, Codex enables an MCP server only when both its name and identity match an approved entry; otherwise, Codex disables it.

Requirements can also constrain [feature flags](https://developers.openai.com/codex/config-basic/#feature-flags) via the `[features]` table in `requirements.toml`. Features are generally not security-sensitive, but enterprises have the option of pinning values if desired. Omitted keys remain unconstrained.

For the exact key list, see the [`requirements.toml` section in Configuration Reference](https://developers.openai.com/codex/config-reference#requirementstoml).

### Locations and precedence

Requirements layers are applied in this order (earlier wins per field):

1. Cloud-managed requirements (ChatGPT Business or Enterprise)
2. macOS managed preferences (MDM) via `com.openai.codex:requirements_toml_base64`
3. System `requirements.toml` (`/etc/codex/requirements.toml` on Unix systems, including Linux/macOS)

Across layers, requirements are merged per field: if an earlier layer sets a field (including an empty list), later layers do not override that field, but lower layers can still fill fields that remain unset.
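As a sketch of those merge semantics (illustrative only, not the actual implementation), treat each layer as a dict in precedence order:

```python
def merge_requirements(layers):
    """First layer that sets a field wins; later layers only fill unset fields.
    A field counts as set even when its value is an empty list."""
    merged = {}
    for layer in layers:  # ordered highest precedence first
        for key, value in layer.items():
            if key not in merged:  # an earlier layer did not set it yet
                merged[key] = value
    return merged


cloud = {"allowed_sandbox_modes": []}  # empty list still counts as set
mdm = {"allowed_sandbox_modes": ["read-only"], "allowed_approval_policies": ["on-request"]}
print(merge_requirements([cloud, mdm]))
```

Here the cloud layer's empty `allowed_sandbox_modes` survives, while the MDM layer still fills the unset `allowed_approval_policies`.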

For backwards compatibility, Codex also interprets legacy `managed_config.toml` fields `approval_policy` and `sandbox_mode` as requirements (allowing only that single value).

### Cloud-managed requirements

When you sign in with ChatGPT on a Business or Enterprise plan, Codex can also fetch admin-enforced requirements from the Codex service. This is another source of `requirements.toml`-compatible requirements. It applies across Codex surfaces, including the CLI, App, and IDE Extension.

#### Configure cloud-managed requirements

Go to the [Codex managed-config page](https://chatgpt.com/codex/settings/managed-configs).

Create a new managed requirements file using the same format and keys as `requirements.toml`.

```toml
enforce_residency = "us"
allowed_approval_policies = ["on-request"]
allowed_sandbox_modes = ["read-only", "workspace-write"]

[rules]
prefix_rules = [
  { pattern = [{ any_of = ["bash", "sh", "zsh"] }], decision = "prompt", justification = "Require explicit approval for shell entrypoints" },
]
```

Save the configuration. Once saved, the updated managed requirements apply immediately for matching users. For more examples, see [Example requirements.toml](#example-requirementstoml).

#### Assign requirements to groups

Admins can configure different managed requirements for different user groups, and also set a default fallback requirements policy.

If a user matches multiple group-specific rules, the first matching rule applies. Codex does not fill unset requirement fields from later matching group rules.

For example, if the first matching group rule sets only `allowed_sandbox_modes = ["read-only"]` and a later matching group rule sets `allowed_approval_policies = ["on-request"]`, Codex applies only the first matching group rule and does not fill `allowed_approval_policies` from the later rule.
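A sketch of that first-match behavior (illustrative only; the real matching runs in the Codex service, and the rule shape here is hypothetical):

```python
def select_group_requirements(rules, user_groups, default=None):
    """Return the first rule whose group matches; later matches are ignored entirely."""
    for rule in rules:  # evaluated in configured order
        if rule["group"] in user_groups:
            return rule["requirements"]
    return default  # fallback requirements policy, if any


rules = [
    {"group": "pilot", "requirements": {"allowed_sandbox_modes": ["read-only"]}},
    {"group": "eng", "requirements": {"allowed_approval_policies": ["on-request"]}},
]
# A user in both groups gets only the first match; the later rule does not fill gaps.
print(select_group_requirements(rules, {"pilot", "eng"}))
```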

#### How Codex applies cloud-managed requirements locally

When a user starts Codex and signs in with ChatGPT on a Business or Enterprise plan, Codex applies managed requirements on a best-effort basis. Codex first checks for a valid, unexpired local managed requirements cache entry and uses it if available. If the cache is missing, expired, invalid, or does not match the current auth identity, Codex attempts to fetch managed requirements from the service (with retries) and writes a new signed cache entry on success. If no valid cached entry is available and the fetch fails or times out, Codex continues without the managed requirements layer.
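That fallback order can be sketched as follows; the cache-entry shape and function names are illustrative, not the real client code:

```python
def resolve_managed_requirements(cached_entry, fetch_from_service):
    """Prefer a valid cache entry; otherwise fetch; otherwise run without the layer."""
    if cached_entry is not None and cached_entry.get("valid"):
        return cached_entry["requirements"]
    try:
        fetched = fetch_from_service()  # the real client retries internally
    except OSError:
        return None  # fetch failed or timed out: continue without managed requirements
    # On success the real client also writes a new signed cache entry here.
    return fetched


# An expired cache entry falls through to a (successful) service fetch.
stale = {"valid": False, "requirements": {}}
print(resolve_managed_requirements(stale, lambda: {"allowed_sandbox_modes": ["read-only"]}))
```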

After cache resolution, managed requirements are enforced as part of the normal requirements layering described above.

### Example requirements.toml

This example blocks `--ask-for-approval never` and `--sandbox danger-full-access` (including `--yolo`):

```toml
allowed_approval_policies = ["untrusted", "on-request"]
allowed_sandbox_modes = ["read-only", "workspace-write"]
```

You can also constrain web search mode:

```toml
allowed_web_search_modes = ["cached"] # "disabled" remains implicitly allowed
```

`allowed_web_search_modes = []` effectively allows only `"disabled"`. For example, `allowed_web_search_modes = ["cached"]` prevents live web search even in `danger-full-access` sessions.

You can also pin [feature flags](https://developers.openai.com/codex/config-basic/#feature-flags):

```toml
[features]
personality = true
unified_exec = false
```

Use the canonical feature keys from `config.toml`’s `[features]` table. Codex normalizes the effective feature set to satisfy these pins and rejects conflicting writes to `config.toml` or profile-scoped feature settings.

### Enforce command rules from requirements

Admins can also enforce restrictive command rules from `requirements.toml` using a `[rules]` table. These rules merge with regular `.rules` files, and the most restrictive decision still wins.

Unlike `.rules`, requirements rules must specify `decision`, and that decision must be `"prompt"` or `"forbidden"` (not `"allow"`).

```toml
[rules]
prefix_rules = [
  { pattern = [{ token = "rm" }], decision = "forbidden", justification = "Use git clean -fd instead." },
  { pattern = [{ token = "git" }, { any_of = ["push", "commit"] }], decision = "prompt", justification = "Require review before mutating history." },
]
```

To restrict which MCP servers Codex can enable, add an `mcp_servers` approved list. For stdio servers, match on `command`; for streamable HTTP servers, match on `url`:

```toml
[mcp_servers.docs]
identity = { command = "codex-mcp" }

[mcp_servers.remote]
identity = { url = "https://example.com/mcp" }
```

If `mcp_servers` is present but empty, Codex disables all MCP servers.
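The matching rule can be sketched as: a server is enabled only if its configured name has an approved entry and that entry's identity (`command` for stdio, `url` for streamable HTTP) matches exactly. Illustrative only:

```python
def mcp_server_allowed(name, identity, approved):
    """Allowed only when the name is on the list and its identity matches exactly."""
    entry = approved.get(name)
    return entry is not None and entry == identity


approved = {
    "docs": {"command": "codex-mcp"},
    "remote": {"url": "https://example.com/mcp"},
}
print(mcp_server_allowed("docs", {"command": "codex-mcp"}, approved))      # matching name and identity
print(mcp_server_allowed("docs", {"command": "other-binary"}, approved))   # name matches, identity does not
print(mcp_server_allowed("unlisted", {"command": "codex-mcp"}, approved))  # name not on the list
```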

## Managed defaults (`managed_config.toml`)

Managed defaults merge on top of a user's local `config.toml` and take precedence over any CLI `--config` overrides, setting the starting values when Codex launches. Users can still change those settings during a session; Codex reapplies managed defaults the next time it starts.

Make sure your managed defaults meet your requirements; Codex rejects disallowed values.

### Precedence and layering

Codex assembles the effective configuration in this order (top overrides bottom):

- Managed preferences (macOS MDM; highest precedence)
- `managed_config.toml` (system/managed file)
- `config.toml` (user's base configuration)

CLI `--config key=value` overrides apply to the base, but managed layers override them. This means each run starts from the managed defaults even if you provide local flags.
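A dictionary-merge sketch of that ordering (illustrative; real resolution happens per key inside Codex):

```python
def effective_config(config_toml, cli_overrides, managed_config, managed_prefs):
    """Later layers override earlier ones, so managed layers beat CLI flags."""
    merged = {}
    for layer in (config_toml, cli_overrides, managed_config, managed_prefs):
        merged.update(layer)
    return merged


print(effective_config(
    {"approval_policy": "on-request", "sandbox_mode": "workspace-write"},
    {"approval_policy": "never"},       # CLI --config override
    {"approval_policy": "on-request"},  # managed_config.toml wins over the flag
    {},                                 # no macOS managed preferences
))
```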

Cloud-managed requirements affect the requirements layer (not managed defaults). See the Admin-enforced requirements section above for precedence.

### Locations

- Linux/macOS (Unix): `/etc/codex/managed_config.toml`
- Windows/non-Unix: `~/.codex/managed_config.toml`

If the file is missing, Codex skips the managed layer.

### macOS managed preferences (MDM)

On macOS, admins can push a device profile that provides base64-encoded TOML payloads at:

- Preference domain: `com.openai.codex`
- Keys:
  - `config_toml_base64` (managed defaults)
  - `requirements_toml_base64` (requirements)

Codex parses these "managed preferences" payloads as TOML. For managed defaults (`config_toml_base64`), managed preferences have the highest precedence. For requirements (`requirements_toml_base64`), precedence follows the cloud-managed requirements order described above. The same requirements-side `[features]` table works in `requirements_toml_base64`; use canonical feature keys there as well.

### MDM setup workflow

Codex honors standard macOS MDM payloads, so you can distribute settings with tooling like `Jamf Pro`, `Fleet`, or `Kandji`. A lightweight deployment looks like:

1. Build the managed payload TOML and encode it with `base64` (no wrapping).
2. Drop the string into your MDM profile under the `com.openai.codex` domain at `config_toml_base64` (managed defaults) or `requirements_toml_base64` (requirements).
3. Push the profile, then ask users to restart Codex and confirm the startup config summary reflects the managed values.
4. When revoking or changing policy, update the managed payload; the CLI reads the refreshed preference the next time it launches.
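Step 1 above can be done with the `base64` CLI or, equivalently, a few lines of Python; `base64.b64encode` emits a single unwrapped line, matching the no-wrapping requirement:

```python
import base64

# Example requirements payload (keys from this document)
toml_text = 'allowed_approval_policies = ["untrusted", "on-request"]\n'

payload = base64.b64encode(toml_text.encode("utf-8")).decode("ascii")
print(payload)  # paste this string under requirements_toml_base64
```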

Avoid embedding secrets or high-churn dynamic values in the payload. Treat the managed TOML like any other MDM setting under change control.

### Example managed_config.toml

```toml
# Set conservative defaults
approval_policy = "on-request"
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
network_access = false # keep network disabled unless explicitly allowed

[otel]
environment = "prod"
exporter = "otlp-http" # point at your collector
log_user_prompt = false # keep prompts redacted
# exporter details live under exporter tables; see Monitoring and telemetry above
```

### Recommended guardrails

- Prefer `workspace-write` with approvals for most users; reserve full access for controlled containers.
- Keep `network_access = false` unless your security review allows a collector or domains required by your workflows.
- Use managed configuration to pin OTel settings (exporter, environment), but keep `log_user_prompt = false` unless your policy explicitly allows storing prompt contents.
- Periodically audit diffs between local `config.toml` and managed policy to catch drift; managed layers should win over local flags and files.

explore.md +22 −5


# Explore – Codex

## Get started

- Build a classic Snake game in this repo.
- Find and fix bugs in my codebase with minimal, high-confidence changes.
- Propose and implement one high-leverage viral feature for my app.
- Create a dashboard for ….
- Create an interactive prototype based on my meeting notes.
- Analyze a sales call and implement the highest-impact missing features.
- Explain the top failure modes of my application's architecture.
- Write a bedtime story for a 5-year-old about my system's architecture.

## Use skills

- Create a one-page $pdf that summarizes this app.
- Implement designs from my Figma file in this codebase using $figma-implement-design.
- Deploy this project to Vercel with $vercel-deploy and a safe, minimal setup.
- Create a $doc with a 6-week roadmap for my app.
- Analyze my codebase and create an investor/influencer-style ad concept for it using $sora.
- $gh-fix-ci iterate on my PR until CI is green.
- Monitor incoming bug reports on $sentry and attempt fixes.
- Generate a $pdf bedtime story children's book.
- Query my database and create a $spreadsheet with my top 10 customers.

## Create automations

Automate recurring tasks. Codex adds findings to the inbox and archives runs with nothing to report.

- Scan recent commits for likely bugs and propose minimal fixes.
- Draft release notes from merged PRs.
- Summarize yesterday’s git activity for standup.
- Summarize CI failures and flaky tests.
- Create a small classic game with minimal scope.


# Feature Maturity

Some Codex features ship behind a maturity label so you can understand how reliable each one is, what might change, and what level of support to expect.

| Maturity | What it means | Guidance |


# Codex GitHub Action

Use the Codex GitHub Action (`openai/codex-action@v1`) to run Codex in CI/CD jobs, apply patches, or post reviews from a GitHub Actions workflow. The action installs the Codex CLI, starts the Responses API proxy when you provide an API key, and runs `codex exec` under the permissions you specify.


# Custom instructions with AGENTS.md

Codex reads `AGENTS.md` files before doing any work. By layering global guidance with project-specific overrides, you can start each task with consistent expectations, no matter which repository you open.
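For illustration, a minimal project-level `AGENTS.md` might look like this; the contents and paths are hypothetical, and the file is free-form Markdown:

```markdown
# AGENTS.md

## Conventions
- Prefer small, focused changes and run the test suite before finishing.
- Follow the existing lint configuration; do not reformat unrelated files.

## Project layout
- Backend code lives in `server/`; the web client lives in `web/`.
```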

## How Codex discovers guidance


# Use Codex with the Agents SDK

# Running Codex as an MCP server

You can run Codex as an MCP server and connect it from other MCP clients (for example, an agent built with the [OpenAI Agents SDK](https://openai.github.io/openai-agents-js/guides/mcp/)).


# Building an AI-Native Engineering Team

## Introduction

AI models are rapidly expanding the range of tasks they can perform, with significant implications for engineering. Frontier systems now sustain multi-hour reasoning: as of August 2025, METR found that leading models could complete **2 hours and 17 minutes** of continuous work with roughly **50% confidence** of producing a correct answer.

ide.md +0 −6


# Codex IDE extension

Codex is OpenAI's coding agent that can read, edit, and run code. It helps you build faster, squash bugs, and understand unfamiliar code. With the Codex VS Code extension, you can use Codex side by side in your IDE or delegate tasks to Codex Cloud.

ChatGPT Plus, Pro, Business, Edu, and Enterprise plans include Codex. Learn more about [what's included](https://developers.openai.com/codex/pricing).


Use slash commands to control how Codex behaves and quickly change common settings from chat.](https://developers.openai.com/codex/ide/slash-commands)[### Extension settings

Tune Codex to your workflow with editor settings for models, approvals, and other defaults.](https://developers.openai.com/codex/ide/settings)


ide/commands.md +0 −8


# Codex IDE extension commands

Use these commands to control Codex from the VS Code Command Palette. You can also bind them to keyboard shortcuts.

## Assign a key binding

| `chatgpt.implementTodo` | - | Ask Codex to address the selected TODO comment |
| `chatgpt.newCodexPanel` | - | Create a new Codex panel |
| `chatgpt.openSidebar` | - | Open the Codex sidebar panel |
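For example, a `keybindings.json` entry that binds one of these commands might look like the following sketch (the key combination here is an arbitrary choice, not a shipped default):

```json
// keybindings.json — the chord below is an arbitrary example, not a default
[
  {
    "key": "ctrl+alt+c",
    "command": "chatgpt.openSidebar"
  }
]
```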


ide/features.md +1 −9


# Codex IDE extension features

What you can do with the Codex IDE extension

The Codex IDE extension gives you access to Codex directly in VS Code, Cursor, Windsurf, and other VS Code-compatible editors. It uses the same agent as the Codex CLI and shares the same configuration.

## Prompting Codex


## Web search

Codex ships with a first-party web search tool. For local tasks in the Codex IDE extension, Codex enables web search by default and serves results from a web search cache. The cache is an OpenAI-maintained index of web results, so cached mode returns pre-indexed results instead of fetching live pages. This reduces exposure to prompt injection from arbitrary live content, but you should still treat web results as untrusted. If you configure your sandbox for [full access](https://developers.openai.com/codex/agent-approvals-security), web search defaults to live results. See [Config basics](https://developers.openai.com/codex/config-basic) to disable web search or switch to live results that fetch the most recent data.
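As a rough sketch, disabling web search in `config.toml` might look like this; the option name below is an assumption drawn from the shared CLI configuration, so treat Config basics as the source of truth:

```toml
# Assumed option name — see Config basics for the authoritative keys
[tools]
web_search = false
```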

You'll see `web_search` items in the transcript or `codex exec --json` output whenever Codex looks something up.


## See also

- [Codex IDE extension settings](https://developers.openai.com/codex/ide/settings)


ide/settings.md +0 −8


# Codex IDE extension settings

Reference for Codex IDE extension settings

Use these settings to customize the Codex IDE extension.

## Change a setting


| `chatgpt.localeOverride` | Preferred language for the Codex UI. Leave empty to detect automatically. |
| `chatgpt.openOnStartup` | Focus the Codex sidebar when the extension finishes starting. |
| `chatgpt.runCodexInWindowsSubsystemForLinux` | Windows only: run Codex in WSL when Windows Subsystem for Linux (WSL) is available. Recommended for improved sandbox security and better performance. Codex agent mode on Windows currently requires WSL. Changing this setting reloads VS Code to apply the change. |
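For example, your VS Code `settings.json` might include entries like these (the setting names come from the table above; the values are illustrative only):

```json
{
  "chatgpt.localeOverride": "en",
  "chatgpt.openOnStartup": true,
  "chatgpt.runCodexInWindowsSubsystemForLinux": true
}
```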



# Codex IDE extension slash commands

Reference for slash commands in the Codex IDE extension

Slash commands let you control Codex without leaving the chat input. Use them to check status, switch between local and cloud mode, or send feedback.

## Use a slash command


| `/local` | Switch to local mode to run the task in your workspace. |
| `/review` | Start code review mode to review uncommitted changes or compare against a base branch. |
| `/status` | Show the thread ID, context usage, and rate limits. |



# Use Codex in GitHub

Run Codex code review in pull requests

Use Codex to review pull requests without leaving GitHub. Add a pull request comment with `@codex review`, and Codex replies with a standard GitHub code review.

## Set up code review


# Use Codex in Linear

Run Codex tasks from Linear issues

Use Codex in Linear to delegate work from issues. Assign an issue to Codex or mention `@Codex` in a comment, and Codex creates a cloud task and replies with progress and results.

Codex in Linear is available on paid plans (see [Pricing](https://developers.openai.com/codex/pricing)).


After you install the integration, you can assign issues to Codex the same way you assign them to teammates. Codex starts work and posts updates back to the issue.

![Assigning Codex to a Linear issue (light mode)](/images/codex/integrations/linear-assign-codex-light.webp)

### Mention `@Codex` in comments

You can also mention `@Codex` in comment threads to delegate work or ask questions. After Codex replies, follow up in the thread to continue the same session.

![Mentioning Codex in a Linear issue comment (light mode)](/images/codex/integrations/linear-comment-light.webp)

After Codex starts working on an issue, it [chooses an environment and repo](#how-codex-chooses-an-environment-and-repo) to work in.
To pin a specific repo, include it in your comment, for example: `@Codex fix this in openai/codex`.


Linear assigns new issues that enter triage to Codex automatically.
When you use triage rules, Codex runs tasks using the account of the issue creator.

![Screenshot of an example triage rule assigning everything to Codex and labeling it in the "Triage" status (light mode)](/images/codex/integrations/linear-triage-rule-light.webp)

## Data usage, privacy, and security

When you mention `@Codex` or assign an issue to it, Codex receives your issue content to understand your request and create a task.
Data handling follows OpenAI's [Privacy Policy](https://openai.com/privacy), [Terms of Use](https://openai.com/terms/), and other applicable [policies](https://openai.com/policies).
For more on security, see the [Codex security documentation](https://developers.openai.com/codex/agent-approvals-security).

Codex uses large language models that can make mistakes. Always review answers and diffs.

70 68 


# Use Codex in Slack

Ask Codex to run tasks from channels and threads

Use Codex in Slack to kick off coding tasks from channels and threads. Mention `@Codex` with a prompt, and Codex creates a cloud task and replies with the results.

![Codex Slack integration in action](/images/codex/integrations/slack-example.png)


When you mention `@Codex`, Codex receives your message and thread history to understand your request and create a task.
Data handling follows OpenAI's [Privacy Policy](https://openai.com/privacy), [Terms of Use](https://openai.com/terms/), and other applicable [policies](https://openai.com/policies).
For more on security, see the Codex [security documentation](https://developers.openai.com/codex/agent-approvals-security).

Codex uses large language models that can make mistakes. Always review answers and diffs.

mcp.md +9 −3


# Model Context Protocol

Give Codex access to third-party tools and context

Model Context Protocol (MCP) connects models to tools and context. Use it to give Codex access to third-party documentation, or to let it interact with developer tools like your browser or Figma.

Codex supports MCP servers in both the CLI and the IDE extension.


- `enabled_tools` (optional): Tool allow list.
- `disabled_tools` (optional): Tool deny list (applied after `enabled_tools`).

If your OAuth provider requires a fixed callback port, set the top-level `mcp_oauth_callback_port` in `config.toml`. If unset, Codex binds to an ephemeral port.

If your MCP OAuth flow must use a specific callback URL (for example, a remote devbox ingress URL or a custom callback path), set `mcp_oauth_callback_url`. Codex uses this value as the OAuth `redirect_uri` while still using `mcp_oauth_callback_port` for the callback listener port. Local callback URLs (for example `localhost`) bind on loopback; non-local callback URLs bind on `0.0.0.0` so the callback can reach the host.

81 81 

#### config.toml examples


MY_ENV_VAR = "MY_ENV_VALUE"
```

92 92 

```toml
# Optional MCP OAuth callback overrides (used by `codex mcp login`)
mcp_oauth_callback_port = 5555
mcp_oauth_callback_url = "https://devbox.example.internal/callback"
```

```toml
[mcp_servers.figma]
url = "https://mcp.figma.com/mcp"

models.md +39 −29


# Codex Models

Meet the AI models that power Codex

## Recommended models

![gpt-5.4](/images/api/models/gpt-5.4.jpg)

gpt-5.4

Flagship frontier model for professional work that brings the industry-leading coding capabilities of GPT-5.3-Codex together with stronger reasoning, tool use, and agentic workflows.

codex -m gpt-5.4


![gpt-5.3-codex](/images/codex/codex-wallpaper-1.webp)

gpt-5.3-codex

Industry-leading coding model for complex software engineering. Its coding capabilities now also power GPT-5.4.

codex -m gpt-5.3-codex


![gpt-5.3-codex-spark](/images/codex/codex-wallpaper-2.webp)

gpt-5.3-codex-spark

Text-only research preview model optimized for near-instant, real-time coding iteration. Available to ChatGPT Pro users.

codex -m gpt-5.3-codex-spark


For most tasks in Codex, start with `gpt-5.4`. It combines strong coding, reasoning, native computer use, and broader professional workflows in one model. The `gpt-5.3-codex-spark` model is available in research preview for ChatGPT Pro subscribers and is optimized for near-instant, real-time coding iteration.

## Alternative models

![gpt-5.2-codex](/images/codex/gpt-5.2-codex.png)

gpt-5.2-codex

Advanced coding model for real-world engineering. Succeeded by GPT-5.3-Codex.

codex -m gpt-5.2-codex

![gpt-5.2](/images/api/models/gpt-5.2.jpg)

gpt-5.2

Previous general-purpose model for coding and agentic tasks across industries and domains. Succeeded by GPT-5.4.

codex -m gpt-5.2

94 104 


![gpt-5.1-codex-max](/images/api/models/gpt-5.1-codex-max.jpg)

gpt-5.1-codex-max

102 112 


![gpt-5.1](/images/api/models/gpt-5.1.jpg)

gpt-5.1


![gpt-5.1-codex](/images/api/models/gpt-5.1-codex.jpg)

gpt-5.1-codex


![gpt-5-codex](/images/api/models/gpt-5-codex.jpg)

gpt-5-codex


![gpt-5-codex-mini](/images/api/models/gpt-5-codex.jpg)

gpt-5-codex-mini


![gpt-5](/images/api/models/gpt-5.jpg)

gpt-5


The Codex CLI and IDE extension use the same `config.toml` [configuration file](https://developers.openai.com/codex/config-basic). To specify a model, add a `model` entry to your configuration file. If you don't specify a model, the Codex app, CLI, or IDE extension defaults to a recommended model.

```
model = "gpt-5.4"
```

### Choosing a different local model temporarily


To start a new Codex CLI thread with a specific model, or to specify the model for `codex exec`, use the `--model`/`-m` flag:

```bash
codex -m gpt-5.4
```

### Choosing your model for cloud tasks

multi-agent.md +208 −27


# Multi-agents

Use experimental multi-agent collaboration in Codex CLI

Codex can run multi-agent workflows by spawning specialized agents in parallel and then collecting their results in one response. This can be particularly helpful for complex tasks that are highly parallel, such as codebase exploration or implementing a multi-step feature plan.

With multi-agent workflows, you can also define your own set of agents with different model configurations and instructions depending on the agent.

8 6 

For the concepts and tradeoffs behind multi-agent workflows (including context pollution/context rot and model-selection guidance), see [Multi-agents concepts](https://developers.openai.com/codex/concepts/multi-agents).

## Enable multi-agent

10 10 

Multi-agent workflows are currently experimental and need to be explicitly enabled.


31 31 

Codex automatically decides when to spawn a new agent, or you can explicitly ask it to do so.

For long-running commands or polling workflows, Codex can also use the built-in `monitor` role, tuned for waiting and repeated status checks.

To see it in action, try the following prompt on your project:

```



- Use `/agent` in the CLI to switch between active agent threads and inspect the ongoing thread.
- Ask Codex directly to steer a running sub-agent, stop it, or close completed agent threads.
- The `wait` tool supports long polling windows for monitoring workflows (up to 1 hour per call).

## Process CSV batches with sub-agents

Use `spawn_agents_on_csv` when you have many similar tasks that map to one row per work item. Codex reads the CSV, spawns one worker sub-agent per row, waits for the full batch to finish, and exports the combined results to CSV.

This works well for repeated audits such as:

- reviewing one file, package, or service per row
- checking a list of incidents, PRs, or migration targets
- generating structured summaries for many similar inputs

The tool accepts:

- `csv_path` for the source CSV
- `instruction` for the worker prompt template, using `{column_name}` placeholders
- `id_column` when you want stable item ids from a specific column
- `output_schema` when each worker should return a JSON object with a fixed shape
- `output_csv_path`, `max_concurrency`, and `max_runtime_seconds` for job control

Each worker must call `report_agent_job_result` exactly once. If a worker exits without reporting a result, Codex marks that row with an error in the exported CSV.
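For instance, the `output_schema` used in the example prompt that follows could be written as a JSON Schema-style object along these lines (a sketch; the exact schema dialect the tool accepts is an assumption):

```json
{
  "type": "object",
  "properties": {
    "path": { "type": "string" },
    "risk": { "type": "string" },
    "summary": { "type": "string" },
    "follow_up": { "type": "string" }
  },
  "required": ["path", "risk", "summary", "follow_up"]
}
```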


Example prompt:

```
Create /tmp/components.csv with columns path,owner and one row per frontend component.

Then call spawn_agents_on_csv with:
- csv_path: /tmp/components.csv
- id_column: path
- instruction: "Review {path} owned by {owner}. Return JSON with keys path, risk, summary, and follow_up via report_agent_job_result."
- output_csv_path: /tmp/components-review.csv
- output_schema: an object with required string fields path, risk, summary, and follow_up
```

When you run this through `codex exec`, Codex shows a single-line progress update on `stderr` while the batch is running. The exported CSV includes the original row data plus metadata such as `job_id`, `item_id`, `status`, `last_error`, and `result_json`.
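To try the batch flow end to end, you can seed the input file yourself before prompting Codex. A minimal sketch (the path and column names come from the example prompt above; the rows are made-up placeholders):

```shell
# Create the input CSV referenced by the example prompt.
# The rows below are placeholder data for illustration.
cat > /tmp/components.csv <<'EOF'
path,owner
src/components/Button.tsx,frontend-team
src/components/Modal.tsx,frontend-team
EOF
```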


Related runtime settings:

- `agents.max_threads` caps how many agent threads can stay open concurrently.
- `agents.job_max_runtime_seconds` sets the default per-worker timeout for CSV fan-out jobs. A per-call `max_runtime_seconds` override takes precedence.
- `sqlite_home` controls where Codex stores the SQLite-backed state used for agent jobs and their exported results.
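Putting those settings together, a `config.toml` fragment might look like this (the key names come from the list above; the values are illustrative, not recommendations):

```toml
# Illustrative values only
sqlite_home = "~/.codex/sqlite"    # where agent job state and exports are stored

[agents]
max_threads = 4                    # cap on concurrently open agent threads
job_max_runtime_seconds = 900      # default per-worker timeout for CSV jobs
```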


## Approvals and sandbox controls

52 96 

Sub-agents inherit your current sandbox policy.

In interactive CLI sessions, approval requests can surface from inactive agent threads even while you are looking at the main thread. The approval overlay shows the source thread label, and you can press `o` to open that thread before you approve, reject, or answer the request.

In non-interactive flows, or whenever a run can’t surface a fresh approval, an action that needs new approval fails and Codex surfaces the error back to the parent workflow.

Codex also reapplies the parent turn’s live runtime overrides when it spawns a child. That includes sandbox and approval choices you set interactively during the session, such as `/approvals` changes or `--yolo`, even if the selected agent role loads a config file with different defaults.


You can also override the sandbox configuration for individual [agent roles](#agent-roles), such as explicitly marking an agent to work in read-only mode.

59 114 


61 116 

You configure agent roles in the `[agents]` section of your [configuration](https://developers.openai.com/codex/config-basic#configuration-precedence).

63 118 

Define agent roles either in your local configuration (typically `~/.codex/config.toml`) or in a project-specific `.codex/config.toml`.

65 120 

Each role can provide guidance (`description`) for when Codex should use this agent, and optionally load a role-specific config file (`config_file`) when Codex spawns an agent with that role.

68 123 

Codex ships with built-in roles:

70 125 

- `default`: general-purpose fallback role.
- `worker`: execution-focused role for implementation and fixes.
- `explorer`: read-heavy codebase exploration role.
- `monitor`: long-running command/task monitoring role (optimized for waiting/polling).

74 130 

Each agent role can override your default configuration. Common settings to override for an agent role are:

76 132 

- `model` and `model_reasoning_effort` to select a specific model for your agent role
- `sandbox_mode` to mark an agent as `read-only`
- `developer_instructions` to give the agent role extra instructions without relying on the parent agent to pass them

80 136 

### Schema

82 138 

| Field | Type | Required | Purpose |
| --- | --- | --- | --- |
| `agents.max_threads` | number | No | Concurrent open agent thread cap. |
| `agents.max_depth` | number | No | Spawned agent nesting depth (root session starts at 0). |
| `agents.job_max_runtime_seconds` | number | No | Default timeout per worker for `spawn_agents_on_csv` jobs. |
| `[agents.<name>]` | table | No | Role declaration. `<name>` becomes the `agent_type` when spawning an agent. |
| `agents.<name>.description` | string | No | Human-facing role guidance shown to Codex when it decides which role to use. |
| `agents.<name>.config_file` | string (path) | No | Path to a TOML config layer applied to spawned agents for that role. |

89 147 

**Notes:**

91 149 

- Codex rejects unknown fields in `[agents.<name>]`.
- `agents.max_threads` defaults to `6` when you leave it unset.
- `agents.max_depth` defaults to `1`, which allows a direct child agent to spawn but prevents deeper nesting.
- `agents.job_max_runtime_seconds` is optional. When you leave it unset, `spawn_agents_on_csv` falls back to its per-call default timeout of 1800 seconds per worker.
- Codex resolves relative `config_file` paths relative to the `config.toml` file that defines the role.
- Codex validates `agents.<name>.config_file` at config load time, and it must point to an existing file.
- If a role name matches a built-in role (for example, `explorer`), your user-defined role takes precedence.
- If Codex can’t load a role config file, agent spawns can fail until you fix the file.
- The agent inherits any configuration that the role doesn’t set from the parent session.

### Example agent roles

The best role definitions are narrow and opinionated. Give each role one clear job, a tool surface that matches that job, and instructions that keep it from drifting into adjacent work.

#### Example 1: PR review team

This pattern splits review into three focused roles:

- `explorer` maps the codebase and gathers evidence.
- `reviewer` looks for correctness, security, and test risks.
- `docs_researcher` checks framework or API documentation through a dedicated MCP server.

Project config (`.codex/config.toml`):

```
[agents]
max_threads = 6
max_depth = 1

[agents.explorer]
description = "Read-only codebase explorer for gathering evidence before changes are proposed."
config_file = "agents/explorer.toml"

[agents.reviewer]
description = "PR reviewer focused on correctness, security, and missing tests."
config_file = "agents/reviewer.toml"

[agents.docs_researcher]
description = "Documentation specialist that uses the docs MCP server to verify APIs and framework behavior."
config_file = "agents/docs-researcher.toml"
```

`agents/explorer.toml`:

```
model = "gpt-5.3-codex-spark"
model_reasoning_effort = "medium"
sandbox_mode = "read-only"
developer_instructions = """
Stay in exploration mode.
Trace the real execution path, cite files and symbols, and avoid proposing fixes unless the parent agent asks for them.
Prefer fast search and targeted file reads over broad scans.
"""
```

`agents/reviewer.toml`:

```
model = "gpt-5.3-codex"
model_reasoning_effort = "high"
sandbox_mode = "read-only"
developer_instructions = """
Review code like an owner.
Prioritize correctness, security, behavior regressions, and missing test coverage.
Lead with concrete findings, include reproduction steps when possible, and avoid style-only comments unless they hide a real bug.
"""
```

`agents/docs-researcher.toml`:

```
model = "gpt-5.3-codex-spark"
model_reasoning_effort = "medium"
sandbox_mode = "read-only"
developer_instructions = """
Use the docs MCP server to confirm APIs, options, and version-specific behavior.
Return concise answers with links or exact references when available.
Do not make code changes.
"""

[mcp_servers.openaiDeveloperDocs]
url = "https://developers.openai.com/mcp"
```

This setup works well for prompts like:

```
Review this branch against main. Have explorer map the affected code paths, reviewer find real risks, and docs_researcher verify the framework APIs that the patch relies on.
```

#### Example 2: Frontend integration debugging team

This pattern is useful for UI regressions, flaky browser flows, or integration bugs that cross application code and the running product.

Project config (`.codex/config.toml`):

```
[agents]
max_threads = 6
max_depth = 1

[agents.explorer]
description = "Read-only codebase explorer for locating the relevant frontend and backend code paths."
config_file = "agents/explorer.toml"

[agents.browser_debugger]
description = "UI debugger that uses browser tooling to reproduce issues and capture evidence."
config_file = "agents/browser-debugger.toml"

[agents.worker]
description = "Implementation-focused agent for small, targeted fixes after the issue is understood."
config_file = "agents/worker.toml"
```

`agents/explorer.toml`:

```
model = "gpt-5.3-codex-spark"
model_reasoning_effort = "medium"
sandbox_mode = "read-only"
developer_instructions = """
Map the code that owns the failing UI flow.
Identify entry points, state transitions, and likely files before the worker starts editing.
"""
```

`agents/browser-debugger.toml`:

```
model = "gpt-5.3-codex"
model_reasoning_effort = "high"
sandbox_mode = "workspace-write"
developer_instructions = """
Reproduce the issue in the browser, capture exact steps, and report what the UI actually does.
Use browser tooling for screenshots, console output, and network evidence.
Do not edit application code.
"""

[mcp_servers.chrome_devtools]
url = "http://localhost:3000/mcp"
startup_timeout_sec = 20
```

`agents/worker.toml`:

```
model = "gpt-5.3-codex"
model_reasoning_effort = "medium"
developer_instructions = """
Own the fix once the issue is reproduced.
Make the smallest defensible change, keep unrelated files untouched, and validate only the behavior you changed.
"""

[[skills.config]]
path = "/Users/me/.agents/skills/docs-editor/SKILL.md"
enabled = false
```

This setup works well for prompts like:

```
Investigate why the settings modal fails to save. Have browser_debugger reproduce it, explorer trace the responsible code path, and worker implement the smallest fix once the failure mode is clear.
```


# Non-interactive mode

Non-interactive mode lets you run Codex from scripts (for example, continuous integration (CI) jobs) without opening the interactive TUI.
You invoke it with `codex exec`.


113 111 

114`codex exec` reuses saved CLI authentication by default. In CI, it's common to provide credentials explicitly:112`codex exec` reuses saved CLI authentication by default. In CI, it's common to provide credentials explicitly:

115 113 

114### Use API key auth (recommended)

115 

116- Set `CODEX_API_KEY` as a secret environment variable for the job.116- Set `CODEX_API_KEY` as a secret environment variable for the job.

117- Keep prompts and tool output in mind: they can include sensitive code or data.117- Keep prompts and tool output in mind: they can include sensitive code or data.

118 118 


124 124 

125`CODEX_API_KEY` is only supported in `codex exec`.125`CODEX_API_KEY` is only supported in `codex exec`.

126 126 

127Use ChatGPT-managed auth in CI/CD (advanced)

128 

Read this if you need to run CI/CD jobs with a Codex user account instead of an
API key, such as enterprise teams using ChatGPT-managed Codex access on trusted
runners or users who need ChatGPT/Codex rate limits instead of API key usage.

API keys are the right default for automation because they are simpler to
provision and rotate. Use this path only if you specifically need to run as
your Codex account.

Treat `~/.codex/auth.json` like a password: it contains access tokens. Don't
commit it, paste it into tickets, or share it in chat.

Do not use this workflow for public or open-source repositories. If `codex login`
is not an option on the runner, seed `auth.json` through secure storage, run
Codex on the runner so Codex refreshes it in place, and persist the updated file
between runs.
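
One way to seed the file on a trusted runner, assuming your CI exposes the saved `auth.json` contents as a secret environment variable named `CODEX_AUTH_JSON` (a hypothetical name):

```
# Hypothetical: restore saved credentials before the run.
mkdir -p "$HOME/.codex"
printf '%s' "${CODEX_AUTH_JSON:-}" > "$HOME/.codex/auth.json"
chmod 600 "$HOME/.codex/auth.json"   # tokens: owner read/write only
```

After the run, write the (possibly refreshed) file back to the same secure storage so the next job starts with valid tokens.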

See [Maintain Codex account auth in CI/CD (advanced)](https://developers.openai.com/codex/auth/ci-cd-auth).

## Resume a non-interactive session

If you need to continue a previous run (for example, a two-stage pipeline), use the `resume` subcommand:

open-source.md

# Open Source

OpenAI develops key parts of Codex in the open. That work lives on GitHub so you can follow progress, report issues, and contribute improvements.

If you maintain a widely used open-source project or want to nominate maintainers stewarding important projects, you can also [apply to the Codex open source program](https://developers.openai.com/codex/community/codex-for-oss) for API credits, ChatGPT Pro with Codex, and selective access to Codex Security.

## Open-source components

| Component | Where to find | Notes |

overview.md

# Codex

![Codex app showing a project sidebar, thread list, and review pane](/images/codex/app/codex-app-basic-light.webp)

Codex is OpenAI’s coding agent for software development. ChatGPT Plus, Pro, Business, Edu, and Enterprise plans include Codex. It can help you:


26 22 

27 Learn more](https://developers.openai.com/codex/explore) [### Community23 Learn more](https://developers.openai.com/codex/explore) [### Community

28 24 

29Join the OpenAI Discord to ask questions, share workflows and connect with others.25Explore Codex Ambassadors and upcoming community meetups by location.

26 

27 See community](https://developers.openai.com/codex/community/meetups) [### Codex for OSS

28 

29Apply or nominate maintainers for API credits, ChatGPT Pro with Codex, and selective Codex Security access.

30 30 

31 Join the Discord](https://discord.gg/openai)31 Learn more](https://developers.openai.com/codex/community/codex-for-oss)

prompting.md +1 −3

Details

1# Prompting1# Prompting

2 2 

3Interacting with the Codex agent

4 

5## Prompts3## Prompts

6 4 

7You interact with Codex by sending prompts (user messages) that describe what you want it to do.5You interact with Codex by sending prompts (user messages) that describe what you want it to do.


33 31 

34Threads can run either locally or in the cloud:32Threads can run either locally or in the cloud:

35 33 

36- **Local threads** run on your machine. Codex can read and edit your files and run commands, so you can see what changes and use your existing tools. To reduce the risk of unwanted changes outside your workspace, local threads run in a [sandbox](https://developers.openai.com/codex/security).34- **Local threads** run on your machine. Codex can read and edit your files and run commands, so you can see what changes and use your existing tools. To reduce the risk of unwanted changes outside your workspace, local threads run in a [sandbox](https://developers.openai.com/codex/agent-approvals-security).

37- **Cloud threads** run in an isolated [environment](https://developers.openai.com/codex/cloud/environments). Codex clones your repository and checks out the branch it's working on. Cloud threads are useful when you want to run work in parallel or delegate tasks from another device. To use cloud threads with your repo, push your code to GitHub first. You can also [delegate tasks from your local machine](https://developers.openai.com/codex/ide/cloud-tasks), which includes your current working state.35- **Cloud threads** run in an isolated [environment](https://developers.openai.com/codex/cloud/environments). Codex clones your repository and checks out the branch it's working on. Cloud threads are useful when you want to run work in parallel or delegate tasks from another device. To use cloud threads with your repo, push your code to GitHub first. You can also [delegate tasks from your local machine](https://developers.openai.com/codex/ide/cloud-tasks), which includes your current working state.

38 36 

39## Context37## Context

quickstart.md +14 −12

Details

1# Quickstart1# Quickstart

2 2 

3Start using Codex in your IDE, CLI, or the cloud

4 

5ChatGPT Plus, Pro, Business, Edu, and Enterprise plans include Codex. Using Codex with your ChatGPT subscription gives you access to the latest Codex models and features.3ChatGPT Plus, Pro, Business, Edu, and Enterprise plans include Codex. Using Codex with your ChatGPT subscription gives you access to the latest Codex models and features.

6 4 

7You can also use Codex with API credits by signing in with an OpenAI API key.5You can also use Codex with API credits by signing in with an OpenAI API key.


12 10 

13## Setup11## Setup

14 12 

15Choose an option

16 

17AppRecommended (macOS only)IDE extensionCodex in your IDECLICodex in your terminalCloudCodex in your browser

18 

19The Codex app is available on macOS (Apple Silicon).13The Codex app is available on macOS (Apple Silicon).

20 14 

211. Download and install the Codex app151. Download and install the Codex app

22 16 

23 The Codex app is currently only available for macOS.17 Download the Codex app for Windows or macOS.

24 18 

25 [Download for macOS](https://persistent.oaistatic.com/codex-app-prod/Codex.dmg)19 [Download for macOS](https://persistent.oaistatic.com/codex-app-prod/Codex.dmg)

26 20 

27 [Get notified for Windows and Linux](https://openai.com/form/codex-app/)21 [Get notified for Linux](https://openai.com/form/codex-app/)

282. Open Codex and sign in222. Open Codex and sign in

29 23 

30 Once you downloaded and installed the Codex app, open it and sign in with your ChatGPT account or an OpenAI API key.24 Once you downloaded and installed the Codex app, open it and sign in with your ChatGPT account or an OpenAI API key.


42 36 

43 You can ask Codex anything about the project or your computer in general. Here are some examples:37 You can ask Codex anything about the project or your computer in general. Here are some examples:

44 38 

45 ![](https://developers.openai.com/codex/colorcons/brain.png)Tell me about this projectCopied![](https://developers.openai.com/codex/colorcons/gamepad.png)Build a classic Snake game in this repo.Copied![](https://developers.openai.com/codex/colorcons/search.png)Find and fix bugs in my codebase with minimal, high-confidence changes.Copied39- Tell me about this project

40- Build a classic Snake game in this repo.

41- Find and fix bugs in my codebase with minimal, high-confidence changes.

   If you need more inspiration, check out the [explore section](https://developers.openai.com/codex/explore).



67 63 

68 Codex starts in Agent mode by default, which lets it read files, run commands, and write changes in your project directory.64 Codex starts in Agent mode by default, which lets it read files, run commands, and write changes in your project directory.

69 65 

70 ![](https://developers.openai.com/codex/colorcons/brain.png)Tell me about this projectCopied![](https://developers.openai.com/codex/colorcons/gamepad.png)Build a classic Snake game in this repo.Copied![](https://developers.openai.com/codex/colorcons/search.png)Find and fix bugs in my codebase with minimal, high-confidence changes.Copied66- Tell me about this project

67- Build a classic Snake game in this repo.

68- Find and fix bugs in my codebase with minimal, high-confidence changes.

714. Use Git checkpoints694. Use Git checkpoints

72 70 

73 Codex can modify your codebase, so consider creating Git checkpoints before and after each task so you can easily revert changes if needed.71 Codex can modify your codebase, so consider creating Git checkpoints before and after each task so you can easily revert changes if needed.


96 94 

97 Once authenticated, you can ask Codex to perform tasks in the current directory.95 Once authenticated, you can ask Codex to perform tasks in the current directory.

98 96 

99 ![](https://developers.openai.com/codex/colorcons/brain.png)Tell me about this projectCopied![](https://developers.openai.com/codex/colorcons/gamepad.png)Build a classic Snake game in this repo.Copied![](https://developers.openai.com/codex/colorcons/search.png)Find and fix bugs in my codebase with minimal, high-confidence changes.Copied97- Tell me about this project

98- Build a classic Snake game in this repo.

99- Find and fix bugs in my codebase with minimal, high-confidence changes.

1004. Use Git checkpoints1004. Use Git checkpoints

101 101 

102 Codex can modify your codebase, so consider creating Git checkpoints before and after each task so you can easily revert changes if needed.102 Codex can modify your codebase, so consider creating Git checkpoints before and after each task so you can easily revert changes if needed.


115 115 

116 Once your environment is ready, launch coding tasks from the [Codex interface](https://chatgpt.com/codex). You can monitor progress in real time by viewing logs, or let tasks run in the background.116 Once your environment is ready, launch coding tasks from the [Codex interface](https://chatgpt.com/codex). You can monitor progress in real time by viewing logs, or let tasks run in the background.

117 117 

118 ![](https://developers.openai.com/codex/colorcons/brain.png)Tell me about this projectCopied![](https://developers.openai.com/codex/colorcons/brain.png)Explain the top failure modes of my application's architecture.Copied![](https://developers.openai.com/codex/colorcons/search.png)Find and fix bugs in my codebase with minimal, high-confidence changes.Copied118- Tell me about this project

119- Explain the top failure modes of my application's architecture.

120- Find and fix bugs in my codebase with minimal, high-confidence changes.

1194. Review changes and create a pull request1214. Review changes and create a pull request

120 122 

121 When a task completes, review the proposed changes in the diff view. You can iterate on the results or create a pull request directly in your GitHub repository.123 When a task completes, review the proposed changes in the diff view. You can iterate on the results or create a pull request directly in your GitHub repository.

rules.md +1 −3

Details

1# Rules1# Rules

2 2 

3Control which commands Codex can run outside the sandbox

4 

5Use rules to control which commands Codex can run outside the sandbox.3Use rules to control which commands Codex can run outside the sandbox.

6 4 

7Rules are experimental and may change.5Rules are experimental and may change.


45carefully before accepting it.43carefully before accepting it.

46 44 

47Admins can also enforce restrictive `prefix_rule` entries from45Admins can also enforce restrictive `prefix_rule` entries from

48[`requirements.toml`](https://developers.openai.com/codex/security#admin-enforced-requirements-requirementstoml).46[`requirements.toml`](https://developers.openai.com/codex/enterprise/managed-configuration#admin-enforced-requirements-requirementstoml).

49 47 

50## Understand rule fields48## Understand rule fields

51 49 

sdk.md +0 −2

Details

1# Codex SDK1# Codex SDK

2 2 

3Programmatically control local Codex agents

4 

5If you use Codex through the Codex CLI, the IDE extension, or Codex Web, you can also control it programmatically.3If you use Codex through the Codex CLI, the IDE extension, or Codex Web, you can also control it programmatically.

6 4 

7Use the SDK when you need to:5Use the SDK when you need to:

security.md +22 −372

Details

1# Codex Security1# Codex Security

2 2 

3How to securely operate and manage Codex agents3Codex Security helps engineering and security teams find, validate, and remediate likely vulnerabilities in connected GitHub repositories.

4 4 

5Codex helps protect your code and data and reduces the risk of misuse.5This page covers Codex Security, the product that scans connected GitHub

6 repositories for likely security issues. For Codex sandboxing, approvals,

7 network controls, and admin settings, see [Agent approvals &

8 security](https://developers.openai.com/codex/agent-approvals-security).

6 9 

7By default, the agent runs with network access turned off. Locally, Codex uses an OS-enforced sandbox that limits what it can touch (typically to the current workspace), plus an approval policy that controls when it must stop and ask you before acting.10It helps teams:

8 11 

9## Sandbox and approvals121. **Find likely vulnerabilities** by using a repo-specific threat model and real code context.

132. **Reduce noise** by validating findings before you review them.

143. **Move findings toward fixes** with ranked results, evidence, and suggested patch options.

10 15 

11Codex security controls come from two layers that work together:16## How it works

12 17 

13- **Sandbox mode**: What Codex can do technically (for example, where it can write and whether it can reach the network) when it executes model-generated commands.18Codex Security scans connected repositories commit by commit.

14- **Approval policy**: When Codex must ask you before it executes an action (for example, leaving the sandbox, using the network, or running commands outside a trusted set).19It builds scan context from your repo, checks likely vulnerabilities against that context, and validates high-signal issues in an isolated environment before surfacing them.

15 20 

16Codex uses different sandbox modes depending on where you run it:21You get a workflow focused on:

17 22 

18- **Codex cloud**: Runs in isolated OpenAI-managed containers, preventing access to your host system or unrelated data. You can expand access intentionally (for example, to install dependencies or allow specific domains) when needed. Network access is always enabled during the setup phase, which runs before the agent has access to your code.23- repo-specific context instead of generic signatures

19- **Codex CLI / IDE extension**: OS-level mechanisms enforce sandbox policies. Defaults include no network access and write permissions limited to the active workspace. You can configure the sandbox, approval policy, and network settings based on your risk tolerance.24- validation evidence that helps reduce false positives

25- suggested fixes you can review in GitHub

20 26 

21In the `Auto` preset (for example, `--full-auto`), Codex can read files, make edits, and run commands in the working directory automatically.27## Access and prerequisites

22 28 

23Codex asks for approval to edit files outside the workspace or to run commands that require network access. If you want to chat or plan without making changes, switch to `read-only` mode with the `/permissions` command.29Codex Security works with connected GitHub repositories through Codex Web. OpenAI manages access. If you need access or a repository isn't visible, contact your OpenAI account team and confirm the repository is available through your Codex Web workspace.

24 30 

25Codex can also elicit approval for app (connector) tool calls that advertise side effects, even when the action isn’t a shell command or file change.31## Related docs

26 32 

27## Network access [Elevated Risk](https://help.openai.com/articles/20001061)33- [Codex Security setup](https://developers.openai.com/codex/security/setup) covers setup, scanning, and findings review.

28 34- [FAQ](https://developers.openai.com/codex/security/faq) covers common product questions.

29For Codex cloud, see [agent internet access](https://developers.openai.com/codex/cloud/internet-access) to enable full internet access or a domain allow list.35- [Improving the threat model](https://developers.openai.com/codex/security/threat-model) explains how to tune scope, attack surface, and criticality assumptions.

30 

31For the Codex app, CLI, or IDE Extension, the default `workspace-write` sandbox mode keeps network access turned off unless you enable it in your configuration:

32 

33```

34[sandbox_workspace_write]

35network_access = true

36```

37 

38You can also control the [web search tool](https://platform.openai.com/docs/guides/tools-web-search) without granting full network access to spawned commands. Codex defaults to using a web search cache to access results. The cache is an OpenAI-maintained index of web results, so cached mode returns pre-indexed results instead of fetching live pages. This reduces exposure to prompt injection from arbitrary live content, but you should still treat web results as untrusted. If you are using `--yolo` or another [full access sandbox setting](#common-sandbox-and-approval-combinations), web search defaults to live results. Use `--search` or set `web_search = "live"` to allow live browsing, or set it to `"disabled"` to turn the tool off:

39 

40```

41web_search = "cached" # default

42# web_search = "disabled"

43# web_search = "live" # same as --search

44```

45 

46Use caution when enabling network access or web search in Codex. Prompt injection can cause the agent to fetch and follow untrusted instructions.

47 

48## Defaults and recommendations

49 

50- On launch, Codex detects whether the folder is version-controlled and recommends:

51 - Version-controlled folders: `Auto` (workspace write + on-request approvals)

52 - Non-version-controlled folders: `read-only`

53- Depending on your setup, Codex may also start in `read-only` until you explicitly trust the working directory (for example, via an onboarding prompt or `/permissions`).

54- The workspace includes the current directory and temporary directories like `/tmp`. Use the `/status` command to see which directories are in the workspace.

55- To accept the defaults, run `codex`.

56- You can set these explicitly:

57 - `codex --sandbox workspace-write --ask-for-approval on-request`

58 - `codex --sandbox read-only --ask-for-approval on-request`

59 

60### Protected paths in writable roots

61 

62In the default `workspace-write` sandbox policy, writable roots still include protected paths:

63 

64- `<writable_root>/.git` is protected as read-only whether it appears as a directory or file.

65- If `<writable_root>/.git` is a pointer file (`gitdir: ...`), the resolved Git directory path is also protected as read-only.

66- `<writable_root>/.agents` is protected as read-only when it exists as a directory.

67- `<writable_root>/.codex` is protected as read-only when it exists as a directory.

68- Protection is recursive, so everything under those paths is read-only.

69 

70### Run without approval prompts

71 

72You can disable approval prompts with `--ask-for-approval never` or `-a never` (shorthand).

73 

74This option works with all `--sandbox` modes, so you still control Codex’s level of autonomy. Codex makes a best effort within the constraints you set.

75 

76If you need Codex to read files, make edits, and run commands with network access without approval prompts, use `--sandbox danger-full-access` (or the `--dangerously-bypass-approvals-and-sandbox` flag). Use caution before doing so.

77 

78### Common sandbox and approval combinations

79 

80| Intent | Flags | Effect |

81| --- | --- | --- |

82| Auto (preset) | *no flags needed* or `--full-auto` | Codex can read files, make edits, and run commands in the workspace. Codex requires approval to edit outside the workspace or to access network. |

83| Safe read-only browsing | `--sandbox read-only --ask-for-approval on-request` | Codex can read files and answer questions. Codex requires approval to make edits, run commands, or access network. |

84| Read-only non-interactive (CI) | `--sandbox read-only --ask-for-approval never` | Codex can only read files; never asks for approval. |

85| Automatically edit but ask for approval to run untrusted commands | `--sandbox workspace-write --ask-for-approval untrusted` | Codex can read and edit files but asks for approval before running untrusted commands. |

86| Dangerous full access | `--dangerously-bypass-approvals-and-sandbox` (alias: `--yolo`) | [Elevated Risk](https://help.openai.com/articles/20001061) No sandbox; no approvals *(not recommended)* |

87 

88`--full-auto` is a convenience alias for `--sandbox workspace-write --ask-for-approval on-request`.

89 

90With `--ask-for-approval untrusted`, Codex runs only known-safe read operations automatically. Commands that can mutate state or trigger external execution paths (for example, destructive Git operations or Git output/config-override flags) require approval.

91 

92#### Configuration in `config.toml`

93 

94```

95# Always ask for approval mode

96approval_policy = "untrusted"

97sandbox_mode = "read-only"

98 

99# Optional: Allow network in workspace-write mode

100[sandbox_workspace_write]

101network_access = true

102```

103 

104You can also save presets as profiles, then select them with `codex --profile <name>`:

105 

106```

107[profiles.full_auto]

108approval_policy = "on-request"

109sandbox_mode = "workspace-write"

110 

111[profiles.readonly_quiet]

112approval_policy = "never"

113sandbox_mode = "read-only"

114```

115 

116### Test the sandbox locally

117 

118To see what happens when a command runs under the Codex sandbox, use these Codex CLI commands:

119 

120```

121# macOS

122codex sandbox macos [--full-auto] [--log-denials] [COMMAND]...

123# Linux

124codex sandbox linux [--full-auto] [COMMAND]...

125```

126 

127The `sandbox` command is also available as `codex debug`, and the platform helpers have aliases (for example `codex sandbox seatbelt` and `codex sandbox landlock`).

128 

## OS-level sandbox

Codex enforces the sandbox differently depending on your OS:

- **macOS** uses Seatbelt policies and runs commands using `sandbox-exec` with a profile (`-p`) that corresponds to the `--sandbox` mode you selected.
- **Linux** uses Landlock plus seccomp by default. You can opt into the alternative Linux sandbox pipeline with `features.use_linux_sandbox_bwrap = true` (or `-c use_linux_sandbox_bwrap=true`).
- **Windows** uses the Linux sandbox implementation when running in [Windows Subsystem for Linux (WSL)](https://developers.openai.com/codex/windows#windows-subsystem-for-linux). When running natively on Windows, you can enable an [experimental sandbox](https://developers.openai.com/codex/windows#windows-experimental-sandbox) implementation.

If you use the Codex IDE extension on Windows, it supports WSL directly. Set the following in your VS Code settings to keep the agent inside WSL whenever it’s available:

```
{
  "chatgpt.runCodexInWindowsSubsystemForLinux": true
}
```

This ensures the IDE extension inherits Linux sandbox semantics for commands, approvals, and filesystem access even when the host OS is Windows. Learn more in the [Windows setup guide](https://developers.openai.com/codex/windows).

The native Windows sandbox is experimental and has important limitations. For example, it can’t prevent writes in directories where the `Everyone` SID already has write permissions (for example, world-writable folders). See the [Windows setup guide](https://developers.openai.com/codex/windows#windows-experimental-sandbox) for details and mitigation steps.

When you run Linux in a containerized environment such as Docker, the sandbox may not work if the host or container configuration doesn’t support the required Landlock and seccomp features.

In that case, configure your Docker container to provide the isolation you need, then run `codex` with `--sandbox danger-full-access` (or the `--dangerously-bypass-approvals-and-sandbox` flag) inside the container.

152 

## Version control

Codex works best with a version control workflow:

- Work on a feature branch and keep `git status` clean before delegating. This keeps Codex patches easier to isolate and revert.
- Prefer patch-based workflows (for example, `git diff`/`git apply`) over editing tracked files directly. Commit frequently so you can roll back in small increments.
- Treat Codex suggestions like any other PR: run targeted verification, review diffs, and document decisions in commit messages for auditing.
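
The patch-based workflow above can be sketched end to end in a throwaway repository. This is an illustrative sequence (the repository, file, and patch names are hypothetical), not a required procedure:

```bash
# Demo of a patch-based flow: snapshot edits with `git diff`, roll the tree
# back, verify the patch, then re-apply it.
git init -q demo && cd demo
git config user.email "you@example.com" && git config user.name "You"
echo "original" > app.txt
git add app.txt && git commit -qm "init"

echo "codex edit" >> app.txt          # an uncommitted change to capture
git diff > codex-change.patch         # snapshot the working-tree edits
git checkout -- app.txt               # return to a clean tree
git apply --check codex-change.patch  # dry-run: fail early if it won't apply
git apply codex-change.patch          # re-apply the reviewed change
```

Because the change lives in a patch file, it is easy to review, revert, or re-apply on another branch in small increments.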

160 

## Monitoring and telemetry

Codex supports opt-in monitoring via OpenTelemetry (OTel) to help teams audit usage, investigate issues, and meet compliance requirements without weakening local security defaults. Telemetry is off by default; enable it explicitly in your configuration.

### Overview

- Codex turns off OTel export by default to keep local runs self-contained.
- When enabled, Codex emits structured log events covering conversations, API requests, SSE/WebSocket stream activity, user prompts (redacted by default), tool approval decisions, and tool results.
- Codex tags exported events with `service.name` (originator), CLI version, and an environment label to separate dev/staging/prod traffic.

170 

### Enable OTel (opt-in)

Add an `[otel]` block to your Codex configuration (typically `~/.codex/config.toml`), choosing an exporter and whether to log prompt text.

```
[otel]
environment = "staging"  # dev | staging | prod
exporter = "none"        # none | otlp-http | otlp-grpc
log_user_prompt = false  # redact prompt text unless policy allows
```

- `exporter = "none"` leaves instrumentation active but doesn’t send data anywhere.
- To send events to your own collector, pick one of:

```
[otel]
exporter = { otlp-http = {
  endpoint = "https://otel.example.com/v1/logs",
  protocol = "binary",
  headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}
```

```
[otel]
exporter = { otlp-grpc = {
  endpoint = "https://otel.example.com:4317",
  headers = { "x-otlp-meta" = "abc123" }
}}
```

Codex batches events and flushes them on shutdown. Codex exports only telemetry produced by its OTel module.
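
Before pointing an `otlp-http` exporter at a production collector, you can smoke-test the configuration against a local sink that accepts log posts. This is a minimal illustrative Python sketch (the port and `/v1/logs` path follow the usual OTLP/HTTP convention), not part of Codex:

```python
# Minimal local OTLP/HTTP log sink for smoke tests: accepts POSTs to /v1/logs
# and acknowledges them, printing how many bytes arrived.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

class OtlpLogSink(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(f"{self.path}: received {len(body)} bytes")
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        # Suppress per-request stderr logging.
        pass

def start_sink(host="127.0.0.1", port=4318):
    """Run the sink on a background thread; returns the server for shutdown()."""
    server = HTTPServer((host, port), OtlpLogSink)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With the sink running, set `endpoint = "http://127.0.0.1:4318/v1/logs"` temporarily to confirm events flow before switching to your real collector.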

203 

### Event categories

Representative event types include:

- `codex.conversation_starts` (model, reasoning settings, sandbox/approval policy)
- `codex.api_request` (attempt, status/success, duration, and error details)
- `codex.sse_event` (stream event kind, success/failure, duration, plus token counts on `response.completed`)
- `codex.websocket_request` and `codex.websocket_event` (request duration plus per-message kind/success/error)
- `codex.user_prompt` (length; content redacted unless explicitly enabled)
- `codex.tool_decision` (approved/denied, source: configuration vs. user)
- `codex.tool_result` (duration, success, output snippet)

Associated OTel metrics (counter plus duration histogram pairs) include `codex.api_request`, `codex.sse_event`, `codex.websocket.request`, `codex.websocket.event`, and `codex.tool.call` (with corresponding `.duration_ms` instruments).

For the full event catalog and configuration reference, see the [Codex configuration documentation on GitHub](https://github.com/openai/codex/blob/main/docs/config.md#otel).

219 

### Security and privacy guidance

- Keep `log_user_prompt = false` unless policy explicitly permits storing prompt contents. Prompts can include source code and sensitive data.
- Route telemetry only to collectors you control; apply retention limits and access controls aligned with your compliance requirements.
- Treat tool arguments and outputs as sensitive. Favor redaction at the collector or SIEM when possible.
- Review local data retention settings (for example, `history.persistence` / `history.max_bytes`) if you don’t want Codex to save session transcripts under `CODEX_HOME`. See [Advanced Config](https://developers.openai.com/codex/config-advanced#history-persistence) and [Configuration Reference](https://developers.openai.com/codex/config-reference).
- If you run the CLI with network access turned off, OTel export can’t reach your collector. To export, allow network access in `workspace-write` mode for the OTel endpoint, or export from Codex cloud with the collector domain on your approved list.
- Review events periodically for approval/sandbox changes and unexpected tool executions.

OTel is optional and designed to complement, not replace, the sandbox and approval protections described above.

230 

## Managed configuration

Enterprise admins can control local Codex behavior in two ways:

- **Requirements**: admin-enforced constraints that users can’t override.
- **Managed defaults**: starting values applied when Codex launches. Users can still change settings during a session; Codex reapplies managed defaults the next time it starts.

### Admin-enforced requirements (requirements.toml)

Requirements constrain security-sensitive settings (approval policy, sandbox mode, web search mode, and optionally which MCP servers you can enable). If a user explicitly selects a disallowed value (via `config.toml`, CLI flags, profiles, or in-session UI), Codex rejects the change. If a value isn’t explicitly set and the default conflicts with requirements, Codex falls back to a requirements-compliant default. If you configure an `mcp_servers` approved list, Codex enables an MCP server only when both its name and identity match an approved entry; otherwise, Codex turns it off.

#### Locations

- Linux/macOS (Unix): `/etc/codex/requirements.toml`
- macOS MDM: preference domain `com.openai.codex`, key `requirements_toml_base64`

#### Cloud requirements (Business and Enterprise)

When you sign in with ChatGPT on a Business or Enterprise plan, Codex can also fetch admin-enforced requirements from the Codex service. This applies across Codex surfaces, including the TUI, `codex exec`, and `codex app-server`.

Cloud requirements are currently best-effort. If the fetch fails or times out, Codex continues without the cloud layer.

Requirements layer in this order (higher wins):

- macOS managed preferences (MDM; highest precedence)
- Cloud requirements (ChatGPT Business or Enterprise)
- `/etc/codex/requirements.toml`

Cloud requirements only fill unset requirement fields, so higher-precedence managed layers still win when both specify the same constraint.

For backwards compatibility, Codex also interprets the legacy `managed_config.toml` fields `approval_policy` and `sandbox_mode` as requirements (allowing only that single value).

266 

#### Example requirements.toml

This example blocks `--ask-for-approval never` and `--sandbox danger-full-access` (including `--yolo`):

```
allowed_approval_policies = ["untrusted", "on-request"]
allowed_sandbox_modes = ["read-only", "workspace-write"]
```

You can also constrain web search mode:

```
allowed_web_search_modes = ["cached"] # "disabled" remains implicitly allowed
```

`allowed_web_search_modes = []` effectively allows only `"disabled"`. For example, `allowed_web_search_modes = ["cached"]` prevents live web search even in `danger-full-access` sessions.

284 

#### Enforce command rules from requirements

Admins can also enforce restrictive command rules from `requirements.toml` using a `[rules]` table. These rules merge with regular `.rules` files, and the most restrictive decision still wins.

Unlike `.rules`, requirements rules must specify `decision`, and that decision must be `"prompt"` or `"forbidden"` (not `"allow"`).

```
[rules]
prefix_rules = [
  { pattern = [{ token = "rm" }], decision = "forbidden", justification = "Use git clean -fd instead." },
  { pattern = [{ token = "git" }, { any_of = ["push", "commit"] }], decision = "prompt", justification = "Require review before mutating history." },
]
```
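
The prefix-rule semantics above (match a command token by token; when several rules match, the most restrictive decision wins) can be sketched as follows. This is an illustrative Python sketch, not the actual Codex rules engine:

```python
# Illustrative prefix-rule matching with most-restrictive-wins resolution.
SEVERITY = {"allow": 0, "prompt": 1, "forbidden": 2}

def matches(pattern, argv):
    """A pattern matches when each part matches the corresponding argv token."""
    if len(pattern) > len(argv):
        return False
    for part, arg in zip(pattern, argv):
        if "token" in part and part["token"] != arg:
            return False
        if "any_of" in part and arg not in part["any_of"]:
            return False
    return True

def decide(rules, argv, default="allow"):
    """Return the most restrictive decision among all matching rules."""
    decisions = [r["decision"] for r in rules if matches(r["pattern"], argv)]
    return max(decisions + [default], key=SEVERITY.__getitem__)

rules = [
    {"pattern": [{"token": "rm"}], "decision": "forbidden"},
    {"pattern": [{"token": "git"}, {"any_of": ["push", "commit"]}], "decision": "prompt"},
]
print(decide(rules, ["git", "push", "origin"]))  # prompt
print(decide(rules, ["rm", "-rf", "tmp"]))       # forbidden
print(decide(rules, ["git", "status"]))          # allow
```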

301 

To restrict which MCP servers Codex can enable, add an `mcp_servers` approved list. For stdio servers, match on `command`; for streamable HTTP servers, match on `url`:

```
[mcp_servers.docs]
identity = { command = "codex-mcp" }

[mcp_servers.remote]
identity = { url = "https://example.com/mcp" }
```

If `mcp_servers` is present but empty, Codex disables all MCP servers.
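
The name-and-identity check described above can be sketched like this; an illustrative Python sketch of the matching rule, not the real implementation:

```python
# Illustrative MCP approved-list check: a configured server is enabled only if
# an approved entry with the same name also matches its identity
# (command for stdio servers, url for streamable HTTP servers).
def server_allowed(name, identity, approved):
    entry = approved.get(name)
    return entry is not None and entry == identity

approved = {
    "docs": {"command": "codex-mcp"},
    "remote": {"url": "https://example.com/mcp"},
}
print(server_allowed("docs", {"command": "codex-mcp"}, approved))   # True
print(server_allowed("docs", {"command": "other-mcp"}, approved))   # False: identity mismatch
print(server_allowed("extra", {"command": "codex-mcp"}, approved))  # False: name not approved
```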

313 

### Managed defaults (managed_config.toml)

Managed defaults merge on top of a user’s local `config.toml` and take precedence over any CLI `--config` overrides, setting the starting values when Codex launches. Users can still change those settings during a session; Codex reapplies managed defaults the next time it starts.

Make sure your managed defaults meet your requirements; Codex rejects disallowed values.

#### Precedence and layering

Codex assembles the effective configuration in this order (top overrides bottom):

- Managed preferences (macOS MDM; highest precedence)
- `managed_config.toml` (system/managed file)
- `config.toml` (user’s base configuration)

CLI `--config key=value` overrides apply to the base, but managed layers override them. This means each run starts from the managed defaults even if you provide local flags.
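
The precedence described above amounts to a layered merge in which higher-precedence layers overwrite keys from lower ones. This is an illustrative Python sketch, not the actual Codex merge logic:

```python
# Illustrative layered merge: later entries in the tuple have higher precedence.
def effective_config(user, cli_overrides, managed_file, mdm):
    merged = {}
    for layer in (user, cli_overrides, managed_file, mdm):
        merged.update(layer)  # higher-precedence keys overwrite lower ones
    return merged

cfg = effective_config(
    user={"sandbox_mode": "danger-full-access", "approval_policy": "on-request"},
    cli_overrides={"sandbox_mode": "workspace-write"},  # -c sandbox_mode=...
    managed_file={"sandbox_mode": "read-only"},         # managed_config.toml
    mdm={},                                             # no MDM payload pushed
)
print(cfg["sandbox_mode"])     # read-only: the managed layer wins over local flags
print(cfg["approval_policy"])  # on-request: unmanaged keys pass through unchanged
```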

329 

Cloud requirements affect the requirements layer (not managed defaults). See [Admin-enforced requirements](https://developers.openai.com/codex/security#admin-enforced-requirements-requirementstoml) for their precedence.

333 

#### Locations

- Linux/macOS (Unix): `/etc/codex/managed_config.toml`
- Windows/non-Unix: `~/.codex/managed_config.toml`

If the file is missing, Codex skips the managed layer.

#### macOS managed preferences (MDM)

On macOS, admins can push a device profile that provides base64-encoded TOML payloads at:

- Preference domain: `com.openai.codex`
- Keys:
  - `config_toml_base64` (managed defaults)
  - `requirements_toml_base64` (requirements)

Codex parses these “managed preferences” payloads as TOML and applies them with the highest precedence.

351 

### MDM setup workflow

Codex honors standard macOS MDM payloads, so you can distribute settings with tooling like Jamf Pro, Fleet, or Kandji. A lightweight deployment looks like:

1. Build the managed payload TOML and encode it with `base64` (no wrapping).
2. Drop the string into your MDM profile under the `com.openai.codex` domain at `config_toml_base64` (managed defaults) or `requirements_toml_base64` (requirements).
3. Push the profile, then ask users to restart Codex and confirm the startup config summary reflects the managed values.
4. When revoking or changing policy, update the managed payload; the CLI reads the refreshed preference the next time it launches.

Avoid embedding secrets or high-churn dynamic values in the payload. Treat the managed TOML like any other MDM setting under change control.
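
Step 1 of the workflow above can be done portably like this. The payload contents are hypothetical; piping through `tr -d '\n'` strips wrapping on platforms where `base64` lacks a no-wrap flag:

```bash
# Encode a requirements payload into a single unwrapped base64 line,
# ready to paste into the requirements_toml_base64 MDM key.
printf 'allowed_sandbox_modes = ["read-only", "workspace-write"]\n' > requirements.toml
base64 < requirements.toml | tr -d '\n' > payload.b64
cat payload.b64
```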

362 

### Example managed_config.toml

```
# Set conservative defaults
approval_policy = "on-request"
sandbox_mode = "workspace-write"

[sandbox_workspace_write]
network_access = false # keep network disabled unless explicitly allowed

[otel]
environment = "prod"
exporter = "otlp-http" # point at your collector
log_user_prompt = false # keep prompts redacted
# exporter details live under exporter tables; see Monitoring and telemetry above
```

### Recommended guardrails

- Prefer `workspace-write` with approvals for most users; reserve full access for controlled containers.
- Keep `network_access = false` unless your security review allows a collector or domains required by your workflows.
- Use managed configuration to pin OTel settings (exporter, environment), but keep `log_user_prompt = false` unless your policy explicitly allows storing prompt contents.
- Periodically audit diffs between local `config.toml` and managed policy to catch drift; managed layers should win over local flags and files.

security/faq.md +104 −0 added

# FAQ

## Getting started

### What is Codex Security?

Software security remains one of the hardest and most important problems in engineering. Codex Security is an LLM-driven security analysis toolkit that inspects source code and returns structured, ranked vulnerability findings with proposed patches. It helps developers and security teams discover and fix security issues at scale.

### Why does it matter?

Software is foundational to modern industry and society, and vulnerabilities create systemic risk. Codex Security supports a defender-first workflow by continuously identifying likely issues, validating them when possible, and proposing fixes. That helps teams improve security without slowing development.

### What business problem does Codex Security solve?

Codex Security shortens the path from a suspected issue to a confirmed, reproducible finding with evidence and a proposed patch. That reduces triage load and cuts false positives compared with traditional scanners alone.

### How does Codex Security work?

Codex Security runs analysis in an ephemeral, isolated container and temporarily clones the target repository. It performs code-level analysis and returns structured findings with a description, file and location, criticality, root cause, and a suggested remediation.

For findings that include verification steps, the system executes proposed commands or tests in the same sandbox, records success or failure, exit codes, stdout, stderr, test results, and any generated diffs or artifacts, and attaches that output as evidence for review.

### Does it replace SAST?

No. Codex Security complements SAST. It adds semantic, LLM-based reasoning and automated validation, while existing SAST tools still provide broad deterministic coverage.

## Features

### What is the analysis pipeline?

Codex Security follows a staged pipeline:

1. **Analysis** builds a threat model for the repository.
2. **Commit scanning** reviews merged commits and repository history for likely issues.
3. **Validation** tries to reproduce likely vulnerabilities in a sandbox to reduce false positives.
4. **Patching** integrates with Codex to propose patches that reviewers can inspect before opening a PR.

It works alongside engineers in GitHub, Codex, and standard review workflows.

### What languages are supported?

Codex Security is language-agnostic. In practice, performance depends on the model's reasoning ability for the language and framework used by the repository.

### What outputs do I get after the scan completes?

You get ranked findings with criticality, validation status, and a proposed patch when one is available. Findings can also include crash output, reproduction evidence, call-path context, and related annotations.

### How is customer code isolated?

Each analysis and validation job runs in an ephemeral Codex container with session-scoped tools. Artifacts are extracted for review, and the container is torn down after the job completes.

### Does Codex Security auto-apply patches?

No. The proposed patch is a recommended remediation. Users can review it and push it as a PR to GitHub from the findings UI, but Codex Security does not auto-apply changes to the repository.

### Does the project need to be built for scanning?

No. Codex Security can produce findings from repository and commit context without a compile step. During auto-validation, it may try to build the project inside the container if that helps reproduce the issue. For environment setup details, see [Codex cloud environments](https://developers.openai.com/codex/cloud/environments).

### How does Codex Security reduce false positives and avoid broken patches?

Codex Security uses two stages. First, the model ranks likely issues. Then auto-validation tries to reproduce each issue in a clean container. Findings that successfully reproduce are marked as validated, which helps reduce false positives before human review.

### How long do initial scans take, and what happens after that?

Initial scan time depends on repository size, build time, and how many findings proceed to validation. For some repositories, scans can take several hours. For larger repositories, they can take multiple days. Later scans are usually faster because they focus on new commits and incremental changes.

### What is a threat model?

A threat model is the scan-time security context for a repository. It combines a concise project overview with attack-surface details such as entry points, trust boundaries, auth assumptions, and risky components. For more detail, see [Improving the threat model](https://developers.openai.com/codex/security/threat-model).

### How is a threat model generated?

Codex Security prompts the model to summarize the repository architecture and security entry points, classify the repository type, run specialized extractors, and merge the results into a project overview or threat model artifact used throughout the scan.

### Does it replace manual security review?

No. Codex Security accelerates review and helps rank findings, but it does not replace code-level validation, exploitability checks, or human threat assessment.

### Can I edit the threat model?

Yes. Codex Security creates the initial threat model, and you can update it as the architecture, risks, and business context change. For the editing workflow, see [Improving the threat model](https://developers.openai.com/codex/security/threat-model).

### Do I need to configure a scan before using threat modeling?

Yes. Threat-model guidance is tied to how and what you scan, so you need to configure the repository first. See [Codex Security setup](https://developers.openai.com/codex/security/setup).

### What does the proposed patch contain?

The proposed patch contains a minimal actionable diff with filename and line context when a remediation can be generated for the finding.

### Does the patch directly modify my PR branch?

No. The workflow generates a diff, patch file, or suggested change for maintainers and reviewers to inspect before applying.

## Validation

### What is auto-validation?

Auto-validation is the phase that tries to reproduce a suspected issue in an isolated container. It records whether reproduction succeeded or failed and captures logs, commands, and related artifacts as evidence.

### What happens if validation fails?

The finding remains unvalidated. Logs and reports still capture what was attempted so engineers can retry, investigate further, or adjust the reproduction steps.

security/setup.md +97 −0 added

# Codex Security setup

This page walks you from initial access to reviewed findings and remediation pull requests in Codex Security.

Confirm you've set up Codex Cloud first. If not, see [Codex Cloud](https://developers.openai.com/codex/cloud) to get started.

## 1. Access and environment

Codex Security scans GitHub repositories connected through [Codex Cloud](https://developers.openai.com/codex/cloud).

- Confirm your workspace has access to Codex Security.
- Confirm the repository you want to scan is available in Codex Cloud.

Go to [Codex environments](https://chatgpt.com/codex/settings/environments) and check whether the repository already has an environment. If it doesn't, create one there before continuing.

[Open environments](https://chatgpt.com/codex/settings/environments)

![Codex environments](/_astro/create_environment.M-EPszPH.png)

## 2. New security scan

After the environment exists, go to [Create a security scan](https://chatgpt.com/codex/security/scans/new) and choose the repository you just connected.

[Create a security scan](https://chatgpt.com/codex/security/scans/new)

Codex Security scans repositories from the newest commits backward, and uses that history to build and refresh scan context as new commits come in.

To configure a repository:

1. Select the GitHub organization.
2. Select the repository.
3. Select the branch you want to scan.
4. Select the environment.
5. Choose a **history window**. Longer windows provide more context, but backfill takes longer.
6. Click **Create**.

![Create a security scan](/_astro/create_scan.mEjmf4U_.png)

## 3. Initial scans can take a while

When you create the scan, Codex Security first runs a commit-level security pass across the selected history window. The initial backfill can take a few hours, especially for larger repositories or longer windows. If findings aren't visible right away, this is expected. Wait for the initial scan to finish before opening a ticket or troubleshooting.

Initial scan setup is automatic and thorough. This can take a few hours, so don’t be alarmed if the first set of findings is delayed.

## 4. Review scans and improve the threat model

[Review scans](https://chatgpt.com/codex/security/scans)

![Threat model editor in Codex Security](/_astro/review_threat_model.JTLMQEmx.png)

When the initial scan finishes, open the scan and review the generated threat model. After initial findings appear, update it so it matches your architecture, trust boundaries, and business context. This helps Codex Security rank issues for your team.

If you want scan results to change, edit the threat model with your updated scope, priorities, and assumptions. Revisit it as priorities shift so scan guidance stays aligned with them; keeping it current helps Codex Security produce better suggestions.

For a deeper explanation of threat models and how they affect criticality and triage, see [Improving the threat model](https://developers.openai.com/codex/security/threat-model).

## 5. Review findings and patch

After the initial backfill completes, review findings from the **Findings** view.

[Open findings](https://chatgpt.com/codex/security/findings)

You can use two views:

- **Recommended Findings**: an evolving top-10 list of the most critical issues in the repo
- **All Findings**: a sortable, filterable table of findings across the repository

![Recommended findings view](https://developers.openai.com/codex/security/images/aardvark_recommended_findings.png)

Click a finding to open its detail page, which includes:

- a concise description of the issue
- key metadata such as commit details and file paths
- contextual reasoning about impact
- relevant code excerpts
- call-path or data-flow context when available
- validation steps and validation output

You can review each finding and create a PR directly from the finding detail page.

[Review findings and create a PR](https://chatgpt.com/codex/security/findings)

## Related docs

- [Codex Security](https://developers.openai.com/codex/security) gives the product overview.
- [FAQ](https://developers.openai.com/codex/security/faq) covers common questions.
- [Improving the threat model](https://developers.openai.com/codex/security/threat-model) explains how to improve scan context and finding prioritization.

security/threat-model.md +40 −0 added

# Improving the threat model

Learn what a threat model is and how editing it improves Codex Security's suggestions.

## What a threat model is

A threat model is a short security summary of how your repository works. In Codex Security, you edit it as a `project overview`, and the system uses it as scan context for future scans, prioritization, and review.

Codex Security creates the first draft from the code. If the findings feel off, this is the first thing to edit.

A useful threat model calls out:

- entry points and untrusted inputs
- trust boundaries and auth assumptions
- sensitive data paths or privileged actions
- the areas your team wants reviewed first

For example:

> Public API for account changes. Accepts JSON requests and file uploads. Uses an internal auth service for identity checks and writes billing changes through an internal service. Focus review on auth checks, upload parsing, and service-to-service trust boundaries.

That gives Codex Security a better starting point for future scans and finding prioritization.

## Improving and revisiting the threat model

If you want to improve the results, edit the threat model first. Use it when findings are missing the areas you care about or showing up in places you don't expect. The threat model changes future scan context.

Some users copy the current threat model into Codex, have a conversation to improve it based on the areas they want reviewed more closely, and then paste the updated version back into the web UI.

### Where to edit

To review or update the threat model, go to [Codex Security scans](https://chatgpt.com/codex/security/scans), open the repository, and click **Edit**.

## Related docs

- [Codex Security setup](https://developers.openai.com/codex/security/setup) covers repository setup and findings review.
- [Codex Security](https://developers.openai.com/codex/security) gives the product overview.
- [FAQ](https://developers.openai.com/codex/security/faq) covers common questions.

skills.md +2 −4

# Agent Skills

Give Codex new capabilities and expertise

Use agent skills to extend Codex with task-specific capabilities. A skill packages instructions, resources, and optional scripts so Codex can follow a workflow reliably. You can share skills across teams or with the community. Skills build on the [open agent skills standard](https://agentskills.io).

Skills are available in the Codex CLI, IDE extension, and Codex app.

## Install skills

To install skills beyond the built-ins, use `$skill-installer`. For example, to install the `$linear` skill:

```bash
$skill-installer linear
```

You can also prompt the installer to download skills from other repositories. Codex detects newly installed skills automatically; if one doesn’t appear, restart Codex.

speed.md +24 −0 added

# Speed

## Fast mode

Codex can run the model faster in exchange for higher credit consumption.

Fast mode is currently supported on GPT-5.4. When enabled, speed increases by 1.5x and credits are consumed at a 2x rate.

Enable it by typing `/fast`. It’s available in the Codex IDE extension, Codex CLI, and the Codex app when you sign in with ChatGPT. With an API key, Codex uses standard API pricing instead and you can’t use `/fast`.

[Fast mode demo](/videos/codex/fast-mode-demo.mp4)

## Codex-Spark

GPT-5.3-Codex-Spark is a separate fast, less-capable Codex model optimized for near-instant, real-time coding iteration. Unlike fast mode, which speeds up GPT-5.4 at a higher credit rate, Codex-Spark is its own model choice and has its own usage limits.

During the research preview, Codex-Spark is only available to ChatGPT Pro subscribers.

videos.md +0 −2

# Videos

windows.md +34 −25

Details

1# Windows1# Windows

2 2 

3Tips for running Codex on Windows3The easiest way to use Codex on Windows is to use the [Codex app](https://developers.openai.com/codex/app/windows). You can also [set up the IDE extension](https://developers.openai.com/codex/ide) or [install the CLI](https://developers.openai.com/codex/cli) and run it from PowerShell.

4 4 

5The easiest way to use Codex on Windows is to [set up the IDE extension](https://developers.openai.com/codex/ide) or [install the CLI](https://developers.openai.com/codex/cli) and run it from PowerShell.5[![](/images/codex/codex-banner-icon.webp)

6 6 

7When you run Codex natively on Windows, the agent mode uses an experimental Windows sandbox to block filesystem writes outside the working folder and prevent network access without your explicit approval. [Learn more below](#windows-experimental-sandbox).7Use the Codex app on Windows

8 8 

9Instead, you can use [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/install) (WSL2). WSL2 gives you a Linux shell, Unix-style semantics, and tooling that match many tasks that models see in training.9Work across projects, run parallel agent threads, and review results in one place with the native Windows app.](https://developers.openai.com/codex/app/windows)

10 

11When you run Codex natively on Windows, agent mode uses a [Windows sandbox](#windows-sandbox) to block filesystem writes outside the working folder and prevent network access without your explicit approval. [Learn more below](#windows-sandbox).

12 

13If you prefer to have Codex use [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/install) (WSL2), [read the instructions](#windows-subsystem-for-linux) below.

14 

15## Windows sandbox

16 

17Native Windows sandbox support includes two modes that you can configure in `config.toml`:

18 

```toml
[windows]
sandbox = "unelevated" # or "elevated"
```
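If you script machine setup, this key can be toggled programmatically. A minimal sketch that treats `config.toml` as plain text; the `set_sandbox_mode` helper is hypothetical, not part of Codex:

```python
import re

# Hypothetical helper (not part of Codex): rewrite the sandbox mode in the
# text of a config.toml. Only the two documented values are accepted.
def set_sandbox_mode(config_text: str, mode: str) -> str:
    if mode not in ("unelevated", "elevated"):
        raise ValueError('mode must be "unelevated" or "elevated"')
    pattern = re.compile(r'^(sandbox\s*=\s*)"(?:unelevated|elevated)"',
                         re.MULTILINE)
    if pattern.search(config_text):
        return pattern.sub(rf'\1"{mode}"', config_text)
    # No existing key: append a [windows] table with the setting.
    return config_text.rstrip() + f'\n\n[windows]\nsandbox = "{mode}"\n'
```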

23 

+ How `elevated` mode works:

+ - Uses a Restricted Token approach with filesystem ACLs to limit which files the sandbox can write to.
+ - Runs commands as a dedicated Windows Sandbox User.
+ - Limits network access by installing Windows Firewall rules.
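The write rule these mechanisms enforce can be pictured as a pure path check. This is an illustration of the policy only ("writes stay inside the working folder"); Codex enforces it with ACLs and the restricted token, and `write_allowed` is a hypothetical name:

```python
from pathlib import PureWindowsPath

# Illustration of the policy, not the enforcement: a target is writable
# when it sits under the working folder. A real check would also have to
# normalize ".." components before comparing.
def write_allowed(workdir: str, target: str) -> bool:
    try:
        PureWindowsPath(target).relative_to(PureWindowsPath(workdir))
        return True
    except ValueError:
        return False
```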

29 

+ ### Grant sandbox read access

+ When a command fails because the Windows sandbox can't read a directory, use:

```text
/sandbox-add-read-dir C:\absolute\directory\path
```

+ The path must be an existing absolute directory. After the command succeeds, later commands that run in the sandbox can read that directory during the current session.
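Both requirements (absolute, and an existing directory) are easy to pre-check before reaching for the slash command. A small sketch; `valid_read_dir` is a hypothetical helper, not part of Codex:

```python
import os

# Hypothetical pre-flight check mirroring the documented requirement:
# the argument must be an absolute path to a directory that exists.
def valid_read_dir(path: str) -> bool:
    return os.path.isabs(path) and os.path.isdir(path)
```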

## Windows Subsystem for Linux


```

- If you need Windows access to files, they’re under `\\wsl$\Ubuntu\home\<user>` in Explorer.

− ## Windows experimental sandbox
+ ## Troubleshooting and FAQ

− The Windows sandbox support is experimental. How it works:

− - Launches commands inside a restricted token derived from an AppContainer profile.
− - Grants only specifically requested filesystem capabilities by attaching capability security identifiers to that profile.
− - Disables outbound network access by overriding proxy-related environment variables and inserting stub executables for common network tools.
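The proxy-override part of that approach fits in a few lines. This is a rough sketch of the idea, not the sandbox's actual code; the variable list and the address are assumptions:

```python
# Sketch of the env-override idea: point the proxy variables that most
# CLI tools honor at an address nothing listens on, and drop any bypass
# lists. Tools that ignore proxy variables are untouched by this alone,
# which is why the sandbox also inserts stub executables.
def no_network_env(base: dict) -> dict:
    env = dict(base)
    for var in ("HTTP_PROXY", "HTTPS_PROXY", "ALL_PROXY",
                "http_proxy", "https_proxy", "all_proxy"):
        env[var] = "http://127.0.0.1:9"  # discard port; no proxy here
    env.pop("NO_PROXY", None)
    env.pop("no_proxy", None)
    return env
```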


− Its primary limitation is that it can’t prevent file writes, deletions, or creations in any directory where the Everyone SID already has write permissions (for example, world-writable folders). When using the Windows sandbox, Codex scans for folders where Everyone has write access and recommends that you remove that access.
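That scan has a rough POSIX analogue in the world-writable mode bit; on Windows the real check reads each folder's ACL for the Everyone SID. A cross-platform sketch, with `world_writable_dirs` as a hypothetical name:

```python
import os
import stat

# POSIX analogue of the scan: flag directories whose permission bits grant
# write access to all users, roughly what a Windows folder looks like when
# the Everyone SID has write permission.
def world_writable_dirs(root: str) -> list[str]:
    hits = []
    for dirpath, dirnames, _files in os.walk(root):
        for name in dirnames:
            full = os.path.join(dirpath, name)
            if os.stat(full).st_mode & stat.S_IWOTH:
                hits.append(full)
    return hits
```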

− ### Grant sandbox read access

− When a command fails because the Windows sandbox can't read a directory, use:

```text
/sandbox-add-read-dir C:\absolute\directory\path
```

− The path must be an existing absolute directory. After the command succeeds, later commands that run in the sandbox can read that directory during the current session.

− ### Troubleshooting and FAQ

#### Installed extension, but it’s unresponsive

workflows.md +0 −2


# Workflows

− Development usage patterns with Codex

Codex works best when you treat it like a teammate with explicit context and a clear definition of "done."
This page gives end-to-end workflow examples for the Codex IDE extension, the Codex CLI, and Codex cloud.