config-advanced.md

Use these options when you need more control over providers, policies, and integrations. For a quick start, see [Config basics](https://developers.openai.com/codex/config-basic).

For background on project guidance, reusable capabilities, custom slash commands, subagent workflows, and integrations, see [Customization](https://developers.openai.com/codex/concepts/customization). For configuration keys, see [Configuration Reference](https://developers.openai.com/codex/config-reference).

## Profiles

Define profiles under `[profiles.<name>]` in `config.toml`, then run `codex --profile <name>`:

```toml
model = "gpt-5.4"
approval_policy = "on-request"
model_catalog_json = "/Users/me/.codex/model-catalogs/default.json"

[profiles.deep-review]
model = "gpt-5-pro"
model_reasoning_effort = "high"
approval_policy = "never"
model_catalog_json = "/Users/me/.codex/model-catalogs/deep-review.json"

[profiles.lightweight]
model = "gpt-4.1"
```

To make a profile the default, add `profile = "deep-review"` at the top level of `config.toml`. Codex loads that profile unless you override it on the command line.
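
That top-level setting looks like this (using the `deep-review` profile from the example above):

```toml
# Top-level key in ~/.codex/config.toml, outside any [profiles.*] table.
profile = "deep-review"
```

Running `codex --profile lightweight` still wins over this default for that run.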

Profiles can also override `model_catalog_json`. When both the top level and the selected profile set `model_catalog_json`, Codex prefers the profile value.

## One-off overrides from the CLI

In addition to editing `~/.codex/config.toml`, you can override configuration for a single run from the CLI:

```shell
# Dedicated flag
codex --model gpt-5.4

# Generic key/value override (value is TOML, not JSON)
codex --config model='"gpt-5.4"'
codex --config sandbox_workspace_write.network_access=true
codex --config 'shell_environment_policy.include_only=["PATH","HOME"]'
```

For shared defaults, rules, and skills checked into repos or system paths, see [Team Config](https://developers.openai.com/codex/enterprise/admin-setup#team-config).

If you just need to point the built-in OpenAI provider at an LLM proxy, router, or data-residency enabled project, set `openai_base_url` in `config.toml` instead of defining a new provider. This changes the base URL for the built-in `openai` provider without requiring a separate `model_providers.<id>` entry.

```toml
openai_base_url = "https://us.api.openai.com/v1"
```

## Project config files (`.codex/config.toml`)

In addition to your user config, Codex reads project-scoped overrides from `.codex/config.toml` files inside your repo. Codex walks from the project root to your current working directory and loads every `.codex/config.toml` it finds. If multiple files define the same key, the closest file to your working directory wins.
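
A sketch with hypothetical paths, showing how the closest file wins for a shared key:

```toml
# <repo>/.codex/config.toml -- project-root layer
model_reasoning_effort = "medium"

# <repo>/packages/api/.codex/config.toml -- closer to your working
# directory, so its value wins when you run Codex from packages/api
model_reasoning_effort = "high"
```

Keys that only the root file sets still apply; overriding happens per key, not per file.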

For security, Codex loads project-scoped config files only when the project is trusted. If the project is untrusted, Codex ignores project `.codex/` layers, including `.codex/config.toml`, project-local hooks, and project-local rules. User and system layers remain separate and still load.

Relative paths inside a project config (for example, `model_instructions_file`) are resolved relative to the `.codex/` folder that contains the `config.toml`.
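
For instance, assuming a hypothetical `instructions.md` stored next to the project config:

```toml
# In <repo>/.codex/config.toml; resolves to <repo>/.codex/instructions.md,
# not to a path relative to your current working directory.
model_instructions_file = "instructions.md"
```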

## Hooks (experimental)

Codex can also load lifecycle hooks from either `hooks.json` files or inline `[hooks]` tables in `config.toml` files that sit next to active config layers.

In practice, the most useful locations are:

- `~/.codex/hooks.json`
- `~/.codex/config.toml`
- `<repo>/.codex/hooks.json`
- `<repo>/.codex/config.toml`

Project-local hooks load only when the project `.codex/` layer is trusted. User-level hooks remain independent of project trust.

Turn hooks on with:

```toml
[features]
codex_hooks = true
```

Inline TOML hooks use the same event structure as `hooks.json`:

```toml
[[hooks.PreToolUse]]
matcher = "^Bash$"

[[hooks.PreToolUse.hooks]]
type = "command"
command = '/usr/bin/python3 "$(git rev-parse --show-toplevel)/.codex/hooks/pre_tool_use_policy.py"'
timeout = 30
statusMessage = "Checking Bash command"
```

If a single layer contains both `hooks.json` and inline `[hooks]`, Codex loads both and warns. Prefer one representation per layer.

For the current event list, input fields, output behavior, and limitations, see [Hooks](https://developers.openai.com/codex/hooks).

## Agent roles (`[agents]` in `config.toml`)

For subagent role configuration (`[agents]` in `config.toml`), see [Subagents](https://developers.openai.com/codex/subagents).

## Project root detection


## Custom model providers

A model provider defines how Codex connects to a model (base URL, wire API, authentication, and optional HTTP headers). Custom providers can't reuse the reserved built-in provider IDs: `openai`, `ollama`, and `lmstudio`.

Define additional providers and point `model_provider` at them:

```toml
model = "gpt-5.4"
model_provider = "proxy"

[model_providers.proxy]
base_url = "http://proxy.example.com"
env_key = "OPENAI_API_KEY"

[model_providers.local_ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"
env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }
```

Use command-backed authentication when a provider needs Codex to fetch bearer tokens from an external credential helper:

```toml
[model_providers.proxy]
name = "OpenAI using LLM proxy"
base_url = "https://proxy.example.com/v1"
wire_api = "responses"

[model_providers.proxy.auth]
command = "/usr/local/bin/fetch-codex-token"
args = ["--audience", "codex"]
timeout_ms = 5000
refresh_interval_ms = 300000
```

The auth command receives no `stdin` and must print the token to stdout. Codex trims surrounding whitespace, treats an empty token as an error, and refreshes proactively at `refresh_interval_ms`; set `refresh_interval_ms = 0` to refresh only after an authentication retry. Don't combine `[model_providers.<id>.auth]` with `env_key`, `experimental_bearer_token`, or `requires_openai_auth`.
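
A minimal sketch of such a helper; the token here is a hardcoded placeholder (a real helper would typically call a secret store or cloud CLI):

```shell
#!/bin/sh
# Hypothetical credential helper for [model_providers.<id>.auth].
# Codex runs it with no stdin and reads the bearer token from stdout;
# surrounding whitespace is trimmed and an empty result is an error,
# so fail with a non-zero exit when no token is available.
printf '%s\n' "example-bearer-token"
```

Printing an empty token counts as an error, so fail loudly when the token can't be fetched rather than printing a blank line.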

### Amazon Bedrock provider

Codex includes a built-in `amazon-bedrock` model provider. Set it directly as `model_provider`; unlike custom providers, this built-in provider supports only the nested AWS profile and region overrides.

```toml
model_provider = "amazon-bedrock"
model = "<bedrock-model-id>"

[model_providers.amazon-bedrock.aws]
profile = "default"
region = "eu-central-1"
```

If you omit `profile`, Codex uses the standard AWS credential chain. Set `region` to the supported Bedrock region that should handle requests.

## OSS mode (local providers)

Codex can run against a local "open source" provider (for example, Ollama or LM Studio) when you pass `--oss`. If you pass `--oss` without specifying a provider, Codex uses `oss_provider` as the default.
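
For example, assuming you want LM Studio (one of the built-in local provider IDs) as the `--oss` default:

```toml
# Used when you run `codex --oss` without naming a provider.
oss_provider = "lmstudio"
```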

Azure requires an `api-version` query parameter, and per-provider retry and stream tuning live on the same provider entry:

```toml
[model_providers.azure]
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000
```

To change the base URL for the built-in OpenAI provider, use `openai_base_url`; don't create `[model_providers.openai]`, because you can't override built-in provider IDs.

## ChatGPT customers using data residency

Projects created with [data residency](https://help.openai.com/en/articles/9903489-data-residency-and-inference-residency-for-chatgpt) enabled can create a model provider to update the `base_url` with the [correct prefix](https://platform.openai.com/docs/guides/your-data#which-models-and-features-are-eligible-for-data-residency).

Pick approval strictness (affects when Codex pauses) and sandbox level (affects file/network access).

For operational details to keep in mind while editing `config.toml`, see [Common sandbox and approval combinations](https://developers.openai.com/codex/agent-approvals-security#common-sandbox-and-approval-combinations), [Protected paths in writable roots](https://developers.openai.com/codex/agent-approvals-security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/agent-approvals-security#network-access).

You can also use a granular approval policy (`approval_policy = { granular = { ... } }`) to allow or auto-reject individual prompt categories. This is useful when you want normal interactive approvals for some cases but want others, such as `request_permissions` or skill-script prompts, to fail closed automatically.

Set `approvals_reviewer = "auto_review"` to route eligible interactive approval requests through automatic review. This changes the reviewer, not the sandbox boundary.

Use `[auto_review].policy` for local reviewer policy instructions. Managed `guardian_policy_config` takes precedence.

```toml
approval_policy = "untrusted" # Other options: on-request, never, or { granular = { ... } }
approvals_reviewer = "user" # Or "auto_review" for automatic review
sandbox_mode = "workspace-write"
allow_login_shell = false # Optional hardening: disallow login shells for shell tools

# Example granular approval policy:
# approval_policy = { granular = {
#   sandbox_approval = true,
#   rules = true,
#   mcp_elicitations = true,
#   request_permissions = false,
#   skill_approval = false
# } }

[sandbox_workspace_write]
exclude_tmpdir_env_var = false # Allow $TMPDIR
exclude_slash_tmp = false # Allow /tmp
writable_roots = ["/Users/YOU/.pyenv/shims"]
network_access = false # Set true to opt in to outbound network

[auto_review]
policy = """
Use your organization's automatic review policy.
"""
```

### Named permission profiles

Set `default_permissions` to reuse a sandbox profile by name. Codex includes the built-in profiles `:read-only`, `:workspace`, and `:danger-no-sandbox`:

```toml
default_permissions = ":workspace"
```

For custom profiles, point `default_permissions` at a name you define under `[permissions.<name>]`:

```toml
default_permissions = "workspace"

[permissions.workspace.filesystem]
":project_roots" = { "." = "write", "**/*.env" = "none" }
glob_scan_max_depth = 3

[permissions.workspace.network]
enabled = true
mode = "limited"

[permissions.workspace.network.domains]
"api.openai.com" = "allow"
```

Use built-in names with a leading colon. Custom names don't use a leading colon and must have matching `permissions` tables.

Need the complete key list (including profile-scoped overrides and requirements constraints)? See [Configuration Reference](https://developers.openai.com/codex/config-reference) and [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration).

In workspace-write mode, some environments keep `.git/` and `.codex/` read-only even when the rest of the workspace is writable. This is why

| Metric | Type | Fields | Description |
| --- | --- | --- | --- |
| `codex.tool.call` | counter | `tool`, `success` | Tool invocation count by tool name and success/failure. |
| `codex.tool.call.duration_ms` | histogram | `tool`, `success` | Tool execution duration in milliseconds by tool name and outcome. |

For more security and privacy guidance around telemetry, see [Security](https://developers.openai.com/codex/agent-approvals-security#monitoring-and-telemetry).

### Metrics

#### Metrics catalog

Each metric includes the required fields plus the default context fields above. Metric names below omit the `codex.` prefix. Most metric names are centralized in `codex-rs/otel/src/metrics/names.rs`; feature-specific metrics emitted outside that file are included here too.

If a metric includes the `tool` field, it reflects the internal tool used (for example, `apply_patch` or `shell`) and doesn't contain the actual shell command or patch `codex` is trying to apply.

#### Runtime and model transport

| Metric | Type | Fields | Description |
| --- | --- | --- | --- |
| `api_request` | counter | `status`, `success` | API request count by HTTP status and success/failure. |
| `api_request.duration_ms` | histogram | `status`, `success` | API request duration in milliseconds. |
| `sse_event` | counter | `kind`, `success` | SSE event count by event kind and success/failure. |
| `sse_event.duration_ms` | histogram | `kind`, `success` | SSE event processing duration in milliseconds. |
| `websocket.request` | counter | `success` | WebSocket request count by success/failure. |
| `websocket.request.duration_ms` | histogram | `success` | WebSocket request duration in milliseconds. |
| `websocket.event` | counter | `kind`, `success` | WebSocket message/event count by type and success/failure. |
| `websocket.event.duration_ms` | histogram | `kind`, `success` | WebSocket message/event processing duration in milliseconds. |
| `responses_api_overhead.duration_ms` | histogram | | Responses API overhead timing from websocket responses. |
| `responses_api_inference_time.duration_ms` | histogram | | Responses API inference timing from websocket responses. |
| `responses_api_engine_iapi_ttft.duration_ms` | histogram | | Responses API engine IAPI time-to-first-token timing. |
| `responses_api_engine_service_ttft.duration_ms` | histogram | | Responses API engine service time-to-first-token timing. |
| `responses_api_engine_iapi_tbt.duration_ms` | histogram | | Responses API engine IAPI time-between-token timing. |
| `responses_api_engine_service_tbt.duration_ms` | histogram | | Responses API engine service time-between-token timing. |
| `transport.fallback_to_http` | counter | `from_wire_api` | WebSocket-to-HTTP fallback count. |
| `remote_models.fetch_update.duration_ms` | histogram | | Time to fetch remote model definitions. |
| `remote_models.load_cache.duration_ms` | histogram | | Time to load the remote model cache. |
| `startup_prewarm.duration_ms` | histogram | `status` | Startup prewarm duration by outcome. |
| `startup_prewarm.age_at_first_turn_ms` | histogram | `status` | Startup prewarm age when the first real turn resolves it. |
| `cloud_requirements.fetch.duration_ms` | histogram | | Workspace-managed cloud requirements fetch duration. |
| `cloud_requirements.fetch_attempt` | counter | See note | Workspace-managed cloud requirements fetch attempts. |
| `cloud_requirements.fetch_final` | counter | See note | Final workspace-managed cloud requirements fetch outcome. |
| `cloud_requirements.load` | counter | `trigger`, `outcome` | Workspace-managed cloud requirements load outcome. |

The `cloud_requirements.fetch_attempt` metric includes `trigger`, `attempt`, `outcome`, and `status_code` fields. The `cloud_requirements.fetch_final` metric includes `trigger`, `outcome`, `reason`, `attempt_count`, and `status_code` fields.

#### Turn and tool activity

| Metric | Type | Fields | Description |
| --- | --- | --- | --- |
| `turn.e2e_duration_ms` | histogram | | End-to-end time for a full turn. |
| `turn.ttft.duration_ms` | histogram | | Time to first token for a turn. |
| `turn.ttfm.duration_ms` | histogram | | Time to first model output item for a turn. |
| `turn.network_proxy` | counter | `active`, `tmp_mem_enabled` | Whether the managed network proxy was active for the turn. |
| `turn.memory` | counter | `read_allowed`, `feature_enabled`, `config_use_memories`, `has_citations` | Per-turn memory read availability and memory citation usage. |
| `turn.tool.call` | histogram | `tmp_mem_enabled` | Number of tool calls in the turn. |
| `turn.token_usage` | histogram | `token_type`, `tmp_mem_enabled` | Per-turn token usage by token type (`total`, `input`, `cached_input`, `output`, or `reasoning_output`). |
| `tool.call` | counter | `tool`, `success` | Tool invocation count by tool name and success/failure. |
| `tool.call.duration_ms` | histogram | `tool`, `success` | Tool execution duration in milliseconds by tool name and outcome. |
| `tool.unified_exec` | counter | `tty` | Unified exec tool calls by TTY mode. |
| `approval.requested` | counter | `tool`, `approved` | Tool approval request result (`approved`, `approved_with_amendment`, `approved_for_session`, `denied`, `abort`). |
| `mcp.call` | counter | See note | MCP tool invocation result. |
| `mcp.call.duration_ms` | histogram | See note | MCP tool invocation duration. |
| `mcp.tools.list.duration_ms` | histogram | `cache` | MCP tool-list duration, including cache hit/miss state. |
| `mcp.tools.fetch_uncached.duration_ms` | histogram | | Duration of uncached MCP tool fetches. |
| `mcp.tools.cache_write.duration_ms` | histogram | | Duration of Codex Apps MCP tool-cache writes. |
| `hooks.run` | counter | `hook_name`, `source`, `status` | Hook run count by hook name, source, and status. |
| `hooks.run.duration_ms` | histogram | `hook_name`, `source`, `status` | Hook run duration in milliseconds. |

The `mcp.call` and `mcp.call.duration_ms` metrics include `status`; normal tool-call emissions also include `tool`, plus `connector_id` and `connector_name` when available. Blocked Codex Apps MCP calls may emit `mcp.call` with only `status`.

#### Threads, tasks, and features

| Metric | Type | Fields | Description |
| --- | --- | --- | --- |
| `feature.state` | counter | `feature`, `value` | Feature values that differ from defaults (emit one row per non-default). |
| `status_line` | counter | | Session started with a configured status line. |
| `model_warning` | counter | | Warning sent to the model. |
| `thread.started` | counter | `is_git` | New thread created, tagged by whether the working directory is in a Git repo. |
| `conversation.turn.count` | counter | | User/assistant turns per thread, recorded at the end of the thread. |
| `thread.fork` | counter | `source` | New thread created by forking an existing thread. |
| `thread.rename` | counter | | Thread renamed. |
| `thread.side` | counter | `source` | Side conversation created. |
| `thread.skills.enabled_total` | histogram | | Number of skills enabled for a new thread. |
| `thread.skills.kept_total` | histogram | | Number of enabled skills kept after prompt rendering. |
| `thread.skills.truncated` | histogram | | Whether skill rendering truncated the enabled skills list (`1` or `0`). |
| `task.compact` | counter | `type` | Number of compactions per type (`remote` or `local`), including manual and auto. |
| `task.review` | counter | | Number of reviews triggered. |
| `task.undo` | counter | | Number of undo actions triggered. |
| `task.user_shell` | counter | | Number of user shell actions (`!` in the TUI for example). |
| `shell_snapshot` | counter | See note | Whether taking a shell snapshot succeeded. |
| `shell_snapshot.duration_ms` | histogram | `success` | Time to take a shell snapshot. |
| `skill.injected` | counter | `status`, `skill` | Skill injection outcomes by skill. |
| `plugins.startup_sync` | counter | `transport`, `status` | Curated plugin startup sync attempts. |
| `plugins.startup_sync.final` | counter | `transport`, `status` | Final curated plugin startup sync outcome. |
| `multi_agent.spawn` | counter | `role` | Agent spawns by role. |
| `multi_agent.resume` | counter | | Agent resumes. |
| `multi_agent.nickname_pool_reset` | counter | | Agent nickname pool resets. |

The `shell_snapshot` metric includes `success` and, on failures, `failure_reason`.

#### Memory and local state

| Metric | Type | Fields | Description |
| --- | --- | --- | --- |
| `memory.phase1` | counter | `status` | Memory phase 1 job counts by status. |
| `memory.phase1.e2e_ms` | histogram | | End-to-end duration for memory phase 1. |
| `memory.phase1.output` | counter | | Memory phase 1 outputs written. |
| `memory.phase1.token_usage` | histogram | `token_type` | Memory phase 1 token usage by token type. |
| `memory.phase2` | counter | `status` | Memory phase 2 job counts by status. |
| `memory.phase2.e2e_ms` | histogram | | End-to-end duration for memory phase 2. |
| `memory.phase2.input` | counter | | Memory phase 2 input count. |
| `memory.phase2.token_usage` | histogram | `token_type` | Memory phase 2 token usage by token type. |
| `memories.usage` | counter | `kind`, `tool`, `success` | Memory usage by kind, tool, and success/failure. |
| `external_agent_config.detect` | counter | See note | External agent config detections by migration item type. |
| `external_agent_config.import` | counter | See note | External agent config imports by migration item type. |
| `db.backfill` | counter | `status` | Initial state DB backfill results (`upserted`, `failed`). |
| `db.backfill.duration_ms` | histogram | `status` | Duration of the initial state DB backfill. |
| `db.error` | counter | `stage` | Errors during state DB operations. |
| `db.compare_error` | counter | `stage`, `reason` | State DB discrepancies detected during reconciliation. |

The `external_agent_config.detect` and `external_agent_config.import` metrics include `migration_type`; skills migrations also include `skills_count`.

#### Windows sandbox

| Metric | Type | Fields | Description |
| --- | --- | --- | --- |
| `windows_sandbox.setup_success` | counter | `originator`, `mode` | Windows sandbox setup successes. |
| `windows_sandbox.setup_failure` | counter | `originator`, `mode` | Windows sandbox setup failures. |
| `windows_sandbox.setup_duration_ms` | histogram | `result`, `originator`, `mode` | Windows sandbox setup duration. |
| `windows_sandbox.elevated_setup_success` | counter | | Elevated Windows sandbox setup successes. |
| `windows_sandbox.elevated_setup_failure` | counter | See note | Elevated Windows sandbox setup failures. |
| `windows_sandbox.elevated_setup_canceled` | counter | See note | Canceled elevated Windows sandbox setup attempts. |
| `windows_sandbox.elevated_setup_duration_ms` | histogram | `result` | Elevated Windows sandbox setup duration. |
| `windows_sandbox.elevated_prompt_shown` | counter | | Elevated sandbox setup prompt shown. |
| `windows_sandbox.elevated_prompt_accept` | counter | | Elevated sandbox setup prompt accepted. |
| `windows_sandbox.elevated_prompt_use_legacy` | counter | | User chose legacy sandbox from the elevated prompt. |
| `windows_sandbox.elevated_prompt_quit` | counter | | User quit from the elevated prompt. |
| `windows_sandbox.fallback_prompt_shown` | counter | | Fallback sandbox prompt shown. |
| `windows_sandbox.fallback_retry_elevated` | counter | | User retried elevated setup from the fallback prompt. |
| `windows_sandbox.fallback_use_legacy` | counter | | User chose legacy sandbox from the fallback prompt. |
| `windows_sandbox.fallback_prompt_quit` | counter | | User quit from the fallback prompt. |
| `windows_sandbox.legacy_setup_preflight_failed` | counter | See note | Legacy Windows sandbox setup preflight failure. |
| `windows_sandbox.setup_elevated_sandbox_command` | counter | | Elevated sandbox setup command invoked. |
| `windows_sandbox.createprocessasuserw_failed` | counter | `error_code`, `path_kind`, `exe`, `level` | Windows `CreateProcessAsUserW` failures. |

The elevated setup failure metrics include `code` and `message` when Windows setup failure details are available, and may include `originator` when emitted from the shared setup path. The `windows_sandbox.legacy_setup_preflight_failed` metric includes `originator` when emitted from the shared setup path, but fallback-prompt preflight failures may not include any fields.

### Feedback controls

- `notify` runs an external program (good for webhooks, desktop notifiers, CI hooks).
- `tui.notifications` is built in to the TUI and can optionally filter by event type (for example, `agent-turn-complete` and `approval-requested`).
- `tui.notification_method` controls how the TUI emits terminal notifications (`auto`, `osc9`, or `bel`).
- `tui.notification_condition` controls when TUI notifications fire: only when the terminal is `unfocused`, or `always`.

In `auto` mode, Codex prefers OSC 9 notifications (a terminal escape sequence some terminals interpret as a desktop notification) and falls back to BEL (`\x07`) otherwise.
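
The two encodings can be sketched with `printf` (the message text is illustrative):

```shell
# OSC 9: an escape sequence some terminals surface as a desktop notification.
printf '\033]9;%s\007' "Codex: agent turn complete"

# BEL: the plain terminal bell Codex falls back to.
printf '\007'
```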

- `tui.notifications`: enable/disable notifications (or restrict to specific types)
- `tui.notification_method`: choose `auto`, `osc9`, or `bel` for terminal notifications
- `tui.notification_condition`: choose `unfocused` or `always` for when notifications fire
- `tui.animations`: enable/disable ASCII animations and shimmer effects
- `tui.alternate_screen`: control alternate screen usage (set to `never` to keep terminal scrollback)
- `tui.show_tooltips`: show or hide onboarding tooltips on the welcome screen