config-advanced.md +21 −13

Use these options when you need more control over providers, policies, and integrations. For a quick start, see [Config basics](https://developers.openai.com/codex/config-basic).

For background on project guidance, reusable capabilities, custom slash commands, subagent workflows, and integrations, see [Customization](https://developers.openai.com/codex/concepts/customization). For configuration keys, see [Configuration Reference](https://developers.openai.com/codex/config-reference).

## Profiles


```shell
# Dedicated flag
codex --model gpt-5.4

# Generic key/value override (value is TOML, not JSON)
codex --config model='"gpt-5.4"'
codex --config sandbox_workspace_write.network_access=true
codex --config 'shell_environment_policy.include_only=["PATH","HOME"]'
```
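
For settings you want on every run rather than per invocation, the same keys can be persisted in `config.toml`. A sketch of the equivalent persisted config, using only the key names shown in the overrides above:

```toml
# Equivalent of the CLI overrides above, persisted in config.toml
model = "gpt-5.4"

[sandbox_workspace_write]
network_access = true

[shell_environment_policy]
include_only = ["PATH", "HOME"]
```

CLI overrides still take precedence on runs where both are present.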

For shared defaults, rules, and skills checked into repos or system paths, see [Team Config](https://developers.openai.com/codex/enterprise/admin-setup#team-config).

If you just need to point the built-in OpenAI provider at an LLM proxy, router, or data-residency enabled project, set `openai_base_url` in `config.toml` instead of defining a new provider. This changes the base URL for the built-in `openai` provider without requiring a separate `model_providers.<id>` entry.

```toml
openai_base_url = "https://us.api.openai.com/v1"
```

## Project config files (`.codex/config.toml`)

For security, Codex loads project-scoped config files only when the project is trusted. If the project is untrusted, Codex ignores `.codex/config.toml` files in the project.

Relative paths inside a project config (for example, `model_instructions_file`) are resolved relative to the `.codex/` folder that contains the `config.toml`.
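
As an illustration (the instructions file name here is hypothetical), a project-scoped config at `<repo>/.codex/config.toml` might contain:

```toml
# <repo>/.codex/config.toml
# The relative path below resolves against this .codex/ folder,
# i.e. to <repo>/.codex/instructions.md (file name is hypothetical).
model_instructions_file = "instructions.md"
```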

## Agent roles (`[agents]` in `config.toml`)

For subagent role configuration (`[agents]` in `config.toml`), see [Subagents](https://developers.openai.com/codex/subagents).

## Project root detection

Pick approval strictness (affects when Codex pauses) and sandbox level (affects file/network access).

For operational details to keep in mind while editing `config.toml`, see [Common sandbox and approval combinations](https://developers.openai.com/codex/agent-approvals-security#common-sandbox-and-approval-combinations), [Protected paths in writable roots](https://developers.openai.com/codex/agent-approvals-security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/agent-approvals-security#network-access).

You can also use a granular approval policy (`approval_policy = { granular = { ... } }`) to allow or auto-reject individual prompt categories. This is useful when you want normal interactive approvals for some cases but want others, such as `request_permissions` or skill-script prompts, to fail closed automatically.

```toml
approval_policy = "untrusted" # Other options: on-request, never, or { granular = { ... } }
sandbox_mode = "workspace-write"
allow_login_shell = false # Optional hardening: disallow login shells for shell tools

# Example granular approval policy:
# approval_policy = { granular = {
#   sandbox_approval = true,
#   rules = true,
#   mcp_elicitations = true,
#   request_permissions = false,
#   skill_approval = false
# } }

[sandbox_workspace_write]
exclude_tmpdir_env_var = false # Allow $TMPDIR
exclude_slash_tmp = false # Allow /tmp
network_access = false # Opt in to outbound network
```

Need the complete key list (including profile-scoped overrides and requirements constraints)? See [Configuration Reference](https://developers.openai.com/codex/config-reference) and [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration).

In workspace-write mode, some environments keep `.git/` and `.codex/` read-only even when the rest of the workspace is writable. This is why
| `codex.tool.call` | counter | `tool`, `success` | Tool invocation count by tool name and success/failure. |
| `codex.tool.call.duration_ms` | histogram | `tool`, `success` | Tool execution duration in milliseconds by tool name and outcome. |

For more security and privacy guidance around telemetry, see [Security](https://developers.openai.com/codex/agent-approvals-security#monitoring-and-telemetry).

### Metrics
