config-advanced.md

Use these options when you need more control over providers, policies, and integrations. For a quick start, see [Config basics](https://developers.openai.com/codex/config-basic).

For background on project guidance, reusable capabilities, custom slash commands, subagent workflows, and integrations, see [Customization](https://developers.openai.com/codex/concepts/customization). For configuration keys, see [Configuration Reference](https://developers.openai.com/codex/config-reference).

## Profiles

Define profiles under `[profiles.<name>]` in `config.toml`, then run `codex --profile <name>`:

```toml
model = "gpt-5.4"
approval_policy = "on-request"
model_catalog_json = "/Users/me/.codex/model-catalogs/default.json"
```

```shell
# Dedicated flag
codex --model gpt-5.4

# Generic key/value override (value is TOML, not JSON)
codex --config model='"gpt-5.4"'
codex --config sandbox_workspace_write.network_access=true
codex --config 'shell_environment_policy.include_only=["PATH","HOME"]'
```
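
Pulling the pieces above together, a named profile selected by `--profile` might look like this (the profile name `speedy` and its values are illustrative, not a recommended configuration):

```toml
# Hypothetical profile: pick it with `codex --profile speedy`.
[profiles.speedy]
model = "gpt-5.4"
approval_policy = "on-request"
```

A profile only overrides the keys it sets; everything else falls back to the top-level `config.toml` values.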

For shared defaults, rules, and skills checked into repos or system paths, see [Team Config](https://developers.openai.com/codex/enterprise/admin-setup#team-config).

If you just need to point the built-in OpenAI provider at an LLM proxy, router, or data-residency enabled project, set `openai_base_url` in `config.toml` instead of defining a new provider. This changes the base URL for the built-in `openai` provider without requiring a separate `model_providers.<id>` entry.

```toml
openai_base_url = "https://us.api.openai.com/v1"
```

## Project config files (`.codex/config.toml`)

For security, Codex loads project-scoped config files only when the project is trusted. If the project is untrusted, Codex ignores `.codex/config.toml` files in the project.

Relative paths inside a project config (for example, `model_instructions_file`) are resolved relative to the `.codex/` folder that contains the `config.toml`.

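For instance, with a repo-local project config (the file name `instructions.md` is illustrative), the relative path resolves inside `.codex/`, not against the repo root:

```toml
# <repo>/.codex/config.toml
# Resolved as <repo>/.codex/instructions.md, not <repo>/instructions.md.
model_instructions_file = "instructions.md"
```
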
## Hooks (experimental)

Codex can also load lifecycle hooks from `hooks.json` files that sit next to active config layers.

In practice, the two most useful locations are:

- `~/.codex/hooks.json`
- `<repo>/.codex/hooks.json`

Turn hooks on with:

```toml
[features]
codex_hooks = true
```

For the current event list, input fields, output behavior, and limitations, see [Hooks](https://developers.openai.com/codex/hooks).

## Agent roles (`[agents]` in `config.toml`)

For subagent role configuration (`[agents]` in `config.toml`), see [Subagents](https://developers.openai.com/codex/subagents).

## Project root detection


## Custom model providers

A model provider defines how Codex connects to a model (base URL, wire API, authentication, and optional HTTP headers). Custom providers can't reuse the reserved built-in provider IDs: `openai`, `ollama`, and `lmstudio`.

Define additional providers and point `model_provider` at them:

```toml
model = "gpt-5.4"
model_provider = "proxy"

[model_providers.proxy]
base_url = "http://proxy.example.com"
env_key = "OPENAI_API_KEY"

[model_providers.local_ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"

env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }
```

Use command-backed authentication when a provider needs Codex to fetch bearer tokens from an external credential helper:

```toml
[model_providers.proxy]
name = "OpenAI using LLM proxy"
base_url = "https://proxy.example.com/v1"
wire_api = "responses"

[model_providers.proxy.auth]
command = "/usr/local/bin/fetch-codex-token"
args = ["--audience", "codex"]
timeout_ms = 5000
refresh_interval_ms = 300000
```

The auth command receives no `stdin` and must print the token to stdout. Codex trims surrounding whitespace, treats an empty token as an error, and refreshes proactively at `refresh_interval_ms`; set `refresh_interval_ms = 0` to refresh only after an authentication retry. Don't combine `[model_providers.<id>.auth]` with `env_key`, `experimental_bearer_token`, or `requires_openai_auth`.
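
The helper itself can be any executable that follows those rules. A minimal sketch in shell (the token-file path and function name are hypothetical; real helpers typically call a cloud CLI to mint a short-lived token):

```shell
# Hypothetical credential helper: print exactly one bearer token on stdout.
# Diagnostics go to stderr so they never pollute the token Codex reads.
fetch_token() {
  token_file="${1:-$HOME/.cache/example-codex-token}"
  token=$(cat "$token_file" 2>/dev/null)
  if [ -z "$token" ]; then
    echo "no token available in $token_file" >&2
    return 1
  fi
  printf '%s\n' "$token"
}
```

Because Codex treats an empty token as an error, the sketch fails loudly with a nonzero exit instead of printing nothing.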

## OSS mode (local providers)

Codex can run against a local "open source" provider (for example, Ollama or LM Studio) when you pass `--oss`. If you pass `--oss` without specifying a provider, Codex uses `oss_provider` as the default.
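
As a sketch, assuming Ollama is installed locally (the `"ollama"` value is an assumption that mirrors the built-in provider ID of the same name):

```toml
# Hypothetical default: use Ollama whenever --oss is passed without a provider.
oss_provider = "ollama"
```

With this set, `codex --oss` needs no further flags.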
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000
```

To change the base URL for the built-in OpenAI provider, use `openai_base_url`; don't create `[model_providers.openai]`, because you can't override built-in provider IDs.

## ChatGPT customers using data residency

Projects created with [data residency](https://help.openai.com/en/articles/9903489-data-residency-and-inference-residency-for-chatgpt) enabled can create a model provider to update the `base_url` with the [correct prefix](https://platform.openai.com/docs/guides/your-data#which-models-and-features-are-eligible-for-data-residency).
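
A sketch of such a provider, assuming an EU-resident project (the provider ID `openai_eu` and the region prefix are illustrative; check your project's actual prefix):

```toml
model_provider = "openai_eu"

# Hypothetical data-residency provider; match the prefix to your project's region.
[model_providers.openai_eu]
name = "OpenAI (EU data residency)"
base_url = "https://eu.api.openai.com/v1"
wire_api = "responses"
# Assumption: reuse ChatGPT sign-in; use env_key instead if you authenticate with an API key.
requires_openai_auth = true
```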

Pick approval strictness (affects when Codex pauses) and sandbox level (affects file/network access).

For operational details to keep in mind while editing `config.toml`, see [Common sandbox and approval combinations](https://developers.openai.com/codex/agent-approvals-security#common-sandbox-and-approval-combinations), [Protected paths in writable roots](https://developers.openai.com/codex/agent-approvals-security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/agent-approvals-security#network-access).

You can also use a granular approval policy (`approval_policy = { granular = { ... } }`) to allow or auto-reject individual prompt categories. This is useful when you want normal interactive approvals for some cases but want others, such as `request_permissions` or skill-script prompts, to fail closed automatically.

```toml
approval_policy = "untrusted" # Other options: on-request, never, or { granular = { ... } }
sandbox_mode = "workspace-write"
allow_login_shell = false # Optional hardening: disallow login shells for shell tools

# Example granular approval policy:
# approval_policy = { granular = {
#   sandbox_approval = true,
#   rules = true,
#   mcp_elicitations = true,
#   request_permissions = false,
#   skill_approval = false
# } }

[sandbox_workspace_write]
exclude_tmpdir_env_var = false # Allow $TMPDIR
exclude_slash_tmp = false # Allow /tmp
network_access = false # Opt in to outbound network
```

Need the complete key list (including profile-scoped overrides and requirements constraints)? See [Configuration Reference](https://developers.openai.com/codex/config-reference) and [Managed configuration](https://developers.openai.com/codex/enterprise/managed-configuration).

In workspace-write mode, some environments keep `.git/` and `.codex/` read-only even when the rest of the workspace is writable. This is why
| `codex.tool.call` | counter | `tool`, `success` | Tool invocation count by tool name and success/failure. |
| `codex.tool.call.duration_ms` | histogram | `tool`, `success` | Tool execution duration in milliseconds by tool name and outcome. |

For more security and privacy guidance around telemetry, see [Security](https://developers.openai.com/codex/agent-approvals-security#monitoring-and-telemetry).

### Metrics
