config-advanced.md +17 −9
@@ -2,7 +2,7 @@
 
 Use these options when you need more control over providers, policies, and integrations. For a quick start, see [Config basics](https://developers.openai.com/codex/config-basic).
 
-For background on project guidance, reusable capabilities, custom slash commands, multi-agent workflows, and integrations, see [Customization](https://developers.openai.com/codex/concepts/customization). For configuration keys, see [Configuration Reference](https://developers.openai.com/codex/config-reference).
+For background on project guidance, reusable capabilities, custom slash commands, subagent workflows, and integrations, see [Customization](https://developers.openai.com/codex/concepts/customization). For configuration keys, see [Configuration Reference](https://developers.openai.com/codex/config-reference).
 
 ## Profiles
 
@@ -74,11 +73,10 @@
 
 For shared defaults, rules, and skills checked into repos or system paths, see [Team Config](https://developers.openai.com/codex/enterprise/admin-setup#team-config).
 
-If you just need to point the built-in OpenAI provider at an LLM proxy, router, or data-residency enabled project, set environment variable `OPENAI_BASE_URL` instead of defining a new provider. This overrides the default OpenAI endpoint without a `config.toml` change.
+If you just need to point the built-in OpenAI provider at an LLM proxy, router, or data-residency-enabled project, set `openai_base_url` in `config.toml` instead of defining a new provider. This changes the base URL for the built-in `openai` provider without requiring a separate `model_providers.<id>` entry.
 
 ```toml
-export OPENAI_BASE_URL="https://api.openai.com/v1"
-codex
+openai_base_url = "https://us.api.openai.com/v1"
 ```
 
 ## Project config files (`.codex/config.toml`)
@@ -87,11 +86,11 @@
 
 For security, Codex loads project-scoped config files only when the project is trusted. If the project is untrusted, Codex ignores `.codex/config.toml` files in the project.
 
-Relative paths inside a project config (for example, `experimental_instructions_file`) are resolved relative to the `.codex/` folder that contains the `config.toml`.
+Relative paths inside a project config (for example, `model_instructions_file`) are resolved relative to the `.codex/` folder that contains the `config.toml`.
 
 ## Agent roles (`[agents]` in `config.toml`)
 
-For multi-agent role configuration (`[agents]` in `config.toml`), see [Multi-agents](https://developers.openai.com/codex/multi-agent).
+For subagent role configuration (`[agents]` in `config.toml`), see [Subagents](https://developers.openai.com/codex/subagents).
 
 ## Project root detection
 
@@ -190,15 +189,24 @@
 
 Pick approval strictness (affects when Codex pauses) and sandbox level (affects file/network access).
 
-For operational details that are easy to miss while editing `config.toml`, see [Common sandbox and approval combinations](https://developers.openai.com/codex/agent-approvals-security#common-sandbox-and-approval-combinations), [Protected paths in writable roots](https://developers.openai.com/codex/agent-approvals-security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/agent-approvals-security#network-access).
+For operational details to keep in mind while editing `config.toml`, see [Common sandbox and approval combinations](https://developers.openai.com/codex/agent-approvals-security#common-sandbox-and-approval-combinations), [Protected paths in writable roots](https://developers.openai.com/codex/agent-approvals-security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/agent-approvals-security#network-access).
 
-You can also use a granular reject policy (`approval_policy = { reject = { ... } }`) to auto-reject only selected prompt categories, such as sandbox approvals, `execpolicy` rule prompts, or MCP input requests (`mcp_elicitations`), while keeping other prompts interactive.
+You can also use a granular approval policy (`approval_policy = { granular = { ... } }`) to allow or auto-reject individual prompt categories. This is useful when you want normal interactive approvals for some cases but want others, such as `request_permissions` or skill-script prompts, to fail closed automatically.
 
 ```
-approval_policy = "untrusted" # Other options: on-request, never, or { reject = { ... } }
+approval_policy = "untrusted" # Other options: on-request, never, or { granular = { ... } }
 sandbox_mode = "workspace-write"
 allow_login_shell = false # Optional hardening: disallow login shells for shell tools
 
+# Example granular approval policy:
+# approval_policy = { granular = {
+#   sandbox_approval = true,
+#   rules = true,
+#   mcp_elicitations = true,
+#   request_permissions = false,
+#   skill_approval = false
+# } }
+
 [sandbox_workspace_write]
 exclude_tmpdir_env_var = false # Allow $TMPDIR
 exclude_slash_tmp = false # Allow /tmp
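
The hunks above change several independent settings. As a reviewer's sketch, the fragment below collects the post-change values into one `config.toml` excerpt, using only keys that appear in the added lines (`openai_base_url`, the `granular` approval categories, `sandbox_mode`, `allow_login_shell`). The table-header syntax for the granular policy and the meaning of `true`/`false` (allow vs. auto-reject, inferred from the prose in the last hunk) are assumptions; TOML 1.0 inline tables cannot span multiple lines, so the commented multi-line form in the diff is written here as a sub-table.

```toml
# Sketch of the resulting config.toml after this change; key semantics
# are inferred from this diff, not confirmed against the Configuration
# Reference.

# Route the built-in OpenAI provider to a data-residency endpoint
# (example URL from the diff).
openai_base_url = "https://us.api.openai.com/v1"

sandbox_mode = "workspace-write"
allow_login_shell = false  # disallow login shells for shell tools

# Table-header equivalent of `approval_policy = { granular = { ... } }`,
# since TOML inline tables must stay on one line.
[approval_policy.granular]
sandbox_approval = true      # assumed: keep sandbox-escalation prompts interactive
rules = true                 # assumed: keep execpolicy rule prompts interactive
mcp_elicitations = true      # assumed: keep MCP input requests interactive
request_permissions = false  # assumed: auto-reject permission requests
skill_approval = false       # assumed: auto-reject skill-script prompts
```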