config-advanced.md +21 −4

## Custom model providers

A model provider defines how Codex connects to a model (base URL, wire API, authentication, and optional HTTP headers). Custom providers can't reuse the reserved built-in provider IDs: `openai`, `ollama`, and `lmstudio`.

Define additional providers and point `model_provider` at them:

```toml
# …
base_url = "http://proxy.example.com"
env_key = "OPENAI_API_KEY"

[model_providers.local_ollama]
name = "Ollama"
base_url = "http://localhost:11434/v1"

# …
env_http_headers = { "X-Example-Features" = "EXAMPLE_FEATURES" }
```

Use command-backed authentication when a provider needs Codex to fetch bearer tokens from an external credential helper:

```toml
[model_providers.proxy]
name = "OpenAI using LLM proxy"
base_url = "https://proxy.example.com/v1"
wire_api = "responses"

[model_providers.proxy.auth]
command = "/usr/local/bin/fetch-codex-token"
args = ["--audience", "codex"]
timeout_ms = 5000
refresh_interval_ms = 300000
```

The auth command receives no input on `stdin` and must print the token to `stdout`. Codex trims surrounding whitespace, treats an empty token as an error, and refreshes proactively at `refresh_interval_ms`; set `refresh_interval_ms = 0` to refresh only after an authentication retry. Don't combine `[model_providers.<id>.auth]` with `env_key`, `experimental_bearer_token`, or `requires_openai_auth`.
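For instance, to skip proactive refresh and fetch a token only after an authentication retry, a variant of the block above (same illustrative helper path) could set the interval to zero:

```toml
[model_providers.proxy.auth]
command = "/usr/local/bin/fetch-codex-token"  # illustrative helper; must print the token to stdout
args = ["--audience", "codex"]
timeout_ms = 5000
refresh_interval_ms = 0  # disables proactive refresh; re-runs only after an auth retry
```
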
## OSS mode (local providers)

Codex can run against a local "open source" provider (for example, Ollama or LM Studio) when you pass `--oss`. If you pass `--oss` without specifying a provider, Codex uses `oss_provider` as the default.
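
A minimal sketch (assuming the built-in `lmstudio` provider ID from the reserved list above):

```toml
# Used when `codex --oss` is run without an explicit provider.
oss_provider = "lmstudio"
```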

For example, an Azure provider entry needs an `api-version` query parameter and `wire_api = "responses"`; retry and streaming behavior can also be tuned per provider:

```toml
# …
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
request_max_retries = 4
stream_max_retries = 10
stream_idle_timeout_ms = 300000
```
To change the base URL for the built-in OpenAI provider, use `openai_base_url`; don't create `[model_providers.openai]`, because you can't override built-in provider IDs.
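
As a sketch, assuming `openai_base_url` is a top-level `config.toml` key (reusing the placeholder proxy host from the examples above):

```toml
# Redirects the built-in `openai` provider without redefining it.
openai_base_url = "https://proxy.example.com/v1"
```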
## ChatGPT customers using data residency

Projects created with [data residency](https://help.openai.com/en/articles/9903489-data-residency-and-inference-residency-for-chatgpt) enabled can define a custom model provider that updates `base_url` with the [correct regional prefix](https://platform.openai.com/docs/guides/your-data#which-models-and-features-are-eligible-for-data-residency).
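
A sketch for an EU-resident project, assuming `eu` is the right prefix for your region (confirm in the linked guide) and that `requires_openai_auth` routes requests through your ChatGPT login:

```toml
model_provider = "openai-eu"

[model_providers.openai-eu]
name = "OpenAI (EU data residency)"
base_url = "https://eu.api.openai.com/v1"  # assumed regional prefix; confirm yours in the linked docs
wire_api = "responses"
requires_openai_auth = true
```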