SpyBara
4 Mar 2026, 06:20
6 Mar 2026, 00:38

app.md +4 −2

Details

ChatGPT Plus, Pro, Business, Edu, and Enterprise plans include Codex. Learn more about [what's included](https://developers.openai.com/codex/pricing).

+ ![Codex app for Windows showing a project sidebar, active thread, and review pane](/images/codex/windows/codex-windows-light.webp)

![Codex app window with a project sidebar, active thread, and review pane](/images/codex/app/app-screenshot-light.webp)

## Getting started

1. Download and install the Codex app

− The Codex app is currently only available for macOS.
+ Download the Codex app for Windows or macOS.

[Download for macOS](https://persistent.oaistatic.com/codex-app-prod/Codex.dmg)

− [Get notified for Windows and Linux](https://openai.com/form/codex-app/)
+ [Get notified for Linux](https://openai.com/form/codex-app/)

2. Open Codex and sign in

Once you downloaded and installed the Codex app, open it and sign in with your ChatGPT account or an OpenAI API key.

app-server.md +14 −10

Details

- `externalAgentConfig/import` - apply selected external-agent migration items by passing explicit `migrationItems` with `cwd` (`null` for home).
- `config/value/write` - write a single configuration key/value to the user's `config.toml` on disk.
- `config/batchWrite` - apply configuration edits atomically to the user's `config.toml` on disk.
− - `configRequirements/read` - fetch requirements from `requirements.toml` and/or MDM, including allow-lists and residency requirements (or `null` if you haven't set any up).
+ - `configRequirements/read` - fetch requirements from `requirements.toml` and/or MDM, including allow-lists, pinned `featureRequirements`, and residency/network requirements (or `null` if you haven't set any up).
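As a sketch, a `config/value/write` request follows the same JSON-RPC shape as the other methods on this page. The `params` field names used here (`key`, `value`) are assumptions for illustration and are not confirmed by this page:

```
{ "method": "config/value/write", "id": 7, "params": { "key": "model", "value": "gpt-5.4" } }
```

A `config/batchWrite` call would presumably carry an array of such edits so they can be applied atomically; check the app-server schema for the exact parameter names before relying on this shape.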

## Models


{ "method": "model/list", "id": 6, "params": { "limit": 20, "includeHidden": false } }
{ "id": 6, "result": {
  "data": [{
−   "id": "gpt-5.2-codex",
+   "id": "gpt-5.4",
−   "model": "gpt-5.2-codex",
+   "model": "gpt-5.4",
−   "upgrade": "gpt-5.3-codex",
−   "displayName": "GPT-5.2 Codex",
+   "displayName": "GPT-5.4",
    "hidden": false,
    "defaultReasoningEffort": "medium",
−   "reasoningEffort": [{
+   "supportedReasoningEfforts": [{
−     "effort": "low",
+     "reasoningEffort": "low",
      "description": "Lower latency"
    }],
    "inputModalities": ["text", "image"],

Each model entry can include:

− - `reasoningEffort` - supported effort options for the model.
+ - `supportedReasoningEfforts` - supported effort options for the model.
- `defaultReasoningEffort` - suggested default effort for clients.
- `upgrade` - optional recommended upgrade model id for migration prompts in clients.
+ - `upgradeInfo` - optional upgrade metadata for migration prompts in clients.
- `hidden` - whether the model is hidden from the default picker list.
- `inputModalities` - supported input types for the model (for example `text`, `image`).
- `supportsPersonality` - whether the model supports personality-specific instructions such as `/personality`.


  "requirements": {
    "allowedApprovalPolicies": ["onRequest", "unlessTrusted"],
    "allowedSandboxModes": ["readOnly", "workspaceWrite"],
+   "featureRequirements": {
+     "personality": true,
+     "unified_exec": false
+   },
    "network": {
      "enabled": true,
      "allowedDomains": ["api.openai.com"],
…
} }
```

− `result.requirements` is `null` when no requirements are configured. When present, the optional `network` object carries managed proxy constraints (domain rules, proxy settings, and unix-socket policy).
+ `result.requirements` is `null` when no requirements are configured. See the docs on [`requirements.toml`](https://developers.openai.com/codex/config-reference#requirementstoml) for details on supported keys and values.

### Windows sandbox setup (`windowsSandbox/setupStart`)


When `networkApprovalContext` is present, the prompt is for managed network access (not a general shell-command approval). The current v2 schema exposes the target `host` and `protocol`; clients should render a network-specific prompt and not rely on `command` being a user-meaningful shell command preview.

− Codex deduplicates concurrent network approval prompts by destination (`host`, protocol, and port). The app-server may therefore send one prompt that unblocks multiple queued requests to the same destination, while different ports on the same host are treated separately.
+ Codex groups concurrent network approval prompts by destination (`host`, protocol, and port). The app-server may therefore send one prompt that unblocks multiple queued requests to the same destination, while different ports on the same host are treated separately.

### File change approvals

app/features.md +10 −0

Details

![Integrated terminal drawer open beneath a Codex thread](/images/codex/app/integrated-terminal-light.webp)

+ ## Native Windows sandbox
+
+ On Windows, Codex can run natively in PowerShell with a native Windows sandbox instead of requiring WSL or a virtual machine. This lets you stay in Windows-native workflows while keeping bounded permissions in place.
+
+ [Learn more about Windows setup and sandboxing](https://developers.openai.com/codex/app/windows).
+
+ ![Codex app Windows sandbox setup prompt above the message composer](/images/codex/windows/windows-sandbox-setup.webp)

## Voice dictation

Use your voice to prompt Codex. Hold <kbd>Ctrl</kbd>+<kbd>M</kbd> while the composer is visible and start talking. Your voice will be transcribed. Edit the transcribed prompt or hit send to have Codex start work.

Details

### Only some threads appear in the sidebar

The sidebar allows filtering of threads depending on the state of a project.
− If you're missing threads, check whether you have any filters applied by clicking the filter icon next to the **Threads** label.
+ If you're missing threads, click the filter icon next to the **Threads** label and switch to Chronological. If you still don't see the thread, open [Settings](codex://settings) and check the archived chats or archived threads section.

### Code doesn't run on a worktree

app/windows.md +170 −35

Details

− # Codex app
−
− The Codex app is a focused desktop experience for working on Codex threads in parallel, with built-in worktree support, automations, and Git functionality.
−
− ChatGPT Plus, Pro, Business, Edu, and Enterprise plans include Codex. Learn more about [what’s included](https://developers.openai.com/codex/pricing).
−
− ![Codex app window with a project sidebar, active thread, and review pane](/images/codex/app/app-screenshot-light.webp)
−
− ## Getting started
−
− The Codex app is available on macOS (Apple Silicon).
−
− 1. Download and install the Codex app
−
−    The Codex app is currently only available for macOS.
−
−    [Download for macOS](https://persistent.oaistatic.com/codex-app-prod/Codex.dmg)
−
−    [Get notified for Windows and Linux](https://openai.com/form/codex-app/)
−
− 2. Open Codex and sign in
−
−    Once you downloaded and installed the Codex app, open it and sign in with your ChatGPT account or an OpenAI API key.
−
−    If you sign in with an OpenAI API key, some functionality such as [cloud threads](https://developers.openai.com/codex/prompting#threads) might not be available.
−
− 3. Select a project
−
−    Choose a project folder that you want Codex to work in.
−
−    If you used the Codex app, CLI, or IDE Extension before you’ll see past projects that you worked on.
−
− 4. Send your first message
−
−    After choosing the project, make sure **Local** is selected to have Codex work on your machine and send your first message to Codex.
−
−    You can ask Codex anything about the project or your computer in general. Here are some examples:
−
−    - Tell me about this project
−    - Build a classic Snake game in this repo.
−    - Find and fix bugs in my codebase with minimal, high-confidence changes.
−
−    If you need more inspiration, check out the [explore section](https://developers.openai.com/codex/explore).
−
− ## Work with the Codex app
−
− [### Multitask across projects Run multiple tasks in parallel and switch quickly between them.](https://developers.openai.com/codex/app/features#multitask-across-projects)
− [### Built-in Git tools Review diffs, comment inline, stage or revert chunks, and commit without leaving the app.](https://developers.openai.com/codex/app/features#built-in-git-tools)
− [### Worktrees for parallel tasks Isolate changes of multiple Codex threads using built-in Git worktree support.](https://developers.openai.com/codex/app/worktrees)
− [### Skills support Give your Codex agent additional capabilities and reuse skills across App, CLI, and IDE Extension.](https://developers.openai.com/codex/app/features#skills-support)
− [### Automations Pair skills with automations to automate recurring tasks in the background. Codex adds findings to the inbox, or automatically archives runs if there’s nothing to report.](https://developers.openai.com/codex/app/automations)
− [### Built-in terminal Open a terminal per thread to test your changes, run dev servers, scripts, and custom commands.](https://developers.openai.com/codex/app/features#integrated-terminal)
− [### Local environments Define worktree setup scripts and common project actions for easy access.](https://developers.openai.com/codex/app/local-environments)
− [### Sync with the IDE extension Share Auto Context and active threads across app and IDE sessions.](https://developers.openai.com/codex/app/features#sync-with-the-ide-extension)
− [### MCP support Connect your Codex agent to additional services using MCP.](https://developers.openai.com/codex/app/features#mcp-support)
−
− Need help? Visit the [troubleshooting guide](https://developers.openai.com/codex/app/troubleshooting).

+ # Windows
+
+ The [Codex app for Windows](https://apps.microsoft.com/detail/9plm9xgg6vks?hl=en-US&gl=US) gives you one interface for working across projects, running parallel agent threads, and reviewing results. It runs natively on Windows using PowerShell and the [Windows sandbox](https://developers.openai.com/codex/windows#windows-sandbox), or you can configure it to run in [Windows Subsystem for Linux (WSL)](#windows-subsystem-for-linux-wsl).
+
+ ![Codex app for Windows showing a project sidebar, active thread, and review pane](/images/codex/windows/codex-windows-light.webp)
+
+ ## Download and update the Codex app
+
+ Download the Codex app from the [Microsoft Store](https://apps.microsoft.com/detail/9plm9xgg6vks?hl=en-US&gl=US).
+
+ Then follow the [quickstart](https://developers.openai.com/codex/quickstart?setup=app) to get started.
+
+ To update the app, open the Microsoft Store, go to **Downloads**, and click **Check for updates**. The Store installs the latest version afterward.
+
+ For enterprises, administrators can deploy the app with Microsoft Store app distribution through enterprise management tools.
+
+ If you prefer a command-line install path, or need an alternative to opening the Microsoft Store UI, run:

```powershell
winget install Codex -s msstore
```

+ ## Customize for your dev setup
+
+ ### Preferred editor
+
+ Choose a default app for **Open**, such as Visual Studio, VS Code, or another editor. You can override that choice per project. If you already picked a different app from the **Open** menu for a project, that project-specific choice takes precedence.
+
+ ![Codex app settings showing the default Open In app on Windows](/images/codex/windows/open-in-windows-light.webp)
+
+ ### Integrated terminal
+
+ You can also choose the default integrated terminal. Depending on what you have installed, options include:
+
+ - PowerShell
+ - Command Prompt
+ - Git Bash
+ - WSL
+
+ This change applies only to new terminal sessions. If you already have an integrated terminal open, restart the app or start a new thread before expecting the new default terminal to appear.
+
+ ![Codex app settings showing the integrated terminal selection on Windows](/images/codex/windows/integrated-shell-light.webp)
+
+ ## Windows Subsystem for Linux (WSL)
+
+ By default, the Codex app uses the Windows-native agent. That means the agent runs commands in PowerShell. The app can still work with projects that live in Windows Subsystem for Linux (WSL) by using the `wsl` CLI when needed.
+
+ If you want to add a project from the WSL filesystem, click **Add new project** or press <kbd>Ctrl</kbd>+<kbd>O</kbd>, then type `\\wsl$\` into the File Explorer window. From there, choose your Linux distribution and the folder you want to open.
+
+ If you plan to keep using the Windows-native agent, prefer storing projects on your Windows filesystem and accessing them from WSL through `/mnt/<drive>/...`. This setup is more reliable than opening projects directly from the WSL filesystem.
+
+ If you want the agent itself to run in WSL, open **[Settings](codex://settings)**, switch the agent from Windows native to WSL, and **restart the app**. The change doesn't take effect until you restart. Your projects should remain in place after restart.
+
+ ![Codex app settings showing the agent selector with Windows native and WSL options](/images/codex/windows/wsl-select-light.webp)
+
+ You configure the integrated terminal independently from the agent. See [Customize for your dev setup](#customize-for-your-dev-setup) for the terminal options. You can keep the agent in WSL and still use PowerShell in the terminal, or use WSL for both, depending on your workflow.
+
+ ## Useful developer tools
+
+ Codex works best when a few common developer tools are already installed:
+
+ - **Git**: Powers the review panel in the Codex app and lets you inspect or revert changes.
+ - **Node.js**: A common tool that the agent uses to perform tasks more efficiently.
+ - **Python**: A common tool that the agent uses to perform tasks more efficiently.
+ - **.NET SDK**: Useful when you want to build native Windows apps.
+ - **GitHub CLI**: Powers GitHub-specific functionality in the Codex app.
+
+ Install them with the default Windows package manager `winget` by pasting this into the [integrated terminal](https://developers.openai.com/codex/app/features#integrated-terminal) or asking Codex to install them:

```powershell
winget install --id Git.Git
winget install --id OpenJS.NodeJS.LTS
winget install --id Python.Python.3.14
winget install --id Microsoft.DotNet.SDK.10
winget install --id GitHub.cli
```

+ After installing GitHub CLI, run `gh auth login` to enable GitHub features in the app.
+
+ If you need a different Python or .NET version, change the package IDs to the version you want.
+
+ ## Troubleshooting and FAQ
+
+ ### Run commands with elevated permissions
+
+ If you need Codex to run commands with elevated permissions, start the Codex app itself as an administrator. After installation, open the Start menu, find Codex, and choose Run as administrator. The Codex agent inherits that permission level.
+
+ ### PowerShell execution policy blocks commands
+
+ If you have never used tools such as Node.js or `npm` in PowerShell before, the Codex agent or integrated terminal may hit execution policy errors.
+
+ This can also happen if Codex creates PowerShell scripts for you. In that case, you may need a less restrictive execution policy before PowerShell will run them.
+
+ An error may look something like this:

```text
npm.ps1 cannot be loaded because running scripts is disabled on this system.
```

+ A common fix is to set the execution policy to `RemoteSigned`:

```powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned
```

+ For details and other options, check Microsoft's [execution policy guide](https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies) before changing the policy.
+
+ ### Local environment scripts on Windows
+
+ If your [local environment](https://developers.openai.com/codex/app/local-environments) uses cross-platform commands such as `npm` scripts, you can keep one shared setup script or set of actions for every platform.
+
+ If you need Windows-specific behavior, create Windows-specific setup scripts or Windows-specific actions.
+
+ Actions run in the environment used by your integrated terminal. See [Customize for your dev setup](#customize-for-your-dev-setup).
+
+ Local setup scripts run in the agent environment: WSL if the agent uses WSL, and PowerShell otherwise.
+
+ ### Share config, auth, and sessions with WSL
+
+ The Windows app uses the same Codex home directory as native Codex on Windows: `%USERPROFILE%\.codex`.
+
+ If you also run the Codex CLI inside WSL, the CLI uses the Linux home directory by default, so it does not automatically share configuration, cached auth, or session history with the Windows app.
+
+ To share them, use one of these approaches:
+
+ - Sync WSL `~/.codex` with `%USERPROFILE%\.codex` on your file system.
+ - Point WSL at the Windows Codex home directory by setting `CODEX_HOME`:

```bash
export CODEX_HOME=/mnt/c/Users/<windows-user>/.codex
```

+ If you want that setting in every shell, add it to your WSL shell profile, such as `~/.bashrc` or `~/.zshrc`.
+
+ ### Git features are unavailable
+
+ If you don't have Git installed natively on Windows, the app can't use some features. Install it with `winget install Git.Git` from PowerShell or `cmd.exe`.
+
+ ### Git isn't detected for projects opened from `\\wsl$`
+
+ For now, if you want to use the Windows-native agent with a project that is also accessible from WSL, the most reliable workaround is to store the project on the native Windows drive and access it in WSL through `/mnt/<drive>/...`.
+
+ ### Cmder is not listed in the open dialog
+
+ If Cmder is installed but doesn’t show in Codex’s open dialog, add it to the Windows Start Menu: right-click Cmder and choose **Add to Start**, then restart Codex or reboot.

cli.md +1 −1

Details

Run `codex` to start an interactive terminal UI (TUI) session.](https://developers.openai.com/codex/cli/features#running-in-interactive-mode)[### Control model and reasoning

− Use `/model` to switch between GPT-5.3-Codex and other available models, or adjust reasoning levels.](https://developers.openai.com/codex/cli/features#models-reasoning)[### Image inputs
+ Use `/model` to switch between GPT-5.4, GPT-5.3-Codex, and other available models, or adjust reasoning levels.](https://developers.openai.com/codex/cli/features#models-reasoning)[### Image inputs

Attach screenshots or design specs so Codex reads them alongside your prompt.](https://developers.openai.com/codex/cli/features#image-inputs)[### Run local code review

cli/features.md +3 −3

Details

## Models and reasoning

− For most coding tasks in Codex, `gpt-5.3-codex` is the go-to model. It’s available for ChatGPT-authenticated Codex sessions in the Codex app, CLI, IDE extension, and Codex Cloud. For extra fast tasks, ChatGPT Pro subscribers have access to the GPT-5.3-Codex-Spark model in research preview.
+ For most tasks in Codex, `gpt-5.4` is the recommended model. It brings the industry-leading coding capabilities of `gpt-5.3-codex` to OpenAI’s flagship frontier model, combining frontier coding performance with stronger reasoning, native computer use, and broader professional workflows. For extra fast tasks, ChatGPT Pro subscribers have access to the GPT-5.3-Codex-Spark model in research preview.

− Switch models mid-session with the /model command, or specify one when launching the CLI.
+ Switch models mid-session with the `/model` command, or specify one when launching the CLI.

```bash
− codex --model gpt-5.3-codex
+ codex --model gpt-5.4
```

[Learn more about the models available in Codex](https://developers.openai.com/codex/models).

Details

```shell
# Dedicated flag
− codex --model gpt-5.2
+ codex --model gpt-5.4

# Generic key/value override (value is TOML, not JSON)
− codex --config model='"gpt-5.2"'
+ codex --config model='"gpt-5.4"'
codex --config sandbox_workspace_write.network_access=true
codex --config 'shell_environment_policy.include_only=["PATH","HOME"]'
```

…

Pick approval strictness (affects when Codex pauses) and sandbox level (affects file/network access).

− For operational details that are easy to miss while editing `config.toml`, see [Common sandbox and approval combinations](https://developers.openai.com/codex/security#common-sandbox-and-approval-combinations), [Protected paths in writable roots](https://developers.openai.com/codex/security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/security#network-access).
+ For operational details people often miss while editing `config.toml`, see [Common sandbox and approval combinations](https://developers.openai.com/codex/security#common-sandbox-and-approval-combinations), [Protected paths in writable roots](https://developers.openai.com/codex/security#protected-paths-in-writable-roots), and [Network access](https://developers.openai.com/codex/security#network-access).

− You can also use a granular reject policy (`approval_policy = { reject = { ... } }`) to auto-reject only selected prompt categories (sandbox approvals, execpolicy rule prompts, or MCP elicitations) while keeping other prompts interactive.
+ You can also use a granular reject policy (`approval_policy = { reject = { ... } }`) to auto-reject only selected prompt categories, such as sandbox approvals, `execpolicy` rule prompts, or MCP input requests (`mcp_elicitations`), while keeping other prompts interactive.

```
approval_policy = "untrusted" # Other options: on-request, never, or { reject = { ... } }
```
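A granular reject policy might look like the following sketch. The category key names inside `reject` are assumptions based on the prompt categories named above; check the config reference for the canonical spellings before using them:

```toml
# Hypothetical sketch: auto-reject sandbox approvals and execpolicy prompts,
# keep MCP elicitations interactive. Key names are assumptions, not confirmed here.
approval_policy = { reject = { sandbox_approvals = true, execpolicy_prompts = true, mcp_elicitations = false } }
```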

config-basic.md +2 −2

Details

Choose the model Codex uses by default in the CLI and IDE.

```toml
− model = "gpt-5.2"
+ model = "gpt-5.4"
```

#### Approval prompts

…

#### Windows sandbox mode

− When running Codex natively on Windows, set the native sandbox mode to `elevated` in the `windows` table. Use `unelevated` only if you do not have administrator permissions or if elevated setup fails.
+ When running Codex natively on Windows, set the native sandbox mode to `elevated` in the `windows` table. Use `unelevated` only if you don't have administrator permissions or if elevated setup fails.

```toml
[windows]
```
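As a sketch, the `[windows]` table described above would carry the native sandbox mode. The key name `sandbox_mode` is an assumption based on the surrounding prose; the truncated snippet above shows the real key in the full document:

```toml
[windows]
# Hypothetical key name; "elevated" and "unelevated" are the documented values.
sandbox_mode = "elevated"
```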

For ChatGPT Business and Enterprise users, Codex can also apply cloud-fetched requirements. See the security page for precedence details.

Use `[features]` in `requirements.toml` to pin feature flags by the same canonical keys that `config.toml` uses. Omitted keys remain unconstrained.

| Key | Type / Values | Details |
| --- | --- | --- |
| `allowed_approval_policies` | `array<string>` | Allowed values for `approval_policy` (for example `untrusted`, `on-request`, `never`, and `reject`). |
| `allowed_sandbox_modes` | `array<string>` | Allowed values for `sandbox_mode`. |
| `allowed_web_search_modes` | `array<string>` | Allowed values for `web_search` (`disabled`, `cached`, `live`). `disabled` is always allowed; an empty list effectively allows only `disabled`. |
| `features` | `table` | Pinned feature values keyed by the canonical names from `config.toml`'s `[features]` table. |
| `features.<name>` | `boolean` | Require a specific canonical feature key to stay enabled or disabled. |
| `mcp_servers` | `table` | Allowlist of MCP servers that may be enabled. Both the server name (`<id>`) and its identity must match for the MCP server to be enabled. Any configured MCP server not in the allowlist (or with a mismatched identity) is disabled. |
| `mcp_servers.<id>.identity` | `table` | Identity rule for a single MCP server. Set either `command` (stdio) or `url` (streamable HTTP). |
| `mcp_servers.<id>.identity.command` | `string` | Allow an MCP stdio server when its `mcp_servers.<id>.command` matches this command. |
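As an illustration of the keys above, a `requirements.toml` sketch that pins approvals, sandbox and web search modes, one feature flag, and a single stdio MCP server (the `docs` server id and `docs-mcp` command are hypothetical):

```toml
allowed_approval_policies = ["untrusted", "on-request"]
allowed_sandbox_modes = ["read-only", "workspace-write"]
allowed_web_search_modes = ["cached"]

[features]
personality = true

# Only the server named "docs" may be enabled, and only when its
# configured command matches "docs-mcp". All other MCP servers are disabled.
[mcp_servers.docs.identity]
command = "docs-mcp"
```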



config-sample.md +17 −12

# Sample Configuration

Use this example configuration as a starting point. It includes most keys Codex reads from `config.toml`, along with default behaviors, recommended values where helpful, and short notes.

For explanations and guidance, see:

```toml
# Codex example configuration (config.toml)
#
# This file lists all keys Codex reads from config.toml, along with default
# behaviors, recommended examples, and concise explanations. Adjust as needed.
#
# Notes
# - Root keys must appear before tables in TOML.

################################################################################
# Core Model Selection
################################################################################

# Primary model used by Codex. Recommended example for most users: "gpt-5.4".
model = "gpt-5.4"

# Default communication style for supported models. Default: "friendly".
# Allowed values: none | friendly | pragmatic
# personality = "friendly"

# Optional model override for /review. Default: unset (uses current session model).
# review_model = "gpt-5.4"

# Provider id selected from [model_providers]. Default: "openai".
model_provider = "openai"

# Uncomment to force values.
# model_context_window = 128000 # tokens; default: auto for model
# model_auto_compact_token_limit = 0 # tokens; unset uses model defaults
# tool_output_token_limit = 10000 # tokens stored per tool output
# model_catalog_json = "/absolute/path/to/models.json" # optional startup-only model catalog override
# background_terminal_max_timeout = 300000 # ms; max empty write_stdin poll window (default 5m)
# log_dir = "/absolute/path/to/codex-logs" # directory for Codex logs; default: "$CODEX_HOME/log"

################################################################################
# Reasoning & Verbosity (Responses API capable models)
################################################################################

# Reasoning effort: minimal | low | medium | high | xhigh (default: medium; `xhigh` availability is model-dependent)
model_reasoning_effort = "medium"

# Reasoning summary: auto | concise | detailed | none (default: auto)

# - danger-full-access (no sandbox; extremely risky)
sandbox_mode = "read-only"

[windows]
# Native Windows sandbox mode (Windows only): unelevated | elevated
sandbox = "unelevated"

################################################################################
# Authentication & Login
################################################################################

[profiles]

# [profiles.default]
# model = "gpt-5.4"
# model_provider = "openai"
# approval_policy = "on-request"
# sandbox_mode = "read-only"

[windows]

# Native Windows sandbox mode (Windows only). The example below uses the
# recommended elevated mode.
sandbox = "elevated"
```

Requirements constrain security-sensitive settings (approval policy, sandbox mode, web search mode, and optionally which MCP servers can be enabled). When resolving configuration (for example from `config.toml`, profiles, or CLI config overrides), if a value conflicts with an enforced requirement, Codex falls back to a requirements-compatible value and notifies the user. If an `mcp_servers` allowlist is configured, Codex enables an MCP server only when both its name and identity match an approved entry; otherwise, Codex disables it.

Requirements can also constrain [feature flags](https://developers.openai.com/codex/config-basic/#feature-flags) via the `[features]` table in `requirements.toml`. Features are generally not security-sensitive, but enterprises can pin their values if desired. Omitted keys remain unconstrained.

For the exact key list, see the [`requirements.toml` section in Configuration Reference](https://developers.openai.com/codex/config-reference#requirementstoml).

### Locations and precedence

`allowed_web_search_modes = []` effectively allows only `"disabled"`.
For example, `allowed_web_search_modes = ["cached"]` prevents live web search even in `danger-full-access` sessions.

You can also pin [feature flags](https://developers.openai.com/codex/config-basic/#feature-flags):

```toml
[features]
personality = true
unified_exec = false
```

Use the canonical feature keys from `config.toml`'s `[features]` table. Codex normalizes the effective feature set to satisfy these pins and rejects conflicting writes to `config.toml` or profile-scoped feature settings.

### Enforce command rules from requirements

Admins can also enforce restrictive command rules from `requirements.toml`

- `config_toml_base64` (managed defaults)
- `requirements_toml_base64` (requirements)

Codex parses these "managed preferences" payloads as TOML. For managed defaults (`config_toml_base64`), managed preferences have the highest precedence. For requirements (`requirements_toml_base64`), precedence follows the cloud-managed requirements order described above. The same requirements-side `[features]` table works in `requirements_toml_base64`; use canonical feature keys there as well.
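For example, the base64 payload for `requirements_toml_base64` can be produced like this (a sketch with illustrative file contents; where the payload goes depends on your MDM tooling):

```shell
# Sketch: encode a requirements.toml for the requirements_toml_base64 payload.
# The requirements below are illustrative, not recommendations.
cat > requirements.toml <<'EOF'
allowed_sandbox_modes = ["read-only"]

[features]
personality = true
EOF

# Strip newlines so the payload is a single base64 string.
payload=$(base64 < requirements.toml | tr -d '\n')
echo "$payload"
```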

### MDM setup workflow

models.md +32 −20

## Recommended models

![gpt-5.4](/images/api/models/gpt-5.4.jpg)

gpt-5.4

Flagship frontier model for professional work that brings the industry-leading coding capabilities of GPT-5.3-Codex together with stronger reasoning, tool use, and agentic workflows.

`codex -m gpt-5.4`

![gpt-5.3-codex](/images/codex/codex-wallpaper-1.webp)

gpt-5.3-codex

Industry-leading coding model for complex software engineering. Its coding capabilities now also power GPT-5.4.

`codex -m gpt-5.3-codex`

![gpt-5.3-codex-spark](/images/codex/codex-wallpaper-2.webp)

gpt-5.3-codex-spark

Text-only research preview model optimized for near-instant, real-time coding iteration. Available to ChatGPT Pro users.

`codex -m gpt-5.3-codex-spark`

For most tasks in Codex, start with `gpt-5.4`. It combines strong coding, reasoning, native computer use, and broader professional workflows in one model. The `gpt-5.3-codex-spark` model is available in research preview for ChatGPT Pro subscribers and is optimized for near-instant, real-time coding iteration.

## Alternative models

![gpt-5.2-codex](/images/codex/gpt-5.2-codex.png)

gpt-5.2-codex

Advanced coding model for real-world engineering. Succeeded by GPT-5.3-Codex.

`codex -m gpt-5.2-codex`

![gpt-5.2](/images/api/models/gpt-5.2.jpg)

gpt-5.2

Previous general-purpose model for coding and agentic tasks across industries and domains. Succeeded by GPT-5.4.

`codex -m gpt-5.2`

The Codex CLI and IDE extension use the same `config.toml` [configuration file](https://developers.openai.com/codex/config-basic). To specify a model, add a `model` entry to your configuration file. If you don't specify a model, the Codex app, CLI, or IDE extension defaults to a recommended model.

```toml
model = "gpt-5.4"
```

### Choosing a different local model temporarily

To start a new Codex CLI thread with a specific model, or to specify the model for `codex exec`, use the `--model`/`-m` flag:

```bash
codex -m gpt-5.4
```

### Choosing your model for cloud tasks

quickstart.md +2 −2

1. Download and install the Codex app

   Download the Codex app for Windows or macOS.

   [Download for macOS](https://persistent.oaistatic.com/codex-app-prod/Codex.dmg)

   [Get notified for Linux](https://openai.com/form/codex-app/)

2. Open Codex and sign in

   Once you've downloaded and installed the Codex app, open it and sign in with your ChatGPT account or an OpenAI API key.

skills.md +2 −2

## Install skills

To install skills beyond the built-ins, use `$skill-installer`. For example, to install the `$linear` skill:

```bash
$skill-installer linear
```

You can also prompt the installer to download skills from other repositories. Codex detects newly installed skills automatically; if one doesn’t appear, restart Codex.

speed.md +23 −0 added

# Speed

## Fast mode

Codex can run the model faster in exchange for increased credit consumption.

Fast mode is currently supported on GPT-5.4. When enabled, speed is increased by 1.5x and credits are consumed at a 2x rate.

Enable it by typing `/fast`. It is available in the Codex IDE extension, Codex CLI, and the Codex app.

[Fast mode demo video](/videos/codex/fast-mode-demo.mp4)
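As a back-of-the-envelope illustration of that tradeoff (hypothetical task sizes, not billing guidance):

```shell
# Illustrative arithmetic only: 1.5x speed means roughly 2/3 the
# wall-clock time, while credits accrue at a 2x rate.
base_seconds=60
base_credits=10
fast_seconds=$((base_seconds * 2 / 3))
fast_credits=$((base_credits * 2))
echo "fast mode: ~${fast_seconds}s, ${fast_credits} credits"  # ~40s, 20 credits
```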

## Codex-Spark

GPT-5.3-Codex-Spark is a separate fast, less-capable Codex model optimized for near-instant, real-time coding iteration. Unlike fast mode, which speeds up GPT-5.4 at a higher credit rate, Codex-Spark is its own model choice with its own usage limits.

During the research preview, Codex-Spark is only available to ChatGPT Pro subscribers.

windows.md +7 −1

# Windows

The easiest way to use Codex on Windows is to use the [Codex app](https://developers.openai.com/codex/app/windows). You can also [set up the IDE extension](https://developers.openai.com/codex/ide) or [install the CLI](https://developers.openai.com/codex/cli) and run it from PowerShell.

[![](/images/codex/codex-banner-icon.webp)

Use the Codex app on Windows

Work across projects, run parallel agent threads, and review results in one place with the native Windows app.](https://developers.openai.com/codex/app/windows)

When you run Codex natively on Windows, agent mode uses a [Windows sandbox](#windows-sandbox) to block filesystem writes outside the working folder and prevent network access without your explicit approval. [Learn more below](#windows-sandbox).