---
name: Iterate on difficult problems
tagline: Use Codex as a scored improvement loop to solve hard tasks.
summary: Give Codex an evaluation system, such as scripts and reviewable
  artifacts, so it can keep improving a hard task until the scores are good
  enough.
bestFor:
  - Problems where each iteration can be scored, but the best result usually
    takes many passes
  - Tasks with visual or subjective outputs that need both deterministic checks
    and an LLM-as-a-judge score
  - Long-running Codex sessions where you want progress tracked clearly instead
    of relying on context
starterPrompt:
  title: Keep Iterating Until the Eval Passes
  body: >-
    I have a difficult task in this workspace and I want you to run it as an
    eval-driven improvement loop.


    Before changing anything:

    - Read `AGENTS.md`.

    - Find the script or command that scores the current output.


    Iteration loop:

    - Make one focused improvement at a time.

    - Re-run the eval command after each meaningful change.

    - Log the scores and what changed.

    - Inspect generated artifacts directly. If the output is visual, use
    `view_image`.

    - Keep going until both the overall score and the LLM average are above 90%.


    Constraints:

    - Do not stop at the first acceptable result.

    - Do not revert to an earlier version unless the new result is clearly worse
    in scores or artifacts.

    - If the eval improves but is still below target, explain the bottleneck and
    continue.


    Output:

    - current best scores

    - log of major iterations

    - remaining risks or weak spots
relatedLinks:
  - label: Custom instructions with AGENTS.md
    url: /codex/guides/agents-md
  - label: Codex workflows
    url: /codex/workflows
---

## Introduction

Some tasks are easy to verify in one shot: the build passes, the tests go green, and you are done. But some optimization problems are hard to solve in a single pass and need many iterations inside a tight evaluation loop. To know which direction to go, Codex needs to inspect the current output, score it, decide on the next change, and repeat until the result is actually good.

This type of use case pairs well with a custom UI that lets you inspect progress visually: have Codex log the outputs and generated artifacts for each iteration.
You can then watch Codex keep working in the app while the target artifact, model output, or generated asset improves.
The key is to give Codex the scripts it needs to produce both the evaluation metrics and the artifacts to inspect.

## Start with evals

Before the task begins, define how success will be measured. The best setup usually combines:

- **Deterministic checks:** things the scripts can score directly, such as constraint violations or deterministic metrics computed with code
- **LLM-as-a-judge checks:** rubric-based scores for qualities that are harder to encode exactly, such as resemblance, readability, usefulness, or overall quality; these checks can work from text or image outputs

If the subjective part matters, give Codex a script that can call a model, for example through the [Responses API](https://developers.openai.com/api/reference/resources/responses/methods/create), and return structured scores. The point is not to replace deterministic checks; it is to supplement them with a consistent judge for the parts a human would otherwise assess by eye.
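
A minimal sketch of what such a judge script could look like, assuming a Python environment with the `openai` package installed; the model name, rubric file, and score fields are placeholders to adapt to your task:

```python
# judge.py: LLM-as-a-judge sketch; paths, model, and score fields are placeholders.
import json
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def judge(artifact_text: str, rubric_path: str = "rubric.md") -> dict:
    """Score one artifact against a rubric and return structured scores."""
    rubric = Path(rubric_path).read_text()
    prompt = (
        "Score the artifact against the rubric. Reply with JSON only, shaped like "
        '{"resemblance": 0, "readability": 0, "usefulness": 0} with values from 0 to 100.\n\n'
        f"Rubric:\n{rubric}\n\nArtifact:\n{artifact_text}"
    )
    response = client.responses.create(model="gpt-4o-mini", input=prompt)
    return json.loads(response.output_text)


if __name__ == "__main__":
    print(json.dumps(judge(Path("output/current.txt").read_text()), indent=2))
```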

The loop works best when the eval output is machine-readable, saved after every run, and easy to compare over time.
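
For example, the eval script could append every run to a small history file so results are easy to diff across iterations; the file name, fields, and 50/50 weighting below are only one possible layout:

```python
# Append one machine-readable record per eval run; layout and weighting are illustrative.
import json
import time
from pathlib import Path


def record_run(deterministic_score: float, llm_scores: dict, path: str = "eval_history.jsonl") -> dict:
    llm_average = sum(llm_scores.values()) / len(llm_scores)
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "deterministic_score": deterministic_score,
        "llm_scores": llm_scores,
        "llm_average": llm_average,
        "overall": 0.5 * deterministic_score + 0.5 * llm_average,  # assumed weighting
    }
    with Path(path).open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```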

**Tip**: Ask Codex to generate the evaluation script for you, describing the checks you want to run.

## Give Codex a stopping rule

Hard tasks often drift because the prompt says “keep improving” without saying when to stop. Make the stopping rule explicit.

A practical pattern is:

1. Set a target for the overall score.
2. Set a separate target for the LLM-judge average.
3. Tell Codex to continue until both are above the threshold, not just one.

For example, if the goal is a high-quality artifact, ask Codex to keep going until both the overall score and the LLM average are above 90%. That makes the task legible: Codex can tell whether it is still below target, where the gap is, and whether the latest change helped.
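
That stopping rule is easy to encode next to the eval itself; here is a sketch, assuming the `eval_history.jsonl` layout from the earlier example:

```python
# Stop only when BOTH the overall score and the LLM average clear their targets.
import json
from pathlib import Path


def targets_met(path: str = "eval_history.jsonl", overall_target: float = 90.0, llm_target: float = 90.0) -> bool:
    lines = Path(path).read_text().splitlines() if Path(path).exists() else []
    if not lines:
        return False  # no eval has run yet
    latest = json.loads(lines[-1])
    return latest["overall"] >= overall_target and latest["llm_average"] >= llm_target
```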

## Keep a running log of the loop

Long-running work is much more reliable when Codex keeps notes about the loop instead of trying to remember everything from the thread.

That running log should record:

- the current best scores
- what changed on the last iteration
- what the eval said got better or worse
- what Codex plans to try next

This is especially important when the task runs for a long time. The log becomes the handoff point for the next session and the self-evaluation record for the current one.
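
One lightweight way to keep that log is a small helper Codex can call, or a format it can follow by hand, after each iteration; the file name and fields below are only a suggestion:

```python
# Append one entry per iteration to a running log; ITERATION_LOG.md is a suggested name.
from pathlib import Path


def log_iteration(iteration: int, best_scores: dict, change: str, eval_delta: str,
                  next_step: str, path: str = "ITERATION_LOG.md") -> None:
    entry = (
        f"\n## Iteration {iteration}\n"
        f"- Best scores so far: {best_scores}\n"
        f"- What changed: {change}\n"
        f"- What the eval said: {eval_delta}\n"
        f"- Next: {next_step}\n"
    )
    with Path(path).open("a") as f:
        f.write(entry)
```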

## Inspect the artifact, not just the logs

For some difficult tasks, the code diff and metric output are not enough. Codex should look at the artifact it produced.

If the output is visual, such as a generated image, layout, or rendered state, let Codex inspect that artifact directly, for example with `view_image` when it lives on disk as an image, and compare the current result to the prior best result or to the intended rubric.

This makes the loop stronger:

- the eval script reports the score
- the artifact shows what the score missed
- the next change is grounded in both

That combination is much more effective than changing code blindly between runs.
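
One way to support this is to snapshot the artifact on every run, so the current and best-so-far versions sit next to each other on disk for Codex to open with `view_image`; the paths and PNG format below are assumptions:

```python
# Keep per-iteration snapshots plus a best-so-far copy for visual comparison.
import shutil
from pathlib import Path


def snapshot_artifact(current_path: str, iteration: int, improved: bool, out_dir: str = "artifacts") -> None:
    out = Path(out_dir)
    out.mkdir(exist_ok=True)
    shutil.copy(current_path, out / f"iter_{iteration:03d}.png")
    if improved:
        shutil.copy(current_path, out / "best.png")
```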

## Make every iteration explicit

Ask Codex to follow the same loop every time:

1. Run the evals on the current baseline.
2. Identify the biggest failure mode from the scores and artifacts.
3. Make one focused change that addresses that bottleneck.
4. Re-run the evals.
5. Log the new scores and whether the change helped.
6. Continue until the thresholds are met.

This discipline matters. If each iteration changes too many things at once, Codex cannot tell which idea improved the score. If it skips logging, the session becomes hard to trust and hard to resume.
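
The mechanical parts of that loop, steps 1, 4, 5, and 6, can be wrapped in a single entry point so every iteration runs the same way, while the focused change in steps 2 and 3 stays with Codex. The sketch below reuses the helpers from the earlier sections; the deterministic check and artifact path are placeholders for your project's real scripts:

```python
# One call per iteration: run the evals, log the result, report whether to stop.
# Assumes judge(), record_run(), log_iteration(), and targets_met() from the sketches above.
from pathlib import Path


def run_deterministic_checks() -> float:
    # Placeholder: replace with the project's real scripted checks (0-100).
    return 0.0


def run_iteration(iteration: int, change: str, next_step: str) -> bool:
    deterministic_score = run_deterministic_checks()
    llm_scores = judge(Path("output/current.txt").read_text())
    record = record_run(deterministic_score, llm_scores)
    log_iteration(
        iteration,
        best_scores={"overall": record["overall"], "llm_average": record["llm_average"]},
        change=change,
        eval_delta="see eval_history.jsonl",
        next_step=next_step,
    )
    return targets_met()  # True once both thresholds are met
```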