# Issue #224: OpenAI rejected the request

Status: closed | Created: April 17, 2025 | Author: odcpw

## Description

⚠️ OpenAI rejected the request (request ID: req_807b5f248c8df9365ddfb64e0991673e). Please verify your settings and try again.

I got the above error after some connection issues (using Codex on the move with a phone hotspot). After regaining my internet connection, I wasn't able to send any more messages (I kept getting this error for every message I sent) and had to close the app, losing all context and then having to rebuild a lot.

## Comments (12)

### nerzhulart - April 17, 2025

Same for me.

### ivaaaan - April 17, 2025

Same for me.

### maheshrijal - April 17, 2025

Go to https://platform.openai.com/settings/organization/general and add a payment method, then verify your org if required.

Also, if you are accessing it from a company device, the request is most likely being blocked by organization policy. I was seeing the issue earlier on my work-issued laptop as well.

### ivaaaan - April 17, 2025

I have a balance and a payment method; it still doesn't work.

### aikanarov - April 17, 2025

I think it's not using the correct org.

### hirenpatelatl - April 17, 2025

I got the same error last night. The issue resolved after I requested a specific version of the model instead of using the default model:

model: `o4-mini-2025-04-16`

### hirenpatelatl - April 17, 2025

After this I had to install ripgrep (`brew install ripgrep`).

After all of that I was still not able to complete a request because it couldn't update the files on my file system. This was the last troubleshooting exchange I had before I gave up:

Here's what's happening: you don't need any extra installs beyond ripgrep and the standard Codex CLI prerequisites. The "Invalid Add File Line" or "aborted" errors come from using the shell tool for patching, not from a missing binary.

#### 🐛 Root Cause

The CLI's shell function is only meant to run plain shell commands.
When you feed it a multi-file patch, it tries to interpret each diff line (`*** Add File: …`, etc.) as a shell command and then fails with "Invalid Add File Line" or abruptly aborts.

#### 🔧 No Additional Installs Needed

You've already installed the only external tool Codex CLI depends on besides Node 22+ and Git:

- `rg` (ripgrep) for file indexing; install via `brew install ripgrep` on macOS, `sudo apt install ripgrep` on Linux, etc.
- The Codex CLI itself (`npm install -g @openai/codex`), which bundles its own helper logic; no further binaries are required to manage patches.

#### ✨ Correct Multi-File Patching

Instead of invoking `functions.shell` with patch text, emit a top-level `apply_patch` action in the commentary channel. Codex CLI recognizes this directive, splits out each file diff, and writes them correctly. For example:

```
{"cmd": ["apply_patch", "*** Begin Patch
*** Add File: iCloud-Ref-Conversion/PRD.md
# Product Requirements Document
…full markdown…
*** End Patch
*** Begin Patch
*** Add File: iCloud-Ref-Conversion/SwiftFileFeatures.md
# Swift File Features
…full markdown…
*** End Patch"]}
```

This tells the agent exactly where each file starts and ends, avoiding any shell-parsing errors.

#### 📌 How It Works Under the Hood

- The `apply_patch` command is defined in the CLI's agent loop as a distinct tool (see the `ApplyPatchCommand` type), so patch diffs aren't run through your shell.
- It looks for `*** Begin Patch`/`*** End Patch` markers to separate multiple files in one payload.
- Any attempt to push that payload via the shell tool instead will be mis-tokenized and rejected.

#### ✅ Next Steps

1. Remove patch text from any `functions.shell` calls.
2. Use a single `apply_patch` payload as shown above.
3. Rerun your `codex` command and Codex CLI will create or update each file as intended; no further installs required.

### heyitsnoah - April 17, 2025

> I got the same error last night. And the issue resolved after I requested a specific version of the model instead of using the default model ID.
FWIW, I did the same thing and found it worked. I got o4-mini working by referencing the correct model ID (`o4-mini-2025-04-16`). No idea why it was defaulting to a non-working model ID.

### anishchhaparwal - April 17, 2025

[Image]

I resolved this issue by enabling the models in my limits settings.

### KcPele - April 17, 2025

I am still experiencing this error. I am on Linux 24, and my OpenAI organisation is verified.

The prompt: Hello

Then I got this error:

```
status: undefined,
headers: undefined,
request_id: undefined,
error: {
  type: 'invalid_request_error',
  code: 'invalid_prompt',
  message: 'Invalid prompt: your prompt was flagged as potentially violating our usage policy. Please try again with a different prompt: https://platform.openai.com/docs/guides/reasoning#advice-on-prompting',
  param: null
},
code: 'invalid_prompt',
param: null,
type: 'invalid_request_error'
```

### anant2614 - May 21, 2025

I'm using the Azure OpenAI provider and seeing this error. Any resolutions so far?

### codex-maintainers - August 7, 2025

Thank you for the feedback! Please try using the latest version of Codex CLI to see if the issue persists. If it does, feel free to report it again.

## Labels

- bug