@promethean-os/opencode-openai-codex-auth
OpenAI ChatGPT (Codex backend) OAuth auth plugin for opencode - use your ChatGPT Plus/Pro subscription instead of API credits
This plugin enables opencode to use OpenAI's Codex backend via ChatGPT Plus/Pro OAuth authentication, allowing you to use your ChatGPT subscription instead of OpenAI Platform API credits.
Found this useful? Check out the original project by numman-ali and follow @nummanthinks on X for future updates!
Important: This plugin is designed for personal development use only with your own ChatGPT Plus/Pro subscription. By using this tool, you agree to:
This tool uses OpenAI's official OAuth authentication (the same method as OpenAI's official Codex CLI). However, users are responsible for ensuring their usage complies with OpenAI's terms.
For production applications or commercial use, use the OpenAI Platform API with proper API keys.
For the complete experience, with all reasoning variants matching the official Codex CLI, copy config/full-opencode.json into your opencode config file:
{
"$schema": "https://opencode.ai/config.json",
"plugin": [
"@promethean-os/opencode-openai-codex-auth"
],
"provider": {
"openai": {
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
},
"models": {
"gpt-5.1-codex-low": {
"name": "GPT 5.1 Codex Low (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-medium": {
"name": "GPT 5.1 Codex Medium (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-high": {
"name": "GPT 5.1 Codex High (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-mini-medium": {
"name": "GPT 5.1 Codex Mini Medium (OAuth)",
"limit": {
"context": 200000,
"output": 100000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-codex-mini-high": {
"name": "GPT 5.1 Codex Mini High (OAuth)",
"limit": {
"context": 200000,
"output": 100000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-none": {
"name": "GPT 5.1 None (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "none",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-low": {
"name": "GPT 5.1 Low (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-medium": {
"name": "GPT 5.1 Medium (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5.1-high": {
"name": "GPT 5.1 High (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "high",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-codex-low": {
"name": "GPT 5 Codex Low (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-codex-medium": {
"name": "GPT 5 Codex Medium (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-codex-high": {
"name": "GPT 5 Codex High (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-codex-mini-medium": {
"name": "GPT 5 Codex Mini Medium (OAuth)",
"limit": {
"context": 200000,
"output": 100000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-codex-mini-high": {
"name": "GPT 5 Codex Mini High (OAuth)",
"limit": {
"context": 200000,
"output": 100000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-minimal": {
"name": "GPT 5 Minimal (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "minimal",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-low": {
"name": "GPT 5 Low (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-medium": {
"name": "GPT 5 Medium (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-high": {
"name": "GPT 5 High (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "high",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-mini": {
"name": "GPT 5 Mini (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
},
"gpt-5-nano": {
"name": "GPT 5 Nano (OAuth)",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "minimal",
"reasoningSummary": "auto",
"textVerbosity": "low",
"include": [
"reasoning.encrypted_content"
],
"store": false
}
}
}
}
}
}
Global config: ~/.config/opencode/opencode.json
Project config: <project>/.opencode.json
This now gives you 20 model variants: the new GPT-5.1 lineup (recommended) plus every legacy gpt-5 preset for backwards compatibility.
All appear in the opencode model selector as "GPT 5.1 Codex Low (OAuth)", "GPT 5 High (OAuth)", etc.
When using config/full-opencode.json, you get these GPT-5.1 presets plus the original gpt-5 variants:
| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
|---|---|---|---|
| gpt-5.1-codex-low | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation on the newest Codex tier |
| gpt-5.1-codex-medium | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code + tooling workflows |
| gpt-5.1-codex-high | GPT 5.1 Codex High (OAuth) | High | Multi-step coding tasks with deep tool use |
| gpt-5.1-codex-mini-medium | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Budget-friendly Codex runs (200k/100k tokens) |
| gpt-5.1-codex-mini-high | GPT 5.1 Codex Mini High (OAuth) | High | Cheaper Codex tier with maximum reasoning |
| gpt-5.1-none | GPT 5.1 None (OAuth) | None | Latency-sensitive chat/tasks using the new "no reasoning" mode |
| gpt-5.1-low | GPT 5.1 Low (OAuth) | Low | Fast general-purpose chat with light reasoning |
| gpt-5.1-medium | GPT 5.1 Medium (OAuth) | Medium | Default adaptive reasoning for everyday work |
| gpt-5.1-high | GPT 5.1 High (OAuth) | High | Deep analysis when reliability matters most |
| CLI Model ID | TUI Display Name | Reasoning Effort | Best For |
|---|---|---|---|
| gpt-5-codex-low | GPT 5 Codex Low (OAuth) | Low | Fast code generation |
| gpt-5-codex-medium | GPT 5 Codex Medium (OAuth) | Medium | Balanced code tasks |
| gpt-5-codex-high | GPT 5 Codex High (OAuth) | High | Complex code & tools |
| gpt-5-codex-mini-medium | GPT 5 Codex Mini Medium (OAuth) | Medium | Cheaper Codex tier (200k/100k) |
| gpt-5-codex-mini-high | GPT 5 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning |
| gpt-5-minimal | GPT 5 Minimal (OAuth) | Minimal | Quick answers, simple tasks |
| gpt-5-low | GPT 5 Low (OAuth) | Low | Faster responses with light reasoning |
| gpt-5-medium | GPT 5 Medium (OAuth) | Medium | Balanced general-purpose tasks |
| gpt-5-high | GPT 5 High (OAuth) | High | Deep reasoning, complex problems |
| gpt-5-mini | GPT 5 Mini (OAuth) | Low | Lightweight tasks |
| gpt-5-nano | GPT 5 Nano (OAuth) | Minimal | Maximum speed |
Usage: --model=openai/<CLI Model ID> (e.g., --model=openai/gpt-5-codex-low)
Display: TUI shows the friendly name (e.g., "GPT 5 Codex Low (OAuth)")
Note: All gpt-5.1-codex-mini* and legacy gpt-5-codex-mini* presets normalize to the ChatGPT slug gpt-5.1-codex-mini (200k input / 100k output tokens).
All accessed via your ChatGPT Plus/Pro subscription.
Important: Always include the openai/ prefix:
# ✅ Correct
model: openai/gpt-5-codex-low
# ❌ Wrong - will fail
model: gpt-5-codex-low
See Configuration Guide for advanced usage.
When no configuration is specified, the plugin uses these defaults for all GPT-5 models:
{
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium"
}
- reasoningEffort: "medium" - Balanced computational effort for reasoning
- reasoningSummary: "auto" - Automatically adapts summary verbosity
- textVerbosity: "medium" - Balanced output length

These defaults match the official Codex CLI behavior and can be customized (see Configuration below). GPT-5.1 requests automatically start at reasoningEffort: "none", while Codex/Codex Mini presets continue to clamp to their supported levels.
The easiest way to get started is to use config/full-opencode.json, which provides all 20 model variants preconfigured to match the official Codex CLI.
See Installation for setup instructions.
If you want to customize settings yourself, you can configure options at provider or model level.
⚠️ Important: The two base models have different supported values.
| Setting | GPT-5 / GPT-5.1 Values | GPT-5-Codex / Codex Mini Values | Plugin Default |
|---|---|---|---|
| reasoningEffort | none, minimal, low, medium, high | low, medium, high | medium |
| reasoningSummary | auto, detailed | auto, detailed | auto |
| textVerbosity | low, medium, high | medium only | medium |
| include | Array of strings | Array of strings | ["reasoning.encrypted_content"] |
Note: minimal effort is auto-normalized to low for gpt-5-codex (not supported by the API). none is only supported on GPT-5.1 general models; when used with legacy gpt-5 it is normalized to minimal.
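The normalization rules above can be sketched as follows. This is a hypothetical illustration; the function and model-family names are ours, not the plugin's actual API:

```javascript
// Hypothetical sketch of the effort-normalization rules described above.
// Names are illustrative only; the plugin's internals may differ.
function normalizeReasoningEffort(modelFamily, effort) {
  if (modelFamily === "gpt-5-codex" && effort === "minimal") {
    // gpt-5-codex does not accept "minimal"; clamp to "low"
    return "low";
  }
  if (modelFamily === "gpt-5" && effort === "none") {
    // "none" exists only on GPT-5.1 general models; legacy gpt-5 gets "minimal"
    return "minimal";
  }
  return effort;
}

console.log(normalizeReasoningEffort("gpt-5-codex", "minimal")); // → "low"
console.log(normalizeReasoningEffort("gpt-5", "none"));          // → "minimal"
console.log(normalizeReasoningEffort("gpt-5.1", "none"));        // → "none"
```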
Set these in ~/.opencode/openai-codex-auth-config.json:
- codexMode (default true): enable the Codex ↔ OpenCode bridge prompt
- enablePromptCaching (default true): keep a stable prompt_cache_key and preserved message IDs so Codex can reuse cached prompts, reducing token usage and costs

Apply settings to all models:
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["@promethean-os/opencode-openai-codex-auth"],
"model": "openai/gpt-5-codex",
"provider": {
"openai": {
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed"
}
}
}
}
Create your own named variants in the model selector:
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["@promethean-os/opencode-openai-codex-auth"],
"provider": {
"openai": {
"models": {
"codex-fast": {
"name": "My Fast Codex",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "low"
}
},
"gpt-5-smart": {
"name": "My Smart GPT-5",
"limit": {
"context": 400000,
"output": 128000
},
"options": {
"reasoningEffort": "high",
"textVerbosity": "high"
}
}
}
}
}
}
Config key (e.g., codex-fast) is used in CLI: --model=openai/codex-fast
name field (e.g., "My Fast Codex") appears in model selector
Model type is auto-detected from the key (contains "codex" → gpt-5-codex, else → gpt-5)
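The key-based auto-detection above can be sketched as a one-liner. This is an assumed illustration of the rule as stated, not the plugin's actual source:

```javascript
// Hypothetical sketch of the auto-detection rule described above:
// keys containing "codex" map to the codex family, everything else to gpt-5.
function detectModelType(configKey) {
  return configKey.includes("codex") ? "gpt-5-codex" : "gpt-5";
}

console.log(detectModelType("codex-fast"));  // → "gpt-5-codex"
console.log(detectModelType("gpt-5-smart")); // → "gpt-5"
```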
For advanced options, custom presets, and troubleshooting:
📖 Configuration Guide - Complete reference with examples
This plugin respects the same rate limits enforced by OpenAI's official Codex CLI.
Note: Excessive usage or violations of OpenAI's terms may result in temporary throttling or account review by OpenAI.
Common Issues:
- Authentication errors: run opencode auth login again
- Model not found: use the openai/ prefix (e.g., --model=openai/gpt-5-codex-low)

Full troubleshooting guide: docs/troubleshooting.md
Enable detailed logging:
DEBUG_CODEX_PLUGIN=1 opencode run "your prompt"
For full request/response logs:
ENABLE_PLUGIN_REQUEST_LOGGING=1 opencode run "your prompt"
Logs saved to: ~/.opencode/logs/codex-plugin/
See Troubleshooting Guide for details.
This plugin uses OpenAI's official OAuth authentication (the same method as their official Codex CLI). It's designed for personal coding assistance with your own ChatGPT subscription.
However, users are responsible for ensuring their usage complies with OpenAI's Terms of Use. This means:
No. This plugin is intended for personal development only.
For commercial applications, production systems, or services serving multiple users, you must obtain proper API access through the OpenAI Platform API.
Using OAuth authentication for personal coding assistance aligns with OpenAI's official Codex CLI use case. However, violating OpenAI's terms could result in account action.
Safe use:
Risky use:
Critical distinction:
OAuth is a proper, supported authentication method. Session token scraping and reverse-engineering private APIs are explicitly prohibited by OpenAI's terms.
This is not a "free API alternative."
This plugin allows you to use your existing ChatGPT subscription for terminal-based coding assistance (the same use case as OpenAI's official Codex CLI).
If you need API access for applications, automation, or commercial use, you should purchase proper API access from OpenAI Platform.
No. This is an independent open-source project. It uses OpenAI's publicly available OAuth authentication system but is not endorsed, sponsored by, or affiliated with OpenAI.
ChatGPT, GPT-5, and Codex are trademarks of OpenAI.
Prompt caching is enabled by default to save you money.
You can disable it by creating ~/.opencode/openai-codex-auth-config.json with:
{
"enablePromptCaching": false
}
Warning: Disabling caching will dramatically increase your token usage and costs.
This plugin implements OAuth authentication for OpenAI's Codex backend, using the same authentication flow as OpenAI's official Codex CLI.
Based on research and working implementations from:
Not affiliated with OpenAI. ChatGPT, GPT-5, GPT-4, GPT-3, Codex, and OpenAI are trademarks of OpenAI, L.L.C. This is an independent open-source project and is not endorsed by, sponsored by, or affiliated with OpenAI.
📖 Documentation:
MIT