
Security News
Axios Supply Chain Attack Reaches OpenAI macOS Signing Pipeline, Forces Certificate Rotation
OpenAI rotated macOS signing certificates after a malicious Axios package reached its CI pipeline in a broader software supply chain attack.
openai-code
An unofficial proxy layer that lets you use Anthropic Claude Code with any OpenAI API backend.
This repository provides a proxy server that allows Claude Code to work with OpenAI models instead of Anthropic's Claude models. The proxy translates requests in the Anthropic API format to OpenAI API calls and converts the responses back to Anthropic's format.
Smarter, faster, and cheaper than Claude Code (0.2.32).
Technically:
- :so — searches StackOverflow and includes up to 3 answers in the conversation.
- :p — Perplexity search; add OPENAI_CODE_PERPLEXITY_API_KEY to the .env of the project where you use it.
- :d command
- :v — activates the vector-database-based code similarity search; :v3 includes the topK 3 matches in the reasoning.

Instead of the default Claude Code directoryStructure context, OpenAI Code now discovers project files recursively with an advanced algorithm and keeps the listing auto-updated.
To shrink the directory listing in the context, a .claudeignore file can be created in your project root folder. Every glob pattern in this file will be ignored.
This can be helpful if you have files tracked in Git, but they should not be discoverable by the LLM.
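For example, a .claudeignore might look like this (the patterns are ordinary globs; these entries are only examples, not defaults shipped with the project):

```
# Keep build output and test fixtures out of the LLM's directory context
dist/**
coverage/**
**/*.snap
```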
Install the supported Claude Code version:

npm install -g @anthropic-ai/claude-code@0.2.32

In light of recent actions by Anthropic's legal department against open-source projects related to Claude Code, the author of this proxy has chosen to remain anonymous. This project is perfectly legal and does not violate any EU or US laws; however, the author cannot afford to engage in legal disputes. This decision is not meant to raise any concerns about privacy or safety for developers using this proxy. On the contrary, I have a strong commitment to privacy and safety. Developers are encouraged to review the code and suggest improvements via email. Unfortunately, due to legal risks, it is not feasible to host a public repository at this time.
No specific setup is needed; just run: npx openai-code@2.2.1. It will download this repository's code and execute it, binding to port 6543.
Noob warning: if npx is not found, you need to install Node.js first.
Whenever openai-code receives a request, it analyzes the system prompt prepared by Claude Code, determines the working directory, and reads the .env and CLAUDE_RULES.md from the requesting project's working directory. This way, OpenAI Code can offer awesome project- and request-based customizations!
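As a rough illustration, a proxy like this could recover the working directory from the incoming system prompt with a simple pattern match. The "Working directory:" marker below is an assumption for the sketch, not Claude Code's verbatim wording:

```javascript
// Hypothetical sketch: pull the working directory out of the system prompt.
// The "Working directory:" marker is an assumed format, not necessarily the
// exact text Claude Code emits.
function extractWorkingDirectory(systemPrompt) {
  const match = systemPrompt.match(/Working directory:\s*(\S+)/);
  return match ? match[1] : null; // null when no marker is present
}
```

With the directory in hand, the proxy can then read that project's .env and CLAUDE_RULES.md.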
To customize, create a .env file in the project directory you want OpenAI Code to work with, and set the following variables:
OPENAI_CODE_API_KEY="your-openai-api-key"
# Optional settings:
#OPENAI_CODE_BASE_URL="https://api.openai.com/v1" # Base URL for the OpenAI API.
#PROXY_URL="http://your-proxy-server:port" # HTTP proxy URL if needed.
#REASONING_MODEL="o3-mini" # Reasoning model to use (default: o3-mini).
#OPENAI_CODE_EMBEDDING_MODEL="text-embedding-3-small" # Embedding model (default if not specified).
# Optional identification settings:
#OPENAI_CODE_ORGANIZATION_ID="your-organization-id"
#OPENAI_CODE_PROJECT_ID="your-project-id"
You might want to set the environment variable OPENAI_CODE_PORT globally (NOT in the project directory) to start the proxy on a port other than 6543.
You can also run imperatively: OPENAI_CODE_PORT=7654 npx openai-code@2.2.1
Important: Restart your shell or source your configuration file to register the new alias.
Functional differences only occur when the reasoning model in use differs in behavior or because my prompts instruct the model differently. For example, I explicitly PROHIBIT the reading of any .env files. It's not perfect, but better than not doing anything about it...

It is important to note that this project does not infringe upon any EU or US laws, nor does it violate the DMCA, as it does not utilize any Anthropic prompts or code. Each of my prompts has been meticulously designed by me, an experienced AI engineer.
Rant: This project is developed as free software, single-handedly, in just a few hours. Meanwhile, companies like OpenAI and Anthropic employ entire development teams with million-dollar budgets to achieve similar outcomes. Let's eat the rich, or so they say :)
By reducing the number of tokens used to an absolute minimum, I not only decrease the cost but also significantly enhance the speed of all operations.
Rant 2: One might praise the competency of Anthropic's business department. Well, let's just say that the verbosity in Anthropic's original system prompts results in tremendous waste of tokens, increased cost and decreased speed.
Anthropic's original prompts also point to wrong tool names in their own prompts... thanks to your behavior, Anthropic, I leave it to you to find out what I mean by this. Have fun!
My streamlined approach ensures that a typical refactoring task, including writing tests and documentation, can be completed in a few seconds for ~2 cents.
The system is built around an Express-based proxy server (implemented in src/index.mjs) that handles HTTP requests by translating Anthropic-formatted messages into OpenAI API calls and efficiently coordinating tool execution, including dynamic goal tracking and indexing workflows.
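The core translation can be sketched roughly as follows. Field names follow the public Anthropic Messages and OpenAI Chat Completions APIs; the real implementation in src/index.mjs handles much more (tools, streaming, goal tracking), so treat this as a minimal illustration only:

```javascript
// Minimal sketch of the request translation the proxy performs:
// Anthropic Messages format in, OpenAI Chat Completions format out.
function anthropicToOpenAI(anthropicReq) {
  const messages = [];
  if (anthropicReq.system) {
    messages.push({ role: "system", content: anthropicReq.system });
  }
  for (const m of anthropicReq.messages) {
    // Anthropic content may be a plain string or an array of content blocks.
    const text = typeof m.content === "string"
      ? m.content
      : m.content.filter(b => b.type === "text").map(b => b.text).join("\n");
    messages.push({ role: m.role, content: text });
  }
  return {
    model: "o3-mini", // the proxy substitutes an OpenAI model here
    max_completion_tokens: anthropicReq.max_tokens,
    messages,
  };
}
```

The response path does the inverse, wrapping OpenAI completions back into Anthropic-formatted responses.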
The vector indexing is managed through dedicated modules (src/vectorindex.mjs and src/vectordb.mjs), which automatically scan code files, generate embeddings using OpenAI's models, and store them persistently (default JSON file: CLAUDE_VECTORDB.json). This setup ensures that the instance is always up-to-date with the latest code changes.
Key performance optimizations include an optimized matrix multiplication routine that employs loop unrolling. This algorithm accelerates the computation of dot products – essential for calculating cosine similarities between query embeddings and document embeddings – thereby delivering fast and accurate semantic search results.
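The kind of loop-unrolled dot product described above can be sketched like this (illustrative, not the project's exact routine):

```javascript
// Dot product with 4-way loop unrolling: four independent accumulators
// reduce loop overhead and expose instruction-level parallelism.
function dot(a, b) {
  const n = a.length;
  let s0 = 0, s1 = 0, s2 = 0, s3 = 0;
  let i = 0;
  for (; i + 3 < n; i += 4) {
    s0 += a[i] * b[i];
    s1 += a[i + 1] * b[i + 1];
    s2 += a[i + 2] * b[i + 2];
    s3 += a[i + 3] * b[i + 3];
  }
  // Handle the remaining 0–3 elements.
  for (; i < n; i++) s0 += a[i] * b[i];
  return s0 + s1 + s2 + s3;
}

// Cosine similarity between two embeddings, built on the same dot product.
function cosineSimilarity(a, b) {
  return dot(a, b) / (Math.sqrt(dot(a, a)) * Math.sqrt(dot(b, b)));
}
```

Ranking documents by cosineSimilarity against a query embedding and keeping the topK results is all a semantic search over the vector database needs.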
This project automatically selects the appropriate model and reasoning strength for prompt execution.
According to my research:
o3-mini is the optimal OpenAI reasoning model right now (obviously). This is the base model for all reasoning. The reasoning strength, however, is selected according to the actual demands: whenever an error occurs, the reasoning strength is increased, striking a balance between speed, cost, and quality.

Do you plan to contribute and email me suggestions? Do you plan to review my code and check whether I might keylog every keystroke or send all your secret credentials to my evil server?
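The escalation policy described above can be sketched roughly as follows. The effort names match OpenAI's reasoning_effort values; the retry loop itself is a hypothetical illustration, and the real proxy makes asynchronous API calls here:

```javascript
// Hypothetical sketch of the escalation policy: retry the same prompt at the
// next higher reasoning effort whenever a call fails.
const EFFORTS = ["low", "medium", "high"];

function callWithEscalation(runModel, prompt) {
  for (let i = 0; i < EFFORTS.length; i++) {
    try {
      // In the real proxy this is an (async) OpenAI API call.
      return runModel({ model: "o3-mini", reasoning_effort: EFFORTS[i], prompt });
    } catch (err) {
      if (i === EFFORTS.length - 1) throw err; // give up at the highest effort
    }
  }
}
```

Starting cheap and escalating only on failure is what keeps the average request fast and inexpensive while still recovering from hard cases.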
Here's an outline of this project's codebase for you to start:
- Almost everything lives in src/index.mjs (right, I gave a F on architecture for a few-hundred-line codebase). All prompts are located in src/prompts.mjs.
- https-proxy-agent is used when a PROXY_URL is set (useful for enterprise environments or when behind a "great" firewall).
- Logging goes through a logToFile function, and server errors are handled gracefully with appropriate HTTP responses. All default logging happens on the console (stdout).
- A config file (.claude.json) is created in your home user directory, if it doesn't exist yet, setting default Claude Code values for user settings.

I'll leave this here, shall I ever want or need to verify that I'm the original author of this codebase:
AAAAC3NzaC1lZDI1NTE5AAAAIMpneofHS0ciT1pVEgZhbqqzbmUgPz0z/VjU91daL5uB
We found that openai-code demonstrates an unhealthy version release cadence and low project activity: the last version was released a year ago. It has 1 open source maintainer collaborating on the project.