
Use Claude Code with ANY AI model - OpenAI, Groq, Gemini, Local Models, OpenRouter's 100+ models, and more!
https://github.com/user-attachments/assets/5fdfce3d-dc20-4825-b4df-ea2787e54858
Hey there! I'm Unclecode, author of Crawl4AI.
After trying alternatives like Gemini CLI and Qwen Code, I realized something: The magic of Claude Code isn't just the model - it's the assistant itself. The way it's engineered as an agentic coding assistant is what makes it so efficient. I wanted this incredible experience with ALL models, not just Claude. So I built Antomix!
The result: Antomix, a universal proxy that lets any app built against Anthropic's API work with OpenAI, Groq, Gemini, OpenRouter's 100+ models, local models via Ollama, and more. Run it as a proxy, and any app that connects to Anthropic can be redirected to the model of your choice. Anyway, have fun, star it ⭐, follow me, and share your experience!
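Under the hood, the proxy rewrites Anthropic Messages API payloads into OpenAI-style chat completion payloads (and translates the responses back) before forwarding them to the selected provider. Here's a rough TypeScript sketch of the request-side idea, purely illustrative, not Antomix's actual code, and using a hypothetical model mapping:

// Illustration of the translation concept (not Antomix's implementation)
interface AnthropicRequest {
  model: string;
  system?: string;
  messages: { role: "user" | "assistant"; content: string }[];
  max_tokens: number;
}

interface OpenAIRequest {
  model: string;
  messages: { role: "system" | "user" | "assistant"; content: string }[];
  max_tokens?: number;
}

// Hypothetical mapping from a Claude model to a target provider model
const modelMap: Record<string, string> = {
  "claude-sonnet-4-20250514": "llama-3.3-70b-versatile",
};

function toOpenAI(req: AnthropicRequest): OpenAIRequest {
  const messages: OpenAIRequest["messages"] = [];
  // Anthropic's separate `system` field becomes a system message
  if (req.system) messages.push({ role: "system", content: req.system });
  for (const m of req.messages) messages.push({ role: m.role, content: m.content });
  return { model: modelMap[req.model] ?? req.model, messages, max_tokens: req.max_tokens };
}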
npm install -g antomix
Make sure your API keys are set in your system (How to get API keys):
export GROQ_API_KEY="your-groq-key" # For Groq (super fast!)
export OPENAI_API_KEY="your-openai-key" # For OpenAI
export GEMINI_API_KEY="your-gemini-key" # For Gemini
export OPENROUTER_API_KEY="your-or-key" # For OpenRouter (100+ models)
# Local models (Ollama) work without API keys!
# Interactive selection - choose with arrow keys ✅/❌ indicators
antomix claude
# Or specify profile directly
antomix claude --profile openrouter-qwen # [openai|groq|gemini|ollama|...]
💡 Tip: Use antomix profiles to list all available profiles. Missing API keys? Antomix guides you through setup!
[!IMPORTANT] When you exit Claude Code, the proxy automatically stops and cleans up!
[GIF Here I will add a video or GIF animation to show how it works]
Get diverse perspectives from multiple AI models in one shot! The $$colab command lets you query several models in parallel and see all their responses together.
# Ask multiple models for help debugging
$$colab o3,gpt41,groq-qwen,grok4 Why is my Redis connection timing out in production?
# Get creative ideas from different AI perspectives
$$colab gpt41,open-geminipro,sonnet4,grok4 Write a catchy marketing tagline for an eco-friendly water bottle
# Compare solutions from various models
$$colab o3,open-qwen,groq-deepseek,groq-llama What is the most efficient sorting algorithm for partially sorted data
# Use `fresh` to exclude conversation history for unbiased responses
$$colab open-qwen,o3pro,sonnet4 fresh Review this architecture without context
How it works:
- Separate the model names with commas and no spaces (o3,gpt41,groq-llama ✅ not o3, gpt41, groq-llama ❌)

[!TIP] Check the detailed docs below for pre-configured model sets like think, code, and docs that group the best models for specific tasks!
https://github.com/user-attachments/assets/1a48f6e0-1f4c-408d-b100-88f26bb4e343
$$ Command - Use Any Model for Single Messages

Switch models temporarily for individual messages without changing your main profile:
# Using shortcuts (25+ pre-configured) - just type $$[shortcut]
$$groq-qwen What is the capital of France?
$$o3pro Solve this complex problem: [problem]
$$open-grok4 Write a funny story about AI
$$groq-llama Fast Groq inference
# Or use explicit $$set: syntax
$$set:groq-qwen What is the capital of France?
$$set:o3pro Solve this complex problem: [problem]
# Using direct profile/model syntax with $$set:
$$set:groq/llama-3.3-70b-versatile Explain quantum computing
$$set:openai/o3-pro Analyze this code: [code]
$$set:openrouter-qwen/anthropic/claude-opus-4 Deep analysis needed
Available shortcuts:
- groq-qwen groq-llama groq-deepseek groq-kimi2 (fast inference)
- o3pro o3 o3mini o4 gpt41 (latest models)
- gemini-flash gemini-pro (direct Google API)
- cerebras-coder cerebras-qwen (ultra-fast large models)
- open-qwen open-geminipro open-mistral open-grok4 (100+ models)
- opus4 sonnet4 haiku35 (Claude models via OpenRouter)

[!TIP] Create your own shortcuts! Add custom shortcuts in ~/.antomix/shortcuts.yml or use antomix shortcuts add mymodel profile/model. See Shortcuts Management below for details.
Manage profiles and system settings:
$$switch-profile groq # Switch main profile to Groq
$$status # Check current model and status
$$shortcuts # List and manage shortcuts
$$profiles # See all available profiles
[!TIP]
$$[shortcut] is the easiest way! Just type $$groq-qwen message instead of $$set:groq-qwen message. Both work! Temporary vs Permanent: $$ commands are temporary (one message), $$switch-profile is permanent (changes your main model).
Switch models without restarting using $ commands:
# In Claude Code, type any of these:
$$switch-profile groq # Switch to Groq
$$switch-profile openai # Switch to OpenAI
$$status # Check current model
$$profiles # List all available profiles
$$help # Show all commands
Control whether the proxy converts requests or passes them through directly:
# Enable proxy conversion (default)
$$proxy on
# → Converts Claude requests to target model requests
# → Uses the current profile's model mappings
# → This is the normal operating mode
# Disable proxy conversion (passthrough)
$$proxy off
# → Direct passthrough to original APIs
# → No conversion or modification
# → Useful for debugging or using original APIs
# Check current proxy status
$$proxy status
# → Shows if proxy is ON (converting) or OFF (passthrough)
The $$status command also shows proxy status alongside profile info.
Set up Antomix to run as a background service:
# Point Claude Code (or any app) to Antomix
export ANTHROPIC_BASE_URL="http://localhost:3000"
# Start Antomix with your preferred model
antomix start --profile groq --port 3000
# Check status
antomix status
# Stop when done
antomix stop
[!IMPORTANT] Any application that uses Anthropic's API will now use your chosen model! No code changes needed.
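Apps that honor ANTHROPIC_BASE_URL (like Claude Code) need no code changes at all. For clients where you set the endpoint explicitly, here's a minimal sketch using the official @anthropic-ai/sdk for Node.js, assuming Antomix is running on port 3000 as above; the placeholder API key is an assumption — the real provider keys come from your environment variables:

import Anthropic from "@anthropic-ai/sdk";

async function main() {
  // Point the client at the local Antomix proxy instead of api.anthropic.com
  const client = new Anthropic({
    baseURL: "http://localhost:3000",
    apiKey: "placeholder", // assumption: the proxy relies on provider keys from your env
  });

  // The Claude model name is mapped to the active profile's target model by the proxy
  const msg = await client.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Explain what this proxy does in one line." }],
  });

  console.log(msg.content);
}

main();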
antomix start [--profile <name>] [--port <port>] # Start proxy server
antomix stop # Stop server
antomix status # Show status
antomix switch <profile> # Switch running server profile
antomix profiles # List all available profiles
antomix profiles list # List all available profiles
antomix profiles list --verbose # Show detailed profile information
antomix profiles show groq # Show full YAML configuration of a profile
antomix profiles create # Create a new custom profile interactively
antomix profiles create my-provider # Create profile with specific name
antomix profiles create groq # Duplicate existing 'groq' profile
antomix profiles edit my-provider # Edit custom profile in nano
antomix profiles remove my-provider # Remove custom profile
antomix export <filename> # Export configuration
antomix shortcuts # List all shortcuts
antomix shortcuts list # List all shortcuts
antomix shortcuts edit # Edit shortcuts file in nano
antomix shortcuts add <name> <profile/model> # Add new shortcut
antomix shortcuts remove <name> # Remove shortcut
antomix shortcuts stats # Show shortcuts statistics
antomix colab # List all colab sets
antomix colab list # List all colab sets
antomix colab add <name> <models> [-- <suffix>] # Add new colab set
antomix colab remove <name> # Remove colab set
antomix logs # View recent logs
antomix logs --follow # Follow logs in real-time
antomix logs --level error # Show only error logs
antomix logs --session <id> # Show logs for specific session
Get instant help and answers about Antomix using AI:
antomix ask "<question>" # Ask questions about Antomix
antomix ask "how do I create a custom profile?" # Get help with specific tasks
antomix ask "what models are available?" # Learn about available models
antomix ask "how to use $$colab command?" # Learn about specific features
Features:
First-time setup:
rm ~/.antomix/cache/ask-profile.json

Note: Quotes are required for questions with special characters:
antomix ask "how to create a profile?" # ✅ Correct
antomix ask how to create a profile? # ❌ Shell may interpret ? as wildcard
antomix --help # Show help
antomix --version # Show version
Use these commands directly in Claude Code or any connected application:
$$ Command - Temporary Model Switching

# Using shortcuts (fastest way) - just type $$[shortcut]
$$groq-qwen How does photosynthesis work?
$$o3pro Solve this complex reasoning task
$$open-grok4 Tell me a joke about programming
# Or use explicit $$set: syntax
$$set:groq-qwen How does photosynthesis work?
$$set:o3pro Solve this complex reasoning task
# Using full profile/model syntax with $$set:
$$set:groq/qwen/qwen3-32b Quick question here
$$set:openai/o3-pro Complex analysis needed
$$set:openrouter-qwen/x-ai/grok-4 Creative writing task
$$shortcuts # List all available shortcuts
$$shortcuts add myfast groq/llama-3.3-70b-versatile # Add custom shortcut
$$shortcuts remove myfast # Remove shortcut
$$shortcuts stats # Show shortcuts statistics
Creating Custom Shortcuts:
- Edit ~/.antomix/shortcuts.yml directly
- Or use antomix shortcuts add mymodel profile/model
- Example: antomix shortcuts add mychat openai/gpt-4, then use $$mychat What is the weather like?

$$colab Command - Collaborative AI Queries

# Using named sets (recommended for common tasks)
$$colab think How do I scale this architecture to 1M users?
$$colab code Implement a rate limiter with Redis
$$colab docs Write API documentation for this endpoint
# Direct model lists (comma-separated, NO spaces!)
$$colab o3,gpt41,sonnet4 Analyze this code for security issues
$$colab groq-llama,groq-deepseek,open-qwen fresh Compare these database options
$$colab open-qwen,o3pro,grok4 What is wrong with this algorithm?
# Managing collaborative sets
$$colab set review gpt41,open-geminiflash -- Please review this critically
$$colab set debug o3,gpt41 -- Debug this step by step
$$colab remove debug
$$colab # List all available sets
Syntax:
- $$colab <set-name> <query> - Use a pre-configured set
- $$colab <models> [fresh] <query> - Direct model list
- $$colab set <name> <models> [-- <suffix>] - Create new set
- $$colab remove <name> - Remove a set

$$switch-profile <name> # Switch to different model
$$profiles # List all available profiles
$$status # Show current profile and status
$$models # Show model mappings
$$map <model> <target> # Override model mapping
$$cat-profile <name> # Show profile configuration
$$proxy on # Enable proxy conversion (Claude → Target models)
$$proxy off # Disable proxy (direct passthrough mode)
$$proxy status # Check if proxy is converting or passthrough
$$ping # Test connectivity
$$help # Show all $ commands
$$export <filename> # Export current config
$$ask <question> # Get AI-powered help using current profile
$$ask Command:
- $$ask how do I create a custom profile?
- Run antomix ask from the CLI first to cache docs

Create custom profiles easily with the interactive CLI:
# Create a new profile interactively
antomix profiles create
# Create with a specific name
antomix profiles create my-provider
The interactive wizard will guide you through:
Example session:
$ antomix profiles create
🔧 Create New Profile
Press Enter to use default values
Profile filename: my-llm
Display name: My LLM Provider
Description: Custom LLM provider for specialized models
API base URL: https://api.myllm.com/v1
Environment variable for API key: MY_LLM_API_KEY
Add custom headers? No
Model Mappings (map Claude models to your provider's models):
Map claude-opus-4 to: my-llm-large
Map claude-sonnet-4 to: my-llm-medium
Map claude-3-5-haiku to: my-llm-fast
✅ Profile created successfully!
Location: ~/.antomix/profiles/my-llm.yml
Environment variable: MY_LLM_API_KEY
To use this profile:
1. Set your API key: export MY_LLM_API_KEY="your-api-key"
2. Start with: antomix claude --profile my-llm
3. Or switch to it: $$switch-profile my-llm
Custom profiles are stored in ~/.antomix/profiles/ as YAML files:
# ~/.antomix/profiles/my-custom.yml
name: "Custom Provider"
description: "Route requests to my custom API"

# Model mappings - maps Claude models to your provider's models
models:
  "claude-opus-4-20250514":
    - "your-best-model"
  "claude-sonnet-4-20250514":
    - "your-balanced-model"
  "claude-3-5-haiku-20241022":
    - "your-fast-model"

# Parameter transformations for your models
parameters:
  "*":                                        # All models
    "[max_tokens]": "max_completion_tokens"   # Rename parameter
    "max_completion_tokens": 4096             # Set default limit

# API configuration
api:
  base_url: "https://api.yourprovider.com/v1"
  api_key: "$YOUR_PROVIDER_API_KEY"
  headers:
    # Custom headers if needed
    Authorization: "Bearer $YOUR_PROVIDER_API_KEY"

  # For providers with non-standard OpenAI endpoints (like Google Gemini)
  # Set absolute_url: true to use base_url as the complete endpoint
  absolute_url: false   # Default: false (appends /v1/chat/completions)
Note on absolute_url: Most providers follow OpenAI's URL pattern where you provide a base URL and /v1/chat/completions is appended. However, some providers like Google Gemini use a different pattern. For these cases, set absolute_url: true and provide the complete endpoint URL:
# Example: Google Gemini configuration
api:
  absolute_url: true
  base_url: "https://generativelanguage.googleapis.com/v1beta/openai/chat/completions"
  api_key: "$GEMINI_API_KEY"
You can manually edit these files after creation to fine-tune settings.
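For context on the "[max_tokens]" rename in the example above: OpenAI's newer reasoning models expect max_completion_tokens instead of max_tokens, so a rename rule keeps Claude-style requests valid for those targets. Here's a conceptual TypeScript sketch of what such a transformation does to the outgoing request body — assumed semantics, not Antomix's actual transform code:

// Conceptual effect of the parameter rules shown above (illustration only)
type Params = Record<string, unknown>;

function applyParameterRules(params: Params): Params {
  const out: Params = { ...params };
  // "[max_tokens]": "max_completion_tokens"  -> rename the field
  if ("max_tokens" in out) {
    out["max_completion_tokens"] = out["max_tokens"];
    delete out["max_tokens"];
  }
  // "max_completion_tokens": 4096  -> assumed: apply the default limit when unset
  out["max_completion_tokens"] ??= 4096;
  return out;
}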
GPT-OSS-120B is a reasoning-capable model available through multiple providers. Antomix automatically adjusts the reasoning_effort parameter based on which Claude model you're using:
- reasoning_effort: "low" (fast responses)
- reasoning_effort: "medium" (balanced reasoning)
- reasoning_effort: "high" (deep reasoning)

Available GPT-OSS profiles:
- groq-gpt-oss - Via Groq (ultra-fast)
- cerebras-gpt-oss - Via Cerebras
- openrouter-gpt-oss - Via OpenRouter

Usage example:
# Start with GPT-OSS reasoning model
antomix claude --profile groq-gpt-oss
# The reasoning effort auto-adjusts based on your Claude model choice
This feature uses the new model_parameters section in profiles to apply parameters based on the source (Claude) model rather than just the destination model.
# View recent logs
antomix logs
# Follow logs in real-time
antomix logs --follow
# Filter by log level
antomix logs --level error
antomix logs --level warn
antomix logs --level info
# View logs for a specific session
antomix logs --session <session-id>
Log locations:
- ~/.antomix/logs/daily/
- ~/.antomix/logs/sessions/
- ~/.antomix/logs/antomix-error-YYYY-MM-DD.log

Don't have API keys yet? Here's where to create them:
[!TIP] Start with Groq or OpenRouter! They offer free tiers and are super fast. You can always add other providers later.
- groq - Groq API (super fast inference)
- groq-gpt-oss - Groq with GPT-OSS-120B (reasoning model)
- openai - OpenAI GPT models
- gemini - Google Gemini (direct API)
- cerebras - Cerebras AI (ultra-fast large models)
- cerebras-gpt-oss - Cerebras with GPT-OSS-120B (reasoning model)
- openrouter-gemini - Google Gemini via OpenRouter
- openrouter-qwen - Qwen via OpenRouter
- openrouter-kimi - Kimi via OpenRouter
- openrouter-gpt-oss - OpenRouter with GPT-OSS-120B (reasoning model)
- ollama-qwen - Qwen via Ollama (local)
- default - OpenAI GPT-4.1 and O3 by default

Found a bug? Want a new provider?
The following options will be available soon:
License to be determined. Please check back for updates on licensing terms.
⭐ If Antomix saves you time, please star it! ⭐
Made with ❤️ by Unclecode