Self-hosted Claude Code & Cursor proxy with Databricks, AWS Bedrock, and Azure adapters, OpenRouter, Ollama, llama.cpp, LM Studio, workspace tooling, and MCP integration.
Cursor / Cline / Continue / Claude Code / ClawdBot / Codex / KiloCode
↓
Lynkr
↓
Local LLMs | OpenRouter | Azure | Databricks | AWS Bedrock | Ollama | LM Studio | Gemini
Lynkr is a self-hosted proxy server that unlocks Claude Code CLI, Cursor IDE, and Codex CLI by letting them use any supported provider: cloud, local, or remote.
Perfect for developers who want provider flexibility, local-first privacy, or lower costs.
Option 1: NPM Package (Recommended)
# Install pino-pretty (used for readable log output) and Lynkr globally
npm install -g pino-pretty
npm install -g lynkr

# Start the proxy
lynkr start
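To confirm the global install landed on your PATH, a quick check (standard npm and shell commands, nothing Lynkr-specific):

# Verify the global install and locate the binary
npm ls -g lynkr
which lynkr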
Option 2: Git Clone
# Clone repository
git clone https://github.com/vishalveerareddy123/Lynkr.git
cd Lynkr
# Install dependencies
npm install
# Create .env from example
cp .env.example .env
# Edit .env with your provider credentials
nano .env
# Start server
npm start
Node.js Compatibility:
Option 3: Docker
docker-compose up -d
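With the Docker option, you can tail the container logs to confirm the proxy came up on port 8081 (standard docker-compose commands):

# Follow logs and confirm Lynkr is listening on port 8081
docker-compose logs -f
# Check container status
docker-compose ps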
Lynkr supports 10+ LLM providers:
| Provider | Type | Models | Cost | Privacy |
|---|---|---|---|---|
| AWS Bedrock | Cloud | 100+ (Claude, Titan, Llama, Mistral, etc.) | $$-$$$ | Cloud |
| Databricks | Cloud | Claude Sonnet 4.5, Opus 4.5 | $$$ | Cloud |
| OpenRouter | Cloud | 100+ (GPT, Claude, Llama, Gemini, etc.) | $-$$ | Cloud |
| Ollama | Local | Unlimited (free, offline) | FREE | 🔒 100% Local |
| llama.cpp | Local | GGUF models | FREE | 🔒 100% Local |
| Azure OpenAI | Cloud | GPT-4o, GPT-5, o1, o3 | $$$ | Cloud |
| Azure Anthropic | Cloud | Claude models | $$$ | Cloud |
| OpenAI | Cloud | GPT-4o, o1, o3 | $$$ | Cloud |
| LM Studio | Local | Local models with GUI | FREE | 🔒 100% Local |
| MLX OpenAI Server | Local | Apple Silicon (M1/M2/M3/M4) | FREE | 🔒 100% Local |
📖 Full Provider Configuration Guide
Configure Claude Code CLI to use Lynkr:
# Set Lynkr as backend
export ANTHROPIC_BASE_URL=http://localhost:8081
export ANTHROPIC_API_KEY=dummy
# Run Claude Code
claude "Your prompt here"
That's it! Claude Code now uses your configured provider.
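To make the backend switch stick across shells, you can persist the two variables (a minimal sketch for zsh; use ~/.bashrc for bash):

# Persist the Lynkr backend settings (adjust for your shell)
echo 'export ANTHROPIC_BASE_URL=http://localhost:8081' >> ~/.zshrc
echo 'export ANTHROPIC_API_KEY=dummy' >> ~/.zshrc
source ~/.zshrc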
Configure Cursor IDE to use Lynkr:

1. Open Cursor Settings: macOS: Cmd+, | Windows/Linux: Ctrl+,
2. Configure OpenAI API Settings:
   - API Key: sk-lynkr (any non-empty value)
   - Base URL: http://localhost:8081/v1
   - Model: claude-3.5-sonnet (or your provider's model)
3. Test it: open chat with Cmd+L / Ctrl+L, or inline edit with Cmd+K / Ctrl+K

Configure OpenAI Codex CLI to use Lynkr as its backend:
export OPENAI_BASE_URL=http://localhost:8081/v1
export OPENAI_API_KEY=dummy
codex
Edit ~/.codex/config.toml:
# Set Lynkr as the default provider
model_provider = "lynkr"
model = "gpt-4o"
# Define the Lynkr provider
[model_providers.lynkr]
name = "Lynkr Proxy"
base_url = "http://localhost:8081/v1"
wire_api = "responses"
# Optional: Trust your project directories
[projects."/path/to/your/project"]
trust_level = "trusted"
| Option | Description | Example |
|---|---|---|
| model_provider | Active provider name | "lynkr" |
| model | Model to request (mapped by Lynkr) | "gpt-4o", "claude-sonnet-4-5" |
| base_url | Lynkr endpoint | "http://localhost:8081/v1" |
| wire_api | API format (responses or chat) | "responses" |
| trust_level | Project trust (trusted, sandboxed) | "trusted" |
To connect Codex to a remote Lynkr instance:
[model_providers.lynkr-remote]
name = "Remote Lynkr"
base_url = "http://192.168.1.100:8081/v1"
wire_api = "responses"
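Before pointing Codex at a remote instance, it can save debugging time to confirm the port is reachable at all (plain netcat, no Lynkr assumptions):

# Check that the remote Lynkr port is reachable
nc -z -w 3 192.168.1.100 8081 && echo "Lynkr port reachable" || echo "Connection failed: check IP and firewall"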
| Issue | Solution |
|---|---|
| Same response for all queries | Disable semantic cache: SEMANTIC_CACHE_ENABLED=false |
| Tool calls not executing | Increase threshold: POLICY_TOOL_LOOP_THRESHOLD=15 |
| Slow first request | Keep Ollama loaded: OLLAMA_KEEP_ALIVE=24h |
| Connection refused | Ensure Lynkr is running: npm start |
Note: Codex uses the OpenAI Responses API format. Lynkr automatically converts this to your configured provider's format.
Lynkr supports ClawdBot via its OpenAI-compatible API. ClawdBot users can route requests through Lynkr to access any supported provider.
Configuration in ClawdBot:
| Setting | Value |
|---|---|
| Model/auth provider | Copilot |
| Copilot auth method | Copilot Proxy (local) |
| Copilot Proxy base URL | http://localhost:8081/v1 |
| Model IDs | Any model your Lynkr provider supports |
Available models (depending on your Lynkr provider):
gpt-5.2, gpt-5.1-codex, claude-opus-4.5, claude-sonnet-4.5, claude-haiku-4.5, gemini-3-pro, gemini-3-flash, and more.
🌐 Remote Support: ClawdBot can connect to Lynkr on any machine: use any IP or hostname in the Proxy base URL (e.g., http://192.168.1.100:8081/v1 or http://gpu-server:8081/v1).
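Since ClawdBot talks to Lynkr's OpenAI-compatible endpoint, a direct request is a quick sanity check before wiring up the bot. This sketch assumes the standard /v1/chat/completions route implied by the base URL above; the model name is one from the list and depends on your configured provider:

# Sanity-check the OpenAI-compatible endpoint ClawdBot will use
curl -s http://localhost:8081/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-lynkr" \
  -d '{"model": "claude-sonnet-4.5", "messages": [{"role": "user", "content": "ping"}]}'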
Lynkr includes an optional semantic response cache that returns cached responses for semantically similar prompts, reducing latency and costs.
Enable Semantic Cache:
# Requires an embeddings provider (Ollama recommended)
ollama pull nomic-embed-text
# Add to .env
SEMANTIC_CACHE_ENABLED=true
SEMANTIC_CACHE_THRESHOLD=0.95
OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
OLLAMA_EMBEDDINGS_ENDPOINT=http://localhost:11434/api/embeddings
| Setting | Default | Description |
|---|---|---|
| SEMANTIC_CACHE_ENABLED | false | Enable/disable semantic caching |
| SEMANTIC_CACHE_THRESHOLD | 0.95 | Similarity threshold (0.0-1.0) |
Note: Without a proper embeddings provider, the cache uses a hash-based fallback which may cause false matches. Use Ollama with nomic-embed-text for best results.
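Before enabling the cache, you can verify the embeddings endpoint responds; this hits Ollama's standard /api/embeddings route, the same endpoint configured above:

# Confirm the Ollama embeddings endpoint works (expects a JSON "embedding" array back)
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'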
┌─────────────────┐
│ AI Tools │
└────────┬────────┘
│ Anthropic/OpenAI Format
↓
┌─────────────────┐
│ Lynkr Proxy │
│ Port: 8081 │
│ │
│ • Format Conv. │
│ • Token Optim. │
│ • Provider Route│
│ • Tool Calling │
│ • Caching │
└────────┬────────┘
│
├──→ Databricks (Claude 4.5)
├──→ AWS Bedrock (100+ models)
├──→ OpenRouter (100+ models)
├──→ Ollama (local, free)
├──→ llama.cpp (local, free)
├──→ Azure OpenAI (GPT-4o, o1)
├──→ OpenAI (GPT-4o, o3)
└──→ Azure Anthropic (Claude)
100% Local (FREE)
export MODEL_PROVIDER=ollama
export OLLAMA_MODEL=qwen2.5-coder:latest
export OLLAMA_EMBEDDINGS_MODEL=nomic-embed-text
npm start
💡 Tip: Prevent slow cold starts by keeping Ollama models loaded:
launchctl setenv OLLAMA_KEEP_ALIVE "24h" (macOS) or set the OLLAMA_KEEP_ALIVE=24h env var. See troubleshooting.
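If the models in this example aren't downloaded yet, pull them before starting (standard Ollama commands; swap in whatever models you configured):

# Pull the chat and embeddings models used above
ollama pull qwen2.5-coder:latest
ollama pull nomic-embed-text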
Remote Ollama (GPU Server)
export MODEL_PROVIDER=ollama
export OLLAMA_ENDPOINT=http://192.168.1.100:11434 # Any IP or hostname
export OLLAMA_MODEL=llama3.1:70b
npm start
🌐 Note: All provider endpoints support remote addresses - not limited to localhost. Use any IP, hostname, or domain.
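To confirm a remote Ollama server is reachable and see which models it has available, you can hit Ollama's standard model-listing endpoint:

# List models available on the remote Ollama server
curl -s http://192.168.1.100:11434/api/tags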
MLX OpenAI Server (Apple Silicon)
# Terminal 1: Start MLX server
mlx-openai-server launch --model-path mlx-community/Qwen2.5-Coder-7B-Instruct-4bit --model-type lm
# Terminal 2: Start Lynkr
export MODEL_PROVIDER=openai
export OPENAI_ENDPOINT=http://localhost:8000/v1/chat/completions
export OPENAI_API_KEY=not-needed
npm start
🍎 Apple Silicon optimized - Native MLX performance on M1/M2/M3/M4 Macs. See MLX setup guide.
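Before routing Lynkr to the MLX server, you can smoke-test it directly against the same chat-completions endpoint configured above (the model field mirrors the --model-path used at launch; adjust if yours differs):

# Smoke-test the MLX OpenAI server directly
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mlx-community/Qwen2.5-Coder-7B-Instruct-4bit", "messages": [{"role": "user", "content": "hi"}]}'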
AWS Bedrock (100+ models)
export MODEL_PROVIDER=bedrock
export AWS_BEDROCK_API_KEY=your-key
export AWS_BEDROCK_MODEL_ID=anthropic.claude-3-5-sonnet-20241022-v2:0
npm start
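To find other model IDs your account can use, the AWS CLI can list what Bedrock exposes in your region (requires AWS CLI v2 with Bedrock permissions; the region is an example):

# List Bedrock foundation model IDs available in your region
aws bedrock list-foundation-models --region us-east-1 \
  --query 'modelSummaries[].modelId' --output text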
OpenRouter (simplest cloud)
export MODEL_PROVIDER=openrouter
export OPENROUTER_API_KEY=sk-or-v1-your-key
npm start
You can set up multiple models, including local models. 📖 More Examples
We welcome contributions! Please see:
Apache 2.0 - See LICENSE file for details.
Made with ❤️ by developers, for developers.