# NeuroLink

Enterprise AI development platform with universal provider support, factory pattern architecture, and access to 100+ AI models through LiteLLM integration. Production-ready, with full TypeScript support.

NeuroLink unifies 12 major AI providers behind a single interface, with intelligent fallback and built-in tool support. It is available as both a programmatic SDK and a professional CLI, integrates LiteLLM for 100+ models, and ships 6 core tools that work across all providers. Extracted from production use at Juspay.
## NEW: LiteLLM Integration - Access 100+ AI Models
NeuroLink now supports LiteLLM, providing unified access to 100+ AI models from all major providers through a single interface:
- **Universal Access**: OpenAI, Anthropic, Google, Mistral, Meta, and more
- **Unified Interface**: OpenAI-compatible API for all models
- **Cost Optimization**: Automatic routing to cost-effective models
- **Load Balancing**: Automatic failover and load distribution
- **Analytics**: Built-in usage tracking and monitoring
```bash
# Start a local LiteLLM proxy server
pip install litellm && litellm --port 4000

# Use any model through the LiteLLM provider
npx @juspay/neurolink generate "Hello" --provider litellm --model "openai/gpt-4o"
npx @juspay/neurolink generate "Hello" --provider litellm --model "anthropic/claude-3-5-sonnet"
npx @juspay/neurolink generate "Hello" --provider litellm --model "google/gemini-2.0-flash"
```
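The same models are reachable from the SDK. A minimal sketch, assuming `generate()` accepts `provider` and `model` options that mirror the CLI's `--provider` and `--model` flags (the per-call `model` field is an assumption, not shown elsewhere in this README):

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Route the request through the local LiteLLM proxy started above.
const result = await neurolink.generate({
  input: { text: "Summarize the latest AI trends" },
  provider: "litellm",
  model: "anthropic/claude-3-5-sonnet", // assumption: per-call model selection
});

console.log(result.content);
```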
**Complete LiteLLM Integration Guide** - Setup, configuration, and 100+ model access
## NEW: SageMaker Integration - Deploy Your Custom AI Models
NeuroLink now supports Amazon SageMaker, enabling you to deploy and use your own custom trained models through NeuroLink's unified interface:
- **Custom Model Hosting** - Deploy your fine-tuned models on AWS infrastructure
- **Cost Control** - Pay only for inference usage, with auto-scaling
- **Enterprise Security** - Full control over model infrastructure and data privacy
- **Performance** - Dedicated compute resources with predictable latency
- **Monitoring** - Built-in CloudWatch metrics and logging
```bash
# Configure AWS credentials and your SageMaker endpoint
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"

# Generate with your custom model
npx @juspay/neurolink generate "Analyze this data" --provider sagemaker

# Check endpoint status and benchmark performance
npx @juspay/neurolink sagemaker status
npx @juspay/neurolink sagemaker benchmark my-endpoint
```
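For programmatic use, a minimal SDK sketch under the same environment variables; it follows the provider-selection pattern shown in Basic Usage below, so treat the exact options as illustrative:

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Assumes AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and
// SAGEMAKER_DEFAULT_ENDPOINT are exported as shown above.
const neurolink = new NeuroLink();

const result = await neurolink.generate({
  input: { text: "Analyze this data" },
  provider: "sagemaker", // routes the request to your SageMaker endpoint
  timeout: "60s", // custom endpoints may have higher cold-start latency
});

console.log(result.content);
```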
**Complete SageMaker Integration Guide** - Setup, deployment, and custom model access
## Enterprise Platform Features
- **Factory Pattern Architecture** - Unified provider management through BaseProvider inheritance
- **Tools-First Design** - All providers include built-in tool support without additional configuration
- **LiteLLM Integration** - 100+ models from all major providers through a unified interface
- **Enterprise Proxy Support** - Comprehensive corporate proxy support with MCP compatibility
- **Enterprise Architecture** - Production-ready with clean abstractions
- **Configuration Management** - Flexible provider configuration with automatic backups
- **Type Safety** - Industry-standard TypeScript interfaces
- **Performance** - Fast response times with streaming support; status checks improved by 68%
- **Error Recovery** - Graceful failures with provider fallback and retry logic
- **Analytics & Evaluation** - Built-in usage tracking and AI-powered quality assessment
- **Real-time Event Monitoring** - EventEmitter integration for progress tracking and debugging
- **External MCP Integration** - Model Context Protocol with 6 built-in tools plus full external MCP server support
- **Lighthouse Integration** - Unified tool registration API supporting both object and array formats for seamless Lighthouse tool import
## Quick Start
### NEW: Interactive Setup - Transform Your Developer Experience

**Setup in 2-3 minutes** (vs. 15+ minutes of manual setup)
```bash
# Interactive setup wizard
pnpm cli setup

# Then start generating immediately
npx @juspay/neurolink generate "Hello, AI"
npx @juspay/neurolink gen "Write code"
npx @juspay/neurolink stream "Tell a story"
npx @juspay/neurolink status
```
**Why this matters:**

- **Time Savings**: 15+ minutes → 2-3 minutes (83% faster)
- **Error Reduction**: 90% fewer credential and configuration errors
- **Professional UX**: Clean terminal interface with colors and animations
- **Smart Validation**: Real-time API key format checking and endpoint testing
- **Safe Management**: Preserves existing .env content and creates backups automatically
- **Intelligent Guidance**: Context-aware recommendations based on your use case
Developer Feedback: "Setup went from the most frustrating part to the most delightful part of using NeuroLink"
### Provider-Specific Setup
```bash
npx @juspay/neurolink setup --provider google-ai    # or: pnpm cli setup-google-ai
npx @juspay/neurolink setup --provider openai       # or: pnpm cli setup-openai
npx @juspay/neurolink setup --provider anthropic    # or: pnpm cli setup-anthropic
npx @juspay/neurolink setup --provider azure        # or: pnpm cli setup-azure
npx @juspay/neurolink setup --provider bedrock      # or: pnpm cli setup-bedrock
npx @juspay/neurolink setup --provider huggingface  # or: pnpm cli setup-huggingface
pnpm cli setup-gcp

# Check setup status and list available provider setups
npx @juspay/neurolink setup --status
npx @juspay/neurolink setup --list
```
### Alternative: Manual Setup (Advanced Users)
```bash
# LiteLLM proxy (100+ models)
pip install litellm && litellm --port 4000
export LITELLM_BASE_URL="http://localhost:4000"
export LITELLM_API_KEY="sk-anything"
npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "openai/gpt-4o"
npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Any OpenAI-compatible endpoint (e.g., OpenRouter)
export OPENAI_COMPATIBLE_BASE_URL="https://api.openrouter.ai/api/v1"
export OPENAI_COMPATIBLE_API_KEY="sk-or-v1-your-api-key"
npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible

# Optionally pin a specific model
export OPENAI_COMPATIBLE_MODEL="claude-3-5-sonnet"
npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible

# Google AI Studio
export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"
npx @juspay/neurolink generate "Hello, AI" --provider google-ai

# Amazon SageMaker
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
npx @juspay/neurolink generate "Hello, AI" --provider sagemaker
```
```bash
npm install @juspay/neurolink

node -e "
const { NeuroLink } = require('@juspay/neurolink');
(async () => {
  const neurolink = new NeuroLink();

  // Add external filesystem MCP server
  await neurolink.addExternalMCPServer('filesystem', {
    command: 'npx',
    args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
    transport: 'stdio'
  });

  // External tools automatically available in generate()
  const result = await neurolink.generate({
    input: { text: 'List files in the current directory' }
  });
  console.log('External MCP integration working!');
  console.log(result.content);
})();
"
```
### Basic Usage
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

const autoResult = await neurolink.generate({
  input: { text: "Write a business email" },
  provider: "google-ai",
  timeout: "30s",
});

console.log(autoResult.content);
console.log(`Used: ${autoResult.provider}`);
```
### Conversation Memory
NeuroLink supports automatic conversation history management that maintains context across multiple turns within sessions. This enables AI to remember previous interactions and provide contextually aware responses. Session-based memory isolation ensures privacy between different conversations.
```typescript
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    maxSessions: 50,
    maxTurnsPerSession: 20,
  },
});
```
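How sessions are keyed is not specified above, so the following multi-turn sketch assumes a `sessionId` passed through the `context` field (the `context` option itself appears in the analytics example later); the real field name may differ:

```typescript
// Hypothetical session key: conversation memory is assumed to group turns
// that share the same context.sessionId value. Uses the `neurolink`
// instance configured above.
const sessionContext = { sessionId: "support-chat-42" };

await neurolink.generate({
  input: { text: "My order #1234 arrived damaged." },
  context: sessionContext,
});

// A later turn in the same session can rely on the remembered context.
const followUp = await neurolink.generate({
  input: { text: "What refund options do I have for it?" },
  context: sessionContext,
});

console.log(followUp.content);
```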
### CLI-SDK Consistency (NEW!)
Method aliases that match CLI command names:
```typescript
import { createBestAIProvider } from "@juspay/neurolink";

const provider = createBestAIProvider();

// generate() and gen() are equivalent, mirroring the CLI's `generate` and `gen`
const result1 = await provider.generate({ input: { text: "Hello" } });
const result2 = await provider.gen({ input: { text: "Hello" } });

const story = await provider.generate({
  input: { text: "Write a short story about AI" },
  maxTokens: 200,
});

const poem = await provider.generate({ input: { text: "Write a poem" } });
const joke = await provider.gen({ input: { text: "Tell me a joke" } });
```
## Enhanced Features

### CLI with Analytics & Evaluation
```bash
# Basic generation
npx @juspay/neurolink generate "Write a business email"

# Specific provider and model via LiteLLM
npx @juspay/neurolink generate "Write code" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Analytics, evaluation, and debug output
npx @juspay/neurolink generate "Write a proposal" --enable-analytics --enable-evaluation --debug

# Streaming with tool use
npx @juspay/neurolink stream "What time is it and write a file with the current date"
```
### SDK and Enhancement Features
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

const result = await neurolink.generate({
  input: { text: "Write a business proposal" },
  enableAnalytics: true,
  enableEvaluation: true,
  context: { project: "Q1-sales" },
});

console.log("Usage:", result.analytics);
console.log("Quality:", result.evaluation);
console.log("Response:", result.content);
```
### Environment Setup
```bash
# Create a .env file with the providers you use
echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
echo 'GOOGLE_AI_API_KEY="AIza-your-google-ai-key"' >> .env
echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env
echo 'GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"' >> .env
echo 'GOOGLE_VERTEX_PROJECT="your-gcp-project-id"' >> .env
echo 'GOOGLE_VERTEX_LOCATION="us-central1"' >> .env

# Verify which providers are configured
npx @juspay/neurolink status
```
### JSON Format Support (Complete)
NeuroLink provides comprehensive JSON input/output support for both CLI and SDK:
```bash
npx @juspay/neurolink generate "Summary of AI trends" --format json
npx @juspay/neurolink gen "Create a user profile" --format json --provider google-ai
```

Example JSON output:

```json
{
  "content": "AI trends include increased automation...",
  "provider": "google-ai",
  "model": "gemini-2.5-flash",
  "usage": {
    "promptTokens": 15,
    "completionTokens": 127,
    "totalTokens": 142
  },
  "responseTime": 1234
}
```
```typescript
import { createBestAIProvider } from "@juspay/neurolink";

const provider = createBestAIProvider();

const result = await provider.generate({
  input: { text: "Create a product specification" },
  schema: {
    type: "object",
    properties: {
      name: { type: "string" },
      price: { type: "number" },
      features: { type: "array", items: { type: "string" } },
    },
  },
});

const productData = JSON.parse(result.content);
console.log(productData.name, productData.price, productData.features);
```
**Complete Setup Guide** - All providers with detailed instructions
## NEW: Websearch Tool with Google Vertex AI Grounding
NeuroLink now includes a powerful websearch tool that uses Google's native search grounding technology for real-time web information:
- **Native Google Search** - Uses Google's search grounding via Vertex AI
- **Real-time Results** - Access current web information during AI conversations
- **Credential Protection** - Only activates when Google Vertex AI credentials are properly configured
### Quick Setup & Test

```bash
pnpm run build
cp .env.example .env
node test-websearch-grounding.js
```
### Complete Google Vertex AI Setup

Configure the environment variables (for example in your `.env` file):

```bash
GOOGLE_APPLICATION_CREDENTIALS="/absolute/path/to/neurolink-service-account.json"
GOOGLE_VERTEX_PROJECT="YOUR-PROJECT-ID"
GOOGLE_VERTEX_LOCATION="us-central1"
```

Then build and test the setup:

```bash
pnpm run build
node test-websearch-grounding.js
npx @juspay/neurolink generate "Weather in Tokyo now" --provider vertex
```
**Note:** The websearch tool gracefully handles missing credentials - it only activates when valid Google Vertex AI credentials are configured. Without proper credentials, other tools continue to work normally and AI responses fall back to training data.
## Core Features

- **LiteLLM Integration** - Access 100+ AI models from all major providers through a unified interface
- **Smart Model Auto-Discovery** - The OpenAI Compatible provider automatically detects available models via the `/v1/models` endpoint
- **Factory Pattern Architecture** - Unified provider management with BaseProvider inheritance
- **Tools-First Design** - All providers automatically include 7 direct tools (getCurrentTime, readFile, listDirectory, calculateMath, writeFile, searchFiles, websearchGrounding)
- **12 AI Providers** - OpenAI, Bedrock, Vertex AI, Google AI Studio, Anthropic, Azure, LiteLLM, OpenAI Compatible, Hugging Face, Ollama, Mistral AI, SageMaker
- **Cost Optimization** - Automatic selection of the cheapest suitable models and LiteLLM routing
- **Automatic Fallback** - Intelligent provider switching keeps requests succeeding when a provider is down
- **CLI + SDK** - Use from the command line or integrate programmatically with TypeScript support
- **Production Ready** - Enterprise-grade error handling and performance optimization, extracted from production use
- **Enterprise Proxy Support** - Comprehensive corporate proxy support with zero configuration
- **External MCP Integration** - Model Context Protocol with built-in tools plus full external MCP server support
- **Smart Model Resolution** - Fuzzy matching, aliases, and capability-based search across all providers
- **Local AI Support** - Run completely offline with Ollama or through a LiteLLM proxy
- **Universal Model Access** - Direct providers, 100,000+ models via Hugging Face, and 100+ models via LiteLLM
- **Automatic Context Summarization** - Stateful, long-running conversations with automatic history summarization
- **Analytics & Evaluation** - Built-in usage tracking and AI-powered quality assessment
| Component              | Status         | Description                                                       |
| ---------------------- | -------------- | ----------------------------------------------------------------- |
| Built-in Tools         | ✅ **Working** | 6 core tools fully functional across all providers                |
| SDK Custom Tools       | ✅ **Working** | Register custom tools programmatically                            |
| **External MCP Tools** | ✅ **Working** | **Full external MCP server support with dynamic tool discovery**  |
| Tool Execution         | ✅ **Working** | Real-time AI tool calling with all tool types                     |
| **Streaming Support**  | ✅ **Working** | **External MCP tools work with streaming generation**             |
| **Multi-Provider**     | ✅ **Working** | **External tools work across all AI providers**                   |
| **CLI Integration**    | ✅ **Ready**   | **Production-ready with external MCP support**                    |
```bash
# Built-in tools in action
npx @juspay/neurolink generate "What time is it?" --debug
```

```typescript
import { NeuroLink } from '@juspay/neurolink';

const neurolink = new NeuroLink();

// Add external MCP server (e.g., Bitbucket)
await neurolink.addExternalMCPServer('bitbucket', {
  command: 'npx',
  args: ['-y', '@nexus2520/bitbucket-mcp-server'],
  transport: 'stdio',
  env: {
    BITBUCKET_USERNAME: process.env.BITBUCKET_USERNAME,
    BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
    BITBUCKET_BASE_URL: 'https://bitbucket.example.com'
  }
});

// Use external MCP tools in generation
const result = await neurolink.generate({
  input: { text: 'Get pull request #123 details from the main repository' },
  disableTools: false // external MCP tools automatically available
});
```

```bash
# Discover available MCP tools
npx @juspay/neurolink mcp discover --format table
```
### SDK Custom Tool Registration (NEW!)
Register your own tools programmatically with the SDK:
```typescript
import { z } from "zod";
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Register a single tool
neurolink.registerTool("weatherLookup", {
  description: "Get current weather for a city",
  parameters: z.object({
    city: z.string().describe("City name"),
    units: z.enum(["celsius", "fahrenheit"]).optional(),
  }),
  execute: async ({ city, units = "celsius" }) => {
    return {
      city,
      temperature: 22,
      units,
      condition: "sunny",
    };
  },
});

// Registered tools are available during generation
const result = await neurolink.generate({
  input: { text: "What's the weather in London?" },
  provider: "google-ai",
});

// Register multiple tools at once (object format)
neurolink.registerTools({
  stockPrice: {
    description: "Get stock price",
    execute: async () => ({ price: 150.25 }),
  },
  calculator: {
    description: "Calculate math",
    execute: async () => ({ result: 42 }),
  },
});

// Array format (e.g., for Lighthouse tool imports)
neurolink.registerTools([
  {
    name: "lighthouseTool1",
    tool: {
      description: "Lighthouse analytics tool",
      parameters: z.object({
        merchantId: z.string(),
        dateRange: z.string().optional(),
      }),
      execute: async ({ merchantId, dateRange }) => {
        return { data: "analytics result" };
      },
    },
  },
  {
    name: "lighthouseTool2",
    tool: {
      description: "Payment processing tool",
      execute: async () => ({ status: "processed" }),
    },
  },
]);
```
## Smart Model Selection
NeuroLink features intelligent model selection and cost optimization:
### Cost Optimization Features

- **Automatic Cost Optimization**: Selects the cheapest suitable models for simple tasks
- **LiteLLM Model Routing**: Access 100+ models with automatic load balancing
- **Capability-Based Selection**: Find models with specific features (vision, function calling)
- **Intelligent Fallback**: Seamless switching when providers fail
```bash
# Prefer cheaper models for simple prompts
npx @juspay/neurolink generate "Hello" --optimize-cost

# Pick a specific model for complex work
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Or let NeuroLink choose automatically
npx @juspay/neurolink generate "Write code"
```
## Interactive Loop Mode
NeuroLink features a powerful interactive loop mode that transforms the CLI into a persistent, stateful session. This allows you to run multiple commands, set session-wide variables, and maintain conversation history without restarting.
### Start the Loop

```bash
npx @juspay/neurolink loop
```
### Example Session

```text
$ npx @juspay/neurolink loop
neurolink » set provider google-ai
✓ provider set to google-ai
neurolink » set temperature 0.8
✓ temperature set to 0.8
neurolink » generate "Tell me a fun fact about space"
The quietest place on Earth is an anechoic chamber at Microsoft's headquarters in Redmond, Washington. The background noise is so low that it's measured in negative decibels, and you can hear your own heartbeat.
neurolink » exit
```
### Conversation Memory in Loop Mode
Start the loop with conversation memory to have the AI remember the context of your previous commands.
```bash
npx @juspay/neurolink loop --enable-conversation-memory
```
## Essential Examples

### CLI Commands
```bash
# Generation (gen is an alias for generate)
npx @juspay/neurolink generate "What time is it?"
npx @juspay/neurolink gen "What time is it?"
npx @juspay/neurolink generate "What time is it?" --disable-tools
npx @juspay/neurolink generate "Explain quantum computing in detail" --timeout 1m

# Streaming
npx @juspay/neurolink stream "What time is it?"
npx @juspay/neurolink stream "Tell me a story" --disable-tools
npx @juspay/neurolink stream "Write a long story" --timeout 5m

# Provider status
npx @juspay/neurolink status --verbose

# Batch processing
echo -e "Write a haiku\nExplain gravity" > prompts.txt
npx @juspay/neurolink batch prompts.txt --output results.json
npx @juspay/neurolink batch prompts.txt --timeout 45s --output results.json
```
### SDK Integration
```typescript
// SvelteKit endpoint example
import type { RequestHandler } from "@sveltejs/kit";
import { createBestAIProvider } from "@juspay/neurolink";

export const POST: RequestHandler = async ({ request }) => {
  const { message } = await request.json();
  const provider = createBestAIProvider();

  try {
    const result = await provider.stream({
      input: { text: message },
      timeout: "2m",
    });

    // Option 1: consume chunks manually
    // for await (const chunk of result.stream) {
    //   console.log(chunk.content);
    // }

    // Legacy input format is also supported:
    // const legacyResult = await provider.stream({ prompt: message, timeout: "2m" });

    // Option 2: pipe the stream straight to the response
    return new Response(result.toReadableStream());
  } catch (error) {
    if (error instanceof Error && error.name === "TimeoutError") {
      return new Response("Request timed out", { status: 408 });
    }
    throw error;
  }
};
```
```typescript
// Next.js App Router route handler example
import { NextRequest, NextResponse } from "next/server";
import { createBestAIProvider } from "@juspay/neurolink";

export async function POST(request: NextRequest) {
  const { prompt } = await request.json();
  const provider = createBestAIProvider();

  const result = await provider.generate({
    prompt,
    timeout: process.env.AI_TIMEOUT || "30s",
  });

  return NextResponse.json({ text: result.content });
}
```
## See It In Action
No installation required! Experience NeuroLink through comprehensive visual documentation:
### Interactive Web Demo

```bash
cd neurolink-demo && node server.js
```
- Real AI Integration: All 9 providers functional with live generation
- Complete Use Cases: Business, creative, and developer scenarios
- Performance Metrics: Live provider analytics and response times
- Privacy Options: Test local AI with Ollama
### CLI Demonstrations

### Web Interface Videos

**Complete Visual Documentation** - All screenshots and videos
## Documentation

- Getting Started
- Advanced Features
- Reference
## Supported Providers & Models
| Provider          | Models                            | Setup               | Free Tier | Tool Support | Best For                      |
| ----------------- | --------------------------------- | ------------------- | --------- | ------------ | ----------------------------- |
| LiteLLM           | 100+ Models (All Providers)       | Proxy Server        | Varies    | ✅ Full      | Universal Access              |
| OpenAI Compatible | Any OpenAI-compatible endpoint    | API Key + Base URL  | Varies    | ✅ Full      | Auto-Discovery + Flexibility  |
| Google AI Studio  | Gemini 2.5 Flash/Pro              | API Key             | ✅        | ✅ Full      | Free Tier Available           |
| OpenAI            | GPT-4o, GPT-4o-mini               | API Key             | ❌        | ✅ Full      | Industry Standard             |
| Anthropic         | Claude 3.5 Sonnet                 | API Key             | ❌        | ✅ Full      | Advanced Reasoning            |
| Amazon Bedrock    | Claude 3.5/3.7 Sonnet             | AWS Credentials     | ❌        | ✅ Full\*    | Enterprise Scale              |
| Google Vertex AI  | Gemini 2.5 Flash                  | Service Account     | ❌        | ✅ Full      | Enterprise Google             |
| Azure OpenAI      | GPT-4, GPT-3.5                    | API Key + Endpoint  | ❌        | ✅ Full      | Microsoft Ecosystem           |
| Ollama            | Llama 3.2, Gemma, Mistral (Local) | None (Local)        | ✅        | ⚠️ Partial   | Complete Privacy              |
| Hugging Face      | 100,000+ open source models       | API Key             | ✅        | ⚠️ Partial   | Open Source                   |
| Mistral AI        | Tiny, Small, Medium, Large        | API Key             | ✅        | ✅ Full      | European/GDPR                 |
| Amazon SageMaker  | Custom Models (Your Endpoints)    | AWS Credentials     | ❌        | ✅ Full      | Custom Model Hosting          |
**Tool Support Legend:**

- ✅ Full: All tools working correctly
- ⚠️ Partial: Tools visible but may not execute properly
- ❌ Limited: Issues with model or configuration
- \* Bedrock requires valid AWS credentials; Ollama requires specific models (such as gemma3n) for tool support
**Auto-Selection:** NeuroLink automatically chooses the best available provider based on speed, reliability, and configuration.
## Smart Model Auto-Discovery (OpenAI Compatible)
The OpenAI Compatible provider includes intelligent model discovery that automatically detects available models from any endpoint:
```bash
# Point at any OpenAI-compatible endpoint; the model is discovered automatically
export OPENAI_COMPATIBLE_BASE_URL="https://api.your-endpoint.ai/v1"
export OPENAI_COMPATIBLE_API_KEY="your-api-key"
npx @juspay/neurolink generate "Hello!" --provider openai-compatible

# Or pin a specific model explicitly
export OPENAI_COMPATIBLE_MODEL="gemini-2.5-pro"
npx @juspay/neurolink generate "Hello!" --provider openai-compatible
```
How it works (see the sketch after this list):

- Queries the `/v1/models` endpoint to discover available models
- Automatically selects the first available model when none is specified
- Falls back gracefully if discovery fails
- Works with any OpenAI-compatible service (OpenRouter, vLLM, LiteLLM, etc.)
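Conceptually, the discovery step is a plain GET to `/v1/models` followed by a default pick. An illustrative sketch of that pattern (not NeuroLink's actual implementation), using the environment variables above:

```typescript
// Illustrative only: mirrors the discovery behavior described above.
async function discoverDefaultModel(): Promise<string | undefined> {
  const baseUrl = process.env.OPENAI_COMPATIBLE_BASE_URL; // e.g. https://api.your-endpoint.ai/v1
  const apiKey = process.env.OPENAI_COMPATIBLE_API_KEY;
  if (!baseUrl) return undefined;

  try {
    const response = await fetch(`${baseUrl}/models`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!response.ok) return undefined; // fall back gracefully

    // OpenAI-compatible services return { data: [{ id: "model-name" }, ...] }.
    const { data } = (await response.json()) as { data: Array<{ id: string }> };
    return data[0]?.id; // select the first available model when none is specified
  } catch {
    return undefined; // discovery failed; the caller falls back to its default
  }
}
```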
## Production Features

### Enterprise-Grade Reliability
- Automatic Failover: Seamless provider switching on failures (see the sketch after this list)
- Error Recovery: Comprehensive error handling and logging
- Performance Monitoring: Built-in analytics and metrics
- Type Safety: Full TypeScript support with IntelliSense
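As a rough mental model of the failover behavior (this is an illustrative pattern, not NeuroLink's internal code), a wrapper can try providers in order and fall through on failure:

```typescript
// Illustrative fallback pattern: try each provider until one succeeds.
type GenerateFn = (prompt: string) => Promise<string>;

async function generateWithFallback(
  providers: Array<{ name: string; generate: GenerateFn }>,
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider.generate(prompt);
    } catch (error) {
      lastError = error; // remember the failure and move on to the next provider
      console.warn(`Provider ${provider.name} failed, trying the next one...`);
    }
  }
  throw lastError ?? new Error("No providers available");
}
```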
### AI Platform Capabilities
- MCP Foundation: Universal AI development platform with 10+ specialized tools
- Analysis Tools: Usage optimization, performance benchmarking, parameter tuning
- Workflow Tools: Test generation, code refactoring, documentation, debugging
- Extensibility: Connect external tools and services via MCP protocol
- Dynamic Server Management: Programmatically add MCP servers at runtime
## External MCP Server Management (Available Now)
External MCP integration is now production-ready:
- ✅ 6 built-in tools working across all providers
- ✅ SDK custom tool registration
- ✅ External MCP server management (add, remove, list, test servers)
- ✅ Dynamic tool discovery (automatic tool registration from external servers)
- ✅ Multi-provider support (external tools work with all AI providers)
- ✅ Streaming integration (external tools work with real-time streaming)
- ✅ Enhanced tool tracking (proper parameter extraction and execution logging)
```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// serverId, config, toolName, and params below are placeholders for your values.

// Server lifecycle
await neurolink.addExternalMCPServer(serverId, config);
await neurolink.removeExternalMCPServer(serverId);

// Inspection
const servers = neurolink.listExternalMCPServers();
const server = neurolink.getExternalMCPServer(serverId);
const tools = neurolink.getExternalMCPTools();
const serverTools = neurolink.getExternalMCPServerTools(serverId);

// Direct tool execution
const result = await neurolink.executeExternalMCPTool(
  serverId,
  toolName,
  params,
);

// Statistics and shutdown
const stats = neurolink.getExternalMCPStatistics();
await neurolink.shutdownExternalMCPServers();
```
## Contributing
We welcome contributions! Please see our Contributing Guidelines for details.
### Development Setup
```bash
git clone https://github.com/juspay/neurolink
cd neurolink
pnpm install
npx husky install

pnpm setup:complete
pnpm test:adaptive
pnpm build:complete
```
### Enterprise Developer Experience
NeuroLink features enterprise-grade build rule enforcement with comprehensive quality validation:
```bash
# Validation and quality
pnpm run validate:all
pnpm run validate:security
pnpm run validate:env
pnpm run quality:metrics
pnpm run check:all
pnpm run format
pnpm run lint

# Environment management
pnpm setup:complete
pnpm env:setup
pnpm env:backup

# Testing
pnpm test:adaptive
pnpm test:providers

# Documentation and content
pnpm docs:sync
pnpm content:generate

# Build and health
pnpm build:complete
pnpm dev:health
```
Build Rule Enforcement: All commits automatically validated with pre-commit hooks. See Contributing Guidelines for complete requirements.
**Complete Automation Guide** - All 72+ commands and automation features
## License

MIT © Juspay Technologies

## Related Projects

Built with ❤️ by Juspay Technologies