
# @juspay/neurolink
Enterprise AI Development Platform with universal provider support, factory pattern architecture, and access to 100+ AI models through LiteLLM integration. Production-ready with TypeScript support.
NeuroLink is an Enterprise AI Development Platform that unifies 12 major AI providers with intelligent fallback and built-in tool support. Available as both a programmatic SDK and professional CLI tool. Features LiteLLM integration for 100+ models, plus 6 core tools working across all providers. Extracted from production use at Juspay.
NeuroLink now supports LiteLLM, providing unified access to 100+ AI models from all major providers through a single interface:
```bash
# Quick start with LiteLLM
pip install litellm && litellm --port 4000

# Use any of 100+ models through one interface
npx @juspay/neurolink generate "Hello" --provider litellm --model "openai/gpt-4o"
npx @juspay/neurolink generate "Hello" --provider litellm --model "anthropic/claude-3-5-sonnet"
npx @juspay/neurolink generate "Hello" --provider litellm --model "google/gemini-2.0-flash"
```
See the Complete LiteLLM Integration Guide for setup, configuration, and 100+ model access.
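The same LiteLLM routing is available from the SDK. A minimal sketch, assuming the LiteLLM proxy from the commands above is running on port 4000 and `LITELLM_BASE_URL`/`LITELLM_API_KEY` are exported as shown later in this README:

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Route a request through the local LiteLLM proxy instead of a direct provider.
// Which model answers depends on how the proxy is configured.
const neurolink = new NeuroLink();
const result = await neurolink.generate({
  input: { text: "Summarize the benefits of a unified AI interface" },
  provider: "litellm",
  timeout: "30s",
});
console.log(result.content);
console.log(`Served by: ${result.provider}`);
```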
NeuroLink now supports Amazon SageMaker, enabling you to deploy and use your own custom trained models through NeuroLink's unified interface:
```bash
# Quick start with SageMaker
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"

# Use your custom deployed models
npx @juspay/neurolink generate "Analyze this data" --provider sagemaker
npx @juspay/neurolink sagemaker status                # Check endpoint health
npx @juspay/neurolink sagemaker benchmark my-endpoint # Performance testing
```
See the Complete SageMaker Integration Guide for setup, deployment, and custom model access.
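The SDK can target the same endpoint. A minimal sketch, assuming the AWS credentials and `SAGEMAKER_DEFAULT_ENDPOINT` from the commands above are set in the environment:

```typescript
import { NeuroLink } from "@juspay/neurolink";

// Generation is routed to the custom model behind SAGEMAKER_DEFAULT_ENDPOINT.
const neurolink = new NeuroLink();
const result = await neurolink.generate({
  input: { text: "Analyze this data" },
  provider: "sagemaker",
  timeout: "1m",
});
console.log(result.content);
```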
BREAKTHROUGH: Setup in 2-3 minutes (vs 15+ minutes of manual setup)

```bash
# MAIN SETUP WIZARD - guided interactive experience
pnpm cli setup

# FEATURES:
#   - ASCII art welcome screen
#   - Interactive provider comparison table
#   - Real-time credential validation with format checking
#   - Atomic .env file management (preserves existing content)
#   - Smart recommendations (Google AI free tier, OpenAI for pro users)
#   - Cross-platform compatibility with graceful error recovery
#   - 90% reduction in setup errors vs manual configuration

# INSTANT PRODUCTIVITY - use any AI provider immediately:
npx @juspay/neurolink generate "Hello, AI"  # Auto-selects best provider
npx @juspay/neurolink gen "Write code"      # Shortest form
npx @juspay/neurolink stream "Tell a story" # Real-time streaming
npx @juspay/neurolink status                # Check all providers
```
Why this changes everything:

> Developer feedback: "Setup went from the most frustrating part to the most delightful part of using NeuroLink."
```bash
# Setup individual providers with guided wizards
npx @juspay/neurolink setup --provider google-ai    # Free tier, perfect for beginners
# or: pnpm cli setup-google-ai
npx @juspay/neurolink setup --provider openai       # Industry standard, professional use
# or: pnpm cli setup-openai
npx @juspay/neurolink setup --provider anthropic    # Advanced reasoning, safety-focused
# or: pnpm cli setup-anthropic
npx @juspay/neurolink setup --provider azure        # Enterprise features, compliance
# or: pnpm cli setup-azure
npx @juspay/neurolink setup --provider bedrock      # AWS ecosystem integration
# or: pnpm cli setup-bedrock
npx @juspay/neurolink setup --provider huggingface  # Open source models, 100k+ options
# or: pnpm cli setup-huggingface
pnpm cli setup-gcp                                  # For using Vertex

# Check setup status anytime
npx @juspay/neurolink setup --status
npx @juspay/neurolink setup --list                  # View all available providers
```
```bash
# Option 1: LiteLLM - Access 100+ models through one interface
pip install litellm && litellm --port 4000
export LITELLM_BASE_URL="http://localhost:4000"
export LITELLM_API_KEY="sk-anything"

# Use any of 100+ models
npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "openai/gpt-4o"
npx @juspay/neurolink generate "Hello, AI" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Option 2: OpenAI Compatible - Use any OpenAI-compatible endpoint with auto-discovery
export OPENAI_COMPATIBLE_BASE_URL="https://api.openrouter.ai/api/v1"
export OPENAI_COMPATIBLE_API_KEY="sk-or-v1-your-api-key"

# Auto-discovers available models via the /v1/models endpoint
npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible

# Or specify a model explicitly
export OPENAI_COMPATIBLE_MODEL="claude-3-5-sonnet"
npx @juspay/neurolink generate "Hello, AI" --provider openai-compatible

# Option 3: Direct provider - Quick setup with Google AI Studio (free tier)
export GOOGLE_AI_API_KEY="AIza-your-google-ai-api-key"
npx @juspay/neurolink generate "Hello, AI" --provider google-ai

# Option 4: Amazon SageMaker - Use your custom deployed models
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"
export SAGEMAKER_DEFAULT_ENDPOINT="your-endpoint-name"
npx @juspay/neurolink generate "Hello, AI" --provider sagemaker
```
```bash
# SDK installation for use in your TypeScript projects
npm install @juspay/neurolink
```

```bash
# NEW: External MCP server integration quick test
node -e "
const { NeuroLink } = require('@juspay/neurolink');
(async () => {
  const neurolink = new NeuroLink();

  // Add external filesystem MCP server
  await neurolink.addExternalMCPServer('filesystem', {
    command: 'npx',
    args: ['-y', '@modelcontextprotocol/server-filesystem', '/tmp'],
    transport: 'stdio'
  });

  // External tools automatically available in generate()
  const result = await neurolink.generate({
    input: { text: 'List files in the current directory' }
  });
  console.log('External MCP integration working!');
  console.log(result.content);
})();
"
```
```typescript
import { NeuroLink } from "@juspay/neurolink";

// Auto-select best available provider
const neurolink = new NeuroLink();
const autoResult = await neurolink.generate({
  input: { text: "Write a business email" },
  provider: "google-ai", // or let it auto-select
  timeout: "30s",
});
console.log(autoResult.content);
console.log(`Used: ${autoResult.provider}`);
```
NeuroLink supports automatic conversation history management that maintains context across multiple turns within sessions. This enables AI to remember previous interactions and provide contextually aware responses. Session-based memory isolation ensures privacy between different conversations.
```typescript
// Enable conversation memory with configurable limits
const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    maxSessions: 50,        // Keep last 50 sessions
    maxTurnsPerSession: 20, // Keep last 20 turns per session
  },
});
```
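As a rough sketch of what memory enables, reusing the `neurolink` instance from the snippet above (this assumes, for illustration, that successive `generate()` calls on one instance share a session; consult the conversation memory docs for the exact session semantics):

```typescript
// First turn establishes context.
await neurolink.generate({
  input: { text: "My name is Priya and I work on payments infrastructure." },
});

// A later turn can rely on the remembered context instead of restating it.
const followUp = await neurolink.generate({
  input: { text: "Draft a one-line bio for me." },
});
console.log(followUp.content); // Should reflect the earlier turn, per the memory docs.
```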
Method aliases that match CLI command names:
```typescript
import { createBestAIProvider } from "@juspay/neurolink";

const provider = createBestAIProvider();

// The following methods are equivalent - use whichever style you prefer:
const result1 = await provider.generate({ input: { text: "Hello" } }); // Original
const result2 = await provider.gen({ input: { text: "Hello" } });      // Matches CLI 'gen'

// Detailed method name
const story = await provider.generate({
  input: { text: "Write a short story about AI" },
  maxTokens: 200,
});

// CLI-style method names
const poem = await provider.generate({ input: { text: "Write a poem" } });
const joke = await provider.gen({ input: { text: "Tell me a joke" } });
```
```bash
# Basic AI generation with auto-provider selection
npx @juspay/neurolink generate "Write a business email"

# LiteLLM with specific model
npx @juspay/neurolink generate "Write code" --provider litellm --model "anthropic/claude-3-5-sonnet"

# With analytics and evaluation
npx @juspay/neurolink generate "Write a proposal" --enable-analytics --enable-evaluation --debug

# Streaming with tools (default behavior)
npx @juspay/neurolink stream "What time is it and write a file with the current date"
```
```typescript
import { NeuroLink } from "@juspay/neurolink";

// Enhanced generation with analytics
const neurolink = new NeuroLink();
const result = await neurolink.generate({
  input: { text: "Write a business proposal" },
  enableAnalytics: true,  // Get usage & cost data
  enableEvaluation: true, // Get AI quality scores
  context: { project: "Q1-sales" },
});
console.log("Usage:", result.analytics);
console.log("Quality:", result.evaluation);
console.log("Response:", result.content);
```
```bash
# Create .env file (automatically loaded by CLI)
echo 'OPENAI_API_KEY="sk-your-openai-key"' > .env
echo 'GOOGLE_AI_API_KEY="AIza-your-google-ai-key"' >> .env
echo 'AWS_ACCESS_KEY_ID="your-aws-access-key"' >> .env

# NEW: Google Vertex AI for the websearch tool
echo 'GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"' >> .env
echo 'GOOGLE_VERTEX_PROJECT="your-gcp-project-id"' >> .env
echo 'GOOGLE_VERTEX_LOCATION="us-central1"' >> .env

# Test configuration
npx @juspay/neurolink status

# SDK env provider check - advanced provider testing with fallback detection
pnpm run test:providers

# Example output:
# ✅ Google AI: Working (197 tokens)
# ⚠️ OpenAI: Failed (fallback to google-ai)
# ⚠️ AWS Bedrock: Failed (fallback to google-ai)
```
NeuroLink provides comprehensive JSON input/output support for both CLI and SDK:
```bash
# CLI JSON output - structured data for scripts
npx @juspay/neurolink generate "Summary of AI trends" --format json
npx @juspay/neurolink gen "Create a user profile" --format json --provider google-ai
```

Example JSON output:

```json
{
  "content": "AI trends include increased automation...",
  "provider": "google-ai",
  "model": "gemini-2.5-flash",
  "usage": {
    "promptTokens": 15,
    "completionTokens": 127,
    "totalTokens": 142
  },
  "responseTime": 1234
}
```
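Because `--format json` prints a single JSON object, the CLI composes cleanly with scripts. A minimal sketch of consuming it from Node (field names follow the example output above; the prompt is illustrative):

```typescript
import { execFile } from "node:child_process";
import { promisify } from "node:util";

const run = promisify(execFile);

// Invoke the CLI with --format json and parse its stdout.
const { stdout } = await run("npx", [
  "@juspay/neurolink",
  "generate",
  "Summary of AI trends",
  "--format",
  "json",
]);

const response = JSON.parse(stdout);
console.log(response.provider, response.usage.totalTokens);
console.log(response.content);
```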
```typescript
// SDK JSON input/output - full TypeScript support
import { createBestAIProvider } from "@juspay/neurolink";

const provider = createBestAIProvider();

// Structured input
const result = await provider.generate({
  input: { text: "Create a product specification" },
  schema: {
    type: "object",
    properties: {
      name: { type: "string" },
      price: { type: "number" },
      features: { type: "array", items: { type: "string" } },
    },
  },
});

// Access the structured response
const productData = JSON.parse(result.content);
console.log(productData.name, productData.price, productData.features);
```
See the Complete Setup Guide for detailed instructions covering all providers.
NeuroLink now includes a powerful websearch tool that uses Google's native search grounding technology for real-time web information:
```bash
# 1. Build the project first
pnpm run build

# 2. Set up environment variables (see detailed setup below)
cp .env.example .env
# Edit .env with your Google Vertex AI credentials

# 3. Test the websearch tool directly
node test-websearch-grounding.js
```

```bash
# Add to your .env file
GOOGLE_APPLICATION_CREDENTIALS="/absolute/path/to/neurolink-service-account.json"
GOOGLE_VERTEX_PROJECT="YOUR-PROJECT-ID"
GOOGLE_VERTEX_LOCATION="us-central1"

# Build the project first
pnpm run build

# Run the dedicated test script
node test-websearch-grounding.js
```
### Using the Websearch Tool
#### CLI Usage (Works with All Providers)
```bash
# With specific providers - websearch works across all providers
npx @juspay/neurolink generate "Weather in Tokyo now" --provider vertex
```
**Note:** The websearch tool gracefully handles missing credentials - it only activates when valid Google Vertex AI credentials are configured. Without proper credentials, other tools continue to work normally and AI responses fall back to training data.
## Key Features
- **LiteLLM Integration** - access 100+ AI models from all major providers through a unified interface
- **Smart Model Auto-Discovery** - the OpenAI Compatible provider automatically detects available models via the `/v1/models` endpoint
- **Factory Pattern Architecture** - unified provider management with BaseProvider inheritance
- **Tools-First Design** - all providers automatically include 7 direct tools (getCurrentTime, readFile, listDirectory, calculateMath, writeFile, searchFiles, websearchGrounding)
- **12 AI Providers** - OpenAI, Bedrock, Vertex AI, Google AI Studio, Anthropic, Azure, **LiteLLM**, **OpenAI Compatible**, Hugging Face, Ollama, Mistral AI, **SageMaker**
- **Cost Optimization** - automatic selection of the cheapest models and LiteLLM routing
- **Automatic Fallback** - never fail when providers are down; intelligent provider switching
- **CLI + SDK** - use from the command line or integrate programmatically with TypeScript support
- **Production Ready** - enterprise-grade error handling and performance optimization, extracted from production
- **Enterprise Proxy Support** - comprehensive corporate proxy support with zero configuration
- **External MCP Integration** - Model Context Protocol with built-in tools plus full external MCP server support
- **Smart Model Resolution** - fuzzy matching, aliases, and capability-based search across all providers
- **Local AI Support** - run completely offline with Ollama or through a LiteLLM proxy
- **Universal Model Access** - direct providers, 100,000+ models via Hugging Face, and 100+ models via LiteLLM
- **Automatic Context Summarization** - stateful, long-running conversations with automatic history summarization
- **Analytics & Evaluation** - built-in usage tracking and AI-powered quality assessment
## External MCP Integration Status: ✅ PRODUCTION READY
| Component              | Status     | Description                                                       |
| ---------------------- | ---------- | ----------------------------------------------------------------- |
| Built-in Tools         | ✅ Working | 6 core tools fully functional across all providers                |
| SDK Custom Tools       | ✅ Working | Register custom tools programmatically                            |
| **External MCP Tools** | ✅ Working | **Full external MCP server support with dynamic tool discovery**  |
| Tool Execution         | ✅ Working | Real-time AI tool calling with all tool types                     |
| **Streaming Support**  | ✅ Working | **External MCP tools work with streaming generation**             |
| **Multi-Provider**     | ✅ Working | **External tools work across all AI providers**                   |
| **CLI Integration**    | ✅ READY   | **Production-ready with external MCP support**                    |
### ✅ External MCP Integration Demo

```bash
# Test built-in tools (works immediately)
npx @juspay/neurolink generate "What time is it?" --debug
```

```typescript
// NEW: External MCP server integration (SDK)
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// Add an external MCP server (e.g., Bitbucket)
await neurolink.addExternalMCPServer("bitbucket", {
  command: "npx",
  args: ["-y", "@nexus2520/bitbucket-mcp-server"],
  transport: "stdio",
  env: {
    BITBUCKET_USERNAME: process.env.BITBUCKET_USERNAME,
    BITBUCKET_TOKEN: process.env.BITBUCKET_TOKEN,
    BITBUCKET_BASE_URL: "https://bitbucket.example.com",
  },
});

// Use external MCP tools in generation
const result = await neurolink.generate({
  input: { text: "Get pull request #123 details from the main repository" },
  disableTools: false, // External MCP tools automatically available
});
```

```bash
# Discover available MCP servers
npx @juspay/neurolink mcp discover --format table
```
Register your own tools programmatically with the SDK:
```typescript
import { NeuroLink } from "@juspay/neurolink";
import { z } from "zod";

const neurolink = new NeuroLink();

// Register a simple tool
neurolink.registerTool("weatherLookup", {
  description: "Get current weather for a city",
  parameters: z.object({
    city: z.string().describe("City name"),
    units: z.enum(["celsius", "fahrenheit"]).optional(),
  }),
  execute: async ({ city, units = "celsius" }) => {
    // Your implementation here
    return {
      city,
      temperature: 22,
      units,
      condition: "sunny",
    };
  },
});

// Use it in generation
const result = await neurolink.generate({
  input: { text: "What's the weather in London?" },
  provider: "google-ai",
});

// Register multiple tools - object format (existing)
neurolink.registerTools({
  stockPrice: {
    description: "Get stock price",
    execute: async () => ({ price: 150.25 }),
  },
  calculator: {
    description: "Calculate math",
    execute: async () => ({ result: 42 }),
  },
});

// Register multiple tools - array format (Lighthouse compatible)
neurolink.registerTools([
  {
    name: "lighthouseTool1",
    tool: {
      description: "Lighthouse analytics tool",
      parameters: z.object({
        merchantId: z.string(),
        dateRange: z.string().optional(),
      }),
      execute: async ({ merchantId, dateRange }) => {
        // Lighthouse tool implementation with Zod schema
        return { data: "analytics result" };
      },
    },
  },
  {
    name: "lighthouseTool2",
    tool: {
      description: "Payment processing tool",
      execute: async () => ({ status: "processed" }),
    },
  },
]);
```
NeuroLink features intelligent model selection and cost optimization:
```bash
# Cost optimization - automatically use the cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM-specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select the best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses the optimal provider
```
NeuroLink features a powerful interactive loop mode that transforms the CLI into a persistent, stateful session. This allows you to run multiple commands, set session-wide variables, and maintain conversation history without restarting.
```bash
npx @juspay/neurolink loop
```

```bash
# Start the interactive session
$ npx @juspay/neurolink loop
neurolink » set provider google-ai
✓ provider set to google-ai
neurolink » set temperature 0.8
✓ temperature set to 0.8
neurolink » generate "Tell me a fun fact about space"
The quietest place on Earth is an anechoic chamber at Microsoft's headquarters in Redmond, Washington. The background noise is so low that it's measured in negative decibels, and you can hear your own heartbeat.

# Exit the session
neurolink » exit
```
Start the loop with conversation memory to have the AI remember the context of your previous commands.
```bash
npx @juspay/neurolink loop --enable-conversation-memory
```
```bash
# Text generation with automatic MCP tool detection (default)
npx @juspay/neurolink generate "What time is it?"

# Alternative short form
npx @juspay/neurolink gen "What time is it?"

# Disable tools for training-data-only responses
npx @juspay/neurolink generate "What time is it?" --disable-tools

# With custom timeout for complex prompts
npx @juspay/neurolink generate "Explain quantum computing in detail" --timeout 1m

# Real-time streaming with agent support (default)
npx @juspay/neurolink stream "What time is it?"

# Streaming without tools (traditional mode)
npx @juspay/neurolink stream "Tell me a story" --disable-tools

# Streaming with extended timeout
npx @juspay/neurolink stream "Write a long story" --timeout 5m

# Provider diagnostics
npx @juspay/neurolink status --verbose

# Batch processing
echo -e "Write a haiku\nExplain gravity" > prompts.txt
npx @juspay/neurolink batch prompts.txt --output results.json

# Batch with custom timeout per request
npx @juspay/neurolink batch prompts.txt --timeout 45s --output results.json
```
```typescript
// SvelteKit API route with timeout handling
import type { RequestHandler } from "./$types";
import { createBestAIProvider } from "@juspay/neurolink";

export const POST: RequestHandler = async ({ request }) => {
  const { message } = await request.json();
  const provider = createBestAIProvider();

  try {
    // NEW: Primary streaming method (recommended)
    const result = await provider.stream({
      input: { text: message },
      timeout: "2m", // 2 minutes for streaming
    });

    // Process stream
    for await (const chunk of result.stream) {
      // Handle streaming content
      console.log(chunk.content);
    }

    // LEGACY: Backward-compatible prompt format (still works)
    const legacyResult = await provider.stream({
      prompt: message,
      timeout: "2m", // 2 minutes for streaming
    });

    return new Response(result.toReadableStream());
  } catch (error) {
    if (error.name === "TimeoutError") {
      return new Response("Request timed out", { status: 408 });
    }
    throw error;
  }
};
```
```typescript
// Next.js API route with timeout
import { NextRequest, NextResponse } from "next/server";
import { createBestAIProvider } from "@juspay/neurolink";

export async function POST(request: NextRequest) {
  const { prompt } = await request.json();
  const provider = createBestAIProvider();

  const result = await provider.generate({
    prompt,
    timeout: process.env.AI_TIMEOUT || "30s", // Configurable timeout
  });

  return NextResponse.json({ text: result.content });
}
```
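On the client side, the route above is a plain HTTP endpoint. A minimal sketch of calling it with `fetch` (the `/api/generate` path is a hypothetical mount point for illustration):

```typescript
// Hypothetical client helper for the Next.js route above.
async function askAI(prompt: string): Promise<string> {
  const res = await fetch("/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) {
    // Surfaces a server-side 408 from a TimeoutError, among other failures.
    throw new Error(`AI request failed: ${res.status}`);
  }
  const { text } = await res.json();
  return text;
}

console.log(await askAI("Write a haiku about latency"));
```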
No installation required! Experience NeuroLink through comprehensive visual documentation:
```bash
cd neurolink-demo && node server.js
# Visit http://localhost:9876 for the live demo
```

See the Complete Visual Documentation for all screenshots and videos.
| Provider              | Models                            | Auth Method        | Free Tier | Tool Support | Key Benefit                  |
| --------------------- | --------------------------------- | ------------------ | --------- | ------------ | ---------------------------- |
| **LiteLLM**           | 100+ models (all providers)       | Proxy server       | Varies    | ✅ Full      | Universal Access             |
| **OpenAI Compatible** | Any OpenAI-compatible endpoint    | API key + base URL | Varies    | ✅ Full      | Auto-Discovery + Flexibility |
| Google AI Studio      | Gemini 2.5 Flash/Pro              | API key            | ✅        | ✅ Full      | Free Tier Available          |
| OpenAI                | GPT-4o, GPT-4o-mini               | API key            | ❌        | ✅ Full      | Industry Standard            |
| Anthropic             | Claude 3.5 Sonnet                 | API key            | ❌        | ✅ Full      | Advanced Reasoning           |
| Amazon Bedrock        | Claude 3.5/3.7 Sonnet             | AWS credentials    | ❌        | ✅ Full\*    | Enterprise Scale             |
| Google Vertex AI      | Gemini 2.5 Flash                  | Service account    | ❌        | ✅ Full      | Enterprise Google            |
| Azure OpenAI          | GPT-4, GPT-3.5                    | API key + endpoint | ❌        | ✅ Full      | Microsoft Ecosystem          |
| Ollama                | Llama 3.2, Gemma, Mistral (local) | None (local)       | ✅        | ⚠️ Partial   | Complete Privacy             |
| Hugging Face          | 100,000+ open source models       | API key            | ✅        | ⚠️ Partial   | Open Source                  |
| Mistral AI            | Tiny, Small, Medium, Large        | API key            | ✅        | ✅ Full      | European/GDPR                |
| Amazon SageMaker      | Custom models (your endpoints)    | AWS credentials    | ❌        | ✅ Full      | Custom Model Hosting         |
Tool Support Legend: ✅ Full = complete tool calling support; ⚠️ Partial = limited tool calling support; \* = provider-specific caveats apply.
Auto-Selection: NeuroLink automatically chooses the best available provider based on speed, reliability, and configuration.
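In the SDK, this auto-selection is exposed through `createBestAIProvider()`, used throughout this README. A minimal sketch:

```typescript
import { createBestAIProvider } from "@juspay/neurolink";

// Picks the best configured provider (speed, reliability, configuration).
const provider = createBestAIProvider();
const result = await provider.generate({
  input: { text: "Which provider answered this?" },
});
console.log(result.content);
console.log(`Auto-selected provider: ${result.provider}`);
```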
The OpenAI Compatible provider includes intelligent model discovery that automatically detects available models from any endpoint:
```bash
# Setup - no model specified
export OPENAI_COMPATIBLE_BASE_URL="https://api.your-endpoint.ai/v1"
export OPENAI_COMPATIBLE_API_KEY="your-api-key"

# Auto-discovers and uses the first available model
npx @juspay/neurolink generate "Hello!" --provider openai-compatible
# → Auto-discovered model: claude-sonnet-4 from 3 available models

# Or specify explicitly to skip discovery
export OPENAI_COMPATIBLE_MODEL="gemini-2.5-pro"
npx @juspay/neurolink generate "Hello!" --provider openai-compatible
```
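Model discovery is an ordinary OpenAI-style API call. A minimal sketch of querying `/v1/models` directly (this assumes the standard OpenAI-compatible response shape and is an illustration, not NeuroLink's internal code):

```typescript
// List the models an OpenAI-compatible endpoint advertises.
// The base URL already ends in /v1, so appending /models hits /v1/models.
const baseUrl = process.env.OPENAI_COMPATIBLE_BASE_URL!;
const apiKey = process.env.OPENAI_COMPATIBLE_API_KEY!;

const res = await fetch(`${baseUrl}/models`, {
  headers: { Authorization: `Bearer ${apiKey}` },
});
const body = await res.json();

// OpenAI-compatible servers return { data: [{ id: "model-name", ... }, ...] }.
const modelIds = body.data.map((m: { id: string }) => m.id);
console.log(`Discovered ${modelIds.length} models:`, modelIds);
```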
How it works: NeuroLink performs this kind of query against the configured endpoint's `/v1/models` route and selects from the models it advertises.

External MCP integration is now production-ready:
```typescript
// Complete external MCP server API
const neurolink = new NeuroLink();

// Server management
await neurolink.addExternalMCPServer(serverId, config);
await neurolink.removeExternalMCPServer(serverId);
const servers = neurolink.listExternalMCPServers();
const server = neurolink.getExternalMCPServer(serverId);

// Tool management
const tools = neurolink.getExternalMCPTools();
const serverTools = neurolink.getExternalMCPServerTools(serverId);

// Direct tool execution
const result = await neurolink.executeExternalMCPTool(
  serverId,
  toolName,
  params,
);

// Statistics and monitoring
const stats = neurolink.getExternalMCPStatistics();
await neurolink.shutdownExternalMCPServers();
```
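Putting the API together, a typical lifecycle might look like the following sketch (the filesystem server config repeats the quick test earlier in this README; the tools actually contributed depend on what the server exposes):

```typescript
import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink();

// 1. Attach a server and inspect which tools it contributed.
await neurolink.addExternalMCPServer("filesystem", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
  transport: "stdio",
});
console.log(neurolink.getExternalMCPServerTools("filesystem"));

// 2. Let generation use the tools, then check usage statistics.
await neurolink.generate({ input: { text: "List files in /tmp" } });
console.log(neurolink.getExternalMCPStatistics());

// 3. Clean shutdown of all external servers.
await neurolink.shutdownExternalMCPServers();
```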
We welcome contributions! Please see our Contributing Guidelines for details.
```bash
git clone https://github.com/juspay/neurolink
cd neurolink
pnpm install
npx husky install    # Setup git hooks for build rule enforcement
pnpm setup:complete  # One-command setup with all automation
pnpm test:adaptive   # Intelligent testing
pnpm build:complete  # Full build pipeline
```
NeuroLink features enterprise-grade build rule enforcement with comprehensive quality validation:
```bash
# Quality & validation (required for all commits)
pnpm run validate:all       # Run all validation checks
pnpm run validate:security  # Security scanning with gitleaks
pnpm run validate:env       # Environment consistency checks
pnpm run quality:metrics    # Generate quality score report

# Development workflow
pnpm run check:all          # Pre-commit validation simulation
pnpm run format             # Auto-fix code formatting
pnpm run lint               # ESLint validation with zero-error tolerance

# Environment & setup (2-minute initialization)
pnpm setup:complete         # Complete project setup
pnpm env:setup              # Safe .env configuration
pnpm env:backup             # Environment backup

# Testing (60-80% faster)
pnpm test:adaptive          # Intelligent test selection
pnpm test:providers        # AI provider validation

# Documentation & content
pnpm docs:sync              # Cross-file documentation sync
pnpm content:generate       # Automated content creation

# Build & deployment
pnpm build:complete         # 7-phase enterprise pipeline
pnpm dev:health             # System health monitoring
```
Build Rule Enforcement: all commits are automatically validated with pre-commit hooks. See the Contributing Guidelines for complete requirements.
See the Complete Automation Guide for all 72+ commands and automation features.
MIT © Juspay Technologies

Built with ❤️ by Juspay Technologies