@juspay/neurolink

Universal AI Development Platform with working MCP integration, multi-provider support, and professional CLI. Built-in tools operational, 58+ external MCP servers discoverable. Connect to filesystem, GitHub, database operations, and more.

Latest version: 7.53.1 (npm) · Maintainers: 7

Enterprise AI development platform with unified provider access, production-ready tooling, and an opinionated factory architecture. NeuroLink ships as both a TypeScript SDK and a professional CLI so teams can build, operate, and iterate on AI features quickly.

NeuroLink is the universal AI integration platform that unifies 12 major AI providers and 100+ models under one consistent API.

Extracted from production systems at Juspay and battle-tested at enterprise scale, NeuroLink provides a production-ready solution for integrating AI into any application. Whether you're building with OpenAI, Anthropic, Google, AWS Bedrock, Azure, or any of our 12 supported providers, NeuroLink gives you a single, consistent interface that works everywhere.

Why NeuroLink? Switch providers with a single parameter change, leverage 64+ built-in tools and MCP servers, deploy with confidence using enterprise features like Redis memory and multi-provider failover, and optimize costs automatically with intelligent routing. Use it via our professional CLI or TypeScript SDK, whichever fits your workflow.

Where we're headed: We're building for the future of AI: edge-first execution and continuous streaming architectures that make AI practically free and universally available. Read our vision →

Get Started in <5 Minutes →

What's New (Q4 2025)

  • CSV File Support – Attach CSV files to prompts for AI-powered data analysis with auto-detection. → CSV Guide
  • PDF File Support – Process PDF documents with native visual analysis for Vertex AI, Anthropic, Bedrock, and AI Studio. → PDF Guide
  • LiteLLM Integration – Access 100+ AI models from all major providers through a unified interface. → Setup Guide
  • SageMaker Integration – Deploy and use custom-trained models on AWS infrastructure. → Setup Guide
  • Human-in-the-loop workflows – Pause generation for user approval/input before tool execution. → HITL Guide
  • Guardrails middleware – Block PII, profanity, and unsafe content with built-in filtering. → Guardrails Guide
  • Context summarization – Automatic conversation compression for long-running sessions. → Summarization Guide
  • Redis conversation export – Export full session history as JSON for analytics and debugging. → History Guide
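Of the items above, guardrails filtering is the most mechanical: scan text against blocklist patterns and reject on a match. The sketch below is a hypothetical illustration only, not the shipped middleware (whose actual API and pattern set are in the Guardrails Guide); the regexes are deliberately simple example patterns.

```typescript
// Hypothetical guardrails-style PII filter (illustration, not the SDK's
// middleware). Each pattern flags one common PII shape.
const PII_PATTERNS: RegExp[] = [
  /\b\d{3}-\d{2}-\d{4}\b/, // US SSN-like number
  /\b[\w.+-]+@[\w-]+\.[\w.]+\b/, // email address
  /\b(?:\d[ -]?){13,16}\b/, // card-number-like digit run
];

// Returns true when the text contains no recognizable PII.
function passesGuardrails(text: string): boolean {
  return !PII_PATTERNS.some((pattern) => pattern.test(text));
}
```

A production filter would pair patterns like these with redaction and model-based classifiers rather than blocking outright.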

Q3 highlights (multimodal chat, auto-evaluation, loop sessions, orchestration) are now in Platform Capabilities below.

Get Started in Two Steps

# 1. Run the interactive setup wizard (select providers, validate keys)
pnpm dlx @juspay/neurolink setup

# 2. Start generating with automatic provider selection
npx @juspay/neurolink generate "Write a launch plan for multimodal chat"

Need a persistent workspace? Launch loop mode with npx @juspay/neurolink loop. Learn more →

🌟 Complete Feature Set

NeuroLink is a comprehensive AI development platform. Every feature below is production-ready and fully documented.

🤖 AI Provider Integration

12 providers unified under one API - Switch providers with a single parameter change.

| Provider | Models | Free Tier | Tool Support | Status | Documentation |
| --- | --- | --- | --- | --- | --- |
| OpenAI | GPT-4o, GPT-4o-mini, o1 | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Anthropic | Claude 3.5/3.7 Sonnet, Opus | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Google AI Studio | Gemini 2.5 Flash/Pro | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| AWS Bedrock | Claude, Titan, Llama, Nova | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Google Vertex | Gemini via GCP | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Azure OpenAI | GPT-4, GPT-4o, o1 | ❌ | ✅ Full | ✅ Production | Setup Guide |
| LiteLLM | 100+ models unified | Varies | ✅ Full | ✅ Production | Setup Guide |
| AWS SageMaker | Custom deployed models | ❌ | ✅ Full | ✅ Production | Setup Guide |
| Mistral AI | Mistral Large, Small | ✅ Free Tier | ✅ Full | ✅ Production | Setup Guide |
| Hugging Face | 100,000+ models | ✅ Free | ⚠️ Partial | ✅ Production | Setup Guide |
| Ollama | Local models (Llama, Mistral) | ✅ Free (Local) | ⚠️ Partial | ✅ Production | Setup Guide |
| OpenAI Compatible | Any OpenAI-compatible endpoint | Varies | ✅ Full | ✅ Production | Setup Guide |

📖 Provider Comparison Guide - Detailed feature matrix and selection criteria
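The "single parameter change" claim boils down to one request shape shared by every backend. The sketch below is not the SDK's internals; it is a hypothetical registry that makes the idea concrete (the base URLs are the providers' public API endpoints; the types are illustrative).

```typescript
// Illustrative provider registry: the call shape stays constant while a
// single `provider` field selects the backend (not the SDK's internals).
type Provider = "openai" | "anthropic" | "google-ai" | "ollama";

const baseUrls: Record<Provider, string> = {
  "openai": "https://api.openai.com/v1",
  "anthropic": "https://api.anthropic.com/v1",
  "google-ai": "https://generativelanguage.googleapis.com/v1beta",
  "ollama": "http://localhost:11434", // local models, no API key needed
};

interface GenerateRequest {
  provider: Provider;
  prompt: string;
}

// Switching providers is a one-field change on the request object.
function endpointFor(req: GenerateRequest): string {
  return baseUrls[req.provider];
}
```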

🔧 Built-in Tools & MCP Integration

6 Core Tools (work across all providers, zero configuration):

| Tool | Purpose | Auto-Available | Documentation |
| --- | --- | --- | --- |
| getCurrentTime | Real-time clock access | ✅ | Tool Reference |
| readFile | File system reading | ✅ | Tool Reference |
| writeFile | File system writing | ✅ | Tool Reference |
| listDirectory | Directory listing | ✅ | Tool Reference |
| calculateMath | Mathematical operations | ✅ | Tool Reference |
| websearchGrounding | Google Vertex web search | ⚠️ Requires credentials | Tool Reference |

58+ External MCP Servers supported (GitHub, PostgreSQL, Google Drive, Slack, and more):

// Add any MCP server dynamically
await neurolink.addExternalMCPServer("github", {
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-github"],
  transport: "stdio",
  env: { GITHUB_TOKEN: process.env.GITHUB_TOKEN },
});

// Tools automatically available to AI
const result = await neurolink.generate({
  input: { text: 'Create a GitHub issue titled "Bug in auth flow"' },
});

📖 MCP Integration Guide - Set up external servers

💻 Developer Experience Features

SDK-First Design with TypeScript, IntelliSense, and type safety:

| Feature | Description | Documentation |
| --- | --- | --- |
| Auto Provider Selection | Intelligent provider fallback | SDK Guide |
| Streaming Responses | Real-time token streaming | Streaming Guide |
| Conversation Memory | Automatic context management | Memory Guide |
| Full Type Safety | Complete TypeScript types | Type Reference |
| Error Handling | Graceful provider fallback | Error Guide |
| Analytics & Evaluation | Usage tracking, quality scores | Analytics Guide |
| Middleware System | Request/response hooks | Middleware Guide |
| Framework Integration | Next.js, SvelteKit, Express | Framework Guides |
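Streaming responses follow the standard TypeScript async-iteration pattern. The sketch below uses a stand-in token source rather than the SDK's actual stream type (see the Streaming Guide for the real API); it shows the consumer shape, not NeuroLink's implementation.

```typescript
// Stand-in token source; a real stream would await network chunks.
async function* fakeTokenStream(tokens: string[]): AsyncGenerator<string> {
  for (const token of tokens) yield token;
}

// Collect streamed tokens while reacting to each one as it arrives.
async function consume(stream: AsyncIterable<string>): Promise<string> {
  let text = "";
  for await (const token of stream) {
    text += token; // e.g. render incrementally in a UI
  }
  return text;
}
```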

🏢 Enterprise & Production Features

Production-ready capabilities for regulated industries:

| Feature | Description | Use Case | Documentation |
| --- | --- | --- | --- |
| Enterprise Proxy | Corporate proxy support | Behind firewalls | Proxy Setup |
| Redis Memory | Distributed conversation state | Multi-instance deployment | Redis Guide |
| Cost Optimization | Automatic cheapest-model selection | Budget control | Cost Guide |
| Multi-Provider Failover | Automatic provider switching | High availability | Failover Guide |
| Telemetry & Monitoring | OpenTelemetry integration | Observability | Telemetry Guide |
| Security Hardening | Credential management, auditing | Compliance | Security Guide |
| Custom Model Hosting | SageMaker integration | Private models | SageMaker Guide |
| Load Balancing | LiteLLM proxy integration | Scale & routing | Load Balancing |
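Multi-provider failover, for example, reduces to "try providers in priority order, fall through on error". The standalone sketch below shows the mechanism only; NeuroLink configures this for you (see the Failover Guide), and the types here are illustrative assumptions.

```typescript
// Illustrative failover loop: attempt each provider in priority order and
// fall through to the next on failure (not NeuroLink's implementation).
type Generate = (prompt: string) => Promise<string>;

async function withFailover(
  providers: Array<{ name: string; generate: Generate }>,
  prompt: string,
): Promise<{ provider: string; text: string }> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return { provider: p.name, text: await p.generate(prompt) };
    } catch (err) {
      lastError = err; // provider down or rate-limited: try the next one
    }
  }
  throw lastError ?? new Error("no providers configured");
}
```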

Security & Compliance:

  • ✅ SOC2 Type II compliant deployments
  • ✅ ISO 27001 certified infrastructure compatible
  • ✅ GDPR-compliant data handling (EU providers available)
  • ✅ HIPAA compatible (with proper configuration)
  • ✅ Hardened OS verified (SELinux, AppArmor)
  • ✅ Zero credential logging
  • ✅ Encrypted configuration storage

📖 Enterprise Deployment Guide - Complete production checklist

🎨 Professional CLI

15+ commands for every workflow:

| Command | Purpose | Example | Documentation |
| --- | --- | --- | --- |
| setup | Interactive provider configuration | neurolink setup | Setup Guide |
| generate | Text generation | neurolink gen "Hello" | Generate |
| stream | Streaming generation | neurolink stream "Story" | Stream |
| status | Provider health check | neurolink status | Status |
| loop | Interactive session | neurolink loop | Loop |
| mcp | MCP server management | neurolink mcp discover | MCP CLI |
| models | Model listing | neurolink models | Models |
| eval | Model evaluation | neurolink eval | Eval |

📖 Complete CLI Reference - All commands and options

💰 Smart Model Selection

NeuroLink features intelligent model selection and cost optimization:

Cost Optimization Features

  • 💰 Automatic Cost Optimization: Selects cheapest models for simple tasks
  • 🔄 LiteLLM Model Routing: Access 100+ models with automatic load balancing
  • 🔍 Capability-Based Selection: Find models with specific features (vision, function calling)
  • ⚡ Intelligent Fallback: Seamless switching when providers fail
# Cost optimization - automatically use cheapest model
npx @juspay/neurolink generate "Hello" --optimize-cost

# LiteLLM specific model selection
npx @juspay/neurolink generate "Complex analysis" --provider litellm --model "anthropic/claude-3-5-sonnet"

# Auto-select best available provider
npx @juspay/neurolink generate "Write code" # Automatically chooses optimal provider
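Conceptually, --optimize-cost style selection is a filter-then-sort over a model catalog: keep the models that have every required capability, then take the cheapest. The sketch below is a hypothetical illustration; the fields and prices are placeholders, not NeuroLink's actual routing table.

```typescript
// Hypothetical model catalog entry (illustrative fields, placeholder prices).
interface ModelInfo {
  name: string;
  inputCostPerMTok: number; // USD per million input tokens (example values)
  capabilities: Set<string>;
}

// Cheapest model that satisfies every required capability, if any.
function pickCheapest(models: ModelInfo[], required: string[]): ModelInfo | undefined {
  return models
    .filter((m) => required.every((cap) => m.capabilities.has(cap)))
    .sort((a, b) => a.inputCostPerMTok - b.inputCostPerMTok)[0];
}
```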

✨ Interactive Loop Mode

NeuroLink features a powerful interactive loop mode that transforms the CLI into a persistent, stateful session. This allows you to run multiple commands, set session-wide variables, and maintain conversation history without restarting.

Start the Loop

npx @juspay/neurolink loop

Example Session

# Start the interactive session
$ npx @juspay/neurolink loop

neurolink » set provider google-ai
✓ provider set to google-ai

neurolink » set temperature 0.8
✓ temperature set to 0.8

neurolink » generate "Tell me a fun fact about space"
The quietest place on Earth is an anechoic chamber at Microsoft's headquarters in Redmond, Washington. The background noise is so low that it's measured in negative decibels, and you can hear your own heartbeat.

# Exit the session
neurolink » exit

Conversation Memory in Loop Mode

Start the loop with conversation memory to have the AI remember the context of your previous commands.

npx @juspay/neurolink loop --enable-conversation-memory
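Under the hood, conversation memory amounts to carrying a rolling message history into each request. The minimal sketch below shows that mechanism with a fixed window; NeuroLink's actual memory (including the Redis-backed store) is richer, so treat this as an illustration only.

```typescript
// Minimal rolling conversation history (illustration, not the SDK's store).
interface Message {
  role: "user" | "assistant";
  content: string;
}

class ConversationMemory {
  private history: Message[] = [];
  constructor(private maxMessages = 20) {}

  add(msg: Message): void {
    this.history.push(msg);
    // Drop the oldest turns once the window is full.
    if (this.history.length > this.maxMessages) {
      this.history = this.history.slice(-this.maxMessages);
    }
  }

  // Context to prepend to the next request.
  context(): Message[] {
    return [...this.history];
  }
}
```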

Skip the wizard and configure manually? See docs/getting-started/provider-setup.md.

CLI & SDK Essentials

The neurolink CLI mirrors the SDK, so teams can script experiments and codify them later.

# Discover available providers and models
npx @juspay/neurolink status
npx @juspay/neurolink models list --provider google-ai

# Route to a specific provider/model
npx @juspay/neurolink generate "Summarize customer feedback" \
  --provider azure --model gpt-4o-mini

# Turn on analytics + evaluation for observability
npx @juspay/neurolink generate "Draft release notes" \
  --enable-analytics --enable-evaluation --format json

import { NeuroLink } from "@juspay/neurolink";

const neurolink = new NeuroLink({
  conversationMemory: {
    enabled: true,
    store: "redis",
  },
  enableOrchestration: true,
});

const result = await neurolink.generate({
  input: {
    text: "Create a comprehensive analysis",
    files: [
      "./sales_data.csv", // Auto-detected as CSV
      "examples/data/invoice.pdf", // Auto-detected as PDF
      "./diagrams/architecture.png", // Auto-detected as image
    ],
  },
  provider: "vertex", // PDF-capable provider (see docs/features/pdf-support.md)
  enableEvaluation: true,
  region: "us-east-1",
});

console.log(result.content);
console.log(result.evaluation?.overallScore);

Full command and API breakdown lives in docs/cli/commands.md and docs/sdk/api-reference.md.

Platform Capabilities at a Glance

| Capability | Highlights |
| --- | --- |
| Provider unification | 12+ providers with automatic fallback, cost-aware routing, provider orchestration (Q3). |
| Multimodal pipeline | Stream images, CSV data, and PDF documents across providers with local/remote assets; auto-detection for mixed file types. |
| Quality & governance | Auto-evaluation engine (Q3), guardrails middleware (Q4), HITL workflows (Q4), audit logging. |
| Memory & context | Conversation memory, Mem0 integration, Redis history export (Q4), context summarization (Q4). |
| CLI tooling | Loop sessions (Q3), setup wizard, config validation, Redis auto-detect, JSON output. |
| Enterprise ops | Proxy support, regional routing (Q3), telemetry hooks, configuration management. |
| Tool ecosystem | MCP auto-discovery, LiteLLM hub access, SageMaker custom deployment, web search. |

Documentation Map

| Area | When to Use | Link |
| --- | --- | --- |
| Getting started | Install, configure, run first prompt | docs/getting-started/index.md |
| Feature guides | Understand new functionality front-to-back | docs/features/index.md |
| CLI reference | Command syntax, flags, loop sessions | docs/cli/index.md |
| SDK reference | Classes, methods, options | docs/sdk/index.md |
| Integrations | LiteLLM, SageMaker, MCP, Mem0 | docs/LITELLM-INTEGRATION.md |
| Operations | Configuration, troubleshooting, provider matrix | docs/reference/index.md |
| Visual demos | Screens, GIFs, interactive tours | docs/demos/index.md |

Integrations

Contributing & Support

NeuroLink is built with ❤️ by Juspay. Contributions, questions, and production feedback are always welcome.


Package last updated on 27 Oct 2025
