@xyne/xyne-cli

Xyne CLI - A powerful AI assistant in your terminal with file operations, bash mode, drag-and-drop support, and multi-provider AI integration

Source: npm
Version: 0.0.26
Weekly downloads: 128 (-32.28%)
Maintainers: 3

Xyne CLI

A powerful AI assistant in your terminal with file operations, bash mode, drag-and-drop support, and multi-provider AI integration


Xyne CLI brings the power of modern AI assistants directly to your terminal with an intuitive interface, rich file handling capabilities, and seamless integration with multiple AI providers.

✨ Features

🎯 Core Capabilities

  • Interactive Chat Interface - Beautiful terminal UI powered by Ink
  • Multi-Provider AI Support - Works with Vertex AI and LiteLLM
  • File Operations - Read, write, edit, and search files with AI assistance
  • Conversation Management - Save, load, and resume conversations (a quick example follows this list)
  • Smart Context Management - Automatic conversation compacting when context limits are reached
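
A minimal sketch of the save-and-resume flow, using the /export command and the --load flag documented later in this README (the filename is a placeholder):

# Inside a session: save the conversation to a file
> /export my-session.json

# Later: resume from that file
xyne --load=my-session.json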

🛠️ Advanced Features

  • Bash Mode - Execute shell commands with ! prefix
  • File Attachments - Drag & drop files or use @filename syntax
  • Long Paste Support - Intelligent handling of large text pastes
  • Word Navigation - Option+Arrow keys for word jumping and deletion
  • Command System - Built-in /help, /clear, /export and more
  • MCP Integration - Model Context Protocol for extensible tools

📁 File Handling

  • Drag & Drop Support - Drop files directly into the terminal
  • File Type Detection - Automatic detection of images, PDFs, code, and text files
  • Multiple Format Support - Images (PNG, JPG, GIF, WebP), PDFs, and text files, with per-type size limits of up to 25MB (see File Support below)
  • Smart File Paths - Use @filepath to reference files in conversations

⌨️ Productivity Features

  • Bash Mode - Type ! to execute shell commands directly
  • Input History - Use ↑/↓ arrows to navigate through previous inputs
  • File Browser - Type @ to browse and select files interactively
  • Message Queue - Queue messages while AI is processing
  • Interruption Support - Double ESC to interrupt AI processing

📦 Installation

npm install -g @xyne/xyne-cli

Verify Installation

xyne --version

Quick Start

# Start interactive session
xyne

# Start with debug logging
xyne --debug

# Load a previous conversation
xyne --load=path/to/conversation.json

# Show help
xyne --help

# Perform update
xyne update

🔧 Configuration

AI Provider Setup

Vertex AI (Default)

# Set up Vertex AI (requires Google Cloud SDK)
export VERTEX_PROJECT_ID="your-project-id"
export VERTEX_REGION="us-east5"
export VERTEX_MODEL="claude-sonnet-4@20250514"
export VERTEX=1

Note: Vertex AI is the default provider. If no environment variables are set, the system will use:

  • Project ID: dev-ai-epsilon (default)
  • Region: us-east5 (default)
  • Model: claude-sonnet-4@20250514 (default)
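
The setup above assumes the Google Cloud SDK is already authenticated. A minimal sketch of that prerequisite, assuming the CLI picks up standard gcloud application-default credentials (an assumption; the README does not document the authentication mechanism):

# Authenticate the Google Cloud SDK (assumed prerequisite for Vertex AI access)
gcloud auth application-default login
gcloud config set project your-project-id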

Supported Vertex AI Models:

  • claude-sonnet-4@20250514 - Claude Sonnet 4 with thinking capabilities
  • gemini-2.5-pro - Google Gemini 2.5 Pro with enhanced reasoning

LiteLLM (Alternative)

# Set up LiteLLM for other providers
export LITE_LLM_API_KEY="your-api-key"
export LITE_LLM_URL="https://api.openai.com/v1"
export LITE_LLM_MODEL="gpt-4"
export LITE_LLM=1

Supported LiteLLM Models:

  • Hugging Face: glm-45-fp8, glm-46-fp8
  • Anthropic: claude-sonnet-4, claude-sonnet-4-20250514, claude-sonnet-4-5
  • Google: gemini-2.5-pro
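
Only LITE_LLM_MODEL needs to change to switch between the models listed above; for example:

# Switch to another supported model (values taken from the list above)
export LITE_LLM_MODEL="claude-sonnet-4-5"   # or "gemini-2.5-pro", "glm-45-fp8", ...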

MCP Servers

# Add command-based MCP servers
xyne mcp add context7 sh -c "npx -y @upstash/context7-mcp 2>/dev/null"
xyne mcp add filesystem npx @modelcontextprotocol/server-filesystem /path/to/directory

# Add HTTP transport MCP server
xyne mcp add deepwiki --transport=http --url=https://mcp.deepwiki.com/mcp

# Add with environment variables
xyne mcp add myserver npx my-mcp-server --env=API_KEY=secret --env=DEBUG=true

# Add from JSON configuration
xyne mcp add-json github '{"command":"docker","args":["run","-i","--rm","ghcr.io/github/github-mcp-server"]}'

# Add to global configuration
xyne mcp add context7 sh -c "npx -y @upstash/context7-mcp 2>/dev/null" --global

# List configured MCP servers
xyne mcp list

# Get details about a server
xyne mcp get filesystem

# Remove MCP server
xyne mcp remove filesystem

🚀 Usage

Interactive Chat

# Start a conversation
xyne
> Hello! How can I help you today?

# Ask questions
> What files are in my current directory?

# Get help with commands
> /help

One-Shot Prompts

# Basic prompt
xyne prompt "What is the capital of France?"

# Prompt with custom system prompt
xyne prompt "Help me code" --system-prompt="You are a senior software engineer"
xyne prompt "Analyze this" --system="You are concise and direct"

# Prompt with specific tools only
xyne prompt "Read and analyze files" --tools=read,grep,ls
xyne prompt "File operations only" --tools="read,write,edit"

# Combine system prompt and tools
xyne prompt "Help me debug" --system="You are helpful" --tools=read,grep,bash

# Prompt from piped input
echo "Analyze this code" | xyne prompt
cat file.txt | xyne prompt "Summarize this content"

# Flexible argument order (all equivalent)
xyne prompt "Hello world" --system="Be helpful"
xyne prompt --system="Be helpful" "Hello world"
xyne prompt --tools=ls "List files" --system="Be concise"

Available Tools

When using --tools, you can specify any combination of:

  • read - Read files
  • write - Write files
  • edit - Edit files
  • multiedit - Multiple file edits
  • grep - Search patterns in files
  • glob - File pattern matching
  • ls - Directory listing
  • bash - Execute shell commands
  • todo-write - Task management
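
The tools not shown in the earlier examples (multiedit, glob, todo-write) are passed the same way; a sketch using the --tools syntax documented above (prompt text is illustrative):

# Restrict the assistant to reading, pattern matching, and multi-file edits
xyne prompt "Rename the helper across these modules" --tools=read,glob,multiedit

# Include the task-management tool
xyne prompt "Plan and track a small refactor" --tools=read,edit,todo-write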

Bash Mode

# Execute shell commands with ! prefix
> !ls -la
> !git status
> !npm install express

# Or use the bash command
> /bash ls -la

File Operations

# Reference files in conversation
> Can you read @README.md and summarize it?

# Drag and drop files (automatically detected)
> # Drop a file into terminal
> I just attached an image, what do you see?

# Create and edit files
> Create a new Python script that calculates fibonacci numbers
> Edit @script.py to add error handling

Advanced Features

# Load previous conversation
xyne --load=conversation.json

# Export conversation
> /export conversation.md
> /export conversation.json

# Clear conversation
> /clear

# Enable debug mode
xyne --debug

Keyboard Shortcuts

  • ↑/↓ Arrows - Navigate input history
  • Option + ←/→ - Jump between words
  • Option + Backspace/Delete - Delete words
  • Ctrl + C - Interrupt (press twice to exit)
  • Double ESC - Interrupt AI processing or show message selection
  • @ - Open file browser
  • ! - Enter bash mode

🎨 File Support

Supported File Types

Type     Extensions                                                                    Max Size   Features
Images   .png, .jpg, .jpeg, .gif, .webp, .svg, .bmp, .ico                              10MB       Visual analysis, OCR
PDFs     .pdf                                                                          25MB       Text extraction, analysis
Text     .txt, .md, .json, .yaml, .yml, .html, .css, .xml                              5MB        Full content analysis
Code     .js, .ts, .jsx, .tsx, .py, .go, .rs, .java, .cpp, .c, .php, .rb, .swift, .kt  5MB        Syntax highlighting, analysis

Additional Support:

  • Files without extensions (e.g., .gitignore, Dockerfile, Makefile)
  • Common configuration files automatically detected as text
  • Content-based detection for unknown file types

File Attachment Methods

  • Drag & Drop - Drop files directly into the terminal
  • File References - Use @filename or @path/to/file
  • File Browser - Type @ to browse and select files
  • Clipboard - Paste file paths automatically detected

Areas for Contribution

  • New AI Providers - Add support for additional AI providers
  • File Handlers - Support for new file types and formats
  • UI Improvements - Enhance the terminal interface
  • Performance - Optimize conversation handling and file processing
  • Documentation - Improve guides and examples
  • Testing - Add comprehensive test coverage

Bug Reports

When reporting bugs, please include:

  • Your operating system and Node.js version
  • Steps to reproduce the issue
  • Expected vs actual behavior
  • Any error messages or logs (use the --debug flag; one way to capture them is shown below)
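
A sketch of one way to capture those logs for a report; this assumes debug output is written to stderr, which the README does not specify, so adjust if your logs appear elsewhere:

# Run with debug logging and save stderr to a file for the bug report
xyne --debug 2> xyne-debug.log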

Feature Requests

For feature requests, please:

  • Search existing issues first
  • Provide a clear description of the feature
  • Explain the use case and benefits
  • Consider contributing the feature yourself!

📚 Documentation

Environment Variables

Variable           Description              Default
VERTEX_PROJECT_ID  Google Cloud Project ID  dev-ai-epsilon
VERTEX_REGION      Vertex AI region         us-east5
VERTEX_MODEL       Vertex AI model          claude-sonnet-4@20250514
LITE_LLM_API_KEY   LiteLLM API key          -
LITE_LLM_URL       LiteLLM base URL         -
LITE_LLM_MODEL     LiteLLM model name       -
LOG_LEVEL          Logging level            off
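
LOG_LEVEL is the only variable in the table not used in the configuration examples earlier. A minimal sketch of enabling it for a single run; the accepted level names are not documented here, so "debug" is an assumption:

# Enable logging for one session (default is "off"; "debug" is an assumed level name)
LOG_LEVEL=debug xyne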

Interactive Commands

Command   Description
/help     Show available commands
/clear    Clear conversation history
/export   Export conversation
/mcp      List MCP servers
/exit     Exit the application

CLI Commands

Command                           Description
xyne                              Start interactive chat
xyne prompt <text>                Execute one-shot prompt
xyne mcp add <name> [options...]  Add MCP server
xyne mcp remove <name>            Remove MCP server
xyne mcp list                     List all MCP servers
xyne mcp get <name>               Get MCP server details
xyne mcp add-json <name> <json>   Add MCP server from JSON
xyne config                       Show current configuration
xyne update                       Update to latest version
xyne --help                       Show help information
xyne --version                    Show version information
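
The xyne config command from the table is not demonstrated elsewhere in this README; a quick check of the active setup before starting a session might look like this:

# Show the current configuration (the exact fields displayed are not documented here)
xyne config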

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

🙏 Acknowledgments

  • Built with Ink for beautiful terminal UIs
  • Powered by React for component architecture
  • Supports Vertex AI and LiteLLM
  • Inspired by the needs of developers who live in the terminal

Made with ❤️ by the Xyne Team

Keywords

xyne

Package last updated on 10 Dec 2025
