QLLM CLI: A versatile CLI tool for interacting with multiple AI/LLM providers. Features include chat sessions, one-time queries, image handling, and conversation management. Streamlines AI development with easy provider/model switching and configuration.
Welcome to QLLM CLI, a powerful command-line interface for seamless interaction with Large Language Models (LLMs). QLLM CLI provides a unified platform that supports multiple providers and empowers users with extensive configuration options and features.
Key Highlights:
QLLM CLI offers a robust set of features designed for effective AI interaction:
🌐 Multi-provider Support: Seamlessly switch between LLM providers through qllm-lib integration.
💬 Interactive Chat Sessions: Hold multi-turn conversations with built-in conversation management.
❓ One-time Question Answering: Get quick answers to standalone queries with the ask command.
🖼️ Image Input Support: Analyze images from local files, URLs, the clipboard, or screenshots.
🎛️ Model Parameters: Fine-tune AI behavior with settings such as temperature and max tokens.
📋 Provider Management: List available providers and their models, and set defaults.
🔄 Response Handling: Stream responses, save output to files, and extract variables from responses.
⚙️ Configuration System: Manage settings through an interactive wizard or individual configure commands.
To use QLLM CLI, ensure you have Node.js installed on your system. Then install globally via npm:
npm install -g qllm
Verify the installation:
qllm --version
QLLM CLI provides flexible configuration management through both interactive and command-line interfaces.
Run the interactive configuration wizard:
qllm configure
The wizard guides you through configuring:
Provider Settings
Model Parameters
Other Settings
Set individual configuration values:
qllm configure --set <key=value>
View current configuration:
qllm configure --list
Get a specific setting:
qllm configure --get <key>
Settings are stored in ~/.qllmrc as JSON. While manual editing is possible, using the configure commands is recommended.
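For reference, a hypothetical ~/.qllmrc might look like the sketch below. The exact keys depend on your configuration, so treat the names here as assumptions rather than a definitive schema:

{
  "provider": "openai",
  "model": "gpt-4",
  "temperature": 0.7
}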
QLLM CLI supports three main interaction modes:
qllm ask "What is the capital of France?"
qllm chat
qllm run template.yaml
Include images in your queries:
# Local file
qllm ask "What's in this image?" -i path/to/image.jpg
# URL
qllm ask "Describe this image" -i https://example.com/image.jpg
# Clipboard
qllm ask "Analyze this image" --use-clipboard
# Screenshot
qllm ask "What's on my screen?" --screenshot 1
Control output behavior:
# Save to file
qllm ask "Query" -o output.txt
# Disable streaming
qllm ask "Query" --no-stream
# Add system message
qllm ask "Query" --system-message "You are a helpful assistant"
QLLM CLI supports running predefined templates:
qllm run template.yaml
Template options:
-v, --variables : Provide template variables in JSON format
-ns, --no-stream : Disable response streaming
-o, --output : Save response to file
-e, --extract : Extract specific variables from the response
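For illustration, a minimal template might look like the sketch below. The field names are assumptions made for demonstration, so consult the qllm template documentation for the exact schema:

name: greeting
variables:
  name: "Name of the person to greet"
content: >
  Write a short, friendly greeting for {{name}}.

Save it as greeting.yaml and run it with qllm run greeting.yaml -v '{"name": "John"}'.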
In chat mode, use these commands:
/help : Show available commands
/new : Start a new conversation
/save : Save the conversation
/load : Load a conversation
/list : Show conversation history
/clear : Clear the conversation
/models : List available models
/providers : List providers
/options : Show chat options
/set <option> <value> : Set a chat option
/image <path> : Add an image
/clearimages : Clear the image buffer
/listimages : List images in the buffer
List available providers:
qllm list providers
List models for a provider:
qllm list models <provider>
Options:
-f, --full : Show full model details
-s, --sort <field> : Sort by field (id, created)
-r, --reverse : Reverse sort order
-c, --columns : Select display columns
Configure providers using environment variables:
export OPENAI_API_KEY=your_key_here
export ANTHROPIC_API_KEY=your_key_here
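To persist API keys across sessions, append the exports to your shell profile. A minimal sketch assuming a bash setup (the profile path depends on your shell):

echo 'export OPENAI_API_KEY=your_key_here' >> ~/.bashrc
source ~/.bashrc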
Use QLLM with piped input:
echo "Explain quantum computing" | qllm ask
cat article.txt | qllm ask "Summarize this:"
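Piped input composes with the other options. An illustrative pipeline, assuming you are in a git repository:

git diff | qllm ask "Write a concise commit message for this diff" --no-stream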
Commands:
qllm [template] # Run a template or start ask mode if no template
qllm ask [question] # Ask a one-time question
qllm chat # Start interactive chat session
qllm configure # Configure settings
qllm list # List providers or models
Common options:
-p, --provider <provider> # LLM provider to use
-m, --model <model> # Specific model to use
--max-tokens <number> # Maximum tokens to generate
--temperature <number> # Temperature for generation (0-1)
--log-level <level> # Set log level (error, warn, info, debug)
-i, --image <path> # Include image file or URL (multiple allowed)
--use-clipboard # Use image from clipboard
--screenshot <number> # Capture screenshot from display
-ns, --no-stream # Disable response streaming
-o, --output <file> # Save response to file
-s, --system-message <message> # Set system message
Configure command options:
-l, --list # List all settings
-s, --set <key=value> # Set a configuration value
-g, --get <key> # Get a configuration value
List command usage:
list providers # List available providers
list models <provider> # List models for provider
-f, --full # Show full model details
-s, --sort <field> # Sort by field
-r, --reverse # Reverse sort order
-c, --columns # Select columns to display
Run command options:
-t, --type <type> # Template source type (file, url, inline)
-v, --variables <json> # Template variables in JSON format
-e, --extract <vars> # Variables to extract from response
# Direct question
qllm ask "What is quantum computing?"
# With system message
qllm ask "Explain like I'm 5: What is gravity?" --system-message "You are a teacher for young children"
# Start chat with default settings
qllm chat
# Start chat with specific provider and model
qllm chat -p openai -m gpt-4
# Analyze a single image
qllm ask "What's in this image?" -i photo.jpg
# Compare multiple images
qllm ask "What are the differences?" -i image1.jpg -i image2.jpg
# Capture and analyze screen
qllm ask "What's on my screen?" --screenshot 1
# Use clipboard image
qllm ask "Analyze this diagram" --use-clipboard
# Run template with variables
qllm run template.yaml -v '{"name": "John", "age": 30}'
# Extract specific variables
qllm run analysis.yaml -e "summary,key_points"
# Save to file
qllm ask "Write a story about AI" -o story.txt
# Disable streaming for batch processing
qllm ask "Generate a report" --no-stream
# List available providers
qllm list providers
# View models for specific provider
qllm list models openai -f
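The documented sort options can be combined as well; for example, to list models newest first (an illustrative use of --sort and --reverse):

qllm list models openai -s created -r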
# Set default provider
qllm configure --set provider=openai
# Set default model
qllm configure --set model=gpt-4
# View all settings
qllm configure --list
# Check specific setting
qllm configure --get model
# Pipe text for analysis
cat document.txt | qllm ask "Summarize this text"
# Process command output
ls -l | qllm ask "Explain these file permissions"
If something goes wrong, check these areas first:
Configuration issues: review your settings with qllm configure --list
Provider errors: verify the provider with qllm list providers and the model with qllm list models <provider>
Image input problems: confirm the image file, URL, or clipboard content is accessible
Network issues: check your connection and your API key environment variables
Common error messages and solutions:
"Invalid provider": run qllm list providers to see available providers, then qllm configure --set provider=<provider>
"Invalid model": run qllm list models <provider> to see available models, then qllm configure --set model=<model>
"Configuration error": rerun qllm configure to rebuild your settings
"API key not found": export the provider's API key environment variable (see above)
Version issues: check your installed version with qllm --version and update with npm update -g qllm
Installation problems: try sudo npm install -g qllm, or clear the npm cache with npm cache clean --force
If issues persist, run qllm <command> --help for command-specific help, or rerun with qllm --log-level debug <command> for detailed logs.
We welcome contributions to QLLM CLI! Here's how you can help:
Fork the repository, clone your fork, and install dependencies:
git clone https://github.com/your-username/qllm.git
cd qllm
npm install
Create a feature branch:
git checkout -b feature/your-feature-name
Follow the existing code style, run the test suite, and update the documentation as needed:
npm test
Commit and push your changes, then open a pull request:
git add .
git commit -m "feat: description of your changes"
git push origin feature/your-feature-name
QLLM CLI is licensed under the Apache License, Version 2.0.
Copyright 2023 Quantalogic
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
QLLM CLI is made possible thanks to its contributors and the open source community. Special thanks to all who have contributed to making this project better!