QLLM: Quantalogic Large Language Model CLI & AI Toolbox 🚀
Table of Contents
- Introduction
- Features
- Installation
- Configuration
- Usage
- Advanced Features
- Command Reference
- Examples
- Troubleshooting
- Contributing
- License
- Acknowledgements
1. Introduction
Welcome to QLLM CLI, a powerful command-line interface for seamless interaction with Large Language Models (LLMs). QLLM CLI provides a unified platform that supports multiple providers and empowers users with extensive configuration options and features.
Key Highlights:
- Multi-provider support through qllm-lib integration
- Rich, interactive chat experiences with conversation management
- Efficient one-time question answering
- Advanced image input capabilities for visual analysis
- Fine-grained control over model parameters
- Comprehensive configuration system
2. Features
QLLM CLI offers a robust set of features designed for effective AI interaction:
- 🌐 Multi-provider Support: Seamlessly switch between LLM providers through qllm-lib integration.
- 💬 Interactive Chat Sessions:
  - Context-aware conversations with history management
  - Real-time streaming responses
  - System message customization
- ❓ One-time Question Answering: Quick answers to standalone queries with the ask command.
- 🖼️ Image Input Support: Analyze images from multiple sources:
  - Local files (supported formats: jpg, jpeg, png, gif, bmp, webp)
  - URLs pointing to online images
  - Clipboard images
  - Screen captures with display selection
- 🎛️ Model Parameters: Fine-tune AI behavior with:
  - Temperature (0.0 to 1.0)
  - Max tokens
  - Top P
  - Frequency penalty
  - Presence penalty
  - Stop sequences
- 📋 Provider Management:
  - List available providers
  - View supported models per provider
  - Configure default provider and model
- 🔄 Response Handling:
  - Stream responses in real-time
  - Save responses to files
  - Extract specific variables from responses
- ⚙️ Configuration System:
  - Interactive configuration setup
  - JSON-based configuration storage
  - Environment variable support
3. Installation
To use QLLM CLI, ensure you have Node.js (version 14 or later) installed on your system. Then install globally via npm:
npm install -g qllm
Verify the installation:
qllm --version
4. Configuration
QLLM CLI provides flexible configuration management through both interactive and command-line interfaces.
Interactive Configuration
Run the interactive configuration wizard:
qllm configure
The wizard guides you through configuring:
- Provider Settings
  - Default Provider
  - Default Model
- Model Parameters
  - Temperature (0.0 to 1.0)
  - Max Tokens
  - Top P
  - Frequency Penalty
  - Presence Penalty
  - Stop Sequences
- Other Settings
  - Log Level
  - Custom Prompt Directory
Command-line Configuration
Set individual configuration values:
qllm configure --set <key=value>
View current configuration:
qllm configure --list
Get a specific setting:
qllm configure --get <key>
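For example, to set the default sampling temperature and confirm the change (assuming the key is named temperature, matching the wizard's setting of the same name):
qllm configure --set temperature=0.7
qllm configure --get temperature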
Configuration File
Settings are stored in ~/.qllmrc as JSON. While manual editing is possible, using the configure commands is recommended.
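As an illustration, the stored file might look roughly like the sketch below; the exact key names are an assumption and may differ from what QLLM actually writes:
{
  "provider": "openai",
  "model": "gpt-4",
  "temperature": 0.7,
  "maxTokens": 1024,
  "logLevel": "info"
}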
5. Usage
QLLM CLI supports three main interaction modes:
- Direct Questions
qllm ask "What is the capital of France?"
- Interactive Chat
qllm chat
- Template-based Execution
qllm run template.yaml
Image Analysis
Include images in your queries:
qllm ask "What's in this image?" -i path/to/image.jpg
qllm ask "Describe this image" -i https://example.com/image.jpg
qllm ask "Analyze this image" --use-clipboard
qllm ask "What's on my screen?" --screenshot 1
Response Options
Control output behavior:
qllm ask "Query" -o output.txt
qllm ask "Query" --no-stream
qllm ask "Query" --system-message "You are a helpful assistant"
6. Advanced Features
Template-based Execution
QLLM CLI supports running predefined templates:
qllm run template.yaml
Template options:
- -v, --variables <json>: Provide template variables in JSON format
- -ns, --no-stream: Disable response streaming
- -o, --output <file>: Save the response to a file
- -e, --extract <vars>: Extract specific variables from the response
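A template is a YAML file defining a reusable prompt with variable placeholders. The sketch below is illustrative only; the actual schema is defined by qllm-lib, so treat the field names as assumptions:
name: birthday_greeting
content: >
  Write a short birthday greeting for {{name}}, who is turning {{age}}.
The template could then be run with variables supplied as JSON:
qllm run birthday_greeting.yaml -v '{"name": "John", "age": 30}'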
Chat Commands
In chat mode, use these commands:
- /help: Show available commands
- /new: Start a new conversation
- /save: Save the conversation
- /load: Load a conversation
- /list: Show conversation history
- /clear: Clear the conversation
- /models: List available models
- /providers: List providers
- /options: Show chat options
- /set <option> <value>: Set a chat option
- /image <path>: Add an image
- /clearimages: Clear the image buffer
- /listimages: List images in the buffer
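An illustrative session combining a few of these commands (the > prompt marker is shown only for readability):
qllm chat
> /set temperature 0.5
> /image diagram.png
> What does this diagram show?
> /save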
Provider and Model Management
List available providers:
qllm list providers
List models for a provider:
qllm list models <provider>
Options:
- -f, --full: Show full model details
- -s, --sort <field>: Sort by field (id, created)
- -r, --reverse: Reverse sort order
- -c, --columns: Select display columns
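For example, to show full details for a provider's models sorted by creation date, newest first:
qllm list models openai --full --sort created --reverse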
Environment Variables
Configure providers using environment variables:
export OPENAI_API_KEY=your_key_here
export ANTHROPIC_API_KEY=your_key_here
Piped Input Support
Use QLLM with piped input:
echo "Explain quantum computing" | qllm ask
cat article.txt | qllm ask "Summarize this:"
7. Command Reference
Core Commands
qllm [template]
qllm ask [question]
qllm chat
qllm configure
qllm list
Global Options
-p, --provider <provider>
-m, --model <model>
--max-tokens <number>
--temperature <number>
--log-level <level>
Ask Command Options
-i, --image <path>
--use-clipboard
--screenshot <number>
-ns, --no-stream
-o, --output <file>
-s, --system-message <message>
Configure Command Options
-l, --list
-s, --set <key=value>
-g, --get <key>
List Command Options
list providers
list models <provider>
-f, --full
-s, --sort <field>
-r, --reverse
-c, --columns
Template Options
-t, --type <type>
-v, --variables <json>
-e, --extract <vars>
8. Examples
Basic Usage
- Simple Questions
qllm ask "What is quantum computing?"
qllm ask "Explain like I'm 5: What is gravity?" --system-message "You are a teacher for young children"
- Interactive Chat
qllm chat
qllm chat -p openai -m gpt-4
Working with Images
- Local Image Analysis
qllm ask "What's in this image?" -i photo.jpg
qllm ask "What are the differences?" -i image1.jpg -i image2.jpg
- Screen Analysis
qllm ask "What's on my screen?" --screenshot 1
qllm ask "Analyze this diagram" --use-clipboard
Advanced Features
- Template Usage
qllm run template.yaml -v '{"name": "John", "age": 30}'
qllm run analysis.yaml -e "summary,key_points"
- Output Control
qllm ask "Write a story about AI" -o story.txt
qllm ask "Generate a report" --no-stream
- Provider Management
qllm list providers
qllm list models openai -f
Configuration
- Setting Preferences
qllm configure --set provider=openai
qllm configure --set model=gpt-4
- Viewing Settings
qllm configure --list
qllm configure --get model
Using with Pipes
cat document.txt | qllm ask "Summarize this text"
ls -l | qllm ask "Explain these file permissions"
9. Troubleshooting
Common Issues
- Configuration Issues
  - Check your configuration: qllm configure --list
  - Verify API keys are set correctly in environment variables
  - Ensure provider and model selections are valid
- Provider Errors
  - See "Invalid provider" and "API key not found" under Error Messages below
- Image Input Problems
  - Verify supported formats: jpg, jpeg, png, gif, bmp, webp
  - Check file permissions and paths
  - For clipboard issues, ensure the image is properly copied
  - For screenshots, verify the display number is correct
- Network Issues
  - Check your internet connection
  - Verify that no firewall is blocking requests
  - Try the --no-stream option to rule out streaming issues
Error Messages
Common error messages and solutions:
- "Invalid provider"
  - Use qllm list providers to see available providers
  - Set a valid provider: qllm configure --set provider=<provider>
- "Invalid model"
  - Check available models: qllm list models <provider>
  - Set a valid model: qllm configure --set model=<model>
- "Configuration error"
  - Reset the configuration by removing ~/.qllmrc
  - Reconfigure: qllm configure
- "API key not found"
  - Set the required environment variables
  - Verify the API key format and validity
Updates and Installation
- Version Issues
  - Check the current version: qllm --version
  - Update to the latest release: npm update -g qllm
- Installation Problems
  - Verify your Node.js version (14+)
  - Try sudo if you hit permission errors: sudo npm install -g qllm
  - Clear the npm cache if needed: npm cache clean --force
Getting Help
If issues persist:
- Check the GitHub Issues
- Use qllm <command> --help for command-specific help
- Run with debug logging: qllm --log-level debug <command>
10. Contributing
We welcome contributions to QLLM CLI! Here's how you can help:
Development Setup
- Fork and clone the repository:
git clone https://github.com/your-username/qllm.git
cd qllm
- Install dependencies:
npm install
- Create a feature branch:
git checkout -b feature/your-feature-name
Development Guidelines
- Code Style
  - Follow the existing code style
  - Use TypeScript for type safety
  - Add JSDoc comments for public APIs
  - Keep functions focused and modular
- Testing
  - Add tests for new features
  - Ensure existing tests pass: npm test
  - Include both unit and integration tests
- Documentation
  - Update README.md for new features
  - Add JSDoc comments
  - Include examples in documentation
  - Keep documentation synchronized with code
Submitting Changes
- Commit your changes:
git add .
git commit -m "feat: description of your changes"
- Push to your fork:
git push origin feature/your-feature-name
- Create a Pull Request:
  - Provide a clear description of the changes
  - Reference any related issues
  - Include test results
  - List any breaking changes
Code Review Process
- Maintainers will review your PR
- Address any requested changes
- Once approved, changes will be merged
- Your contribution will be acknowledged
11. License
QLLM CLI is licensed under the Apache License, Version 2.0.
Copyright 2023 Quantalogic
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
12. Acknowledgements
QLLM CLI is made possible thanks to:
- The open-source community
- Contributors and maintainers
- LLM providers and their APIs
- Node.js and npm ecosystem
Special thanks to all who have contributed to making this project better!