Fake List Generator
A Node.js CLI tool that generates lists using OpenAI-compatible APIs (like OpenRouter).
Disclosure: This entire project was generated by AI using Claude Sonnet 4. All code, documentation, and project structure were created automatically based on the user's requirements.
Installation
```bash
npm install
```
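The examples below invoke the tool as `fake-list-llm`. If you want that command available on your PATH while working from a local checkout, you can link the package; this assumes the project's package.json declares a `bin` entry for `fake-list-llm`:

```bash
# Make the local checkout available as a global command
# (assumes a "bin" entry for fake-list-llm in package.json)
npm link
```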
Usage
Basic Usage
```bash
fake-list-llm 20 "band names"
```
This generates 20 band names using the default model (`qwen/qwen-turbo`) and the default OpenRouter endpoint.
Command Line Options
- `-m, --model <model>`: Model to use (overrides config file)
- `-e, --endpoint <url>`: API endpoint URL (overrides config file)
- `-k, --api-key <key>`: API key (overrides config file and environment variable)
- `-p, --prompt <prompt>`: Custom prompt template (overrides config file)
- `--verbose`: Enable verbose output (overrides config file)
- `-c, --config <path>`: Path to custom config file
- `--init-config`: Create default user config file
- `--show-config-paths`: Show config file search paths
Examples
```bash
fake-list-llm 10 "colors"
fake-list-llm 15 "animals" --model "anthropic/claude-3-haiku"
fake-list-llm 5 "fruits" --endpoint "https://api.openai.com/v1"
fake-list-llm 8 "programming languages" --prompt "List {count} {concept} that are popular in 2024:"
fake-list-llm 12 "desserts" --verbose
```
Configuration Files
The tool supports configuration files in TOML format, layered in order of increasing priority:
- System config (read-only, for administrators)
- User config (your personal settings)
- Override config (specified with `--config`)
- Command line options (highest priority)
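For example, a value pinned in your user config applies to every run, but a command line flag still wins for a single invocation (the file contents and command below are illustrative):

```toml
# ~/.config/fake-list-llm/config.toml
model = "qwen/qwen-turbo"
verbose = true
```

```bash
# --model overrides the user config's model for this run only;
# verbose = true from the user config still applies.
fake-list-llm 10 "colors" --model "anthropic/claude-3-haiku"
```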
Config File Locations
The tool follows the XDG Base Directory specification:
Linux:
- System: `/etc/xdg/fake-list-llm/config.toml`
- User: `~/.config/fake-list-llm/config.toml`

macOS:
- System: `/Library/Preferences/fake-list-llm/config.toml`
- User: `~/Library/Preferences/fake-list-llm/config.toml`

Windows:
- System: `C:\ProgramData\fake-list-llm\config.toml`
- User: `%APPDATA%\fake-list-llm\config.toml`
Creating Your Config File
```bash
# Create a default user config file
fake-list-llm --init-config

# Show the config file search paths
fake-list-llm --show-config-paths
```
Configuration Options
All options can be set in config files:
```toml
model = "qwen/qwen-turbo"
endpoint = "https://openrouter.ai/api/v1"
prompt = "Generate a list of {count} {concept}. Each item should be on a new line, numbered from 1 to {count}."
verbose = false
```
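The `{count}` and `{concept}` placeholders in the prompt are filled in with the item count and concept you pass on the command line. As a rough illustration of how such substitution might look (the helper below is hypothetical, not the tool's actual code):

```js
// Hypothetical helper showing how {count} and {concept} placeholders could be
// expanded into the final prompt; not the tool's actual implementation.
function renderPrompt(template, count, concept) {
  return template
    .replaceAll("{count}", String(count))
    .replaceAll("{concept}", concept);
}

// renderPrompt("Generate a list of {count} {concept}.", 20, "band names")
// => "Generate a list of 20 band names."
```

Any custom prompt set via `prompt =` in a config file or `--prompt` on the command line can use the same two placeholders.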
Environment Variables
Set your OpenRouter API key:
```bash
export OPENROUTER_API_KEY="your-api-key-here"
```
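On Windows, the same variable can be set from PowerShell; only the shell syntax differs, the variable name is the one documented above:

```powershell
$env:OPENROUTER_API_KEY = "your-api-key-here"
```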
Features
- Streaming responses: Results are streamed to the terminal as they're generated (see the sketch after this list)
- Flexible configuration: Customizable model, endpoint, API key, and prompt via config files or CLI
- Configuration layering: System → User → Override → CLI options (highest priority)
- XDG-compliant paths: Follows standard configuration file locations across platforms
- OpenAI-compatible: Works with any OpenAI-compatible API endpoint
- Error handling: Clear error messages for common issues
- Verbose mode: Optional detailed output for debugging
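As a rough sketch of what the streaming feature does under the hood, here is how a client typically consumes a streamed response from an OpenAI-compatible `/chat/completions` endpoint. This is illustrative only, not the tool's actual code; it assumes Node 18+ for the built-in `fetch` and uses the endpoint, model, and environment variable documented above:

```js
// Illustrative sketch (not the tool's actual code) of streaming tokens from an
// OpenAI-compatible chat completions endpoint and printing them as they arrive.
// Assumes Node 18+ for the built-in fetch and a valid OPENROUTER_API_KEY.
async function streamList() {
  const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    },
    body: JSON.stringify({
      model: "qwen/qwen-turbo",
      stream: true,
      messages: [{ role: "user", content: "Generate a list of 5 colors." }],
    }),
  });

  if (!response.ok) {
    throw new Error(`Request failed: ${response.status} ${await response.text()}`);
  }

  // The body is a stream of server-sent events: lines of the form "data: {json}",
  // each carrying a partial delta, terminated by "data: [DONE]".
  const decoder = new TextDecoder();
  let buffer = "";
  for await (const chunk of response.body) {
    buffer += decoder.decode(chunk, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop(); // keep any incomplete line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data:")) continue; // skip SSE comments and blank lines
      const data = line.slice(5).trim();
      if (data === "[DONE]") continue;
      const delta = JSON.parse(data).choices?.[0]?.delta?.content;
      if (delta) process.stdout.write(delta);
    }
  }
}

streamList().catch((err) => console.error(err));
```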
Requirements
- Node.js 16.0.0 or higher
- Valid API key for your chosen provider