## 🌟 Overview
Perplexity Search is a command-line tool and Python library that leverages the power of Perplexity AI to provide accurate, technical search results. It's designed for developers, researchers, and technical users who need quick access to precise information, code examples, and technical documentation. It also includes an interactive mode for multi-turn conversations.
## ✨ Features
- Interactive Mode: engage in a conversational interface where you can ask multiple queries in sequence
- Conversation Context: maintain context across multiple turns in interactive mode
- Markdown Output: save conversation history to a markdown file
- Perform searches using different LLaMA models (small, large, huge)
- Configurable API key support via environment variable or direct input
- Command-line interface for easy usage
- Focused on retrieving technical information with code examples
- Returns responses formatted in markdown
- Optimized for factual and numerical data
- Debug logging
## Installation

```bash
pip install plexsearch
```
## Usage

### As a Python Module

```python
from perplexity_search import perform_search

# Basic search
result = perform_search("What is Python's time complexity for list operations?")

# With an explicit API key
result = perform_search("What are the differences between Python 3.11 and 3.12?", api_key="your-api-key")

# With a specific model
result = perform_search("Show me example code for Python async/await", model="llama-3.1-sonar-huge-128k-online")
```
### Command Line Interface

#### Interactive Mode

To enter interactive mode, run the command without any query arguments:

```bash
plexsearch
```

In interactive mode, type your queries one at a time. Type `exit` or press `Ctrl-D` to quit the interactive session.
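The read-query-answer loop behind interactive mode can be sketched roughly as follows. This is an illustrative sketch, not the actual plexsearch implementation; `interactive_loop` and its parameters are hypothetical names:

```python
def interactive_loop(search_fn, input_fn=input, output_fn=print):
    """Illustrative sketch of an interactive query loop.

    Reads queries until the user types 'exit' or presses Ctrl-D (EOF).
    `search_fn` stands in for the real search call; `input_fn` and
    `output_fn` are injectable so the loop is easy to test.
    """
    history = []
    while True:
        try:
            query = input_fn("> ").strip()
        except EOFError:  # Ctrl-D ends the session
            break
        if query.lower() == "exit":
            break
        if not query:  # skip blank lines
            continue
        answer = search_fn(query)
        history.append((query, answer))
        output_fn(answer)
    return history
```

Keeping the history as a list of `(query, answer)` pairs is what makes multi-turn context and later markdown export straightforward.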
For one-off searches, pass the query as an argument:

```bash
plexsearch "What is Python's time complexity for list operations?"
plexsearch --model llama-3.1-sonar-huge-128k-online "What are the differences between Python 3.11 and 3.12?"
plexsearch --api-key your-api-key "Show me example code for Python async/await"
plexsearch tell me about frogs
plexsearch --no-stream "tell me about frogs"
plexsearch --citations "tell me about Python's GIL"
```
> **Note:** Streaming is automatically disabled when running inside Aider to prevent filling up the context window.
## Configuration

### API Key

Set your Perplexity API key in one of these ways:

- Environment variable:

  ```bash
  export PERPLEXITY_API_KEY=your-api-key
  # Persist it across shell sessions:
  echo 'export PERPLEXITY_API_KEY=your-api-key' >> ~/.bashrc
  ```

- Pass it directly in code or on the CLI: `--api-key your-api-key`
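The precedence implied above (an explicitly passed key wins over the environment variable) can be sketched like this; `resolve_api_key` is a hypothetical helper for illustration, not part of the plexsearch API:

```python
import os

def resolve_api_key(explicit_key=None):
    """Hypothetical sketch: return the API key to use.

    An explicitly supplied key (e.g. from --api-key or the api_key
    argument) takes precedence over the PERPLEXITY_API_KEY environment
    variable; if neither is set, fail with a clear message.
    """
    key = explicit_key or os.environ.get("PERPLEXITY_API_KEY")
    if not key:
        raise ValueError(
            "No API key found: pass --api-key or set PERPLEXITY_API_KEY"
        )
    return key
```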
### Available Models

The following models can be specified using the `--model` parameter:

- `llama-3.1-sonar-small-128k-online` (faster, lighter model)
- `llama-3.1-sonar-large-128k-online` (default, balanced model)
- `llama-3.1-sonar-huge-128k-online` (most capable model)
### Conversation Logging

You can log your conversation to a file using the `--log-file` parameter.
### Markdown Output

You can save your conversation to a markdown file using the `--markdown-file` parameter.
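The exact file format `--markdown-file` produces is not documented here, but saving a conversation history as markdown could look roughly like this sketch (`write_markdown` is a hypothetical helper, and the heading-per-query layout is an assumption):

```python
def write_markdown(history, path):
    """Hypothetical sketch: persist (query, answer) pairs as markdown.

    Each query becomes a heading and each answer a body paragraph; the
    real --markdown-file output may use a different layout.
    """
    with open(path, "w", encoding="utf-8") as fh:
        for query, answer in history:
            fh.write(f"## {query}\n\n{answer}\n\n")
```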
## Requirements

- Python 3.x
- `requests` library
- `rich` library
- `feedparser` library
- Perplexity API key (obtain from Perplexity API)
## Error Handling
The tool includes error handling for:
- Missing API keys
- Invalid API responses
- Network issues
- Invalid model selections
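As a rough sketch of how a caller might handle these failure modes: the exception types below are assumptions based on the `requests` library the tool depends on, and `safe_search` is a hypothetical wrapper, not part of plexsearch.

```python
import requests

def safe_search(search_fn, query, **kwargs):
    """Hypothetical wrapper translating common failures into messages.

    `search_fn` stands in for perform_search; which exceptions it
    actually raises is an assumption based on the requests library.
    """
    try:
        return search_fn(query, **kwargs)
    except requests.exceptions.ConnectionError:
        return "Network error: could not reach the Perplexity API."
    except requests.exceptions.HTTPError as exc:
        return f"API error: {exc}"
    except ValueError as exc:  # e.g. a missing API key or bad model name
        return f"Configuration error: {exc}"
```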
## Contributing
We welcome contributions! Please see our CONTRIBUTING.md for more details on how to contribute to this project. Check our CHANGELOG.md for recent updates and changes.
## FAQ
**Q: How do I get an API key for Perplexity?**

A: You can obtain an API key by signing up on the Perplexity API website.

**Q: What models are available for search?**

A: The available models are `small`, `large`, and `huge`.
## License

MIT License - see the LICENSE file for details.