
Search anything via Google, DuckDuckGo, and phind.com; access AI models; transcribe YouTube videos; generate temporary email addresses and phone numbers; use TTS, WebAI (terminal GPT and Open Interpreter), offline LLMs, and more.
Your All-in-One Python Toolkit for Web Search, AI Interaction, Digital Utilities, and More
Access diverse search engines, cutting-edge AI models, temporary communication tools, media utilities, developer helpers, and powerful CLI interfaces – all through one unified library.
> [!IMPORTANT]
> Webscout supports three types of compatibility:
> - **Native Compatibility**: Webscout's own native API for maximum flexibility
> - **OpenAI Compatibility**: Use providers with OpenAI-compatible interfaces
> - **Local LLM Compatibility**: Run local models with Inferno, an OpenAI-compatible server (now a standalone package)
>
> Choose the approach that best fits your needs! For OpenAI compatibility, check the OpenAI Providers README or see the OpenAI-Compatible API Server section below.
> [!NOTE]
> Webscout supports over 90 AI providers, including LLAMA, C4ai, Venice, Copilot, HuggingFaceChat, PerplexityLabs, DeepSeek, WiseCat, GROQ, OPENAI, GEMINI, DeepInfra, Meta, YEPCHAT, TypeGPT, ChatGPTClone, ExaAI, Claude, Anthropic, Cloudflare, AI21, Cerebras, and many more. All providers follow similar usage patterns with consistent interfaces.
Webscout supports multiple installation methods to fit your workflow:
```bash
# Install from PyPI
pip install -U webscout

# Install with API server dependencies
pip install -U "webscout[api]"

# Install with development dependencies
pip install -U "webscout[dev]"
```
UV is a fast Python package manager. Webscout has full UV support:
```bash
# Install UV first (if not already installed)
pip install uv

# Install Webscout with UV
uv add webscout

# Install with API dependencies
uv add "webscout[api]"

# Run Webscout directly with UV (no installation needed)
uv run webscout --help

# Run with API dependencies
uv run --extra api webscout-server

# Install as a UV tool for global access
uv tool install webscout

# Use UV tool commands
webscout --help
webscout-server --help
```
```bash
# Clone the repository
git clone https://github.com/OEvortex/Webscout.git
cd Webscout

# Install in development mode with UV
uv sync --extra dev --extra api

# Or with pip
pip install -e ".[dev,api]"
```

```bash
# Pull and run the Docker image
docker pull oevortex/webscout:latest
docker run -it oevortex/webscout:latest
```
After installation, you can immediately start using Webscout:
```bash
# Check version
webscout version

# Search the web
webscout text -k "python programming"

# Start API server
webscout-server

# Get help
webscout --help
```
Webscout provides a powerful command-line interface for quick access to its features. You can use it in multiple ways:
After installing with `uv tool install webscout` or `pip install webscout`:
```bash
# Get help
webscout --help

# Start API server
webscout-server

# Run directly with UV (downloads and runs automatically)
uv run webscout --help
uv run --extra api webscout-server

# Traditional Python module execution
python -m webscout --help
python -m webscout.client  # Start API server
```
Command | Description | Example |
---|---|---|
webscout text -k "query" | Perform a text search | webscout text -k "python programming" |
webscout answers -k "query" | Get instant answers | webscout answers -k "what is AI" |
webscout images -k "query" | Search for images | webscout images -k "nature photography" |
webscout videos -k "query" | Search for videos | webscout videos -k "python tutorials" |
webscout news -k "query" | Search for news articles | webscout news -k "technology trends" |
webscout maps -k "query" | Perform a maps search | webscout maps -k "restaurants near me" |
webscout translate -k "text" | Translate text | webscout translate -k "hello world" |
webscout suggestions -k "query" | Get search suggestions | webscout suggestions -k "how to" |
webscout weather -l "location" | Get weather information | webscout weather -l "New York" |
webscout version | Display the current version | webscout version |
Google Search Commands:
Command | Description | Example |
---|---|---|
webscout google_text -k "query" | Google text search | webscout google_text -k "machine learning" |
webscout google_news -k "query" | Google news search | webscout google_news -k "AI breakthrough" |
webscout google_suggestions -q "query" | Google suggestions | webscout google_suggestions -q "python" |
Yep Search Commands:
Command | Description | Example |
---|---|---|
webscout yep_text -k "query" | Yep text search | webscout yep_text -k "web development" |
webscout yep_images -k "query" | Yep image search | webscout yep_images -k "landscapes" |
webscout yep_suggestions -q "query" | Yep suggestions | webscout yep_suggestions -q "javascript" |
Inferno is now a standalone package. Install it separately with:

```bash
pip install inferno-llm
```

After installation, you can use its CLI for managing and using local LLMs:

```bash
inferno --help
```
Command | Description |
---|---|
inferno pull <model> | Download a model from Hugging Face |
inferno list | List downloaded models |
inferno serve <model> | Start a model server with OpenAI-compatible API |
inferno run <model> | Chat with a model interactively |
inferno remove <model> | Remove a downloaded model |
inferno version | Show version information |
For more information, visit the Inferno GitHub repository or PyPI package page.
> [!NOTE]
> Hardware requirements for running models with Inferno:
> - Around 2 GB of RAM for 1B models
> - Around 4 GB of RAM for 3B models
> - At least 8 GB of RAM for 7B models
> - 16 GB of RAM for 13B models
> - 32 GB of RAM for 33B models
> - GPU acceleration is recommended for better performance
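The RAM tiers above can be condensed into a small helper for sizing checks. This is a hypothetical convenience function, not part of Webscout or Inferno:

```python
def recommended_ram_gb(billions_of_params: float) -> int:
    """Return the documented minimum RAM tier (in GB) for a model size.

    Mirrors the hardware-requirements table above; raises for models
    larger than the largest documented tier (33B).
    """
    tiers = [(1, 2), (3, 4), (7, 8), (13, 16), (33, 32)]
    for max_params, ram_gb in tiers:
        if billions_of_params <= max_params:
            return ram_gb
    raise ValueError("Model larger than the documented tiers")

print(recommended_ram_gb(7))   # 8
print(recommended_ram_gb(13))  # 16
```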
Webscout includes an OpenAI-compatible API server that allows you to use any supported provider with tools and applications designed for OpenAI's API.
```bash
# Start with default settings (port 8000)
webscout-server

# Start with custom port
webscout-server --port 8080

# Start with API key authentication
webscout-server --api-key "your-secret-key"

# Specify a default provider
webscout-server --default-provider "Claude"

# Run in debug mode
webscout-server --debug

# Get help for all options
webscout-server --help
```
```bash
# Using UV (no installation required)
uv run --extra api webscout-server

# Using Python module
python -m webscout.client

# Legacy method (still supported)
python -m webscout.Provider.OPENAI.api
```
```python
# Method 1: Using the helper function (now recommended)
from webscout.client import start_server

# Start with default settings
start_server()

# Start with custom settings
start_server(port=8080, api_key="your-secret-key", default_provider="Claude")

# Method 2: Using the run_api function for more control
from webscout.client import run_api

run_api(
    host="0.0.0.0",
    debug=True
)
```
Once the server is running, you can use it with any OpenAI client library or tool:
```python
# Using the OpenAI Python client
from openai import OpenAI

client = OpenAI(
    api_key="your-secret-key",             # Only needed if you set an API key
    base_url="http://localhost:8000/v1"    # Point to your local server
)

# Chat completion
response = client.chat.completions.create(
    model="gpt-4",  # This can be any model name registered with Webscout
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)
print(response.choices[0].message.content)
```
```bash
# Basic chat completion request
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-key" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'

# List available models
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer your-secret-key"
```
- `GET /v1/models` - List all available models
- `GET /v1/models/{model_name}` - Get information about a specific model
- `POST /v1/chat/completions` - Create a chat completion

Webscout provides multiple search engine interfaces for diverse search capabilities.
```python
from webscout import YepSearch

# Initialize YepSearch
yep = YepSearch(
    timeout=20,      # Optional: Set custom timeout
    proxies=None,    # Optional: Use proxies
    verify=True      # Optional: SSL verification
)

# Text Search
text_results = yep.text(
    keywords="artificial intelligence",
    region="all",            # Optional: Region for results
    safesearch="moderate",   # Optional: "on", "moderate", "off"
    max_results=10           # Optional: Limit number of results
)

# Image Search
image_results = yep.images(
    keywords="nature photography",
    region="all",
    safesearch="moderate",
    max_results=10
)

# Get search suggestions
suggestions = yep.suggestions("hist")
```
```python
from webscout import GoogleSearch

# Initialize GoogleSearch
google = GoogleSearch(
    timeout=10,    # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True    # Optional: SSL verification
)

# Text Search
text_results = google.text(
    keywords="artificial intelligence",
    region="us",             # Optional: Region for results
    safesearch="moderate",   # Optional: "on", "moderate", "off"
    max_results=10           # Optional: Limit number of results
)
for result in text_results:
    print(f"Title: {result.title}")
    print(f"URL: {result.url}")
    print(f"Description: {result.description}")

# News Search
news_results = google.news(
    keywords="technology trends",
    region="us",
    safesearch="moderate",
    max_results=5
)

# Get search suggestions
suggestions = google.suggestions("how to")

# Legacy usage is still supported
from webscout import search
results = search("Python programming", num_results=5)
```
Webscout provides powerful interfaces to DuckDuckGo's search capabilities through the `WEBS` and `AsyncWEBS` classes.
```python
from webscout import WEBS

# Use as a context manager for proper resource management
with WEBS() as webs:
    # Simple text search
    results = webs.text("python programming", max_results=5)
    for result in results:
        print(f"Title: {result['title']}\nURL: {result['url']}")
```
```python
import asyncio
from webscout import AsyncWEBS

async def search_multiple_terms(search_terms):
    async with AsyncWEBS() as webs:
        # Create tasks for each search term
        tasks = [webs.text(term, max_results=5) for term in search_terms]

        # Run all searches concurrently
        results = await asyncio.gather(*tasks)
        return results

async def main():
    terms = ["python", "javascript", "machine learning"]
    all_results = await search_multiple_terms(terms)

    # Process results
    for i, term_results in enumerate(all_results):
        print(f"Results for '{terms[i]}':\n")
        for result in term_results:
            print(f"- {result['title']}")
        print("\n")

# Run the async function
asyncio.run(main())
```
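When fanning out many concurrent searches, it can help to cap how many run at once so you don't hammer the backend. This is a generic asyncio pattern, sketched with a stub coroutine standing in for an `AsyncWEBS().text(...)` call:

```python
import asyncio

async def limited_search(term: str, sem: asyncio.Semaphore):
    # The semaphore ensures at most N searches are in flight at once
    async with sem:
        # Stub standing in for a real AsyncWEBS().text(term, ...) call
        await asyncio.sleep(0.01)
        return f"results for {term}"

async def run_limited():
    sem = asyncio.Semaphore(2)  # at most 2 concurrent searches
    terms = ["python", "javascript", "rust", "go"]
    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(limited_search(t, sem) for t in terms))

print(asyncio.run(run_limited()))
```

Swapping the stub for the real `webs.text` call inside the `async with AsyncWEBS()` block gives you bounded concurrency with the same structure as the example above.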
> [!TIP]
> Always use these classes with a context manager (`with` statement) to ensure proper resource management and cleanup.
The WEBS class provides comprehensive access to DuckDuckGo's search capabilities through a clean, intuitive API.
Method | Description | Example |
---|---|---|
text() | General web search | webs.text('python programming') |
answers() | Instant answers | webs.answers('population of france') |
images() | Image search | webs.images('nature photography') |
videos() | Video search | webs.videos('documentary') |
news() | News articles | webs.news('technology') |
maps() | Location search | webs.maps('restaurants', place='new york') |
translate() | Text translation | webs.translate('hello', to='es') |
suggestions() | Search suggestions | webs.suggestions('how to') |
weather() | Weather information | webs.weather('london') |
```python
from webscout import WEBS

with WEBS() as webs:
    results = webs.text(
        'artificial intelligence',
        region='wt-wt',      # Optional: Region for results
        safesearch='off',    # Optional: 'on', 'moderate', 'off'
        timelimit='y',       # Optional: Time limit ('d'=day, 'w'=week, 'm'=month, 'y'=year)
        max_results=10       # Optional: Limit number of results
    )
    for result in results:
        print(f"Title: {result['title']}")
        print(f"URL: {result['url']}")
        print(f"Description: {result['body']}\n")
```
```python
from webscout import WEBS
import datetime

def fetch_formatted_news(keywords, timelimit='d', max_results=20):
    """Fetch and format news articles"""
    with WEBS() as webs:
        # Get news results
        news_results = webs.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,  # 'd'=day, 'w'=week, 'm'=month
            max_results=max_results
        )

        # Format the results
        formatted_news = []
        for i, item in enumerate(news_results, 1):
            # Format the date
            date = datetime.datetime.fromisoformat(item['date']).strftime('%B %d, %Y')

            # Create formatted entry
            entry = f"{i}. {item['title']}\n"
            entry += f"   Published: {date}\n"
            entry += f"   {item['body']}\n"
            entry += f"   URL: {item['url']}\n"
            formatted_news.append(entry)

        return formatted_news

# Example usage
news = fetch_formatted_news('artificial intelligence', timelimit='w', max_results=5)
print('\n'.join(news))
```
```python
from webscout import WEBS

with WEBS() as webs:
    # Get weather for a location
    weather = webs.weather("New York")

    # Access weather data
    if weather:
        print(f"Location: {weather.get('location', 'Unknown')}")
        print(f"Temperature: {weather.get('temperature', 'N/A')}")
        print(f"Conditions: {weather.get('condition', 'N/A')}")
```
Webscout provides easy access to a wide range of AI models and voice options.
Access and manage Large Language Models with Webscout's model utilities.
```python
from webscout import model
from rich import print

# List all available LLM models
all_models = model.llm.list()
print(f"Total available models: {len(all_models)}")

# Get a summary of models by provider
summary = model.llm.summary()
print("Models by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} models")

# Get models for a specific provider
provider_name = "PerplexityLabs"
available_models = model.llm.get(provider_name)
print(f"\n{provider_name} models:")
if isinstance(available_models, list):
    for i, model_name in enumerate(available_models, 1):
        print(f"  {i}. {model_name}")
else:
    print(f"  {available_models}")
```
Access and manage Text-to-Speech voices across multiple providers.
```python
from webscout import model
from rich import print

# List all available TTS voices
all_voices = model.tts.list()
print(f"Total available voices: {len(all_voices)}")

# Get a summary of voices by provider
summary = model.tts.summary()
print("\nVoices by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} voices")

# Get voices for a specific provider
provider_name = "ElevenlabsTTS"
available_voices = model.tts.get(provider_name)
print(f"\n{provider_name} voices:")
if isinstance(available_voices, dict):
    for voice_name, voice_id in list(available_voices.items())[:5]:  # Show first 5 voices
        print(f"  - {voice_name}: {voice_id}")
    if len(available_voices) > 5:
        print(f"  ... and {len(available_voices) - 5} more")
```
Webscout offers a comprehensive collection of AI chat providers, giving you access to various language models through a consistent interface.
Provider | Description | Key Features |
---|---|---|
OPENAI | OpenAI's models | GPT-3.5, GPT-4, tool calling |
GEMINI | Google's Gemini models | Web search capabilities |
Meta | Meta's AI assistant | Image generation, web search |
GROQ | Fast inference platform | High-speed inference, tool calling |
LLAMA | Meta's Llama models | Open weights models |
DeepInfra | Various open models | Multiple model options |
Cohere | Cohere's language models | Command models |
PerplexityLabs | Perplexity AI | Web search integration |
YEPCHAT | Yep.com's AI | Streaming responses |
ChatGPTClone | ChatGPT-like interface | Multiple model options |
TypeGPT | TypeChat models | Multiple model options |
```python
from webscout import Meta

# For basic usage (no authentication required)
meta_ai = Meta()

# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)

# For authenticated usage with web search and image generation
meta_ai = Meta(fb_email="your_email@example.com", fb_password="your_password")

# Text prompt with web search
response = meta_ai.ask("What are the latest developments in quantum computing?")
print(response["message"])
print("Sources:", response["sources"])

# Image generation
response = meta_ai.ask("Create an image of a futuristic city")
for media in response.get("media", []):
    print(media["url"])
```
```python
from webscout import GROQ, WEBS
import json

# Initialize GROQ client
client = GROQ(api_key="your_api_key")

# Define helper functions
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
        # Note: eval is unsafe on untrusted input; use only for trusted demos
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

def search(query):
    """Perform a web search"""
    try:
        results = WEBS().text(query, max_results=3)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Register functions with GROQ
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define tool specifications
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a web search",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

# Use the tools
response = client.chat("What is 25 * 4 + 10?", tools=tools)
print(response)

response = client.chat("Find information about quantum computing", tools=tools)
print(response)
```
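Under the hood, tool calling amounts to mapping the model's requested function name and JSON-encoded arguments onto your registered callables. A minimal dispatch sketch of that idea (hypothetical, not Webscout's actual internals):

```python
import json

# Registry mapping tool names to Python callables
registry = {}

def register(name, fn):
    """Register a callable under the name the model will request."""
    registry[name] = fn

def dispatch(tool_name: str, arguments_json: str) -> str:
    """Look up the registered function and call it with parsed JSON arguments."""
    if tool_name not in registry:
        return json.dumps({"error": f"unknown tool: {tool_name}"})
    args = json.loads(arguments_json)
    return registry[tool_name](**args)

def calculate(expression: str) -> str:
    # eval is unsafe on untrusted input; fine only for a local demo
    return json.dumps({"result": eval(expression)})

register("calculate", calculate)
print(dispatch("calculate", '{"expression": "25 * 4 + 10"}'))  # {"result": 110}
```

The tool specifications you pass to the provider describe the same functions in JSON Schema, so the model knows what arguments to emit; the dispatch step then closes the loop.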
Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for offline use.
```python
from webscout.Extra.gguf import ModelConverter

# Create a converter instance
converter = ModelConverter(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # Hugging Face model ID
    quantization_methods="q4_k_m"                   # Quantization method
)

# Run the conversion
converter.convert()
```
Method | Description |
---|---|
fp16 | 16-bit floating point - maximum accuracy, largest size |
q2_k | 2-bit quantization (smallest size, lowest accuracy) |
q3_k_l | 3-bit quantization (large) - balanced for size/accuracy |
q3_k_m | 3-bit quantization (medium) - good balance for most use cases |
q3_k_s | 3-bit quantization (small) - optimized for speed |
q4_0 | 4-bit quantization (version 0) - standard 4-bit compression |
q4_1 | 4-bit quantization (version 1) - improved accuracy over q4_0 |
q4_k_m | 4-bit quantization (medium) - balanced for most models |
q4_k_s | 4-bit quantization (small) - optimized for speed |
q5_0 | 5-bit quantization (version 0) - high accuracy, larger size |
q5_1 | 5-bit quantization (version 1) - improved accuracy over q5_0 |
q5_k_m | 5-bit quantization (medium) - best balance for quality/size |
q5_k_s | 5-bit quantization (small) - optimized for speed |
q6_k | 6-bit quantization - highest accuracy, largest size |
q8_0 | 8-bit quantization - maximum accuracy, largest size |
```bash
python -m webscout.Extra.gguf convert -m "mistralai/Mistral-7B-Instruct-v0.2" -q "q4_k_m"
```
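To anticipate roughly how large the converted file will be, multiply the parameter count by the effective bits per weight of the chosen method. The figures below are approximate rules of thumb, not exact llama.cpp output sizes:

```python
# Approximate effective bits per weight for common GGUF quantization methods
BITS_PER_WEIGHT = {
    "fp16": 16.0,
    "q8_0": 8.5,
    "q6_k": 6.6,
    "q5_k_m": 5.7,
    "q4_k_m": 4.8,
    "q3_k_m": 3.9,
    "q2_k": 3.4,
}

def estimate_gguf_gb(params_billions: float, method: str) -> float:
    """Rough estimate of GGUF file size in GB (decimal) for a quantized model."""
    bits = BITS_PER_WEIGHT[method]
    return params_billions * 1e9 * bits / 8 / 1e9

print(round(estimate_gguf_gb(7, "q4_k_m"), 1))  # 4.2
```

So a 7B model at `q4_k_m` lands around 4 GB, which is why the hardware note above recommends at least 8 GB of RAM for that size class.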
Contributions are welcome! If you'd like to contribute to Webscout, please open an issue or submit a pull request on the GitHub repository.
Made with ❤️ by the Webscout team