Webscout

Your All-in-One Python Toolkit for Web Search, AI Interaction, Digital Utilities, and More

Access diverse search engines, cutting-edge AI models, temporary communication tools, media utilities, developer helpers, and powerful CLI interfaces – all through one unified library.

[!IMPORTANT] Webscout supports three types of compatibility:

  • Native Compatibility: Webscout's own native API for maximum flexibility
  • OpenAI Compatibility: Use providers with OpenAI-compatible interfaces
  • Local LLM Compatibility: Run local models with Inferno, an OpenAI-compatible server (now a standalone package)

Choose the approach that best fits your needs! For OpenAI compatibility, check the OpenAI Providers README or see the OpenAI-Compatible API Server section below.
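
To make the difference concrete, here is a minimal sketch of the OpenAI-compatible style. The provider class name and import path used below are assumptions for illustration only; check the OpenAI Providers README for the provider classes that actually ship with your version.

# Hedged sketch: the provider class name and import path are illustrative assumptions
from webscout.Provider.OPENAI import ChatGPT  # substitute any provider from the OpenAI Providers README

client = ChatGPT()  # OpenAI-compatible providers mirror the OpenAI client surface

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model names depend on the provider
    messages=[{"role": "user", "content": "Summarize what Webscout does in one sentence."}],
)
print(response.choices[0].message.content)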

[!NOTE] Webscout supports over 90 AI providers including: LLAMA, C4ai, Venice, Copilot, HuggingFaceChat, PerplexityLabs, DeepSeek, WiseCat, GROQ, OPENAI, GEMINI, DeepInfra, Meta, YEPCHAT, TypeGPT, ChatGPTClone, ExaAI, Claude, Anthropic, Cloudflare, AI21, Cerebras, and many more. All providers follow similar usage patterns with consistent interfaces.

🚀 Features

Search & AI

  • Comprehensive Search: Leverage Google, DuckDuckGo, and Yep for diverse search results
  • AI Powerhouse: Access and interact with various AI models through three compatibility options:
    • Native API: Use Webscout's native interfaces for providers like OpenAI, Cohere, Gemini, and many more
    • OpenAI-Compatible Providers: Seamlessly integrate with various AI providers using standardized OpenAI-compatible interfaces
    • Local LLMs with Inferno: Run local models with an OpenAI-compatible server (now available as a standalone package)
  • AI Search: AI-powered search engines with advanced capabilities

Media & Content Tools

  • YouTube Toolkit: Advanced YouTube video and transcript management with multi-language support
  • Text-to-Speech (TTS): Convert text into natural-sounding speech using multiple AI-powered providers
  • Text-to-Image: Generate high-quality images using a wide range of AI art providers
  • Weather Tools: Retrieve detailed weather information for any location

Developer Tools

  • GitAPI: Powerful GitHub data extraction toolkit that requires no authentication for public data
  • SwiftCLI: A powerful and elegant CLI framework for beautiful command-line interfaces
  • LitPrinter: Styled console output with rich formatting and colors
  • LitLogger: Simplified logging with customizable formats and color schemes
  • LitAgent: Modern user agent generator that keeps your requests undetectable
  • Scout: Advanced web parsing and crawling library with intelligent HTML/XML parsing
  • Inferno: Run local LLMs with an OpenAI-compatible API and interactive CLI (now a standalone package: pip install inferno-llm)
  • GGUF Conversion: Convert and quantize Hugging Face models to GGUF format

Privacy & Utilities

  • Tempmail & Temp Number: Generate temporary email addresses and phone numbers
  • Awesome Prompts: Curated collection of system prompts for specialized AI personas

⚙️ Installation

Webscout supports multiple installation methods to fit your workflow:

📦 Standard Installation

# Install from PyPI
pip install -U webscout

# Install with API server dependencies
pip install -U "webscout[api]"

# Install with development dependencies
pip install -U "webscout[dev]"

UV is a fast Python package manager. Webscout has full UV support:

# Install UV first (if not already installed)
pip install uv

# Install Webscout with UV
uv add webscout

# Install with API dependencies
uv add "webscout[api]"

# Run Webscout directly with UV (no installation needed)
uv run webscout --help

# Run with API dependencies
uv run --extra api webscout-server

# Install as a UV tool for global access
uv tool install webscout

# Use UV tool commands
webscout --help
webscout-server --help

🔧 Development Installation

# Clone the repository
git clone https://github.com/OEvortex/Webscout.git
cd Webscout

# Install in development mode with UV
uv sync --extra dev --extra api

# Or with pip
pip install -e ".[dev,api]"

🐳 Docker Installation

# Pull and run the Docker image
docker pull oevortex/webscout:latest
docker run -it oevortex/webscout:latest

📱 Quick Start Commands

After installation, you can immediately start using Webscout:

# Check version
webscout version

# Search the web
webscout text -k "python programming"

# Start API server
webscout-server

# Get help
webscout --help

🖥️ Command Line Interface

Webscout provides a powerful command-line interface for quick access to its features. You can use it in multiple ways:

After installing with uv tool install webscout or pip install webscout:

# Get help
webscout --help

# Start API server
webscout-server

🔧 UV Run Commands (No Installation Required)

# Run directly with UV (downloads and runs automatically)
uv run webscout --help
uv run --extra api webscout-server

📦 Python Module Commands

# Traditional Python module execution
python -m webscout --help
python -m webscout.client  # Start API server

🔍 Web Search Commands

| Command | Description | Example |
|---------|-------------|---------|
| webscout text -k "query" | Perform a text search | webscout text -k "python programming" |
| webscout answers -k "query" | Get instant answers | webscout answers -k "what is AI" |
| webscout images -k "query" | Search for images | webscout images -k "nature photography" |
| webscout videos -k "query" | Search for videos | webscout videos -k "python tutorials" |
| webscout news -k "query" | Search for news articles | webscout news -k "technology trends" |
| webscout maps -k "query" | Perform a maps search | webscout maps -k "restaurants near me" |
| webscout translate -k "text" | Translate text | webscout translate -k "hello world" |
| webscout suggestions -k "query" | Get search suggestions | webscout suggestions -k "how to" |
| webscout weather -l "location" | Get weather information | webscout weather -l "New York" |
| webscout version | Display the current version | webscout version |

Google Search Commands:

| Command | Description | Example |
|---------|-------------|---------|
| webscout google_text -k "query" | Google text search | webscout google_text -k "machine learning" |
| webscout google_news -k "query" | Google news search | webscout google_news -k "AI breakthrough" |
| webscout google_suggestions -q "query" | Google suggestions | webscout google_suggestions -q "python" |

Yep Search Commands:

| Command | Description | Example |
|---------|-------------|---------|
| webscout yep_text -k "query" | Yep text search | webscout yep_text -k "web development" |
| webscout yep_images -k "query" | Yep image search | webscout yep_images -k "landscapes" |
| webscout yep_suggestions -q "query" | Yep suggestions | webscout yep_suggestions -q "javascript" |

Inferno LLM Commands

Inferno is now a standalone package. Install it separately with:

pip install inferno-llm

After installation, you can use its CLI for managing and using local LLMs:

inferno --help

| Command | Description |
|---------|-------------|
| inferno pull <model> | Download a model from Hugging Face |
| inferno list | List downloaded models |
| inferno serve <model> | Start a model server with OpenAI-compatible API |
| inferno run <model> | Chat with a model interactively |
| inferno remove <model> | Remove a downloaded model |
| inferno version | Show version information |

For more information, visit the Inferno GitHub repository or PyPI package page.
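
Because inferno serve exposes an OpenAI-compatible endpoint, any OpenAI client can talk to it. The sketch below assumes the server listens at http://localhost:8000/v1 and that the model name matches one you have pulled; use the address and model name that inferno serve actually reports.

# Hedged sketch: base_url and model name are assumptions; use what `inferno serve` reports
from openai import OpenAI

client = OpenAI(
    api_key="not-needed-for-local",       # local servers typically ignore the key
    base_url="http://localhost:8000/v1",  # assumed address of the Inferno server
)

response = client.chat.completions.create(
    model="qwen2.5-0.5b-instruct",  # illustrative; use a model downloaded with `inferno pull`
    messages=[{"role": "user", "content": "Hello from Inferno!"}],
)
print(response.choices[0].message.content)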

[!NOTE] Hardware requirements for running models with Inferno:

  • Around 2 GB of RAM for 1B models
  • Around 4 GB of RAM for 3B models
  • At least 8 GB of RAM for 7B models
  • 16 GB of RAM for 13B models
  • 32 GB of RAM for 33B models
  • GPU acceleration is recommended for better performance

🔄 OpenAI-Compatible API Server

Webscout includes an OpenAI-compatible API server that allows you to use any supported provider with tools and applications designed for OpenAI's API.

Starting the API Server

# Start with default settings (port 8000)
webscout-server

# Start with custom port
webscout-server --port 8080

# Start with API key authentication
webscout-server --api-key "your-secret-key"

# Specify a default provider
webscout-server --default-provider "Claude"

# Run in debug mode
webscout-server --debug

# Get help for all options
webscout-server --help

Alternative Methods

# Using UV (no installation required)
uv run --extra api webscout-server

# Using Python module
python -m webscout.client

# Legacy method (still supported)
python -m webscout.Provider.OPENAI.api

From Python Code

# Method 1: Using the helper function
from webscout.client import start_server  # <--- Now recommended

# Start with default settings
start_server()

# Start with custom settings
start_server(port=8080, api_key="your-secret-key", default_provider="Claude")

# Method 2: Using the run_api function for more control
from webscout.client import run_api

run_api(
    host="0.0.0.0",
    debug=True
)

Using the API

Once the server is running, you can use it with any OpenAI client library or tool:

# Using the OpenAI Python client
from openai import OpenAI

client = OpenAI(
    api_key="your-secret-key",  # Only needed if you set an API key
    base_url="http://localhost:8000/v1"  # Point to your local server
)

# Chat completion
response = client.chat.completions.create(
    model="gpt-4",  # This can be any model name registered with Webscout
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"}
    ]
)

print(response.choices[0].message.content)

Using with cURL

# Basic chat completion request
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-secret-key" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'

# List available models
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer your-secret-key"

Available Endpoints

  • GET /v1/models - List all available models
  • GET /v1/models/{model_name} - Get information about a specific model
  • POST /v1/chat/completions - Create a chat completion
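
As a quick smoke test of these endpoints, you can query them with plain HTTP. This sketch assumes the default port 8000; the Authorization header is only needed if the server was started with --api-key, and the model name is illustrative.

# Hedged sketch: adjust the base URL, API key, and model name to match your server
import requests

BASE_URL = "http://localhost:8000/v1"
headers = {"Authorization": "Bearer your-secret-key"}  # omit if no --api-key was set

# List every model the server exposes
print(requests.get(f"{BASE_URL}/models", headers=headers, timeout=10).json())

# Inspect a single model (illustrative name)
print(requests.get(f"{BASE_URL}/models/gpt-4", headers=headers, timeout=10).json())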

🔍 Search Engines

Webscout provides multiple search engine interfaces for diverse search capabilities.

YepSearch - Yep.com Interface

from webscout import YepSearch

# Initialize YepSearch
yep = YepSearch(
    timeout=20,  # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True   # Optional: SSL verification
)

# Text Search
text_results = yep.text(
    keywords="artificial intelligence",
    region="all",           # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)

# Image Search
image_results = yep.images(
    keywords="nature photography",
    region="all",
    safesearch="moderate",
    max_results=10
)

# Get search suggestions
suggestions = yep.suggestions("hist")

GoogleSearch - Google Interface

from webscout import GoogleSearch

# Initialize GoogleSearch
google = GoogleSearch(
    timeout=10,  # Optional: Set custom timeout
    proxies=None,  # Optional: Use proxies
    verify=True   # Optional: SSL verification
)

# Text Search
text_results = google.text(
    keywords="artificial intelligence",
    region="us",           # Optional: Region for results
    safesearch="moderate",  # Optional: "on", "moderate", "off"
    max_results=10          # Optional: Limit number of results
)
for result in text_results:
    print(f"Title: {result.title}")
    print(f"URL: {result.url}")
    print(f"Description: {result.description}")

# News Search
news_results = google.news(
    keywords="technology trends",
    region="us",
    safesearch="moderate",
    max_results=5
)

# Get search suggestions
suggestions = google.suggestions("how to")

# Legacy usage is still supported
from webscout import search
results = search("Python programming", num_results=5)

🦆 DuckDuckGo Search with WEBS and AsyncWEBS

Webscout provides powerful interfaces to DuckDuckGo's search capabilities through the WEBS and AsyncWEBS classes.

Synchronous Usage with WEBS

from webscout import WEBS

# Use as a context manager for proper resource management
with WEBS() as webs:
    # Simple text search
    results = webs.text("python programming", max_results=5)
    for result in results:
        print(f"Title: {result['title']}\nURL: {result['url']}")

Asynchronous Usage with AsyncWEBS

import asyncio
from webscout import AsyncWEBS

async def search_multiple_terms(search_terms):
    async with AsyncWEBS() as webs:
        # Create tasks for each search term
        tasks = [webs.text(term, max_results=5) for term in search_terms]
        # Run all searches concurrently
        results = await asyncio.gather(*tasks)
        return results

async def main():
    terms = ["python", "javascript", "machine learning"]
    all_results = await search_multiple_terms(terms)

    # Process results
    for i, term_results in enumerate(all_results):
        print(f"Results for '{terms[i]}':\n")
        for result in term_results:
            print(f"- {result['title']}")
        print("\n")

# Run the async function
asyncio.run(main())

[!TIP] Always use these classes with a context manager (with statement) to ensure proper resource management and cleanup.

💻 WEBS API Reference

The WEBS class provides comprehensive access to DuckDuckGo's search capabilities through a clean, intuitive API.

Available Search Methods

| Method | Description | Example |
|--------|-------------|---------|
| text() | General web search | webs.text('python programming') |
| answers() | Instant answers | webs.answers('population of france') |
| images() | Image search | webs.images('nature photography') |
| videos() | Video search | webs.videos('documentary') |
| news() | News articles | webs.news('technology') |
| maps() | Location search | webs.maps('restaurants', place='new york') |
| translate() | Text translation | webs.translate('hello', to='es') |
| suggestions() | Search suggestions | webs.suggestions('how to') |
| weather() | Weather information | webs.weather('london') |

Example: Text Search

from webscout import WEBS

with WEBS() as webs:
    results = webs.text(
        'artificial intelligence',
        region='wt-wt',        # Optional: Region for results
        safesearch='off',      # Optional: 'on', 'moderate', 'off'
        timelimit='y',         # Optional: Time limit ('d'=day, 'w'=week, 'm'=month, 'y'=year)
        max_results=10         # Optional: Limit number of results
    )

    for result in results:
        print(f"Title: {result['title']}")
        print(f"URL: {result['url']}")
        print(f"Description: {result['body']}\n")

Example: News Search with Formatting

from webscout import WEBS
import datetime

def fetch_formatted_news(keywords, timelimit='d', max_results=20):
    """Fetch and format news articles"""
    with WEBS() as webs:
        # Get news results
        news_results = webs.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,  # 'd'=day, 'w'=week, 'm'=month
            max_results=max_results
        )

        # Format the results
        formatted_news = []
        for i, item in enumerate(news_results, 1):
            # Format the date
            date = datetime.datetime.fromisoformat(item['date']).strftime('%B %d, %Y')

            # Create formatted entry
            entry = f"{i}. {item['title']}\n"
            entry += f"   Published: {date}\n"
            entry += f"   {item['body']}\n"
            entry += f"   URL: {item['url']}\n"

            formatted_news.append(entry)

        return formatted_news

# Example usage
news = fetch_formatted_news('artificial intelligence', timelimit='w', max_results=5)
print('\n'.join(news))

Example: Weather Information

from webscout import WEBS

with WEBS() as webs:
    # Get weather for a location
    weather = webs.weather("New York")

    # Access weather data
    if weather:
        print(f"Location: {weather.get('location', 'Unknown')}")
        print(f"Temperature: {weather.get('temperature', 'N/A')}")
        print(f"Conditions: {weather.get('condition', 'N/A')}")

🤖 AI Models and Voices

Webscout provides easy access to a wide range of AI models and voice options.

LLM Models

Access and manage Large Language Models with Webscout's model utilities.

from webscout import model
from rich import print

# List all available LLM models
all_models = model.llm.list()
print(f"Total available models: {len(all_models)}")

# Get a summary of models by provider
summary = model.llm.summary()
print("Models by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} models")

# Get models for a specific provider
provider_name = "PerplexityLabs"
available_models = model.llm.get(provider_name)
print(f"\n{provider_name} models:")
if isinstance(available_models, list):
    for i, model_name in enumerate(available_models, 1):
        print(f"  {i}. {model_name}")
else:
    print(f"  {available_models}")

TTS Voices

Access and manage Text-to-Speech voices across multiple providers.

from webscout import model
from rich import print

# List all available TTS voices
all_voices = model.tts.list()
print(f"Total available voices: {len(all_voices)}")

# Get a summary of voices by provider
summary = model.tts.summary()
print("\nVoices by provider:")
for provider, count in summary.items():
    print(f"  {provider}: {count} voices")

# Get voices for a specific provider
provider_name = "ElevenlabsTTS"
available_voices = model.tts.get(provider_name)
print(f"\n{provider_name} voices:")
if isinstance(available_voices, dict):
    for voice_name, voice_id in list(available_voices.items())[:5]:  # Show first 5 voices
        print(f"  - {voice_name}: {voice_id}")
    if len(available_voices) > 5:
        print(f"  ... and {len(available_voices) - 5} more")

💬 AI Chat Providers

Webscout offers a comprehensive collection of AI chat providers, giving you access to various language models through a consistent interface.

| Provider | Description | Key Features |
|----------|-------------|--------------|
| OPENAI | OpenAI's models | GPT-3.5, GPT-4, tool calling |
| GEMINI | Google's Gemini models | Web search capabilities |
| Meta | Meta's AI assistant | Image generation, web search |
| GROQ | Fast inference platform | High-speed inference, tool calling |
| LLAMA | Meta's Llama models | Open weights models |
| DeepInfra | Various open models | Multiple model options |
| Cohere | Cohere's language models | Command models |
| PerplexityLabs | Perplexity AI | Web search integration |
| YEPCHAT | Yep.com's AI | Streaming responses |
| ChatGPTClone | ChatGPT-like interface | Multiple model options |
| TypeGPT | TypeChat models | Multiple model options |

Example: Using Meta AI

from webscout import Meta

# For basic usage (no authentication required)
meta_ai = Meta()

# Simple text prompt
response = meta_ai.chat("What is the capital of France?")
print(response)

# For authenticated usage with web search and image generation
meta_ai = Meta(fb_email="your_email@example.com", fb_password="your_password")

# Text prompt with web search
response = meta_ai.ask("What are the latest developments in quantum computing?")
print(response["message"])
print("Sources:", response["sources"])

# Image generation
response = meta_ai.ask("Create an image of a futuristic city")
for media in response.get("media", []):
    print(media["url"])

Example: GROQ with Tool Calling

from webscout import GROQ, WEBS
import json

# Initialize GROQ client
client = GROQ(api_key="your_api_key")

# Define helper functions
def calculate(expression):
    """Evaluate a mathematical expression"""
    try:
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})

def search(query):
    """Perform a web search"""
    try:
        results = WEBS().text(query, max_results=3)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})

# Register functions with GROQ
client.add_function("calculate", calculate)
client.add_function("search", search)

# Define tool specifications
tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a web search",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query"
                    }
                },
                "required": ["query"]
            }
        }
    }
]

# Use the tools
response = client.chat("What is 25 * 4 + 10?", tools=tools)
print(response)

response = client.chat("Find information about quantum computing", tools=tools)
print(response)

GGUF Model Conversion

Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for offline use.

from webscout.Extra.gguf import ModelConverter

# Create a converter instance
converter = ModelConverter(
    model_id="mistralai/Mistral-7B-Instruct-v0.2",  # Hugging Face model ID
    quantization_methods="q4_k_m"                  # Quantization method
)

# Run the conversion
converter.convert()

Available Quantization Methods

| Method | Description |
|--------|-------------|
| fp16 | 16-bit floating point - maximum accuracy, largest size |
| q2_k | 2-bit quantization (smallest size, lowest accuracy) |
| q3_k_l | 3-bit quantization (large) - balanced for size/accuracy |
| q3_k_m | 3-bit quantization (medium) - good balance for most use cases |
| q3_k_s | 3-bit quantization (small) - optimized for speed |
| q4_0 | 4-bit quantization (version 0) - standard 4-bit compression |
| q4_1 | 4-bit quantization (version 1) - improved accuracy over q4_0 |
| q4_k_m | 4-bit quantization (medium) - balanced for most models |
| q4_k_s | 4-bit quantization (small) - optimized for speed |
| q5_0 | 5-bit quantization (version 0) - high accuracy, larger size |
| q5_1 | 5-bit quantization (version 1) - improved accuracy over q5_0 |
| q5_k_m | 5-bit quantization (medium) - best balance for quality/size |
| q5_k_s | 5-bit quantization (small) - optimized for speed |
| q6_k | 6-bit quantization - highest accuracy, largest size |
| q8_0 | 8-bit quantization - maximum accuracy, largest size |

Command Line Usage

python -m webscout.Extra.gguf convert -m "mistralai/Mistral-7B-Instruct-v0.2" -q "q4_k_m"

🤝 Contributing

Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:

  • Fork the repository
  • Create a new branch for your feature or bug fix
  • Make your changes and commit them with descriptive messages
  • Push your branch to your forked repository
  • Submit a pull request to the main repository

🙏 Acknowledgments

  • All the amazing developers who have contributed to the project
  • The open-source community for their support and inspiration

Made with ❤️ by the Webscout team
