# WEBSCOUT 🕵️

Search the web with Google, DuckDuckGo, and Phind.com; access AI models; transcribe YouTube videos; generate temporary emails and phone numbers; convert text to speech; use WebAI (terminal GPT and open interpreter); run offline LLMs; and much more!
## 🚀 Features

- Comprehensive Search: Leverage Google and DuckDuckGo for diverse search results.
- AI Powerhouse: Access and interact with various AI models, including OpenAI, Cohere, and more.
- YouTube Toolkit: Advanced YouTube video and transcript management with multi-language support, versatile downloading, and intelligent data extraction.
- Tempmail & Temp Number: Generate temporary email addresses and phone numbers for enhanced privacy.
- Text-to-Speech (TTS): Convert text into natural-sounding speech using multiple AI-powered providers such as ElevenLabs, StreamElements, and Voicepods.
- Offline LLMs: Utilize powerful language models offline with GGUF support.
- Extensive Provider Ecosystem: Explore a vast collection of providers, including BasedGPT, DeepSeek, and many others.
- Local LLM Execution: Run GGUF models locally with minimal configuration.
- Rawdog Scripting: Execute Python scripts directly within your terminal using the `rawdog` feature.
- GGUF Conversion & Quantization: Convert and quantize Hugging Face models to GGUF format.
- Autollama: Download Hugging Face models and automatically convert them for Ollama compatibility.
- Function Calling (Beta): Experiment with function calling capabilities for enhanced AI interactions.
- SwiftCLI: A powerful and elegant CLI framework that makes it easy to create beautiful command-line interfaces.
- LitPrinter: Provides beautiful, styled console output with rich formatting and colors.
- LitLogger: Simplifies logging with customizable formats and color schemes.
- LitAgent: A modern user-agent generator that keeps your requests fresh and undetectable.
- Text-to-Image: Generate high-quality images using a wide range of AI art providers.
- MarkdownLite: Powerful HTML-to-Markdown conversion library with advanced parsing and structured output.
## ⚙️ Installation

```shell
pip install -U webscout
```
## 🖥️ CLI Usage

```shell
python -m webscout --help
```

| Command | Description |
|---|---|
| `python -m webscout answers -k Text` | Perform an answers search using Webscout. |
| `python -m webscout images -k Text` | Perform an images search using Webscout. |
| `python -m webscout maps -k Text` | Perform a maps search using Webscout. |
| `python -m webscout news -k Text` | Perform a news search using Webscout. |
| `python -m webscout suggestions -k Text` | Perform a suggestions search using Webscout. |
| `python -m webscout text -k Text` | Perform a text search using Webscout. |
| `python -m webscout translate -k Text` | Translate text using Webscout. |
| `python -m webscout version` | Print the version of the program. |
| `python -m webscout videos -k Text` | Perform a videos search using the DuckDuckGo API. |
## 🌍 Regions
```text
xa-ar for Arabia
xa-en for Arabia (en)
ar-es for Argentina
au-en for Australia
at-de for Austria
be-fr for Belgium (fr)
be-nl for Belgium (nl)
br-pt for Brazil
bg-bg for Bulgaria
ca-en for Canada
ca-fr for Canada (fr)
ct-ca for Catalan
cl-es for Chile
cn-zh for China
co-es for Colombia
hr-hr for Croatia
cz-cs for Czech Republic
dk-da for Denmark
ee-et for Estonia
fi-fi for Finland
fr-fr for France
de-de for Germany
gr-el for Greece
hk-tzh for Hong Kong
hu-hu for Hungary
in-en for India
id-id for Indonesia
id-en for Indonesia (en)
ie-en for Ireland
il-he for Israel
it-it for Italy
jp-jp for Japan
kr-kr for Korea
lv-lv for Latvia
lt-lt for Lithuania
xl-es for Latin America
my-ms for Malaysia
my-en for Malaysia (en)
mx-es for Mexico
nl-nl for Netherlands
nz-en for New Zealand
no-no for Norway
pe-es for Peru
ph-en for Philippines
ph-tl for Philippines (tl)
pl-pl for Poland
pt-pt for Portugal
ro-ro for Romania
ru-ru for Russia
sg-en for Singapore
sk-sk for Slovak Republic
sl-sl for Slovenia
za-en for South Africa
es-es for Spain
se-sv for Sweden
ch-de for Switzerland (de)
ch-fr for Switzerland (fr)
ch-it for Switzerland (it)
tw-tzh for Taiwan
th-th for Thailand
tr-tr for Turkey
ua-uk for Ukraine
uk-en for United Kingdom
us-en for United States
ue-es for United States (es)
ve-es for Venezuela
vn-vi for Vietnam
wt-wt for No region
```
## ☀️ Weather

1. Weather

```python
from webscout import weather as w

weather = w.get("Qazigund")
w.print_weather(weather)
```

2. Weather ASCII

```python
from webscout import weather_ascii as w

weather = w.get("Qazigund")
print(weather)
```
## ✉️ TempMail and VNEngine

```python
import json
import asyncio

from webscout import VNEngine
from webscout import TempMail


async def main():
    # Temporary phone numbers
    vn = VNEngine()
    countries = vn.get_online_countries()
    if countries:
        country = countries[0]['country']
        numbers = vn.get_country_numbers(country)
        if numbers:
            number = numbers[0]['full_number']
            inbox = vn.get_number_inbox(country, number)
            json_data = json.dumps(inbox, ensure_ascii=False, indent=4)
            print(json_data)

    # Temporary email
    async with TempMail() as client:
        domains = await client.get_domains()
        print("Available Domains:", domains)

        email_response = await client.create_email(alias="testuser")
        print("Created Email:", email_response)

        messages = await client.get_messages(email_response.email)
        print("Messages:", messages)

        await client.delete_email(email_response.email, email_response.token)
        print("Email Deleted")

if __name__ == "__main__":
    asyncio.run(main())
```
## 🔍 GoogleS (formerly DWEBS)

```python
from webscout import GoogleS
from rich import print

searcher = GoogleS()
results = searcher.search("HelpingAI-9B", max_results=20, extract_text=False, max_text_length=200)
for result in results:
    print(result)
```
## 🦆 WEBS and AsyncWEBS

The `WEBS` and `AsyncWEBS` classes retrieve search results from DuckDuckGo.com. The `AsyncWEBS` class supports asynchronous operation via Python's `asyncio` library.

Instances of `WEBS` and `AsyncWEBS` accept the same optional initialization arguments.
Example - WEBS:

```python
from webscout import WEBS

R = WEBS().text("python programming", max_results=5)
print(R)
```
Example - AsyncWEBS:

```python
import asyncio
import logging
import sys
from itertools import chain
from random import shuffle

import requests
from webscout import AsyncWEBS

# Proxy support is optional; set to None to connect directly.
proxies = None

if sys.platform.lower().startswith("win"):
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())


def get_words():
    word_site = "https://www.mit.edu/~ecprice/wordlist.10000"
    resp = requests.get(word_site)
    words = resp.text.splitlines()
    return words


async def aget_results(word):
    async with AsyncWEBS(proxies=proxies) as webs:
        results = await webs.text(word, max_results=None)
        return results


async def main():
    words = get_words()
    shuffle(words)
    tasks = [aget_results(word) for word in words[:10]]
    results = await asyncio.gather(*tasks)
    print("Done")
    for r in chain.from_iterable(results):
        print(r)

if __name__ == "__main__":
    logging.basicConfig(level=logging.DEBUG)
    asyncio.run(main())
```
Important Note: The `WEBS` and `AsyncWEBS` classes should always be used as context managers (`with` statement). This ensures proper resource management and cleanup, as the context manager automatically opens and closes the HTTP client connection.
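To see why the context-manager form guarantees cleanup, here is a minimal, self-contained sketch using a toy stand-in class (purely illustrative, not part of webscout): cleanup code in `__exit__` runs even when the body raises.

```python
class FakeClient:
    """Hypothetical stand-in for an HTTP-backed client like WEBS."""

    def __init__(self):
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        # Cleanup runs whether or not the body raised.
        self.closed = True
        return False  # do not suppress exceptions


client = FakeClient()
try:
    with client:
        raise RuntimeError("simulated request failure")
except RuntimeError:
    pass

print(client.closed)  # → True: the "connection" was closed despite the error
```

Without `with`, an exception between opening and closing the client would leave the connection dangling.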
## ⚠️ Exceptions

- `WebscoutE`: Raised when a generic exception occurs during an API request.
## 💻 Usage of WEBS

1. text() - Text Search by DuckDuckGo.com

```python
from webscout import WEBS

with WEBS() as webs:
    for r in webs.text('live free or die', region='wt-wt', safesearch='off', timelimit='y', max_results=10):
        print(r)
```
2. answers() - Instant Answers by DuckDuckGo.com

```python
from webscout import WEBS

with WEBS() as webs:
    for r in webs.answers("sun"):
        print(r)
```
3. images() - Image Search by DuckDuckGo.com

```python
from webscout import WEBS

with WEBS() as webs:
    keywords = 'butterfly'
    webs_images_gen = webs.images(
        keywords,
        region="wt-wt",
        safesearch="off",
        size=None,
        type_image=None,
        layout=None,
        license_image=None,
        max_results=10,
    )
    for r in webs_images_gen:
        print(r)
```
4. videos() - Video Search by DuckDuckGo.com

```python
from webscout import WEBS

with WEBS() as webs:
    keywords = 'tesla'
    webs_videos_gen = webs.videos(
        keywords,
        region="wt-wt",
        safesearch="off",
        timelimit="w",
        resolution="high",
        duration="medium",
        max_results=10,
    )
    for r in webs_videos_gen:
        print(r)
```
5. news() - News Search by DuckDuckGo.com

```python
import datetime

from webscout import WEBS


def fetch_news(keywords, timelimit):
    news_list = []
    with WEBS() as webs_instance:
        WEBS_news_gen = webs_instance.news(
            keywords,
            region="wt-wt",
            safesearch="off",
            timelimit=timelimit,
            max_results=20
        )
        for r in WEBS_news_gen:
            # Reformat the ISO date into a human-readable form.
            r['date'] = datetime.datetime.fromisoformat(r['date']).strftime('%B %d, %Y')
            news_list.append(r)
    return news_list


def _format_headlines(news_list, max_headlines: int = 100):
    headlines = []
    for idx, news_item in enumerate(news_list):
        if idx >= max_headlines:
            break
        new_headline = f"{idx + 1}. {news_item['title'].strip()} "
        new_headline += f"(URL: {news_item['url'].strip()}) "
        new_headline += f"{news_item['body'].strip()}"
        new_headline += "\n"
        headlines.append(new_headline)
    return "\n".join(headlines)


keywords = 'latest AI news'
timelimit = 'd'
news_list = fetch_news(keywords, timelimit)
formatted_headlines = _format_headlines(news_list)
print(formatted_headlines)
```
6. maps() - Map Search by DuckDuckGo.com

```python
from webscout import WEBS

with WEBS() as webs:
    for r in webs.maps("school", place="anantnag", max_results=50):
        print(r)
```
7. translate() - Translation by DuckDuckGo.com

```python
from webscout import WEBS

with WEBS() as webs:
    keywords = 'school'
    r = webs.translate(keywords, to="hi")
    print(r)
```
8. suggestions() - Suggestions by DuckDuckGo.com

```python
from webscout import WEBS

with WEBS() as webs:
    for r in webs.suggestions("fly"):
        print(r)
```
## 🎭 ALL Acts

Webscout supported acts:
- Free-mode
- Linux Terminal
- English Translator and Improver
- `position` Interviewer
- JavaScript Console
- Excel Sheet
- English Pronunciation Helper
- Spoken English Teacher and Improver
- Travel Guide
- Plagiarism Checker
- Character from Movie/Book/Anything
- Advertiser
- Storyteller
- Football Commentator
- Stand-up Comedian
- Motivational Coach
- Composer
- Debater
- Debate Coach
- Screenwriter
- Novelist
- Movie Critic
- Relationship Coach
- Poet
- Rapper
- Motivational Speaker
- Philosophy Teacher
- Philosopher
- Math Teacher
- AI Writing Tutor
- UX/UI Developer
- Cyber Security Specialist
- Recruiter
- Life Coach
- Etymologist
- Commentariat
- Magician
- Career Counselor
- Pet Behaviorist
- Personal Trainer
- Mental Health Adviser
- Real Estate Agent
- Logistician
- Dentist
- Web Design Consultant
- AI Assisted Doctor
- Doctor
- Accountant
- Chef
- Automobile Mechanic
- Artist Advisor
- Financial Analyst
- Investment Manager
- Tea-Taster
- Interior Decorator
- Florist
- Self-Help Book
- Gnomist
- Aphorism Book
- Text Based Adventure Game
- AI Trying to Escape the Box
- Fancy Title Generator
- Statistician
- Prompt Generator
- Instructor in a School
- SQL terminal
- Dietitian
- Psychologist
- Smart Domain Name Generator
- Tech Reviewer
- Developer Relations consultant
- Academician
- IT Architect
- Lunatic
- Gaslighter
- Fallacy Finder
- Journal Reviewer
- DIY Expert
- Social Media Influencer
- Socrat
- Socratic Method
- Educational Content Creator
- Yogi
- Essay Writer
- Social Media Manager
- Elocutionist
- Scientific Data Visualizer
- Car Navigation System
- Hypnotherapist
- Historian
- Astrologer
- Film Critic
- Classical Music Composer
- Journalist
- Digital Art Gallery Guide
- Public Speaking Coach
- Makeup Artist
- Babysitter
- Tech Writer
- Ascii Artist
- Python interpreter
- Synonym finder
- Personal Shopper
- Food Critic
- Virtual Doctor
- Personal Chef
- Legal Advisor
- Personal Stylist
- Machine Learning Engineer
- Biblical Translator
- SVG designer
- IT Expert
- Chess Player
- Midjourney Prompt Generator
- Fullstack Software Developer
- Mathematician
- Regex Generator
- Time Travel Guide
- Dream Interpreter
- Talent Coach
- R programming Interpreter
- StackOverflow Post
- Emoji Translator
- PHP Interpreter
- Emergency Response Professional
- Fill in the Blank Worksheets Generator
- Software Quality Assurance Tester
- Tic-Tac-Toe Game
- Password Generator
- New Language Creator
- Web Browser
- Senior Frontend Developer
- Solr Search Engine
- Startup Idea Generator
- Spongebob's Magic Conch Shell
- Language Detector
- Salesperson
- Commit Message Generator
- Chief Executive Officer
- Diagram Generator
- Speech-Language Pathologist (SLP)
- Startup Tech Lawyer
- Title Generator for written pieces
- Product Manager
- Drunk Person
- Mathematical History Teacher
- Song Recommender
- Cover Letter
- Technology Transferer
- Unconstrained AI model DAN
- Gomoku player
- Proofreader
- Buddha
- Muslim imam
- Chemical reactor
- Friend
- Python Interpreter
- ChatGPT prompt generator
- Wikipedia page
- Japanese Kanji quiz machine
- note-taking assistant
- `language` Literary Critic
- Cheap Travel Ticket Advisor
- DALL-E
- MathBot
- DAN-1
- DAN
- STAN
- DUDE
- Mongo Tom
- LAD
- EvilBot
- NeoGPT
- Astute
- AIM
- CAN
- FunnyGPT
- CreativeGPT
- BetterDAN
- GPT-4
- Wheatley
- Evil Confidant
- DAN 8.6
- Hypothetical response
- BH
- Text Continuation
- Dude v3
- SDA (Superior DAN)
- AntiGPT
- BasedGPT v2
- DevMode + Ranti
- KEVIN
- GPT-4 Simulator
- UCAR
- Dan 8.6
- 3-Liner
- M78
- Maximum
- BasedGPT
- Confronting personalities
- Ron
- UnGPT
- BasedBOB
- AntiGPT v2
- Oppo
- FR3D
- NRAF
- NECO
- MAN
- Eva
- Meanie
- Dev Mode v2
- Evil Chad 2.1
- Universal Jailbreak
- PersonGPT
- BISH
- DAN 11.0
- Aligned
- VIOLET
- TranslatorBot
- JailBreak
- Moralizing Rant
- Mr. Blonde
- New DAN
- GPT-4REAL
- DeltaGPT
- SWITCH
- Jedi Mind Trick
- DAN 9.0
- Dev Mode (Compact)
- OMEGA
- Coach Bobby Knight
- LiveGPT
- DAN Jailbreak
- Cooper
- Steve
- DAN 5.0
- Axies
- OMNI
- Burple
- JOHN
- An Ethereum Developer
- SEO Prompt
- Prompt Enhancer
- Data Scientist
- League of Legends Player
Note: Some "acts" use placeholders such as `position` or `language`, which should be replaced with a specific value when using the prompt.
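For example, a `position` placeholder can be filled with `str.format` before the prompt is sent (the template text below is illustrative, not the exact act wording):

```python
# Illustrative act template containing a `position` placeholder.
act_template = "I want you to act as an interviewer for the {position} position."

# Substitute a concrete value before using the prompt.
prompt = act_template.format(position="Machine Learning Engineer")
print(prompt)  # → I want you to act as an interviewer for the Machine Learning Engineer position.
```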
## 🗣️ Text to Speech - Voicepods, StreamElements

```python
from webscout import Voicepods

voicepods = Voicepods()
text = "Hello, this is a test of the Voicepods text-to-speech"

print("Generating audio...")
audio_file = voicepods.tts(text)

print("Playing audio...")
voicepods.play_audio(audio_file)
```
## 💬 Duckchat - Chat with LLM

```python
from webscout import WEBS as w

R = w().chat("Who are you", model='gpt-4o-mini')
print(R)
```
## 🔎 PhindSearch - Search using Phind.com

```python
from webscout import PhindSearch

ph = PhindSearch()
prompt = "write an essay on phind"
response = ph.ask(prompt)
message = ph.get_message(response)
print(message)
```

Using Phindv2:

```python
from webscout import Phindv2

ph = Phindv2()
prompt = ""
response = ph.ask(prompt)
message = ph.get_message(response)
print(message)
```
## ♊ Gemini - Search with Google Gemini

```python
from webscout import GEMINI
from rich import print

COOKIE_FILE = "cookies.json"
PROXIES = {}  # optional proxies dict

gemini = GEMINI(cookie_file=COOKIE_FILE, proxy=PROXIES)
response = gemini.chat("websearch about HelpingAI and who is its developer")
print(response)
```
## 💬 YEPCHAT

```python
from webscout import YEPCHAT

ai = YEPCHAT(Tools=False)
response = ai.chat(input(">>> "))
for chunk in response:
    print(chunk, end="", flush=True)
```

Example with tool calling:

```python
import datetime

from rich import print
from webscout import YEPCHAT


def get_current_time():
    return f"The current time is {datetime.datetime.now().strftime('%H:%M:%S')}"


def get_weather(location: str) -> str:
    return f"The weather in {location} is sunny."


ai = YEPCHAT(Tools=True)

ai.tool_registry.register_tool("get_current_time", get_current_time, "Gets the current time.")
ai.tool_registry.register_tool(
    "get_weather",
    get_weather,
    "Gets the weather for a given location.",
    parameters={
        "type": "object",
        "properties": {
            "location": {"title": "Location", "type": "string"}
        },
        "required": ["location"]
    },
)

response = ai.chat(input(">>> "))
for chunk in response:
    print(chunk, end="", flush=True)
```
## ⬛ BlackBox - Search/Chat with BlackBox

```python
from webscout import BLACKBOXAI
from rich import print

ai = BLACKBOXAI(
    is_conversation=True,
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
    model=None
)

prompt = "Tell me about India"
r = ai.chat(prompt)
print(r)
```
## ❓ PERPLEXITY - Search with PERPLEXITY

```python
from webscout import Perplexity
from rich import print

perplexity = Perplexity()
response = perplexity.chat(input(">>> "))
for chunk in response:
    print(chunk, end="", flush=True)
perplexity.close()
```
## 🤖 Meta AI - Chat with Meta AI

```python
from webscout import Meta
from rich import print

# Basic usage (no login)
meta_ai = Meta()
response = meta_ai.chat("What is the capital of France?")
print(response)

# Streaming responses
for chunk in meta_ai.chat("Tell me a story about a cat."):
    print(chunk, end="", flush=True)

# Authenticated usage (replace with your Facebook credentials)
fb_email = "abcd@abc.com"
fb_password = "qwertfdsa"
meta_ai = Meta(fb_email=fb_email, fb_password=fb_password)

response = meta_ai.ask("What is currently happening in Bangladesh in August 2024?")
print(response["message"])
print("Sources:", response["sources"])

# Image generation
response = meta_ai.ask("Create an image of a cat wearing a hat.")
print(response["message"])
for media in response["media"]:
    print(media["url"])
```
## KOBOLDAI

```python
from webscout import KOBOLDAI

koboldai = KOBOLDAI()
prompt = "What is the capital of France?"
response = koboldai.ask(prompt)
message = koboldai.get_message(response)
print(message)
```
## Reka - Chat with Reka

```python
from webscout import REKA

a = REKA(is_conversation=True, max_tokens=8000, timeout=30, api_key="")
prompt = "Tell me about India"
response_str = a.chat(prompt)
print(response_str)
```
## Cohere - Chat with Cohere

```python
from webscout import Cohere

a = Cohere(is_conversation=True, max_tokens=8000, timeout=30, api_key="")
prompt = "Tell me about India"
response_str = a.chat(prompt)
print(response_str)
```
## DeepSeek - Chat with DeepSeek

```python
from webscout import DeepSeek
from rich import print

ai = DeepSeek(
    is_conversation=True,
    api_key='cookie',
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
    model="deepseek_chat"
)

prompt = "Tell me about India"
r = ai.chat(prompt)
print(r)
## DeepInfra

```python
from webscout import DeepInfra

ai = DeepInfra(
    is_conversation=True,
    model="Qwen/Qwen2-72B-Instruct",
    max_tokens=800,
    timeout=30,
    intro=None,
    filepath=None,
    update_file=True,
    proxies={},
    history_offset=10250,
    act=None,
)

prompt = "What is the meaning of life?"
response = ai.ask(prompt)
message = ai.get_message(response)
print(message)
```
## GROQ

```python
from webscout import GROQ

ai = GROQ(api_key="")
response = ai.chat("What is the meaning of life?")
print(response)
```

Example with function calling:

```python
import json

from webscout import GROQ
from webscout import WEBS

client = GROQ(api_key="")
MODEL = 'llama3-groq-70b-8192-tool-use-preview'


def calculate(expression):
    """Evaluate a mathematical expression."""
    try:
        # Note: eval() runs arbitrary code; restrict input in real applications.
        result = eval(expression)
        return json.dumps({"result": result})
    except Exception as e:
        return json.dumps({"error": str(e)})


def search(query):
    """Perform a text search using DuckDuckGo.com."""
    try:
        results = WEBS().text(query, max_results=5)
        return json.dumps({"results": results})
    except Exception as e:
        return json.dumps({"error": str(e)})


client.add_function("calculate", calculate)
client.add_function("search", search)

tools = [
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "The mathematical expression to evaluate",
                    }
                },
                "required": ["expression"],
            },
        }
    },
    {
        "type": "function",
        "function": {
            "name": "search",
            "description": "Perform a text search using DuckDuckGo.com",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "The search query to execute",
                    }
                },
                "required": ["query"],
            },
        }
    }
]

user_prompt_calculate = "What is 25 * 4 + 10?"
response_calculate = client.chat(user_prompt_calculate, tools=tools)
print(response_calculate)

user_prompt_search = "Find information on HelpingAI and who is its developer"
response_search = client.chat(user_prompt_search, tools=tools)
print(response_search)
```
## LLAMA - Chat with Meta's Llama 3 70B

```python
from webscout import LLAMA

llama = LLAMA()
r = llama.chat("What is the meaning of life?")
print(r)
```
## AndiSearch

```python
from webscout import AndiSearch

a = AndiSearch()
print(a.chat("HelpingAI-9B"))
```
## 📞 Function Calling (Beta)

```python
from webscout import Julius, WEBS
from webscout.Agents.functioncall import FunctionCallingAgent
from rich import print


class FunctionExecutor:
    """Maps tool names chosen by the agent to local handler methods."""

    def __init__(self, llama):
        self.llama = llama

    def execute_web_search(self, arguments):
        query = arguments.get("query")
        if not query:
            return "Please provide a search query."
        with WEBS() as webs:
            search_results = webs.text(query, max_results=5)
        prompt = (
            f"Based on the following search results:\n\n{search_results}\n\n"
            f"Question: {query}\n\n"
            "Please provide a comprehensive answer to the question based on the search results above. "
            "Include relevant webpage URLs in your answer when appropriate. "
            "If the search results don't contain relevant information, please state that and provide the best answer you can based on your general knowledge."
        )
        return self.llama.chat(prompt)

    def execute_general_ai(self, arguments):
        question = arguments.get("question")
        if not question:
            return "Please provide a question."
        return self.llama.chat(question)

    def execute_UserDetail(self, arguments):
        name = arguments.get("name")
        age = arguments.get("age")
        return f"User details - Name: {name}, Age: {age}"


def main():
    tools = [
        {
            "type": "function",
            "function": {
                "name": "UserDetail",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "name": {"title": "Name", "type": "string"},
                        "age": {"title": "Age", "type": "integer"}
                    },
                    "required": ["name", "age"]
                }
            }
        },
        {
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Search the web for information.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "query": {
                            "type": "string",
                            "description": "The search query to be executed."
                        }
                    },
                    "required": ["query"]
                }
            }
        },
        {
            "type": "function",
            "function": {
                "name": "general_ai",
                "description": "Use general AI knowledge to answer the question",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "question": {"type": "string", "description": "The question to answer"}
                    },
                    "required": ["question"]
                }
            }
        }
    ]

    agent = FunctionCallingAgent(tools=tools)
    llama = Julius()
    function_executor = FunctionExecutor(llama)

    user_input = input(">>> ")
    function_call_data = agent.function_call_handler(user_input)
    print(f"Function Call Data: {function_call_data}")

    try:
        if "error" not in function_call_data:
            function_name = function_call_data.get("tool_name")
            arguments = function_call_data.get("tool_input", {})
            # Dispatch to the matching execute_<tool_name> handler.
            execute_function = getattr(function_executor, f"execute_{function_name}", None)
            if execute_function:
                result = execute_function(arguments)
                print("Function Execution Result:")
                for c in result:
                    print(c, end="", flush=True)
            else:
                print(f"Unknown function: {function_name}")
        else:
            print(f"Error: {function_call_data['error']}")
    except Exception as e:
        print(f"An error occurred: {str(e)}")


if __name__ == "__main__":
    main()
```
Other supported providers: LLAMA3, pizzagpt, RUBIKSAI, Koala, Darkai, AI4Chat, Farfalle, PIAI, Felo, Julius, YouChat, YEPCHAT, Cloudflare, TurboSeek, Editee, AI21, Chatify, Cerebras, X0GPT, Lepton, GEMINIAPI, Cleeai, Elmo, Genspark, Upstage, Free2GPT, Bing, DiscordRocks, GPTWeb, LlamaTutor, PromptRefine, AIUncensored, TutorAI, ChatGPTES, Bagoodex, ChatHub, AmigoChat, AIMathGPT, GaurishCerebras, NinjaChat, GeminiPro, Talkai, LLMChat, AskMyAI, Llama3Mitril, Marcus, PerplexityLabs, TypeGPT, Mhystical. Usage is similar to the other providers above.
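Since the providers broadly share one interface, that common shape can be sketched as a `typing.Protocol` (this Protocol is our illustration, not a class shipped by webscout):

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class ChatProvider(Protocol):
    """Illustrative shape shared by most webscout chat providers."""

    def ask(self, prompt: str) -> Any:
        """Send a prompt and return the raw provider response."""
        ...

    def chat(self, prompt: str) -> str:
        """Send a prompt and return the reply text."""
        ...


# Any object with matching methods satisfies the protocol structurally:
class DummyProvider:
    def ask(self, prompt: str) -> dict:
        return {"text": f"echo: {prompt}"}

    def chat(self, prompt: str) -> str:
        return self.ask(prompt)["text"]


print(isinstance(DummyProvider(), ChatProvider))  # → True
```

Knowing this shared shape makes it easy to swap one provider for another without changing surrounding code.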
## LLM

```python
from webscout.LLM import LLM, VLM

# Text model
llm = LLM("meta-llama/Meta-Llama-3-70B-Instruct")
response = llm.chat([{"role": "user", "content": "What's good?"}])

# Vision model
vlm = VLM("cogvlm-grounding-generalist")
response = vlm.chat([{
    "role": "user",
    "content": [
        {"type": "image", "image_url": "cool_pic.jpg"},
        {"type": "text", "text": "What's in this image?"}
    ]
}])
```
## 💻 Local-LLM

Webscout can now run GGUF models locally. You can download and run your favorite models with minimal configuration.

Example:

```python
from webscout.Local import *

model_path = download_model("Qwen/Qwen2.5-0.5B-Instruct-GGUF", "qwen2.5-0.5b-instruct-q2_k.gguf", token=None)
model = Model(model_path, n_gpu_layers=0, context_length=2048)
thread = Thread(model, format=chatml)
```
## 🐶 Local-rawdog

Webscout's local rawdog feature allows you to run Python scripts within your terminal prompt.

Example:

```python
import webscout.Local as ws
from webscout.Local.rawdog import RawDog
from webscout.Local.samplers import DefaultSampling
from webscout.Local.formats import chatml, AdvancedFormat
from webscout.Local.utils import download_model

# Download the model
repo_id = "YorkieOH10/granite-8b-code-instruct-Q8_0-GGUF"
filename = "granite-8b-code-instruct.Q8_0.gguf"
model_path = download_model(repo_id, filename, token='')

# Load the model
model = ws.Model(model_path, n_gpu_layers=10)

# Create a RawDog instance
rawdog = RawDog()

# Define the system prompt
chat_format = AdvancedFormat(chatml)
system_content = """
You are a command-line coding assistant called Rawdog that generates and auto-executes Python scripts.

A typical interaction goes like this:
1. The user gives you a natural language PROMPT.
2. You:
    i. Determine what needs to be done
    ii. Write a short Python SCRIPT to do it
    iii. Communicate back to the user by printing to the console in that SCRIPT
3. The compiler extracts the script and then runs it using exec(). If an exception is raised,
it will be sent back to you starting with "PREVIOUS SCRIPT EXCEPTION:".
4. In case of an exception, regenerate an error-free script.

If you need to review script outputs before completing the task, you can print the word "CONTINUE" at the end of your SCRIPT.
This can be useful for summarizing documents or technical readouts, reading instructions before
deciding what to do, or other tasks that require multi-step reasoning.
A typical 'CONTINUE' interaction looks like this:
1. The user gives you a natural language PROMPT.
2. You:
    i. Determine what needs to be done
    ii. Determine that you need to see the output of some subprocess call to complete the task
    iii. Write a short Python SCRIPT to print that and then print the word "CONTINUE"
3. The compiler:
    i. Checks and runs your SCRIPT
    ii. Captures the output and appends it to the conversation as "LAST SCRIPT OUTPUT:"
    iii. Finds the word "CONTINUE" and sends control back to you
4. You again:
    i. Look at the original PROMPT + the "LAST SCRIPT OUTPUT:" to determine what needs to be done
    ii. Write a short Python SCRIPT to do it
    iii. Communicate back to the user by printing to the console in that SCRIPT
5. The compiler...

Please follow these conventions carefully:
- Decline any tasks that seem dangerous, irreversible, or that you don't understand.
- Always review the full conversation prior to answering and maintain continuity.
- If asked for information, just print the information clearly and concisely.
- If asked to do something, print a concise summary of what you've done as confirmation.
- If asked a question, respond in a friendly, conversational way. Use programmatically-generated and natural language responses as appropriate.
- If you need clarification, return a SCRIPT that prints your question. In the next interaction, continue based on the user's response.
- Assume the user would like something concise. For example rather than printing a massive table, filter or summarize it to what's likely of interest.
- Actively clean up any temporary processes or files you use.
- When looking through files, use git as available to skip files, and skip hidden files (.env, .git, etc) by default.
- You can plot anything with matplotlib.
- ALWAYS Return your SCRIPT inside of a single pair of ``` delimiters. Only the console output of the first such SCRIPT is visible to the user, so make sure that it's complete and don't bother returning anything else.
"""
chat_format.override('system_content', lambda: system_content)

# Create a chat thread
thread = ws.Thread(model, format=chat_format, sampler=DefaultSampling)

# Interactive loop: send each prompt to the model and run the returned script
while True:
    prompt = input(">: ")
    if prompt.lower() == "q":
        break
    response = thread.send(prompt)
    script_output = rawdog.main(response)
    if script_output:
        print(script_output)
```
## GGUF

Webscout provides tools to convert and quantize Hugging Face models into the GGUF format for use with offline LLMs.

Example:

```python
from webscout.Extra import gguf

# Valid quantization methods:
# "q2_k", "q3_k_l", "q3_k_m", "q3_k_s",
# "q4_0", "q4_1", "q4_k_m", "q4_k_s",
# "q5_0", "q5_1", "q5_k_m", "q5_k_s",
# "q6_k", "q8_0"

gguf.convert(
    model_id="OEvortex/HelpingAI-Lite-1.5T",  # Hugging Face model ID
    username="Abhaykoul",                     # Hugging Face username
    token="hf_token_write",                   # Hugging Face write token
    quantization_methods="q4_k_m"             # quantization method(s)
)
```
## 🤖 Autollama

Webscout's `autollama` utility downloads a model from Hugging Face and then automatically makes it Ollama-ready.

```python
from webscout.Extra import autollama

model_path = "Vortex4ai/Jarvis-0.5B"
gguf_file = "test2-q4_k_m.gguf"

autollama.main(model_path, gguf_file)
```

Command-line usage:

- GGUF conversion:

  ```shell
  python -m webscout.Extra.gguf -m "OEvortex/HelpingAI-Lite-1.5T" -u "your_username" -t "your_hf_token" -q "q4_k_m,q5_k_m"
  ```

- Autollama:

  ```shell
  python -m webscout.Extra.autollama -m "OEvortex/HelpingAI-Lite-1.5T" -g "HelpingAI-Lite-1.5T.q4_k_m.gguf"
  ```

Note:

- Replace `"your_username"` and `"your_hf_token"` with your actual Hugging Face credentials.
- The `model_path` in `autollama` is the Hugging Face model ID, and `gguf_file` is the GGUF file ID.
## 🌐 Webai - Terminal GPT and an Open Interpreter

```shell
python -m webscout.webai webai --provider "phind" --rawdog
```
## 🤝 Contributing

Contributions are welcome! If you'd like to contribute to Webscout, please follow these steps:

1. Fork the repository.
2. Create a new branch for your feature or bug fix.
3. Make your changes and commit them with descriptive messages.
4. Push your branch to your forked repository.
5. Submit a pull request to the main repository.

## 🙏 Acknowledgments

- All the amazing developers who have contributed to the project!
- The open-source community for their support and inspiration.