
A Python package that provides easy-to-use utilities for working with various Large Language Models (LLMs) and Vision Models. 🎯 And everything is free! (It relies on the generous free tiers of several AI platforms.)
```bash
uv pip install free-llm-toolbox
```
Before using the library, you need to configure your API keys in a `.env` file:

```env
GROQ_API_KEY=your_groq_key
GITHUB_TOKEN=your_github_token
GOOGLE_API_KEY=your_google_key
SAMBANOVA_API_KEY=your_sambanova_key
CEREBRAS_API_KEY=your_cerebras_key
```
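The library presumably loads these variables at startup (e.g. via `python-dotenv`). As an illustration of the `.env` format only, here is a minimal stdlib-only sketch of a loader; `load_env_file` is a hypothetical helper, not part of this package's API:

```python
import os
import tempfile

def load_env_file(path: str) -> None:
    # Parse KEY=value lines, skipping blank lines and comments,
    # and export each pair into the process environment.
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ[key.strip()] = value.strip()

# Demo with a temporary .env file standing in for the real one.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as fh:
    fh.write("GROQ_API_KEY=your_groq_key\n")
load_env_file(fh.name)
print(os.environ["GROQ_API_KEY"])  # your_groq_key
```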
```python
from free_llm_toolbox import LanguageModel

# Initialize a session with your preferred model
session = LanguageModel(
    model_name="gemini-2.0-flash",
    provider="google",
    temperature=0.7,
)

# Generate a response
response = session.answer("What is the capital of France?")
print(response)
```
```python
from free_llm_toolbox import ImageAnalyzerAgent

analyzer = ImageAnalyzerAgent()
description = analyzer.describe(
    "path/to/image.jpg",
    prompt="Describe the image",
    vllm_provider="groq",
    vllm_name="llama-3.2-90b-vision-preview",
)
print(description)
```
```python
from free_llm_toolbox import LanguageModel
from pydantic import BaseModel

# Initialize a session with your preferred model
session = LanguageModel(
    model_name="llama-3-70b",
    provider="groq",
    temperature=0.7,
    top_k=45,
    top_p=0.95,
)

# Simple text generation
response = session.answer("What is the capital of France?")

# JSON-formatted response with Pydantic validation
class LocationInfo(BaseModel):
    city: str
    country: str
    description: str

response = session.answer(
    "What is the capital of France?",
    json_formatting=True,
    pydantic_object=LocationInfo,
)

# Using custom tools (get_weather is a user-defined function;
# this placeholder stands in for a real weather lookup)
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

tools = [
    {
        "name": "weather",
        "description": "Get current weather",
        "function": get_weather,
    }
]
response, tool_calls = session.answer(
    "What's the weather in Paris?",
    tool_list=tools,
)

# Streaming responses
for chunk in session.answer(
    "Tell me a long story.",
    stream=True,
):
    print(chunk, end="", flush=True)
```
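The `json_formatting` option above returns output that conforms to the given schema. Independent of this library, the validation step it implies can be sketched with the stdlib alone; the `raw_reply` string below is a hypothetical model response, used only to illustrate the idea:

```python
import json
from dataclasses import dataclass

# Hypothetical raw JSON reply from a model (illustrative only).
raw_reply = '{"city": "Paris", "country": "France", "description": "Capital of France."}'

@dataclass
class LocationInfo:
    city: str
    country: str
    description: str

# Parse and validate: an unexpected or missing key raises TypeError,
# so a malformed reply fails loudly instead of propagating bad data.
info = LocationInfo(**json.loads(raw_reply))
print(info.city)  # Paris
```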
```python
from free_llm_toolbox import ImageAnalyzerAgent

# Initialize the agent
analyzer = ImageAnalyzerAgent()

# Analyze an image
description = analyzer.describe(
    image_path="path/to/image.jpg",
    prompt="Describe this image in detail",
    vllm_provider="groq",
)
print(description)
```
Note: This list is not exhaustive. The library supports any new model ID released by these providers - you just need to get the correct model ID from your provider's documentation.
| Provider | Model | LLM Provider ID | Model ID | Price | Rate Limit (per min) | Context Window | Speed |
|---|---|---|---|---|---|---|---|
| Google | Gemini Pro Exp | google | gemini-2.0-pro-exp-02-05 | Free | 60 | 32,768 | Ultra Fast |
| Google | Gemini Flash | google | gemini-2.0-flash | Free | 60 | 32,768 | Ultra Fast |
| Google | Gemini Flash Thinking | google | gemini-2.0-flash-thinking-exp-01-21 | Free | 60 | 32,768 | Ultra Fast |
| Google | Gemini Flash Lite | google | gemini-2.0-flash-lite-preview-02-05 | Free | 60 | 32,768 | Ultra Fast |
| GitHub | O3 Mini | github | o3-mini | Free | 50 | 8,192 | Fast |
| GitHub | GPT-4o | github | gpt-4o | Free | 50 | 8,192 | Fast |
| GitHub | GPT-4o Mini | github | gpt-4o-mini | Free | 50 | 8,192 | Fast |
| GitHub | O1 Mini | github | o1-mini | Free | 50 | 8,192 | Fast |
| GitHub | O1 Preview | github | o1-preview | Free | 50 | 8,192 | Fast |
| GitHub | Meta Llama 3.1 405B | github | meta-Llama-3.1-405B-Instruct | Free | 50 | 8,192 | Fast |
| GitHub | DeepSeek R1 | github | DeepSeek-R1 | Free | 50 | 8,192 | Fast |
| Groq | DeepSeek R1 Distill Llama 70B | groq | deepseek-r1-distill-llama-70b | Free | 100 | 131,072 | Ultra Fast |
| Groq | Llama 3.3 70B Versatile | groq | llama-3.3-70b-versatile | Free | 100 | 131,072 | Ultra Fast |
| Groq | Llama 3.1 8B Instant | groq | llama-3.1-8b-instant | Free | 100 | 131,072 | Ultra Fast |
| Groq | Llama 3.2 3B Preview | groq | llama-3.2-3b-preview | Free | 100 | 131,072 | Ultra Fast |
| SambaNova | Llama3 405B | sambanova | llama3-405b | Free | 60 | 8,000 | Fast |
| Provider | Model | Vision Provider ID | Model ID | Price | Rate Limit (per min) | Speed |
|---|---|---|---|---|---|---|
| Google | Gemini Vision Exp | gemini | gemini-exp-1206 | Free | 60 | Ultra Fast |
| Google | Gemini Vision Flash | gemini | gemini-2.0-flash | Free | 60 | Ultra Fast |
| GitHub | GPT-4o Vision | github | gpt-4o | Free | 50 | Fast |
| GitHub | GPT-4o Mini Vision | github | gpt-4o-mini | Free | 50 | Fast |
```python
from free_llm_toolbox import LanguageModel

# Initialize a session with specific provider and model IDs
session = LanguageModel(
    model_name="llama-3.3-70b-versatile",  # Model ID from the table above
    provider="groq",                       # Provider ID from the table above
    temperature=0.7,
)
```
Contributions are welcome!
This project is licensed under the MIT License. See the LICENSE file for details.