Free LLM Toolbox 🚀

A Python package that provides easy-to-use utilities for working with various Language Models (LLMs) and Vision Models. 🎯 And everything is free! (It runs on the generous free tiers of several AI platforms.)

Features

  • Text generation with support for multiple LLM providers
  • Image analysis and description capabilities
  • Support for models such as Llama (via Groq) and Google's Gemini
  • Streaming responses
  • Tool integration support
  • JSON output formatting
  • Customizable system prompts

Installation 💻

uv pip install free-llm-toolbox
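
If you are not using uv, a standard pip install from PyPI should work as well:

pip install free-llm-toolbox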

Configuration ⚙️

Before using the library, you need to configure your API keys in a .env file:

GROQ_API_KEY=your_groq_key
GITHUB_TOKEN=your_github_token
GOOGLE_API_KEY=your_google_key
SAMBANOVA_API_KEY=your_sambanova_key
CEREBRAS_API_KEY=your_cerebras_key
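
If the library does not load the .env file for you, a minimal sketch using python-dotenv (an assumption; it is not necessarily a dependency of this package) makes the keys available as environment variables before the first call:

import os

from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads the .env file in the current working directory

# Sanity check: make sure the key for your chosen provider is set
assert os.getenv("GROQ_API_KEY"), "GROQ_API_KEY is missing from the environment"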

Quick Start

Text Generation

from free_llm_toolbox import LanguageModel

# Initialize a session with your preferred model
session = LanguageModel(
    model_name="gemini-2.0-flash",
    provider="google",
    temperature=0.7
)

# Generate a response
response = session.answer("What is the capital of France?")
print(response)

Image Analysis

from free_llm_toolbox import ImageAnalyzerAgent

analyzer = ImageAnalyzerAgent()
description = analyzer.describe(
    "path/to/image.jpg",
    prompt="Describe the image",
    vllm_provider="groq",
    vllm_name="llama-3.2-90b-vision-preview"
)
print(description)

Usage 🎮

Text Models 📚

from free_llm_toolbox import LanguageModel

# Initialize a session with your preferred model
session = LanguageModel(
    model_name="llama-3-70b",
    provider="groq",
    temperature=0.7,
    top_k=45,
    top_p=0.95
)

# Simple text generation
response = session.answer("What is the capital of France?")

# JSON-formatted response with Pydantic validation
from pydantic import BaseModel

class LocationInfo(BaseModel):
    city: str
    country: str
    description: str

response = session.answer(
    "What is the capital of France?",
    json_formatting=True,
    pydantic_object=LocationInfo
)

# Using custom tools
# NOTE: get_weather is a placeholder for your own callable;
# the dict schema below follows the tool format shown in this README.
def get_weather(city: str) -> str:
    """Toy example of a user-defined tool."""
    return f"Sunny in {city}"

tools = [
    {
        "name": "weather",
        "description": "Get current weather",
        "function": get_weather
    }
]
response, tool_calls = session.answer(
    "What's the weather in Paris?",
    tool_list=tools
)

# Streaming responses
for chunk in session.answer(
    "Tell me a long story.",
    stream=True
):
    print(chunk, end="", flush=True)

Vision Models 👁️

from free_llm_toolbox import ImageAnalyzerAgent

# Initialize the agent
analyzer = ImageAnalyzerAgent()

# Analyze an image
description = analyzer.describe(
    image_path="path/to/image.jpg",
    prompt="Describe this image in detail",
    vllm_provider="groq"
)
print(description)

Available Models 📊

Note: This list is not exhaustive. The library supports any new model ID released by these providers; just use the correct model ID from your provider's documentation.

Text Models

| Provider  | Model                         | LLM Provider ID | Model ID                               | Price | Rate Limit (per min) | Context Window | Speed      |
|-----------|-------------------------------|-----------------|----------------------------------------|-------|----------------------|----------------|------------|
| Google    | Gemini Pro Exp                | google          | gemini-2.0-pro-exp-02-05               | Free  | 60                   | 32,768         | Ultra Fast |
| Google    | Gemini Flash                  | google          | gemini-2.0-flash                       | Free  | 60                   | 32,768         | Ultra Fast |
| Google    | Gemini Flash Thinking         | google          | gemini-2.0-flash-thinking-exp-01-21    | Free  | 60                   | 32,768         | Ultra Fast |
| Google    | Gemini Flash Lite             | google          | gemini-2.0-flash-lite-preview-02-05    | Free  | 60                   | 32,768         | Ultra Fast |
| GitHub    | O3 Mini                       | github          | o3-mini                                | Free  | 50                   | 8,192          | Fast       |
| GitHub    | GPT-4o                        | github          | gpt-4o                                 | Free  | 50                   | 8,192          | Fast       |
| GitHub    | GPT-4o Mini                   | github          | gpt-4o-mini                            | Free  | 50                   | 8,192          | Fast       |
| GitHub    | O1 Mini                       | github          | o1-mini                                | Free  | 50                   | 8,192          | Fast       |
| GitHub    | O1 Preview                    | github          | o1-preview                             | Free  | 50                   | 8,192          | Fast       |
| GitHub    | Meta Llama 3.1 405B           | github          | meta-Llama-3.1-405B-Instruct           | Free  | 50                   | 8,192          | Fast       |
| GitHub    | DeepSeek R1                   | github          | DeepSeek-R1                            | Free  | 50                   | 8,192          | Fast       |
| Groq      | DeepSeek R1 Distill Llama 70B | groq            | deepseek-r1-distill-llama-70b          | Free  | 100                  | 131,072        | Ultra Fast |
| Groq      | Llama 3.3 70B Versatile       | groq            | llama-3.3-70b-versatile                | Free  | 100                  | 131,072        | Ultra Fast |
| Groq      | Llama 3.1 8B Instant          | groq            | llama-3.1-8b-instant                   | Free  | 100                  | 131,072        | Ultra Fast |
| Groq      | Llama 3.2 3B Preview          | groq            | llama-3.2-3b-preview                   | Free  | 100                  | 131,072        | Ultra Fast |
| SambaNova | Llama3 405B                   | sambanova       | llama3-405b                            | Free  | 60                   | 8,000          | Fast       |

Vision Models

| Provider | Model               | Vision Provider ID | Model ID         | Price | Rate Limit (per min) | Speed      |
|----------|---------------------|--------------------|------------------|-------|----------------------|------------|
| Google   | Gemini Vision Exp   | gemini             | gemini-exp-1206  | Free  | 60                   | Ultra Fast |
| Google   | Gemini Vision Flash | gemini             | gemini-2.0-flash | Free  | 60                   | Ultra Fast |
| GitHub   | GPT-4o Vision       | github             | gpt-4o           | Free  | 50                   | Fast       |
| GitHub   | GPT-4o Mini Vision  | github             | gpt-4o-mini      | Free  | 50                   | Fast       |

Usage Example with Provider ID and Model ID

from free_llm_toolbox import LanguageModel

# Initialize a session with specific provider and model IDs
session = LanguageModel(
    model_name="llama-3.3-70b-versatile",  # Model ID from the table above
    provider="groq",                        # Provider ID from the table above
    temperature=0.7
)
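
Vision models work the same way. This sketch reuses the describe() call shown earlier, with a Vision Provider ID and Model ID taken from the vision table (adjust to whichever model you have access to):

from free_llm_toolbox import ImageAnalyzerAgent

analyzer = ImageAnalyzerAgent()

description = analyzer.describe(
    "path/to/image.jpg",
    prompt="Describe the image",
    vllm_provider="gemini",       # Vision Provider ID from the table above
    vllm_name="gemini-2.0-flash"  # Model ID from the table above
)
print(description)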

Requirements

  • Python 3.8 or higher
  • Required dependencies will be automatically installed

Key Features ⭐

  • Simple and intuitive session-based interface
  • Support for both vision and text models
  • Simple configuration with .env file
  • Automatic context management
  • Tool support for compatible models
  • JSON output formatting with Pydantic validation
  • Response streaming support
  • Smart caching system
  • CPU and GPU support

Contributing 🤝

Contributions are welcome! Feel free to:

  • Fork the project
  • Create your feature branch
  • Commit your changes
  • Push to the branch
  • Open a Pull Request

License 📄

This project is licensed under the MIT License. See the LICENSE file for details.
