# Esperanto 🌐
Esperanto is a powerful Python library that provides a unified interface for interacting with various Large Language Model (LLM) providers. It simplifies working with the APIs of different AI models (LLMs, embedders, transcribers) by offering a consistent interface while preserving provider-specific optimizations.
## Features ✨
- **Unified Interface**: Work with multiple LLM providers using a consistent API
- **Provider Support**:
  - OpenAI (GPT-4, GPT-3.5, o1, Whisper, TTS)
  - Anthropic (Claude 3)
  - OpenRouter (access to multiple models)
  - xAI (Grok)
  - Groq (Mixtral, Llama, Whisper)
  - Google GenAI (Gemini LLM, Text-to-Speech, Embedding)
  - Vertex AI (Google Cloud)
  - Ollama (local deployment)
  - ElevenLabs (Text-to-Speech)
- **Embedding Support**: Multiple embedding providers for vector representations
- **Speech-to-Text Support**: Transcribe audio using multiple providers
- **Text-to-Speech Support**: Generate speech using multiple providers
- **Async Support**: Both synchronous and asynchronous API calls
- **Streaming**: Support for streaming responses
- **Structured Output**: JSON output formatting (where supported)
- **LangChain Integration**: Easy conversion to LangChain chat models
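Models running in JSON mode sometimes wrap their output in markdown fences, so a small defensive parser helps when consuming structured output. A minimal sketch (`parse_json_response` is an illustrative helper, not part of Esperanto's API):

```python
import json

def parse_json_response(text: str) -> dict:
    """Parse JSON-mode model output, tolerating optional markdown fences."""
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # Drop the opening fence line (e.g. ```json) and the closing fence
        cleaned = cleaned.split("\n", 1)[1].rsplit("```", 1)[0]
    return json.loads(cleaned)

print(parse_json_response('```json\n{"capital": "Paris"}\n```'))
```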
For detailed information about each provider, see the Provider Support Matrix below.
## Installation 🚀
Install Esperanto using pip:

```bash
pip install esperanto
```
For specific providers, install with their extras:

```bash
pip install "esperanto[openai]"
pip install "esperanto[anthropic]"
pip install "esperanto[google]"
pip install "esperanto[vertex]"
pip install "esperanto[groq]"
pip install "esperanto[ollama]"
pip install "esperanto[all]"
```
## Provider Support Matrix
| Provider | LLM Support | Embedding Support | Speech-to-Text | Text-to-Speech | JSON Mode |
|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic | ✅ | ❌ | ❌ | ❌ | ✅ |
| Groq | ✅ | ❌ | ✅ | ❌ | ✅ |
| Google (GenAI) | ✅ | ✅ | ❌ | ✅ | ✅ |
| Vertex AI | ✅ | ✅ | ❌ | ❌ | ❌ |
| Ollama | ✅ | ✅ | ❌ | ❌ | ❌ |
| ElevenLabs | ❌ | ❌ | ❌ | ✅ | ❌ |
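When choosing a provider programmatically, the matrix above can be mirrored as plain data. A sketch (the provider keys and capability names below are illustrative, not necessarily the exact strings `AIFactory` expects):

```python
# Capability matrix as data (mirrors the table above)
SUPPORT = {
    "openai":     {"llm", "embedding", "speech_to_text", "text_to_speech", "json_mode"},
    "anthropic":  {"llm", "json_mode"},
    "groq":       {"llm", "speech_to_text", "json_mode"},
    "google":     {"llm", "embedding", "text_to_speech", "json_mode"},
    "vertex":     {"llm", "embedding"},
    "ollama":     {"llm", "embedding"},
    "elevenlabs": {"text_to_speech"},
}

def providers_with(capability: str) -> list[str]:
    """Return the providers that support a given capability, sorted by name."""
    return sorted(p for p, caps in SUPPORT.items() if capability in caps)

print(providers_with("text_to_speech"))  # ['elevenlabs', 'google', 'openai']
```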
## Quick Start 🏃‍♂️
You can use Esperanto in two ways: directly with provider-specific classes or through the AI Factory.
### Using AI Factory
```python
from esperanto.factory import AIFactory

# List all available providers
providers = AIFactory.get_available_providers()
print(providers)

# Create a language model
model = AIFactory.create_language("openai", "gpt-3.5-turbo")
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
]
response = model.chat_complete(messages)

# Create an embedding model
model = AIFactory.create_embedding("openai", "text-embedding-3-small")
texts = ["Hello, world!", "Another text"]
embeddings = model.embed(texts)
```
## Standardized Responses
All providers in Esperanto return standardized response objects, making it easy to work with different models without changing your code.
### LLM Responses
```python
from esperanto.factory import AIFactory

model = AIFactory.create_language("openai", "gpt-3.5-turbo")
messages = [{"role": "user", "content": "Hello!"}]
response = model.chat_complete(messages)

print(response.choices[0].message.content)  # Response text
print(response.choices[0].message.role)     # Message role
print(response.model)                       # Model used
print(response.usage.total_tokens)          # Token usage

# When streaming is enabled, chat_complete yields chunks:
for chunk in model.chat_complete(messages):
    print(chunk.choices[0].delta.content)
```
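Streamed chunks carry incremental `delta.content`, and joining them reconstructs the full reply. A sketch using stand-in chunk objects shaped like the streaming response above, with no network call (`collect_stream` is an illustrative helper):

```python
from types import SimpleNamespace as NS

def collect_stream(chunks) -> str:
    """Join the delta content of streamed chunks into the complete reply."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta may be empty
            parts.append(delta)
    return "".join(parts)

# Stand-in chunks mimicking the streaming response shape
demo = [
    NS(choices=[NS(delta=NS(content="Hel"))]),
    NS(choices=[NS(delta=NS(content="lo!"))]),
    NS(choices=[NS(delta=NS(content=None))]),
]
print(collect_stream(demo))  # Hello!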
### Embedding Responses
```python
from esperanto.factory import AIFactory

model = AIFactory.create_embedding("openai", "text-embedding-3-small")
texts = ["Hello, world!", "Another text"]
response = model.embed(texts)

print(response.data[0].embedding)   # Embedding vector for the first text
print(response.data[0].index)       # Position of the text in the input list
print(response.model)               # Model used
print(response.usage.total_tokens)  # Token usage
```
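The vectors in `response.data[i].embedding` can be compared directly; cosine similarity is the usual choice. A minimal sketch with hard-coded toy vectors standing in for real embeddings:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy vectors in place of response.data[i].embedding
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```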
The standardized response objects ensure consistency across different providers, making it easy to:
- Switch between providers without changing your application code
- Handle responses in a uniform way
- Access common attributes like token usage and model information
## License 📄
This project is licensed under the MIT License - see the LICENSE file for details.