Esperanto is a Python library that provides a unified interface for interacting with various Large Language Model (LLM) providers. It simplifies working with different AI model APIs (LLMs, embedders, transcribers) by offering a consistent interface while preserving provider-specific optimizations.
- **Unified Interface**: Work with multiple LLM providers using a consistent API
- **Provider Support**: see the table below
- **Embedding Support**: Multiple embedding providers for vector representations
- **Speech-to-Text Support**: Transcribe audio using multiple providers
- **Text-to-Speech Support**: Generate speech using multiple providers
- **Async Support**: Both synchronous and asynchronous API calls
- **Streaming**: Support for streaming responses
- **Structured Output**: JSON output formatting (where supported)
- **LangChain Integration**: Easy conversion to LangChain chat models
For detailed information about our providers, check out:
Install Esperanto using pip:

```bash
pip install esperanto
```
For specific providers, install with their extras:

```bash
# For OpenAI support
pip install "esperanto[openai]"

# For Anthropic support
pip install "esperanto[anthropic]"

# For Google (GenAI) support
pip install "esperanto[google]"

# For Vertex AI support
pip install "esperanto[vertex]"

# For Groq support
pip install "esperanto[groq]"

# For Ollama support
pip install "esperanto[ollama]"

# For all providers
pip install "esperanto[all]"
```
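Each extra pulls in the corresponding provider SDK. If you want to fail fast when an optional dependency is missing, a small standard-library check works; note that the package names here (e.g. `openai`) are assumptions about what the extras install:

```python
from importlib.util import find_spec

def provider_installed(package: str) -> bool:
    """Return True if the given provider SDK can be imported."""
    return find_spec(package) is not None

# Example: guard optional features on the SDK being present
if provider_installed("openai"):
    print("OpenAI SDK available")
else:
    print("Install with: pip install 'esperanto[openai]'")
```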
| Provider | LLM Support | Embedding Support | Speech-to-Text | Text-to-Speech | JSON Mode |
|---|---|---|---|---|---|
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| Anthropic | ✅ | ❌ | ❌ | ❌ | ✅ |
| Groq | ✅ | ❌ | ✅ | ❌ | ✅ |
| Google (GenAI) | ✅ | ✅ | ❌ | ✅ | ✅ |
| Vertex AI | ✅ | ✅ | ❌ | ❌ | ❌ |
| Ollama | ✅ | ✅ | ❌ | ❌ | ❌ |
| ElevenLabs | ❌ | ❌ | ❌ | ✅ | ❌ |
You can use Esperanto in two ways: directly with provider-specific classes or through the AI Factory.
```python
from esperanto.factory import AIFactory

# Get available providers for each model type
providers = AIFactory.get_available_providers()
print(providers)
# Output:
# {
#     'language': ['openai', 'anthropic', 'google', 'groq', 'ollama', 'openrouter', 'xai'],
#     'embedding': ['openai', 'google', 'ollama', 'vertex'],
#     'speech_to_text': ['openai', 'groq'],
#     'text_to_speech': ['openai', 'elevenlabs', 'google']
# }
```
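Because `get_available_providers` returns plain lists per model type, choosing a provider at runtime is ordinary dict/list logic. A sketch, using a hard-coded copy of the dict above so it runs standalone:

```python
# Stand-in for AIFactory.get_available_providers()
providers = {
    "language": ["openai", "anthropic", "google", "groq", "ollama", "openrouter", "xai"],
    "embedding": ["openai", "google", "ollama", "vertex"],
}

def pick_provider(available, preferred):
    """Return the first preferred provider that is actually available."""
    for name in preferred:
        if name in available:
            return name
    return None

choice = pick_provider(providers["language"], ["anthropic", "openai"])
print(choice)  # → anthropic
```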
```python
# Create a language model instance with structured output (JSON)
model = AIFactory.create_language(
    "openai",
    "gpt-3.5-turbo",
    structured={"type": "json"}
)
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
]
response = model.chat_complete(messages)  # Response will be in JSON format
```
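With `structured={"type": "json"}`, the message content should hold a JSON string, so the usual final step is `json.loads`. Sketched here with a stand-in string rather than a live API call:

```python
import json

# Stand-in for response.choices[0].message.content from a JSON-mode call
content = '{"capital": "Paris", "country": "France"}'

data = json.loads(content)
print(data["capital"])  # → Paris
```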
```python
# Create an embedding instance
model = AIFactory.create_embedding("openai", "text-embedding-3-small")
texts = ["Hello, world!", "Another text"]
embeddings = model.embed(texts)
```
All providers in Esperanto return standardized response objects, making it easy to work with different models without changing your code.
```python
from esperanto.factory import AIFactory

model = AIFactory.create_language("openai", "gpt-3.5-turbo")
messages = [{"role": "user", "content": "Hello!"}]

# All LLM responses follow this structure
response = model.chat_complete(messages)
print(response.choices[0].message.content)  # The actual response text
print(response.choices[0].message.role)     # 'assistant'
print(response.model)                       # The model used
print(response.usage.total_tokens)          # Token usage information

# For streaming responses
for chunk in model.chat_complete(messages):
    print(chunk.choices[0].delta.content)   # Partial response text
```
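Streaming chunks carry partial text in `delta.content`, so a common pattern is accumulating them into the full reply. A standalone sketch with mocked chunks in place of a live stream (the `None` delta is an assumption about how empty chunks might look):

```python
from types import SimpleNamespace

def make_chunk(text):
    # Mimics the chunk shape: chunk.choices[0].delta.content
    delta = SimpleNamespace(content=text)
    return SimpleNamespace(choices=[SimpleNamespace(delta=delta)])

# Stand-in for the chunks yielded by a streaming chat_complete call
stream = [make_chunk("Hel"), make_chunk("lo"), make_chunk(None), make_chunk("!")]

parts = []
for chunk in stream:
    text = chunk.choices[0].delta.content
    if text:  # skip empty/None deltas
        parts.append(text)

full_reply = "".join(parts)
print(full_reply)  # → Hello!
```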
```python
from esperanto.factory import AIFactory

model = AIFactory.create_embedding("openai", "text-embedding-3-small")
texts = ["Hello, world!", "Another text"]

# All embedding responses follow this structure
response = model.embed(texts)
print(response.data[0].embedding)  # Vector for first text
print(response.data[0].index)      # Index of the text (0)
print(response.model)              # The model used
print(response.usage.total_tokens) # Token usage information
```
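Once you have the raw vectors, downstream math is plain Python. For example, cosine similarity between two embeddings, shown with toy vectors standing in for real `response.data[i].embedding` output:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embeddings of two texts
vec_a = [0.1, 0.2, 0.3]
vec_b = [0.3, -0.2, 0.1]

print(cosine_similarity(vec_a, vec_a))  # ≈ 1.0 (identical vectors)
print(cosine_similarity(vec_a, vec_b))  # lower for dissimilar vectors
```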
The standardized response objects ensure consistency across providers, so you can swap models or providers without changing the code that consumes their responses.
This project is licensed under the MIT License - see the LICENSE file for details.