OpenAI Unofficial Python SDK
A free and unlimited unofficial Python SDK for the OpenAI API, providing seamless integration and easy-to-use methods for interacting with OpenAI's latest and most powerful AI models, including GPT-4o (including the gpt-4o-audio-preview and gpt-4o-realtime-preview models), GPT-4, GPT-3.5 Turbo, DALL·E 3, Whisper, and Text-to-Speech (TTS) models, all for free.
Features
- Comprehensive Model Support: Integrate with the latest OpenAI models, including GPT-4, GPT-4o, GPT-3.5 Turbo, DALL·E 3, Whisper, Text-to-Speech (TTS) models, and the newest audio preview and real-time models.
- Chat Completions: Generate chat-like responses using a variety of models.
- Streaming Responses: Support for streaming chat completions, including real-time models for instantaneous outputs.
- Audio Generation: Generate high-quality speech audio with various voice options using TTS models.
- Audio and Text Responses: Utilize models like gpt-4o-audio-preview to receive both audio and text responses.
- Image Generation: Create stunning images using DALL·E models with customizable parameters.
- Audio Transcription: Convert speech to text using Whisper models.
- Easy to Use: Simple and intuitive methods to interact with various endpoints.
- Extensible: Designed to be easily extendable for future OpenAI models and endpoints.
Installation
Install the package via pip:
pip install -U openai-unofficial
Quick Start
from openai_unofficial import OpenAIUnofficial
client = OpenAIUnofficial()
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Say hello!"}],
    model="gpt-4o"
)
print(response.choices[0].message.content)
Usage Examples
List Available Models
from openai_unofficial import OpenAIUnofficial
client = OpenAIUnofficial()
models = client.list_models()
print("Available Models:")
for model in models['data']:
    print(f"- {model['id']}")
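If you only care about a particular family of models, you can filter the list client-side. A minimal sketch, assuming model IDs follow OpenAI's usual naming prefixes (e.g. gpt-, tts-):

from openai_unofficial import OpenAIUnofficial

client = OpenAIUnofficial()
models = client.list_models()

# The prefixes below are assumptions based on OpenAI's usual model naming.
chat_models = [m['id'] for m in models['data'] if m['id'].startswith('gpt-')]
tts_models = [m['id'] for m in models['data'] if m['id'].startswith('tts-')]
print("Chat models:", chat_models)
print("TTS models:", tts_models)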
Basic Chat Completion
from openai_unofficial import OpenAIUnofficial
client = OpenAIUnofficial()
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Tell me a joke."}],
    model="gpt-4o"
)
print("ChatBot:", response.choices[0].message.content)
Chat Completion with Image Input
from openai_unofficial import OpenAIUnofficial
client = OpenAIUnofficial()
response = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                }
            },
        ],
    }],
    model="gpt-4o-mini-2024-07-18"
)
print("Response:", response.choices[0].message.content)
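To send a local image instead of a public URL, you can embed it as a base64 data URL, following OpenAI's standard vision message format. This is a minimal sketch: photo.jpg is a placeholder path, and it assumes the SDK forwards data URLs the same way the official API does.

import base64
from openai_unofficial import OpenAIUnofficial

client = OpenAIUnofficial()

# photo.jpg is a placeholder; read the file and encode it as base64.
with open("photo.jpg", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64_image}"}},
        ],
    }],
    model="gpt-4o-mini-2024-07-18"
)
print("Response:", response.choices[0].message.content)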
Streaming Chat Completion
from openai_unofficial import OpenAIUnofficial
client = OpenAIUnofficial()
completion_stream = client.chat.completions.create(
    messages=[{"role": "user", "content": "Write a short story in 3 sentences."}],
    model="gpt-4o-mini-2024-07-18",
    stream=True
)
for chunk in completion_stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end='', flush=True)
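If you also need the complete text once streaming finishes (for logging or further processing), accumulate the chunks as they arrive:

from openai_unofficial import OpenAIUnofficial

client = OpenAIUnofficial()
stream = client.chat.completions.create(
    messages=[{"role": "user", "content": "List three uses for a paperclip."}],
    model="gpt-4o-mini-2024-07-18",
    stream=True
)

# Print each chunk as it arrives and keep the pieces for the full reply.
pieces = []
for chunk in stream:
    content = chunk.choices[0].delta.content
    if content:
        print(content, end='', flush=True)
        pieces.append(content)
full_reply = "".join(pieces)
print("\n\nCharacters received:", len(full_reply))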
Audio Generation with TTS Model
from openai_unofficial import OpenAIUnofficial
client = OpenAIUnofficial()
audio_data = client.audio.create(
    input_text="This is a test of the TTS capabilities!",
    model="tts-1-hd",
    voice="nova"
)
with open("tts_output.mp3", "wb") as f:
    f.write(audio_data)
print("TTS Audio saved as tts_output.mp3")
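To compare voices, you can generate a short sample with each one. The voice names below mirror OpenAI's standard TTS voices; their availability through this SDK is an assumption, so adjust the list if a request fails.

from openai_unofficial import OpenAIUnofficial

client = OpenAIUnofficial()

# Voice names follow OpenAI's TTS lineup (an assumption for this SDK).
for voice in ["alloy", "echo", "fable", "onyx", "nova", "shimmer"]:
    audio_data = client.audio.create(
        input_text=f"Hello from the {voice} voice.",
        model="tts-1",
        voice=voice
    )
    with open(f"tts_{voice}.mp3", "wb") as f:
        f.write(audio_data)
    print(f"Saved tts_{voice}.mp3")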
Chat Completion with Audio Preview Model
from openai_unofficial import OpenAIUnofficial
client = OpenAIUnofficial()
response = client.chat.completions.create(
    messages=[{"role": "user", "content": "Tell me a fun fact."}],
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "fable", "format": "wav"}
)
message = response.choices[0].message
print("Text Response:", message.content)
if message.audio and 'data' in message.audio:
    from base64 import b64decode
    with open("audio_preview.wav", "wb") as f:
        f.write(b64decode(message.audio['data']))
    print("Audio saved as audio_preview.wav")
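gpt-4o-audio-preview can also take audio as input using OpenAI's input_audio content part. Whether this SDK forwards that content type unchanged is an assumption, so treat the following as a sketch; question.wav is a placeholder file.

import base64
from openai_unofficial import OpenAIUnofficial

client = OpenAIUnofficial()

# question.wav is a placeholder recording; encode it as base64 for the input_audio part.
with open("question.wav", "rb") as f:
    b64_audio = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Answer the question asked in this recording."},
            {"type": "input_audio", "input_audio": {"data": b64_audio, "format": "wav"}},
        ],
    }],
    model="gpt-4o-audio-preview",
    modalities=["text", "audio"],
    audio={"voice": "fable", "format": "wav"}
)
print("Text Response:", response.choices[0].message.content)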
Image Generation
from openai_unofficial import OpenAIUnofficial
client = OpenAIUnofficial()
response = client.image.create(
    prompt="A futuristic cityscape at sunset",
    model="dall-e-3",
    size="1024x1024"
)
print("Image URL:", response.data[0].url)
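The response contains a URL rather than the image bytes, and generated-image URLs typically expire, so you may want to download the file right away. A minimal sketch using only the standard library:

from urllib.request import urlretrieve
from openai_unofficial import OpenAIUnofficial

client = OpenAIUnofficial()
response = client.image.create(
    prompt="A watercolor painting of a lighthouse at dawn",
    model="dall-e-3",
    size="1024x1024"
)

# Download the generated image before the URL expires.
urlretrieve(response.data[0].url, "lighthouse.png")
print("Saved lighthouse.png")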
Audio Speech Recognition with Whisper Model
from openai_unofficial import OpenAIUnofficial
client = OpenAIUnofficial()
with open("speech.mp3", "rb") as audio_file:
    transcription = client.audio.transcribe(
        file=audio_file,
        model="whisper-1"
    )
print("Transcription:", transcription.text)
Function Calling and Tool Usage
The SDK supports OpenAI's function calling capabilities, allowing you to define and use tools/functions in your conversations. Here is an example of basic function calling and tool usage:
Basic Function Calling
⚠️ Important Note: In the current version (0.1.2), complex or multiple function calling is not yet fully supported. The SDK currently supports basic function calling capabilities. Support for multiple function calls and more complex tool usage patterns will be added in upcoming releases.
from openai_unofficial import OpenAIUnofficial
import json
client = OpenAIUnofficial()
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g., San Francisco, CA"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit"
                    }
                },
                "required": ["location"]
            }
        }
    }
]
def get_current_weather(location: str, unit: str = "celsius") -> str:
    return f"The current weather in {location} is 22°{unit[0].upper()}"
messages = [
    {"role": "user", "content": "What's the weather like in London?"}
]
response = client.chat.completions.create(
    model="gpt-4o-mini-2024-07-18",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)
assistant_message = response.choices[0].message
messages.append(assistant_message.to_dict())
if assistant_message.tool_calls:
    for tool_call in assistant_message.tool_calls:
        function_name = tool_call.function.name
        function_args = json.loads(tool_call.function.arguments)
        function_response = get_current_weather(**function_args)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "name": function_name,
            "content": function_response
        })
final_response = client.chat.completions.create(
    model="gpt-4o-mini-2024-07-18",
    messages=messages
)
print("Final Response:", final_response.choices[0].message.content)
Contributing
Contributions are welcome! Please follow these steps:
- Fork the repository.
- Create a new branch: git checkout -b feature/my-feature
- Commit your changes: git commit -am 'Add new feature'
- Push to the branch: git push origin feature/my-feature
- Open a pull request.
Please ensure your code adheres to the project's coding standards and passes all tests.
License
This project is licensed under the MIT License - see the LICENSE file for details.
Note: This SDK is unofficial and not affiliated with OpenAI.
If you encounter any issues or have suggestions, please open an issue on GitHub.
Supported Models
Here's a partial list of models that the SDK currently supports. For the complete list, check out the /models endpoint: