An OpenAI implementation of the OmniAI interface supporting ChatGPT, Whisper, Text-to-Voice, Voice-to-Text, and more. This library is community-maintained.
gem install omniai-openai
A client is set up as follows if ENV['OPENAI_API_KEY'] exists:
client = OmniAI::OpenAI::Client.new
A client may also be passed the following options:
- api_key (required) - defaults to ENV['OPENAI_API_KEY']
- api_prefix (optional) - used with a host when necessary
- organization (optional)
- project (optional)
- host (optional) - useful for Ollama, LocalAI, or other OpenAI-API-compatible services

Global configuration is supported for the following options:
OmniAI::OpenAI.configure do |config|
config.api_key = 'sk-...' # default: ENV['OPENAI_API_KEY']
config.organization = '...' # default: ENV['OPENAI_ORGANIZATION']
config.project = '...' # default: ENV['OPENAI_PROJECT']
config.host = '...' # default: 'https://api.openai.com' - override for usage with LocalAI / Ollama
end
LocalAI offers built-in compatibility with the OpenAI specification. To initialize a client that points to LocalAI, change the host accordingly:
client = OmniAI::OpenAI::Client.new(host: 'http://localhost:8080', api_key: nil)
For details on installing or running LocalAI, see the getting started tutorial.
Ollama offers built-in compatibility with the OpenAI specification. To initialize a client that points to Ollama, change the host accordingly:
client = OmniAI::OpenAI::Client.new(host: 'http://localhost:11434', api_key: nil)
For details on installing or running Ollama, check out the project README.
Other fee-based systems and services have adopted all or part of the OpenAI API. For example, open_router.ai is a web service that provides access to many models and providers through both its own API and an OpenAI-compatible API.
client = OmniAI::OpenAI::Client.new(
  host: 'https://open_router.ai',
  api_key: ENV['OPENROUTER_API_KEY'],
  api_prefix: '/api'
)
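Once constructed, the client is used like any other OmniAI client. As a minimal sketch (the model identifier below is a hypothetical OpenRouter-style ID, not taken from the original):

completion = client.chat('Tell me a joke!', model: 'mistralai/mistral-7b-instruct') # model: accepts any string the service understands
completion.content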
A chat completion is generated by passing in a simple text prompt:
completion = client.chat('Tell me a joke!')
completion.content # 'Why did the chicken cross the road? To get to the other side.'
A chat completion may also be generated by using a prompt builder:
completion = client.chat do |prompt|
prompt.system('You are an expert in geography.')
prompt.user('What is the capital of Canada?')
end
completion.content # 'The capital of Canada is Ottawa.'
model takes an optional string (default is gpt-4o):
completion = client.chat('How fast is a cheetah?', model: OmniAI::OpenAI::Chat::Model::GPT_3_5_TURBO)
completion.content # 'A cheetah can reach speeds over 100 km/h.'
temperature takes an optional float between 0.0 and 2.0 (default is 0.7):
completion = client.chat('Pick a number between 1 and 5', temperature: 2.0)
completion.content # '3'
OpenAI API Reference: temperature
stream takes an optional proc to stream responses in real-time chunks instead of waiting for a complete response:
stream = proc do |chunk|
print(chunk.content) # 'Better', 'three', 'hours', ...
end
client.chat('Be poetic.', stream:)
format takes an optional symbol (:json) that sets the response_format to json_object:
completion = client.chat(format: :json) do |prompt|
prompt.system(OmniAI::Chat::JSON_PROMPT)
prompt.user('What is the name of the drummer for the Beatles?')
end
JSON.parse(completion.content) # { "name": "Ringo" }
OpenAI API Reference: response_format
When using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message.
A transcription is generated by passing in a path to a file:
transcription = client.transcribe(file.path)
transcription.text # '...'
prompt is optional and can provide additional context for transcribing:
transcription = client.transcribe(file.path, prompt: '')
transcription.text # '...'
format is optional and supports json, text, srt, or vtt:
transcription = client.transcribe(file.path, format: OmniAI::Transcribe::Format::TEXT)
transcription.text # '...'
OpenAI API Reference: response_format
language is optional and may improve accuracy and latency:
transcription = client.transcribe(file.path, language: OmniAI::Transcribe::Language::SPANISH)
transcription.text
temperature is optional and must be between 0.0 (more deterministic) and 1.0 (less deterministic):
transcription = client.transcribe(file.path, temperature: 0.2)
transcription.text
OpenAI API Reference: temperature
Speech can be generated by passing text with a block:
File.open('example.ogg', 'wb') do |file|
client.speak('How can a clam cram in a clean cream can?') do |chunk|
file << chunk
end
end
If a block is not provided then a tempfile is returned:
tempfile = client.speak('Can you can a can as a canner can can a can?')
tempfile.close
tempfile.unlink
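As a minimal sketch, assuming the returned object behaves like Ruby's Tempfile (the output path below is hypothetical), the audio can be persisted before unlinking:

tempfile = client.speak('Can you can a can as a canner can can a can?')
tempfile.rewind # move back to the start of the audio before reading
File.binwrite('speech.mp3', tempfile.read) # 'speech.mp3' is a hypothetical output path
tempfile.close
tempfile.unlink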
voice is optional and must be one of the supported voices:
client.speak('She sells seashells by the seashore.', voice: OmniAI::OpenAI::Speak::Voice::SHIMMER)
model is optional and must be either tts-1 or tts-1-hd (default):
client.speak('I saw a kitten eating chicken in the kitchen.', model: OmniAI::OpenAI::Speak::Model::TTS_1)
speed is optional and must be between 0.25 and 4.00:
client.speak('How much wood would a woodchuck chuck if a woodchuck could chuck wood?', speed: 4.0)
format is optional and supports MP3 (default), OPUS, AAC, FLAC, WAV, or PCM:
client.speak('A pessimistic pest exists amidst us.', format: OmniAI::OpenAI::Speak::Format::FLAC)
A file can be found by ID:

client.files.find(id: 'file_...')

All files can be listed:

client.files.all

A file can be uploaded from an IO or a path:

file = client.files.build(io: File.open('demo.pdf', 'rb')) # open for reading ('rb'), not writing
file.save!

file = client.files.build(io: 'demo.pdf')
file.save!

A file's contents can be downloaded in chunks:

file = client.files.find(id: 'file_...')
File.open('...', 'wb') do |output| # named output to avoid shadowing the found file
  file.content do |chunk|
    output << chunk
  end
end

A file can be destroyed by ID:

client.files.destroy!('file_...')
An assistant can be found by ID:

client.assistants.find(id: 'asst_...')

All assistants can be listed:

client.assistants.all

An assistant can be created:

assistant = client.assistants.build
assistant.name = 'Ringo'
assistant.model = OmniAI::OpenAI::Chat::Model::GPT_4
assistant.description = 'The drummer for the Beatles.'
assistant.save!

An assistant can be updated:

assistant = client.assistants.find(id: 'asst_...')
assistant.name = 'George'
assistant.model = OmniAI::OpenAI::Chat::Model::GPT_4
assistant.description = 'A guitarist for the Beatles.'
assistant.save!

An assistant can be destroyed by ID:

client.assistants.destroy!('asst_...')
A thread can be found by ID:

client.threads.find(id: 'thread_...')

A thread can be created:

thread = client.threads.build
thread.metadata = { user: 'Ringo' }
thread.save!

A thread can be updated:

thread = client.threads.find(id: 'thread_...')
thread.metadata = { user: 'Ringo' }
thread.save!

A thread can be destroyed by ID:

client.threads.destroy!('thread_...')
A message can be found within a thread:

thread = client.threads.find(id: 'thread_...')
message = thread.messages.find(id: 'msg_...')
message.save!

All messages in a thread can be listed:

thread = client.threads.find(id: 'thread_...')
thread.messages.all

A message can be created within a thread:

thread = client.threads.find(id: 'thread_...')
message = thread.messages.build(role: 'user', content: 'Hello?')
message.save!
A run can be found within a thread:

thread = client.threads.find(id: 'thread_...')
run = thread.runs.find(id: 'run_...')
run.save!
All runs in a thread can be listed:

thread = client.threads.find(id: 'thread_...')
thread.runs.all

A run can be created within a thread:

thread = client.threads.find(id: 'thread_...')
run = thread.runs.build
run.metadata = { user: 'Ringo' }
run.save!
A run can be updated:

thread = client.threads.find(id: 'thread_...')
run = thread.runs.find(id: 'run_...')
run.metadata = { user: 'Ringo' }
run.save!
A run can be polled until it reaches a terminal status:

run.terminated? # false
run.poll!
run.terminated? # true
run.status # 'cancelled' / 'failed' / 'completed' / 'expired'
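Putting the pieces above together, a minimal end-to-end sketch (using only calls shown in this document; a real run typically also requires an assistant reference, which the original examples do not show):

# Create a thread, post a user message, start a run, poll to completion, then list messages.
thread = client.threads.build
thread.save!

message = thread.messages.build(role: 'user', content: 'What is the capital of Canada?')
message.save!

run = thread.runs.build
run.save!

run.poll! # blocks until the run terminates
run.status # e.g. 'completed'

thread.messages.all # should include the assistant's reply once completed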
A run can be cancelled:

thread = client.threads.find(id: 'thread_...')
run = thread.runs.cancel!(id: 'run_...')
Text can be converted into a vector embedding for similarity comparisons:
response = client.embed('The quick brown fox jumps over a lazy dog.')
response.embedding # [0.0, ...]
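As a minimal sketch of how two embeddings might be compared, using cosine similarity (the helper below is illustrative and not part of the library):

# Cosine similarity: dot(a, b) / (|a| * |b|); closer to 1.0 means more similar.
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x**2 }) * Math.sqrt(b.sum { |x| x**2 }))
end

a = client.embed('The quick brown fox jumps over a lazy dog.').embedding
b = client.embed('A fast auburn fox leaps over a sleepy hound.').embedding
cosine_similarity(a, b)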