LLMRuby


LLMRuby is a Ruby gem that provides a consistent interface for interacting with multiple Large Language Model (LLM) APIs. Most OpenAI, Anthropic and Gemini models are currently supported.

Installation

Add this line to your application's Gemfile:

gem 'llm_ruby'

And then execute:

bundle install

Or install it yourself as:

gem install llm_ruby

Usage

Basic Usage

require 'llm_ruby'

# Initialize an LLM instance
llm = LLM.from_string!("gpt-4")

# Create a client
client = llm.client

# Send a chat message
response = client.chat([{role: :user, content: "Hello, world!"}])

puts response.content
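
The same calling convention applies across providers; switching from OpenAI to Anthropic or Gemini only changes the model string. A minimal sketch, assuming an Anthropic API key is configured as described under Configuration below:

require 'llm_ruby'

# Same interface, different provider; only the model string changes.
# Assumes ANTHROPIC_API_KEY is set in the environment (see Configuration).
llm = LLM.from_string!("claude-3-5-sonnet-20241022")
client = llm.client

response = client.chat([{role: :user, content: "Hello, world!"}])
puts response.content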

Streaming Responses

LLMRuby supports streaming responses:

require 'llm_ruby'

# Initialize an LLM instance
llm = LLM.from_string!("gpt-4o")

# Create a client
client = llm.client

# Define the on_message callback
on_message = proc do |message|
  puts "Received message chunk: #{message}"
end

# Define the on_complete callback
on_complete = proc do |stop_reason|
  puts "Streaming complete. Stop reason: #{stop_reason}"
end

# Send a chat message with streaming enabled
response = client.chat(
  [{role: :user, content: "Hello, world!"}],
  stream: true,
  on_message: on_message,
  on_complete: on_complete
)

puts response.content

Using the Response Object

The response object returned by the client.chat method contains several useful fields:

  • content: The final content of the response.
  • raw_response: The raw response payload for non-streaming requests and the array of chunks for streaming requests.
  • stop_reason: The reason why the response generation was stopped.

Here is an example of how to use the response object:

# Initialize an LLM instance
llm = LLM.from_string!("gpt-4o")

# Create a client
client = llm.client

# Send a chat message
response = client.chat([{role: :user, content: "Hello, world!"}])

# Access the response fields
puts "Response content: #{response.content}"
puts "Raw response: #{response.raw_response}"
puts "Stop reason: #{response.stop_reason}"

Available Models

LLMRuby supports a range of OpenAI, Anthropic, and Google models. The full list of supported models lives in the KNOWN_MODELS constant and is reproduced in the tables below (a sketch of inspecting the constant at runtime follows the tables):

OpenAI Models

| Canonical Name | Display Name |
| --- | --- |
| gpt-3.5-turbo | GPT-3.5 Turbo |
| gpt-3.5-turbo-0125 | GPT-3.5 Turbo 0125 |
| gpt-3.5-turbo-16k | GPT-3.5 Turbo 16K |
| gpt-3.5-turbo-1106 | GPT-3.5 Turbo 1106 |
| gpt-4 | GPT-4 |
| gpt-4-1106-preview | GPT-4 Turbo 1106 |
| gpt-4-turbo-2024-04-09 | GPT-4 Turbo 2024-04-09 |
| gpt-4-0125-preview | GPT-4 Turbo 0125 |
| gpt-4-turbo-preview | GPT-4 Turbo |
| gpt-4-0613 | GPT-4 0613 |
| gpt-4o | GPT-4o |
| gpt-4o-mini | GPT-4o Mini |
| gpt-4o-mini-2024-07-18 | GPT-4o Mini 2024-07-18 |
| gpt-4o-2024-05-13 | GPT-4o 2024-05-13 |
| gpt-4o-2024-08-06 | GPT-4o 2024-08-06 |
| gpt-4o-2024-11-20 | GPT-4o 2024-11-20 |
| chatgpt-4o-latest | ChatGPT 4o Latest |
| o1 | o1 |
| o1-2024-12-17 | o1 2024-12-17 |
| o1-preview | o1 Preview |
| o1-preview-2024-09-12 | o1 Preview 2024-09-12 |
| o1-mini | o1 Mini |
| o1-mini-2024-09-12 | o1 Mini 2024-09-12 |
| o3-mini | o3 Mini |
| o3-mini-2025-01-31 | o3 Mini 2025-01-31 |

Anthropic Models

| Canonical Name | Display Name |
| --- | --- |
| claude-3-5-sonnet-20241022 | Claude 3.5 Sonnet 2024-10-22 |
| claude-3-5-haiku-20241022 | Claude 3.5 Haiku 2024-10-22 |
| claude-3-5-sonnet-20240620 | Claude 3.5 Sonnet 2024-06-20 |
| claude-3-opus-20240229 | Claude 3 Opus 2024-02-29 |
| claude-3-sonnet-20240229 | Claude 3 Sonnet 2024-02-29 |
| claude-3-haiku-20240307 | Claude 3 Haiku 2024-03-07 |

Google Models

| Canonical Name | Display Name |
| --- | --- |
| gemini-2.0-flash | Gemini 2.0 Flash |
| gemini-2.0-flash-lite-preview-02-05 | Gemini 2.0 Flash Lite Preview 02-05 |
| gemini-1.5-flash | Gemini 1.5 Flash |
| gemini-1.5-pro | Gemini 1.5 Pro |
| gemini-1.5-flash-8b | Gemini 1.5 Flash 8B |
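
The KNOWN_MODELS constant referenced above can also be inspected at runtime. Its exact namespace and element shape are not documented in this README, so both LLM::KNOWN_MODELS and the canonical_name accessor below are assumptions:

require 'llm_ruby'

# Hypothetical sketch: enumerate the supported models. The constant's
# namespace (LLM::KNOWN_MODELS) and the accessor (canonical_name) are
# assumptions; check the gem source for the real definitions.
LLM::KNOWN_MODELS.each do |model|
  puts model.canonical_name
end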

Configuration

Set your OpenAI, Anthropic or Google API key as an environment variable:

export OPENAI_API_KEY=your_api_key_here
export ANTHROPIC_API_KEY=your_api_key_here
export GEMINI_API_KEY=your_api_key_here
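
When exporting a variable is inconvenient (scripts, tests), setting it in-process before the first request should be equivalent, assuming the client reads the key from ENV:

require 'llm_ruby'

# Assumption: the client resolves the provider key from ENV at request
# time, so assigning it here behaves like `export` in the shell.
ENV["OPENAI_API_KEY"] = "your_api_key_here"

client = LLM.from_string!("gpt-4o").client
puts client.chat([{role: :user, content: "Hello, world!"}]).content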

Structured Outputs

OpenAI and Gemini models can be configured to generate responses that adhere to a provided schema. Although each provider uses a different format for specifying the schema, llm_ruby handles the translation for you, so you can share a single schema definition across models.

# Initialize an LLM instance
llm = LLM.from_string!("gpt-4o")

# Create a client
client = llm.client

# Define the shared response schema
response_format = LLM::Schema.new(
  "test_schema",
  {
    "type" => "object",
    "properties" => {
      "name" => {"type" => "string"},
      "age" => {"type" => "integer"}
    },
    "additionalProperties" => false,
    "required" => ["name", "age"]
  }
)
# ...or load the schema from a file: LLM::Schema.from_file('myschema.json')

# Send a chat message with the schema as the response format
response = client.chat([{role: :user, content: "Hello, world!"}], response_format: response_format)

response.structured_output[:name]       # => "Alex"
response.structured_output_object.name  # => "Alex"
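
Because the gem handles the schema translation, the same LLM::Schema instance should work unchanged against a Gemini model. A minimal sketch, assuming GEMINI_API_KEY is set and reusing the response_format defined above; the prompt and output are illustrative:

# Reuse the schema with a Gemini model; per the README, llm_ruby
# translates it to the provider's native format.
gemini_client = LLM.from_string!("gemini-1.5-pro").client
gemini_response = gemini_client.chat(
  [{role: :user, content: "Alex is 29 years old."}],
  response_format: response_format
)

puts gemini_response.structured_output[:age] # => 29 (illustrative)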

Development

After checking out the repo, run bin/setup to install dependencies. Then, run rake spec to run the tests. You can also run bin/console for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run bundle exec rake install.

Contributing

Bug reports and pull requests are welcome.

License

The gem is available as open source under the terms of the MIT License.
