LLMRuby is a Ruby gem that provides a consistent interface for interacting with multiple Large Language Model (LLM) APIs. Most OpenAI, Anthropic and Gemini models are currently supported.
Add this line to your application's Gemfile:
```ruby
gem 'llm_ruby'
```
And then execute:
```shell
bundle install
```
Or install it yourself as:
```shell
gem install llm_ruby
```
```ruby
require 'llm_ruby'

# Initialize an LLM instance
llm = LLM.from_string!("gpt-4")

# Create a client
client = llm.client

# Send a chat message
response = client.chat([{role: :user, content: "Hello, world!"}])
puts response.content
```
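Because messages are plain Ruby hashes, a multi-turn conversation is just an array you build up before calling `client.chat`. A minimal sketch of that pattern (the `:assistant` role for prior model replies follows the usual chat-API convention and is an assumption here; the examples in this document only use `:user`):

```ruby
# Build a conversation history as plain Ruby data. The :assistant role
# for earlier model replies is an assumption; only :user is shown in the
# examples above.
messages = [
  {role: :user, content: "What is the capital of France?"},
  {role: :assistant, content: "Paris."},
  {role: :user, content: "What is its population?"}
]

# With a configured client, the whole history would be sent at once:
#   response = client.chat(messages)
puts messages.length # 3
```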
LLMRuby supports streaming responses:
```ruby
require 'llm_ruby'

# Initialize an LLM instance
llm = LLM.from_string!("gpt-4o")

# Create a client
client = llm.client

# Define the on_message callback
on_message = proc do |message|
  puts "Received message chunk: #{message}"
end

# Define the on_complete callback
on_complete = proc do |stop_reason|
  puts "Streaming complete. Stop reason: #{stop_reason}"
end

# Send a chat message with streaming enabled
response = client.chat(
  [{role: :user, content: "Hello, world!"}],
  stream: true,
  on_message: on_message,
  on_complete: on_complete
)

puts response.content
```
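Since `on_message` receives incremental chunks, a common pattern is to accumulate them into a buffer and only act on the full text in `on_complete`. A sketch of that pattern with plain procs, simulating the chunks the client would deliver (no API call is made here):

```ruby
buffer = +"" # mutable string to collect chunks

# Accumulate each chunk as it arrives
on_message = proc { |chunk| buffer << chunk }

# Act on the completed text once streaming finishes
on_complete = proc do |stop_reason|
  puts "Full response (#{stop_reason}): #{buffer}"
end

# Simulate what the client does internally while streaming:
["Hel", "lo, ", "world!"].each { |chunk| on_message.call(chunk) }
on_complete.call(:stop)
```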
The response object returned by the `client.chat` method contains several useful fields:

- `content`: The final content of the response.
- `raw_response`: The raw response payload for non-streaming requests, or the array of chunks for streaming requests.
- `stop_reason`: The reason why response generation was stopped.

Here is an example of how to use the response object:
```ruby
require 'llm_ruby'

# Initialize an LLM instance
llm = LLM.from_string!("gpt-4o")

# Create a client
client = llm.client

# Send a chat message
response = client.chat([{role: :user, content: "Hello, world!"}])

# Access the response fields
puts "Response content: #{response.content}"
puts "Raw response: #{response.raw_response}"
puts "Stop reason: #{response.stop_reason}"
```
LLMRuby supports various OpenAI models, including GPT-3.5, GPT-4, GPT-4o, and the o1/o3 reasoning variants. You can see the full list of supported models in the `KNOWN_MODELS` constant:
Canonical Name | Display Name |
---|---|
gpt-3.5-turbo | GPT-3.5 Turbo |
gpt-3.5-turbo-0125 | GPT-3.5 Turbo 0125 |
gpt-3.5-turbo-16k | GPT-3.5 Turbo 16K |
gpt-3.5-turbo-1106 | GPT-3.5 Turbo 1106 |
gpt-4 | GPT-4 |
gpt-4-1106-preview | GPT-4 Turbo 1106 |
gpt-4-turbo-2024-04-09 | GPT-4 Turbo 2024-04-09 |
gpt-4-0125-preview | GPT-4 Turbo 0125 |
gpt-4-turbo-preview | GPT-4 Turbo |
gpt-4-0613 | GPT-4 0613 |
gpt-4o | GPT-4o |
gpt-4o-mini | GPT-4o Mini |
gpt-4o-mini-2024-07-18 | GPT-4o Mini 2024-07-18 |
gpt-4o-2024-05-13 | GPT-4o 2024-05-13 |
gpt-4o-2024-08-06 | GPT-4o 2024-08-06 |
gpt-4o-2024-11-20 | GPT-4o 2024-11-20 |
chatgpt-4o-latest | ChatGPT 4o Latest |
o1 | o1 |
o1-2024-12-17 | o1 2024-12-17 |
o1-preview | o1 Preview |
o1-preview-2024-09-12 | o1 Preview 2024-09-12 |
o1-mini | o1 Mini |
o1-mini-2024-09-12 | o1 Mini 2024-09-12 |
o3-mini | o3 Mini |
o3-mini-2025-01-31 | o3 Mini 2025-01-31 |
Anthropic models are supported as well:

Canonical Name | Display Name |
---|---|
claude-3-5-sonnet-20241022 | Claude 3.5 Sonnet 2024-10-22 |
claude-3-5-haiku-20241022 | Claude 3.5 Haiku 2024-10-22 |
claude-3-5-sonnet-20240620 | Claude 3.5 Sonnet 2024-06-20 |
claude-3-opus-20240229 | Claude 3 Opus 2024-02-29 |
claude-3-sonnet-20240229 | Claude 3 Sonnet 2024-02-29 |
claude-3-haiku-20240307 | Claude 3 Haiku 2024-03-07 |
The supported Gemini models:

Canonical Name | Display Name |
---|---|
gemini-2.0-flash | Gemini 2.0 Flash |
gemini-2.0-flash-lite-preview-02-05 | Gemini 2.0 Flash Lite Preview 02-05 |
gemini-1.5-flash | Gemini 1.5 Flash |
gemini-1.5-pro | Gemini 1.5 Pro |
gemini-1.5-flash-8b | Gemini 1.5 Flash 8B |
Set your OpenAI, Anthropic, or Google API key as an environment variable:

```shell
export OPENAI_API_KEY=your_api_key_here
export ANTHROPIC_API_KEY=your_api_key_here
export GEMINI_API_KEY=your_api_key_here
```
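The client reads these variables from the environment, so a small guard before building a client fails faster and more clearly than an authentication error mid-request. `fetch_api_key!` below is a hypothetical helper, not part of the gem:

```ruby
# Hypothetical helper (not part of llm_ruby): fail fast when the
# provider's key is missing. Variable names match the exports above.
def fetch_api_key!(env_var)
  key = ENV[env_var]
  raise KeyError, "Set #{env_var} before creating a client" if key.nil? || key.empty?
  key
end

ENV["OPENAI_API_KEY"] ||= "sk-placeholder" # placeholder so the sketch runs
key = fetch_api_key!("OPENAI_API_KEY")
puts key.empty? # false
```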
OpenAI and Gemini models can be configured to generate responses that adhere to a provided schema. Although each provider uses a different format for configuring this schema, `llm_ruby` handles the translation for you, so you can share a single schema definition across models.
```ruby
require 'llm_ruby'

# Initialize an LLM instance
llm = LLM.from_string!("gpt-4o")

# Create a client
client = llm.client

# Define the expected response schema
# (or load it from a file: LLM::Schema.from_file('myschema.json'))
response_format = LLM::Schema.new("test_schema", {
  "type" => "object",
  "properties" => {
    "name" => {"type" => "string"},
    "age" => {"type" => "integer"}
  },
  "additionalProperties" => false,
  "required" => ["name", "age"]
})

# Send a chat message with the schema attached
response = client.chat(
  [{role: :user, content: "Hello, world!"}],
  response_format: response_format
)

# Access the parsed structured output
response.structured_output[:name]       # Alex
response.structured_output_object.name  # Alex
```
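The hash passed to `LLM::Schema.new` is ordinary JSON Schema, so the same definition can live in a `.json` file and be loaded with `LLM::Schema.from_file` for every provider. A stdlib-only sketch of writing and re-reading such a file (no gem or API key required):

```ruby
require 'json'
require 'tempfile'

# The same JSON Schema hash used with LLM::Schema.new above.
schema = {
  "type" => "object",
  "properties" => {
    "name" => {"type" => "string"},
    "age" => {"type" => "integer"}
  },
  "additionalProperties" => false,
  "required" => ["name", "age"]
}

# Persist it; LLM::Schema.from_file could then load the same file.
file = Tempfile.new(["myschema", ".json"])
file.write(JSON.pretty_generate(schema))
file.rewind

reloaded = JSON.parse(file.read)
puts reloaded == schema # true
```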
After checking out the repo, run `bin/setup` to install dependencies. Then run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run `bundle exec rake install`.
Bug reports and pull requests are welcome.
The gem is available as open source under the terms of the MIT License.