# RubyGPT

This gem aims to provide an easy-to-use Ruby wrapper for all modules of OpenAI's ChatGPT API. It is designed to be simple to use while providing a high level of customization, and it also aims to work efficiently in Ruby on Rails applications.
## Capabilities

The gem is in its early stages. It is designed to be easily extendable to other modules of the OpenAI APIs and products in the future. See the Contributing section for more details.
## Installation

Install the gem directly via Bundler:

```shell
$ bundle add rubygpt
```
Or add it to your project's `Gemfile`:

```ruby
gem "rubygpt"
```
And then execute:

```shell
$ bundle install
```
## Configuration
In order to access the OpenAI APIs, you must configure the Rubygpt client with your API key and the preferred ChatGPT model to use. This can be done globally for the entire application, or on a per-request basis.
```ruby
Rubygpt.configure(api_key: 'YOUR_API_KEY', model: 'gpt-3.5-turbo')
```
Alternatively, you can provide a block to set the configuration options:
```ruby
Rubygpt.configure do |config|
  config.api_key = 'YOUR_API_KEY'
  config.model = 'gpt-3.5-turbo'
end
```
The above examples will create a singleton client that works across the entire application.
If you'd like to use different configurations for different parts of your application, you can manually create client instances and configure them separately:
```ruby
client_gpt3 = Rubygpt::Client.new(api_key: 'YOUR_API_KEY', model: 'gpt-3.5-turbo')

client_gpt4 = Rubygpt::Client.new do |config|
  config.api_key = 'YOUR_SECOND_API_KEY'
  config.model = 'gpt-4'
  config.organization_id = 'OPENAI_ORG_ID'
end

chat_requester_gpt3 = Rubygpt::Requester::ChatRequester.new(client_gpt3)
chat_requester_gpt4 = Rubygpt::Requester::ChatRequester.new(client_gpt4)
```
The following attributes can be configured when initializing the client:

- `api_key` (required): Your OpenAI API key.
- `model` (required): The model to use for the API requests.
- `api_url`: The base URL for the API requests. The default is `https://api.openai.com/v1`.
- `organization_id`: The organization ID to use for the API requests.
- `connection_adapter`: The HTTP connection adapter to use for the API requests. The default is `:faraday`.
### Connection Adapters

The Rubygpt client uses Faraday to manage HTTP connections. This makes the full power of Faraday available for API requests, including diverse HTTP adapters and features like streaming.
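As a minimal sketch, the adapter can also be set explicitly via the `connection_adapter` configuration attribute. Note that `:faraday` is the only value documented here, so support for any other adapter symbol is an assumption rather than a documented feature:

```ruby
# Sketch only: :faraday is the documented default for connection_adapter;
# other adapter values are not documented in this README.
Rubygpt.configure do |config|
  config.api_key = 'YOUR_API_KEY'
  config.model = 'gpt-3.5-turbo'
  config.connection_adapter = :faraday # the default
end
```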
## Chat Completions API

Chat Completions is a text-completion feature provided by OpenAI's ChatGPT. It can be used to generate human-like responses to a given prompt and is one of ChatGPT's core features.
See the OpenAI Chat Completions API documentation and related Chat API reference for more information.
### Sending Messages
After configuring the Rubygpt client, you can perform requests to the Chat Completions API directly.
```ruby
Rubygpt.chat.create("Where is London?")

Rubygpt.chat.create(["Where is UK?", "What Continent?", "What timezone?"])
```
To use the received responses, refer to the Using Chat Completion Responses section.
### Customizing the Messages

By default, each message is sent with the `system` role and the content provided in the call.

```ruby
Rubygpt.chat.create("Test message.")
```
You can customize the request by providing additional parameters to the `create` method.
A `Message` object consists of the following attributes:

- `role`: The role of the message's author.
- `content`: The content of the message.
- `name`: The name of the message's author or the function name. (Has multiple use cases; see the OpenAI docs for details.)
- `tool_calls`: The tool calls generated by the model, such as function calls. (`role: assistant` only.)
- `tool_call_id`: The tool call that this message is responding to. (`role: tool` only.)
For extended details on the attributes, visit the OpenAI Chat API reference.
```ruby
Rubygpt.chat.create(role: 'user', content: "What is Ruby?")

Rubygpt.chat.create(role: 'assistant', name: 'furkan', content: "Ruby is a...")
```
It is also possible to send multiple message objects with a single request.
```ruby
messages = [
  { role: 'user', content: "foo" },
  { role: 'assistant', name: 'johndoe', content: "bar" }
]

Rubygpt.chat.create(messages)
Rubygpt.chat.create(messages:)
```
### Customizing the Requests

You can send any request body parameter supported by the OpenAI Chat Completions API to the `Rubygpt.chat.create` method. Just pass the messages in the `messages:` keyword argument; any additional parameters you provide will be passed through to the request body directly.
```ruby
Rubygpt.chat.create(
  n: 3,
  messages: ["Explain the history of Istanbul."],
  model: 'gpt-4-turbo-preview',
  max_tokens: 100,
  frequency_penalty: 1.0,
  temperature: 1,
  user: 'feapaydin'
)
```
### JSON Mode Messages
The JSON Mode is a feature of the Chat Completions API that forces the model to generate a JSON response.
A common way to use Chat Completions is to instruct the model to always return a JSON object that makes sense for your use case, by specifying this in the system message. While this does work in some cases, occasionally the models may generate output that does not parse to valid JSON objects.
See JSON Mode official docs for more details.
To send a message in JSON mode, simply pass the `json: true` option along with your message.
```ruby
Rubygpt.chat.create(content: "List all programming languages by their creation date in a json.", json: true)

messages = [
  { role: 'user', content: "List all programming languages by their creation date as JSON." },
  { role: 'user', content: "Also add each language creator's name to the JSON objects." }
]
Rubygpt.chat.create(messages:, json: true)
```
An important note: the `messages` data must contain the keyword "json" (e.g. "explain in JSON format...") when using JSON mode. This is required by the ChatGPT APIs.
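Once a JSON-mode response comes back, its content should parse with Ruby's standard `JSON` library. A minimal sketch, where the raw string below is an invented stand-in for the content a JSON-mode response might return:

```ruby
require 'json'

# Invented stand-in for the content of a JSON-mode response.
raw = '{"languages": [{"name": "Ruby", "created": 1995}, {"name": "Python", "created": 1991}]}'

# Parse the content into a plain Ruby hash and pull out the names.
data = JSON.parse(raw)
data["languages"].map { |lang| lang["name"] } # => ["Ruby", "Python"]
```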
### Stream Mode
Streaming mode is a feature of the Chat Completions API that allows the model to generate a continuous stream of messages. This is useful for chat applications where the model is expected to generate multiple or longer responses to a single prompt.
Stream mode is not supported at the moment, but it is planned for future versions.
### Using Chat Completion Responses
Regardless of the number of messages sent, the response will be an instance of the `Response::ChatCompletion` object, which wraps a set of methods for easy access to the response data provided by the OpenAI Chat API: Create endpoint.
```ruby
response = Rubygpt.chat.create("What time is it?", "Also tell me the date.")

response.messages
response.read
response.failed?
response.cost
response.to_h
```
Each choice in the response is an instance of the `Response::ChatCompletion::Choice` object.
```ruby
response = Rubygpt.chat.create("What time is it?", "Also tell me the date.")

response.choices
response.choices.first.index
response.choices.first.message
response.choices.first.content
response.choices.first.role
response.choices.first.finish_reason
response.choices.first.failed?
response.choices.first.to_h
response.choices.first.logprobs
```
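When a request asks for multiple completions (e.g. `n: 3`), iterating over `choices` is the natural pattern. A sketch using a plain `Struct` as an invented stand-in for `Response::ChatCompletion::Choice`, limited to the readers listed above:

```ruby
# Invented stand-in exposing a subset of the Choice readers listed above.
Choice = Struct.new(:index, :content, :finish_reason)

choices = [
  Choice.new(0, "It is 12:00.", "stop"),
  Choice.new(1, "Around noon.", "stop")
]

# Collect the content of every choice that finished normally.
contents = choices.select { |c| c.finish_reason == "stop" }.map(&:content)
```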
## Development

After checking out the repo, run `bin/setup` to install dependencies. Then, run `rspec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

The console contains a pre-configured Rubygpt client with a test API key. See `bin/console` for more details.
To set the API key for the tests, set the environment variable `OPENAI_API_KEY`:

```shell
export OPENAI_API_KEY=your_api_key
```
## Contributing
Bug reports and pull requests are welcome on GitHub at https://github.com/feapaydin/rubygpt. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the code of conduct.
Participation is very welcome as this project is still in the early stages of development. You can contribute by addressing issues flagged as `good first issue` or `help wanted` in the issues section. You can also contribute by opening new issues, suggesting new features, or reporting bugs.
## Code of Conduct
Everyone interacting in the Rubygpt project's codebases, issue trackers, chat rooms and mailing lists is expected to follow the code of conduct.