# ChatGPT

This is a Ruby wrapper (beta) for the OpenAI ChatGPT API. Read more at https://beta.openai.com/docs/introduction.
## Installation

Install the gem and add it to the application's Gemfile by executing:

```shell
bundle add chat_gpt
```

If bundler is not being used to manage dependencies, install the gem by executing:

```shell
gem install chat_gpt
```
## Usage

Visit https://openai.com to get an API key for the ChatGPT API.

```ruby
require 'chat_gpt'

ChatGpt.configure do |config|
  config.key = 'API_KEY'
end

chat_gpt = ChatGpt::Api.new

response = chat_gpt.send("Hello, how are you?")
puts response.inspect

# Continue an existing conversation by passing its id
response = chat_gpt.send("That's good to hear. What have you been up to lately?", conversation_id: 'abc123')
puts response.inspect
```
The response has the following structure:

```ruby
{
  conversation_id: 'unique_conversation_id',
  responses: [
    'An array of messages from ChatGPT'
  ]
}
```
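Given that structure, reading a reply out of the hash is straightforward. The values below are placeholders, not real API output:

```ruby
# Placeholder response hash mirroring the documented structure.
response = {
  conversation_id: 'abc123',
  responses: ['Hello! I am doing well, thank you for asking.']
}

# Keep the id to continue the conversation, and print the latest message.
conversation_id = response[:conversation_id]
puts response[:responses].last
```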
### Parameter: `max_tokens`

In the context of the ChatGPT API, `max_tokens` is the maximum number of tokens (roughly, individual words or punctuation marks) that can be included in each response generated by ChatGPT. It limits the length of generated responses and prevents the API from producing excessively long output.

By default, `max_tokens` is set to 100, so ChatGPT generates responses of at most 100 tokens each. You can adjust this parameter per request to produce longer or shorter responses, depending on your needs.

Note that a higher `max_tokens` value will not necessarily produce longer responses: the actual length also depends on the input prompt and the model being used. The parameter is also subject to limits imposed by the API provider, so be aware of those limits when setting it.
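The gem's exact option-passing interface is not documented here; as a sketch, per-request overrides such as `max_tokens` might be merged over library defaults along these lines (the constant and method names below are hypothetical, not part of the gem):

```ruby
# Hypothetical sketch: merging per-request options such as :max_tokens
# over library defaults before building the API request body.
DEFAULT_PARAMS = { max_tokens: 100, temperature: 0.5 }.freeze

def build_request_params(prompt, **overrides)
  # Request-specific overrides win over the defaults.
  DEFAULT_PARAMS.merge(overrides).merge(prompt: prompt)
end

params = build_request_params("Summarize this article", max_tokens: 250)
# params[:max_tokens] is now 250; :temperature keeps its default
```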
### Parameter: `temperature`

For a language model such as ChatGPT, the `temperature` parameter controls the level of randomness in the model's responses. A higher temperature makes the model generate more diverse and unpredictable responses, while a lower temperature makes its output more predictable and repetitive.

For example, with a high temperature such as 0.8, the model is more likely to generate unique and creative responses to your prompts. This is useful if you want to explore different ideas or possibilities, or if you want more interesting and varied output.

Conversely, with a low temperature such as 0.2, the model is more likely to generate responses that resemble previous ones and follow predictable patterns. This is useful if you want more consistent and coherent responses, or if you want to control the content more closely.

Overall, `temperature` lets you adjust the level of randomness in the model's output and fine-tune it to suit your needs.
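Under the hood, temperature works by rescaling the model's token scores before sampling. This pure-Ruby illustration (not part of the gem) shows the effect: scores are divided by the temperature before a softmax, so low temperatures sharpen the distribution and high temperatures flatten it.

```ruby
# Illustration only: how temperature reshapes a probability distribution
# over candidate tokens.
def softmax_with_temperature(logits, temperature)
  exps = logits.map { |l| Math.exp(l / temperature) }
  total = exps.sum
  exps.map { |e| e / total }
end

logits = [2.0, 1.0, 0.5]                       # raw scores for three candidate tokens
sharp = softmax_with_temperature(logits, 0.2)  # low temperature: top token dominates
flat  = softmax_with_temperature(logits, 0.8)  # high temperature: probabilities even out
```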
## ChatGPT models

The ChatGPT API offers several models to choose from, each with its own characteristics and capabilities. The available models are:

- `curie`: Named after Marie Curie, the scientist known for her pioneering work on radioactivity. Well-suited for scientific or technical prompts.
- `ada`: Named after Ada Lovelace, the 19th-century mathematician considered the world's first computer programmer. Well-suited for mathematical or computational prompts.
- `grace`: Named after Grace Hopper, a pioneering computer scientist and United States Navy rear admiral. Well-suited for prompts related to computers, technology, or the military.
- `flora`: Named after Flora Stieglitz, a pioneering photographer known for her portraits and landscapes. Well-suited for prompts related to photography or the arts.
- `davinci`: Named after Leonardo da Vinci, the famous artist, scientist, and inventor. Well-suited for prompts that are broad or open-ended in nature.
## Development

After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.

To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to rubygems.org.
## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/dpaluy/chat_gpt. This project is intended to be a safe, welcoming space for collaboration, and contributors are expected to adhere to the code of conduct.
## License

The gem is available as open source under the terms of the MIT License.

## Code of Conduct

Everyone interacting in the ChatGpt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the code of conduct.