# chat-completion-utils
ChatCompletion Utils is a Python library that provides utility functions for interacting with OpenAI's Chat Completion API. It lets you generate chat-friendly prompts, count tokens, and auto-select which model to use based on context length for the chat completion endpoint. It also has a few other bells and whistles ;)
Ensure you have installed the required packages:

```bash
poetry install
```

First, set your OpenAI API key in the environment:

```bash
export OPENAI_API_KEY=<your_api_key>
```

Then, you can use the functions provided in the library:
```python
from chat_completion_utils import (
    llm,
    _code_prompt,
    num_tokens_from_messages,
    select_model,
    build_prompt,
)

# Generate a 'ChatCompletion' response with the OpenAI API
book_summary = llm(
    "Provide a brief summary of the following book series.",  # 'system' instruction
    "Harry Potter series.",  # 'user' prompt
    0.5,  # model temperature
)
print(book_summary)
```
```python
# Use 'prompt partials' (e.g. `_code_prompt()`) to add pre-defined
# protective language to your prompts
web_app = llm(
    _code_prompt(),
    "generate a React web application framework",
)
```
```python
# Calculate tokens in a list of messages
# - `llm()` bundles this functionality
messages = [
    {"role": "system", "content": "Translate the following English to French"},
    {"role": "user", "content": "Hello, how are you?"},
]
token_count = num_tokens_from_messages(messages, model="gpt-4")
print(token_count)
```
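Message-level token counting for chat models typically follows OpenAI's published accounting: each message carries a fixed overhead on top of the encoded length of its fields, and the assistant's reply is primed with a few extra tokens. A minimal sketch of that scheme, with a naive word-count stand-in for a real tokenizer such as `tiktoken` (the constants and the helper name are assumptions, not this library's internals):

```python
def count_chat_tokens(messages, encode, tokens_per_message=3, reply_priming=3):
    """Approximate token accounting for OpenAI-style chat messages.

    Each message costs a fixed overhead (role/formatting tokens) plus the
    encoded length of every field; the reply is primed with a few extra
    tokens. Constants follow OpenAI's guidance for gpt-4-style models,
    but are an assumption here.
    """
    total = 0
    for message in messages:
        total += tokens_per_message
        for value in message.values():
            total += len(encode(value))
    return total + reply_priming


# Naive stand-in encoder: one "token" per whitespace-separated word.
naive_encode = lambda text: text.split()

messages = [
    {"role": "system", "content": "Translate the following English to French"},
    {"role": "user", "content": "Hello, how are you?"},
]
print(count_chat_tokens(messages, naive_encode))
```

A real implementation would swap `naive_encode` for `tiktoken.encoding_for_model(model).encode`, which is the only part that depends on the model's vocabulary.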
```python
# Select the appropriate model to use based on token count
# - `llm()` bundles this functionality
# - auto-switches to 'gpt-4-32k' if needed, otherwise uses the cheaper
#   'gpt-4' (or 'gpt-3.5-turbo' if you ask it to)
selected_model = select_model(messages)
print(selected_model)
```
```python
# Construct prompt objects
# - `llm()` bundles this functionality
# - You shouldn't need to use this, but... maybe I'm wrong. Go wild!
prompt = build_prompt(
    system_content=_code_prompt("Generate Ruby code for the given user prompt"),
    user_content="function to compute a factorial.",
)
print(prompt)
```
### `llm()`

Generate a chat-based completion using the OpenAI API.

Returns (str): the response from the model.

### `_code_prompt()`

Generate a code-only prompt.

Returns (str): the modified prompt for code-only output.

### `num_tokens_from_messages()`

Count the number of tokens used by a list of messages.

Returns (int): the number of tokens used by the list of messages.

### `select_model()`

Select the appropriate model based on token count.

Returns (str): the selected model name.

### `build_prompt()`

Build a list of messages to use as input for the OpenAI API.

Returns (list): a list of messages to be used as input for the OpenAI API.
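Since `build_prompt()` returns a message list for the Chat Completion API, the shape it produces is presumably the standard role/content structure. A plausible sketch of that shape (assumed, not taken from the library's source):

```python
def build_prompt(system_content, user_content):
    """Assemble an OpenAI-style chat message list (assumed shape)."""
    return [
        {"role": "system", "content": system_content},
        {"role": "user", "content": user_content},
    ]


prompt = build_prompt(
    "Generate Ruby code for the given user prompt",
    "function to compute a factorial.",
)
print(prompt[0]["role"])  # "system"
```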
The MODELS constant is a dictionary containing information about the supported models and their properties, such as the maximum number of tokens allowed.
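A table of that shape, together with the token-threshold selection `select_model()` describes, might look like the following sketch (the model names, context limits, and function logic here are illustrative assumptions, not the library's actual values):

```python
# Illustrative MODELS-style table: model name -> max context tokens
# (assumed values, not the library's actual constant).
MODELS = {
    "gpt-3.5-turbo": 4096,
    "gpt-4": 8192,
    "gpt-4-32k": 32768,
}


def select_model_sketch(token_count, prefer="gpt-4"):
    """Pick the preferred (cheaper) model when the prompt fits its context,
    upgrading to 'gpt-4-32k' only on overflow (assumed logic)."""
    if token_count <= MODELS[prefer]:
        return prefer
    if token_count <= MODELS["gpt-4-32k"]:
        return "gpt-4-32k"
    raise ValueError("prompt exceeds the largest supported context window")


print(select_model_sketch(1000))   # small prompt stays on the preferred model
print(select_model_sketch(20000))  # large prompt upgrades to the 32k variant
```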