
llm-quiver is a helper tool for calling online LLM service APIs, with support for local caching, prompt rendering, and configuration management.
```bash
pip install llm-quiver
```
Assume you have a configuration file at path/to/gpt.toml. Fill in your own API_KEY; the contents look like this:
```toml
API_TYPE = "azure_openai"
API_BASE = "https://endpoint.openai.azure.com/"
API_VERSION = "2023-05-15"
API_KEY = "********************************"
MODEL_NAME = "gpt-4o-20240513"
temperature = 0.0
max_tokens = 4096
enable_cache = true
cache_dir = "oai_cache"
```
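The enable_cache and cache_dir options persist responses on disk so that repeated calls with identical inputs are answered locally instead of hitting the API again. A minimal sketch of how such a prompt-keyed cache could work (illustrative only — the function names and key scheme here are assumptions, not llm-quiver internals):

```python
import hashlib
import json
from pathlib import Path


def cache_key(prompt: str, model: str, temperature: float) -> str:
    """Derive a stable filename from everything that affects the response."""
    payload = json.dumps(
        {"prompt": prompt, "model": model, "temperature": temperature},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()


def cached_call(prompt: str, call_api, cache_dir: str = "oai_cache") -> str:
    """Return a cached response if present; otherwise call the API and store it."""
    path = Path(cache_dir) / f"{cache_key(prompt, 'gpt-4o-20240513', 0.0)}.json"
    if path.exists():
        return json.loads(path.read_text())["response"]
    response = call_api(prompt)
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"response": response}))
    return response
```

Because the key covers the model and sampling parameters as well as the prompt, changing any of them bypasses stale entries.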
Running code:

```python
from llm_quiver import LLMQuiver

# Initialize
llm = LLMQuiver(
    config_path="path/to/gpt.toml",
)

# Text generation mode (the default role is "system")
prompt_values = ["Who are you?"]
responses = llm.generate(prompt_values)
# ["I am an AI assistant developed by OpenAI, designed to help answer questions, provide information, and complete various tasks. How can I help you?"]

# Chat mode
messages = [[{"role": "user", "content": "Who are you?"}]]
responses = llm.chat(messages)
# ["I am an AI assistant developed by OpenAI, designed to help answer questions, provide information, and engage in conversations. Feel free to ask me anything!"]
```
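Note that chat takes a batch: a list of conversations, where each conversation is itself a list of OpenAI-style message dicts. A multi-turn conversation (the system message below is illustrative, not from the package) would be structured like this:

```python
# One batch containing a single two-turn conversation; each message follows
# the {"role": ..., "content": ...} format shown above.
messages = [[
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Who are you?"},
]]
```

Calling llm.chat(messages) on such a batch would then be expected to return one reply string per conversation, matching the single-element response lists in the examples above.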
First, create a TOML template file, for example hello_world.toml:
```toml
[hello_world_template]
prompt = "Hello {name}, who are you?"
```
Then you can use it like this:
```python
from llm_quiver import TomlLLMQuiver

# Specify the template during initialization
llm = TomlLLMQuiver(
    config_path="path/to/gpt.toml",
    toml_prompt_name="hello_world_template",
    toml_template_file="path/to/hello_world.toml",
)

# Pass template parameters
prompt_values = [dict(name="GPT")]
responses = llm.generate(prompt_values)
```
There are two ways to configure the API key and other parameters: pass config_path="path/to/config.toml" when constructing the object, or set the environment variable LLMQUIVER_CONFIG (export LLMQUIVER_CONFIG=path/to/config.toml). Individual parameters such as API_TYPE, API_BASE, API_VERSION, API_KEY, and MODEL_NAME can also be set as environment variables.
```python
llm = TomlLLMQuiver(
    config_path="path/to/config.toml",
    toml_prompt_name="template_name",
    toml_template_file="path/to/template.toml",
)
```
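The environment-variable route described above can look like this in a shell (the values are placeholders):

```shell
# Point llm-quiver at a config file...
export LLMQUIVER_CONFIG=path/to/config.toml

# ...or set individual parameters directly
export API_TYPE=azure_openai
export API_BASE=https://endpoint.openai.azure.com/
export API_VERSION=2023-05-15
export API_KEY=your-api-key
export MODEL_NAME=gpt-4o-20240513
```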
Configuration file example:
```toml
API_TYPE = "azure_openai"
API_BASE = "https://endpoint.openai.azure.com/"
API_VERSION = "2023-05-15"
API_KEY = "********************************"
MODEL_NAME = "gpt-4o-20240513"
temperature = 0.0
max_tokens = 4096
enable_cache = true
cache_dir = "oai_cache"
```