
LLMQuiver is a helper tool for invoking online LLM service APIs, with support for local caching, prompt rendering, and configuration management.
English | 简体中文
pip install llm-quiver
Assuming you have a configuration file at path/to/gpt.toml, fill in your own API_KEY; its contents look like this:
API_TYPE = "azure_openai"
API_BASE = "https://endpoint.openai.azure.com/"
API_VERSION = "2023-05-15"
API_KEY = "********************************"
MODEL_NAME = "gpt-4o-20240513"
temperature = 0.0
max_tokens = 4096
enable_cache = true
cache_dir = "oai_cache"
Example code:
from llm_quiver import LLMQuiver
# Initialize
llm = LLMQuiver(
config_path="path/to/gpt.toml",
)
# Text generation mode
prompt_values = ["Who are you?"]
responses = llm.generate(prompt_values)
# Default role is system
# ["I am an AI assistant developed by OpenAI, designed to help answer questions, provide information, and complete various tasks. How can I help you?"]
# Chat mode
messages = [[{"role": "user", "content": "Who are you?"}]]
responses = llm.chat(messages)
# ["I am an AI assistant developed by OpenAI, designed to help answer questions, provide information, and engage in conversations. Feel free to ask me anything!"]
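Since llm.chat takes a batch of conversations, a multi-turn exchange is expressed as a longer message list inside the batch. The sketch below only constructs the message structure (role names follow the standard OpenAI chat convention; the batch-of-conversations shape is inferred from the example above):

```python
# Each element of the batch is one conversation: an ordered list of
# {"role", "content"} messages (OpenAI-style roles).
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
    {"role": "assistant", "content": "I am an AI assistant."},
    {"role": "user", "content": "What else can you do?"},
]

# llm.chat expects a list of conversations, so wrap it in a batch of one.
messages = [conversation]
# responses = llm.chat(messages)  # one response per conversation in the batch
```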
First, create a TOML template file, for example hello_world.toml:
[hello_world_template]
prompt = "Hello {name}, who are you?"
Then you can use it like this:
from llm_quiver import TomlLLMQuiver
# Specify template during initialization
llm = TomlLLMQuiver(
config_path="path/to/gpt.toml",
toml_prompt_name="hello_world_template",
toml_template_file="path/to/hello_world.toml"
)
# Pass template parameters
prompt_values = [dict(name="GPT")]
responses = llm.generate(prompt_values)
There are two ways to configure the API key and other parameters: pass config_path="path/to/config.toml" when constructing the client, or set the environment variable LLMQUIVER_CONFIG=path/to/config.toml. Individual parameters such as API_TYPE, API_BASE, API_VERSION, API_KEY, and MODEL_NAME can also be set as environment variables.
llm = TomlLLMQuiver(
config_path="path/to/config.toml",
toml_prompt_name="template_name",
toml_template_file="path/to/template.toml"
)
Configuration file example:
API_TYPE = "azure_openai"
API_BASE = "https://endpoint.openai.azure.com/"
API_VERSION = "2023-05-15"
API_KEY = "********************************"
MODEL_NAME = "gpt-4o-20240513"
temperature = 0.0
max_tokens = 4096
enable_cache = true
cache_dir = "oai_cache"
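With enable_cache = true, repeated identical requests should be served from cache_dir instead of hitting the API again. The on-disk format is internal to llm-quiver; the sketch below only illustrates the general idea of keying cached responses by a hash of the request (all names here are hypothetical):

```python
import hashlib
import json
import os
import tempfile

def cache_key(model: str, prompt: str, temperature: float) -> str:
    """Deterministic key: identical requests map to the same file."""
    payload = json.dumps(
        {"model": model, "prompt": prompt, "temperature": temperature},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

cache_dir = tempfile.mkdtemp()  # stand-in for the configured cache_dir
key = cache_key("gpt-4o-20240513", "Who are you?", 0.0)
path = os.path.join(cache_dir, key + ".json")

# Cache miss: call the API (simulated here) and store the response.
if not os.path.exists(path):
    with open(path, "w") as f:
        json.dump({"response": "I am an AI assistant."}, f)

# Cache hit: later identical requests read the stored response instead.
with open(path) as f:
    cached = json.load(f)["response"]
print(cached)  # I am an AI assistant.
```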