
The giskard_hub library allows you to interact with the Giskard Hub, a platform that centralizes the validation process of LLM applications. It empowers product teams to ensure that all functional, business, and legal requirements are met, and keeps them in close contact with the development team to avoid delayed deployment timelines.
The giskard_hub Python library provides a simple way for developers and data scientists to manage and evaluate LLM applications in their development workflow, during the prototyping phase and for continuous integration testing.
Read the quickstart guide to get up and running with the giskard_hub library. You will learn how to execute local evaluations from a notebook, script, or CLI, and synchronize them to the Giskard Hub platform.
Access the full docs at: https://docs-hub.giskard.ai/
The library is compatible with Python 3.9 to 3.12. You can install it with pip:
pip install giskard-hub
You can now use the client to interact with the Hub. You will be able to control the Hub programmatically, independently of the UI. Let's start by initializing a client instance:
from giskard_hub import HubClient
hub = HubClient()
You can provide the API key and Hub URL as arguments. To find your personal API key, head over to your Giskard Hub instance and click on the user icon in the top right corner; then click the button to copy the key.
hub = HubClient(
    api_key="YOUR_GSK_API_KEY",
    hub_url="THE_GSK_HUB_URL",
)
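If you prefer not to hard-code credentials, a common pattern is to read them from environment variables yourself. The variable names GSK_API_KEY and GSK_HUB_URL below are assumptions for illustration, not necessarily names the client reads automatically; check the docs for the exact configuration mechanism.

```python
import os

def load_hub_config(env=os.environ):
    """Read Hub credentials from the environment.

    The variable names GSK_API_KEY / GSK_HUB_URL are illustrative
    assumptions -- check the Giskard Hub docs for the exact names.
    """
    api_key = env.get("GSK_API_KEY")
    hub_url = env.get("GSK_HUB_URL")
    if not api_key or not hub_url:
        raise RuntimeError("Set GSK_API_KEY and GSK_HUB_URL first")
    return api_key, hub_url

# Usage:
# api_key, hub_url = load_hub_config()
# hub = HubClient(api_key=api_key, hub_url=hub_url)
```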
You can now use the hub client to control the Giskard Hub! Let's start by creating a fresh project.
project = hub.projects.create(
    name="My first project",
    description="This is a test project to get started with the Giskard Hub client library",
)
That's it! You have created a project. You will now see it in the Hub UI project selector.
Tip
If you already have an existing project, you can easily retrieve it. Either use hub.projects.list() to get a list of all projects, or use hub.projects.retrieve("YOUR_PROJECT_ID") to get a specific project.
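Since hub.projects.list() returns all projects, a small lookup helper can find one by name. The helper below is illustrative, not part of the library; it works on any iterable of objects exposing a name attribute.

```python
def find_project_by_name(projects, name):
    """Return the first project whose ``name`` matches, or None.

    ``projects`` is any iterable of objects with a ``name`` attribute,
    such as the result of ``hub.projects.list()``.
    """
    return next((p for p in projects if p.name == name), None)

# Usage (assuming an initialized client):
# project = find_project_by_name(hub.projects.list(), "My first project")
```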
Let's now create a dataset and add a conversation example.
# Let's create a dataset
dataset = hub.datasets.create(
    project_id=project.id,
    name="My first dataset",
    description="This is a test dataset",
)
We can now add a conversation example to the dataset. This will be used for the model evaluation.
# Add a conversation example
import random

hub.conversations.create(
    dataset_id=dataset.id,
    messages=[
        dict(role="user", content="What is the capital of France?"),
        dict(role="assistant", content="Paris"),
        dict(role="user", content="What is the capital of Germany?"),
    ],
    demo_output=dict(
        role="assistant",
        content="I don't know that!",
        metadata=dict(
            response_time=random.random(),
            test_metadata="No matter which kind of metadata",
        ),
    ),
    checks=[
        dict(identifier="correctness", params={"reference": "Berlin"}),
        dict(
            identifier="conformity",
            params={"rules": ["The agent should always provide short and concise answers."]},
        ),
    ],
)
These are the attributes you can set for a conversation (the only required attribute is `messages`):

- `messages`: A list of messages in the conversation. Each message is a dictionary with the following keys:
  - `role`: The role of the message, either "user" or "assistant".
  - `content`: The content of the message.
- `demo_output`: A demonstration of a (possibly wrong) output from the model, with optional metadata. This is just for demonstration purposes.
- `checks`: A list of checks that the conversation should pass. This is used for evaluation. Each check is a dictionary with the following keys:
  - `identifier`: The identifier of the check. If it's a built-in check, you will also need to provide the `params` dictionary. The built-in checks are:
    - `correctness`: The output of the model should match the reference.
    - `conformity`: The conversation should follow a set of rules.
    - `groundedness`: The output of the model should be grounded in the conversation.
    - `string_match`: The output of the model should contain a specific string (keyword or sentence).
    - `metadata`: The metadata output of the model should match a list of JSON path rules.
  - `params`: A dictionary of parameters for built-in checks. The parameters depend on the check type:
    - For the `correctness` check, the parameter is `reference` (type: `str`), which is the expected output.
    - For the `conformity` check, the parameter is `rules` (type: `list[str]`), which is a list of rules that the conversation should follow.
    - For the `groundedness` check, the parameter is `context` (type: `str`), which is the context in which the model should ground its output.
    - For the `string_match` check, the parameter is `keyword` (type: `str`), which is the string that the model's output should contain.
    - For the `metadata` check, the parameter is `json_path_rules` (type: `list[dict]`), which is a list of dictionaries with the following keys:
      - `json_path`: The JSON path to the value that the model's output should contain.
      - `expected_value`: The expected value at the JSON path.
      - `expected_value_type`: The expected type of the value at the JSON path, one of `string`, `number`, `boolean`.

You can add as many conversations as you want to the dataset.
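To make the metadata check's rule format concrete, here is a hypothetical sketch of how a single json_path rule could be evaluated against a model's metadata dictionary. This mirrors the rule shape described above but is not the Hub's actual implementation, and it only supports simple dotted paths.

```python
def check_json_path_rule(metadata, rule):
    """Evaluate one json_path rule against a metadata dict (illustrative).

    Supports simple dotted paths like "$.user.plan"; a real JSON path
    engine handles far more syntax.
    """
    type_map = {"string": str, "number": (int, float), "boolean": bool}
    value = metadata
    # Walk the dotted path segment by segment.
    for key in rule["json_path"].lstrip("$.").split("."):
        if not isinstance(value, dict) or key not in value:
            return False
        value = value[key]
    # bool is a subclass of int in Python, so exclude it from "number".
    if rule["expected_value_type"] == "number" and isinstance(value, bool):
        return False
    expected_type = type_map[rule["expected_value_type"]]
    return isinstance(value, expected_type) and value == rule["expected_value"]
```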
Again, you'll find your newly created dataset in the Hub UI.
Before running our first evaluation, we'll need to set up a model. You'll need an API endpoint ready to serve the model. Then, you can configure the model API in the Hub:
model = hub.models.create(
    project_id=project.id,
    name="My Bot",
    description="A chatbot for demo purposes",
    url="https://my-model-endpoint.example.com/bot_v1",
    supported_languages=["en", "fr"],
    # if your model endpoint needs special headers:
    headers={"X-API-Key": "MY_TOKEN"},
)
We can test that everything is working well by running a chat with the model:
response = model.chat(
    messages=[
        dict(role="user", content="What is the capital of France?"),
        dict(role="assistant", content="Paris"),
        dict(role="user", content="What is the capital of Germany?"),
    ],
)
print(response)
If all is working well, this will return something like:
ModelOutput(
    message=ChatMessage(
        role='assistant',
        content='The capital of Germany is Berlin.'
    ),
    metadata={}
)
We can now launch a remote evaluation of our model!
eval_run = hub.evaluate(
    model=model,
    dataset=dataset,
    name="test-run",  # optional
)
The evaluation will run asynchronously on the Hub. To retrieve the results once the run is complete, you can use the following:
# This will block until the evaluation status is "finished"
eval_run.wait_for_completion()
# Print the evaluation metrics
eval_run.print_metrics()
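Because wait_for_completion() blocks until the run finishes, you may want a timeout for long runs. The polling helper below is an illustrative stand-in, not library code: get_status is any zero-argument callable returning the current run status as a string (for instance, one that refreshes the evaluation run from the Hub and reads its status).

```python
import time

def wait_until_finished(get_status, timeout=600.0, interval=5.0):
    """Poll ``get_status()`` until it returns "finished" or time runs out.

    Illustrative helper: ``get_status`` stands in for however you refresh
    the evaluation run's status from the Hub.
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status == "finished":
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"evaluation still '{status}' after {timeout}s")
        time.sleep(interval)
```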
Tip
You can directly pass IDs to the evaluate function, e.g. model=model_id and dataset=dataset_id, without having to retrieve the objects first.