
openinference-instrumentation-haystack
Python auto-instrumentation library for LLM applications implemented with Haystack.
Haystack Pipelines and Components (e.g. PromptBuilder, OpenAIGenerator) are fully OpenTelemetry-compatible, and traces can be sent to an OpenTelemetry collector for monitoring, such as arize-phoenix.
pip install openinference-instrumentation-haystack
This quickstart shows you how to instrument your Haystack-orchestrated LLM application.
Through your terminal, install required packages.
pip install openinference-instrumentation-haystack haystack-ai arize-phoenix opentelemetry-sdk opentelemetry-exporter-otlp
You can install Phoenix and start it with the following terminal commands:
pip install arize-phoenix
python -m phoenix.server.main serve
Start Phoenix in the background as a collector. By default, it listens on http://localhost:6006, and you can visit the app in a browser at the same address. (Phoenix does not send data over the internet; it operates only locally on your machine.)
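Before wiring up the exporter, you can confirm the local collector is actually up. A minimal standard-library sketch, assuming the default local Phoenix address; `phoenix_reachable` is an illustrative helper, not part of the Phoenix API:

```python
from urllib.request import urlopen


def phoenix_reachable(url: str = "http://localhost:6006", timeout: float = 2.0) -> bool:
    """Return True if `url` responds without a connection error within `timeout` seconds."""
    try:
        with urlopen(url, timeout=timeout):
            return True
    except OSError:  # connection refused, timeout, DNS failure, HTTP error, ...
        return False


if __name__ == "__main__":
    print("Phoenix up:", phoenix_reachable())
```

If this prints `False`, start Phoenix with the command above before running the instrumented application.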
Try the following in a Python file.
Set up HaystackInstrumentor to trace your application and send the traces to Phoenix at the endpoint defined below.
from openinference.instrumentation.haystack import HaystackInstrumentor
from opentelemetry import trace as trace_api
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk import trace as trace_sdk
from opentelemetry.sdk.trace.export import SimpleSpanProcessor
import os
# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "YOUR_KEY_HERE"
# Set up the tracer, using Arize Phoenix as the endpoint
endpoint = "http://127.0.0.1:6006/v1/traces"
tracer_provider = trace_sdk.TracerProvider()
trace_api.set_tracer_provider(tracer_provider)
tracer_provider.add_span_processor(SimpleSpanProcessor(OTLPSpanExporter(endpoint)))
# Instrument the Haystack application
HaystackInstrumentor().instrument(tracer_provider=tracer_provider)
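The collector endpoint above is hardcoded. If Phoenix runs on a different host or port (for example, inside a container), a small helper can assemble the URL; `otlp_traces_endpoint` is an illustrative name, and `/v1/traces` is the standard OTLP/HTTP traces path that Phoenix serves:

```python
def otlp_traces_endpoint(host: str = "127.0.0.1", port: int = 6006) -> str:
    """Build the OTLP/HTTP traces URL for a Phoenix collector."""
    return f"http://{host}:{port}/v1/traces"


# Matches the endpoint used in the snippet above:
print(otlp_traces_endpoint())  # http://127.0.0.1:6006/v1/traces
```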
Set up a simple Pipeline with an OpenAIGenerator component.
from haystack import Pipeline
from haystack.components.generators import OpenAIGenerator
# Initialize the pipeline
pipeline = Pipeline()
# Initialize the OpenAI generator component
llm = OpenAIGenerator(model="gpt-3.5-turbo")
# Add the generator component to the pipeline
pipeline.add_component("llm", llm)
# Define the question
question = "What is the location of the Hanging Gardens of Babylon?"
# Run the pipeline with the question
response = pipeline.run({"llm": {"prompt": question}})
print(response)
Now, in the Phoenix UI in your browser, you should see the traces from your Haystack application. Specifically, you can see attributes from the execution of the OpenAIGenerator.
FAQs
OpenInference Haystack Instrumentation
We found that openinference-instrumentation-haystack demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.