# Welcome to TruLens-Eval!
Don't just vibe-check your LLM app! Systematically evaluate and track your
LLM experiments with TruLens. As you develop your app, including prompts, models,
retrievers, knowledge sources and more, TruLens-Eval is the tool you need to
understand its performance.
Fine-grained, stack-agnostic instrumentation and comprehensive evaluations help
you to identify failure modes & systematically iterate to improve your
application.
Read more about the core concepts behind TruLens, including [Feedback
Functions](https://www.trulens.org/trulens_eval/getting_started/core_concepts/),
the RAG Triad, and Honest, Harmless and Helpful Evals.
## TruLens in the development workflow
Build your first prototype, then connect instrumentation and logging with
TruLens. Decide which feedback functions you need, and specify them with TruLens
to run alongside your app. Then iterate and compare versions of your app in an
easy-to-use user interface 👇
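That comparison UI ships with the package. A minimal sketch of launching it from Python, assuming the default local database:

```python
from trulens_eval import Tru

tru = Tru()  # connects to the default local SQLite database

# Open the TruLens dashboard in your browser to inspect records,
# feedback results, and side-by-side app version comparisons.
tru.run_dashboard()
```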
## Installation and Setup
Install the trulens-eval pip package from PyPI.

```bash
pip install trulens-eval
```
## Quick Usage
Walk through how to instrument and evaluate a RAG built from scratch with
TruLens.
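The full quickstart builds a RAG from scratch; the sketch below shows the same instrument-and-evaluate pattern for a simple text-to-text app. It assumes an OpenAI API key is available in the environment; `llm_standalone` and the app id are placeholder names standing in for your own app:

```python
from trulens_eval import Feedback, Tru, TruBasicApp
from trulens_eval.feedback.provider import OpenAI

tru = Tru()

def llm_standalone(prompt: str) -> str:
    # Placeholder: replace with your own model call
    return "TruLens is an evaluation and tracking library for LLM apps."

# Feedback function: answer relevance, judged by an OpenAI model
provider = OpenAI()
f_relevance = Feedback(provider.relevance).on_input_output()

# Wrap the app so every call is recorded and evaluated
tru_app = TruBasicApp(llm_standalone, app_id="rag_v1", feedbacks=[f_relevance])

with tru_app as recording:
    tru_app.app("What is TruLens?")

tru.run_dashboard()  # browse the recorded run and its feedback scores
```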
## 💡 Contributing
Interested in contributing? See our contributing
guide for more details.