EvalSharp 🧠

LLM Evaluation for .NET Developers — No Python Required

EvalSharp brings the power of reliable LLM evaluation directly to your C# projects. Inspired by DeepEval, but designed for the .NET ecosystem, EvalSharp lets you measure LLM outputs with confidence using familiar C# tools and patterns.

🔥 Key Features

  • Fully Native .NET API — Designed for C# developers; no Python dependencies.
  • Out-of-the-box Metrics — Evaluate Answer Relevancy, Contextual Recall, GEval, and more.
  • LLM-as-a-Judge — Supports OpenAI, Azure OpenAI, and custom chat clients (see the sketch after this list).
  • Easy Customization — Build your own metrics tailored to your use case.
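
The examples in this README call `ChatClient.GetInstance()` as a stand-in for a configured judge client. As a minimal sketch, assuming EvalSharp can consume a Microsoft.Extensions.AI `IChatClient` (an assumption this README does not confirm), an OpenAI-backed judge might be wired up like this:

```csharp
using System;
using Microsoft.Extensions.AI;

// Sketch: wrap the official OpenAI SDK client as an IChatClient.
// AsIChatClient() comes from the Microsoft.Extensions.AI.OpenAI package
// (earlier previews named it AsChatClient); whether EvalSharp consumes
// IChatClient directly is an assumption here, not confirmed by this README.
IChatClient judge =
    new OpenAI.Chat.ChatClient(
            model: "gpt-4o-mini",
            apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY"))
        .AsIChatClient();
```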

⚡ Quick Start

  • Install EvalSharp

```bash
dotnet add package EvalSharp
```

  • Create an Evaluator

```csharp
// TType here is a placeholder for your own data type; any class that carries
// the user input and the model's output will work.
var cases = new[]
{
    new TType
    {
        UserInput = "Please summarize the article on climate change impacts.",
        LLMOutput = "The article talks about how technology is advancing rapidly.",
    }
};

var evaluator = Evaluator.FromData(
    ChatClient.GetInstance(),
    cases,
    c => new MetricEvaluationContext
    {
        InitialInput = c.UserInput,
        ActualOutput = c.LLMOutput
    }
);
```

  • Add Metrics

```csharp
evaluator.AddAnswerRelevancy(includeReason: true);
```

  • Evaluate Your LLM Output

```csharp
var result = await evaluator.RunAsync();
```
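
This README does not show the shape of the object `RunAsync()` returns, so the loop below is purely illustrative; every property name in it is a hypothetical stand-in for whatever the real result type exposes:

```csharp
// Illustration only: TestResults, MetricResults, Name, Score, and Passed are
// hypothetical names, not confirmed EvalSharp API.
foreach (var test in result.TestResults)
{
    foreach (var metric in test.MetricResults)
    {
        Console.WriteLine($"{metric.Name}: {metric.Score} (passed: {metric.Passed})");
    }
}
```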

✅ Unit Testing with EvalTest.AssertAsync

In addition to evaluating datasets with the Evaluator, EvalSharp makes it easy to include LLM evaluation in your unit tests. The EvalTest.AssertAsync method allows you to assert results for a single test with one or more metrics.

Example: Asserting Multiple Metrics in a Unit Test

```csharp
using EvalSharp.Models;
using EvalSharp.Scoring;
using Xunit;
using Xunit.Abstractions;

public class MyEvalTests
{
    private readonly ITestOutputHelper _testOutputHelper;

    public MyEvalTests(ITestOutputHelper testOutputHelper)
    {
        _testOutputHelper = testOutputHelper;
    }

    [Fact]
    public async Task SingleTest_MultipleMetrics()
    {
        var testData = new EvaluatorTestData
        {
            InitialInput = "Summarize the meeting.",
            ActualOutput = "The meeting summary is provided below...",
        };

        var relevancyConfig = new AnswerRelevancyMetricConfiguration
        {
            IncludeReason = true,
            Threshold = 0.9
        };

        var gevalConfig = new GEvalMetricConfiguration
        {
            Threshold = 0.5,
            Criteria = "Does the output correctly explain concepts, events, or processes based on the input prompt?"
        };

        var metrics = new List<Metric>
        {
            new AnswerRelevancyMetric(ChatClient.GetInstance(), relevancyConfig),
            new GEvalMetric(ChatClient.GetInstance(), gevalConfig)
        };

        await EvalTest.AssertAsync(testData, metrics, _testOutputHelper.WriteLine);
    }
}
```

✅ Supports multiple metrics in a single call
✅ Output results to your preferred sink (e.g., Console, Xunit test output; see the one-liner below)
✅ Ideal for lightweight, targeted LLM evaluation in CI/CD pipelines
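
The sink appears to be a string-accepting callback (the test above passes `_testOutputHelper.WriteLine`), so the same assertion can write its report to stdout outside a test framework:

```csharp
// Same call as in the xUnit test above, but routing the report to the console.
await EvalTest.AssertAsync(testData, metrics, Console.WriteLine);
```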

🛠 Metrics Included

  • Answer Relevancy — Is the LLM's response relevant to the input?
  • Bias — Checks for content biases.
  • Contextual Precision — Measures if output precisely reflects provided context.
  • Contextual Recall — Assesses how much of the relevant context was included in the output.
  • Faithfulness — Evaluates factual correctness and grounding of the output.
  • GEval — Enforces structure, logical flow, and coverage expectations.
  • Hallucination — Detects whether the LLM generated unsupported or fabricated content.
  • Match — Compares expected and actual output for equality or similarity.
  • Prompt Alignment — Ensures output follows the intent and structure of the prompt.
  • Summarization — Scores the quality and accuracy of generated summaries.
  • Task Completion — Measures whether the LLM's output fulfills the requested task.
  • Tool Correctness — Evaluates whether tool-augmented LLM responses are correct.

💡 Why EvalSharp?

  • No need to switch to Python for LLM evaluation
  • Designed with .NET 8 in mind
  • Beautiful, easy-to-digest outputs
  • Ideal for both RAG and general LLM application testing
  • Easy to extend with your own custom metrics (see the sketch after this list)
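
Since extension is a design goal, here is a sketch of what a custom metric could look like. The `Metric` base-class contract isn't documented in this README: `Metric` and `MetricEvaluationContext` appear in the examples above, but the member names `ScoreAsync` and `MetricScore` below are assumptions, not confirmed API.

```csharp
using System;
using System.Threading.Tasks;
using EvalSharp.Scoring;

// Sketch only: ScoreAsync and MetricScore are assumed member names, not
// confirmed EvalSharp API. The metric passes when the output mentions a
// required keyword.
public class ContainsKeywordMetric : Metric
{
    private readonly string _keyword;

    public ContainsKeywordMetric(string keyword) => _keyword = keyword;

    public override Task<MetricScore> ScoreAsync(MetricEvaluationContext context)
    {
        var passed = context.ActualOutput?.Contains(
            _keyword, StringComparison.OrdinalIgnoreCase) == true;

        return Task.FromResult(new MetricScore
        {
            Score  = passed ? 1.0 : 0.0,
            Reason = passed
                ? $"Output mentions '{_keyword}'."
                : $"Output does not mention '{_keyword}'."
        });
    }
}
```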

🚧 Future Roadmap

We're just getting started. Here's what's coming soon to EvalSharp:

  • Additional Built-in Metrics (e.g., DAG, RAGAS, Contextual Relevancy, Toxicity, JSON Correctness)
  • Data Synthesizer
  • Token Usage / Cost Calculation
  • Additional Scorers (Rouge, Truth Identification, etc.)
  • Expanded Examples and Tutorials
  • Conversational Metrics

📄 License

This project is licensed under the MIT License. See the LICENSE file for details.

Portions of this project include content adapted from deepeval, which is licensed under the Apache License 2.0. See the NOTICE file for attribution.

Acknowledgements

Aviron Software would like to give special thanks to the team at DeepEval. Their original metrics and prompts were the catalyst for this project.
