FreshStack

A Repository for Constructing Realistic IR/RAG Benchmarks

Paper | Website | Leaderboard | Dataset

FreshStack is a modular framework for automatically building realistic IR/RAG benchmarks from niche, community-sourced technical content (e.g., Stack Overflow questions paired with GitHub repositories). It supports:

  • Scraping human-asked queries from Stack Overflow.
  • Gathering up-to-date corpora by chunking any GitHub repository.
  • Retrieval evaluation of any dense or multi-vector model on the FreshStack benchmark.
  • Datasets released under CC-BY-SA 4.0; code and scripts released under the Apache 2.0 license.

Installation

Install via pip, tested with Python 3.10+:

pip install freshstack

If you want to build from source, use:

git clone https://github.com/fresh-stack/freshstack.git
cd freshstack
pip install -e .

🚀 Quickstart: Load Freshstack Dataset

from freshstack.datasets import DataLoader

dataloader = DataLoader(
    queries_repo="freshstack/queries-oct-2024",
    corpus_repo="freshstack/corpus-oct-2024",
    topic="langchain",  # or "yolo", "angular", "laravel" or "godot"
)

# Loads the corpus, queries and nuggets in the BEIR format
corpus, queries, nuggets = dataloader.load(split="test")

# Loads the nugget-level qrels, query-level qrels and the query-to-nuggets mapping
qrels_nuggets, qrels_query, query_to_nuggets = dataloader.load_qrels(split="test")
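
The loaded objects follow BEIR-style conventions: corpus maps chunk IDs to dicts containing the chunk text, queries maps query IDs to the Stack Overflow question text, and the qrels map queries (or nuggets) to relevant chunk IDs. As a rough sanity-check sketch (exact fields are best verified against the dataset cards):

# Print sizes and one example of each loaded structure
print(len(corpus), "corpus chunks |", len(queries), "queries |", len(nuggets), "nuggets")

sample_qid = next(iter(queries))
print("Example query:", sample_qid, "->", queries[sample_qid])

sample_doc_id = next(iter(corpus))
print("Example chunk:", sample_doc_id, "->", corpus[sample_doc_id]["text"][:200])

# Nugget IDs judged for the sample query (used for alpha-nDCG@k and Coverage@k)
print("Nuggets for this query:", query_to_nuggets.get(sample_qid, []))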

🚀 Quickstart: Model Evaluation

1. Evaluate only the retrieved results

# Your runfile can be stored as a .txt file in the standard TREC run format: [qid, Q0, docid, rank, score, run_name], e.g.,
# 76185522 Q0 angular/adev/src/content/tutorials/learn-angular/steps/14-routerLink/answer/src/app/app.component.ts_0_368 0 0.7353782057762146 your_model_name

from freshstack import util
from freshstack.retrieval.evaluation import EvaluateRetrieval

# retrieval_results: dict[str, dict[str, float]] mapping qid -> {doc_id: score}
retrieval_results = util.load_runfile("<path_to_your_runfile>")
evaluator = EvaluateRetrieval(k_values=[10, 20, 50])
alpha_ndcg, coverage, recall = evaluator.evaluate(
    qrels_nuggets=qrels_nuggets,
    query_to_nuggets=query_to_nuggets,
    qrels_query=qrels_query,
    results=retrieval_results,
)
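
Conversely, if your results only exist in memory as the qid -> {doc_id: score} dictionary, a minimal sketch like the following writes them back out in the runfile format shown above (the output path and run name are placeholders):

# Write an in-memory results dict out as a runfile in the
# [qid, Q0, docid, rank, score, run_name] format used above.
run_name = "your_model_name"  # placeholder
with open("your_model.run.txt", "w") as fout:
    for qid, doc_scores in retrieval_results.items():
        ranked = sorted(doc_scores.items(), key=lambda item: item[1], reverse=True)
        for rank, (doc_id, score) in enumerate(ranked):
            fout.write(f"{qid} Q0 {doc_id} {rank} {score} {run_name}\n")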

2. Evaluate any dense embedding model (e.g., Qwen3-Embedding-0.6B) using BEIR.

Make sure you have the latest BEIR release installed from PyPI: pip install beir

from beir.retrieval import models
from beir.retrieval.evaluation import EvaluateRetrieval as BEIREval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES
from freshstack.retrieval.evaluation import EvaluateRetrieval

# Custom query prompt for evaluating the Qwen3-0.6B model on Freshstack.
query_prompt = "Instruct: Given a technical question, retrieve relevant code snippets or technical documentation that best answer the question\nQuery: "

model = DRES(models.SentenceBERT(
    "Qwen/Qwen3-Embedding-0.6B",
    max_length=2048, # IMP: keep max_length at least 2048 tokens for both queries & passages.
    prompts={"query": query_prompt, "passage": ""},
    model_kwargs={
        "attn_implementation": "flash_attention_2", 
        "device_map": "auto", 
        "torch_dtype": "bfloat16"
    },
    tokenizer_kwargs={"padding_side": "left"},
), batch_size=32)

retriever = BEIREval(model, score_function="cos_sim")
retrieval_results = retriever.retrieve(corpus=corpus, queries=queries)

# Evaluate and compute retrieval score once you have results
evaluator = EvaluateRetrieval(k_values=[10, 20, 50])
alpha_ndcg, coverage, recall = evaluator.evaluate(
    qrels_nuggets=qrels_nuggets,
    query_to_nuggets=query_to_nuggets,
    qrels_query=qrels_query,
    results=retrieval_results,
)

3. Evaluate any multi-vector model (e.g., ColBERT) using PyLate.

Make sure you have the latest PyLate release installed from PyPI: pip install pylate.

from pylate import indexes, models, retrieve
from freshstack.retrieval.evaluation import EvaluateRetrieval

# Step 1: Load the ColBERT model
model = models.ColBERT(
    model_name_or_path="lightonai/GTE-ModernColBERT-v1",
    query_length=2048, document_length=2048
)

# Step 2: Initialize the Voyager index (or a PLAID index)
index = indexes.Voyager(
    index_folder="./langchain_index",
    index_name="index",
    override=False,  # Set to True to overwrite an existing index in this folder
)

# Step 3: Encode the documents
documents_ids = list(corpus.keys())
documents_embeddings = model.encode(
    [doc["text"] for doc in corpus.values()],
    batch_size=32,
    is_query=False,  # Ensure that it is set to False to indicate that these are documents, not queries
)

# Step 4: Add the document embeddings to the index
index.add_documents(
    documents_ids=documents_ids,
    documents_embeddings=documents_embeddings,
)

# Step 5: Compute query embeddings
query_ids = list(queries.keys())
queries_embeddings = model.encode(
    list(queries.values()),
    batch_size=32,
    is_query=True,  # Ensure that it is set to True to indicate that these are queries
)

# Step 6: Initialize the ColBERT retriever with the Voyager index & retrieve documents
retriever = retrieve.ColBERT(index=index)
scores = retriever.retrieve(
    queries_embeddings=queries_embeddings,
    k=50,  # Retrieve top-k results based on the maximum k value specified
    batch_size=1,  # We have kept a batch size of 1 to avoid memory issues.
    device="cpu",  # Use CPU for inference, change to "cuda" if you have a GPU available.
)

# Step 7: Prepare the results in the required BEIR format
retrieval_results = {}
for query_id, doc_scores in zip(query_ids, scores):
    # PyLate returns, per query, a list of {"id": ..., "score": ...} dicts
    retrieval_results[query_id] = {doc["id"]: doc["score"] for doc in doc_scores}

# Step 8: Evaluate and compute retrieval score once you have results
evaluator = EvaluateRetrieval(k_values=[10, 20, 50])
alpha_ndcg, coverage, recall = evaluator.evaluate(
    qrels_nuggets=qrels_nuggets,
    query_to_nuggets=query_to_nuggets,
    qrels_query=qrels_query,
    results=retrieval_results,
)

📚 Raw Freshstack Datasets (Oct 2024)

The raw FreshStack datasets can be downloaded via Hugging Face:

from datasets import load_dataset
queries = load_dataset("freshstack/queries-oct-2024", name="yolo")
corpus = load_dataset("freshstack/corpus-oct-2024", name="yolo")
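
The exact splits and column names are documented on the respective dataset cards; printing the returned DatasetDict objects is a quick way to inspect what was downloaded:

# Inspect the available splits, columns and row counts
print(queries)
print(corpus)

# Peek at the first row of the first available split
first_split = next(iter(queries))
print(queries[first_split][0])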

🧭 Project Structure

freshstack/
├─ examples/            # contains examples
│   ├─ chunking/        # examples for github repo chunking
│   ├─ evaluation/      # examples for model eval on freshstack
├─ freshstack/          # core logic modules
│   ├─ retrieval/       # code for retrieval evaluation
│   ├─ datasets/        # code for the freshstack dataloader
│   └─ chunking/        # code for github repo chunking
└─ pyproject.toml

FreshStack Leaderboard

The up-to-date leaderboard for FreshStack (version oct-2024) is available here: https://fresh-stack.github.io/#leaderboard.

NOTE: Below is a snapshot of the FreshStack leaderboard as of June 12, 2025 (α@10 = alpha-nDCG@10, C@20 = Coverage@20, R@50 = Recall@50).

| Model Name | Size | Date | Avg. α@10 | Avg. C@20 | Avg. R@50 | LangChain α@10 | LangChain C@20 | LangChain R@50 | YOLO α@10 | YOLO C@20 | YOLO R@50 | Laravel α@10 | Laravel C@20 | Laravel R@50 | Angular α@10 | Angular C@20 | Angular R@50 | Godot α@10 | Godot C@20 | Godot R@50 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Oracle: Fusion (BM25; ...) (Nuggets) | - | 2024-11-01 | 0.541 | 0.868 | 0.755 | 0.519 | 0.881 | 0.655 | 0.601 | 0.876 | 0.825 | 0.566 | 0.888 | 0.818 | 0.544 | 0.881 | 0.756 | 0.476 | 0.815 | 0.719 |
| Oracle: BM25 (Nuggets) | - | 2024-11-01 | 0.488 | 0.768 | 0.556 | 0.467 | 0.739 | 0.446 | 0.519 | 0.796 | 0.657 | 0.540 | 0.840 | 0.654 | 0.485 | 0.787 | 0.536 | 0.428 | 0.680 | 0.489 |
| Oracle: Voyage Large 2 (Nuggets) | - | 2024-11-01 | 0.404 | 0.769 | 0.586 | 0.419 | 0.763 | 0.508 | 0.430 | 0.845 | 0.675 | 0.409 | 0.791 | 0.624 | 0.406 | 0.733 | 0.533 | 0.353 | 0.715 | 0.590 |
| Oracle: BGE (Gemma-2) (Nuggets) | 9B | 2024-11-01 | 0.389 | 0.735 | 0.547 | 0.308 | 0.667 | 0.405 | 0.461 | 0.784 | 0.572 | 0.448 | 0.806 | 0.666 | 0.393 | 0.755 | 0.536 | 0.335 | 0.664 | 0.555 |
| Qwen3-8B (Emb) | 8B | 2025-06-05 | 0.365 | 0.689 | 0.525 | 0.331 | 0.694 | 0.423 | 0.393 | 0.728 | 0.567 | 0.421 | 0.748 | 0.615 | 0.373 | 0.700 | 0.502 | 0.307 | 0.576 | 0.521 |
| Qwen3-4B (Emb) | 4B | 2025-06-05 | 0.347 | 0.656 | 0.490 | 0.320 | 0.675 | 0.415 | 0.404 | 0.744 | 0.550 | 0.402 | 0.748 | 0.604 | 0.304 | 0.618 | 0.442 | 0.303 | 0.496 | 0.440 |
| Fusion (BM25; BGE; E5; Voyage) | - | 2024-11-01 | 0.343 | 0.669 | 0.539 | 0.337 | 0.700 | 0.477 | 0.304 | 0.627 | 0.534 | 0.425 | 0.748 | 0.646 | 0.385 | 0.719 | 0.532 | 0.265 | 0.550 | 0.505 |
| Oracle: E5 (Mistral-7B) (Nuggets) | 7B | 2024-11-01 | 0.337 | 0.664 | 0.496 | 0.323 | 0.684 | 0.432 | 0.437 | 0.737 | 0.554 | 0.287 | 0.631 | 0.532 | 0.346 | 0.670 | 0.470 | 0.292 | 0.596 | 0.494 |
| Stella-1.5B v5 | 1.5B | 2025-01-01 | 0.317 | 0.615 | 0.479 | 0.315 | 0.660 | 0.388 | 0.334 | 0.624 | 0.559 | 0.370 | 0.681 | 0.590 | 0.330 | 0.630 | 0.414 | 0.237 | 0.481 | 0.443 |
| Voyage Large 2 | - | 2024-11-01 | 0.289 | 0.589 | 0.438 | 0.246 | 0.528 | 0.308 | 0.270 | 0.570 | 0.453 | 0.345 | 0.701 | 0.543 | 0.304 | 0.625 | 0.427 | 0.282 | 0.522 | 0.458 |
| Stella-400M v5 | 400M | 2025-01-01 | 0.276 | 0.578 | 0.422 | 0.285 | 0.608 | 0.356 | 0.241 | 0.538 | 0.447 | 0.320 | 0.648 | 0.534 | 0.288 | 0.619 | 0.359 | 0.244 | 0.476 | 0.412 |
| BGE (Gemma-2) | 9B | 2024-11-01 | 0.269 | 0.569 | 0.427 | 0.216 | 0.548 | 0.337 | 0.258 | 0.547 | 0.430 | 0.348 | 0.699 | 0.574 | 0.323 | 0.571 | 0.378 | 0.200 | 0.479 | 0.419 |
| Qwen3-0.6B (Emb) | 596M | 2025-06-05 | 0.262 | 0.543 | 0.394 | 0.259 | 0.588 | 0.369 | 0.260 | 0.504 | 0.383 | 0.288 | 0.593 | 0.463 | 0.253 | 0.535 | 0.356 | 0.249 | 0.495 | 0.400 |
| E5 (Mistral-7B) | 7B | 2024-11-01 | 0.255 | 0.553 | 0.397 | 0.304 | 0.654 | 0.393 | 0.243 | 0.552 | 0.394 | 0.250 | 0.565 | 0.470 | 0.262 | 0.548 | 0.368 | 0.217 | 0.444 | 0.359 |
| GTE (large) v1.5 | 434M | 2024-01-09 | 0.226 | 0.494 | 0.318 | 0.206 | 0.470 | 0.252 | 0.195 | 0.445 | 0.271 | 0.318 | 0.626 | 0.482 | 0.284 | 0.578 | 0.343 | 0.127 | 0.348 | 0.240 |
| BM25 | - | 2024-11-01 | 0.218 | 0.448 | 0.316 | 0.230 | 0.475 | 0.261 | 0.137 | 0.342 | 0.337 | 0.319 | 0.602 | 0.441 | 0.259 | 0.551 | 0.340 | 0.144 | 0.268 | 0.200 |
| Nomic Embed (Code) | 7B | 2025-03-24 | 0.218 | 0.488 | 0.348 | 0.224 | 0.518 | 0.292 | 0.227 | 0.539 | 0.390 | 0.222 | 0.532 | 0.407 | 0.237 | 0.511 | 0.356 | 0.178 | 0.341 | 0.295 |
| CodeRankEmbed | 137M | 2024-11-03 | 0.104 | 0.279 | 0.162 | 0.099 | 0.271 | 0.128 | 0.075 | 0.215 | 0.128 | 0.108 | 0.324 | 0.225 | 0.146 | 0.363 | 0.167 | 0.091 | 0.224 | 0.160 |

👥 Contribute your model to the leaderboard

Add your model's scores as a new entry in leaderboard_data.json, following the format below:

{
    "leaderboardData": [
        {
            "info": {
                "name": "BM25",
                "size": "-",
                "type": "open_source",
                "date": "2024-11-01",
                "link": "https://github.com/castorini/pyserini"
            },
            "datasets": {
                "langchain": {"alpha_ndcg_10": 0.230, "coverage_20": 0.475, "recall_50": 0.261},
                "yolo":      {"alpha_ndcg_10": 0.137, "coverage_20": 0.342, "recall_50": 0.337},
                "laravel":   {"alpha_ndcg_10": 0.319, "coverage_20": 0.602, "recall_50": 0.441},
                "angular":   {"alpha_ndcg_10": 0.259, "coverage_20": 0.551, "recall_50": 0.340},
                "godot":     {"alpha_ndcg_10": 0.144, "coverage_20": 0.268, "recall_50": 0.200},
                "average":   {"alpha_ndcg_10": 0.218, "coverage_20": 0.448, "recall_50": 0.316},
            }
        },
        ...
    ]
}
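
If it helps, here is a small, hypothetical Python sketch for appending such an entry; the scores are placeholders and the path to leaderboard_data.json depends on your checkout:

import json

# Build a new leaderboard entry in the format shown above (all values are placeholders).
entry = {
    "info": {
        "name": "YourModel",
        "size": "1B",
        "type": "open_source",
        "date": "2025-06-12",
        "link": "https://example.com/your-model",
    },
    "datasets": {
        topic: {"alpha_ndcg_10": 0.0, "coverage_20": 0.0, "recall_50": 0.0}  # fill in your scores
        for topic in ["langchain", "yolo", "laravel", "angular", "godot", "average"]
    },
}

# Append it to the existing leaderboard file (path is an assumption).
with open("leaderboard_data.json") as f:
    data = json.load(f)
data["leaderboardData"].append(entry)

with open("leaderboard_data.json", "w") as f:
    json.dump(data, f, indent=4)
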
  • Submit a pull request, ideally including:

    • The updated leaderboard_data.json
    • Pipeline invocation script (reference)
    • Brief evaluation summary (reference)

All contributions are welcome, especially new domain expansions, evaluation improvements, and retrieval baselines!

📄 Citation

If you use FreshStack in your work, please cite:

@article{thakur-freshstack:2025,
  author       = {Nandan Thakur and
                  Jimmy Lin and
                  Sam Havens and
                  Michael Carbin and
                  Omar Khattab and
                  Andrew Drozdov},
  title        = {FreshStack: Building Realistic Benchmarks for Evaluating Retrieval
                  on Technical Documents},
  journal      = {CoRR},
  volume       = {abs/2504.13128},
  year         = {2025},
  url          = {https://doi.org/10.48550/arXiv.2504.13128},
  doi          = {10.48550/ARXIV.2504.13128},
  eprinttype    = {arXiv},
  eprint       = {2504.13128},
  timestamp    = {Thu, 22 May 2025 21:00:35 +0200},
  biburl       = {https://dblp.org/rec/journals/corr/abs-2504-13128.bib},
  bibsource    = {dblp computer science bibliography, https://dblp.org}
}


Contact person: Nandan Thakur, nandant@gmail.com

Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.

This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.

Collaboration

This project is developed in collaboration with the following organizations:

  • University of Waterloo
  • Databricks
