llama-index-packs-infer-retrieve-rerank

llama-index packs infer retrieve rerank integration

Version: 0.5.0 (PyPI)
Infer-Retrieve-Rerank LlamaPack

This is our implementation of the paper "In-Context Learning for Extreme Multi-Label Classification" by Oosterlinck et al.

The paper proposes "infer-retrieve-rerank", a simple paradigm that uses frozen LLM and retriever models to perform extreme multi-label classification (classification over a very large label space):

  • Given a user query, use an LLM to predict an initial set of labels.
  • For each prediction, retrieve the actual label from the corpus.
  • Given the final set of labels, rerank them using an LLM.

All of these can be implemented as LlamaIndex abstractions.
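The three steps above can be sketched end to end with plain-Python stand-ins. Everything below is illustrative, not the pack's actual implementation: `predict_labels` is a stub for the frozen LLM, `difflib` string matching stands in for the retriever, and the reranker is a no-op.

```python
import difflib

# The fixed (potentially huge) label space.
LABELS = [
    "adverse drug reaction",
    "drug interaction",
    "dosage inquiry",
    "product complaint",
]

def predict_labels(query: str) -> list[str]:
    """Step 1 (infer): an LLM proposes free-form labels for the query."""
    # A real implementation would prompt a frozen LLM here; this is a stub.
    return ["adverse reaction", "complaint about product"]

def retrieve_label(prediction: str, corpus: list[str]) -> str:
    """Step 2 (retrieve): map each free-form prediction onto an actual label."""
    # difflib is a toy stand-in for a dense retriever over the label corpus.
    return difflib.get_close_matches(prediction, corpus, n=1, cutoff=0.0)[0]

def rerank(query: str, candidates: list[str], top_n: int = 3) -> list[str]:
    """Step 3 (rerank): order the candidate labels and keep the top_n."""
    # A real implementation would prompt an LLM; here we keep the input order.
    return candidates[:top_n]

query = "The patient broke out in hives after the new medication."
predictions = predict_labels(query)                            # infer
candidates = [retrieve_label(p, LABELS) for p in predictions]  # retrieve
final = rerank(query, candidates)                              # rerank
print(final)
```

The key design point is that only the middle step touches the full label space: the LLM never sees all the labels, so the paradigm scales to label spaces far larger than any context window.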

A full notebook guide can be found here.

CLI Usage

You can download llamapacks directly using llamaindex-cli, which comes installed with the llama-index Python package:

llamaindex-cli download-llamapack InferRetrieveRerankPack --download-dir ./infer_retrieve_rerank_pack

You can then inspect the files at ./infer_retrieve_rerank_pack and use them as a template for your own project!

Code Usage

You can download the pack to a ./infer_retrieve_rerank_pack directory:

from llama_index.core.llama_pack import download_llama_pack

# download and install dependencies
InferRetrieveRerankPack = download_llama_pack(
    "InferRetrieveRerankPack", "./infer_retrieve_rerank_pack"
)

From here, you can use the pack directly, or inspect and modify it in ./infer_retrieve_rerank_pack.

Then, you can set up the pack like so:

# create the pack
pack = InferRetrieveRerankPack(
    labels,  # list of all label strings
    llm=llm,
    pred_context="<pred_context>",
    reranker_top_n=3,
    verbose=True,
)

The run() function runs predictions over a list of input texts:

pred_reactions = pack.run(inputs=[s["text"] for s in samples])
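If you also have gold labels for your samples, a simple set-based precision/recall over each sample's predicted label list is one way to score the output. This is a hedged sketch: `preds` and `gold` below are illustrative stand-ins for your own data, not part of the pack's API.

```python
def set_precision_recall(pred: list[str], gold: list[str]) -> tuple[float, float]:
    """Set-based precision and recall for one sample's predicted labels."""
    pred_set, gold_set = set(pred), set(gold)
    hits = len(pred_set & gold_set)
    precision = hits / len(pred_set) if pred_set else 0.0
    recall = hits / len(gold_set) if gold_set else 0.0
    return precision, recall

# Illustrative data standing in for pack.run() output and gold annotations.
preds = [["nausea", "headache"], ["rash"]]
gold = [["nausea"], ["rash", "fever"]]
scores = [set_precision_recall(p, g) for p, g in zip(preds, gold)]
```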

You can also use the pack's internal modules individually:

# access the underlying LLM (e.g. to call llm.complete() directly)
llm = pack.llm

# access the label retriever
label_retriever = pack.label_retriever
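With the modules in hand, you could, for instance, build your own rerank prompt to pass to `llm.complete()`. The prompt wording below is purely illustrative; it is not the pack's actual template.

```python
def build_rerank_prompt(query: str, candidates: list[str], top_n: int = 3) -> str:
    """Format a simple prompt asking an LLM to rerank candidate labels."""
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (
        f"Query: {query}\n"
        f"Candidate labels:\n{numbered}\n"
        f"Return the {top_n} most relevant labels, most relevant first."
    )

prompt = build_rerank_prompt("patient reports hives", ["rash", "fever", "nausea"])
```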
