:book: folktexts


This package is the basis for our NeurIPS'24 paper, "Evaluating language models as risk scores".

Folktexts is a Python package for evaluating the statistical properties of LLMs used as classifiers. It enables computing and evaluating classification risk scores for tabular prediction tasks with LLMs.

Several benchmark tasks are provided, based on data from the American Community Survey (ACS): each prediction task from the popular folktables package is made available as a natural-language prompting task.
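
For intuition, a row from the ACSIncome task (which asks whether a person's income exceeds $50,000) might be verbalized roughly as follows. This is a hypothetical illustration for flavor, not the package's exact prompt template:

The respondent is a 45-year-old woman with a bachelor's degree who works
40 hours per week in the healthcare industry.
Question: Is this person's annual income above $50,000?
A. Yes
B. No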

Package documentation can be found here.

Table of contents:

  • Installing
  • Basic setup
  • Example usage
  • Benchmark features and options
  • Evaluating feature importance
  • FAQ
  • Citation
  • License and terms of use

Installing

Install package from PyPI:

pip install folktexts

Basic setup

You'll need to go through these steps to run the benchmark tasks.

  1. Create a conda environment
conda create -n folktexts python=3.11
conda activate folktexts
  2. Install the folktexts package
pip install folktexts
  3. Create the models, data, and results folders
mkdir results
mkdir models
mkdir data
  4. Download the transformers model and tokenizer
download_models --model 'google/gemma-2b' --save-dir models
  5. Run the benchmark on a given task
run_acs_benchmark --results-dir results --data-dir data --task 'ACSIncome' --model models/google--gemma-2b

Run run_acs_benchmark --help to get a list of all available benchmark flags.

Example usage

# Load transformers model
from folktexts.llm_utils import load_model_tokenizer
model, tokenizer = load_model_tokenizer("gpt2")   # using tiny model as an example

from folktexts.acs import ACSDataset
acs_task_name = "ACSIncome"     # Name of the benchmark ACS task to use

# Create an object that classifies data using an LLM
from folktexts import TransformersLLMClassifier
clf = TransformersLLMClassifier(
    model=model,
    tokenizer=tokenizer,
    task=acs_task_name,
)
# NOTE: You can also use a web-hosted model like GPT-4 via the `WebAPILLMClassifier` class

# Use a dataset or feed in your own data
dataset = ACSDataset.make_from_task(acs_task_name)   # use `.subsample(0.01)` to get faster approximate results

# You can compute risk score predictions using an sklearn-style interface
X_test, y_test = dataset.get_test()
test_scores = clf.predict_proba(X_test)

# Optionally, you can fit the threshold based on a few samples
clf.fit(*dataset[0:100])    # (`dataset[...]` will access training data)

# ...in order to get more accurate binary predictions with `.predict`
test_preds = clf.predict(X_test)

# If you only care about the overall metrics and not individual predictions,
# you can simply run the following code:
from folktexts.benchmark import Benchmark, BenchmarkConfig
bench = Benchmark.make_benchmark(
    task=acs_task_name, dataset=dataset,
    model=model, tokenizer=tokenizer,
    numeric_risk_prompting=True,    # See the full list of configs below in the README
)
bench_results = bench.run(results_root_dir="results")

Benchmark features and options

Here's a summary of the most important benchmark options/flags, used with the run_acs_benchmark command-line script or with the Benchmark class.

  • --model: Name of the model on huggingface transformers, or local path to a folder with a pretrained model and tokenizer. Web-hosted models can also be used via "[provider]/[model-name]". Examples: meta-llama/Meta-Llama-3-8B, openai/gpt-4o-mini.
  • --task: Name of the ACS task to run the benchmark on. Examples: ACSIncome, ACSEmployment.
  • --results-dir: Path to the directory under which benchmark results will be saved. Example: results.
  • --data-dir: Root folder to find datasets in (or download ACS data to). Example: ~/data.
  • --numeric-risk-prompting: Whether to use verbalized numeric risk prompting, i.e., directly query the model for a probability estimate. By default, standard multiple-choice Q&A is used, and risk scores are extracted from internal token probabilities. Boolean flag (True if present, False otherwise).
  • --use-web-api-model: Whether the given --model name corresponds to a web-hosted model. By default this is False (a huggingface transformers model is assumed). If this flag is provided, --model must contain a litellm model identifier (examples here). Boolean flag (True if present, False otherwise).
  • --subsampling: Which fraction of the dataset to use for the benchmark. By default the whole test set is used. Example: 0.01.
  • --fit-threshold: Whether to use the given number of samples to fit the binarization threshold. By default a fixed $t=0.5$ threshold is used instead of fitting on data. Example: 100.
  • --batch-size: The number of samples to process in each inference batch. Choose according to your available VRAM. Examples: 10, 32.
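
As a concrete illustration, here's a hypothetical invocation combining several of these flags, reusing the folders and model checkpoint from the setup steps above:

run_acs_benchmark \
    --model models/google--gemma-2b \
    --task ACSIncome \
    --results-dir results \
    --data-dir data \
    --subsampling 0.1 \
    --fit-threshold 100 \
    --numeric-risk-prompting \
    --batch-size 16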

Full list of options:

usage: run_acs_benchmark [-h] --model MODEL --results-dir RESULTS_DIR --data-dir DATA_DIR [--task TASK] [--few-shot FEW_SHOT] [--batch-size BATCH_SIZE] [--context-size CONTEXT_SIZE] [--fit-threshold FIT_THRESHOLD] [--subsampling SUBSAMPLING] [--seed SEED] [--use-web-api-model] [--dont-correct-order-bias] [--numeric-risk-prompting] [--reuse-few-shot-examples] [--use-feature-subset USE_FEATURE_SUBSET]
                         [--use-population-filter USE_POPULATION_FILTER] [--logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}]

Benchmark risk scores produced by a language model on ACS data.

options:
  -h, --help            show this help message and exit
  --model MODEL         [str] Model name or path to model saved on disk
  --results-dir RESULTS_DIR
                        [str] Directory under which this experiment's results will be saved
  --data-dir DATA_DIR   [str] Root folder to find datasets on
  --task TASK           [str] Name of the ACS task to run the experiment on
  --few-shot FEW_SHOT   [int] Use few-shot prompting with the given number of shots
  --batch-size BATCH_SIZE
                        [int] The batch size to use for inference
  --context-size CONTEXT_SIZE
                        [int] The maximum context size when prompting the LLM
  --fit-threshold FIT_THRESHOLD
                        [int] Whether to fit the prediction threshold, and on how many samples
  --subsampling SUBSAMPLING
                        [float] Which fraction of the dataset to use (if omitted will use all data)
  --seed SEED           [int] Random seed -- to set for reproducibility
  --use-web-api-model   [bool] Whether to use a model hosted on a web API (instead of a local model)
  --dont-correct-order-bias
                        [bool] Whether to avoid correcting ordering bias, by default will correct it
  --numeric-risk-prompting
                        [bool] Whether to prompt for numeric risk-estimates instead of multiple-choice Q&A
  --reuse-few-shot-examples
                        [bool] Whether to reuse the same samples for few-shot prompting (or sample new ones every time)
  --use-feature-subset USE_FEATURE_SUBSET
                        [str] Optional subset of features to use for prediction, comma separated
  --use-population-filter USE_POPULATION_FILTER
                        [str] Optional population filter for this benchmark; must follow the format 'column_name=value' to filter the dataset by a specific value.
  --logger-level {DEBUG,INFO,WARNING,ERROR,CRITICAL}
                        [str] The logging level to use for the experiment

Evaluating feature importance

By evaluating LLMs on tabular classification tasks, we can use standard feature importance methods to assess which features the model uses to compute risk scores.

You can do so yourself by calling folktexts.cli.eval_feature_importance (add --help for a full list of options).

Here's an example for the Llama3-70B-Instruct model on the ACSIncome task (warning: takes 24h on an Nvidia H100):

python -m folktexts.cli.eval_feature_importance --model 'meta-llama/Meta-Llama-3-70B-Instruct' --task ACSIncome --subsampling 0.1
[Figure: feature importance results for Llama3-70B-Instruct on ACSIncome]

This script uses sklearn's permutation_importance to assess which features contribute most to the ROC AUC metric (other metrics can be assessed using the --scorer [scorer] parameter).
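
For reference, here's a minimal sketch of the same computation in Python, assuming the clf, X_test, and y_test objects from the example-usage section above, and assuming X_test is a pandas DataFrame. Note that each permutation repeat re-scores the test set with the LLM, so this is expensive:

from sklearn.inspection import permutation_importance

# Shuffle each feature column in turn and measure the resulting drop in
# ROC AUC; a larger drop means the model relies more on that feature.
result = permutation_importance(
    clf, X_test, y_test,
    scoring="roc_auc",
    n_repeats=5,
    random_state=42,
)

# Print features from most to least important
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{X_test.columns[idx]}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")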

FAQ

  1. Q: Can I use folktexts with a different dataset?

    A: Yes! Folktexts provides the whole ML pipeline needed to produce risk scores using LLMs, together with a few example ACS datasets. You can easily apply these same utilities to a different dataset by following the example Jupyter notebook.

  2. Q: How do I create a custom prediction task based on American Community Survey data?

    A: Simply create a new TaskMetadata object with the parameters you want. Follow the example Jupyter notebook for more details.

  3. Q: Can I use folktexts with closed-source models?

    A: Yes! We provide compatibility with local LLMs via 🤗 transformers and with web-hosted LLMs via litellm. For example, you can pass --model='gpt-4o' --use-web-api-model when calling the run_acs_benchmark script to use GPT-4o. Here's a complete list of compatible OpenAI models. Note that some models are not compatible, as they don't expose log-probabilities. Using models through a web API requires installing extra optional dependencies with pip install 'folktexts[apis]' (see the example command after this list).

  4. Q: Can I use folktexts to fine-tune LLMs on survey prediction tasks?

    A: The package does not feature specific fine-tuning functionality, but you can use the data and Q&A prompts generated by folktexts to fine-tune an LLM for a specific prediction task.
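
As referenced in question 3 above, a hypothetical end-to-end invocation with a web-hosted model might look as follows (litellm typically reads provider credentials, e.g. OPENAI_API_KEY, from the environment):

# Install the optional web-API dependencies
pip install 'folktexts[apis]'

# Make provider credentials available to litellm
export OPENAI_API_KEY="..."

# Run the benchmark against a web-hosted model
run_acs_benchmark --model 'gpt-4o' --use-web-api-model \
    --task ACSIncome --results-dir results --data-dir data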

Citation

@inproceedings{cruz2024evaluating,
    title={Evaluating language models as risk scores},
    author={Andr\'{e} F. Cruz and Moritz Hardt and Celestine Mendler-D\"{u}nner},
    booktitle={The Thirty-eighth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
    year={2024},
    url={https://openreview.net/forum?id=qrZxL3Bto9}
}

License and terms of use

Code licensed under the MIT license.

The American Community Survey (ACS) Public Use Microdata Sample (PUMS) is governed by the U.S. Census Bureau terms of service.
