TabularBench
An adversarial robustness benchmark for tabular data.
Leaderboard: https://serval-uni-lu.github.io/tabularbench/
Installation
Using Docker (recommended)
- Clone the repository.
- Build the Docker image:
  ./tasks/docker_build.sh
- Run the Docker container:
  ./tasks/run_benchmark.sh
Note: the ./tasks/run_benchmark.sh script mounts the current directory as /workspace inside the Docker container, so you can edit the code on your host machine and run it in the container without rebuilding the image.
With Pyenv and Poetry
Install the dependencies with Poetry:
  poetry install
Using conda
- Clone the repository.
- Create a virtual environment with Python 3.8.10:
  conda create -n tabularbench python=3.8.10
- Activate the conda environment:
  conda activate tabularbench
- Install the dependencies with pip:
  pip install -r requirements.txt
How to use
Run the benchmark
You can run the benchmark with the following command:
  python -m tasks.run_benchmark
or with Docker:
  ./tasks/run_benchmark.sh
Using the API
You can also use the API to run the benchmark. See tasks/run_benchmark.py for an example.
# Import as in tasks/run_benchmark.py (path assumed; check the script).
from tabularbench.benchmark.benchmark import benchmark

clean_acc, robust_acc = benchmark(
    dataset="URL",          # dataset name (see docs/naming.md for the mapping)
    model="STG_Default",    # pretrained model to evaluate
    distance="L2",          # perturbation distance metric
    constraints=True,       # enforce domain constraints during the attack
)
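You can also sweep several pretrained models under the same attack setting. A minimal sketch, assuming the import path above and illustrative model names (the full paper-to-code mapping is in docs/naming.md):

```python
# Sketch: compare several pretrained models on the same dataset/attack.
# Import path and model names are assumptions; check tasks/run_benchmark.py
# and docs/naming.md for the exact values.
from tabularbench.benchmark.benchmark import benchmark

for model in ["STG_Default", "TabTr_Default"]:
    clean_acc, robust_acc = benchmark(
        dataset="URL",
        model=model,
        distance="L2",
        constraints=True,
    )
    # clean_acc: accuracy on unperturbed test data;
    # robust_acc: accuracy under the constrained adversarial attack
    # (formatting assumes the returned accuracies are floats).
    print(f"{model}: clean={clean_acc:.3f}, robust={robust_acc:.3f}")
```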
Retrain the models
We provide the models and parameters used in the paper. You can retrain the models with the following command:
  python -m tasks.train_model
Edit tasks/train_model.py to change the model, dataset, and training method.
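As a rough sketch of the kind of edit involved (the variable names here are hypothetical; the actual script may be organized differently):

```python
# Hypothetical settings near the top of tasks/train_model.py.
# Real names may differ; see the script itself.
dataset = "URL"         # dataset to retrain on
model = "STG_Default"   # model architecture (see docs/naming.md)
training = "Default"    # training method, e.g. standard vs. adversarial
```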
Data availability
Datasets, pretrained models, and synthetic data are publicly available here.
Mirror the folder structure of the shared folder locally to ensure the code runs correctly.
- Datasets: downloaded automatically to data/datasets when first used.
- Models: pretrained models are in data/models.
- Model parameters: optimal parameters (from the hyperparameter search), required to train models, are in data/model_parameters.
- Synthetic data: GAN-generated synthetic data is in data/synthetic.
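Put together, the expected local layout (assembled from the folders listed above) is:

```
data/
├── datasets/           # downloaded automatically on first use
├── models/             # pretrained models
├── model_parameters/   # optimal hyperparameters for retraining
└── synthetic/          # GAN-generated synthetic data
```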
Naming
For technical reasons, the names of datasets, models, and training methods differ from those used in the paper.
The mapping can be found in docs/naming.md.