TabularBench: Adversarial robustness benchmark for tabular data
Leaderboard: https://serval-uni-lu.github.io/tabularbench/
Research papers:
Installation with Docker:
1. Clone the repository.
2. Build the Docker image:
   ./tasks/docker_build.sh
3. Run the Docker container:
   ./tasks/run_benchmark.sh

Note: the ./tasks/run_benchmark.sh script mounts the current directory to the /workspace directory in the Docker container. This lets you edit the code on your host machine and run it in the container without rebuilding the image.
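The mounting behavior described above corresponds to a bind mount of the host working directory. A minimal sketch of an equivalent docker run invocation, assuming an image named tabularbench (the image name and final command are illustrative assumptions, not the actual contents of the script):

```shell
# Hypothetical equivalent of the bind mount performed by run_benchmark.sh:
# mount the current host directory at /workspace inside the container, so
# edits on the host are visible in the container without a rebuild.
docker run --rm -it \
    -v "$(pwd)":/workspace \
    -w /workspace \
    tabularbench \
    python -m tasks.run_benchmark
```
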
Installation with Pyenv and Poetry:
1. Clone the repository.
2. Create a virtual environment using Pyenv with Python 3.8.10.
3. Install the dependencies using Poetry:
   poetry install
Installation with Conda:
1. Clone the repository.
2. Create a virtual environment using Conda with Python 3.8.10:
   conda create -n tabularbench python=3.8.10
3. Activate the Conda environment:
   conda activate tabularbench
4. Install the dependencies using pip:
   pip install -r requirements.txt
You can run the benchmark with the following command:
   python -m tasks.run_benchmark
or with Docker:
   docker_run_benchmark

You can also use the API to run the benchmark; see tasks/run_benchmark.py for an example.
clean_acc, robust_acc = benchmark(
    dataset="URL",
    model="STG_Default",
    distance="L2",
    constraints=True,
)
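The two returned metrics can be read as follows (a sketch of the standard definitions, not the library's internal code): clean accuracy is the fraction of correct predictions on unmodified inputs, while robust accuracy is the fraction of inputs that are correctly classified both before and after the adversarial attack.

```python
# Illustration of clean vs robust accuracy on a toy set of 5 examples.
# These labels and predictions are made up for demonstration purposes.
y_true       = [0, 1, 1, 0, 1]
y_pred_clean = [0, 1, 0, 0, 1]  # model predictions on clean inputs
y_pred_adv   = [0, 0, 0, 0, 1]  # model predictions on adversarial inputs

n = len(y_true)
clean_acc = sum(c == t for c, t in zip(y_pred_clean, y_true)) / n
robust_acc = sum(c == t and a == t
                 for c, a, t in zip(y_pred_clean, y_pred_adv, y_true)) / n
# clean_acc == 0.8, robust_acc == 0.6
```

Robust accuracy is always at most clean accuracy, since an example only counts as robust if it was classified correctly to begin with.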
We provide the models and parameters used in the paper. You can retrain the models with the following command:
   python -m tasks.train_model
Edit the tasks/train_model.py file to change the model, dataset, and training method.
Datasets, pretrained models, and synthetic data are publicly available here. Mirror the shared folder's structure locally to ensure the code runs correctly.
- Datasets: downloaded automatically into data/datasets when used.
- Models: pretrained models are available in data/models.
- Model parameters: optimal parameters (from the hyperparameter search) are required to train models and are in data/model_parameters.
- Synthetic data: the synthetic data generated by GANs is available in data/synthetic.
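Since the code expects this layout, a small hypothetical helper (not part of TabularBench) can verify the local data/ folders before training, assuming the folder names listed above:

```python
from pathlib import Path

# Expected sub-folders, taken from the data layout described in this README.
EXPECTED_DIRS = [
    "data/datasets",
    "data/models",
    "data/model_parameters",
    "data/synthetic",
]

def missing_data_dirs(root="."):
    """Return the expected data sub-folders that do not exist under root."""
    return [d for d in EXPECTED_DIRS if not (Path(root) / d).is_dir()]
```

For example, calling missing_data_dirs() before python -m tasks.train_model and aborting if the list is non-empty gives a clearer error than a failure deep inside training code.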
For technical reasons, the names of datasets, models, and training methods differ from those used in the paper. The mapping can be found in docs/naming.md.