This package contains core code for submitting decoders to the FALCON challenge. For a more general overview of FALCON, please see the main website.
Install `falcon_challenge` with:

```
pip install falcon-challenge
```
To create Docker containers for submission, you must have Docker installed. See, e.g., https://docs.docker.com/desktop/install/linux-install/.
The FALCON datasets are available on DANDI (H1, H2, M1, M2, B1). H1 and H2 are human intracortical brain-computer interface (iBCI) datasets, M1 and M2 are monkey iBCI datasets, and B1 is a songbird iBCI dataset. You can download them individually by going to their DANDI pages to find their respective DANDI download commands, or you can run `./download_falcon_datasets.sh` from the project root.
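If you prefer to script the download, a minimal sketch using the DANDI CLI is below. It pulls the H1 dandiset (000954, the ID used in the sklearn example later in this README); the other dataset IDs are on their DANDI pages, and the `--output-dir` flag should be checked against your installed `dandi` version.

```python
# Hedged sketch: fetch one FALCON dandiset via the DANDI CLI.
# 000954 is the H1 dandiset referenced in the decoder example below;
# substitute IDs from the other datasets' DANDI pages as needed.
import subprocess

subprocess.run(
    ["dandi", "download", "DANDI:000954", "--output-dir", "data"],
    check=True,  # raise if the download fails
)
```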
Data from each dataset is broken into calibration, minival, and evaluation splits. Some of the sample code expects your data directory to be set up in `./data`. Specifically, the following hierarchy is expected:
```
data
├── h1
│   ├── held_in_calib
│   ├── held_out_calib
│   └── minival   (copy the dandiset minival folder into this folder)
├── h2
│   ├── held_in_calib
│   ├── held_out_calib
│   └── minival   (copy the dandiset minival folder into this folder)
├── m1
│   ├── sub-MonkeyL-held-in-calib
│   ├── sub-MonkeyL-held-out-calib
│   └── minival   (copy the dandiset minival folder into this folder)
└── m2
    ├── held_in_calib
    ├── held_out_calib
    └── minival   (copy the dandiset minival folder into this folder)
```

Each of the lowest-level directories holds the data files, in Neurodata Without Borders (NWB) format. Data from some sessions is distributed across multiple NWB files. Some data from each file is allocated to calibration, minival, and evaluation splits as appropriate.
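To see what a given file contains, any NWB reader works. Below is a minimal sketch using `pynwb` (an assumption on our part, not a stated dependency of this package); the filename is a placeholder for whichever file you downloaded.

```python
# Hedged sketch: open one downloaded NWB file and list its top-level contents.
# The path is illustrative; point it at any file under ./data.
from pynwb import NWBHDF5IO

with NWBHDF5IO("data/h1/held_in_calib/example_session.nwb", "r") as io:
    nwbfile = io.read()
    print(nwbfile.session_description)       # free-text session metadata
    print(list(nwbfile.acquisition.keys()))  # acquired data streams
```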
This codebase contains starter code for implementing your own method for the FALCON challenge.
- The `falcon_challenge` folder contains the logic for the evaluator. Submitted solutions must conform to the interface specified in `falcon_challenge.interface`.
- In `data_demos`, we provide notebooks that survey each dataset released as part of this challenge.
- In `decoder_demos`, we provide sample decoders and baselines that are formatted to be ready for submission to the challenge. To use them, see the comments in the header of each file ending in `_sample.py`. Your solutions should look similar once implemented! (Namely, you should have a `_decoder.py` file or class which conforms to `falcon_challenge.interface`, as well as a `_sample.py` file that is the entry point for running your decoder.)

For example, you can prepare and evaluate a linear decoder by running:
```
python decoder_demos/sklearn_decoder.py --training_dir data/000954/sub-HumanPitt-held-in-calib/ --calibration_dir data/000954/sub-HumanPitt-held-out-calib/ --mode all --task h1
# Should report: CV fit score, 0.26
python decoder_demos/sklearn_sample.py --evaluation local --phase minival --split h1
# Should report: Held In Mean of 0.195
```
Note: During evaluation, data file names are hashed into unique tags. Submitted solutions receive data to decode along with tags indicating the file from which the data originates in the call to their `reset` function. These tags are the keys of the `DATASET_HELDINOUT_MAP` dictionary in `falcon_challenge/evaluator.py`. Submissions that intend to condition decoding on the data file from which the data comes should make use of these tags. For an example, see `fit_many_decoders` and `reset` in `decoder_demos/sklearn_decoder.py`.
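To make that structure concrete, here is a minimal sketch of a tag-conditioned decoder. The base class name `BCIDecoder`, the `task_config` constructor argument, and the `predict` signature are assumptions modeled on the sample decoders; the authoritative definitions live in `falcon_challenge.interface`.

```python
# Minimal sketch of a tag-conditioned decoder. BCIDecoder, task_config, and the
# predict signature are assumptions; see falcon_challenge.interface for the
# authoritative interface.
import numpy as np
from falcon_challenge.interface import BCIDecoder

class MyDecoder(BCIDecoder):
    def __init__(self, task_config):
        super().__init__(task_config=task_config)
        self.session_models = {}  # per-session models, keyed by hashed tag
        self.active_model = None

    def reset(self, dataset_tag=""):
        # The evaluator passes a hashed tag identifying the source file; use it
        # to select the matching per-session model (cf. fit_many_decoders).
        self.active_model = self.session_models.get(dataset_tag)

    def predict(self, neural_observations: np.ndarray) -> np.ndarray:
        # Map one timestep of neural features to a behavioral prediction.
        return self.active_model.predict(neural_observations[None, :])[0]
```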
To interface with our challenge, your code will need to be packaged in a Docker container that is submitted to EvalAI. Try this process by building and running the provided `sklearn_sample.Dockerfile` to confirm your setup works. Do this with the following commands (once Docker is installed):
```
# Build
docker build -t sk_smoke -f ./decoder_demos/sklearn_sample.Dockerfile .
# Test
bash test_docker_local.sh --docker-name sk_smoke
```
For an example Dockerfile with annotations regarding the necessity and function of each line, see `decoder_demos/template.Dockerfile`.
Please ensure that your submission runs locally before attempting remote evaluation. You can run the previously listed commands with your own Dockerfile in place of sk_smoke. This should produce a log of nontrivial metrics (evaluation runs on the locally available minival split).
To submit to the FALCON benchmark once your decoder Docker container is ready, follow the instructions on the EvalAI submission tab. This will instruct you to first install EvalAI, then add your token, and finally push the submission. It should look something like:
evalai push mysubmission:latest --phase few-shot-<test/minival>-2319 --private
(Note that you will not see these instructions unless you have first created a team to submit. The phase should contain a specific challenge identifier. You may need to refresh the page before the instructions appear.)
Please note that all submissions are subject to a 6-hour time limit.
Docker:
- `sudo` access is needed, or your user needs to be in the `docker` group; `docker info` should run without error.
- While `sudo` is sufficient for local development, the EvalAI submission step will ultimately require your user to be able to run `docker` commands without `sudo`.
- To do this, add your user to the `docker` group. Note you may need vigr to add your own user.

EvalAI:
- `pip install evalai` may fail on Python 3.11; see https://github.com/aio-libs/aiohttp/issues/6600. We recommend creating a separate environment for submission in this case.