.. vim: set fileencoding=utf-8 :
.. Manuel Guenther <manuel.guenther@idiap.ch>
.. Tue 24 Mar 14:55:33 CET 2015

===============================================================
 Impact of Eye Detection Error on Face Recognition Performance
===============================================================

This package provides the source code to run the experiments published in the paper `Impact of Eye Detection Error on Face Recognition Performance <http://publications.idiap.ch/index.php/publications/show/2981>`_. It relies on the FaceRecLib_ to execute the face recognition experiments, which in turn uses the face recognition algorithms and the database interface of Bob_.

.. note:: Currently, this package only works in Unix-like environments and under MacOS. Due to limitations of the Bob_ library, MS Windows operating systems are not supported. We are working on a port of Bob_ for MS Windows, but it might take a while. In the meantime you could use our VirtualBox_ images, which can be downloaded `here <http://www.idiap.ch/software/bob/images>`__.

When you use this source code in a scientific publication, we would be happy if you would cite::

  @article{Dutta2015,
    author = "Abhishek Dutta and Manuel G\"unther and Laurent El Shafey and S\'ebastien Marcel",
    title = "Impact of Eye Detection Error on Face Recognition Performance",
    year = 2015,
    journal = {IET Biometrics},
    issn = {2047-4938},
    url = {http://digital-library.theiet.org/content/journals/10.1049/iet-bmt.2014.0037},
    pdf = {http://publications.idiap.ch/downloads/papers/2015/Dutta_IETBIOMETRICS_2014.pdf}
  }

Installation
------------

This package uses several Bob_ libraries, which will be automatically installed locally using the command lines as listed below. However, in order for the Bob_ packages to compile, certain `Dependencies <https://github.com/idiap/bob/wiki/Dependencies>`_ need to be installed.

This package
~~~~~~~~~~~~

The installation of this package relies on the `BuildOut <http://www.buildout.org>`_ system. By default, the command line sequence::

  $ python bootstrap-buildout.py
  $ ./bin/buildout

should download and install all required packages of Bob_ in the versions that we used to produce the results. Other versions of the packages might generate slightly different results. To use the latest versions of all Bob_ packages, please remove the strict version numbers that are given in the buildout.cfg file in the main directory of this package.
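
For illustration, version pinning in a buildout configuration looks roughly like the sketch below; the package names and version numbers are made-up examples, not the actual contents of this package's buildout.cfg::

  [buildout]
  parts = scripts
  eggs = xfacereclib.paper.IET2015

  [versions]
  # each pin forces a specific release; removing a line lets buildout
  # pick the latest available version of that package
  bob.core = 2.0.4
  facereclib = 2.0.1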

Image Database
~~~~~~~~~~~~~~

The experiments are run on an external image database. We do not provide the images themselves; hence, please contact the database owners to obtain a copy. The Multi-PIE database used in our experiments can be downloaded `here <http://www.multipie.org>`__.

.. note:: Unfortunately, the Multi-PIE database is not free of charge. If you do not have a copy of the database yet, and you are not willing to pay for it, you cannot reproduce the results of the paper directly. Nevertheless, you can use other databases, some of which are free of charge. A complete list of supported databases and their corresponding evaluation protocols can be found in the FaceRecLib_ documentation.

Important!
''''''''''

After downloading the databases, you need to tell our software where it can find them by changing the configuration file. In particular, please update the MULTIPIE_IMAGE_DIRECTORY in xfacereclib/paper/IET2015/configuration/database.py.
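
For example, after the update, the relevant line in that file could look like the following sketch (the path is a placeholder for your local copy of the database)::

  # in xfacereclib/paper/IET2015/configuration/database.py
  MULTIPIE_IMAGE_DIRECTORY = "/path/to/your/Multi-PIE/images"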

Unpacking the Annotations
~~~~~~~~~~~~~~~~~~~~~~~~~

After the database is set up correctly, you'll need to unpack the eye annotations that are used in the experiments. Please run the script::

  $ ./bin/unpack_annotations

to extract the annotations into the desired directory structure. If you want, you can specify another directory to unpack the annotations into (see ./bin/unpack_annotations.py --help), but all other functions and configurations have their defaults set according to the default directory.

Testing your Installation
~~~~~~~~~~~~~~~~~~~~~~~~~

After you have set up the database, you should be able to run our test suite::

  $ ./bin/nosetests

Please make sure that all tests pass.

TODO::

  Implement tests.

Getting help
------------

In case anything goes wrong, please feel free to open a new ticket on our GitHub_ page, start a new discussion in our `Mailing List <https://groups.google.com/forum/?fromgroups#!forum/bob-devel>`_ or send an email to manuel.guenther@idiap.ch.

Recreating the Results of the Paper_
------------------------------------

After successfully setting up the database, you are now able to run the face recognition experiments as explained in the Paper_. In particular, you will be able to reproduce Figure 4, Figure 7 and Figure 13. Be aware that we ran more than 1000 individual face recognition experiments, each of which used a slightly different experiment configuration.

The Experiment Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The face recognition experiments are run using the FaceRecLib_. In total, we test five different face recognition algorithms, each of which uses the default configuration from the FaceRecLib_:

  • eigenfaces: a PCA is trained on pixel gray values, and the projected features are compared with Euclidean distance (a sketch illustrating this baseline follows the list).
  • fisherfaces: a combined PCA + LDA matrix is trained on pixel gray values, and the projected features are compared with Euclidean distance.
  • gabor-jet: Gabor jets are extracted at grid locations in the image and compared with a Gabor-phase-based similarity function.
  • lgbphs: extended local Gabor binary pattern histogram sequences are extracted from image blocks, and the histograms are compared with histogram intersection.
  • isv: DCT features are extracted from image blocks and modeled with a Gaussian mixture model and an additional inter-session variability model, and the score is computed as a likelihood ratio.
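
A minimal numpy sketch of the eigenfaces baseline (PCA on gray values, Euclidean distance on the projected features) is given below; it only illustrates the principle and is not the FaceRecLib_ implementation::

  import numpy as np

  def train_pca(images, n_components):
    # images: (n_samples, n_pixels) array of flattened gray-value images
    mean = images.mean(axis=0)
    # principal components via SVD of the centered data
    _, _, vt = np.linalg.svd(images - mean, full_matrices=False)
    return mean, vt[:n_components]

  def project(image, mean, components):
    # map a flattened image into the PCA subspace
    return components @ (image - mean)

  def score(model, probe):
    # negative Euclidean distance: larger score means more similar
    return -np.linalg.norm(model - probe)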

As input, all these algorithms expect images where the face is extracted and aligned, so that the eye centers are always placed at the same location in the image. For this alignment procedure, labeled eye locations must be available. The main focus of this paper is not on the face recognition algorithms themselves, but on how they perform when the eye locations are slightly misplaced, as can happen with both manual and automatic annotations.
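
To give an idea of how such an alignment works, the sketch below shows a generic similarity-transform formulation (not this package's preprocessing code) that computes the rotation and scale mapping the two annotated eye centers onto fixed target positions; the target coordinates are made-up examples::

  import numpy as np

  def eye_alignment(right_eye, left_eye, target_right=(15, 16), target_left=(48, 16)):
    # vectors between the annotated eyes and between the target positions
    src = np.subtract(left_eye, right_eye)
    dst = np.subtract(target_left, target_right)
    # rotation angle and scaling factor of the similarity transform
    angle = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    scale = np.linalg.norm(dst) / np.linalg.norm(src)
    return angle, scale

A misplaced annotation changes src, and thereby the resulting transform, which is exactly the effect studied in the Paper_.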

Running the Experiments
~~~~~~~~~~~~~~~~~~~~~~~

For convenience, we have generated a wrapper script that allows running a set of face recognition experiments in sequence -- or even in parallel, see below. This wrapper script repurposes one functionality of the FaceRecLib_, namely the parameter testing, which is an easy way to perform a grid search over a set of parameters. For our purposes, these parameters are:

  • Figure 4: the eye position shifts in horizontal and vertical direction, as well as the rotation angle.
  • Figure 7: the standard deviations of the normally distributed shifts of the eye positions in horizontal and vertical direction, as well as a random seed.

The corresponding configurations are given in fixed_perturbation.py (Figure 4) and random_perturbation.py (Figure 7). There, you can find the setup as it was used to generate the corresponding plots; in case you want to run only a subset of the experiments, you can reduce the parameters in each list.
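
Conceptually, the parameter testing enumerates one experiment per parameter combination. The sketch below illustrates this with made-up values; the real ranges are defined in fixed_perturbation.py and random_perturbation.py::

  import itertools
  import numpy as np

  # hypothetical parameter values, for illustration only
  horizontal_shifts = [-4, -2, 0, 2, 4]   # pixels
  vertical_shifts = [-4, -2, 0, 2, 4]     # pixels
  rotation_angles = [-10, -5, 0, 5, 10]   # degrees

  # Figure 4: one experiment per combination of fixed perturbations
  for dx, dy, angle in itertools.product(horizontal_shifts, vertical_shifts, rotation_angles):
    print("fixed perturbation: shift=(%d,%d), angle=%d" % (dx, dy, angle))

  # Figure 7: shifts drawn from a Normal distribution with a fixed seed,
  # so that the random perturbation is reproducible
  rng = np.random.RandomState(42)
  sigma_x, sigma_y = 2.0, 2.0             # example standard deviations
  dx, dy = rng.normal(0.0, sigma_x), rng.normal(0.0, sigma_y)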

The experiments can be run using the ./bin/parameter_test.py script. This script has several options, the most important of which are:

  • --configuration-file: the configuration file that contains the parameters that we want to test. For our experiments, these are the two files fixed_perturbation.py (Figure 4) and random_perturbation.py (Figure 7).

  • --database: the database that should be used in the experiments, which will be multipie-m in all cases.

  • --executable: the (pythonic) name of the face verification function that will be executed. Since we had to modify the default script a bit, our script needs to be specified (see below).

  • --sub-directory: the name of a directory (created if it does not exist yet), where all experiments for the given configuration file are stored.

  • --grid: the name of a grid configuration to run the algorithms in parallel (see below).

  • --verbose: print additional information or debug information during the execution of the experiments. The --verbose option can be used several times, increasing the level to Warning (1), Info (2) and Debug (3). By default, only Error (0) messages are printed. The Info level (aka -vv) is recommended.

  • --dry-run: use this option to print the calls to the FaceRecLib_ without executing them. It is recommended to use this flag once to check that everything is correct before actually running the experiments (an example dry run is shown after this list).
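
For instance, a first verbose dry run for the fixed perturbation setup, combining only the options documented above, could look like::

  $ ./bin/parameter_test.py --configuration-file fixed_perturbation.py --database multipie-m --sub-directory fixed --executable xfacereclib.paper.IET2015.script.faceverify -vv --dry-run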

Additionally, parameters can be passed directly to the ./bin/faceverify.py script from the FaceRecLib_. Please use a -- to separate parameters for ./bin/faceverify.py from parameters for ./bin/parameter_test.py. Useful parameters might be the --result-directory and --temp-directory options. For a complete list of options, please check ./bin/faceverify.py --help.

Finally, the command lines to run the experiments for Figures 4 and 7 are::

  $ ./bin/parameter_test.py --configuration-file fixed_perturbation.py --database multipie-m --sub-directory fixed --executable xfacereclib.paper.IET2015.script.faceverify -- --temp-directory [YOUR_TEMP_DIRECTORY] --result-directory [YOUR_RESULT_DIRECTORY]

  $ ./bin/parameter_test.py --configuration-file random_perturbation.py --database multipie-m --sub-directory random --executable xfacereclib.paper.IET2015.script.faceverify -- --temp-directory [YOUR_TEMP_DIRECTORY] --result-directory [YOUR_RESULT_DIRECTORY]

The last set of experiments, which regenerates Figure 13, can be run using the ./bin/annotation_types script. Again, this script has a set of options, most of which have proper default values:

  • --image-directory: the base directory of the Multi-PIE database; needs to be specified.
  • --annotation-directory: the base directory where the annotations have been extracted to.
  • --algorithms: a list of algorithms that should be tested; by default, all five algorithms are run.
  • --world-types: a list of annotation types that are used to train the algorithms and to enroll the models.
  • --probe-types: a list of annotation types that are probed against the enrolled models.

Again, the same --verbose option exists, and options can be passed to the ./bin/faceverify.py script after a --. Hence, the last set of experiments can be started with::

  $ ./bin/annotation_types --image-directory [MULTIPIE_IMAGE_DIRECTORY] -vv -- --temp-directory [YOUR_TEMP_DIRECTORY] --result-directory [YOUR_RESULT_DIRECTORY]

Parallel Execution
~~~~~~~~~~~~~~~~~~

Since the two command lines above execute more than 1000 individual face recognition experiments, you might want to run them in parallel. For this purpose, you can use the --grid option of the ./bin/parameter_test.py script. This will trigger the usage of GridTK_, a tool originally developed to submit and monitor jobs in an SGE processing farm. If you have access to such a farm, you can use the --grid sge option to submit the experiments to the SGE grid (you might need to adapt the SGE configuration in the grid configuration file xfacereclib/paper/IET2015/configuration/grid.py, in facereclib/utils/grid.py of the FaceRecLib_, or in the GridTK_ itself).

On the other hand, if you have a powerful machine with many processing units, you can use the --grid local option. This will submit jobs to the "local" queue, which you then have to start manually by running::

  $ ./bin/jman --local --database [DIR]/submitted.sql3 -vv run-scheduler --parallel [NUMBER_OF_SLOTS] --die-when-finished

Please refer to the GridTK_ manual for more details.

.. note:: When submitting to either the local queue or the SGE, several job databases called submitted.sql3 are stored in sub-directories of the grid_db directory. You can use ./bin/jman --database [DIR]/submitted.sql3 list to see the current status of the jobs stored in the given database. Of course, you can also use the default SGE tools (such as qstat) to check the status of the jobs.

.. warning:: For the random experiment, please do not use more than one parallel job to preprocess the images. Otherwise, the random seed might be applied several times, leading to inexact results.

.. note:: The same --grid option can be used for the ./bin/annotation_types script. Here, only one submitted.sql3 file is written, in the current directory.

Evaluating the Experiments
~~~~~~~~~~~~~~~~~~~~~~~~~~

After all experiments have finished successfully, the resulting score files can be evaluated. The figures in the paper were generated using a mix of Python and R scripts, partly to make them look nicer; for this package, however, we plot the figures solely using matplotlib. The ./bin/plot_results script can be used to create plots similar to the ones in Figures 4, 7 and 13. Additionally, it writes .csv files containing the exact numbers; the figures in the Paper_ rely on these files.

As usual, the ./bin/plot_results script has a list of command line options, most of which have proper default values:

  • --scores-directory: the base directory, where the score files have been produced.

  • --experiments: a list of experiments to evaluate. By default, all three experiments are evaluated.

  • --algorithms: a list of algorithms to evaluate. By default, all five algorithms are evaluated.

Some more options are available; see ./bin/plot_results --help for a complete list. Hence, to produce all three plots from Figures 4, 7, and 13, simply call::

  $ ./bin/plot_results -vv --scores-directory [YOUR_RESULT_DIRECTORY]

Afterward, the plots can be found in the plots directory. For Figure 4, they are called HTER_fixed.pdf and AUC_fixed.pdf, while for Figure 7 they are HTER_random.pdf and AUC_random.pdf. The HTER plots should be identical to the ones found in the Paper_. The AUC plots have a different color coding than in the Paper_, but the contents are identical. Finally, the file plots/ROCs.pdf contains the ROC curves of Figure 13, except that the FAR range is slightly higher.
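
For reference, the AUC values can in principle be computed directly from the raw genuine and impostor scores. The sketch below shows a generic rank-sum formulation, ignoring ties; it is not the evaluation code of this package::

  import numpy as np

  def auc(impostor_scores, genuine_scores):
    # area under the ROC curve via the Mann-Whitney statistic: the fraction
    # of (genuine, impostor) score pairs that are ranked correctly
    impostors = np.asarray(impostor_scores)
    genuines = np.asarray(genuine_scores)
    wins = sum((genuines > s).sum() for s in impostors)
    return wins / float(len(impostors) * len(genuines))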

.. _paper: http://publications.idiap.ch/index.php/publications/show/2981
.. _idiap: http://www.idiap.ch
.. _bob: http://www.idiap.ch/software/bob
.. _facereclib: http://pypi.python.org/pypi/facereclib
.. _github: http://github.com/bioidiap/xfacereclib.paper.IET2015
.. _virtualbox: http://www.virtualbox.org
.. _gridtk: http://pypi.python.org/pypi/gridtk
