This repository reproduces the results of Salvador (2022) using an adaptation of the paper's code on GitHub.
For a more detailed explanation of how to run all the code, refer to the step-by-step guide.
To install the requirements, first download Miniconda. Then create the environment by executing the command below from the current directory. This will create an environment named `mlrc-faircal`, which you can activate with `conda activate mlrc-faircal`.
conda env create -f environment_[cpu|gpu].yml
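After creating the environment, you can optionally verify that the core dependencies resolved. The snippet below is a minimal sanity check, assuming PyTorch and facenet-pytorch are among the dependencies installed by the environment file:

```python
# Sanity check: run inside the activated mlrc-faircal environment.
# Assumes PyTorch and facenet-pytorch are installed by the environment file.
import torch
from facenet_pytorch import InceptionResnetV1  # noqa: F401

print(f"PyTorch {torch.__version__}; CUDA available: {torch.cuda.is_available()}")
```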
The BFW and RFW datasets were used to verify the original paper's results; other datasets may be used for additional verification. See the table below for where to obtain each dataset.
| Dataset | Open-source | URL |
|---|---|---|
| BFW | No, register through Google Forms | https://github.com/visionjo/facerec-bias-bfw |
| RFW | No, email for research access | http://whdeng.cn/RFW/testing.html |
After obtaining these datasets, please read the Data README for instructions on cropping and embedding the face images.
Running
python src/main.py
should execute the project's entire pipeline (crop, embed, cluster, calibrate, evaluate) and save the results in the `experiments` folder.
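For reference, the clustering stage follows FairCal's approach of running K-means over the face embeddings and then calibrating scores per cluster. The sketch below shows that step with scikit-learn; the embedding array and the number of clusters are illustrative placeholders, not values taken from `src/main.py`:

```python
# Hypothetical sketch of the clustering stage: group face embeddings
# with K-means, then calibrate scores per cluster (FairCal).
# n_clusters=100 is illustrative, not necessarily the repo's setting.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.standard_normal((1000, 512))  # stand-in for FaceNet embeddings

kmeans = KMeans(n_clusters=100, n_init=10, random_state=0).fit(embeddings)
cluster_ids = kmeans.labels_  # per-image cluster id, used for per-cluster calibration
```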
To generate the tables and figures used in the reproducibility paper, run the notebook `src/Tables and Figures.ipynb` or run the following:
python src/tables_and_figures.py
python src/extension.py
To embed the images, FaceNet (Inception-ResNet) models are used, obtained from the facenet-pytorch GitHub repository. These are downloaded automatically when using the environment defined above.
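For illustration, cropping and embedding a single image with facenet-pytorch looks roughly like this. It is a minimal sketch, assuming the VGGFace2 pretrained weights (the repo may use other weights) and a placeholder image path:

```python
# Minimal sketch: crop and embed one image with facenet-pytorch.
# Assumes the 'vggface2' pretrained weights; the image path is a placeholder.
import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                              # face detection + alignment
resnet = InceptionResnetV1(pretrained='vggface2').eval()   # FaceNet embedder

img = Image.open('path/to/face.jpg')
face = mtcnn(img)                                          # cropped face tensor, or None
if face is not None:
    with torch.no_grad():
        embedding = resnet(face.unsqueeze(0))              # shape (1, 512)
```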
To edit the paper, go to this Overleaf project if you have permission, or view the paper via this link.