Our development environment is specified in `requirements.txt`.
#1) Preprocessing
Download the original datasets to the corresponding ./datasets/Videos/ folders.
### Split the videos into training and testing sets:
cd datasets
python split_video_datasets.py -h
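The script's actual options are listed by `-h`. As a rough illustration only, a random train/test split of video files might look like the following sketch (the 80/20 ratio, the `.mp4` extension, and the folder layout are assumptions, not the script's real defaults):

```python
import os
import random
import shutil

def split_videos(video_dir, out_dir, train_ratio=0.8, seed=42):
    """Randomly copy video files into train/ and test/ subfolders."""
    # Assumed extension; the real script may handle other formats.
    videos = sorted(f for f in os.listdir(video_dir) if f.endswith(".mp4"))
    random.seed(seed)
    random.shuffle(videos)
    n_train = int(len(videos) * train_ratio)
    for subset, names in (("train", videos[:n_train]), ("test", videos[n_train:])):
        subset_dir = os.path.join(out_dir, subset)
        os.makedirs(subset_dir, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(video_dir, name), subset_dir)

split_videos("./Videos", "./split")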
### Extract the frames:
python extract_frames.py -h
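For reference, frame extraction with OpenCV typically follows this pattern (the sampling interval here is an assumption, not the script's default):

```python
import os
import cv2  # opencv-python

def extract_frames(video_path, out_dir, every_n=10):
    """Save every n-th frame of a video as a JPEG."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:05d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()
```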
Use the S3FD detector and FAN aligner to extract facial images; see [faceswap github](https://github.com/deepfakes/faceswap).
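As a reference point, the faceswap extractor is usually invoked along these lines (the input/output paths here are placeholders; check `python faceswap.py extract -h` for the flags in your faceswap version):

python faceswap.py extract -i ./datasets/frames -o ./datasets/faces -D s3fd -A fan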
#2) Feature Points Statistics
First, download shape_predictor_68_face_landmarks.dat to the repository root (./).
cd ..
python feature_point_statistics.py -h
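The downloaded predictor file is loaded with dlib's standard 68-landmark API; a minimal sketch of detecting landmarks on one facial image (the image name is hypothetical):

```python
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("./shape_predictor_68_face_landmarks.dat")

# "face.jpg" is a hypothetical extracted facial image.
img = cv2.imread("face.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for rect in detector(gray):
    shape = predictor(gray, rect)
    # 68 (x, y) landmark coordinates.
    landmarks = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
```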
#3) FFR_FD
cd construct_FFR_FD_for_datasets
python FFR_FD_no_ave_train_set.py -h
python FFR_FD_no_ave_test_set.py -h
python FFR_FD_ave_train_set.py -h
python FFR_FD_ave_test_set.py -h
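The four scripts build FFR_FD vectors for each split; the `ave`/`no_ave` pairs differ in whether per-region descriptors are averaged. The sketch below conveys the general idea only: the region groupings, the ORB feature detector, and the bounding-box region test are illustrative assumptions, not the scripts' exact construction.

```python
import cv2
import numpy as np

# Illustrative landmark index groups for facial regions (assumed, not the paper's exact split).
REGIONS = {
    "left_eye": range(36, 42),
    "right_eye": range(42, 48),
    "nose": range(27, 36),
    "mouth": range(48, 68),
}

def ffr_fd(gray, landmarks, average=True):
    """Build a per-region vector from feature points falling inside each facial region."""
    orb = cv2.ORB_create()
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        keypoints, descriptors = [], np.zeros((0, 32))
    vector = []
    for indices in REGIONS.values():
        pts = np.array([landmarks[i] for i in indices])
        x0, y0 = pts.min(axis=0)
        x1, y1 = pts.max(axis=0)
        # Keep descriptors whose keypoints lie in the region's bounding box.
        in_region = [d for kp, d in zip(keypoints, descriptors)
                     if x0 <= kp.pt[0] <= x1 and y0 <= kp.pt[1] <= y1]
        if average:
            region = np.mean(in_region, axis=0) if in_region else np.zeros(32)
        else:
            region = np.array([len(in_region)], dtype=float)
        vector.append(region)
    return np.concatenate(vector)
```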
#4) Differences in FFR_FD
cd differences_in_FFR_FD
python statistics_differences_of_FFR_FD.py -h
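Conceptually, this step compares FFR_FD statistics between real and fake samples; a minimal sketch of one such comparison (the `.npy` file names are hypothetical):

```python
import numpy as np

def mean_abs_difference(real_fds, fake_fds):
    """Per-dimension mean absolute difference between real and fake FFR_FD matrices."""
    return np.abs(np.mean(real_fds, axis=0) - np.mean(fake_fds, axis=0))

# Hypothetical usage with precomputed (n_samples x n_features) matrices:
# diff = mean_abs_difference(np.load("real_ffr_fd.npy"), np.load("fake_ffr_fd.npy"))
```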
#5) Train and Test
cd train_and_test
python train_and_test.py -h
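The paper detects DeepFakes by classifying FFR_FD vectors with a random forest; a minimal scikit-learn sketch of that setup (the `.npy` file names are hypothetical placeholders for the descriptors built above):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score

# Hypothetical files: FFR_FD matrices with labels (0 = real, 1 = fake).
X_train, y_train = np.load("train_fd.npy"), np.load("train_labels.npy")
X_test, y_test = np.load("test_fd.npy"), np.load("test_labels.npy")

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
prob = clf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, prob))
```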
### Generalization Test:
python generalization_test.py -h
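A generalization test trains on one dataset's FFR_FD vectors and evaluates on another's; a hedged sketch (dataset names and file names are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical files: train on dataset A, evaluate on unseen dataset B.
X_a, y_a = np.load("dataset_a_train_fd.npy"), np.load("dataset_a_train_labels.npy")
X_b, y_b = np.load("dataset_b_test_fd.npy"), np.load("dataset_b_test_labels.npy")

clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_a, y_a)
print("cross-dataset AUC:", roc_auc_score(y_b, clf.predict_proba(X_b)[:, 1]))
```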
#6) Feature Importances
python features_importances.py -h
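With a fitted random forest, per-dimension importances come directly from scikit-learn's impurity-based `feature_importances_` attribute; a self-contained sketch (file names again hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Refit on the hypothetical FFR_FD training matrices from the training step.
X_train, y_train = np.load("train_fd.npy"), np.load("train_labels.npy")
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)

# One importance score per FFR_FD dimension; scores sum to 1.
importances = clf.feature_importances_
for idx in np.argsort(importances)[::-1][:10]:
    print(f"feature {idx}: importance {importances[idx]:.4f}")
```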
If you find this useful for your research, please consider citing:
@article{wang2022ffr_fd,
  title={FFR\_FD: Effective and fast detection of DeepFakes via feature point defects},
  author={Wang, Gaojian and Jiang, Qian and Jin, Xin and Cui, Xiaohui},
  journal={Information Sciences},
  volume={596},
  pages={472--488},
  year={2022},
  publisher={Elsevier}
}