Deepfake Video Detection Using 3DMM Facial Reconstruction Information

doi: 10.13203/j.whugis20210427
Funds:

National Key Research and Development Project (Grant 2019QY2202), Science and Technology Foundation of Guangzhou Huangpu Development District (Grant 2019GH16), and China-Singapore International Joint Research Institute (Grant 206-A018001)

  • Received Date: 2022-06-23
  • Objectives: The emergence of deepfake techniques has led to a worldwide information security problem: deepfake videos are used to manipulate and mislead the public. Although a variety of deepfake detection methods exist, the features they extract generally suffer from poor interpretability. To address this problem, a deepfake video detection method based on the 3D morphable model (3DMM) of the face is proposed in this work. Methods: The 3DMM is employed to estimate the shape, texture, expression and gesture parameters of the face frame by frame, and these parameters constitute the basic information for deepfake detection. A facial behavior feature extraction module and a static facial appearance feature extraction module are designed to construct feature vectors on a sliding-window basis. The behavior feature vector is derived from the expression and gesture parameters, while the appearance feature vector is computed from the shape and texture parameters. The consistency between the appearance and behavior feature vectors, measured by cosine distance, serves as the criterion for authenticating the face in each sliding window across the video. Results: The effectiveness of the proposed method is evaluated on three public datasets. The overall half total error rates (HTER) on the FF++, DFD and Celeb-DF datasets are 1.33%, 4.93% and 3.92%, respectively. For severely compressed videos (the C40 version of DFD), the HTER is 7.09%, showing good robustness against video compression. The model complexity is about one quarter of that of the most closely related work. Conclusions: The proposed algorithm has good person pertinence and clear interpretability. Compared with state-of-the-art methods in the literature, it achieves lower half total error rates, better resistance to video compression and lower computational cost.
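The following is a minimal sketch, in Python, of the window-level decision rule summarized in the Methods above; it is not the authors' released code. The window length, stride, decision threshold, and the two extractor callables (behavior_net and appearance_net) are hypothetical placeholders for the two extraction modules described in the abstract, and the hter helper merely restates the metric reported in the Results.

import numpy as np

WINDOW = 30   # frames per sliding window (assumed value, not taken from the paper)
STRIDE = 15   # hop between consecutive windows (assumed value)

def cosine_distance(u, v):
    # 1 - cosine similarity between two 1-D feature vectors
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

def detect_video(params_3dmm, behavior_net, appearance_net, threshold=0.5):
    # params_3dmm: dict of per-frame 3DMM parameters, each of shape (num_frames, dim),
    # with keys 'expression', 'gesture', 'shape', 'texture'.
    # behavior_net / appearance_net: callables mapping a window of parameters to a
    # fixed-length feature vector (placeholders for the two extraction modules).
    distances = []
    num_frames = params_3dmm["expression"].shape[0]
    for start in range(0, num_frames - WINDOW + 1, STRIDE):
        sl = slice(start, start + WINDOW)
        # Behavior input: expression and gesture (pose) parameters of the window.
        behavior_in = np.concatenate(
            [params_3dmm["expression"][sl], params_3dmm["gesture"][sl]], axis=1)
        # Appearance input: shape and texture parameters of the same window.
        appearance_in = np.concatenate(
            [params_3dmm["shape"][sl], params_3dmm["texture"][sl]], axis=1)
        b = behavior_net(behavior_in)
        a = appearance_net(appearance_in)
        distances.append(cosine_distance(a, b))
    # A large average distance means appearance and behavior are inconsistent,
    # which is taken as evidence that the face in the video is fake.
    score = float(np.mean(distances))
    return score > threshold, score

def hter(far, frr):
    # Half total error rate: the mean of the false acceptance and false rejection rates.
    return (far + frr) / 2.0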
Citation: HU Yongjian, SHE Huimin, LIU Beibei, CHEN Xiangquan, LIU Guangyao. Deepfake Video Detection Using 3DMM Facial Reconstruction Information[J]. Geomatics and Information Science of Wuhan University. doi: 10.13203/j.whugis20210427