
Towards mitigating uncann(eye)ness in face swaps via gaze-centric loss terms

Published: 18 July 2024

Abstract

Advances in face swapping have enabled the automatic generation of highly realistic faces. Yet face swaps are perceived differently from real faces, with key differences in viewer behavior surrounding the eyes. Face swapping algorithms generally place no emphasis on the eyes, relying on pixel or feature matching losses that consider the entire face to guide the training process. We further investigate viewer perception of face swaps, focusing our analysis on the presence of an uncanny valley effect. We additionally propose a novel loss equation for training face swapping models, leveraging a pretrained gaze estimation network to directly improve the representation of the eyes. We confirm that face swaps do elicit uncanny responses from viewers. Our proposed improvements significantly reduce gaze angle errors between face swaps and their source material. Our method additionally reduces the prevalence of the eyes as a deciding factor when viewers perform deepfake detection tasks. Our findings have implications for the use of face swaps in special effects, as digital avatars, as privacy mechanisms, and more; negative viewer responses could limit their effectiveness in these applications. Our gaze improvements are a first step towards alleviating negative viewer perceptions via a targeted approach.
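
As a rough illustration of the gaze-centric loss described above, the snippet below sketches one way such a term can be written in PyTorch, assuming a frozen, pretrained gaze estimator that regresses (pitch, yaw) angles from a face crop. The angle-to-vector conversion and all names are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of a gaze-consistency loss term (illustrative only, not the
    # authors' released code). Assumes a frozen, pretrained estimator `gaze_net`
    # that maps a batch of face crops to (pitch, yaw) angles in radians.
    import torch
    import torch.nn.functional as F

    def gaze_to_vector(pitch_yaw: torch.Tensor) -> torch.Tensor:
        # Convert (pitch, yaw) angles to 3D unit gaze vectors.
        pitch, yaw = pitch_yaw[:, 0], pitch_yaw[:, 1]
        x = -torch.cos(pitch) * torch.sin(yaw)
        y = -torch.sin(pitch)
        z = -torch.cos(pitch) * torch.cos(yaw)
        return torch.stack([x, y, z], dim=1)

    def gaze_loss(gaze_net, swapped: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
        # Mean angular error (radians) between the gaze predicted on the
        # generated face swap and on the original frame it replaces.
        with torch.no_grad():
            target_gaze = gaze_to_vector(gaze_net(reference))
        pred_gaze = gaze_to_vector(gaze_net(swapped))  # gradients reach the generator
        cos_sim = F.cosine_similarity(pred_gaze, target_gaze, dim=1)
        return torch.arccos(cos_sim.clamp(-1 + 1e-7, 1 - 1e-7)).mean()

Because the estimator acts only as a fixed measurement of where the eyes appear to look, gradients flow through it into the face swapping generator while the estimator itself is never updated.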


Highlights

Face swaps are generally perceived as more uncanny than real faces.
Pretrained gaze estimation models can be used to design targeted gaze loss terms (see the training sketch after this list).
Targeted loss equations decrease gaze angle error of generated face swaps.
Our proposed loss term makes the eyes less prominent in human deepfake detection.
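
To make the second and third highlights concrete, the sketch below shows how such a frozen estimator might be combined with conventional full-face losses during training. GazeEstimator, the checkpoint path, reconstruction_loss, identity_loss, and the lambda_* weights are placeholders rather than values from the paper; gaze_loss refers to the sketch following the abstract.

    # Hypothetical wiring of a frozen gaze estimator into a face-swap training
    # objective. GazeEstimator, "gaze_weights.pth", reconstruction_loss,
    # identity_loss, and the lambda_* weights are all placeholders.
    import torch

    gaze_net = GazeEstimator()                                   # assumed (pitch, yaw) regressor
    gaze_net.load_state_dict(torch.load("gaze_weights.pth", map_location="cpu"))
    gaze_net.eval()
    for p in gaze_net.parameters():
        p.requires_grad_(False)                                  # acts purely as a fixed loss network

    def training_step(generator, identity_img, frame, lambda_id=1.0, lambda_gaze=0.1):
        swap = generator(identity_img, frame)                    # swap the identity onto the frame
        loss = reconstruction_loss(swap, frame)                  # whole-face pixel/feature term
        loss = loss + lambda_id * identity_loss(swap, identity_img)
        loss = loss + lambda_gaze * gaze_loss(gaze_net, swap, frame)  # targeted eye term
        return loss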

Published In

Computers and Graphics, Volume 119, Issue C
Apr 2024
407 pages

Publisher

Pergamon Press, Inc.

United States

Publication History

Published: 18 July 2024

Author Tags

  1. Face swapping
  2. Gaze estimation
  3. Perception

Qualifiers

  • Research-article
