Burgess, Romana, Culpin, Iryna ORCID: https://orcid.org/0000-0001-5086-987X, Bould, Helen, Pearson, Rebecca ORCID: https://orcid.org/0000-0001-8527-3400 and Nabney, Ian (2023) A Quantitative Comparison of Manual vs. Automated Facial Coding Using Real Life Observations of Fathers. In: 16th EAI International Conference, PervasiveHealth 2022, 12 December 2022 - 14 December 2022, Thessaloniki, Greece.
Abstract
This work explores the application of the automated facial recognition software “FaceReader” [1] to videos of fathers (n = 36), taken using headcams worn by their infants during interactions in the home. We evaluate the use of FaceReader as an alternative to manual coding – which is both time- and labour-intensive – and advance understanding of the usability of this software in naturalistic interactions. Using video data from the Avon Longitudinal Study of Parents and Children (ALSPAC), we first manually coded fathers’ facial expressions according to an existing coding scheme, and then processed the videos using FaceReader. We used contingency tables and multivariate logistic regression models to compare the manual and automated outputs. Our results indicated low levels of facial recognition by FaceReader in naturalistic interactions (approximately 25.17% compared to manual coding), and we discuss potential causes for this (e.g., problems with lighting, the headcams themselves, and speed of infant movement). However, our logistic regression models showed that when the face was found, FaceReader predicted manually coded expressions with a mean accuracy of M = 0.84 (range = 0.67–0.94), sensitivity of M = 0.64 (range = 0.27–0.97), and specificity of M = 0.81 (range = 0.51–0.97).
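The contingency-table comparison underlying the reported accuracy, sensitivity, and specificity can be sketched as follows. This is an illustrative example only, not the authors' code: the function name and the frame-level labels are hypothetical, standing in for one expression coded manually versus predicted by FaceReader.

```python
# Illustrative sketch (not the study's implementation): build a 2x2
# contingency table from binary manual vs. automated labels for one
# facial expression, then derive accuracy, sensitivity, specificity.

def contingency_metrics(manual, automated):
    """Return (accuracy, sensitivity, specificity) for binary labels."""
    tp = sum(1 for m, a in zip(manual, automated) if m == 1 and a == 1)
    tn = sum(1 for m, a in zip(manual, automated) if m == 0 and a == 0)
    fp = sum(1 for m, a in zip(manual, automated) if m == 0 and a == 1)
    fn = sum(1 for m, a in zip(manual, automated) if m == 1 and a == 0)
    accuracy = (tp + tn) / len(manual)
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return accuracy, sensitivity, specificity

# Hypothetical frame-level labels: 1 = expression present, 0 = absent.
manual    = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
automated = [1, 0, 0, 0, 1, 0, 1, 1, 0, 1]
acc, sens, spec = contingency_metrics(manual, automated)
```

Here sensitivity is the proportion of manually coded expression frames that FaceReader also detected, and specificity the proportion of expression-absent frames it correctly left unlabelled, matching the metrics summarised in the abstract.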