Show simple item record

dc.contributor.author    Bermejo, Enrique
dc.contributor.author    Fernández-Blanco, Enrique
dc.contributor.author    Valsecchi, Andrea
dc.contributor.author    Mesejo, Pablo
dc.contributor.author    Ibáñez, Oscar
dc.contributor.author    Imaizumi, Kazuhiko
dc.date.accessioned      2022-10-14T15:15:55Z
dc.date.available        2022-10-14T15:15:55Z
dc.date.issued           2022
dc.identifier.citation   E. Bermejo, E. Fernandez-Blanco, A. Valsecchi, P. Mesejo, O. Ibáñez, y K. Imaizumi, «FacialSCDnet: A deep learning approach for the estimation of subject-to-camera distance in facial photographs», Expert Systems with Applications, vol. 210, 30 dic. 2022, doi: 10.1016/j.eswa.2022.118457.    es_ES
dc.identifier.issn       0957-4174
dc.identifier.uri        http://hdl.handle.net/2183/31816
dc.description.abstract  [Abstract]: Facial biometrics play an essential role in the fields of law enforcement and forensic sciences. When comparing facial traits for human identification in photographs or videos, the analysis must account for several factors that impair the application of common identification techniques, such as illumination, pose, or expression. In particular, facial attributes can drastically change depending on the distance between the subject and the camera at the time of the picture. This effect is known as perspective distortion, which can severely affect the outcome of the comparative analysis. Hence, knowing the subject-to-camera distance of the original scene where the photograph was taken can help determine the degree of distortion, improve the accuracy of computer-aided recognition tools, and increase the reliability of human identification and further analyses. In this paper, we propose a deep learning approach to estimate the subject-to-camera distance of facial photographs: FacialSCDnet. Furthermore, we introduce a novel evaluation metric designed to guide the learning process, based on changes in facial distortion at different distances. To validate our proposal, we collected a novel dataset of facial photographs taken at several distances using both synthetic and real data. Our approach is fully automatic and can provide a numerical distance estimation for up to six meters, beyond which changes in facial distortion are not significant. The proposed method achieves an accurate estimation, with an average error below 6 cm of subject-to-camera distance for facial photographs in any frontal or lateral head pose, robust to facial hair, glasses, and partial occlusion.    es_ES
dc.language.iso          eng    es_ES
dc.publisher             Elsevier    es_ES
dc.relation              Ministerio de Ciencia e Innovación; PTQ-17-09306    es_ES
dc.relation              info:eu-repo/grantAgreement/EC/H2020/746592    es_ES
dc.relation              Ministerio de Ciencia e Investigación; EXP-00122609/SNEO-20191236    es_ES
dc.relation              Ministerio de Ciencia e Investigación; RYC2020-029454-I    es_ES
dc.relation              Xunta de Galicia; ED431G 2019/01    es_ES
dc.relation              JSP Fellows; 19F19119    es_ES
dc.relation              Ministerio de Ciencia e Investigación; PGC2018-101216-B-I00    es_ES
dc.relation              Xunta de Galicia; ED431C 2018/49    es_ES
dc.relation              Junta de Andalucía; P18-FR-4262    es_ES
dc.relation.uri          https://doi.org/10.1016/j.eswa.2022.118457    es_ES
dc.rights                Atribución-NoComercial-SinDerivadas 3.0 España    es_ES
dc.rights.uri            http://creativecommons.org/licenses/by-nc-nd/3.0/es/    *
dc.subject               Subject-to-camera distance    es_ES
dc.subject               Perspective distortion    es_ES
dc.subject               Photography    es_ES
dc.subject               Human identification    es_ES
dc.subject               Deep learning    es_ES
dc.subject               Transfer learning    es_ES
dc.title                 FacialSCDnet: A deep learning approach for the estimation of subject-to-camera distance in facial photographs    es_ES
dc.type                  info:eu-repo/semantics/article    es_ES
dc.rights.access         info:eu-repo/semantics/openAccess    es_ES
UDC.journalTitle         Expert Systems with Applications    es_ES
UDC.volume               210    es_ES
UDC.issue                30 December    es_ES


Files in this item


This item appears in the following collection(s)
