DacFER: Dual Attention Correction Learning for Efficient Facial Expression Recognition

EasyChair Preprint 11287, 5 pages • Date: November 13, 2023

Abstract

Facial expression recognition (FER) is an important task in computer vision; its applications continue to grow across many fields, and its research is receiving increasing attention. However, sample noise and label noise remain significant challenges in FER that cannot be ignored. We propose a dual attention correction approach that aims to raise the accuracy of local attention and to emphasize the importance of global attention. Specifically, local attention is corrected by re-weighting the importance of each channel through channel attention, which suppresses features that are useless for the FER task, enhances useful ones, and gives the classification loss a more accurate basis for classification. Global attention is corrected by drawing attention to more global information through spatial attention shift consistency, so that classification errors caused by "mistakes" in local attention are avoided. Driven jointly by the classification loss and the spatial shift attention consistency loss, the DacFER method addresses input and label corruption and achieves recognition performance comparable to state-of-the-art methods on the large-scale in-the-wild datasets RAF-DB and AffectNet. Our code will be made publicly available.

Keyphrases: dual attention, facial expression recognition, noisy labels
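The abstract describes two corrections: channel attention that re-weights feature channels (local correction), and a consistency constraint between spatial attention maps of an image and a spatially shifted copy (global correction). The following is a minimal PyTorch sketch of those two ideas, written only from the abstract; the module and function names (ChannelAttention, spatial_attention, shift_consistency_loss), the squeeze-and-excitation-style gating, and the choice of a horizontal roll as the spatial shift are all illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the two attention corrections sketched in the abstract.
# All names and design choices here are assumptions made for illustration;
# they are not taken from the DacFER code release.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel re-weighting (local attention correction)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from a CNN backbone.
        w = self.fc(x.mean(dim=(2, 3)))           # (B, C) per-channel importance
        return x * w.unsqueeze(-1).unsqueeze(-1)  # suppress useless, enhance useful channels


def spatial_attention(features: torch.Tensor) -> torch.Tensor:
    """Channel-averaged spatial attention map, normalized over spatial positions."""
    attn = features.mean(dim=1)                       # (B, H, W)
    return F.softmax(attn.flatten(1), dim=1).view_as(attn)


def shift_consistency_loss(attn_orig: torch.Tensor,
                           attn_shifted: torch.Tensor,
                           shift: int) -> torch.Tensor:
    """Penalize disagreement between the attention of the original input and the
    re-aligned attention of a horizontally shifted copy (global attention correction)."""
    realigned = torch.roll(attn_shifted, shifts=-shift, dims=2)  # undo the shift along width
    return F.mse_loss(attn_orig, realigned)


if __name__ == "__main__":
    # Stand-in for backbone features; a real pipeline would run the shifted image
    # through the backbone instead of rolling the features directly.
    feats = torch.randn(4, 64, 14, 14)
    ca = ChannelAttention(64)
    corrected = ca(feats)

    shift = 2
    shifted_feats = torch.roll(feats, shifts=shift, dims=3)
    loss_cons = shift_consistency_loss(spatial_attention(feats),
                                       spatial_attention(shifted_feats),
                                       shift)
    # The total objective would combine a cross-entropy classification loss with loss_cons.
    print(corrected.shape, loss_cons.item())
```

In this reading, the channel gate sharpens which features the classifier relies on, while the consistency term discourages the network from latching onto a single local region whose attention would not survive a small spatial shift; how DacFER actually weights or implements these terms is not specified in the abstract.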