The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches which consider combinations of features from multiple modalities for label assignment. In this paper we present our approach of learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, focusing on capturing visual information in detected faces; a deep belief net, focusing on the representation of the audio stream; a K-Means based "bag-of-mouths" model, which extracts visual features around the mouth region; and a relational autoencoder, which addresses spatio-temporal aspects of videos. We explore multiple methods for combining the cues from these modalities into one common classifier, which achieves considerably greater accuracy than our strongest single-modality classifier. Our method was the winning submission in the 2013 EmotiW challenge and achieved a test set accuracy of 47.67% on the 2014 dataset.
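One simple way to combine cues from several modality-specific classifiers, as the abstract describes, is a weighted average of their per-class probability outputs (late fusion). The sketch below is illustrative only: the probability vectors, modality names, and fusion weights are hypothetical placeholders, not values from the paper, and the paper's actual combination strategies may differ.

```python
import numpy as np

# Hypothetical per-modality probability outputs for one video clip over
# the seven EmotiW emotion classes. These numbers are invented for
# illustration, not taken from the paper.
emotions = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
p_face_cnn   = np.array([0.05, 0.05, 0.10, 0.55, 0.10, 0.10, 0.05])  # face CNN
p_audio_dbn  = np.array([0.10, 0.05, 0.05, 0.40, 0.20, 0.10, 0.10])  # audio deep belief net
p_bag_mouths = np.array([0.10, 0.10, 0.10, 0.35, 0.15, 0.10, 0.10])  # bag-of-mouths

# Assumed fusion weights; in practice these would be tuned on
# validation data rather than fixed by hand.
weights = np.array([0.5, 0.3, 0.2])

# Weighted average of the modality predictions, renormalized so the
# fused scores again form a probability distribution.
fused = weights[0] * p_face_cnn + weights[1] * p_audio_dbn + weights[2] * p_bag_mouths
fused /= fused.sum()

predicted = emotions[int(np.argmax(fused))]
print(predicted)
```

A learned combiner (e.g. a classifier trained on the concatenated modality outputs) can replace the fixed weights when enough validation data is available.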