Robust canonical correlation analysis: audio-visual fusion for learning continuous interest

Nicolaou, Mihalis A. and Panagakis, Yannis and Zafeiriou, Stefanos and Pantic, Maja (2014) Robust canonical correlation analysis: audio-visual fusion for learning continuous interest. In: 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 04-09 May 2014, Florence, Italy.

Full text is not in this repository.

Abstract

The problem of automatically estimating the interest level of a subject has been gaining attention from researchers, mostly due to the vast applicability of interest detection. In this work, we obtain a set of continuous interest annotations for the SEMAINE database, which we also analyse in terms of emotion dimensions such as valence and arousal. Most importantly, we propose a robust variant of Canonical Correlation Analysis (RCCA) for performing audio-visual fusion, which we apply to the prediction of interest. RCCA recovers a low-rank subspace which captures the correlations of fused modalities, while isolating gross errors in the data without making any assumptions regarding Gaussianity. We experimentally show that RCCA is more appropriate than other standard fusion techniques (such as l2-CCA and feature-level fusion), since it both captures interactions between modalities and decontaminates the obtained subspace from errors which are dominant in real-world problems.
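The full text (and hence the exact RCCA formulation) is not available in this record. As a point of reference for the l2-CCA baseline the abstract compares against, a minimal sketch of classical CCA-based audio-visual fusion might look like the following; the variable names, toy data, and the regularisation constant are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cca(X, Y, n_components=2, reg=1e-6):
    """Classical (l2) CCA via SVD of the whitened cross-covariance.

    X: (n_samples, p) e.g. audio features, Y: (n_samples, q) e.g. visual
    features (the audio/visual roles are illustrative only).
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    # Regularised covariance and cross-covariance estimates
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # Inverse matrix square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return V @ np.diag(1.0 / np.sqrt(np.maximum(w, reg))) @ V.T

    # Canonical correlations are the singular values of the whitened cross-covariance
    K = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(K)
    A = inv_sqrt(Sxx) @ U[:, :n_components]   # canonical directions for X
    B = inv_sqrt(Syy) @ Vt[:n_components].T   # canonical directions for Y
    return A, B, s[:n_components]

# Toy usage: two synthetic "modalities" driven by a shared latent signal
rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 3))                                  # shared latent factors
X = Z @ rng.standard_normal((3, 20)) + 0.1 * rng.standard_normal((500, 20))
Y = Z @ rng.standard_normal((3, 30)) + 0.1 * rng.standard_normal((500, 30))
A, B, corrs = cca(X, Y, n_components=3)
fused = np.hstack([X @ A, Y @ B])   # fused representation fed to a downstream regressor
print(corrs)                        # canonical correlations, close to 1 for the toy data
```

Per the abstract, RCCA differs from this baseline by additionally separating gross (non-Gaussian) errors from a low-rank correlated subspace, so the fused representation is learned from the clean low-rank part rather than from the raw, possibly contaminated observations.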

Item Type: Conference or Workshop Item (Paper)
Research Areas: A. > School of Science and Technology > Computer Science
Item ID: 23772
Depositing User: Yannis Panagakis
Date Deposited: 06 Mar 2018 15:26
Last Modified: 07 Dec 2018 08:34
URI: http://eprints.mdx.ac.uk/id/eprint/23772
