
Publication

Integrating Deep and Shallow Models for Multi-Modal Depression Analysis — Hybrid Architectures

Journal Contribution - Journal Article

At present, although considerable progress has been made in automatic depression assessment, most recent work considers only audio and video paralinguistic information, rather than the linguistic information carried by the spoken content. In this work, we argue that, besides good audio and video features, reliable depression detection systems also need text-based content features that capture depression-related textual indicators. Furthermore, improving the performance of automatic depression assessment requires powerful models capable of capturing the characteristics of depression embedded in the audio, visual and text descriptors. This paper proposes new text and video features and hybridizes deep and shallow models for depression estimation and classification from audio, video and text descriptors. The proposed hybrid framework consists of three main parts: 1) a Deep Convolutional Neural Network (DCNN) and Deep Neural Network (DNN) based audio-visual multi-modal depression recognition model that estimates the Patient Health Questionnaire depression scale (PHQ-8); 2) a Paragraph Vector (PV) and Support Vector Machine (SVM) based model that infers the physical and mental conditions of the individual from the interview transcripts; and 3) a Random Forest (RF) model that classifies depression from the estimated PHQ-8 score and the inferred conditions of the individual.

In the PV-SVM model, PV embedding is used to obtain fixed-length feature vectors from the transcribed answers to questions associated with psychoanalytic aspects of depression; these vectors are then fed into SVM classifiers that detect the presence or absence of the considered psychoanalytic symptoms. To the best of our knowledge, this is the first attempt to apply PV to depression analysis.
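As a rough sketch only (not the authors' code), the PV-SVM stage could be prototyped with gensim's Doc2Vec, which implements Paragraph Vector, and a scikit-learn SVM; the answers, symptom labels and hyperparameters below are toy placeholders, assuming gensim 4.x and scikit-learn are available.

# Hedged sketch of a PV-SVM symptom classifier; data and settings are illustrative.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.svm import SVC

answers = [                      # placeholder answers to one interview question
    "i have trouble falling asleep and wake up often",
    "i sleep fine and feel rested most mornings",
    "i barely slept at all this week",
    "my sleep has been normal lately",
]
symptom = [1, 0, 1, 0]           # 1 = symptom present, 0 = absent (illustrative)

# Train a Paragraph Vector model that maps each answer to a fixed-length vector.
docs = [TaggedDocument(words=a.split(), tags=[i]) for i, a in enumerate(answers)]
pv = Doc2Vec(docs, vector_size=50, window=3, min_count=1, epochs=100)

# Feed the fixed-length embeddings into an SVM classifier for this symptom.
X = [pv.dv[i] for i in range(len(answers))]
clf = SVC(kernel="rbf", gamma="scale").fit(X, symptom)

# Embed an unseen answer and predict presence/absence of the symptom.
new_vec = pv.infer_vector("i keep waking up in the middle of the night".split())
print(clf.predict([new_vec]))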
In addition, we propose a new visual descriptor, the Histogram of Displacement Range (HDR), which characterizes the displacement and velocity of the facial landmarks within a video segment. Experiments carried out on the Audio Visual Emotion Challenge (AVEC2016) depression dataset demonstrate that: 1) the proposed hybrid framework effectively improves the accuracy of both depression estimation and depression classification, reaching an average F1 measure of 0.746, higher than the best result (0.724) of the AVEC2016 depression sub-challenge; and 2) HDR obtains better depression recognition performance than Bag-of-Words (BoW) and Motion History Histogram (MHH) features.
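The precise HDR formulation is given in the paper itself; purely as a hedged illustration of the general idea, the sketch below summarizes a facial-landmark sequence by histogramming the range of per-landmark frame-to-frame displacements over a segment. The array shapes, bin count and displacement cap are assumptions, not values from the paper.

# Illustrative HDR-style descriptor over a landmark trajectory (interpretation only).
import numpy as np

def hdr_descriptor(landmarks, n_bins=10, max_range=20.0):
    """landmarks: array of shape (n_frames, n_landmarks, 2) holding (x, y) positions."""
    # Frame-to-frame displacement magnitude of every landmark (a velocity proxy).
    disp = np.linalg.norm(np.diff(landmarks, axis=0), axis=2)   # (n_frames-1, n_landmarks)
    # Range each landmark's displacement covers within the segment.
    disp_range = disp.max(axis=0) - disp.min(axis=0)            # (n_landmarks,)
    # Histogram of those ranges gives a fixed-length descriptor for the segment.
    hist, _ = np.histogram(disp_range, bins=n_bins, range=(0.0, max_range))
    return hist / max(hist.sum(), 1)                            # normalised

# Toy usage: 30 frames of 68 facial landmarks (random placeholder trajectories).
rng = np.random.default_rng(0)
segment = rng.normal(size=(30, 68, 2)).cumsum(axis=0)
print(hdr_descriptor(segment))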
Journal: IEEE Transactions on Affective Computing
ISSN: 1949-3045
Issue: 1
Volume: 12
Pages: 239-253
Publication year: 2021
Keywords: Analytical models, Deep Convolutional Neural Network-Deep Neural Network (DCNN-DNN), Depression classification, Depression estimation, Estimation, Feature extraction, Histogram of Displacement Range (HDR), Histograms, Neural networks, Paragraph Vector-Support Vector Machine (PV-SVM), Random Forest, Support vector machines, Visualization
Authors: International
Authors from: Government, Higher Education
Accessibility: Closed