Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification
Date
2019
Publisher
St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences
Access Rights
info:eu-repo/semantics/openAccess
Abstract
Recently, Speech Emotion Recognition (SER) has become an important research topic in affective computing. It is a difficult problem, and some of its greatest challenges lie in feature selection and representation. A good feature representation should reflect both the global trends and the temporal structure of the signal, since emotions naturally evolve in time; this has become possible with the advent of Recurrent Neural Networks (RNNs), which are widely used today for sequence modeling tasks. This paper proposes a hybrid approach to feature representation that combines traditionally engineered statistical features with a Long Short-Term Memory (LSTM) sequence representation in order to exploit both the short-term and the long-term acoustic characteristics of the signal, thereby capturing not only its general trends but also its temporal structure. The proposed method is evaluated on three publicly available acted emotional speech corpora in three different languages: RUSLANA (Russian), BUEMODB (Turkish) and EMODB (German). Compared to the traditional approach, our experiments show an absolute improvement of 2.3% and 2.8% on two of the three databases, and comparable performance on the third. Therefore, given enough training data, the proposed method proves effective in modeling the emotional content of speech utterances. © 2019 St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences. All rights reserved.
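The abstract does not specify the paper's exact architecture or features, but the hybrid idea it describes can be sketched as follows: utterance-level statistical functionals (a long-term, global view) concatenated with the final hidden state of an LSTM run over the frame-level acoustic features (a temporal view). The frame dimensionality, hidden size, and the mean/std functionals below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_last_hidden(frames, W, U, b, hidden):
    """Run a single-layer LSTM over frames of shape (T, d); return h_T."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in frames:
        z = W @ x + U @ h + b              # all four gates at once, (4*hidden,)
        i, f, o, g = np.split(z, 4)        # input, forget, output gates; candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)         # update cell state
        h = o * np.tanh(c)                 # update hidden state
    return h

T, d, hidden = 50, 13, 8                   # e.g. 13 MFCCs per frame (assumed)
frames = rng.standard_normal((T, d))       # stand-in for real acoustic frames

# Long-term view: statistical functionals over the whole utterance.
stats = np.concatenate([frames.mean(axis=0), frames.std(axis=0)])   # (26,)

# Short-term/temporal view: LSTM summary of the frame sequence
# (random untrained weights, for shape illustration only).
W = 0.1 * rng.standard_normal((4 * hidden, d))
U = 0.1 * rng.standard_normal((4 * hidden, hidden))
b = np.zeros(4 * hidden)
h_T = lstm_last_hidden(frames, W, U, b, hidden)                     # (8,)

# Hybrid representation that would be fed to an emotion classifier.
hybrid = np.concatenate([stats, h_T])
print(hybrid.shape)  # (34,)
```

In practice the LSTM weights would be learned jointly with the classifier, and the statistical feature set would be a standard paralinguistic set rather than plain mean and standard deviation; the sketch only shows how the two views are combined into one vector.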
Keywords
Affective computing, Artificial neural networks, Computational paralinguistics, Context modelling, Feature representation, Long short-term memory, Speech emotion recognition
Source
SPIIRAS Proceedings
WoS Q Value
Scopus Q Value
N/A
Volume
18
Issue
1