Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification

dc.authorscopusid: 57199057349
dc.authorscopusid: 36241785000
dc.authorscopusid: 57219469958
dc.contributor.author: Verkholyak, Oxana Vladimirovna
dc.contributor.author: Kaya, Heysem
dc.contributor.author: Karpov, Alexey A.
dc.date.accessioned: 2022-05-11T14:15:55Z
dc.date.available: 2022-05-11T14:15:55Z
dc.date.issued: 2019
dc.department: Fakülteler, Çorlu Mühendislik Fakültesi, Bilgisayar Mühendisliği Bölümü
dc.description.abstract: Recently, Speech Emotion Recognition (SER) has become an important research topic in affective computing. It is a difficult problem, and some of its greatest challenges lie in feature selection and representation. A good feature representation should reflect the global trends as well as the temporal structure of the signal, since emotions naturally evolve in time; modeling this temporal evolution has become possible with the advent of Recurrent Neural Networks (RNN), which are widely used today for various sequence modeling tasks. This paper proposes a hybrid approach to feature representation that combines traditionally engineered statistical features with a Long Short-Term Memory (LSTM) sequence representation in order to take advantage of both the short-term and the long-term acoustic characteristics of the signal, thereby capturing not only the general trends but also the temporal structure of the signal. The proposed method is evaluated on three publicly available acted emotional speech corpora in three different languages, namely RUSLANA (Russian speech), BUEMODB (Turkish speech) and EMODB (German speech). Compared to the traditional approach, the results of our experiments show an absolute improvement of 2.3% and 2.8% for two out of three databases, and comparable performance on the third. Therefore, given enough training data, the proposed method proves effective in modeling the emotional content of speech utterances. © 2019 St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences. All rights reserved.
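The abstract describes fusing utterance-level statistical functionals (global trends) with an LSTM-derived sequence embedding (temporal structure). The sketch below illustrates that general idea only; it is not the authors' implementation. The toy NumPy LSTM, the choice of mean and standard deviation as functionals, and all dimensions are assumptions made for illustration.

```python
import numpy as np

def statistical_functionals(frames):
    """Global trends: mean and std of frame-level features over time.

    frames: array of shape (T, d), one row per acoustic frame (e.g. MFCCs).
    Returns a fixed-length vector of shape (2*d,).
    """
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def lstm_embedding(frames, params):
    """Temporal structure: last hidden state of a minimal LSTM forward pass.

    params = (Wx, Wh, b) with shapes (d, 4h), (h, 4h), (4h,);
    the four gate blocks are stacked as [input, forget, output, candidate].
    """
    Wx, Wh, b = params
    h_dim = Wh.shape[0]
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in frames:
        z = x @ Wx + h @ Wh + b
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # update cell state
        h = sigmoid(o) * np.tanh(c)                   # update hidden state
    return h

def hybrid_representation(frames, lstm_params):
    """Concatenate both views into one utterance-level feature vector."""
    return np.concatenate([statistical_functionals(frames),
                           lstm_embedding(frames, lstm_params)])
```

The resulting vector (length 2*d + h) could then be fed to any utterance-level classifier; in practice the LSTM weights would be learned, not random.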
dc.description.sponsorship: Russian Science Foundation, RSF: 18-11-00145
dc.description.sponsorship: This research is supported by the Russian Science Foundation (project No. 18-11-00145).
dc.identifier.doi: 10.15622/sp.18.1.30-56
dc.identifier.endpage: 56
dc.identifier.issn: 2078-9181
dc.identifier.issue: 1
dc.identifier.scopus: 2-s2.0-85063340623
dc.identifier.scopusquality: N/A
dc.identifier.startpage: 30
dc.identifier.uri: https://doi.org/10.15622/sp.18.1.30-56
dc.identifier.uri: https://hdl.handle.net/20.500.11776/6120
dc.identifier.volume: 18
dc.indekslendigikaynak: Scopus
dc.institutionauthor: Kaya, Heysem
dc.language.iso: en
dc.publisher: St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences
dc.relation.ispartof: SPIIRAS Proceedings
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member
dc.rights: info:eu-repo/semantics/openAccess
dc.subject: Affective computing
dc.subject: Artificial neural networks
dc.subject: Computational paralinguistics
dc.subject: Context modelling
dc.subject: Feature representation
dc.subject: Long short-term memory
dc.subject: Speech emotion recognition
dc.title: Modeling short-term and long-term dependencies of the speech signal for paralinguistic emotion classification
dc.type: Article

Files

Original bundle
Name: 6120.pdf
Size: 1.38 MB
Format: Adobe Portable Document Format
Description: Full Text