
Simple item record

dc.contributor.author: Kaya, Heysem
dc.contributor.author: Fedotov, Dmitrii
dc.contributor.author: Yesilkanat, Ali
dc.contributor.author: Verkholyak, Oxana Vladimirovna
dc.contributor.author: Zhang, Yang
dc.contributor.author: Karpov, Alexey A.
dc.date.accessioned: 2022-05-11T14:15:53Z
dc.date.available: 2022-05-11T14:15:53Z
dc.date.issued: 2018
dc.identifier.isbn: 978-1-5108-7221-9
dc.identifier.issn: 2308-457X
dc.identifier.uri: https://hdl.handle.net/20.500.11776/6109
dc.description: 19th Annual Conference of the International Speech Communication Association (INTERSPEECH 2018) -- AUG 02-SEP 06, 2018 -- Hyderabad, India
dc.description.abstract: Acoustic emotion recognition is a popular and central research direction in paralinguistic analysis, due to its relation to a wide range of affective states/traits and manifold applications. Developing highly generalizable models remains a challenge for researchers and engineers because of a multitude of nuisance factors. To ensure generalization, deployed models need to handle spontaneous speech recorded under acoustic conditions different from those of the training set. This requires that the models be tested for cross-corpus robustness. In this work, we first investigate the suitability of Long Short-Term Memory (LSTM) models trained with time- and space-continuously annotated affective primitives for cross-corpus acoustic emotion recognition. We then employ an effective approach that uses the frame-level valence and arousal predictions of LSTM models for utterance-level affect classification, and apply this approach to the ComParE 2018 challenge corpora. The proposed method alone gives motivating results on both the development and test sets of the Self-Assessed Affect Sub-Challenge. On the development set, the cross-corpus prediction based method boosts performance when fused with the top components of the baseline system. The results indicate the suitability of the proposed method for both time-continuous and utterance-level cross-corpus acoustic emotion recognition tasks.
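The core idea summarized in the abstract, collapsing frame-level valence/arousal predictions from an LSTM into a fixed-length representation for utterance-level classification, can be sketched roughly as follows. The functionals used (mean, standard deviation, min, max) and all names below are illustrative assumptions, not the paper's exact pipeline:

```python
from statistics import mean, pstdev

def utterance_features(frame_preds):
    """Collapse frame-level (valence, arousal) predictions into a fixed-length
    utterance-level feature vector via simple summary functionals.
    frame_preds: list of (valence, arousal) pairs, one per frame.
    Returns 8 features: [mean, std, min, max] for each affective dimension."""
    feats = []
    for dim in range(2):  # 0 = valence, 1 = arousal
        track = [p[dim] for p in frame_preds]
        feats += [mean(track), pstdev(track), min(track), max(track)]
    return feats

# Hypothetical frame-level LSTM outputs for one 5-frame utterance.
frames = [(0.1, 0.4), (0.2, 0.5), (0.15, 0.45), (0.3, 0.6), (0.25, 0.55)]
print(utterance_features(frames))
```

A downstream classifier (for instance, an SVM or logistic regression over these utterance vectors) would then produce the utterance-level affect label; the choice of classifier is not specified by the record itself.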
dc.description.sponsorship: Int Speech Commun Assoc
dc.description.sponsorship: Russian Science Foundation (RSF) [18-11-00145]; Huawei Innovation Research Program, Huawei Technologies [HO2017050001BM]
dc.description.sponsorship: The participation in the ComParE 2018 challenge with experiments on the USoMS corpus (Section 4) was supported exclusively by the Russian Science Foundation (Project No. 18-11-00145). The rest of the research was supported by the Huawei Innovation Research Program (Agreement No. HO2017050001BM).
dc.language.iso: eng
dc.publisher: Isca-Int Speech Communication Assoc
dc.rights: info:eu-repo/semantics/closedAccess
dc.subject: speech emotion recognition
dc.subject: cross-corpus emotion recognition
dc.subject: context modeling
dc.subject: LSTM
dc.subject: computational paralinguistics
dc.title: LSTM based Cross-corpus and Cross-task Acoustic Emotion Recognition
dc.type: proceedingPaper
dc.relation.ispartof: 19th Annual Conference of the International Speech Communication Association (Interspeech 2018), Vols 1-6: Speech Research for Emerging Markets in Multilingual Societies
dc.department: Faculties, Çorlu Faculty of Engineering, Department of Computer Engineering
dc.authorid: 0000-0002-5583-0410
dc.authorid: 0000-0003-3424-652X
dc.authorid: 0000-0003-4039-1221
dc.authorid: 0000-0001-7947-5508
dc.identifier.startpage: 521
dc.identifier.endpage: 525
dc.institutionauthor: Kaya, Heysem
dc.relation.publicationcategory: Conference Item - International - Institutional Academic Staff
dc.authorscopusid: 36241785000
dc.authorscopusid: 57195680712
dc.authorscopusid: 57204209887
dc.authorscopusid: 57199057349
dc.authorscopusid: 57204218917
dc.authorscopusid: 57219469958
dc.authorwosid: Fedotov, Dmitrii/AAE-1738-2019
dc.authorwosid: Верхоляк, Оксана/L-5818-2016
dc.authorwosid: Karpov, Alexey A/A-8905-2012
dc.identifier.wos: WOS:000465363900108
dc.identifier.scopus: 2-s2.0-85053749426

