Simple item record

dc.contributor.author: Kaya, Heysem
dc.contributor.author: Fedotov, D.
dc.contributor.author: Dresvyanskiy, D.
dc.contributor.author: Doyran, M.
dc.contributor.author: Mamontov, D.
dc.contributor.author: Markitantov, M.
dc.contributor.author: Salah, Albert Ali
dc.date.accessioned: 2022-05-11T14:15:56Z
dc.date.available: 2022-05-11T14:15:56Z
dc.date.issued: 2019
dc.identifier.isbn: 9781450369138
dc.identifier.uri: https://doi.org/10.1145/3347320.3357691
dc.identifier.uri: https://hdl.handle.net/20.500.11776/6125
dc.description: ACM SIGMM
dc.description: 9th International Audio/Visual Emotion Challenge and Workshop, AVEC 2019, held in conjunction with ACM Multimedia 2019 -- 21 October 2019 -- -- 153196
dc.description.abstract: Cross-language, cross-cultural emotion recognition and accurate prediction of affective disorders are two of the major challenges in affective computing today. In this work, we compare several systems for the Detecting Depression with AI Sub-challenge (DDS) and the Cross-cultural Emotion Sub-challenge (CES), published as part of the Audio/Visual Emotion Challenge (AVEC) 2019. For both sub-challenges, we benefit from the baselines, while introducing our own features and regression models. For the DDS challenge, where ASR transcripts are provided by the organizers, we propose simple linguistic and word-duration features. These ASR transcript-based features are shown to outperform the state-of-the-art audio-visual features for this task, reaching a test set Concordance Correlation Coefficient (CCC) performance of 0.344 in comparison to a challenge baseline of 0.120. Our results show that non-verbal parts of the signal are important for detection of depression, and combining this with linguistic information produces the best results. For CES, the proposed systems using unsupervised feature adaptation outperform the challenge baselines on emotional primitives, reaching test set CCC performances of 0.466 and 0.499 for arousal and valence, respectively. © 2019 Association for Computing Machinery.
dc.description.sponsorship: Russian Science Foundation, RSF: 18-11-00145
dc.description.sponsorship: This study was partially conducted within the framework of the Russian Science Foundation project No. 18-11-00145.
dc.language.iso: eng
dc.publisher: Association for Computing Machinery, Inc
dc.identifier.doi: 10.1145/3347320.3357691
dc.rights: info:eu-repo/semantics/closedAccess
dc.subject: Affective Computing
dc.subject: Cross-Cultural Emotion Recognition
dc.subject: Depression Severity Prediction
dc.subject: PTSD
dc.subject: Audio systems
dc.subject: Forecasting
dc.subject: Regression analysis
dc.subject: Speech recognition
dc.subject: Accurate prediction
dc.subject: Audio-visual features
dc.subject: Correlation coefficient
dc.subject: Emotion recognition
dc.subject: Feature adaptation
dc.subject: Linguistic information
dc.subject: Linguistics
dc.title: Predicting depression and emotions in the cross-roads of cultures, para-linguistics, and non-linguistics
dc.type: conferencePaper
dc.relation.ispartof: AVEC 2019 - Proceedings of the 9th International Audio/Visual Emotion Challenge and Workshop, co-located with MM 2019
dc.department: Faculties, Çorlu Faculty of Engineering, Department of Computer Engineering
dc.identifier.startpage: 27
dc.identifier.endpage: 35
dc.institutionauthor: Kaya, Heysem
dc.relation.publicationcategory: Conference Item - International - Institutional Academic Staff
dc.authorscopusid: 36241785000
dc.authorscopusid: 57195680712
dc.authorscopusid: 57209850146
dc.authorscopusid: 57195218979
dc.authorscopusid: 57209202409
dc.authorscopusid: 57210791723
dc.authorscopusid: 55412025900
dc.identifier.scopus: 2-s2.0-85074945276
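
The abstract above reports all results as Concordance Correlation Coefficient (CCC) scores (e.g., 0.344 on DDS; 0.466 and 0.499 for CES arousal and valence). For readers unfamiliar with the metric, the sketch below implements the standard CCC definition (Lin's concordance coefficient); it is an illustrative Python/NumPy implementation, not the authors' or the challenge's evaluation code, and the function name is our own.

    import numpy as np

    def concordance_cc(y_true, y_pred):
        """Concordance Correlation Coefficient (Lin, 1989).

        Illustrative sketch of the metric cited in the abstract,
        not the AVEC 2019 evaluation code.
        """
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        mean_t, mean_p = y_true.mean(), y_pred.mean()
        # Population (biased) variances, matching the standard CCC formula.
        var_t, var_p = y_true.var(), y_pred.var()
        cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
        # CCC = 2*cov / (var_t + var_p + (mean_t - mean_p)^2)
        return 2.0 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

    # Example: perfect agreement yields CCC = 1.0
    print(concordance_cc([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))

Unlike plain Pearson correlation, the squared mean difference in the denominator penalizes systematic bias between predictions and gold labels, which is why CCC is the preferred metric for continuous affect prediction tasks such as those in AVEC.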

