Feature Selection and Multimodal Fusion for Estimating Emotions Evoked by Movie Clips
dc.authorid | 0000-0001-6342-428X | |
dc.authorid | 0000-0001-7947-5508 | |
dc.authorwosid | Salah, Albert Ali/ABH-5561-2020 | |
dc.contributor.author | Timar, Yasemin | |
dc.contributor.author | Karslıoğlu, Nihan | |
dc.contributor.author | Kaya, Heysem | |
dc.contributor.author | Salah, Albert Ali | |
dc.date.accessioned | 2022-05-11T14:15:53Z | |
dc.date.available | 2022-05-11T14:15:53Z | |
dc.date.issued | 2018 | |
dc.department | Faculties, Çorlu Faculty of Engineering, Department of Computer Engineering | |
dc.description | 8th ACM International Conference on Multimedia Retrieval (ACM ICMR) -- JUN 11-14, 2018 -- Yokohama, JAPAN | |
dc.description.abstract | Perceptual understanding of media content has many applications, including content-based retrieval, marketing, content optimization, psychological assessment, and affect-based learning. In this paper, we model audio-visual features extracted from videos via machine learning approaches to estimate the affective responses of the viewers. We use the LIRIS-ACCEDE dataset and the MediaEval 2017 Challenge setting to evaluate the proposed methods. This dataset is composed of movies of professional or amateur origin, annotated with viewers' arousal, valence, and fear scores. We extract a number of audio features, such as Mel-frequency Cepstral Coefficients, and visual features, such as dense SIFT, hue-saturation histograms, and features from a deep neural network trained for object recognition. We contrast two different approaches in the paper, and report experiments with different fusion and smoothing strategies. We demonstrate the benefit of feature selection and multimodal fusion for estimating affective responses to movie segments. | |
dc.description.sponsorship | Association for Computing Machinery, ACM SIGMM | |
dc.identifier.doi | 10.1145/3206025.3206074 | |
dc.identifier.endpage | 412 | |
dc.identifier.isbn | 978-1-4503-5046-4 | |
dc.identifier.startpage | 405 | |
dc.identifier.uri | https://doi.org/10.1145/3206025.3206074 | |
dc.identifier.uri | https://hdl.handle.net/20.500.11776/6108 | |
dc.identifier.wos | WOS:000461145900055 | |
dc.identifier.wosquality | N/A | |
dc.indekslendigikaynak | Web of Science | |
dc.institutionauthor | Kaya, Heysem | |
dc.language.iso | en | |
dc.publisher | Association for Computing Machinery | |
dc.relation.ispartof | ICMR '18: Proceedings of the 2018 ACM International Conference on Multimedia Retrieval | |
dc.relation.publicationcategory | Conference Item - International - Institutional Faculty Member | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | |
dc.subject | Affective computing | |
dc.subject | multimodal interaction | |
dc.subject | emotion estimation | |
dc.subject | audio-visual features | |
dc.subject | movie analysis | |
dc.subject | face analysis | |
dc.subject | Extreme Learning Machine | |
dc.title | Feature Selection and Multimodal Fusion for Estimating Emotions Evoked by Movie Clips | |
dc.type | Conference Object |
Files
Original bundle
- Name: 6108.pdf
- Size: 3.26 MB
- Format: Adobe Portable Document Format
- Description: Full Text