
Speaker Dependent Emotion Recognition from Speech
Biswajit Nayak1, Mitali Madhusmita2, Debendra Kumar Sahu3, Rajendra Kumar Behera4, Kamalakanta Shaw5

1Biswajit Nayak, Department of Computer Science and Engineering, Bhubaneswar Engineering College, Bhubaneswar (Odisha), India.
2Mitali Madhusmita, Department of Computer Science and Engineering, The Techno School, Bhubaneswar (Odisha), India.
3Debendra Kumar Sahu, Department of Computer Science and Engineering, Eastern Academy of Science and Technology, Phulnakhara, Bhubaneswar (Odisha), India.
4Rajendra Kumar Behera, Department of Computer Science and Engineering, Eastern Academy of Science and Technology, Phulnakhara, Bhubaneswar (Odisha), India.
5Kamalakanta Shaw, Department of Computer Science and Engineering, Eastern Academy of Science and Technology, Phulnakhara, Bhubaneswar (Odisha), India.
Manuscript received on 10 November 2013 | Revised Manuscript received on 18 November 2013 | Manuscript Published on 30 November 2013 | PP: 40-42 | Volume-3 Issue-6, November 2013 | Retrieval Number: F1338113613/13©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: The speech signal is the fastest and most natural method of communication between humans; hence, speech can serve as a fast and efficient means of interaction between humans and machines. Speech is an attractive and effective medium because, among its several features, it allows attitude and emotion to be expressed. Within human-machine interaction, automatic speech emotion recognition remains a challenging yet important task that has received close attention in current research. In this paper we analyse emotion recognition performance on eight different speakers. The IITKGP-SEHSC emotional speech database is used for emotion recognition. The emotions considered in this study are anger, fear, happy, neutral, sarcastic, and surprise. Classification is carried out using a Gaussian Mixture Model (GMM), with Mel-Frequency Cepstral Coefficient (MFCC) features used to identify the emotions. The observed accuracy is 75.00% for a GMM with 32 centers, 72.00% with 16 centers, and 66.67% with 8 centers.
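To make the MFCC + GMM pipeline described above concrete, the following is a minimal illustrative sketch, not the authors' implementation: one GMM is trained per emotion class on MFCC frames, and an utterance is assigned to the emotion whose GMM yields the highest log-likelihood. The libraries (librosa, scikit-learn), parameter values, and training-data layout are assumptions for illustration only.

```python
# Sketch of GMM-based emotion recognition from MFCC features (illustrative only).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

N_MFCC = 13          # number of MFCC coefficients per frame (assumed)
N_COMPONENTS = 32    # GMM centers, e.g. 8, 16, or 32 as compared in the study

def extract_mfcc(wav_path):
    """Return an (n_frames, N_MFCC) matrix of MFCC features for one utterance."""
    signal, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=N_MFCC)
    return mfcc.T  # frames as rows

def train_emotion_models(train_files):
    """train_files: dict mapping emotion label -> list of wav paths (assumed layout)."""
    models = {}
    for emotion, paths in train_files.items():
        frames = np.vstack([extract_mfcc(p) for p in paths])
        gmm = GaussianMixture(n_components=N_COMPONENTS, covariance_type="diag",
                              max_iter=200, random_state=0)
        gmm.fit(frames)
        models[emotion] = gmm
    return models

def classify(wav_path, models):
    """Assign the emotion whose GMM gives the highest mean frame log-likelihood."""
    frames = extract_mfcc(wav_path)
    scores = {emotion: gmm.score(frames) for emotion, gmm in models.items()}
    return max(scores, key=scores.get)
```

In such a setup the number of mixture components trades off model capacity against the amount of training speech per emotion, which is consistent with the accuracy differences reported for 8, 16, and 32 centers.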
Keywords: Emotion Recognition, Gaussian Mixture Model (GMM), Mel-Frequency Cepstral Coefficient (MFCC), IITKGP-SEHSC (Indian Institute of Technology Kharagpur Simulated Hindi Emotional Speech Corpus).

Scope of the Article: Pattern Recognition