Optimal Feature Selection by Distribution Diversity for Sentiment Analysis
A.V.R.Mayuri1, K.Asha Rani2
1A.V.R.Mayuri, Associate Professor, Department of Computer Science and Engineering, G Pulla Reddy Engineering College, Kurnool, Andhra Pradesh, India.
2K. Asha Rani, Associate Professor, Department of Computer Science and Engineering, G Pulla Reddy Engineering College, Kurnool, Andhra Pradesh, India.
Manuscript received on 05 March 2019 | Revised Manuscript received on 12 March 2019 | Manuscript Published on 20 March 2019 | PP: 345-349 | Volume-8 Issue-4S2 March 2019 | Retrieval Number: D1S0076028419/2019©BEIESP
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open-access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Abstract: This article compares and discusses feature selection methods and opinion classification methods. Feature selection is performed using the z-score and t-score statistical measures, and the SVM classifier is compared against AdaBoost and Naive Bayes (NB). The main aim of this article is to assess how well these statistical measures detect optimal features, and the importance of those features for opinion classification using diverse classifiers. Performance analysis is conducted on datasets of varied scope, including product reviews, movie reviews, and tweets, using the Wilcoxon Signed Rank based z-score and t-score. The simulation results show that, among the three classifiers tested for classification accuracy, AdaBoost outperformed both NB and SVM.
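The pipeline described in the abstract, statistical feature ranking followed by a three-way classifier comparison, can be sketched as below. This is not the authors' code: the dataset is synthetic, and scikit-learn's ANOVA F-score (`f_classif`) stands in for the paper's z-score/t-score ranking, which is an assumption made purely for illustration.

```python
# Illustrative sketch: score-based feature selection, then compare
# SVM, Naive Bayes, and AdaBoost on classification accuracy.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a vectorized review/tweet dataset
X, y = make_classification(n_samples=500, n_features=200,
                           n_informative=20, random_state=0)

# Keep the k features with the highest class-separation score
# (f_classif used here in place of the paper's z-/t-score measure)
selector = SelectKBest(f_classif, k=40)
X_sel = selector.fit_transform(X, y)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)

results = {}
for name, clf in [("SVM", SVC()),
                  ("NB", GaussianNB()),
                  ("AdaBoost", AdaBoostClassifier())]:
    results[name] = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
    print(f"{name}: {results[name]:.3f}")
```

On real review data, the vectorization step (e.g. bag-of-words) and the choice of k would of course matter; this sketch only shows the select-then-compare structure.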
Keywords: AdaBoost, SVM, NB, Optimal Feature, Decision Trees, K-Nearest Neighbors.
Scope of the Article: Computer Science and Its Applications