
Performance Evaluation of Naive Bayes Classifier with and without Filter Based Feature Selection
D. Prabha1, R. Siva Subramanian2, S. Balakrishnan3, M. Karpagam4

1. Dr. D. Prabha, Associate Professor, Department of Computer Science and Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, India
2. Mr. R. Siva Subramanian, Research Scholar, Anna University, Chennai
3. Dr. S. Balakrishnan, Professor, Department of Computer Science and Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, India
4. Dr. M. Karpagam, Professor, Department of Electronics and Communication Engineering, Sri Krishna College of Engineering and Technology, Coimbatore, India

Manuscript received on 02 July 2019 | Revised Manuscript received on 09 July 2019 | Manuscript published on 30 August 2019 | PP: 1433-1436 | Volume-8 Issue-10, August 2019 | Retrieval Number: J90740881019/2019©BEIESP | DOI: 10.35940/ijitee.J9376.0881019
© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Customer Relationship Management (CRM) tends to analyze datasets to find insights about the data, which in turn helps to frame business strategies for the improvement of enterprises. Analyzing data in CRM requires computationally intensive models. Machine Learning (ML) algorithms help in analyzing such high-dimensional datasets. In most real-time datasets, the strong independence assumption of Naive Bayes (NB) between the attributes is violated, and together with other drawbacks in datasets, such as irrelevant data, partially irrelevant data and redundant data, this leads to poor prediction performance. Feature selection is a preprocessing method applied to enhance the prediction of the NB model. Further, empirical experiments are conducted based on NB with feature selection and NB without feature selection. In this paper, an empirical study of attribute selection is conducted with five dissimilar filter-based feature selection methods: Relief-F, Pearson Correlation (PCC), Symmetrical Uncertainty (SU), Gain Ratio (GR) and Information Gain (IG).
Keywords: CRM, Prediction, Machine Learning, Naïve Bayes, feature selection
Scope of the Article: Machine Learning
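
A minimal sketch of the comparison described in the abstract is given below, assuming scikit-learn, its built-in breast-cancer dataset, mutual information as a stand-in for the Information Gain filter, and k = 10 selected attributes; the paper's own CRM datasets, filter implementations and evaluation protocol are not reproduced here.

# Hypothetical sketch: Naive Bayes with and without filter-based feature selection.
# The dataset, filter choice and k are illustrative assumptions, not the paper's setup.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

X, y = load_breast_cancer(return_X_y=True)

# NB without feature selection: train on all attributes.
nb_all = GaussianNB()
acc_all = cross_val_score(nb_all, X, y, cv=10, scoring="accuracy").mean()

# NB with filter-based feature selection: rank attributes with a filter score,
# keep the top k, then train the same NB classifier on the reduced attribute set.
nb_filtered = make_pipeline(
    SelectKBest(score_func=mutual_info_classif, k=10),
    GaussianNB(),
)
acc_filtered = cross_val_score(nb_filtered, X, y, cv=10, scoring="accuracy").mean()

print(f"NB without feature selection: {acc_all:.3f}")
print(f"NB with filter-based feature selection: {acc_filtered:.3f}")

Swapping score_func (e.g. for a chi-square or correlation-based score) gives the analogous comparison for the other filter criteria mentioned in the abstract.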