
Ensemble Learning Models for Churn Prediction
Debjyoti Das Adhikary1, Deepak Gupta2

1Debjyoti Das Adhikary, M.Tech Scholar, Department of CSE, NIT, Yupia (Arunachal Pradesh), India. 

2Dr. Deepak Gupta, Assistant Professor, Department of Computer Science & Engineering, National Institute of Technology, (Arunachal Pradesh), India.

Manuscript received on 03 December 2019 | Revised Manuscript received on 11 December 2019 | Manuscript Published on 31 December 2019 | PP: 159-164 | Volume-9 Issue-2S December 2019 | Retrieval Number: B10911292S19/2019©BEIESP | DOI: 10.35940/ijitee.B1091.1292S19

© The Authors. Blue Eyes Intelligence Engineering and Sciences Publication (BEIESP). This is an open-access article under the CC-BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)

Abstract: Customer churn prediction has long been a major problem in the telecom industry. Customer retention is one of the primary objectives of any service-providing company, as retaining loyal customers is cheaper than acquiring new ones. In this paper, we predict the churn rate on a dataset from a telecom company using a set of classifiers, and then train the same classifiers within ensemble learning models, which are expected to yield better results. We have used 42 classifiers from different families, such as Nearest Neighbors, Decision Tables, and Random Forests, which together cover almost all of the well-known classifiers used in industry today. Further, our work applies the ensemble techniques bagging and boosting, trained on the same classifiers, so that we can compare the performance of each classifier on its own and when used as a base classifier. For each classifier we report the accuracy, True Positive and False Positive rates, F-measure, MCC score, Area Under the ROC Curve (AUC), and Precision-Recall Curve (PRC) area. These measures not only identify which algorithm is more fruitful but also give insight into the varying performance. It is observed that, in most cases, the classifiers yield better results when combined with either of the ensemble techniques. The experimental results reveal that the accuracy of a classifier improves when combined with bagging or boosting.

Keywords: Churn Prediction, Bagging, Boosting, Machine Learning.
Scope of the Article: Regression and Prediction
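
The kind of comparison the abstract describes, training a single base classifier and then the same classifier inside bagging and boosting ensembles, and scoring each on accuracy, MCC, and AUC, can be sketched as follows. This is a minimal illustration using scikit-learn on a synthetic stand-in for a churn dataset; the dataset, base learner, and hyperparameters are assumptions for demonstration, not the paper's actual experimental setup.

```python
# Sketch: single decision tree vs. bagging and boosting ensembles,
# scored with accuracy, MCC, and ROC AUC (as in churn-prediction studies).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.metrics import accuracy_score, matthews_corrcoef, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic, class-imbalanced binary dataset standing in for telecom churn data.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    # Base classifier on its own.
    "tree": DecisionTreeClassifier(random_state=0),
    # Same classifier used as the base learner inside a bagging ensemble.
    "bagging": BaggingClassifier(DecisionTreeClassifier(random_state=0),
                                 n_estimators=50, random_state=0),
    # Boosting ensemble (AdaBoost with its default tree-stump base learner).
    "boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
}

scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    scores[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "mcc": matthews_corrcoef(y_te, pred),
        "auc": roc_auc_score(y_te, proba),
    }

for name, s in scores.items():
    print(f"{name}: acc={s['accuracy']:.3f} mcc={s['mcc']:.3f} auc={s['auc']:.3f}")
```

On this synthetic data the ensembles typically outperform the lone tree, mirroring the paper's observation; a full replication would repeat this loop over all 42 classifiers and add the TP/FP rates, F-measure, and PRC area.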