AD ALTA
JOURNAL OF INTERDISCIPLINARY RESEARCH
data. Knowledge and Information Systems, 34(3), pp.483-519.
Available at: http://link.springer.com/10.1007/s10115-012-0487-8.
5. Bradley, A.P., 1997. The use of the area under the ROC curve
in the evaluation of machine learning algorithms. Pattern
Recognition, 30(7), pp.1145-1159. Available at:
https://linkinghub.elsevier.com/retrieve/pii/S0031320396001422.
6. Breiman, L., 2001. Random Forests. Machine Learning, 45(1),
pp.5-32. Available at:
http://link.springer.com/10.1023/A:1010933404324.
7. Dash, M. & Liu, H., 2003. Consistency-based search in
feature selection. Artificial Intelligence, 151(1-2), pp.155-176.
Available at:
https://linkinghub.elsevier.com/retrieve/pii/S0004370203000791.
8. Duda, R.O., Hart, P.E. & Stork, D.G., 2012. Pattern
Classification, Wiley. Available at:
https://books.google.cz/books?id=Br33IRC3PkQC.
9. Fan, R.-E. et al., 2008. LIBLINEAR: A Library for Large
Linear Classification. Journal of Machine Learning Research, 9,
pp.1871-1874.
10. Gilhan, K. et al., 2010. An MLP-based feature subset
selection for HIV-1 protease cleavage site analysis. Artificial
Intelligence in Medicine, 48(2-3), pp.83-89. Available at:
https://linkinghub.elsevier.com/retrieve/pii/S0933365709001031.
11. Gronwald, K.D., 2017. Integrated Business Information
Systems: A Holistic View of the Linked Business Process Chain
ERP-SCM-CRM-BI-Big Data, Springer Berlin Heidelberg.
Available at: https://books.google.cz/books?id=mSYmDwAAQBAJ.
12. Gupta, S., Lehmann, D.R. & Stuart, J.A., 2004. Valuing
Customers. SSRN Electronic Journal. Available at:
http://www.ssrn.com/abstract=459595.
13. Guyon, I. et al., 2002. Gene selection for cancer
classification using support vector machines. Machine Learning,
46(1/3), pp.389-422. Available at:
http://link.springer.com/10.1023/A:1012487302797.
14. Hall, M.A., 1999. Correlation-based feature selection for
machine learning. Ph.D. thesis. Hamilton, New Zealand.
15. Hand, D.J., 2009. Measuring classifier performance: a
coherent alternative to the area under the ROC curve. Machine
Learning, 77(1), pp.103-123. Available at:
http://link.springer.com/10.1007/s10994-009-5119-5.
16. Hothorn, T., Hornik, K. & Zeileis, A., 2006. Unbiased
Recursive Partitioning: A Conditional Inference Framework.
Journal of Computational and Graphical Statistics, 15(3),
pp.651-674. Available at:
http://www.tandfonline.com/doi/abs/10.1198/106186006X133933.
17. Hothorn, T. & Zeileis, A., 2015. partykit: A Modular Toolkit
for Recursive Partitioning in R. Journal of Machine Learning
Research, 16, pp.3905-3909. Available at:
http://jmlr.org/papers/volume16/hothorn15a/hothorn15a.pdf.
18. Chu, C. et al., 2012. Does feature selection improve
classification accuracy? Impact of sample size and feature
selection on classification using anatomical magnetic resonance
images. NeuroImage, 60(1), pp.59-70. Available at:
https://linkinghub.elsevier.com/retrieve/pii/S1053811911013486.
19. Jin, C. & Wang, L., 2012. Dimensionality dependent
PAC-Bayes margin bound. In Advances in Neural Information
Processing Systems 25. Lake Tahoe, Nevada, USA: Curran
Associates, pp. 1034-1042.
20. Karatzoglou, A. et al., 2004. Kernlab - An S4 Package for
Kernel Methods in R. Journal of Statistical Software, 11(9).
Available at: http://www.jstatsoft.org/v11/i09/.
21. Kononenko, I., 1994. Estimating attributes: Analysis and
extensions of RELIEF. Machine Learning: ECML-94, pp.171-182.
Available at: http://link.springer.com/10.1007/3-540-57868-4_57.
22. Kuhn, M., 2008. Building Predictive Models in R Using the
caret Package. Journal of Statistical Software, 28(5). Available
at: http://www.jstatsoft.org/v28/i05/.
23. Mehreen, A. et al., 2017. MCS: Multiple classifier system to
predict the churners in the telecom industry. In 2017 Intelligent
Systems Conference (IntelliSys). London, United Kingdom:
IEEE, pp. 678-683. Available at:
http://ieeexplore.ieee.org/document/8324367/.
24. Powers, D.M.W., 2011. Evaluation: From precision, recall
and F-measure to ROC, informedness, markedness and
correlation. International Journal of Machine Learning
Technology, 2(1), pp.37-63.
25. Shakil Pervez, M. & Md. Farid, D., 2015. Literature Review of
Feature Selection for Mining Tasks. International Journal of
Computer Applications, 116(21), pp.30-33. Available at:
http://research.ijcaonline.org/volume116/number21/pxc3902829.pdf.
26. Spanoudes, P. & Nguyen, T., 2017. Deep Learning in
Customer Churn Prediction: Unsupervised Feature Learning on
Abstract Company Independent Feature Vectors.
ArXiv:1703.03869 [cs, stat]. Available at: http://arxiv.org/
abs/1703.03869.
27. Subramanya, K.B. & Somani, A., 2017. Enhanced feature
mining and classifier models to predict customer churn for an E-
retailer. In 2017 7th International Conference on Cloud
Computing, Data Science & Engineering - Confluence. Noida,
India: IEEE, pp. 531-536. Available at:
http://ieeexplore.ieee.org/document/7943208/.
28. Torkzadeh, G., Chang, J.C.-J. & Hansen, G.W., 2006.
Identifying issues in customer relationship management at
Merck-Medco. Decision Support Systems, 42(2), pp.1116-1130.
Available at:
https://linkinghub.elsevier.com/retrieve/pii/S0167923605001491.
29. Vafeiadis, T. et al., 2015. A comparison of machine learning
techniques for customer churn prediction. Simulation Modelling
Practice and Theory, 55, pp.1-9. Available at:
https://linkinghub.elsevier.com/retrieve/pii/S1569190X15000386.
30. Verbeke, W. et al., 2012. New insights into churn prediction
in the telecommunication sector: A profit driven data mining
approach. European Journal of Operational Research, 218(1),
pp.211-229. Available at:
https://linkinghub.elsevier.com/retrieve/pii/S0377221711008599.
31. Vijaya, J. & Sivasankar, E., 2018. Computing efficient
features using rough set theory combined with ensemble
classification techniques to improve the customer churn
prediction in telecommunication sector. Computing, 100(8),
pp.839-860. Available at:
http://link.springer.com/10.1007/s00607-018-0633-6.
32. Wright, M.N. & Ziegler, A., 2017. ranger: A Fast
Implementation of Random Forests for High Dimensional Data
in C++ and R. Journal of Statistical Software, 77(1). Available
at: http://www.jstatsoft.org/v77/i01/.
33. Xiao, J. et al., 2015. Feature-selection-based dynamic
transfer ensemble model for customer churn prediction.
Knowledge and Information Systems, 43(1), pp.29-51. Available
at: http://link.springer.com/10.1007/s10115-013-0722-y.
34. Zhu, Z., Ong, Y.-S. & Zurada, J.M., 2010. Identification of
Full and Partial Class Relevant Genes. IEEE/ACM Transactions
on Computational Biology and Bioinformatics, 7(2), pp.263-277.
Available at: http://ieeexplore.ieee.org/document/4653480/.
Primary Paper Section: A
Secondary Paper Section: AE, BB, IN