Log-loss SVM Classification for Imbalanced Data

International Journal of Recent Engineering Science (IJRES)
  
© 2017 by IJRES Journal
Volume-4 Issue-2
Year of Publication: 2017
Authors: Shuxia Lu, Mi Zhou
DOI: 10.14445/23497157/IJRES-V4I2P102

How to Cite?

Shuxia Lu, Mi Zhou, "Log-loss SVM Classification for Imbalanced Data," International Journal of Recent Engineering Science, vol. 4, no. 2, pp. 7-10, 2017. Crossref, https://doi.org/10.14445/23497157/IJRES-V4I2P102

Abstract
On imbalanced data, the standard SVM tends to skew the separating hyperplane toward the minority class, so minority-class examples are misclassified more often than majority-class examples. To handle large-scale imbalanced classification problems, a stochastic gradient descent algorithm for the SVM with the log-loss function is proposed. To counteract the skew of the separator, weights are defined according to the sizes of the positive and negative classes, and a weighted stochastic gradient descent algorithm is then derived for large-scale SVM classification. Experimental results on real datasets show that the proposed method is effective and applicable to many problems.
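
As a rough illustration of the idea described above, the following Python sketch implements a weighted stochastic gradient descent solver for an SVM-style objective with the log-loss. The Pegasos-style step size 1/(λt) and the inverse-frequency class weights are assumptions made for illustration, not the exact scheme of the paper.

```python
import numpy as np

def weighted_sgd_logloss_svm(X, y, lam=0.01, epochs=10, seed=None):
    """Sketch of weighted SGD for a log-loss SVM objective:
        (lam/2)*||w||^2 + (1/n) * sum_i c_i * log(1 + exp(-y_i * w.x_i))
    Labels y must be in {-1, +1}. The per-class weights c_i and the
    step-size schedule are assumed choices, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    n_pos = np.sum(y == 1)
    n_neg = n - n_pos
    # Assumed weighting: the rarer class receives the larger weight.
    c = np.where(y == 1, n / (2.0 * n_pos), n / (2.0 * n_neg))

    w = np.zeros(d)
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)                        # decreasing step size
            margin = y[i] * (X[i] @ w)
            grad_coeff = -y[i] / (1.0 + np.exp(margin))  # derivative of log-loss w.r.t. w.x
            # Regularization shrinkage plus weighted loss-gradient step.
            w = (1.0 - eta * lam) * w - eta * c[i] * grad_coeff * X[i]
    return w

# Toy usage on a synthetic imbalanced problem.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_pos = rng.normal(loc=1.0, size=(30, 2))     # minority class
    X_neg = rng.normal(loc=-1.0, size=(300, 2))   # majority class
    X = np.vstack([X_pos, X_neg])
    y = np.concatenate([np.ones(30), -np.ones(300)])
    w = weighted_sgd_logloss_svm(X, y, lam=0.01, epochs=20, seed=1)
    print("minority-class accuracy:", np.mean(np.sign(X_pos @ w) == 1))
```

Without the class weights c_i, each misclassified majority example pulls the separator as strongly as a minority example, so the abundant class dominates the updates; scaling the gradient by inverse class frequency is one common way to keep the minority class from being overwhelmed.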

Keywords
Stochastic gradient descent, Weight, Imbalanced data, Log-loss function, Support vector machines
