Garbage Classification Based On Deep Residual Weakly Supervised Learning Model
Citation
MLA Style: Zhijie Yang, Hongbin Huang. "Garbage Classification Based On Deep Residual Weakly Supervised Learning Model." International Journal of Recent Engineering Science 7.3 (2020): 47-51.
APA Style: Zhijie Yang, Hongbin Huang. (2020). Garbage Classification Based On Deep Residual Weakly Supervised Learning Model. International Journal of Recent Engineering Science, 7(3), 47-51.
Abstract
The realization of garbage classification has become a hot topic in society, yet today's garbage processing plants still rely on manual pipeline sorting. This working method involves a harsh environment, high labor intensity, and low sorting efficiency. Moreover, when handling large amounts of garbage, manual sorting can recover only a minimal fraction; the vast majority of the remaining garbage can only be landfilled, which undoubtedly wastes resources and poses environmental pollution risks. With the application and development of deep learning technology in computer vision, it has become possible to use AI to sort waste automatically: cameras photograph the waste, the type of waste in each picture is detected, and the machine then sorts it automatically. This can save substantial labor costs and improve waste sorting efficiency. Based on the ResNeXt series of networks pre-trained with deep residual weakly supervised learning, this paper classifies garbage images, researches and explores AI technology for garbage classification, and contributes to garbage classification for the whole society.
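As a concrete illustration of the approach described above, the sketch below shows one common way to fine-tune a weakly supervised pre-trained ResNeXt backbone for garbage image classification in PyTorch. It is a minimal sketch, not the authors' implementation: the dataset path, number of garbage categories, and training hyperparameters are placeholder assumptions, and the backbone is assumed to be loaded from the publicly released ResNeXt WSL weights via torch.hub.

```python
# Minimal sketch (not the authors' code): fine-tune a weakly supervised
# pre-trained ResNeXt backbone for garbage image classification.
import torch
import torch.nn as nn
from torchvision import datasets, transforms

NUM_CLASSES = 6              # hypothetical number of garbage categories
DATA_DIR = "garbage/train"   # hypothetical ImageFolder-style dataset directory

# Load a ResNeXt-101 32x8d model pre-trained with weak supervision on
# hashtag-labeled images and fine-tuned on ImageNet (the public WSL release),
# then replace the classifier head for the garbage categories.
model = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Standard ImageNet-style preprocessing for the input images.
transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder(DATA_DIR, transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

model.train()
for images, labels in loader:   # one illustrative training epoch
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice, the pre-trained backbone is typically fine-tuned with a small learning rate, and standard data augmentation and regularization can be layered on top of this skeleton.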
Keywords
AI, Deep Learning, Garbage Classification, Manual Sorting, ResNeXt