Date fruit type classification using convolutional neural networks
Abdullah ALAVI, Md Faysal AHAMED, Ali ALBELADI, Mohamed MOHANDES
Abstract. Object classification is a central task for convolutional neural networks (CNNs), which have been applied to numerous fields with excellent results. In this study, we use CNNs to classify five categories of Sukkari dates, namely Galaxy, Mufattal, Nagad, Qishr, and Ruttab. In transfer learning, a pretrained model is taken and only its final layers are retrained for the new prediction task. In this paper, we apply transfer learning to five pretrained models: SqueezeNet, GoogLeNet, EfficientNet-b0, ShuffleNet, and MobileNet V2. The results show that SqueezeNet outperforms the other networks, with a classification accuracy of 92% on the testing set; the testing accuracies of GoogLeNet, EfficientNet-b0, ShuffleNet, and MobileNet V2 are 85.14%, 82.86%, 89.14%, and 87.43%, respectively. As this is a classification task, precision, recall, and F1 score are also evaluated. For SqueezeNet on the testing set, these values are 92.67%, 92%, and 92.33%, respectively. ShuffleNet is second with 89.41%, 89.14%, and 89.28%, while EfficientNet-b0 scores lowest with 83.10%, 82.86%, and 82.98%.
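To make the transfer-learning recipe above concrete, here is a minimal sketch that adapts a pretrained SqueezeNet to the five Sukkari date categories. It assumes a PyTorch/torchvision workflow, and the learning rate, optimizer, and dummy batch are illustrative assumptions; it is not the authors' implementation, whose data loading, augmentation, and training schedule are not shown here.

```python
# Illustrative transfer-learning sketch (PyTorch/torchvision assumed);
# not the authors' implementation.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 5  # Galaxy, Mufattal, Nagad, Qishr, Ruttab

# Load a SqueezeNet pretrained on ImageNet.
model = models.squeezenet1_0(weights=models.SqueezeNet1_0_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor; only the new head will learn.
for param in model.parameters():
    param.requires_grad = False

# Replace the final 1x1 convolutional classifier so it predicts 5 classes.
model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)

criterion = nn.CrossEntropyLoss()
# Only the replaced layer's parameters are optimized (lr is an assumption).
optimizer = torch.optim.Adam(model.classifier[1].parameters(), lr=1e-3)

# One training step on a dummy batch of 224x224 RGB images.
model.train()
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)  # logits shape: (8, 5)
loss.backward()
optimizer.step()
```

As a sanity check, the F1 scores quoted above are consistent with the harmonic mean of precision and recall; for SqueezeNet, 2 × (0.9267 × 0.92) / (0.9267 + 0.92) ≈ 0.9233, i.e. 92.33%.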
Keywords
Date Fruit Type Classification, Convolutional Neural Network, SqueezeNet, Pretrained Network, Transfer Learning
Published online 7/15/2024, 9 pages
Copyright © 2024 by the author(s)
Published under license by Materials Research Forum LLC., Millersville PA, USA
Citation: Abdullah ALAVI, Md Faysal AHAMED, Ali ALBELADI, Mohamed MOHANDES, Date fruit type classification using convolutional neural networks, Materials Research Proceedings, Vol. 43, pp 205-213, 2024
DOI: https://doi.org/10.21741/9781644903216-27
The article was published as article 27 of the book Renewable Energy: Generation and Application
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 license. Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.