Mushroom Classification Using Convolutional Neural Network MobileNetV2 Architecture for Overfitting Mitigation and Enhanced Model Generalization

Authors

  • Fauzan Arif Prayogi, Universitas Dian Nuswantoro
  • Fariz Hasim Arvianto, Informatics Engineering Study Program, Universitas Dian Nuswantoro
  • Dimas Rizki Pratama, Informatics Engineering Study Program, Universitas Dian Nuswantoro
  • Sugiyanto Sugiyanto, Informatics Distance Education Program, Universitas Dian Nuswantoro

DOI:

https://doi.org/10.30871/jaic.v9i4.10183

Keywords:

Computer Vision, Convolutional Neural Network, MobileNetV2, Classification, Overfitting

Abstract

Fungal identification is a significant challenge due to the morphological similarities among species. Previous studies using Convolutional Neural Networks (CNNs) for mushroom classification still suffer from overfitting, which leads to poor performance on new data. This research therefore develops a MobileNetV2-based CNN model that classifies three mushroom genera (Amanita, Boletus, and Lactarius), with a primary focus on mitigating overfitting. The dataset consists of 3,210 RGB images, split into 1,979 training, 493 validation, and 738 testing images. The model is built with transfer learning on MobileNetV2, extended with additional Conv2D, pooling, and Dense layers, plus Dropout for regularization. Training uses the Adam optimizer with a learning rate of 1.0×10⁻⁵ and is monitored with EarlyStopping and ModelCheckpoint. The model successfully mitigates overfitting, achieving a generalization gap of only 1.33%, compared to 7% in previous studies. Evaluation yields a training accuracy of 77.35%, a validation accuracy of 78.79%, and a testing accuracy of 76.02%, with a precision of 80.6% and a recall of 68.1%. This consistent performance, with a maximum difference of only 2.77% across the three datasets, demonstrates strong generalization ability and provides a solid foundation for a reliable automatic mushroom identification system.
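
Although the paper's implementation is not reproduced on this page, the pipeline described above can be sketched in TensorFlow/Keras as follows. The MobileNetV2 backbone, the additional Conv2D, pooling, and Dense layers, the Dropout regularization, the Adam optimizer with a 1.0×10⁻⁵ learning rate, and the EarlyStopping and ModelCheckpoint callbacks follow the abstract; the input resolution, layer widths, dropout rate, and the train_ds/val_ds dataset names are illustrative assumptions, not the authors' published code.

    # Minimal sketch of the training setup described in the abstract (TensorFlow/Keras).
    # Layer widths, input size, dropout rate, and dataset pipelines are assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models, optimizers, callbacks

    NUM_CLASSES = 3  # Amanita, Boletus, Lactarius

    # Pre-trained MobileNetV2 backbone, frozen for transfer learning.
    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False

    model = models.Sequential([
        base,
        layers.Conv2D(128, 3, padding="same", activation="relu"),  # additional Conv2D layer
        layers.GlobalAveragePooling2D(),                            # pooling layer
        layers.Dense(128, activation="relu"),                       # additional Dense layer
        layers.Dropout(0.5),                                        # Dropout for regularization
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(
        optimizer=optimizers.Adam(learning_rate=1e-5),  # learning rate stated in the abstract
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )

    # Training is monitored with EarlyStopping and ModelCheckpoint.
    cbs = [
        callbacks.EarlyStopping(monitor="val_loss", patience=10, restore_best_weights=True),
        callbacks.ModelCheckpoint("best_mobilenetv2.keras", monitor="val_accuracy",
                                  save_best_only=True),
    ]

    # train_ds / val_ds: tf.data pipelines over the 1,979 training and 493 validation
    # images, e.g. built with tf.keras.utils.image_dataset_from_directory.
    # history = model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=cbs)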

Published

2025-08-08

How to Cite

[1]
F. A. Prayogi, F. H. Arvianto, D. R. Pratama, and S. Sugiyanto, “Mushroom Classification Using Convolutional Neural Network MobileNetV2 Architecture for Overfitting Mitigation and Enhanced Model Generalization”, JAIC, vol. 9, no. 4, pp. 1770–1777, Aug. 2025.

Issue

Vol. 9 No. 4 (2025)

Section

Articles
