Aircraft Image Classification on a Small-Scale Dataset using MobileNetV2 with Grad-CAM as Explainable AI
DOI: https://doi.org/10.30871/jaic.v9i5.10546
Keywords: Aircraft Image Classification, Explainable AI (XAI), Grad-CAM, MobileNetV2, Convolutional Neural Networks (CNN)
Abstract
This study explores aircraft image classification using MobileNetV2 combined with Gradient-weighted Class Activation Mapping (Grad-CAM) for model interpretability. A balanced dataset of 1,500 images spanning three classes (helicopters, propeller aircraft, and jets) was split into training, validation, and testing sets, with data augmentation applied to reduce overfitting. Transfer learning with a pre-trained MobileNetV2 achieved an accuracy of 87.56%, with macro-average precision and recall of 85.76% and 87.69%, respectively. Grad-CAM visualizations confirmed that correct predictions relied on distinctive features such as rotor blades, propellers, and engines, while misclassifications often stemmed from background distractions or less discriminative regions. These findings demonstrate the potential of lightweight architectures on small-scale datasets and highlight the value of Explainable AI in validating deep learning models. The study provides a practical reference for educational contexts and outlines directions for future work with larger datasets.
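The pipeline summarized above can be illustrated with a short sketch, assuming a Keras/TensorFlow implementation. This is not the authors' published code: the directory layout (data/train, data/val), input size, augmentation settings, epoch count, learning rate, and the Grad-CAM target layer name are illustrative assumptions.

# Minimal sketch of transfer learning with MobileNetV2 plus a Grad-CAM heatmap.
# All paths, class names, and hyperparameters below are assumptions for illustration.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)                              # assumed MobileNetV2 input size
CLASSES = ["helicopter", "propeller", "jet"]       # three balanced classes

# Augmentation on the training set only; preprocess_input scales pixels to [-1, 1].
train_gen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
    rotation_range=15, zoom_range=0.1, horizontal_flip=True,
).flow_from_directory("data/train", target_size=IMG_SIZE,        # hypothetical path
                      classes=CLASSES, class_mode="categorical")
val_gen = ImageDataGenerator(
    preprocessing_function=preprocess_input,
).flow_from_directory("data/val", target_size=IMG_SIZE,          # hypothetical path
                      classes=CLASSES, class_mode="categorical")

# Transfer learning: frozen ImageNet backbone with a small softmax head on top.
base = MobileNetV2(input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.2)(x)
outputs = layers.Dense(len(CLASSES), activation="softmax")(x)
model = keras.Model(base.input, outputs)
model.compile(optimizer=keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_gen, validation_data=val_gen, epochs=20)

def grad_cam(model, img_array, last_conv_layer="Conv_1"):
    """Grad-CAM heatmap for the predicted class of a (1, H, W, 3) preprocessed image.
    "Conv_1" is the final conv layer in Keras' MobileNetV2; verify with model.summary()."""
    grad_model = keras.Model(model.input,
                             [model.get_layer(last_conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(img_array)
        class_score = preds[:, tf.argmax(preds[0])]    # score of the predicted class
    grads = tape.gradient(class_score, conv_out)       # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))    # per-channel importance weights
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy() # normalized heatmap

In practice, the returned heatmap is resized to the input image and overlaid to check whether predictions rely on rotor blades, propellers, or engines rather than the background, and macro-averaged precision and recall are computed from the test-set confusion matrix.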
License
Copyright (c) 2025 Mohamad Alif Dzulfiqar, Susi Lestari, Ahmadi Irmansyah Lubis, Muhammad Andi Nova, Zaimah, Mulyadi

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.