Evaluation of YOLOv8 and Faster R-CNN for Image-Based Food Detection
DOI: https://doi.org/10.30871/jaic.v10i1.11684

Keywords: Food Detection, Object Detection, YOLOv8, Faster R-CNN, Convolutional Neural Network

Abstract
Manual nutrition tracking is tedious and error-prone, which motivates automatic food detection systems. Indonesian food is particularly challenging to recognize, because visually similar dishes and varied serving styles are easily confused. This study compares two deep learning models with different design philosophies: YOLOv8, a single-stage detector known for speed and efficiency, and Faster R-CNN, a two-stage detector known for high accuracy. The goal is to identify the model best suited to deployment on mobile devices. To keep the comparison fair, both models were evaluated under a strict, standardized protocol. A public dataset of 1,325 images from Roboflow was used; to handle its uneven class distribution, the images were split using stratified random sampling. Before training, the images were resized with the letterbox method to preserve their original aspect ratio, and pixel values were normalized. Both models were trained for the same number of epochs (100) with the same optimizer (SGD). The results show that YOLOv8 performs better across the board: it reached 88.6% mAP@50 and 62.0% mAP@50-95, versus 85.5% and 55.6% for Faster R-CNN. YOLOv8 also achieves higher recall, 87.7% compared to 61.7% for Faster R-CNN. The F1-score, which balances precision and recall, is 84.0% for YOLOv8 and 72% for Faster R-CNN. In speed and size, YOLOv8 is far ahead: it runs in 13.5 ms with a 21.5 MB model, making it 7.7 times faster and 7.3 times smaller than Faster R-CNN. Based on these results, YOLOv8 is the better choice for developing mobile-based nutrition tracking systems.
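The letterbox preprocessing mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the target size of 640 px, the padding value of 114, and the nearest-neighbour resampling are assumptions chosen to keep the example dependency-light (real YOLO-style pipelines typically use `cv2.resize` with bilinear filtering).

```python
import numpy as np

def letterbox(img: np.ndarray, new_size: int = 640, pad_value: int = 114):
    """Resize an image to new_size x new_size while preserving its aspect
    ratio, padding the shorter side (the 'letterbox' resize used by
    YOLO-style pipelines). Returns the normalized image, the scale factor,
    and the (top, left) padding offsets."""
    h, w = img.shape[:2]
    scale = min(new_size / h, new_size / w)  # uniform scale, no distortion
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # Nearest-neighbour resize via integer index maps (keeps the sketch
    # dependency-free; a production pipeline would use bilinear filtering).
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[rows[:, None], cols]
    # Pad the remainder with a constant gray value, centering the image.
    canvas = np.full((new_size, new_size, img.shape[2]), pad_value, dtype=img.dtype)
    top, left = (new_size - nh) // 2, (new_size - nw) // 2
    canvas[top:top + nh, left:left + nw] = resized
    # Pixel-value normalization to [0, 1], as described in the abstract.
    normalized = canvas.astype(np.float32) / 255.0
    return normalized, scale, (top, left)
```

For a 480x640 input, the scale is 1.0, the resized image fills the full width, and 80 rows of padding are added above and below; the scale and offsets returned here are what a detector would use to map predicted boxes back to the original image coordinates.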
Copyright (c) 2026 Julian Kiyosaki Hananta, Nuri Cahyono

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.