Identification of Food Nutrition and Weight Prediction Using Image Processing
DOI: https://doi.org/10.30871/jaee.v9i1.9492
Keywords: Detection, Intensity, Obesity, YOLO
Abstract
Obesity is a global issue whose prevalence rises each year, driven in part by excess nutrient and calorie intake. Identifying the nutrient content of food is therefore important for obesity prevention. This research applies image processing, specifically the YOLO (You Only Look Once) algorithm, to classify and identify fruits and vegetables quickly and accurately. YOLO is well suited to this task because of its speed and its ability to classify multiple objects in a single pass. The goal is a system that recognizes and classifies fruits and vegetables, predicts their weight, and reports the corresponding nutritional and calorie information. Tests showed that the system detects produce accurately under various lighting conditions, achieving 100% accuracy with an additional ring light (600–650 lux) and 99.2% without extra lighting. Beyond object detection, the system predicts weight with an average error of 5.6% under the added lighting. By providing reliable identification and calorie data, the system can help users monitor their nutritional intake and support obesity prevention efforts.
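The paper does not include its implementation, but the detect-then-look-up pipeline described in the abstract can be illustrated with a short sketch. The example below is not the authors' code: it assumes the Ultralytics YOLO Python package, a hypothetical custom weights file (produce.pt) trained on fruit and vegetable classes, and illustrative calorie values and an area-to-weight calibration factor.

```python
# Minimal sketch of a YOLO-based produce detector with nutrition lookup.
# Assumptions (not from the paper): the `ultralytics` package, a custom
# weights file "produce.pt", and illustrative calorie/calibration values.
from ultralytics import YOLO

# Calories per 100 g -- illustrative values only.
CALORIES_PER_100G = {"apple": 52, "banana": 89, "carrot": 41, "tomato": 18}

# Hypothetical calibration: grams per pixel of bounding-box area at a fixed
# camera distance. A real system would calibrate this per camera setup.
GRAMS_PER_PIXEL = {"apple": 0.012, "banana": 0.010, "carrot": 0.008, "tomato": 0.011}

def analyze(image_path: str, weights: str = "produce.pt"):
    model = YOLO(weights)               # load the custom-trained detector
    results = model(image_path)         # run inference on one image
    report = []
    for r in results:
        for box in r.boxes:
            name = model.names[int(box.cls[0])]     # predicted class label
            conf = float(box.conf[0])               # detection confidence
            x1, y1, x2, y2 = box.xyxy[0].tolist()   # bounding box in pixels
            area_px = (x2 - x1) * (y2 - y1)
            grams = area_px * GRAMS_PER_PIXEL.get(name, 0.01)  # weight estimate
            kcal = grams / 100.0 * CALORIES_PER_100G.get(name, 0)
            report.append({"item": name, "confidence": round(conf, 3),
                           "weight_g": round(grams, 1), "kcal": round(kcal, 1)})
    return report

if __name__ == "__main__":
    for entry in analyze("tray.jpg"):
        print(entry)
```

The bounding-box-area heuristic for weight is only a stand-in: the abstract reports an average weight-prediction error of 5.6% under the added lighting but does not specify the estimation method, so the calibration step above is an assumption for illustration.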
Published: 2025-06-28
Versions:
- 2025-06-28 (2)
- 2025-06-28 (1)
License
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).
Open Access Policy
This journal provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge. Users may read, download, copy, distribute, print, search, or link to the full texts of its articles, crawl them for indexing, pass them as data to software, or use them for any other lawful purpose, without financial, legal, or technical barriers other than those inseparable from gaining access to the internet itself.










