Indonesian Food Classification Using Deep Feature Extraction and Ensemble Learning for Dietary Assessment

Authors

  • Muhammad Yusuf Kardawi, Universitas Indonesia
  • Frederic Morado Saragih, Universitas Indonesia
  • Laksmita Rahadianti, Universitas Indonesia
  • Aniati Murni Arymurthy, Universitas Indonesia

DOI:

https://doi.org/10.30871/jaic.v9i5.10643

Keywords:

Deep Learning, Dietary Assessment, Histogram Equalization, Machine Learning, Padang Cuisine

Abstract

Food is a cornerstone of culture, shaping traditions and reflecting regional identities. However, understanding the nutritional content of diverse cuisines can be challenging due to the vast array of ingredients and the visual similarity of different dishes. While food provides essential nutrients for the body, excessive and unbalanced consumption can harm health: overeating, particularly of high-calorie and fatty foods, leads to an accumulation of excess calories and fat, increasing the risk of obesity and related conditions such as diabetes and heart disease. This paper introduces a novel ensemble learning approach, paired with a dictionary of food nutrition content, to address this challenge for Padang cuisine, a rich culinary tradition from West Sumatera, Indonesia. Using a dataset of nine Padang dishes, the system applies image enhancement techniques and combines deep feature extraction with machine learning algorithms to classify food items accurately. Based on the classification result, the system then evaluates the nutritional content and generates a dietary assessment report covering protein, fat, calories, and carbohydrates. The model is evaluated with several metrics and achieves a state-of-the-art accuracy of 85.56%, significantly outperforming standard baseline models. These findings indicate that the proposed approach can efficiently classify Padang dishes and produce dietary assessments, enabling personalised nutritional recommendations that give users clear guidance toward a balanced diet and improved physical and overall wellness.
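The pipeline described in the abstract (enhance the image, extract features, classify with an ensemble model, then look up nutrition by the predicted label) can be sketched minimally as below. This is an illustrative sketch, not the authors' implementation: the dish names and per-100 g nutrition values in `NUTRITION` are placeholder assumptions, and the random feature vectors stand in for the deep CNN features the paper extracts.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-100 g nutrition dictionary for a few Padang-style dishes;
# the values are illustrative placeholders, not figures from the paper.
NUTRITION = {
    "rendang":      {"calories": 195, "protein": 22.6, "fat": 7.9, "carbs": 7.8},
    "gulai_ayam":   {"calories": 165, "protein": 14.0, "fat": 9.0, "carbs": 5.0},
    "telur_balado": {"calories": 155, "protein": 11.0, "fat": 10.0, "carbs": 4.0},
}

def equalize_histogram(gray):
    """Classic histogram equalization on an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[gray]

def dietary_report(label, grams=100.0):
    """Look up the predicted dish and scale nutrients by portion size."""
    per100 = NUTRITION[label]
    return {k: round(v * grams / 100.0, 1) for k, v in per100.items()}

# Stand-in for deep feature extraction: in the paper a CNN backbone maps each
# enhanced image to a feature vector; random vectors keep this sketch runnable.
rng = np.random.default_rng(0)
X = rng.normal(size=(90, 128))    # 90 "images", 128-dim features
y = rng.integers(0, 3, size=90)   # 3 of the 9 classes, for brevity
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

labels = list(NUTRITION)
pred = labels[clf.predict(X[:1])[0]]
print(pred, dietary_report(pred, grams=150))
```

In the paper, the random forest here would be replaced by the proposed ensemble over deep features, but the flow (enhance, extract, classify, look up, report) is the same.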


References

[1] World Health Organization, “World Obesity Day 2022 – Accelerating action to stop obesity.” Accessed: Dec. 20, 2023. [Online]. Available: https://www.who.int/news/item/04-03-2022-world-obesity-day-2022-accelerating-action-to-stop-obesity

[2] UNICEF Indonesia, “Indonesia: Angka orang yang kelebihan berat badan dan obesitas naik di semua kelompok usia dan pendapatan.” Accessed: Dec. 27, 2023. [Online]. Available: https://www.unicef.org/indonesia/id/siaran-pers/indonesia-angka-orang-yang-kelebihan-berat-badan-dan-obesitas-naik-di-semua-kelompok

[3] W. Min, S. Jiang, L. Liu, Y. Rui, and R. Jain, “A survey on food computing,” ACM Comput. Surv., vol. 52, no. 5, Sep. 2019, doi: 10.1145/3329168.

[4] P. J. Brady et al., “A Qualitative Study of Factors Influencing Food Choices and Food Sources Among Adults Aged 50 Years and Older During the Coronavirus Disease 2019 Pandemic,” J. Acad. Nutr. Diet., vol. 123, no. 4, pp. 602-613.e5, Apr. 2023, doi: 10.1016/j.jand.2022.08.131.

[5] C. Liu et al., “A New Deep Learning-Based Food Recognition System for Dietary Assessment on An Edge Computing Service Infrastructure,” IEEE Trans. Serv. Comput., vol. 11, no. 2, pp. 249–261, Mar. 2018, doi: 10.1109/TSC.2017.2662008.

[6] F. P. W. Lo, Y. Guo, Y. Sun, J. Qiu, and B. Lo, “An Intelligent Vision-Based Nutritional Assessment Method for Handheld Food Items,” IEEE Trans. Multimed., vol. 25, pp. 5840–5851, 2023, doi: 10.1109/TMM.2022.3199911.

[7] W. Min et al., “Large Scale Visual Food Recognition,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 45, no. 8, 2023, doi: 10.1109/TPAMI.

[8] G. Waltner et al., “Personalized Dietary Self-Management Using Mobile Vision-Based Assistance,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 2017, pp. 385–393. doi: 10.1007/978-3-319-70742-6_36.

[9] A. Myers et al., “Im2Calories: Towards an Automated Mobile Vision Food Diary,” in 2015 IEEE International Conference on Computer Vision (ICCV), IEEE, Dec. 2015, pp. 1233–1241. doi: 10.1109/ICCV.2015.146.

[10] Q. Thames et al., “Nutrition5k: Towards Automatic Nutritional Understanding of Generic Food,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, 2021, pp. 8899–8907. doi: 10.1109/CVPR46437.2021.00879.

[11] Y. Lu, T. Stathopoulou, M. F. Vasiloglou, S. Christodoulidis, Z. Stanga, and S. Mougiakakou, “An Artificial Intelligence-Based System to Assess Nutrient Intake for Hospitalised Patients,” IEEE Trans. Multimed., vol. 23, pp. 1136–1147, 2021, doi: 10.1109/TMM.2020.2993948.

[12] X. J. Zhang, Y. F. Lu, and S. H. Zhang, “Multi-Task Learning for Food Identification and Analysis with Deep Convolutional Neural Networks,” J. Comput. Sci. Technol., vol. 31, no. 3, pp. 489–500, May 2016, doi: 10.1007/s11390-016-1642-6.

[13] S. Mezgec and B. K. Seljak, “Nutrinet: A deep learning food and drink image recognition system for dietary assessment,” Nutrients, vol. 9, no. 7, Jul. 2017, doi: 10.3390/nu9070657.

[14] G. Suddul and J. F. L. Seguin, “A comparative study of deep learning methods for food classification with images,” Food Humanit., vol. 1, pp. 800–808, Jul. 2023, doi: 10.1016/j.foohum.2023.07.018.

[15] A. Wibisono, H. A. Wisesa, Z. P. Rahmadhani, P. K. Fahira, P. Mursanto, and W. Jatmiko, “Traditional food knowledge of Indonesia: a new high-quality food dataset and automatic recognition system,” J. Big Data, vol. 7, no. 1, Dec. 2020, doi: 10.1186/s40537-020-00342-5.

[16] F. P. W. Lo, Y. Sun, J. Qiu, and B. Lo, “Image-Based Food Classification and Volume Estimation for Dietary Assessment: A Review,” IEEE J. Biomed. Heal. Informatics, vol. 24, no. 7, pp. 1926–1939, 2020, doi: 10.1109/JBHI.2020.2987943.

[17] J. He and F. Zhu, “Online Continual Learning for Visual Food Classification,” Proc. IEEE Int. Conf. Comput. Vis. Workshops, pp. 2337–2346, 2021, doi: 10.1109/ICCVW54120.2021.00265.

[18] H. Hoashi, T. Joutou, and K. Yanai, “Image recognition of 85 food categories by feature fusion,” in IEEE International Symposium on Multimedia, ISM 2010, 2010, pp. 296–301. doi: 10.1109/ISM.2010.51.

[19] G. M. Farinella, D. Allegra, F. Stanco, and S. B., “On the Exploitation of One Class Classification to Distinguish Food vs Non-Food Images,” in Lecture Notes in Computer Science, vol. 9281, Springer International Publishing, 2015. doi: 10.1007/978-3-319-23222-5.

[20] S. Inunganbi, A. Seal, and P. Khanna, “Classification of Food Images through Interactive Image Segmentation,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 2018, pp. 519–528. doi: 10.1007/978-3-319-75420-8_49.

[21] N. D. Martinez-Lara, C. L. Garzon-Castro, and A. Filomena-Ambrosio, “Nu-Support Vector Classification Training for Feature Identification in ‘Arepas’: A Colombian Traditional Food,” in 13th International Symposium on Advanced Topics in Electrical Engineering, ATEE 2023, Institute of Electrical and Electronics Engineers Inc., 2023. doi: 10.1109/ATEE58038.2023.10108229.

[22] H. Yang, D. Zhang, D. J. Lee, and M. Huang, “A sparse representation based classification algorithm for Chinese food recognition,” in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Springer Verlag, 2016, pp. 3–10. doi: 10.1007/978-3-319-50832-0_1.

[23] K. Aizawa, Y. Maruyama, H. Li, C. Morikawa, and G. C. De Silva, “Food balance estimation by using personal dietary tendencies in a multimedia food log,” IEEE Trans. Multimed., vol. 15, no. 8, pp. 2176–2185, 2013, doi: 10.1109/TMM.2013.2271474.

[24] L. Breiman, “Random Forests,” Mach. Learn., vol. 45, pp. 5–32, 2001.

[25] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek, “Image classification with the fisher vector: Theory and practice,” Int. J. Comput. Vis., vol. 105, no. 3, pp. 222–245, 2013, doi: 10.1007/s11263-013-0636-x.

[26] S. Lazebnik, C. Schmid, and J. Ponce, “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 2, pp. 2169–2178, 2006, doi: 10.1109/CVPR.2006.68.

[27] M. M. Anthimopoulos, L. Gianola, L. Scarnato, P. Diem, and S. G. Mougiakakou, “A food recognition system for diabetic patients based on an optimized bag-of-features model,” IEEE J. Biomed. Heal. Informatics, vol. 18, no. 4, pp. 1261–1271, 2014, doi: 10.1109/JBHI.2014.2308928.

[28] F. Moosmann, E. Nowak, and F. Jurie, “Randomized clustering forests for image classification,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 30, no. 9, pp. 1632–1646, 2008, doi: 10.1109/TPAMI.2007.70822.

[29] F. Kong and J. Tan, “DietCam: Automatic dietary assessment with mobile camera phones,” Pervasive Mob. Comput., vol. 8, no. 1, pp. 147–163, 2012, doi: 10.1016/j.pmcj.2011.07.003.

[30] F. Kong and J. Tan, “DietCam: Regular shape food recognition with a camera phone,” Proc. - 2011 Int. Conf. Body Sens. Networks, BSN 2011, pp. 127–132, 2011, doi: 10.1109/BSN.2011.19.

[31] S. Singh, A. Gupta, and A. A. Efros, “Unsupervised discovery of mid-level discriminative patches,” Comput. Vision-European Conf. Comput. Vis., vol. 7573 LNCS, no. PART 2, pp. 73–86, 2012, doi: 10.1007/978-3-642-33709-3_6.

[32] L. Xiao, T. Lan, D. Xu, W. Gao, and C. Li, “A Simplified CNNs Visual Perception Learning Network Algorithm for Foods Recognition,” Comput. Electr. Eng., vol. 92, Jun. 2021, doi: 10.1016/j.compeleceng.2021.107152.

[33] M. Chun, H. Jeong, H. Lee, T. Yoo, and H. Jung, “Development of Korean Food Image Classification Model Using Public Food Image Dataset and Deep Learning Methods,” IEEE Access, vol. 10, pp. 128732–128741, 2022, doi: 10.1109/ACCESS.2022.3227796.

[34] L. Jiang, B. Qiu, X. Liu, C. Huang, and K. Lin, “DeepFood: Food Image Analysis and Dietary Assessment via Deep Model,” IEEE Access, vol. 8, pp. 47477–47489, 2020, doi: 10.1109/ACCESS.2020.2973625.

[35] F. Zhou and Y. Lin, “Fine-grained image classification by exploring bipartite-graph labels,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, Dec. 2016, pp. 1124–1133. doi: 10.1109/CVPR.2016.127.

[36] L. Bossard, M. Guillaumin, and L. Van Gool, “Food-101 - Mining discriminative components with random forests,” Proc. Eur. Conf. Comput. Vis., vol. 8694 LNCS, no. PART 6, pp. 446–461, 2014, doi: 10.1007/978-3-319-10599-4_29.

[37] H. Kagaya, K. Aizawa, and M. Ogawa, “Food detection and recognition using convolutional neural network,” Proc. 2014 ACM Conf. Multimed., no. 3, pp. 1085–1088, 2014, doi: 10.1145/2647868.2654970.

[38] S. Christodoulidis, M. Anthimopoulos, and S. Mougiakakou, “Food recognition for dietary assessment using deep convolutional neural networks,” Int. Conf. Image Anal. Process., vol. 9281, pp. 458–465, 2015, doi: 10.1007/978-3-319-23222-5_56.

[39] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Int. Conf. Learn. Represent., Sep. 2015, [Online]. Available: http://arxiv.org/abs/1409.1556

[40] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, Dec. 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.

[41] T. Chen and C. Guestrin, “XGBoost: A scalable tree boosting system,” in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Association for Computing Machinery, Aug. 2016, pp. 785–794. doi: 10.1145/2939672.2939785.

[42] F. F. Afrinanto, “Padang Cuisine (Indonesian Food Image Dataset),” Kaggle. [Online]. Available: https://www.kaggle.com/dsv/4053613

[43] D. Xiang, H. Wang, D. He, and C. Zhai, “Research on Histogram Equalization Algorithm Based on Optimized Adaptive Quadruple Segmentation and Cropping of Underwater Image (AQSCHE),” IEEE Access, vol. 11, pp. 69356–69365, 2023, doi: 10.1109/ACCESS.2023.3290201.

Published

2025-10-04

How to Cite

[1]
M. Y. Kardawi, F. M. Saragih, L. Rahadianti, and A. M. Arymurthy, “Indonesian Food Classification Using Deep Feature Extraction and Ensemble Learning for Dietary Assessment”, JAIC, vol. 9, no. 5, pp. 2009–2018, Oct. 2025.
