YOLOv12 Based on Stationary Vehicle for License Plate Detection
DOI: https://doi.org/10.30871/jaic.v9i5.10950

Keywords: YOLOv12, Vehicle License Plate, Recognize, Accuracy

Abstract
Technology for vehicle license plate recognition is developing rapidly and increasingly supports more effective transportation system management. This research aims to design and implement a vehicle license plate recognition system using the YOLOv12 (You Only Look Once) algorithm, which was chosen for its ability to detect and recognize objects in real time with high accuracy. The method involves collecting a dataset of vehicle license plates covering various viewing angles, lighting conditions, plate colors, and plate shapes. This dataset is then used to train an adapted YOLOv12 model to detect license plates and recognize the characters on them. Testing measures detection accuracy, processing speed, and the robustness of the system to disturbances such as noise and variations in environmental conditions. The results show that the system achieves a plate-detection accuracy of 97.5%, recall of 95.4%, and precision of 96.7%, and recognizes characters on vehicle license plates with an accuracy of 88%, recall of 87%, and precision of 85.8%. The average processing time is 1 second per image on CPU and 20 seconds per image on GPU. These results indicate that the YOLOv12 algorithm is suitable for large-scale license plate recognition deployments. Their significance lies in potential applications such as parking management systems, traffic management, and law enforcement, where the system can improve efficiency and safety.
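The abstract does not include implementation details, but a detection pipeline like the one described is commonly built with the Ultralytics Python package, which distributes YOLO12 weights. The sketch below is illustrative only: the weight file name, the dataset configuration `plates.yaml`, the image file name, and all hyperparameters are assumptions rather than values taken from the paper.

```python
# Minimal sketch of a YOLOv12 license-plate detection pipeline using the
# Ultralytics package. Paths, weights, and hyperparameters are assumed,
# not reported in the paper.
from ultralytics import YOLO

# Load a small pretrained YOLO12 checkpoint (assumed file name).
model = YOLO("yolo12n.pt")

# Fine-tune on a license-plate dataset described by a YAML file listing
# train/val image folders and class names (e.g. "license_plate").
model.train(
    data="plates.yaml",  # assumed dataset config
    epochs=100,
    imgsz=640,
    batch=16,
)

# Evaluate on the validation split; Ultralytics reports precision, recall,
# and mAP, which correspond to the detection metrics quoted in the abstract.
metrics = model.val()
print(metrics.box.map50)  # mAP@0.5

# Run inference on a single image of a stationary vehicle and read back the
# predicted plate bounding boxes with their confidence scores.
results = model("stationary_vehicle.jpg")
for box in results[0].boxes:
    print(box.xyxy, float(box.conf))
```

The character-level metrics in the abstract would come from a second stage, typically either a character-class detector or an OCR pass over the cropped plate region; that stage is not covered by this sketch and is not detailed in the abstract.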
Copyright (c) 2025 The, Obed Danny Kurniawan, Eko Hari Rachmawanto

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.








