A Two-Stage Braille Recognition System Using YOLOv8 for Detection and CNN for Classification

Authors

  • Tan Valencio Yobert Geraldo Setiawan, Universitas Dian Nuswantoro
  • Eko Hari Rachmawanto, Universitas Dian Nuswantoro

DOI:

https://doi.org/10.30871/jaic.v9i6.11483

Keywords:

Braille Recognition, Character Classification, MobileNetV2, Object Detection, YOLOv8

Abstract

Automatic recognition of Braille characters remains a challenge in computer vision, particularly because of variations in shape, size, and lighting conditions across images. This research proposes a two-stage system for detecting and recognizing Braille letters in real time using a deep learning approach. In the first stage, a YOLOv8 model detects the positions of Braille characters within an image. The detected regions are then processed in the second stage by a classification model based on the MobileNetV2 CNN architecture. The dataset consists of 7,016 Braille character images, collected from a combination of the AEyeAlliance dataset and annotated data from Roboflow. To address class imbalance, particularly for the letters T to Z, which had fewer samples, oversampling and image augmentation were applied, bringing the final combined dataset to approximately 7,616 images. The system was tested on 1,513 images and achieved strong results, with average precision, recall, and F1-score of 0.98 and an overall accuracy of 98%. This two-stage method effectively separates detection from classification, resulting in an efficient and accurate Braille recognition system suitable for real-time applications.
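The abstract describes an inference flow in which YOLOv8 first localizes Braille cells and a MobileNetV2 classifier then labels each cropped region. The following is a minimal Python sketch of such a two-stage pipeline; the weight file names, the 224×224 input size, and the a–z label ordering are assumptions for illustration, not details published with the paper.

```python
from ultralytics import YOLO
import cv2
import numpy as np
import tensorflow as tf

# Hypothetical weight files -- the paper does not publish its trained models.
detector = YOLO("braille_yolov8n.pt")                               # stage 1: YOLOv8 Braille cell detector
classifier = tf.keras.models.load_model("braille_mobilenetv2.h5")   # stage 2: MobileNetV2 classifier
CLASS_NAMES = [chr(c) for c in range(ord("a"), ord("z") + 1)]       # assumed a-z label order

def recognize_braille(image_path: str) -> list[tuple[str, float]]:
    """Detect Braille cells with YOLOv8, then classify each cropped cell with MobileNetV2."""
    image = cv2.imread(image_path)
    results = detector(image)[0]                                    # single-image inference
    predictions = []
    for x1, y1, x2, y2 in results.boxes.xyxy.cpu().numpy().astype(int):
        crop = cv2.resize(image[y1:y2, x1:x2], (224, 224))          # MobileNetV2 default input size
        crop = tf.keras.applications.mobilenet_v2.preprocess_input(crop.astype(np.float32))
        probs = classifier.predict(crop[np.newaxis], verbose=0)[0]
        predictions.append((CLASS_NAMES[int(np.argmax(probs))], float(np.max(probs))))
    return predictions
```

Keeping the two models separate, as the abstract argues, means the detector can be retrained or swapped without touching the classifier, and vice versa.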




Published

2025-12-05

How to Cite

[1] T. V. Y. G. Setiawan and E. H. Rachmawanto, “A Two-Stage Braille Recognition System Using YOLOv8 for Detection and CNN for Classification,” JAIC, vol. 9, no. 6, pp. 3057–3067, Dec. 2025.
