Lung Segmentation in X-ray Images of Tuberculosis Patients Using U-Net with CLAHE Preprocessing
DOI: https://doi.org/10.30871/jaic.v9i4.9869
Keywords: Tuberculosis, Image Segmentation, U-Net, CLAHE, X-ray Image
Abstract
Tuberculosis (TB) is an infectious disease that commonly affects the lungs and remains one of the leading causes of death from infectious diseases. Early detection is essential to prevent further spread and organ damage. Chest X-ray imaging is one of the main methods for diagnosing TB, but image quality is often degraded by low contrast and noise. This study applies the Contrast Limited Adaptive Histogram Equalization (CLAHE) method to improve X-ray image quality, combined with the U-Net deep learning architecture for lung segmentation in X-ray images of tuberculosis patients. U-Net was chosen for its strong performance in medical image segmentation: its encoder-decoder structure with skip connections allows the model to retain detailed information in high-resolution images, even on complex and noisy data. Experimental results on the Shenzhen and Montgomery datasets show that the U-Net model with CLAHE achieves a Pixel Accuracy of 97.96%, Recall of 94.93%, Specificity of 98.97%, Dice Coefficient of 95.87%, and Jaccard Index (IoU) of 92.07%.
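The article page itself includes no code; purely as an illustration, the Python sketch below shows how CLAHE preprocessing, a minimal U-Net-style encoder-decoder with skip connections, and the Dice and Jaccard (IoU) metrics reported above could be implemented with OpenCV, NumPy, and Keras. The clip limit, tile grid size, input resolution, and filter counts are assumptions made for the sketch, not values taken from the paper.

```python
# Illustrative sketch only (not the authors' code): CLAHE preprocessing,
# a minimal U-Net-style encoder-decoder with skip connections, and the
# Dice / Jaccard (IoU) metrics named in the abstract.
import cv2
import numpy as np
from tensorflow.keras import layers, Model


def apply_clahe(gray_image, clip_limit=2.0, tile_grid_size=(8, 8)):
    """CLAHE on an 8-bit grayscale chest X-ray.
    clip_limit and tile_grid_size are assumed defaults, not values from the paper."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid_size)
    return clahe.apply(gray_image)


def conv_block(x, filters):
    # Two 3x3 convolutions, as in the standard U-Net building block.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x


def build_unet(input_shape=(256, 256, 1)):
    """Minimal U-Net: encoder, bottleneck, and decoder with skip connections."""
    inputs = layers.Input(input_shape)
    # Encoder path
    c1 = conv_block(inputs, 32)
    p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64)
    p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 128)
    p3 = layers.MaxPooling2D()(c3)
    # Bottleneck
    b = conv_block(p3, 256)
    # Decoder path with skip connections to the encoder features
    u3 = layers.Concatenate()([layers.UpSampling2D()(b), c3])
    c4 = conv_block(u3, 128)
    u2 = layers.Concatenate()([layers.UpSampling2D()(c4), c2])
    c5 = conv_block(u2, 64)
    u1 = layers.Concatenate()([layers.UpSampling2D()(c5), c1])
    c6 = conv_block(u1, 32)
    # Sigmoid output: binary lung mask
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)
    return Model(inputs, outputs)


def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)


def jaccard_index(y_true, y_pred, eps=1e-7):
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return (intersection + eps) / (union + eps)
```

In such a pipeline, each chest X-ray would be passed through apply_clahe before training or inference, and predicted masks would be thresholded to binary before computing the metrics; the authors' actual architecture and hyperparameters may differ from this sketch.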
License
Copyright (c) 2025 Ibnu Farid Mabina, Christy Atika Sari, Eko Hari Rachmawanto

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication, with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).