Crack Detection in Buildings Through Deep Learning Feature Extraction and a Machine Learning Approach
Abstract
Cracked buildings are hazardous because they can lead to structural failure, putting the occupants of houses and other structures at risk. Cracks can be identified in several ways, including visual inspection, measurement tools, and expert assessment. This study uses computer vision, a branch of artificial intelligence, to detect cracks in buildings. Its main objective is to build a prototype capable of monitoring cracks in building walls in real time. The proposed methodology combines deep learning and machine learning: deep learning is used to extract features, while machine learning performs the classification. MobileNetV2 serves as the deep learning architecture, and K-NN, Naive Bayes, SVM, XGBoost, and Random Forest are used as machine learning classifiers. With an 80:20 train-test split, XGBoost achieves the highest accuracy, sensitivity, and specificity, each reaching 99%. Real-world tests were carried out by deploying the prototype on a Raspberry Pi. The results show that the prototype can detect cracks on a structure's surface at a distance of 10 meters in a bright environment, running in real time at an average speed of 42 fps.
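To make the described pipeline concrete, the sketch below shows one way it could be wired together, assuming Keras, scikit-learn, and XGBoost are available: MobileNetV2 (pretrained on ImageNet, with its classification head removed) produces one feature vector per image, and an XGBoost classifier is trained on those vectors after an 80:20 split. This is a minimal illustration under those assumptions, not the authors' exact implementation; the `images` and `labels` arrays are placeholders for the actual crack dataset.

```python
# Minimal sketch: MobileNetV2 feature extraction + XGBoost classification.
# Assumes images are provided as a NumPy array of shape (N, 224, 224, 3)
# with integer labels (0 = no crack, 1 = crack); the data below is dummy data.
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# MobileNetV2 as a frozen feature extractor (ImageNet weights, no top layer).
extractor = MobileNetV2(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=(224, 224, 3))

def extract_features(images: np.ndarray) -> np.ndarray:
    """Return one 1280-dimensional MobileNetV2 feature vector per image."""
    return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

# Placeholder dataset; replace with the real crack-image data.
images = np.random.randint(0, 256, size=(100, 224, 224, 3), dtype=np.uint8)
labels = np.random.randint(0, 2, size=100)

features = extract_features(images)

# 80:20 train/test split, matching the reported experimental setup.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=42, stratify=labels)

clf = XGBClassifier(n_estimators=100, eval_metric="logloss")
clf.fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

The same extracted features could be passed to the other classifiers mentioned in the abstract (K-NN, Naive Bayes, SVM, Random Forest) by swapping the final estimator.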