Enhancing Negative Film Colorization through Systematic CycleGAN Architectural Modifications: A Comprehensive Analysis of Generator and Discriminator Performance
DOI: https://doi.org/10.30871/jaic.v9i3.9553
Keywords: Negative film colorization, CycleGAN, Architectural modifications, Image processing, Generative adversarial networks
Abstract
This research addresses the need for deep learning-based negative film colorization through systematic modifications to the CycleGAN architecture. Unlike conventional approaches that colorize black-and-white images, this study targets the conversion of digitized negative film images, which pose distinct challenges such as color inversion and detail restoration. The dataset consists of 500 negative images (train A) and 500 unpaired color images (train B), plus 5 negative and 5 color images for testing; all images were obtained from personal scanning. Nineteen architectural modifications were proposed and tested individually rather than applied simultaneously. The work focuses on the network structures themselves; external evaluation metrics such as SSIM, PSNR, and FID were not used. Modifications included adding residual blocks and altering filter counts, activation functions, and inter-layer connections. Evaluation was conducted qualitatively and through generator and discriminator loss values. The best-performing modification (Modification 4) showed a significant loss reduction (G: 2.39–4.07, F: 2.82–3.66; D_X: 0.36–0.93, D_Y: 0.15–1.39), yielding more accurate and aesthetically pleasing color images than the baseline architecture. The fundamental cycle consistency loss structure was kept intact so that the unpaired training capability remained available. These results demonstrate that careful architectural modifications can substantially improve negative colorization while opening avenues for future deep learning-based digital image restoration technologies.
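For readers unfamiliar with the two components the abstract emphasizes, the sketch below illustrates them in PyTorch: a residual block of the kind added to the generators, and the cycle consistency loss that is preserved across all nineteen modifications. This is a minimal illustration under standard CycleGAN conventions, not the authors' implementation; the filter count (256) and loss weight (lambda_cyc = 10.0) are assumed defaults, and G and F stand for the negative-to-color and color-to-negative generators respectively.

```python
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """A generic residual block of the kind the paper adds to its generators.
    The channel count of 256 is a conventional CycleGAN default, not the
    paper's reported value."""

    def __init__(self, channels: int = 256):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1),
            nn.Conv2d(channels, channels, kernel_size=3),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection: the block learns a residual on top of its input.
        return x + self.block(x)


def cycle_consistency_loss(G: nn.Module, F: nn.Module,
                           real_x: torch.Tensor, real_y: torch.Tensor,
                           lambda_cyc: float = 10.0) -> torch.Tensor:
    """Cycle consistency term kept intact across the modifications.
    G: negative -> color generator; F: color -> negative generator.
    lambda_cyc = 10.0 is the standard CycleGAN weight (an assumption here)."""
    l1 = nn.L1Loss()
    # x -> G(x) -> F(G(x)) should reconstruct x, and symmetrically for y.
    forward_cycle = l1(F(G(real_x)), real_x)
    backward_cycle = l1(G(F(real_y)), real_y)
    return lambda_cyc * (forward_cycle + backward_cycle)
```

Because this term compares each image only with its own reconstruction, the negative images (train A) and the color images (train B) never need to be paired, which is why preserving this loss structure is what keeps unpaired training possible.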
License
Copyright (c) 2025 Khaulyca Arva Artemysia, Arief Suryadi Satyawan, Mokhammad Mirza Etnisa Haqiqi, Helfy Susilawati, Beni Wijaya, Sani Moch Sopian, Muhammad Ikbal Shamie, Firman

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).