Enhancing Negative Film Colorization through Systematic CycleGAN Architectural Modifications: A Comprehensive Analysis of Generator and Discriminator Performance

Authors

  • Khaulyca Arva Artemysia, Universitas Garut
  • Arief Suryadi Satyawan, Badan Riset dan Inovasi Nasional
  • Mokhammad Mirza Etnisa Haqiqi, Universitas Indonesia
  • Helfy Susilawati, Universitas Garut
  • Beni Wijaya, Universitas Garut
  • Sani Moch Sopian, Universitas Garut
  • Muhammad Ikbal Shamie, Universitas Garut
  • Firman, Universitas Garut

DOI:

https://doi.org/10.30871/jaic.v9i3.9553

Keywords:

Negative film colorization, CycleGAN, Architectural modifications, Image processing, Generative adversarial networks

Abstract

This research addresses the need for deep learning-based negative film colorization technology through systematic modifications to the CycleGAN architecture. Unlike conventional approaches that colorize black-and-white images, this study targets the conversion of digitized negative film images, which pose unique challenges such as color inversion and detail restoration. The dataset consists of 500 negative images (train A) and 500 unpaired color images (train B), plus 5 negative and 5 color images for testing; all images were obtained from personal scanning. Nineteen architectural modifications were proposed and tested individually rather than all at once. The primary focus was on developing the network structures themselves, without external evaluation metrics such as SSIM, PSNR, or FID. Modifications included the addition of residual blocks and changes to filter quantities, activation functions, and inter-layer connections. Evaluation was conducted qualitatively and through generator and discriminator loss values. The best-performing modification (Modification 4) demonstrated a significant loss reduction (G: 2.39–4.07, F: 2.82–3.66; D_X: 0.36–0.93, D_Y: 0.15–1.39), yielding more accurate and aesthetically pleasing color images than the baseline architecture. The fundamental cycle consistency loss structure was kept intact so that the unpaired training capability remained available. This research demonstrates that careful architectural modifications can significantly enhance negative colorization results, while also opening opportunities for the future development of deep learning-based digital image restoration technologies.
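The abstract notes that the cycle consistency loss was kept intact to preserve unpaired training. As an illustrative sketch (not the authors' code), the loss can be expressed as L_cyc = ||F(G(x)) − x||₁ + ||G(F(y)) − y||₁, where G maps negatives to color images and F maps color images back to negatives. Here both generators are stood in for by simple channel inversion on [0, 1] images; in the paper they are learned networks, and the weighting factor `lam` is an assumed hyperparameter.

```python
import numpy as np

def G(x):
    """Toy stand-in for the negative -> color generator: channel inversion."""
    return 1.0 - x

def F(y):
    """Toy stand-in for the color -> negative generator: channel inversion."""
    return 1.0 - y

def cycle_consistency_loss(x, y, lam=10.0):
    """L1 cycle loss: x -> G(x) -> F(G(x)) should recover x,
    and y -> F(y) -> G(F(y)) should recover y."""
    forward = np.mean(np.abs(F(G(x)) - x))   # negative -> color -> negative
    backward = np.mean(np.abs(G(F(y)) - y))  # color -> negative -> color
    return lam * (forward + backward)

x = np.random.rand(8, 8, 3)  # toy "negative" image in [0, 1]
y = np.random.rand(8, 8, 3)  # toy color image in [0, 1]
print(cycle_consistency_loss(x, y))  # near zero: the toy mappings are exact inverses
```

Because this loss never compares G(x) to a paired ground-truth color image, it is what allows training on the unpaired train A / train B sets described above.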


References

[1] J. W. Su, H. K. Chu, and J.-B. Huang, “Instance-aware image colorization,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, 2020, pp. 7965–7974. doi: 10.1109/CVPR42600.2020.00799.

[2] 2020 International Conference on Machine Vision and Image Processing (MVIP), Department of Computer Engineering, University of Tehran, and Faculty of Engineering, College of Farabi, Qom, Iran, 19 February and 30 April 2020. IEEE, 2020.

[3] X. Liu, Z. Gao, and B. M. Chen, “MLFcGAN: Multilevel Feature Fusion-Based Conditional GAN for Underwater Image Color Correction,” IEEE Geoscience and Remote Sensing Letters, vol. 17, no. 9, pp. 1488–1492, Sep. 2020, doi: 10.1109/LGRS.2019.2950056.

[4] X. Liu, A. Gherbi, Z. Wei, W. Li, and M. Cheriet, “Multispectral Image Reconstruction from Color Images Using Enhanced Variational Autoencoder and Generative Adversarial Network,” IEEE Access, vol. 9, pp. 1666–1679, 2021, doi: 10.1109/ACCESS.2020.3047074.

[5] Z. Ni, W. Yang, S. Wang, L. Ma, and S. Kwong, “Towards Unsupervised Deep Image Enhancement with Generative Adversarial Network,” IEEE Transactions on Image Processing, vol. 29, pp. 9140–9151, 2020, doi: 10.1109/TIP.2020.3023615.

[6] H. Tang, H. Liu, D. Xu, P. H. S. Torr, and N. Sebe, “AttentionGAN: Unpaired Image-to-Image Translation Using Attention-Guided Generative Adversarial Networks,” IEEE Transactions on Neural Networks and Learning Systems, vol. 34, no. 4, pp. 1972–1987, Apr. 2023, doi: 10.1109/TNNLS.2021.3105725.

[7] H. Kumar, A. Banerjee, S. Saurav, and S. Singh, “ParaColorizer: Realistic image colorization using parallel generative networks,” The Visual Computer, vol. 40, no. 6, pp. 4039–4054, Jun. 2024, doi: 10.1007/s00371-023-03067-7.

[8] W. Kurniawan, Y. Kristian, and J. Santoso, “Pemanfaatan Deep Convolutional Auto-encoder untuk Mitigasi Serangan Adversarial Attack pada Citra Digital” [Utilization of a Deep Convolutional Auto-encoder to Mitigate Adversarial Attacks on Digital Images], J-INTECH (Journal of Information and Technology).

[9] Proceedings of the International Conference on Electronics and Sustainable Communication Systems (ICESC 2020), 02–04 July 2020. IEEE, 2020.

[10] J. Duan, M. Gao, G. Zhao, W. Zhao, S. Mo, and W. Zhang, “FAColorGAN: a dual-branch generative adversarial network for near-infrared image colorization,” Signal, Image and Video Processing, Sep. 2024, doi: 10.1007/s11760-024-03266-2.

[11] S. Ghosh, P. Roy, S. Bhattacharya, U. Pal, and M. Blumenstein, “TIC: text-guided image colorization using conditional generative model,” Multimedia Tools and Applications, vol. 83, no. 14, pp. 41121–41136, Apr. 2024, doi: 10.1007/s11042-023-15330-z.

[12] J. Lee, E. Kim, Y. Lee, D. Kim, J. Chang, and J. Choo, “Reference-based sketch image colorization using augmented-self reference and dense semantic correspondence,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, 2020, pp. 5800–5809. doi: 10.1109/CVPR42600.2020.00584.

[13] Q. R. Han, W. Z. Zhu, and Q. Zhu, “Icon Colorization Based on Triple Conditional Generative Adversarial Networks,” in 2020 IEEE International Conference on Visual Communications and Image Processing, VCIP 2020, Institute of Electrical and Electronics Engineers Inc., Dec. 2020, pp. 391–394. doi: 10.1109/VCIP49819.2020.9301890.

[14] T. Le-Tien, T. H. Duy Quang, H. Y. Vy, T. Nguyen-Thanh, and H. Phan-Xuan, “GAN-based Thermal Infrared Image Colorization for Enhancing Object Identification,” in Proceedings - 2021 International Symposium on Electrical and Electronics Engineering, ISEE 2021, Institute of Electrical and Electronics Engineers Inc., Apr. 2021, pp. 90–94. doi: 10.1109/ISEE51682.2021.9418801.

[15] Z. Yi, H. Zhang, P. Tan, and M. Gong, “DualGAN: Unsupervised Dual Learning for Image-to-Image Translation,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017.

[16] Q. Li and L. Wang, Eds., Proceedings, 2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI 2019), 19–21 October 2019, Huaqiao, China. IEEE, 2019.

[17] 2019 3rd International Conference on Electronics, Communication and Aerospace Technology (ICECA). IEEE, 2019.

[18] A. Mehri and A. D. Sappa, “Colorizing near infrared images through a cyclic adversarial approach of unpaired samples,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE Computer Society, Jun. 2019, pp. 971–979. doi: 10.1109/CVPRW.2019.00128.

[19] X. Chao, J. Cao, Y. Lu, and Q. Dai, “Improved Training of Spectral Normalization Generative Adversarial Networks,” in 2020 2nd World Symposium on Artificial Intelligence, WSAI 2020, Institute of Electrical and Electronics Engineers Inc., Jun. 2020, pp. 24–28. doi: 10.1109/WSAI49636.2020.9143310.

[20] K. Du, C. Liu, L. Cao, Y. Guo, F. Zhang, and T. Wang, “Double-Channel Guided Generative Adversarial Network for Image Colorization,” IEEE Access, vol. 9, pp. 21604–21617, 2021, doi: 10.1109/ACCESS.2021.3055575.

[21] S. Yan, Y. Liu, J. Li, and H. Xiao, “DDGAN: Double Discriminators GAN for Accurate Image Colorization,” in Proceedings - 2020 6th International Conference on Big Data and Information Analytics, BigDIA 2020, Institute of Electrical and Electronics Engineers Inc., Dec. 2020, pp. 214–219. doi: 10.1109/BigDIA51454.2020.00042.

[22] J. Wang et al., “CA-GAN: Class-Condition Attention GAN for Underwater Image Enhancement,” IEEE Access, vol. 8, pp. 130719–130728, 2020, doi: 10.1109/ACCESS.2020.3003351.

[23] C. Liang, Y. Sheng, and Y. Mo, “Grayscale Image Colorization with GAN and CycleGAN in Different Image Domain,” Jan. 2024, [Online]. Available: http://arxiv.org/abs/2401.11425

[24] K. Doi, K. Sakurada, M. Onishi, and A. Iwasaki, “GAN-Based SAR-to-Optical Image Translation with Region Information,” in International Geoscience and Remote Sensing Symposium (IGARSS), Institute of Electrical and Electronics Engineers Inc., Sep. 2020, pp. 2069–2072. doi: 10.1109/IGARSS39084.2020.9323085.

[25] F. Wang, X. Feng, X. Guo, L. Xu, L. Xie, and S. Chang, “Improving de novo Molecule Generation by Embedding LSTM and Attention Mechanism in CycleGAN,” Frontiers in Genetics, vol. 12, Aug. 2021, doi: 10.3389/fgene.2021.709500.

[26] V. Dumoulin and F. Visin, “A guide to convolution arithmetic for deep learning,” Mar. 2016, [Online]. Available: http://arxiv.org/abs/1603.07285

[27] B. Sim, G. Oh, J. Kim, C. Jung, and J. C. Ye, “Optimal Transport driven CycleGAN for Unsupervised Learning in Inverse Problems,” Sep. 2019, [Online]. Available: http://arxiv.org/abs/1909.12116

[28] B. Yagoub, H. Ibrahem, A. Salem, and H. S. Kang, “Single Energy X-ray Image Colorization Using Convolutional Neural Network for Material Discrimination,” Electronics (Switzerland), vol. 11, no. 24, Dec. 2022, doi: 10.3390/electronics11244101.

[29] B. Saini et al., “Colorizing Multi-Modal Medical Data: An Autoencoder-based Approach for Enhanced Anatomical Information in X-ray Images,” EAI Endorsed Trans Pervasive Health Technol, vol. 10, 2024, doi: 10.4108/eetpht.10.5540.


Published

2025-06-17

How to Cite

[1]
K. A. A. Khaulyca, “Enhancing Negative Film Colorization through Systematic CycleGAN Architectural Modifications: A Comprehensive Analysis of Generator and Discriminator Performance”, JAIC, vol. 9, no. 3, pp. 897–909, Jun. 2025.

Issue

Section

Articles
