AI Image Detection Using EfficientNetB0 Architecture on Generative Adversarial Network
DOI: https://doi.org/10.30871/jaic.v10i2.12481

Keywords: GAN Detection, EfficientNetB0, Cross-Generator Generalization, Transfer Learning, Fake Images

Abstract
The development of Generative Adversarial Networks (GANs) has produced synthetic images that are increasingly difficult to distinguish from real photographs, driving the need for reliable automated detection systems. The core problem is that detection models frequently fail when tested on generators different from those used during training, a phenomenon known as the generalization gap. This study evaluates EfficientNetB0 in detecting AI-generated images through a cross-generator approach across five GAN architectures: ProGAN, StyleGAN, StyleGAN2, StyleGAN2-ADA, and StyleGAN3. The model was trained on a StyleGAN2 dataset using transfer learning from ImageNet, then evaluated on the four other generators without retraining. In-domain results showed an accuracy of 99.30%, an F1-score of 99.30%, and an AUC of 99.99%. However, out-of-domain testing revealed an average accuracy drop of 16.64%. StyleGAN2-ADA achieved 99.33% due to its architectural similarity to StyleGAN2, suggesting that generator architecture is a more decisive factor than training strategy. In contrast, accuracy on StyleGAN3 dropped to 63.05%, because its alias-free architecture eliminates the visual patterns the model typically relies on for detection. The model recognizes real images reliably (high specificity), but its ability to detect synthetic images degrades sharply (low sensitivity), with the false negative rate rising from 0.15% in-domain to 72.65% on StyleGAN3. These findings highlight the limitations of single-generator training and the need to explore multi-generator strategies or frequency-based feature methods.
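The specificity/sensitivity asymmetry described in the abstract can be made concrete by computing the reported metrics from a binary confusion matrix. The sketch below uses illustrative counts chosen to mimic the out-of-domain pattern (high specificity, low sensitivity); they are not the paper's actual data.

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute fake-image detection metrics from confusion-matrix counts.

    Convention: 'positive' = synthetic (fake) image, 'negative' = real image.
    """
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # recall on fakes
    specificity = tn / (tn + fp) if tn + fp else 0.0   # recall on reals
    fnr = fn / (fn + tp) if fn + tp else 0.0           # fakes missed
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return {"accuracy": accuracy, "f1": f1, "sensitivity": sensitivity,
            "specificity": specificity, "fnr": fnr}

# Illustrative out-of-domain scenario: real images are classified well,
# but most fakes slip through, so accuracy collapses toward chance.
m = detection_metrics(tp=300, fp=20, tn=980, fn=700)
print(m)  # accuracy 0.64, specificity 0.98, sensitivity 0.30, fnr 0.70
```

Note that the false negative rate is simply one minus sensitivity, which is why a 72.65% FNR and low sensitivity describe the same failure mode on StyleGAN3.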
References
[1] I. Goodfellow et al., "Generative adversarial nets," Adv. Neural Inf. Process. Syst., vol. 27, pp. 2672–2680, 2014.
[2] T. Karras, T. Aila, S. Laine, and J. Lehtinen, "Progressive Growing of GANs for Improved Quality, Stability, and Variation," Feb. 26, 2018, arXiv: arXiv:1710.10196. doi: 10.48550/arXiv.1710.10196.
[3] T. Karras et al., "Alias-Free Generative Adversarial Networks," in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2021, pp. 852–863. Accessed: Nov. 17, 2025. https://proceedings.neurips.cc/paper/2021/hash/076ccd93ad68be51f23707988e934906-Abstract.html
[4] S. J. Nightingale and H. Farid, "AI-synthesized faces are indistinguishable from real faces and more trustworthy," Proc. Natl. Acad. Sci., vol. 119, no. 8, 2022, doi: 10.1073/pnas.2120481119.
[5] A. Rossler, D. Cozzolino, L. Verdoliva, C. Riess, J. Thies, and M. Niessner, "FaceForensics++: Learning to Detect Manipulated Facial Images," pp. 1–11, Oct. 2019, doi: 10.1109/iccv.2019.00009.
[6] N. Jain, P. Majumdar, M. Singh, and M. Vatsa, "Detecting deepfakes with self-blended images," IEEE Trans. Biom. Behav. Identity Sci., vol. 5, no. 3, pp. 326–337, 2023, doi: 10.1109/TBIOM.2023.3245089.
[7] S.-Y. Wang, O. Wang, R. Zhang, A. Owens, and A. A. Efros, "CNN-Generated Images Are Surprisingly Easy to Spot… for Now," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA: IEEE, Jun. 2020, pp. 8692–8701. doi: 10.1109/CVPR42600.2020.00872.
[8] D. Gragnaniello, D. Cozzolino, F. Marra, G. Poggi, and L. Verdoliva, "Are GAN generated images easy to detect? A critical analysis of the state-of-the-art," presented at the IEEE International Conference on Multimedia and Expo, 2021, pp. 1–6. doi: 10.1109/ICME51207.2021.9428429.
[9] L. Nataraj et al., "Detecting GAN generated fake images using co-occurrence matrices," Electron. Imaging, vol. 2019, no. 5, pp. 532-1, 2019, doi: 10.2352/ISSN.2470-1173.2019.5.MWSF-532.
[10] M. Albright and S. McCloskey, "Source generator attribution via inversion," presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2021, pp. 2701–2710. doi: 10.1109/CVPRW53098.2021.00305.
[11] J. Frank, T. Eisenhofer, L. Schönherr, A. Fischer, D. Kolossa, and T. Holz, "Leveraging frequency analysis for deep fake image recognition," presented at the International Conference on Machine Learning, 2020, pp. 3247–3258.
[12] J. Deng, W. Dong, R. Socher, L. J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," presented at the IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255. doi: 10.1109/CVPR.2009.5206848.
[13] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," presented at the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778. doi: 10.1109/CVPR.2016.90.
[14] J. Yosinski, J. Clune, Y. Bengio, and H. Lipson, "How transferable are features in deep neural networks?," Adv. Neural Inf. Process. Syst., vol. 27, pp. 3320–3328, 2014.
[15] M. Tan and Q. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," in Proceedings of the 36th International Conference on Machine Learning, PMLR, May 2019, pp. 6105–6114. Accessed: Nov. 18, 2025. https://proceedings.mlr.press/v97/tan19a.html
[16] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014, doi: 10.48550/arXiv.1409.1556.
[17] J. Cao, C. Ma, T. Yao, S. Chen, S. Ding, and X. Yang, "End-to-end reconstruction-classification learning for face forgery detection," presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 4113–4122. doi: 10.1109/CVPR52688.2022.00409.
[18] T. Karras, S. Laine, and T. Aila, "A Style-Based Generator Architecture for Generative Adversarial Networks," presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 4401–4410. doi: 10.1109/CVPR.2019.00453.
[19] T. Karras, S. Laine, M. Aittala, J. Hellsten, J. Lehtinen, and T. Aila, "Analyzing and Improving the Image Quality of StyleGAN," in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA: IEEE, Jun. 2020, pp. 8107–8116. doi: 10.1109/CVPR42600.2020.00813.
[20] T. Karras, M. Aittala, J. Hellsten, S. Laine, J. Lehtinen, and T. Aila, "Training Generative Adversarial Networks with Limited Data," in Advances in Neural Information Processing Systems, Curran Associates, Inc., 2020, pp. 12104–12114. https://proceedings.neurips.cc/paper/2020/hash/8d30aa96e72440759f74bd2306c1fa3d-Abstract.html
[21] C. Shorten and T. M. Khoshgoftaar, "A survey on image data augmentation for deep learning," J. Big Data, vol. 6, no. 1, pp. 1–48, 2019, doi: 10.1186/s40537-019-0197-0.
[22] F. Marra, C. Saltori, G. Boato, and L. Verdoliva, "Incremental learning for the detection and classification of GAN-generated images," presented at the IEEE International Workshop on Information Forensics and Security, 2019, pp. 1–6. doi: 10.1109/WIFS47025.2019.9035107.
License
Copyright (c) 2026 Rudi Hartadi, Sindhu Rakasiwi

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).
