Comparative Analysis of BERT and LSTM Models for Sentiment Classification of Mobile Game User Reviews
DOI: https://doi.org/10.30871/jaic.v10i1.12149

Keywords: Sentiment Classification, LSTM, BERT Multilanguage, Direct Ads, Mobile Game Reviews

Abstract
Sentiment classification of user reviews for mobile games that rely on direct advertising (direct ads) is crucial for understanding player perceptions and improving user experience. This study compares the performance of two deep learning architectures, Long Short-Term Memory (LSTM) and multilingual Bidirectional Encoder Representations from Transformers (BERT), in classifying review sentiment into three categories: positive, negative, and neutral. The dataset consists of reviews of games that employ direct ads and underwent rule-based labeling and text preprocessing. The LSTM model was built from scratch with a custom embedding layer, while the multilingual BERT model was fine-tuned using a transfer learning approach. Evaluation was based on accuracy, precision, recall, and F1-score. Experimental results show that multilingual BERT achieves a lower validation loss than LSTM (0.37 vs. 0.44) and significantly outperforms LSTM in F1-score and in its ability to capture multilingual linguistic context. However, LSTM has advantages in computational efficiency and training speed. These findings offer practical recommendations for developers selecting a sentiment analysis model based on accuracy requirements and resource availability.
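To make the fine-tuning setup concrete, the sketch below outlines a three-class sentiment classifier built on multilingual BERT with the Hugging Face Transformers library. The checkpoint name (bert-base-multilingual-cased), hyperparameters, macro averaging of the metrics, and the toy review texts are illustrative assumptions and do not reproduce the authors' exact configuration or dataset.

# Minimal sketch of transfer-learning fine-tuning of multilingual BERT
# for three-class (negative / neutral / positive) review sentiment.
# Checkpoint, hyperparameters, and data below are assumptions for illustration.
import numpy as np
import torch
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "bert-base-multilingual-cased"  # assumed multilingual BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)


class ReviewDataset(torch.utils.data.Dataset):
    """Wraps tokenized review texts and integer sentiment labels (0=neg, 1=neu, 2=pos)."""

    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


def compute_metrics(eval_pred):
    # Accuracy plus precision, recall, and F1; macro averaging is an assumed choice.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="macro")
    return {"accuracy": accuracy_score(labels, preds),
            "precision": precision, "recall": recall, "f1": f1}


# Toy reviews standing in for the rule-labeled game-review dataset.
train_ds = ReviewDataset(["great game", "too many ads", "it is okay"], [2, 0, 1])
val_ds = ReviewDataset(["love it", "crashes constantly"], [2, 0])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbert-sentiment-sketch",
                           num_train_epochs=3, per_device_train_batch_size=16),
    train_dataset=train_ds,
    eval_dataset=val_ds,
    compute_metrics=compute_metrics,
)
trainer.train()
print(trainer.evaluate())  # reports eval loss plus accuracy, precision, recall, F1

The same ReviewDataset and compute_metrics function could feed a from-scratch LSTM baseline, which keeps the comparison between the two architectures limited to the model itself.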
License
Copyright (c) 2026 Toto Indriyatmoko, Majid Rahardi, Hastari Utama, Arvin Claudy Frobenius

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).