SciBERT Optimisation for Named Entity Recognition on NCBI Disease Corpus with Hyperparameter Tuning

Authors

  • Abu Salam, Universitas Dian Nuswantoro
  • Syaiful Rizal Sidiq, Universitas Dian Nuswantoro

DOI:

https://doi.org/10.30871/jaic.v9i2.9283

Keywords:

Named Entity Recognition (NER), SciBERT, Hyperparameter Tuning, Biomedical Text Processing, Natural Language Processing (NLP)

Abstract

Named Entity Recognition (NER) in the biomedical domain faces complex challenges due to the wide variety of medical terms and the contexts in which they are used. Transformer-based models such as SciBERT have proven effective for natural language processing (NLP) tasks in scientific domains, but their performance depends heavily on proper hyperparameter selection. This study therefore analyses the impact of hyperparameter tuning on the performance of SciBERT for NER on the NCBI Disease Corpus. A baseline SciBERT model was first trained without tuning, after which hyperparameters were optimised using grid search, random search, and Bayesian optimisation. Models were evaluated with precision, recall, and F1-score. Of the three methods, grid search and random search produced the best performance, each reaching a precision, recall, and F1-score of 0.82, up from the baseline's precision and recall of 0.72 and F1-score of 0.68. These results confirm that proper hyperparameter tuning improves accuracy and efficiency in medical entity extraction, and they contribute to the development of optimisation methods in biomedical text processing, particularly in improving the effectiveness of the SciBERT Transformer model for NER.
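This page does not reproduce the authors' implementation, but the workflow the abstract describes (fine-tune SciBERT for token classification on the NCBI Disease Corpus, search the hyperparameter space, score with entity-level precision, recall, and F1) maps naturally onto the Hugging Face stack. The sketch below is a minimal grid-search loop under assumed settings: the allenai/scibert_scivocab_uncased checkpoint, the public ncbi_disease dataset, and an illustrative grid over learning rate and batch size. None of these are confirmed as the paper's choices.

```python
# Illustrative grid search around SciBERT fine-tuning for NER on the NCBI
# Disease Corpus (Hugging Face transformers/datasets/evaluate + seqeval).
# Checkpoint, grid values, and epoch count are assumptions, not the paper's.
import numpy as np
import evaluate
from datasets import load_dataset
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          DataCollatorForTokenClassification, Trainer,
                          TrainingArguments)

CHECKPOINT = "allenai/scibert_scivocab_uncased"  # assumed SciBERT variant
dataset = load_dataset("ncbi_disease")           # tags: O, B-Disease, I-Disease
label_names = dataset["train"].features["ner_tags"].feature.names
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
seqeval = evaluate.load("seqeval")

def tokenize_and_align(batch):
    # Tokenise pre-split words and copy each word's tag to its first
    # sub-token; special tokens and continuation pieces get -100 (ignored).
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    all_labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        ids, prev = [], None
        for w in enc.word_ids(batch_index=i):
            ids.append(tags[w] if w is not None and w != prev else -100)
            prev = w
        all_labels.append(ids)
    enc["labels"] = all_labels
    return enc

tokenized = dataset.map(tokenize_and_align, batched=True)

def compute_metrics(eval_pred):
    # Entity-level scores via seqeval, skipping the masked -100 positions.
    logits, gold = eval_pred
    pred = np.argmax(logits, axis=-1)
    y_pred = [[label_names[q] for q, t in zip(qs, ts) if t != -100]
              for qs, ts in zip(pred, gold)]
    y_true = [[label_names[t] for q, t in zip(qs, ts) if t != -100]
              for qs, ts in zip(pred, gold)]
    scores = seqeval.compute(predictions=y_pred, references=y_true)
    return {"precision": scores["overall_precision"],
            "recall": scores["overall_recall"],
            "f1": scores["overall_f1"]}

best = {"f1": -1.0}
for lr in (2e-5, 3e-5, 5e-5):       # assumed grid over learning rate
    for bs in (16, 32):             # assumed grid over batch size
        trainer = Trainer(
            # Fresh model per configuration so runs do not contaminate
            # each other.
            model=AutoModelForTokenClassification.from_pretrained(
                CHECKPOINT, num_labels=len(label_names)),
            args=TrainingArguments(output_dir=f"ner-lr{lr}-bs{bs}",
                                   learning_rate=lr,
                                   per_device_train_batch_size=bs,
                                   num_train_epochs=3,
                                   report_to="none"),
            train_dataset=tokenized["train"],
            eval_dataset=tokenized["validation"],
            data_collator=DataCollatorForTokenClassification(tokenizer),
            compute_metrics=compute_metrics,
        )
        trainer.train()
        metrics = trainer.evaluate()  # scores on the validation split
        if metrics["eval_f1"] > best["f1"]:
            best = {"f1": metrics["eval_f1"], "lr": lr, "bs": bs}

print("best configuration:", best)
```

The random-search arm would replace the nested loops with random draws from the same ranges, and the Bayesian-optimisation arm could reuse this setup through Trainer.hyperparameter_search with the Optuna backend, which ships with Transformers.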

Published

2025-03-25

How to Cite

[1]
A. Salam and S. R. Sidiq, “SciBERT Optimisation for Named Entity Recognition on NCBI Disease Corpus with Hyperparameter Tuning”, JAIC, vol. 9, no. 2, pp. 432–441, Mar. 2025.

Issue

Vol. 9 No. 2 (2025)

Section

Articles
