Comparison of the CART and AdaBoost Algorithms for Dementia Classification

Muhammad Arya All Fajri, M Aldi Saputra, Anita Desiani, Bambang Suprihatin, Herlina Hanum

Abstract


Dementia is a health disorder characterized by a decline in memory, cognitive ability, and behavior that interferes with daily activities. The public receives little information about early detection of dementia because health facilities are limited. Classification using data mining can support early detection of dementia. This study aims to compare the CART and AdaBoost algorithms to determine which method is most effective for dementia classification. The data were split using the percentage split and k-fold cross-validation methods. The percentage split divides the data into two parts, with 70% used for training and 30% for testing. K-fold cross-validation partitions the data into 10 folds, using 1 fold as testing data and the remaining 9 folds as training data, and repeats the process 10 times so that each fold serves once as the test set. ADASYN is used to balance the data in each class. The performance evaluation of the two algorithms shows that AdaBoost with ADASYN and k-fold cross-validation achieves the highest values for accuracy, precision, recall, F1-score, and ROC-AUC, at 92.52%, 92.11%, 92.52%, 91.46%, and 96.85%, respectively. These results indicate that the AdaBoost algorithm is very good at correctly predicting all dementia cases, maintaining a balance between precision and recall, and distinguishing the three dementia classes. The findings demonstrate the advantage of the ensemble learning approach in handling data variation and improving the stability of the dementia classification model. This study shows that AdaBoost performs considerably better than CART in dementia classification.
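The sketch below illustrates the kind of evaluation pipeline the abstract describes: ADASYN oversampling combined with 10-fold cross-validation to compare CART and AdaBoost on accuracy, precision, recall, F1-score, and ROC-AUC. It is a minimal example, not the authors' exact setup; the choice of scikit-learn and imbalanced-learn, the default hyperparameters, the weighted multiclass metrics with one-vs-rest ROC-AUC, and the synthetic placeholder data are all assumptions. The 70/30 percentage split variant would replace the cross-validation loop with scikit-learn's train_test_split(test_size=0.3).

# Minimal sketch of the comparison pipeline, under the assumptions stated above.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier            # CART
from sklearn.ensemble import AdaBoostClassifier            # AdaBoost
from sklearn.model_selection import StratifiedKFold, cross_validate
from imblearn.over_sampling import ADASYN
from imblearn.pipeline import Pipeline                     # applies ADASYN only to training folds

# Placeholder data with three imbalanced classes; a stand-in for the real dementia dataset.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5,
                           n_classes=3, weights=[0.6, 0.3, 0.1], random_state=42)

models = {
    "CART": DecisionTreeClassifier(random_state=42),
    "AdaBoost": AdaBoostClassifier(random_state=42),
}

# Weighted multiclass metrics mirroring those reported in the abstract.
scoring = {
    "accuracy": "accuracy",
    "precision": "precision_weighted",
    "recall": "recall_weighted",
    "f1": "f1_weighted",
    "roc_auc": "roc_auc_ovr_weighted",
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)  # 10-fold cross-validation

for name, clf in models.items():
    # ADASYN is fitted inside each training fold so no synthetic samples leak into the test fold.
    pipe = Pipeline([("adasyn", ADASYN(random_state=42)), ("clf", clf)])
    scores = cross_validate(pipe, X, y, cv=cv, scoring=scoring)
    print(name, {m: round(scores[f"test_{m}"].mean() * 100, 2) for m in scoring})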


Keywords


dementia; CART; AdaBoost; percentage split; k-fold cross-validation


DOI: http://dx.doi.org/10.22441/format.2026.v15.i1.002



Copyright (c) 2026 Format : Jurnal Ilmiah Teknik Informatika

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Format : Jurnal Ilmiah Teknik Informatika
Fakultas Ilmu Komputer Universitas Mercu Buana
Jl. Raya Meruya Selatan, Kembangan, Jakarta 11650
Tel./Fax: +62215840816
http://publikasi.mercubuana.ac.id/index.php/format

p-ISSN: 2089-5615
e-ISSN: 2722-7162

