Student Performance Prediction with Decision Tree Ensembles and Feature Selection Techniques
Abstract
The prevalence of student dropout in academic settings is a serious issue that affects individuals and society as a whole. Timely intervention and support can be provided to at-risk students if student performance can be predicted accurately. However, class imbalance and data complexity in educational data pose major challenges for traditional predictive analytics. Our research focuses on utilising machine learning techniques to predict student performance while handling imbalanced datasets. To address the class imbalance problem, we employed both oversampling and undersampling techniques in our decision tree ensemble methods for the risk classification of prospective students. The effectiveness of the classifiers was evaluated by varying the ensemble sizes and the oversampling and undersampling ratios. Additionally, we conducted experiments integrating feature selection with the best-performing ensemble classifiers to further enhance prediction. Based on extensive experimentation, we concluded that ensemble methods such as Random Forest, Bagging, and Random Undersampling Boosting (RUSBoost) perform well on measures including Recall, Precision, F1-score, Area Under the Receiver Operating Characteristic Curve, and Geometric Mean. The F1-score of 0.849 achieved by the RUSBoost classifier in conjunction with Least Absolute Shrinkage and Selection Operator (LASSO) feature selection indicates that this combination produces the best results.
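To make the described pipeline concrete, the following is a minimal sketch (not the authors' code) of LASSO-based feature selection feeding a RUSBoost ensemble and evaluated with F1-score, AUC, and Geometric Mean on an imbalanced dataset; the library choices (scikit-learn, imbalanced-learn), the synthetic data, and all parameter values are assumptions for illustration only.

```python
# Hypothetical sketch: LASSO feature selection + RUSBoost on imbalanced data.
# Dataset, alpha, ensemble size, and class ratio are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.metrics import f1_score, roc_auc_score
from imblearn.ensemble import RUSBoostClassifier
from imblearn.metrics import geometric_mean_score

# Synthetic stand-in for an imbalanced student-performance dataset (~80/20 classes).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)

pipeline = Pipeline([
    # LASSO shrinks uninformative coefficients towards zero; SelectFromModel
    # retains only the features with surviving coefficients.
    ("lasso_fs", SelectFromModel(Lasso(alpha=0.01))),
    # RUSBoost: boosting that randomly undersamples the majority class each round.
    ("rusboost", RUSBoostClassifier(n_estimators=100, random_state=42)),
])
pipeline.fit(X_train, y_train)

y_pred = pipeline.predict(X_test)
y_score = pipeline.predict_proba(X_test)[:, 1]
print(f"F1:     {f1_score(y_test, y_pred):.3f}")
print(f"AUC:    {roc_auc_score(y_test, y_score):.3f}")
print(f"G-mean: {geometric_mean_score(y_test, y_pred):.3f}")
```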