TY - GEN
T1 - Building Trust in AI-Powered Assessment Through Explainable Machine Learning Models
AU - Ayanwale, Musa Adekunle
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/10/21
Y1 - 2025/10/21
N2 - The increasing integration of artificial intelligence (AI)-based assessment systems in higher education offers efficiency and scalability in grading and feedback but raises concerns about transparency, fairness, and student trust. This study investigates the role of Explainable AI (XAI) in improving trust and engagement within AI-driven assessment environments, focusing on the Thuto Learning Management System (LMS) at the National University of Lesotho (NUL). A cross-sectional survey of 850 undergraduate students, who had engaged with Thuto LMS for at least two semesters, was conducted using a 55-item questionnaire measuring 11 constructs, including digital self-efficacy, AI literacy, ethics awareness, and feedback perceptions. Assessment outcomes were retrieved from LMS records and binarised into pass/fail classifications. Three supervised machine learning models—Logistic Regression, Random Forest, and K-Nearest Neighbours—were developed to predict assessment outcomes, and post hoc interpretability was achieved using SHAP and DALEX frameworks. Logistic Regression demonstrated the highest predictive accuracy (73.7%), while feature importance analyses revealed that feedback usefulness, discussion engagement, and digital self-efficacy were the strongest predictors of academic success. Findings underscore the potential of XAI for promoting fairness, transparency, and learner trust in AI-powered educational systems, particularly in underexplored African higher education contexts.
KW - AI in higher education
KW - educational assessment
KW - explainable artificial intelligence
KW - learning management system
KW - student engagement
UR - https://www.scopus.com/pages/publications/105021488548
U2 - 10.1145/3736251.3747313
DO - 10.1145/3736251.3747313
M3 - Conference contribution
AN - SCOPUS:105021488548
T3 - CompEd 2025 - Proceedings of the ACM Global Computing Education Conference 2025
SP - 403
EP - 404
BT - CompEd 2025 - Proceedings of the ACM Global Computing Education Conference 2025
PB - Association for Computing Machinery, Inc
T2 - 3rd ACM Global Computing Education Conference, CompEd 2025
Y2 - 21 October 2025 through 25 October 2025
ER -