Explainable AI (XAI) refers to artificial intelligence (AI) models that combine high predictive performance with enhanced transparency, supporting informed decision-making in educational administration. This systematic literature review examines prior research on the use of XAI in education, aiming to connect theoretical understanding with real-world practice while advancing ethical AI. Data were gathered from 15 peer-reviewed articles indexed in renowned databases, including Web of Science, Scopus, Springer, and Elsevier. The search focused on studies examining applications of XAI in education, offering insights into recent advances and their implications. The review surveys XAI tools, their feature sets, their operational limitations, and the fundamental needs they address in educational contexts. AI and ML researchers are actively working to enhance XAI tools, though their target audiences and expected outcomes differ. Interpretable Machine Learning (IML), or XAI, produces explanations of prediction outputs and generates customized remediation through tutoring sessions. Adaptive learning systems depend on XAI to develop students’ cognitive abilities for analysis and problem solving. Intrinsic XAI techniques in educational data science enable researchers to forecast underrepresented and underperforming student profiles, predict online learners’ success, and identify academically struggling students at risk of not completing their courses. XAI can also expose a model’s learned features and surface the bias that warrants suspicion about unfair results.
Artificial Intelligence (AI); AI in Education (AIED); Explainable AI (XAI); Personalized Learning; Intelligent Tutoring Systems (ITS)