
AI and Machine Learning Advances

ISSN: 3067-3216

The AI and Machine Learning Advances Journal works toward becoming a leading venue for AI/ML research findings. To that end, it connects academic, industrial, and governmental researchers so they can exchange the know-how and innovations shaping the development of today's intelligent systems.

Explainable AI in Public Policy: Quantifying Trust and Distrust in Algorithmic Decision-Making across Marginalized Communities

1Saad Tehreem, 2Hammad Razi

1,2Department of Marketing, College of Management Sciences, PAF-KIET University, Pakistan

Received: 02-Jan-2024 | Revised: 03-Mar-2025 | Accepted: 10-Mar-2025


Abstract

Explainable AI (XAI) seeks to provide explanations and thereby increase trust, but the effect depends on demographic factors, cultural alignment, and perceived fairness. This study examines how trust and distrust in decision-making in the SOC system vary with sociodemographic characteristics and perceived fairness. It also tests whether cultural alignment shapes fairness perceptions and whether the form of the explanation (technical, plain language, or human) influences trust. A cross-sectional survey of 240 participants used experimental vignettes in which participants received decisions from an AI system, with or without an explanation in one of the three formats. Fairness perceptions and algorithmic trust and distrust were analyzed using regression, MANOVA, and mediation analysis. Human decisions were trusted most, followed by plain-language AI-generated explanations, with technical explanations trusted least. Perceived fairness mediated trust, and low-income participants were more sensitive to fairness perceptions. Cultural alignment was strongly associated with fairness perceptions, underscoring the need for a context-sensitive approach to AI governance. Passive exposure to AI does not by itself produce trust; transparency must be presented in a form the public can meaningfully interpret. The study contributes to AI governance research by applying both procedural-justice and algorithmic-accountability frameworks. It points to the absence of generalized public trust in AI and stresses the need for culturally sensitive, inclusive AI design. The recommendations emphasize that meaningful explainability, rather than technical transparency alone, should drive policy change.
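To illustrate the kind of analysis pipeline the abstract names (MANOVA across explanation conditions, followed by a mediation test of perceived fairness), the sketch below runs both steps on simulated data. It is a rough illustration only, not the authors' analysis code: the variable names, condition coding, rating scales, and effect sizes are all assumptions, and pingouin and statsmodels are simply one common tooling choice for these tests.

# Illustrative sketch, not the study's materials: simulated data mirroring
# the reported design (240 participants, three explanation conditions).
import numpy as np
import pandas as pd
import pingouin as pg
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 240  # sample size reported in the abstract

# Hypothetical condition coding: 0 = technical, 1 = plain language, 2 = human
condition = rng.integers(0, 3, size=n)
# Simulated ratings (assumed scales): fairness is posited to mediate trust
fairness = 3.0 + 0.8 * condition + rng.normal(0, 1, size=n)
trust = 2.0 + 0.6 * fairness + 0.2 * condition + rng.normal(0, 1, size=n)
df = pd.DataFrame({"condition": condition, "fairness": fairness, "trust": trust})

# MANOVA: do the explanation conditions jointly shift fairness and trust?
print(MANOVA.from_formula("fairness + trust ~ C(condition)", data=df).mv_test())

# Bootstrapped mediation: is the condition -> trust effect carried by
# perceived fairness? (Condition is treated here as a linear predictor,
# a simplification acceptable only for this sketch.)
print(pg.mediation_analysis(data=df, x="condition", m="fairness", y="trust",
                            n_boot=1000, seed=0))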

Keywords

Explainable AI (XAI), Public Policy, Cultural Alignment, Algorithmic Trust, Experimental Vignettes.