The concept of transparency is fragmented in artificial intelligence (AI) research, often limited to transparency of the algorithm alone. We propose that AI transparency operates on three levels (algorithmic, interaction, and social), all of which need to be considered to build trust in AI. We expand upon these levels using current research directions, and we identify the research gaps that result from this conceptual fragmentation of AI transparency within the context of the three levels.
Funding: AI Transparency and Consumer Trust; Wallenberg AI, Autonomous Systems and Software Program - Humanities and Society (WASP-HS)