Explanation Singularity of Explainable Artificial Intelligence (XAI): An AIGC Information Adoption Perspective
Xinyuan Lu, Anqi Xu, Jinao Zhang
Generative AI (GenAI) has rapidly become a common source of advice in high-stakes domains such as healthcare and in everyday decision-making. Yet the black-box nature of these systems often leaves users uncertain about how outputs are produced and whether they should be trusted. Explainable Artificial Intelligence (XAI) is widely viewed as a potential remedy: by providing explanations, GenAI can improve transparency, enhance user trust, and ultimately increase the likelihood that people adopt AI-generated content (AIGC). However, prior research and recent debates point to an important tension: adding explanations does not always lead to better outcomes. Explanations may be irrelevant or overly generic, or they may even amplify suspicion, particularly when users perceive AI hallucinations or overly agreeable responses. This study addresses a central question for research on human–AI interaction and information science: When do explanations facilitate information adoption, and when do they hinder it?
We argue that the effectiveness of XAI depends not only on whether explanations are provided, but also on how well the content of those explanations corresponds to users' information needs. To capture this relationship, we conceptualize explanation content relevance as a key indicator of explanation precision in AIGC contexts. Explanation content relevance reflects the extent to which an explanation aligns with the user's input and the AI's output within a specific task context, and it consists of three dimensions: answer relevance, question relevance, and semantic consistency. Building on this perspective, we introduce the concept of an explanation singularity, defined as the critical threshold at which explanations shift from being detrimental to beneficial for information adoption. When relevance falls below this threshold, explanations can backfire: they highlight mismatches, increase cognitive burden, and reduce users' willingness to trust the information. When relevance exceeds the threshold, relevant explanations strengthen users' confidence and encourage adoption. This perspective reframes explainable AI design as a threshold problem, rather than assuming that more explanation is always better.
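As a purely illustrative sketch, the composite construct could be operationalized with embedding similarities. Everything below is an assumption for exposition: the equal weights, the cosine-similarity proxies for the three dimensions, and the threshold value are hypothetical, not the study's measurement model, which manipulated relevance experimentally and measured adoption via self-report.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def content_relevance(q_emb, a_emb, e_emb, weights=(1/3, 1/3, 1/3)):
    """Composite relevance of an explanation (e) to a question (q) and answer (a).

    The three terms mirror the paper's dimensions: answer relevance,
    question relevance, and semantic consistency (here approximated as
    the explanation's similarity to question and answer jointly).
    Equal weights are an illustrative assumption.
    """
    answer_rel = cosine(e_emb, a_emb)                  # answer relevance
    question_rel = cosine(e_emb, q_emb)                # question relevance
    consistency = cosine(e_emb, (q_emb + a_emb) / 2)   # semantic-consistency proxy
    w_a, w_q, w_c = weights
    return w_a * answer_rel + w_q * question_rel + w_c * consistency

# Hypothetical usage with pre-computed embeddings from any sentence encoder;
# random 384-dim vectors stand in for real embeddings here.
rng = np.random.default_rng(0)
q, a, e = rng.normal(size=(3, 384))
score = content_relevance(q, a, e)
SINGULARITY = 0.55  # hypothetical, context-dependent threshold
print(f"relevance={score:.2f}, show explanation: {score > SINGULARITY}")
```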
To test these ideas, we conducted two parallel between-subjects experiments in distinct information consultation contexts: medical information consulting, representing a high-task-importance scenario, and everyday life information consulting, representing a lower-task-importance scenario. Participants were randomly assigned to one of four conditions: no explanation, or explanations with low, medium, or high content relevance. Each participant evaluated multiple AIGC consultation items and reported their intention to adopt the information.
We also examined perceived cognitive consensus as a key cognitive mechanism. This construct captures whether AI explanations are consistent with users' prior knowledge, interaction experience, and values or reasoning, and therefore reflects how users integrate AI explanations into their own cognitive frameworks.
Across both contexts, the results show that providing XAI explanations increases information adoption, although the magnitude of this effect is constrained by content relevance. Content relevance strongly predicts adoption and operates partly through perceived cognitive consensus: more relevant explanations foster stronger cognitive consensus, which in turn increases users' intention to adopt the information, indicating a partial mediation pathway. In contrast, the mere presence of explanations shows a comparatively weaker indirect pathway through cognitive consensus and appears more likely to influence adoption directly.
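As a hedged illustration of how such a partial mediation pathway is typically quantified, the sketch below fits the standard a-path and b-path regressions on simulated data. The data-generating coefficients are arbitrary assumptions chosen only to produce a nonzero indirect effect (a × b) alongside a nonzero direct effect (c′); they are not the study's estimates.

```python
import numpy as np
import statsmodels.api as sm

# Simulated data under an assumed partial-mediation structure:
# relevance -> cognitive consensus (a path) -> adoption (b path),
# plus a direct relevance -> adoption effect (c' path).
rng = np.random.default_rng(1)
n = 500
relevance = rng.uniform(0, 1, n)
consensus = 0.6 * relevance + rng.normal(0, 0.3, n)                   # a = 0.6 (assumed)
adoption = 0.3 * relevance + 0.5 * consensus + rng.normal(0, 0.3, n)  # c' = 0.3, b = 0.5

# a path: relevance -> consensus
a_path = sm.OLS(consensus, sm.add_constant(relevance)).fit().params[1]
# b and c' paths: adoption on relevance and consensus jointly
model = sm.OLS(adoption, sm.add_constant(np.column_stack([relevance, consensus]))).fit()
c_prime, b_path = model.params[1], model.params[2]

print(f"indirect effect (a*b) = {a_path * b_path:.2f}")  # mediated pathway
print(f"direct effect (c')    = {c_prime:.2f}")          # partial mediation: both nonzero
```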
Most importantly, we observed an explanation singularity: a turning point at which the level of information adoption under explanation conditions equals that under the no-explanation condition. When relevance falls below this point, explanations may suppress adoption; when relevance exceeds it, explanations promote adoption, although the marginal benefit gradually diminishes at higher levels of relevance. The singularity is not fixed and varies across contexts. In high-importance medical tasks, users appear more tolerant of explanations with lower relevance and more willing to process information, which lowers the relevance level at which explanations become beneficial. In lower-importance everyday tasks, users tend to be more selective and less willing to invest cognitive effort, raising the relevance threshold explanations must exceed before they improve adoption.
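One way to express the singularity formally is as a threshold-crossing condition. The concave functional form below is an illustrative assumption that captures the diminishing marginal benefit noted above; it is not the model fitted in the study.

```latex
% Illustrative concave adoption curve under an explanation of relevance r:
A_{\text{expl}}(r) = \beta_0 + \beta_1 r - \beta_2 r^2, \qquad \beta_1, \beta_2 > 0

% The explanation singularity r^* is where this curve crosses the
% no-explanation baseline A_0 (assuming \beta_0 < A_0, i.e., irrelevant
% explanations suppress adoption):
A_{\text{expl}}(r^*) = A_0
\;\Longrightarrow\;
r^* = \frac{\beta_1 - \sqrt{\beta_1^2 - 4\beta_2 (A_0 - \beta_0)}}{2\beta_2}
```

Under this form, a context where users tolerate lower relevance (high-importance medical tasks) corresponds to a smaller r*, while a more selective context (everyday tasks) corresponds to a larger r*.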
These findings extend information adoption research to GenAI contexts by integrating explainability, explanation content relevance, and perceived cognitive consensus into a coherent theoretical mechanism. The study also provides a threshold-based perspective for understanding why XAI sometimes improves information adoption and sometimes undermines it.

From a practical perspective, the results suggest that organizations should prioritize relevance-oriented explanation design. In high-stakes domains, GenAI should avoid providing low-relevance explanations, because such explanations may create false reassurance or mislead users. In lower-stakes domains, GenAI should reduce template-style verbosity and instead offer concise, tailored explanations that exceed the relevance threshold.

Because this study does not explicitly incorporate misinformation scenarios, future research should examine how XAI boundaries operate when users encounter false or conflicting information, and how explanation strategies may reduce erroneous adoption while supporting effective decision-making. In summary, explanations are not inherently beneficial. They become helpful only when they exceed a context-dependent relevance threshold, and identifying and designing for the explanation singularity may be essential for improving the real-world impact of XAI in the era of GenAI.
Xinyuan Lu is an Information Systems Professional and Human-AI Interaction Research Fellow, working at the intersection of AI and knowledge management. Anqi Xu is a Human-AI Interaction Research Fellow, working at the intersection of human-AI interaction and user behavior. Jinao Zhang is a Human-AI Interaction Research Fellow, working at the intersection of AI and information behavior.
Corresponding author: zhangjinao2000@foxmail.com.
Cite this article in DAKD as: Lu Xinyuan, Xu Anqi, Zhang Jin’ao. Explanation Singularity of Explainable Artificial Intelligence (XAI): An AIGC Information Adoption Perspective[J]. Data Analysis and Knowledge Discovery, 2026, 10(1): 76-87. https://doi.org/10.11925/infotech.2096-3467.2025.0291
Cite this article in APA as: Lu, X., Xu, A., & Zhang, J. (2026, April 2). Explanation singularity of explainable artificial intelligence (XAI): An AIGC information adoption perspective. Information Matters. https://informationmatters.org/2026/03/explanation-singularity-of-explainable-artificial-intelligence-xai-an-aigc-information-adoption-perspective/
Data Analysis and Knowledge Discovery is a scholarly research journal founded in 2017, published monthly by the National Science Library of the Chinese Academy of Sciences, under the auspices of the Chinese Academy of Sciences.
Journal Focus
The journal focuses on basic and applied research into theories, methods, systems, and best practices for big-data-based, computationally analytics-driven decision and policy analysis in all data-intensive and knowledge-driven fields. Special attention is given to computational discovery that detects and predicts structures, trends, behaviors, relations, disruptions, and evolutions.
The journal takes full advantage of the convergence of computer science, complexity theory, data science, management science, policy research, behavioral science, scientometrics, social metrics, digital science and digital humanities, and information science. It aims to support research and applications that transform data into information, knowledge, wisdom, and intelligent solutions, and to embed these theories, technologies, and practices in intelligent management and decision-making across fields and industries.