The Effect of ChatGPT’s Inaccuracy on Decision-Making: A Systematic Review of Trust in Generative AI, Acceptance, and Error Prominence
Abstract
The widespread adoption of generative artificial intelligence (AI) such as ChatGPT has transformed how users seek information and make decisions across different domains. Despite their accessibility, these AI models sometimes generate inaccurate responses, commonly referred to as “hallucinations,” which can impact user acceptance, trust, and reliance on AI-generated outputs. This study systematically reviews 37 articles published between 2022 and 2025 to examine the prevalence, consequences, and influencing factors of ChatGPT and other AI-generated inaccuracies. The review identifies two key findings: (i) hallucinations vary in type and visibility across different fields, and (ii) incorporating inaccurate information reduces credibility and erodes user trust, particularly when errors are prominent. The findings highlight that while users often value ChatGPT’s efficiency and accessibility, unrecognized inaccuracies pose risks of misinformation and decision bias. The review proposes a research agenda to enhance the trustworthiness, explainability, and responsible integration of generative AI in decision support.
How to Cite This Article
Raihana Akter Nira (2026). The Effect of ChatGPT’s Inaccuracy on Decision-Making: A Systematic Review of Trust in Generative AI, Acceptance, and Error Prominence. International Journal of Multidisciplinary Research and Growth Evaluation (IJMRGE), 7(1), 510-516. DOI: https://doi.org/10.54660/.IJMRGE.2026.7.1.510-516