Marion Korosec-Serfaty

Assistant Professor in IT - Human–AI Interaction & NeuroIS Researcher

Critical Thinking in AI-Assisted Deontologically-Governed Professional Decision-Making: When and How Explainability, Reliability, and Transparency Matter


Journal article


Marion Korosec-Serfaty, Pierre-Majorique Léger, Xavier Parent-Rocheleau, Sylvain Sénécal
Information Systems Frontiers, 2026

View PDF Semantic Scholar DOI
Cite


APA
Korosec-Serfaty, M., Léger, P.-M., Parent-Rocheleau, X., & Sénécal, S. (2026). Critical Thinking in AI-Assisted Deontologically-Governed Professional Decision-Making: When and How Explainability, Reliability, and Transparency Matter. Information Systems Frontiers.


Chicago/Turabian
Korosec-Serfaty, Marion, Pierre-Majorique Léger, Xavier Parent-Rocheleau, and Sylvain Sénécal. “Critical Thinking in AI-Assisted Deontologically-Governed Professional Decision-Making: When and How Explainability, Reliability, and Transparency Matter.” Information Systems Frontiers (2026).


MLA
Korosec-Serfaty, Marion, et al. “Critical Thinking in AI-Assisted Deontologically-Governed Professional Decision-Making: When and How Explainability, Reliability, and Transparency Matter.” Information Systems Frontiers, 2026.


BibTeX

@article{korosec-serfaty2026a,
  title = {Critical Thinking in AI-Assisted Deontologically-Governed Professional Decision-Making: When and How Explainability, Reliability, and Transparency Matter},
  year = {2026},
  journal = {Information Systems Frontiers},
  author = {Korosec-Serfaty, Marion and Léger, Pierre-Majorique and Parent-Rocheleau, Xavier and Sénécal, Sylvain}
}

Abstract


Critical thinking is a central safeguard for responsibility and accountability in deontologically-governed professions. Artificial intelligence (AI)-assisted decision-making is increasingly integrated into these professional workflows. However, AI introduces autonomy, learning, and inscrutability, disrupting critical, reflective decision-making processes. Given these challenges, this research systematically unpacks how embedding explainability, reliability, and transparency into AI fosters critical thinking. Employing a multi-method approach combining cognitive neuroscience, behavioral, and self-report measures, we conducted three experiments with practicing professionals tasked with realistic AI-assisted scenarios. Experiment 1 assessed varying levels of AI-generated reconstructive causal explanations under consistent reliability; Experiment 2 introduced variable reliability; Experiment 3 added transparency through model confidence scores. Results reveal that minimal reconstructive explanations enhance analytical reasoning under reliable conditions, whereas epistemic uncertainty drives critical engagement when reliability varies. Transparency offers only limited restoration of explainability benefits. These findings suggest that AI reliability primarily drives critical thinking, informing AI design that preserves professional responsibility and accountability.