PhD Annual Symposium 2026 – Hosted by HEC Montréal

Presentation Session #3: AI, Algorithmic Decision-Making & Fairness

When: Thursday, March 12, 2026 — 16:00–17:30
Room: C.403 — Louis-R.-Chenevert


Presentations

1. George O'Neill

Title: The Boardroom Exception: Directors' Perceptions of Generative AI for Governance Decision-Making

Summary: Artificial intelligence (AI) is increasingly salient across organizations, yet the boardroom often remains an exception. This ongoing study examines how boards make decisions and how directors perceive generative AI’s potential role in governance work. Based on semi-structured interviews with ten experienced directors and chairpersons whose mandates span more than 40 Canadian private, public, and government-owned organizations, preliminary findings show that board decision-making remains anchored in static reports and management presentations, interpreted through directors’ tacit judgment. Directors see promise in AI for rapid sensemaking and scenario exploration but express reservations about using it directly in formal deliberations. To date, the study has identified five priorities that shape the acceptability of AI-augmented board decision-making: trust and reliability; privacy and confidentiality; perpetual and dynamic support; planned and productive integration; and defined and effective use. The study aims to clarify why AI may diffuse unevenly into governance settings and highlights implications for designing AI that supports board accountability.


2. Paul Yuke Wang

Title: Impact of Generative AI on Skill Portfolio Transformation

Summary: Coming soon.


3. Chaima Merbouh

Title: Reinscribing Judgment: A Critical Realist Theory of Over-Reliance on AI in High-Stakes Domains

Summary: Many operations, choices, and decisions previously reserved for humans are increasingly delegated to algorithms. Individuals defer to automated recommendations, and exhortations to "trust the AI" spur adoption across settings. However, growing evidence shows that over-reliance on AI can diminish users’ sense of agency. Drawing on a critical realist perspective, this paper theorizes over-reliance not as a user-level error or trust miscalibration, but as a phenomenon shaped by deeper causal mechanisms that constrain human agency. It identifies four generative mechanisms that explain how professionals come to rely uncritically on AI outputs and what leads them to reassert judgment: reliability framing, wherein AI is internalized as more reliable than human judgment; accountability framing, which shifts accountability away from human actors; cognitive offloading, through which professional skills and critical engagement erode over time; and re-engagement, a counteracting mechanism that restores judgment under conditions of ethical tension and/or institutional pressure. The paper contributes to theory by advancing a process model of over-reliance that connects individual cognition with institutional structures and sociotechnical design. The findings have implications for AI governance, ethical AI adoption, and professional training strategies, emphasizing the need for frameworks that balance AI augmentation with human responsibility.


4. Mina Arzaghi

Title: Let's Unlearn Stereotypes Before Decision-Making: Assessing the Impact of Intrinsic Bias Mitigation on Downstream Fairness in Large Language Models

Summary: Large Language Models (LLMs) are increasingly deployed in high-stakes decision-making systems such as hiring, income assessment, and credit evaluation, where biased predictions can lead to tangible societal harms. While prior work has questioned whether intrinsic bias metrics in LLMs correlate with downstream (extrinsic) unfairness, the practical impact of intrinsic bias mitigation on real-world decision tasks remains unclear. A key reason is that intrinsic and extrinsic biases are typically mitigated and evaluated in isolation, due to the lack of a unified framework connecting representational bias in language models to downstream fairness outcomes. Intrinsic bias captures representational harms, such as stereotypical language generation, whereas downstream bias manifests as allocative harms that directly affect access to resources or opportunities. We introduce a unified evaluation framework that enables systematic analysis of how intrinsic bias mitigation in LLMs propagates to downstream task fairness. Within this framework, we propose Fairness-Aware Concept Unlearning (FACU) as an in-processing intrinsic bias mitigation method and evaluate it alongside data-level and output-level extrinsic mitigation strategies, including Counterfactual Data Augmentation (CDA) and self-debiasing via prompting. Using three open-source LLMs evaluated both as frozen feature extractors and as fine-tuned classifiers, we conduct experiments on widely used socio-economic decision benchmarks grounded in real-world data, including salary prediction, employment status, and creditworthiness assessment. Our results show that FACU reduces intrinsic socio-economic gender bias by up to 94.9%, and that these reductions translate into statistically significant improvements in downstream fairness across models, datasets, and deployment settings, including gains of up to 82% in demographic parity, without degrading predictive performance. Together, these findings demonstrate the practical value of jointly evaluating intrinsic and extrinsic bias mitigation when assessing fairness in high-stakes decision-making systems.
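For readers unfamiliar with the demographic parity metric cited in the results above, the short Python sketch below illustrates how a demographic parity gap is typically computed. It is illustrative only; the function name and example data are hypothetical and are not taken from the paper or its benchmarks.

    # Illustrative sketch (not the authors' code): demographic parity difference,
    # one of the downstream fairness metrics referenced in the abstract.
    import numpy as np

    def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
        """Absolute gap in positive-prediction rates between two groups.

        y_pred: binary predictions (0/1), e.g. "high salary" in a salary task.
        group:  binary protected attribute (0/1), e.g. gender.
        """
        rate_a = y_pred[group == 0].mean()  # P(y_hat = 1 | group A)
        rate_b = y_pred[group == 1].mean()  # P(y_hat = 1 | group B)
        return abs(rate_a - rate_b)         # 0.0 means parity

    # Hypothetical example: a baseline vs. a debiased classifier.
    group = np.array([0, 0, 0, 1, 1, 1])
    baseline_pred = np.array([1, 1, 1, 0, 0, 1])  # 100% vs ~33% positive rate
    debiased_pred = np.array([1, 1, 0, 1, 0, 1])  # ~67% vs ~67% positive rate
    print(demographic_parity_difference(baseline_pred, group))  # ~0.667
    print(demographic_parity_difference(debiased_pred, group))  # 0.0

A reported "gain in demographic parity" corresponds to shrinking this gap after mitigation, as in the hypothetical debiased case above.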