Strategic and counterfactual reasoning in AI-assisted decision making
Efstratios Tsirtsis
Max Planck Institute for Software Systems
19 Aug 2025, 2:30 pm - 3:30 pm
Kaiserslautern building G26, room 111
SWS Student Defense Talks - Thesis Defense
From finance and healthcare to criminal justice and transportation, many
domains that involve high-stakes decisions, traditionally made by humans, are
increasingly integrating artificial intelligence (AI) systems into their
decision-making pipelines. While recent advances in machine learning and
optimization have given rise to AI systems with unprecedented capabilities,
fully automating such decisions is often undesirable. Instead, a promising
direction lies in AI-assisted decision making, where AI informs or complements
human decisions without completely removing human oversight. In this talk, I
will present my PhD work on AI-assisted decision making in settings where
humans rely on two core cognitive capabilities: strategic reasoning and
counterfactual reasoning. First, I will introduce game-theoretic methods for
supporting policy design in strategic environments, enabling a decision maker
to allocate resources (e.g., loans) to individuals who adapt their behavior in
response to transparency regarding the decision policy. Next, I will present
methods to enhance a decision maker's counterfactual reasoning process:
identifying key past decisions (e.g., in clinical treatments) which, if
changed, could have improved outcomes and, hence, serve as valuable learning
signals. Finally, I will discuss a computational model of how people attribute
responsibility between humans and AI systems in collaborative settings, such as
semi-autonomous driving, evaluated through a human subject study. I will
conclude with key takeaways and future directions for designing AI systems that
effectively support and interact with humans.