Talk by Francesca Poggiolesi at the LUCI seminar

Title: Revealing reasons: from philosophy to AI

Joint work with Brian Hill (CNRS, GREGHEC).

Abstract: Explanations that uncover why a conclusion holds—those that provide its reasons—are central across disciplines, from philosophy and logic to artificial intelligence. There exists a rich philosophical tradition, stretching from Aristotle to Frege and including figures like Leibniz and Bolzano, that emphasizes this explanatory ideal. Building on this tradition, Poggiolesi (2025) has recently formalized such explanatory reasoning using proof-theoretic tools.

Concurrently, research in machine learning has developed methods to represent classifiers as Boolean circuits, preserving input-output behavior while enabling interpretability. Within this setting, Darwiche and Hirth (2023) propose a framework for identifying the complete reasons behind classifier decisions, offering a rigorous account of explanatory inference in artificial intelligence.
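To make the notion concrete, here is a minimal brute-force sketch in Python on a hypothetical toy classifier (the function classifier and the variables x1, x2, x3 are illustrative assumptions, not the circuit-based procedure of Darwiche and Hirth): a sufficient reason is a subset-minimal part of the instance that already forces the decision, and the complete reason is the disjunction of all sufficient reasons.

from itertools import combinations, product

def classifier(x1, x2, x3):
    # Hypothetical toy classifier, not taken from the paper.
    return (x1 and x2) or (not x3)

def sufficient_reasons(instance):
    """Enumerate the subset-minimal parts of the instance that fix the
    classifier's decision, by brute force over all completions."""
    names = sorted(instance)
    decision = classifier(**instance)
    minimal = []
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            # Skip supersets of an already-found (hence minimal) reason.
            if any(set(m) <= set(subset) for m in minimal):
                continue
            fixed = {n: instance[n] for n in subset}
            free = [n for n in names if n not in subset]
            # The subset is a sufficient reason if every completion of the
            # remaining variables yields the same decision.
            if all(classifier(**fixed, **dict(zip(free, bits))) == decision
                   for bits in product([False, True], repeat=len(free))):
                minimal.append(subset)
    return [{n: instance[n] for n in s} for s in minimal]

instance = {"x1": True, "x2": True, "x3": True}
print(sufficient_reasons(instance))  # [{'x1': True, 'x2': True}]

On the instance x1 = x2 = x3 = True, the sketch finds the single sufficient reason {x1: True, x2: True}, which in this case is also the complete reason.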

This talk reveals deep conceptual and technical connections between these two strands of thought. We show how Poggiolesi’s proof-theoretic framework can be applied to Boolean classifiers to compute their complete reasons, thereby bridging formal logic and machine learning interpretability. Examples will illustrate the synergy between classical philosophical insights and contemporary AI.

A. Darwiche and A. Hirth, On the (complete) reasons behind decisions, Journal of Logic, Language and Information, 32:63-88, 2023.

F. Poggiolesi, (Conceptual) explanations in logic, Journal of Logic and Computation, 35(4), 2025.

The seminar will be held online on May 28th at 14:30 (Rome time) on the Microsoft Teams platform, here.