Leonardo Ceragioli will present joint work with G. Primiero, “A Proof-Theoretic Approach to Bias”, at the Carl Friedrich von Weizsäcker Colloquium (University of Tübingen, Germany) on April 30th, 2025.
Abstract.
Although widely applied, ML systems are at most as reliable as the data they are trained on. Among other reasons, biases may arise because causal relations in the data are ignored. Indeed, Simpson’s paradox shows that distinguishing mere correlations from causal relations requires knowledge of the world that is not reducible to purely statistical data.
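As a purely illustrative instance of the paradox (the numbers below are chosen for this announcement and are not taken from the talk): a treatment can succeed more often in every subgroup and yet less often in the aggregate, so the statistics alone cannot tell us which comparison tracks the causal effect.

\[
\frac{8}{10} > \frac{70}{100}, \qquad \frac{30}{100} > \frac{2}{10}, \qquad \text{yet} \qquad \frac{8+30}{10+100} \;=\; \frac{38}{110} \;<\; \frac{72}{110} \;=\; \frac{70+2}{100+10}.
\]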
In my talk, I will present a proof system equipped with different fairness operators that deal with biases. First, I will consider individual fairness, which ignores causal relations between attributes, and interpret intersectionality as connected with the admissibility of Weakening. Then I will address counterfactual fairness, extending our calculus with an accessibility relation based on the causal relation, and check this extension for the admissibility of Weakening.
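For readers less familiar with the proof-theoretic terminology, and only as a sketch in standard sequent-calculus notation (the calculus presented in the talk may formulate this differently): Weakening is the structural rule that allows an extra assumption to be added to a derivable sequent,

\[
\frac{\Gamma \vdash \Delta}{\Gamma, A \vdash \Delta}\ (\text{Weakening})
\]

and a rule is admissible in a calculus when every sequent derivable with its help is already derivable without it.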