I am a postdoctoral researcher in Logic at the University of Milan. My current research interests include logics for uncertain, epistemic and temporal reasoning, knowledge representation and reasoning about actions.
Tomorrow we will have our last LUCI Lunch Seminar before the break! Starting from 1 PM CET, Richard Booth from Cardiff University will give a talk on Conditional Inference under Disjunctive Rationality. Please find the link and more information below:
Title: Conditional Inference under Disjunctive Rationality
Abstract: The question of conditional inference, i.e., of which conditional sentences of the form “if α then, normally, β” should follow from a set KB of such sentences, has been one of the classic questions of AI, with several well-known solutions proposed. Perhaps the most notable is the rational closure construction of Lehmann and Magidor, under which the set of inferred conditionals forms a rational consequence relation, i.e., satisfies all the rules of preferential reasoning, plus Rational Monotonicity. However, this last-named rule is not universally accepted, and other researchers have advocated working within the larger class of disjunctive consequence relations, which satisfy the weaker requirement of Disjunctive Rationality. While there are convincing arguments that the rational closure forms the “simplest” rational consequence relation extending a given set of conditionals, the question of what is the simplest disjunctive consequence relation has not been explored. In this talk, we propose a solution to this question and explore some of its properties. (This is joint work with Ivan Varzinczak.)
In two days (Wednesday, December 1, 2021) we will host Alessandro Aldini (University of Urbino) who will give a talk on the modeling and verification of collective and cooperative systems. We are looking forward to welcoming you all, as usual, starting from 1 PM CET. More information is available below:
Title: On the modeling and verification of collective and cooperative systems
Abstract: The formal description and verification of large networks of cooperative and interacting agents is made difficult by the interplay of several different behavioral patterns, models of communication, and scalability issues. In this lecture, we will explore the functionalities and the expressiveness of a general-purpose process algebraic framework for the specification and model checking based analysis of collective and cooperative systems. The proposed syntactic and semantic schemes are general enough to be adapted with small modifications to heterogeneous application domains, including, e.g., crowdsourcing systems, trustworthy networks and distributed ledger technologies.
We are all very much looking forward to this week’s invited talk! On Wednesday, November 24, we will host Francesca Zaffora Blando (Carnegie Mellon University), who will give a talk on Wald randomness and learning-theoretic randomness. Remember that our talks usually start at 1 PM (CET) and typically last 1 hour. At the end we will have room for Q&A and an informal discussion of Francesca’s talk. Don’t forget to take a look at our YouTube channel and Twitter feed for our videos and news!
Title: Wald randomness and learning-theoretic randomness
Abstract: The theory of algorithmic randomness has its roots in Richard von Mises’ work on the foundations of probability. Von Mises was a fervent proponent of the frequency interpretation of probability, which he supplemented with a (more or less) formal definition of randomness for infinite sequences of experimental outcomes. In a nutshell, according to von Mises’ account, the probability of an event is to be identified with its limiting relative frequency within a random sequence. Abraham Wald’s most well-known contribution to the heated debate that immediately followed von Mises’ proposal is his proof of the consistency of von Mises’ definition of randomness. In this talk, I will focus on a lesser-known contribution by Wald: a definition of randomness that he put forth to rescue von Mises’ original definition from the objection that is often regarded as having dealt the death blow to his entire approach (namely, the objection based on Ville’s Theorem). We will see that, when reframed in computability-theoretic terms, Wald’s definition of randomness coincides with a well-known algorithmic randomness notion and that his overall approach is very close, both formally and conceptually, to a recent framework for modeling algorithmic randomness that rests on learning-theoretic tools and intuitions.
This Wednesday, come learn and have a chat about uncertain reasoning with us! Our group is thrilled to announce that Giuseppe Sanfilippo (University of Palermo) will give an invited talk on logical operations among conditional events under coherence. His talk will begin at 1 PM (CET) sharp on Wednesday the 10th. As usual, don’t forget to visit our YouTube channel if you missed our past talks, and follow us on Twitter to stay up to date with our latest news.
Title: Logical operations among conditional events in the setting of coherence
Abstract: In the subjective theory of probability of de Finetti, given any event A, the probability P(A) represents the degree of belief in A. In order to assess P(A), in the betting framework, one agrees to pay, for instance, P(A) by receiving 1 or 0 according to whether A turns out to be true or false, respectively. The consistency of the probability assessments is checked by the coherence principle. Let us imagine an experiment where you flip a coin twice; for each (valid) flip there are two possible outcomes: “head” or “tail”. Let us consider the conjunction “the outcome of the first flip is head and the outcome of the second flip is head”. By defining the events A = “the outcome of the first flip is head” and B = “the outcome of the second flip is head”, we denote by AB the previous conjunction, which is true when both A and B are true, and false when A or B is false. If you judge P(AB) = p, then in a bet on AB you agree to pay, for instance, p by receiving 1, or 0, according to whether AB turns out to be true, or false, respectively. What is the “logical value” of AB when the outcome of the first flip is head and the second coin stands up on its edge? What happens to the bet? By defining a flip as valid when “the coin does not stand up, or similar things”, that is, when “the outcome of the flip is head or tail”, the previous conjunction can be interpreted as a conjoined conditional like “the outcome of the first flip is head, given that it is head or tail, and the outcome of the second flip is head, given that it is head or tail.” How can we assess a degree of belief in this conjoined conditional? Another aspect concerns the assessment of degrees of belief in iterated conditionals like “if the mother is angry if the son gets a B, then she will be furious if the son gets a C”. Usually, in the literature, the conjunction and the disjunction of conditional events have been defined as suitable conditional events. However, in this way many classical probabilistic properties are lost.
We illustrate the notions of conjunction, disjunction and conditioning among conditional events, which are defined (not as conditional events, but) as suitable conditional random quantities with values in [0,1], in the setting of coherence. These logical operations satisfy the basic probabilistic properties valid for unconditional events. We show that some intuitively acceptable compound sentences on conditionals can be analyzed in a rigorous way in terms of suitable iterated conditionals. We give a characterization of the probabilistic entailment of Adams for conditionals. Moreover, by exploiting iterated conditionals, we show that the p-entailment of a conditional event E|H from a p-consistent family F is characterized by the property that the iterated conditional (E|H)|C(F) is constant and coincides with 1. Finally, we illustrate the characterization, in terms of iterated conditionals, of some well-known p-valid and non p-valid inference rules.
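As a rough companion to the betting semantics recalled in the abstract (our own toy sketch, not part of the talk), the snippet below encodes de Finetti's three-valued bet on a conditional event A|H: you pay p and receive 1 if A and H are both true, 0 if H is true but A is false, and your stake p back if H is false, i.e., the bet is called off.

```python
# Illustrative sketch (ours, not the speaker's) of de Finetti's betting
# interpretation of a conditional event A|H.

def value(a: bool, h: bool, p: float) -> float:
    """Three-valued 'logical value' of A|H: 1 if A and H are true,
    0 if H is true but A is false, and p itself when H is false
    (the stake is returned, so the bet is called off)."""
    if not h:
        return p
    return 1.0 if a else 0.0

def net_gain(a: bool, h: bool, p: float) -> float:
    """Net gain of paying p to bet on A|H: amount received minus p."""
    return value(a, h, p) - p

# With p = 0.25: gain 0.75 if A and H both hold, lose 0.25 if H holds
# but A fails, and break even (gain 0.0) when H fails.
```

In the coin example, taking H = "the flip is valid (head or tail)" and A = "the outcome is head" shows how a bet on a conditional event is simply called off when the coin stands up on its edge.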
Our Lunch Seminar series continues this week! We are very much looking forward to having Riccardo Guidotti (University of Pisa) with us next Wednesday (October 27), starting from 1 PM. His talk on Auto-Encoders for Explainable AI ties in very well with the Machine Ethics seminar given by Marija two weeks ago. Make sure you don’t miss out on this one if you want to know more about these exciting topics in Artificial Intelligence. See below for the link and more information.
Title: Exploiting Auto-Encoders for Explaining Black Box Classifiers
Abstract: Artificial Intelligence (AI) is nowadays one of the most important scientific and technological areas, with a tremendous socio-economic impact and a pervasive adoption in every field of modern society. Many applications in different fields, such as credit score assessment, medical diagnosis, autonomous vehicles, and spam filtering are based on AI decision systems. Unfortunately, these systems often reach their impressive performance through obscure machine learning models that “hide” the logic of their internal decision processes from humans, as it is not humanly understandable. For this reason, these models are called black box models, i.e., models used by AI to accomplish a task for which either the logic of the decision process is not accessible, or it is accessible but not human-understandable. Examples of machine learning black box models adopted by AI systems include Deep Neural Networks, ensemble classifiers, and so on. The missing interpretability of black box models is a crucial issue for ethics and a limitation to AI adoption in socially sensitive and safety-critical contexts such as healthcare and law. As a consequence, research in eXplainable AI (XAI) has recently caught much attention and there has been an ever-growing interest in this research area to provide explanations of the behavior of black box models. A promising line of research in XAI exploits auto-encoders for explaining black box classifiers working on non-tabular data (e.g., images, time series, and texts). The ability of autoencoders to compress any data into a low-dimensional tabular representation, and then reconstruct it with negligible loss, provides a great opportunity to work in the latent space for the extraction of meaningful explanations, for example through the generation of new synthetic samples, consistent with the input data, that can be fed to a black box to understand where its decision boundary lies.
In this presentation we discuss recent XAI solutions based on autoencoders that enable the extraction of meaningful explanations composed by factual and counterfactual rules, and by exemplar and counter-exemplar samples, offering a deep understanding of the local decision of the black box.
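As a purely illustrative companion to the idea sketched in the abstract (our own toy code, not from the talk), the loop below encodes an instance into a latent space, perturbs the latent code to generate synthetic neighbours, decodes them, and queries a black box to probe its local decision boundary. The "autoencoder" and "black box" here are hypothetical one-line stand-ins for trained models.

```python
# Toy sketch (ours) of latent-space probing for black-box explanation:
# encode -> perturb in latent space -> decode -> query the black box.
import random

def encode(x):
    # Toy encoder: 2-D instance -> 1-D latent code.
    return (x[0] + x[1]) / 2.0

def decode(z):
    # Toy decoder: 1-D latent code -> 2-D instance.
    return (z, z)

def black_box(x):
    # Toy black-box classifier with decision boundary x0 + x1 = 1.
    return 1 if x[0] + x[1] > 1.0 else 0

def latent_neighbours(x, n=100, scale=0.5, seed=0):
    """Synthetic samples around x, obtained by perturbing its latent code."""
    rng = random.Random(seed)
    z = encode(x)
    return [decode(z + rng.uniform(-scale, scale)) for _ in range(n)]

def local_labels(x):
    """Labels the black box assigns to the synthetic neighbours of x."""
    return [black_box(s) for s in latent_neighbours(x)]

# Mixed labels among the neighbours indicate that x lies close to the
# black box's local decision boundary; the labelled synthetic samples
# can then be used to extract factual/counterfactual rules.
labels = local_labels((0.4, 0.5))
```

In the methods discussed in the talk, the encoder/decoder are learned autoencoders and the synthetic samples feed a local, interpretable surrogate; this sketch only shows the shape of the loop.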
Short Bio: Riccardo Guidotti is currently an Assistant Professor (RTD-B) at the Department of Computer Science, University of Pisa, Italy, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. He graduated cum laude in Computer Science at the University of Pisa (BS in 2010, MS in 2013) and received a PhD in Computer Science from the same institution with a thesis on Personal Data Analytics. He won the IBM fellowship program and was an intern at IBM Research Dublin, Ireland, in 2015. His research interests are in personal data mining, clustering, explainable models, and the analysis of transactional data.
Our Lunch Seminar series re-starts tomorrow! We will host Marija Slavkovik (University of Bergen) who will give a talk on Machine Ethics: Introduction to the Area and Research Challenges. See below for more information and the abstract:
Title: Machine Ethics: Introduction to the Area and Research Challenges
Abstract: The talk introduces the research area of machine ethics, which tries to answer the question of how to automatise moral reasoning. While trolley problems are the first thing that comes to mind when talking about machine ethics, the more subtle, everyday decisions made by computational agents are its real challenge. Machine ethics was established as a field in 2006 and is dominated by symbolic approaches. We start by introducing AI ethics as a field in general and the place of machine ethics within it. We then consider some existing machine ethics challenges and open questions.
We are starting the new academic year with a bang! We have a brand new name: Logic Group Milano is now LUCI (Logic, Uncertainty, Computation and Information group). We are currently working on refreshing this website, so keep coming back for more exciting stuff!
We are delighted to announce that our series of Lunch Seminars is back! Check out the list of our invited speakers below and do not forget to follow us on social media (Twitter and Instagram) to stay up to date with our news.
Our invited speakers are:
Marija Slavkovik, University of Bergen (13 October 2021)
Riccardo Guidotti, University of Pisa (27 October 2021)
Giuseppe Sanfilippo, University of Palermo (10 November 2021)
Francesca Zaffora Blando, Carnegie Mellon University (24 November 2021)
Alessandro Aldini, University of Urbino (1 December 2021)
Richard Booth, Cardiff University (15 December 2021)
We are happy to announce that our members Marcello D’Agostino, Costanza Larese and Fabio Aurelio D’Asaro are organizing the 5th Workshop on Advances in Argumentation in AI (AI³ 2021), co-located with the 20th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2021). We invite (possibly non-original) submissions on the applications and theory of computational argumentation. For more information, please take a look at the workshop’s website and the call for papers at https://sites.google.com/view/ai3-2021/cfp. The AI*IA 2021 conference will take place in Milan on December 1st-3rd, 2021.
Our workshop will take place on November 29th, from 9 AM to 6 PM (CET).