Categories
seminar

Francesca Zaffora Blando’s Seminar

We are all very much looking forward to this week’s invited talk! On Wednesday, November 24, we will host Francesca Zaffora Blando (Carnegie Mellon University), who will give a talk on Wald randomness and learning-theoretic randomness. Remember that our talks usually start at 1 PM (CET) and typically last one hour. At the end we will have room for Q&A and an informal discussion of Francesca’s talk. Don’t forget to take a look at our YouTube channel and Twitter feed for our videos and news!

Title: Wald randomness and learning-theoretic randomness

Abstract: The theory of algorithmic randomness has its roots in Richard von Mises’ work on the foundations of probability. Von Mises was a fervent proponent of the frequency interpretation of probability, which he supplemented with a (more or less) formal definition of randomness for infinite sequences of experimental outcomes. In a nutshell, according to von Mises’ account, the probability of an event is to be identified with its limiting relative frequency within a random sequence. Abraham Wald’s best-known contribution to the heated debate that immediately followed von Mises’ proposal is his proof of the consistency of von Mises’ definition of randomness. In this talk, I will focus on a lesser-known contribution by Wald: a definition of randomness that he put forth to rescue von Mises’ original definition from the objection that is often regarded as having dealt the death blow to his entire approach (namely, the objection based on Ville’s Theorem). We will see that, when reframed in computability-theoretic terms, Wald’s definition of randomness coincides with a well-known algorithmic randomness notion and that his overall approach is very close, both formally and conceptually, to a recent framework for modeling algorithmic randomness that rests on learning-theoretic tools and intuitions.
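For readers less familiar with the frequency interpretation, the identification the abstract refers to can be written as a standard textbook formula (the notation is ours, not the speaker’s): given an infinite sequence of outcomes x_1, x_2, … and an event A,

```latex
% von Mises' frequency interpretation: the probability of A is its limiting
% relative frequency along a (random) sequence of outcomes x_1, x_2, ...
% (standard textbook rendering, not notation taken from the talk)
P(A) \;=\; \lim_{n \to \infty} \frac{\#\{\, i \le n \;:\; x_i \in A \,\}}{n}
```

Von Mises requires this limit to exist, and to remain the same along any subsequence chosen by an admissible place-selection rule; the random sequences are exactly those for which this holds.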

Zoom Link: https://us02web.zoom.us/j/89442594004

Categories
seminar

Giuseppe Sanfilippo’s Seminar

This Wednesday, come learn and have a chat about uncertain reasoning with us! Our group is thrilled to announce that Giuseppe Sanfilippo (University of Palermo) will give an invited talk on logical operations among conditional events under coherence. His talk will begin at 1 PM (CET) sharp on Wednesday, November 10. As usual, don’t forget to visit our YouTube channel if you missed our past talks, and follow us on Twitter to stay up to date with our latest news.

Title: Logical operations among conditional events in the setting of coherence

Abstract: In de Finetti’s subjective theory of probability, given any event A, the probability P(A) represents the degree of belief in A. In order to assess P(A) in the betting framework, one agrees to pay, for instance, P(A) in exchange for receiving 1 or 0 according to whether A turns out to be true or false, respectively. The consistency of probability assessments is checked by the coherence principle. Imagine an experiment where you flip a coin twice; each (valid) flip has two possible outcomes: “head” or “tail”. Consider the conjunction “the outcome of the first flip is head and the outcome of the second flip is head”. Defining the events A = “the outcome of the first flip is head” and B = “the outcome of the second flip is head”, we denote this conjunction by AB; it is true when both A and B are true, and false when A or B is false. If you judge P(AB) = p, then in a bet on AB you agree to pay, for instance, p in exchange for receiving 1 or 0 according to whether AB turns out to be true or false, respectively. What is the “logical value” of AB when the first flip comes up head and the coin stands on its edge at the second flip? What happens to the bet? If we call a flip valid when “the coin does not stand on its edge, or similar things”, that is, when “the outcome of the flip is head or tail”, the previous conjunction can be interpreted as a conjoined conditional: “the outcome of the first flip is head, given that it is head or tail, and the outcome of the second flip is head, given that it is head or tail.” How can we assess a degree of belief in this conjoined conditional? Another aspect concerns the assessment of degrees of belief in iterated conditionals such as “if the mother is angry if the son gets a B, then she will be furious if the son gets a C”. Usually, in the literature, the conjunction and the disjunction of conditional events have been defined as suitable conditional events; however, in this way many classical probabilistic properties are lost. We illustrate the notions of conjunction, disjunction and conditioning among conditional events, which are defined (not as conditional events, but) as suitable conditional random quantities with values in [0,1], in the setting of coherence. These logical operations satisfy the basic probabilistic properties valid for unconditional events. We show that some intuitively acceptable compound sentences on conditionals can be analyzed in a rigorous way in terms of suitable iterated conditionals. We give a characterization of Adams’ probabilistic entailment for conditionals. Moreover, by exploiting iterated conditionals, we show that the p-entailment of a conditional event E|H from a p-consistent family F is characterized by the property that the iterated conditional (E|H)|C(F) is constant and coincides with 1. Finally, we illustrate the characterization, in terms of iterated conditionals, of some well-known p-valid and non-p-valid inference rules.
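As a concrete rendering of the betting interpretation used in the abstract (a standard de Finetti-style formulation; the notation is ours, not necessarily the speaker’s): a bet on a conditional event A|H at rate p = P(A|H) is called off when the conditioning event H fails, so A|H behaves as a three-valued object,

```latex
% Three-valued (de Finetti) interpretation of a conditional event A|H,
% with p = P(A|H): the bet is called off and the stake returned when H is false.
A|H \;=\;
\begin{cases}
  1 & \text{if } AH \text{ is true},\\
  0 & \text{if } \bar{A}H \text{ is true},\\
  p & \text{if } \bar{H} \text{ is true (bet called off)}.
\end{cases}
```

In the coin example, the flip where the coin stands on its edge makes the conditioning event “head or tail” false, which is exactly why the conjoined conditional can take a value in [0,1] rather than just 0 or 1; this is why the conjunction of two such conditionals is defined as a conditional random quantity rather than as another conditional event.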

Zoom link: https://us02web.zoom.us/j/84034724300

Categories
seminar

Riccardo Guidotti’s Seminar

Our Lunch Seminar series continues this week! We are very much looking forward to having Riccardo Guidotti (University of Pisa) next Wednesday (October 27), starting at 1 PM. His talk on auto-encoders for explainable AI fits very well with the machine ethics seminar given by Marija two weeks ago. Make sure you don’t miss this one if you want to know more about these exciting topics in Artificial Intelligence. See below for the link and more information.

Title: Exploiting Auto-Encoders for Explaining Black Box Classifiers

Abstract: Artificial Intelligence (AI) is nowadays one of the most important scientific and technological areas, with a tremendous socio-economic impact and pervasive adoption in every field of modern society. Many applications in different fields, such as credit score assessment, medical diagnosis, autonomous vehicles, and spam filtering, are based on AI decision systems. Unfortunately, these systems often reach their impressive performance through obscure machine learning models that “hide” the logic of their internal decision processes from humans because it is not humanly understandable. For this reason, these models are called black box models, i.e., models used by AI to accomplish a task for which the logic of the decision process is either not accessible, or accessible but not human-understandable. Examples of machine learning black box models adopted by AI systems include deep neural networks, ensemble classifiers, and so on. The missing interpretability of black box models is a crucial issue for ethics and a limitation to AI adoption in socially sensitive and safety-critical contexts such as healthcare and law. As a consequence, research in eXplainable AI (XAI) has recently attracted much attention, and there has been an ever-growing interest in this research area in providing explanations of the behavior of black box models. A promising line of research in XAI exploits autoencoders for explaining black box classifiers working on non-tabular data (e.g., images, time series, and texts). The ability of autoencoders to compress any data into a low-dimensional tabular representation, and then reconstruct it with negligible loss, provides a great opportunity to work in the latent space for the extraction of meaningful explanations, for example through the generation of new synthetic samples, consistent with the input data, that can be fed to the black box to understand where its decision boundary lies. In this presentation we discuss recent XAI solutions based on autoencoders that enable the extraction of meaningful explanations composed of factual and counterfactual rules, and of exemplar and counter-exemplar samples, offering a deep understanding of the local decision of the black box.
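To make the latent-space idea concrete, here is a minimal sketch (in PyTorch) of the generic recipe the abstract describes: encode an instance, perturb it in latent space, decode the perturbations into synthetic neighbours, and label them with the black box. This is an illustration of ours, not a reimplementation of the speaker’s published methods; the names Autoencoder, latent_neighbourhood, black_box and all parameter values are hypothetical placeholders.

```python
# Minimal, illustrative sketch of the latent-space recipe described above:
# encode an instance, sample around it in latent space, decode the samples,
# and label them with the black box. NOT the speaker's actual implementation;
# all names and parameters below are hypothetical.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim: int = 784, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def latent_neighbourhood(ae, black_box, x, n_samples=200, sigma=0.1):
    """Generate synthetic neighbours of x by Gaussian perturbation in the
    latent space, then query the black box; the labelled neighbourhood can
    be used to fit an interpretable local surrogate (rules, a tree, ...)."""
    with torch.no_grad():
        z = ae.encoder(x)                          # latent code of x
        noise = sigma * torch.randn(n_samples, z.shape[-1])
        x_synth = ae.decoder(z + noise)            # decode perturbed codes
        labels = black_box(x_synth).argmax(dim=1)  # black-box labels
    return x_synth, labels
```

Synthetic samples whose labels differ from the label of x sit across the black box’s local decision boundary, which is the kind of material from which counterfactual rules and counter-exemplars are extracted.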

Short Bio: Riccardo Guidotti is currently an Assistant Professor (RTD-B) at the Department of Computer Science, University of Pisa, Italy, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. He graduated cum laude in Computer Science at the University of Pisa (BS in 2010, MS in 2013) and received a PhD in Computer Science from the same institution with a thesis on Personal Data Analytics. He won an IBM fellowship and was an intern at IBM Research Dublin, Ireland, in 2015. His research interests are in personal data mining, clustering, explainable models, and the analysis of transactional data.

Zoom link: https://us02web.zoom.us/j/86952228423

Categories
seminar

Marija Slavkovik’s Seminar

Our Lunch Seminar series re-starts tomorrow! We will host Marija Slavkovik (University of Bergen) who will give a talk on Machine Ethics: Introduction to the Area and Research Challenges. See below for more information and the abstract:

Speaker: Marija Slavkovik (University of Bergen), website: http://slavkovik.com

Title: Machine Ethics: Introduction to the Area and Research Challenges

Abstract: The talk introduces the research area of machine ethics, which tries to answer the question of how to automate moral reasoning. While trolley problems are the first thing that comes to mind when talking about machine ethics, the subtler, more everyday decisions made by computational agents are its real challenge. Machine ethics was established as a field in 2006 and is dominated by symbolic approaches. We start by introducing AI ethics as a field in general and the place of machine ethics within it. We then consider some existing machine ethics challenges and open questions.

Zoom link: Please email fabio.dasaro@unimi.it for the link

Categories
news

Introducing LUCI

We are starting the new academic year with a bang! We have a brand new name: Logic Group Milano is now LUCI (Logic, Uncertainty, Computation and Information group). We are currently working on refreshing this website, so keep coming back for more exciting stuff!

Categories
seminar

LUCI Lunch Seminars are back!

We are delighted to announce that our series of Lunch Seminars is back! Check out the list of our invited speakers below and do not forget to follow us on social media (Twitter and Instagram) to stay up to date with our news.

Our invited speakers are:

Marija Slavkovik, University of Bergen (13 October 2021)
Riccardo Guidotti, University of Pisa (27 October 2021)
Giuseppe Sanfilippo, University of Palermo (10 November 2021)
Francesca Zaffora Blando, Carnegie Mellon University (24 November 2021)
Alessandro Aldini, University of Urbino (1 December 2021)
Richard Booth, Cardiff University (15 December 2021)

All talks will take place at 1 PM CET.

Categories
seminar

AI³ Workshop 2021

We are happy to announce that our members Marcello D’Agostino, Costanza Larese and Fabio Aurelio D’Asaro are organizing the 5th Workshop on Advances in Argumentation in AI (AI³ 2021), co-located with the 20th International Conference of the Italian Association for Artificial Intelligence (AI*IA 2021). We invite (possibly non-original) submissions on the applications and theory of computational argumentation. For more information, please take a look at the workshop’s website and the call for papers at https://sites.google.com/view/ai3-2021/cfp. The AIxIA conference will take place in Milan on December 1st-3rd, 2021.

Our workshop will take place on November 29th, from 9 AM to 6 PM (CET).

Categories
events publications

Logic Group at ISIPTA 2021

We are happy to announce that our three papers are now available online in the Proceedings of ISIPTA 2021. Check out our Publications Page for more information.

Categories
Open position

PhD projects available

The Group is active in the Mind, Brain and Reasoning doctoral programme, which is now advertising 4 fully funded three-year PhD scholarships. If you are interested in applying to work with us, please note that we have a list of projects for which we are offering supervision.

Categories
Open position

Postdoc in Logical Foundations of AI – Deadline 30th June 2021

The Logic Group is thrilled to advertise a postdoc position (two years, renewable) within the project “Logical Foundations of AI”.

The project will be developed within a research line that contributes to bridging the gap between the statistical methodologies at the basis of (supervised and unsupervised) machine learning and logic in the development of AI. We aim to develop logics for reasoning under uncertainty and with limited resources, in order to analyse and check the transparency and trustworthiness of AI systems. Properties of interest include, but are not limited to, causality, safety and fairness. The selected candidate will join a thriving research group based at the Department of Philosophy of the University of Milan, and will work under the joint supervision of Marcello D’Agostino and Giuseppe Primiero.

For more information, see the Open Positions page.