Joint Colloquium of the CAIR and Information Engineering Group

Upcoming talks:

Abstract: In probabilistic belief revision, the kinematics principle is a well-known and powerful principle ensuring that changing the probabilities of facts does not unnecessarily change conditional probabilities. A related principle, the principle of conditional preservation, has also been one of the main guidelines for the axioms of iterated belief revision in the seminal paper by Darwiche and Pearl. However, to date, a fully elaborated kinematics principle for iterated revision has not been presented. In this paper, we aim to fill this gap by proposing a qualitative kinematics principle for the iterated revision of epistemic states represented by total preorders. As new information, we allow sets of conditional beliefs, going far beyond the current state of the art of belief revision. We introduce a qualitative conditioning operator for total preorders which is compatible with conditioning for Spohn's ranking functions as far as possible, and transfer the technique of c-revisions to total preorders to provide a proof of concept for our kinematics principle, at least for special revision scenarios.
This work presents a qualitative version of the work presented in the paper "Ranking kinematics for revising by contextual information" (Meliha Sezgin, Gabriele Kern-Isberner and Christoph Beierle, Annals of Mathematics and Artificial Intelligence, volume 89).
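The probabilistic kinematics principle mentioned in this abstract is commonly formalised as Jeffrey conditionalization; the following is a standard textbook statement (for orientation only, not taken from the talk itself):

```latex
% Jeffrey conditionalization: revising P to P' given new probabilities
% P'(A_i) on a partition {A_1, ..., A_n} of the possible worlds.
P'(\omega) \;=\; \sum_{i=1}^{n} P(\omega \mid A_i)\, P'(A_i)
% Kinematics: the conditional probabilities given each partition cell
% are preserved by the revision:
\qquad P'(\,\cdot \mid A_i) \;=\; P(\,\cdot \mid A_i) \quad \text{for all } i .
```

The qualitative principle proposed in the talk transfers this preservation idea from probability distributions to total preorders.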

Past talks:

Abstract: Lexicographic inference is a well-behaved and popular approach to reasoning with non-monotonic conditionals. In recent work, we have shown that lexicographic inference satisfies syntax splitting, which means we can restrict our attention to parts of the belief base that share atoms with a given query. In this paper, we introduce the more general concept of conditional syntax splitting, inspired by the notion of conditional independence as known from probability theory. We show that lexicographic inference satisfies conditional syntax splitting, and connect conditional independence to several known properties from the literature on non-monotonic reasoning, including the drowning effect.
Abstract: According to Boutilier, Darwiche and Pearl, and others, principles for iterated revision can be characterised in terms of changing beliefs about conditionals. No similar formulation is known for iterated contraction. This is mainly because, for iterated belief change, the connection between revision and contraction via the Levi and Harper identities is not straightforward; therefore, characterisation results do not transfer easily between iterated revision and contraction. In this article, we develop an axiomatisation of iterated contraction in terms of changing conditional beliefs. We prove that the new set of postulates corresponds semantically to the class of operators such as those given by Konieczny and Pino Pérez for iterated contraction.
Abstract: In belief revision theory, conditionals are often interpreted via the Ramsey Test. However, the classical Ramsey Test fails to take into account a fundamental feature of conditionals as used in natural language: typically, the antecedent is relevant to the consequent. The Relevant Ramsey Test is an extension introduced by Rott that encodes a notion of relevance by using positive and negative information. We introduce a suitable approach for reasoning over belief bases with this kind of positive and negative information. Moreover, we investigate the interaction of mixed information via a property for partitions of conditional belief bases with positive and negative information, and propose a non-trivial extension of system Z that enables us to represent and reason over conditionals that encode relevance in the manner of the Relevant Ramsey Test.
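For readers unfamiliar with it, the classical Ramsey Test referenced above is usually stated as follows (standard notation; the Relevant Ramsey Test discussed in the talk refines this acceptance condition):

```latex
% Classical Ramsey Test: the conditional "if A then B" (written A > B)
% is accepted in belief set K iff B is believed after revising K by A.
A > B \in K \quad\Longleftrightarrow\quad B \in K \ast A
```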
Abstract: A logic is a normative framework, i.e., it specifies what correct reasoning is. On the other hand, average human reasoning can systematically deviate from the inferences drawn by classical logic. Hence, psychologists develop models that aim to describe the specifics of how humans reason, i.e., they are interested in a descriptive framework that can describe and explain how a human reasons. Is there an insuperable gap between descriptive and normative approaches to reasoning? Can logics describe the way humans reason? If so, what makes them cognitively adequate? In this talk, I will first discuss how we can best evaluate cognitive theories, logics, and machine learning approaches. In a second step, I will present the performance of current approaches for syllogistic and relational reasoning. In a third step, I will show how beliefs and reasoning mechanisms impact the conclusions drawn. A discussion of features of logics concludes the presentation.
Abstract: Empirical methods have been used to test whether human reasoning conforms to models of reasoning in logic-based artificial intelligence. In particular, studies have shown that human reasoning is consistent with non-monotonic logic and belief change. The former refers to models of reasoning where the same set of premises does not always yield the same conclusion. The latter refers to the operations of revising and updating a set of beliefs when presented with new information. The operation of revision requires consistency to be maintained in the belief set, allowing inconsistent parts to be deleted. In contrast, the operation of update does not have this restriction: it assumes the reasoner was not aware of a change in the world that occurred, and the reasoner must refresh their beliefs by incorporating the new information. Our work surveyed postulates of belief change with human reasoners. First, we studied the role of the postulates of belief revision and belief update in the literature. Next, we decomposed the postulates of revision and update into material implication statements, each containing a premise and a conclusion. We translated the premises and conclusions into English and surveyed the postulate translations with human reasoners. The main task of the surveys was for participants to judge the translated postulate components for plausibility. For our data analysis, we used statistical methods to measure the strength of the association between the premise and the conclusion of each postulate. For our data interpretation, we applied possibility theory to examine whether the association for each postulate was significant for the broader population of English-speaking human reasoners. The results show that our participants' reasoning tends to be consistent with the postulates of belief revision and belief update when the premises and conclusions of the postulates are judged separately.
Abstract: In this talk, we investigate inductive inference with system W from conditional belief bases with respect to syntax splitting. The concept of syntax splitting for inductive inference states that inferences about independent parts of the signature should not affect each other. This was captured in work by Kern-Isberner, Beierle, and Brewka in the form of postulates for inductive inference operators expressing syntax splitting as a combination of relevance and independence; it was also shown that c-inference fulfils syntax splitting, while system P inference and system Z both fail to satisfy it. System W is a recently introduced inference system for nonmonotonic reasoning that captures and properly extends system Z as well as c-inference. We show that system W fulfils the syntax splitting postulates for inductive inference operators by showing that it satisfies the required properties of relevance and independence. This makes system W another inference operator besides c-inference that fully complies with syntax splitting, while, in contrast to c-inference, also extending rational closure.
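Several of the abstracts above refer to conditionals over ranking functions; the standard acceptance condition for a conditional under Spohn's ranking semantics, on which systems Z and W build, is (standard definition, added for orientation):

```latex
% Spohn's ranking semantics: a ranking function kappa assigns each world
% a degree of implausibility (0 = most plausible). A conditional (B|A)
% is accepted iff verifying it is strictly more plausible (lower rank)
% than falsifying it.
\kappa \models (B \mid A)
\quad\Longleftrightarrow\quad
\kappa(A \wedge B) \;<\; \kappa(A \wedge \neg B)
```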