MCMP – Mathematical Philosophy (Archive 2011/12)

Mathematical Philosophy - the application of logical and mathematical methods in philosophy - is about to experience a tremendous boom in various areas of philosophy. At the new Munich Center for Mathematical Philosophy, which is funded mostly by the German Alexander von Humboldt Foundation, philosophical research will be carried out mathematically, that is, by means of methods that are very close to those used by scientists.

  • 1 hour 1 minute
    Modality and Categories
    Steve Awodey (CMU/MCMP) gives a talk at the MCMP Workshop on Modality titled "Modality and Categories".
    22 April 2019, 7:51 pm
  • 1 hour 18 minutes
    Adaptive Logics: Introduction, Applications, Computational Aspects and Recent Developments
    Peter Verdée (Ghent) gives a talk at the MCMP Colloquium (8 Feb, 2012) titled "Adaptive Logics: Introduction, Applications, Computational Aspects and Recent Developments". Abstract: Peter Verdée ([email protected]), Centre for Logic and Philosophy of Science, Ghent University, Belgium. In this talk I give a thorough introduction to adaptive logics (cf. [1, 2, 3]). Adaptive logics were first devised by Diderik Batens and are now the main research area of the logicians in the Centre for Logic and Philosophy of Science in Ghent. First I explain the main purpose of adaptive logics: formalizing defeasible reasoning in a unified way, aiming at a normative account of fallible rationality. I give an informal characterization of what we mean by the notion 'defeasible reasoning' and explain why it is useful and interesting to formalize this type of reasoning by means of logics. Then I present the technical machinery of the so-called standard format of adaptive logics. The standard format is a general way to define adaptive logics from three basic variables. Most existing adaptive logics can be defined within this format. It immediately provides the logics with a dynamic proof theory, a selection semantics and a number of important meta-theoretic properties. I proceed by giving some popular concrete examples of adaptive logics in standard form. I quickly introduce inconsistency-adaptive logics, adaptive logics for induction and adaptive logics for reasoning with plausible knowledge/beliefs. Next I present some computational results on adaptive logics. The adaptive consequence relations are in general rather complex (I proved that there are recursive premise sets such that their adaptive consequence sets are Π1-complex – cf. [4]). However, I argue that this does not harm the naturalistic aims of adaptive logics, given a specific view on the relation between actual reasoning and adaptive logics. Finally, two interesting recent developments are presented: (1) Lexicographic adaptive logics. They fall outside of the scope of the standard format, but have similar properties and are able to handle prioritized information. (2) Adaptive set theories. Such theories start from the unrestricted comprehension axiom scheme but are strong enough to serve as a foundation for an interesting part of classical mathematics, by treating the paradoxes in a novel, defeasible way. (A schematic sketch of the standard format follows this entry.)
    22 April 2019, 7:46 pm
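    A minimal sketch of the "standard format" mentioned in the abstract, following standard presentations of adaptive logics (e.g. Batens' work) rather than the recording itself; the symbols LLL, Ω and Dab are the usual ones from that literature:
    \[
      \mathbf{AL} \;=\; \langle \mathrm{LLL},\ \Omega,\ \text{strategy} \rangle
    \]
    % LLL: the lower limit logic; Ω: the set of abnormalities; strategy: reliability or minimal abnormality.
    % A is derivable from Γ on a (finite) condition Δ ⊆ Ω iff the lower limit logic proves A "unless some abnormality in Δ obtains":
    \[
      \Gamma \vdash_{\mathrm{LLL}} A \lor \mathrm{Dab}(\Delta), \qquad \mathrm{Dab}(\Delta) := \bigvee \Delta ,
    \]
    % and the chosen strategy determines which conditions Δ may be treated as false, i.e. which proof lines remain unmarked.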
  • 1 hour 19 minutes
    Belief Dynamics under Iterated Revision: Cycles, Fixed Points and Truth-tracking
    Sonja Smets (University of Groningen) gives a talk at the MCMP Colloquium titled "Belief Dynamics under Iterated Revision: Cycles, Fixed Points and Truth-tracking". Abstract: We investigate the long-term behavior of processes of learning by iterated belief revision with new truthful information. In the case of higher-order doxastic sentences, the iterated revision can even be induced by repeated learning of the same sentence (which conveys new truths at each stage by referring to the agent's own current beliefs at that stage). For a number of belief-revision methods (conditioning, lexicographic revision and minimal revision), we investigate the conditions under which iterated belief revision with truthful information stabilizes: while the process of model-changing by iterated conditioning always leads eventually to a fixed point (and hence all doxastic attitudes, including conditional beliefs, strong beliefs, and any form of "knowledge", eventually stabilize), this is not the case for other belief-revision methods. We show that infinite revision cycles exist (even when the initial model is finite and even in the case of repeated revision with one single true sentence), but we also give syntactic and semantic conditions ensuring that beliefs stabilize in the limit. Finally, we look at the issue of convergence to truth, giving both sufficient conditions ensuring that revision stabilizes on true beliefs, and (stronger) conditions ensuring that the process stabilizes on "full truth" (i.e. beliefs that are both true and complete). This talk is based on joint work with A. Baltag. (The three revision policies are sketched after this entry.)
    20 April 2019, 7:08 pm
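    For orientation, here is how the three revision policies named in the abstract are usually defined on plausibility orders in the dynamic-epistemic-logic literature (a hedged summary, not taken from the talk; the notation !P, ⇑P, ↑P is the customary one):
    % Given a plausibility preorder ≤ on worlds ("more plausible" = lower) and new information P:
    \[
    \begin{aligned}
    \text{conditioning } (!P):\;& \text{delete all } \neg P\text{-worlds and keep } \le \text{ on the remaining } P\text{-worlds};\\
    \text{lexicographic revision } (\Uparrow P):\;& \text{make every } P\text{-world strictly more plausible than every } \neg P\text{-world,}\\
    & \text{preserving the old order within each of the two zones};\\
    \text{minimal revision } (\uparrow P):\;& \text{promote only the most plausible } P\text{-worlds to the top,}\\
    & \text{leaving the rest of the order unchanged}.
    \end{aligned}
    \]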
  • 1 hour 31 minutes
    Tracking the Truth Requires a Non-wellfounded Prior!
    Alexandru Baltag (ILLC Amsterdam) gives a talk at the MCMP Colloquium titled "Tracking the Truth Requires a Non-wellfounded Prior! A Study in the Learning Power (and Limits) of Bayesian (and Qualitative) Update". Abstract: The talk is about tracking "full truth" in the limit by iterated belief updates. Unlike Sonja Smets's talk (which focused on finite models), we now allow the initial model (and thus the initial set of epistemic possibilities) to be infinite. We compare the truth-tracking power of various belief-revision methods, including probabilistic conditioning (also known as Bayesian update) and some of its qualitative, "plausibilistic" analogues (conditioning, lexicographic revision, minimal revision). We focus in particular on the question of whether any of these methods is "universal" (i.e. as good at tracking the truth as any other learning method). We show that this is not the case, as long as we keep the standard probabilistic (or belief-revision) setting. On the positive side, we show that if we consider appropriate generalizations of conditioning in a non-standard, non-wellfounded setting, then universality is achieved for some (though not all) of these learning methods. In the qualitative case, this means that we need to allow the prior plausibility relation to be a non-wellfounded (though total) preorder. In the probabilistic case, this means moving to a generalized conditional probability setting, in which the family of "cores" (or "strong beliefs") may be non-wellfounded (when ordered by inclusion or logical entailment). As a consequence, neither the family of classical probability spaces, nor that of lexicographic probability spaces, nor even the family of all countably additive (conditional) probability spaces, is rich enough to make Bayesian conditioning "universal" from a learning-theoretic point of view! This talk is based on joint work with Nina Gierasimczuk and Sonja Smets. (The wellfoundedness condition at issue is spelled out after this entry.)
    20 April 2019, 7:07 pm
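    Since non-wellfoundedness of the prior does the work in this talk, here is the relevant condition spelled out (a standard definition, not quoted from the recording):
    % A total plausibility preorder ≤ on a set of worlds W is wellfounded iff every nonempty S ⊆ W contains ≤-minimal ("most plausible") elements:
    \[
      \forall S \subseteq W,\ S \neq \emptyset \;\Rightarrow\; \exists w \in S\ \forall v \in S:\ w \le v .
    \]
    % The abstract's claim is that universal truth-tracking requires giving up exactly this condition, while totality of ≤ is retained.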
  • 54 minutes 20 seconds
    Possible Worlds, The Lewis Principle, and the Myth of a Large Ontology
    Ed Zalta (Stanford) gives a talk at the MCMP Workshop on Modality titled "Possible Worlds, The Lewis Principle, and the Myth of a Large Ontology".
    20 April 2019, 7:03 pm
  • 1 hour 9 minutes
    Accuracy, Chance, and the Principal Principle
    Richard Pettigrew (University of Bristol) gives a talk at the MCMP Colloquium titled "Accuracy, Chance, and the Principal Principle".
    20 April 2019, 7:03 pm
  • 1 hour 21 seconds
    Theory and Concept in Tarski's Philosophy of Language
    Douglas Patterson (Universität Leipzig) gives a talk at the MCMP Colloquium titled "Theory and Concept in Tarski's Philosophy of Language". Abstract: In this talk I will set out some of the background of Tarski's famous work on truth and semantics by looking at important views of his teachers Tadeusz Kotarbiński and Stanisław Leśniewski in the philosophy of language and the "methodology of deductive sciences". With the understanding of the assumed philosophy of language and logic of the important articles set out in this manner, I will look at a number of issues familiar from the literature. I will sort out Tarski's conception of "material adequacy", discuss the relationship between a Tarskian definition of truth and a conceptual analysis of a more familiar sort, and consider the consequences of the views presented for the question of whether Tarski was a deflationist or a correspondence theorist. (Convention T, the standard formulation of material adequacy, is written out after this entry.)
    20 April 2019, 6:58 pm
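    For reference, the "material adequacy" criterion discussed in the talk is standardly identified with Tarski's Convention T: an adequate truth definition for a language L must entail every instance of the T-schema. A schematic LaTeX rendering (a textbook formulation, not a quotation from the talk):
    \[
      \text{For every sentence } \varphi \text{ of } L:\qquad
      \ulcorner \varphi \urcorner \text{ is true in } L \;\leftrightarrow\; \varphi .
    \]
    % Classic instance: "Snow is white" is true if and only if snow is white.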
  • 1 hour 8 minutes
    The 'fitting problem' for logical semantic systems
    Catarina Dutilh Novaes (ILLC/Amsterdam) gives a talk at the MCMP Colloquium titled "The 'fitting problem' for logical semantic systems". Abstract: When applying logical tools to study a given extra-theoretical, informal phenomenon, it is now customary to design a deductive system, and a semantic system based on a class of mathematical structures. The assumption seems to be that they would each capture specific aspects of the target phenomenon. Kreisel has famously offered an argument on how, if there is a proof of completeness for the deductive system with respect to the semantic system, the target phenomenon becomes "squeezed" between the extensions of the two, thus ensuring the extensional adequacy of the technical apparatuses with respect to the target phenomenon: the so-called squeezing argument. However, besides a proof of completeness, for the squeezing argument to go through, two premises must obtain (for a fact e occurring within the range of the target phenomenon): (1) If e is the case according to the deductive system, then e is the case according to the target phenomenon. (2) If e is the case according to the target phenomenon, then e is the case according to the semantic system. In other words, the semantic system would provide the necessary conditions for e to be the case according to the target phenomenon, while the deductive system would provide the relevant sufficient conditions. But clearly, both (1) and (2) rely crucially on the intuitive adequacy of the deductive and the semantic systems for the target phenomenon. In my talk, I focus on the (im)plausibility of instances of (2), and argue that the adequacy of a semantic system for a given target phenomenon must not be taken for granted. In particular, I discuss the results presented in (Andrade-Lotero & Dutilh Novaes forthcoming) on multiple semantic systems for Aristotelian syllogistic, which are all sound and complete with respect to a reasonable deductive system for syllogistic (Corcoran's system D), but which are not extensionally equivalent; indeed, as soon as the language is enriched, they start disagreeing with each other as to which syllogistic arguments (in the enriched language) are valid. A plurality of apparently adequate semantic systems for a given target phenomenon brings to the fore what I describe as the "fitting problem" for logical semantic systems: what is to guarantee that these technical apparatuses adequately capture significant aspects of the target phenomenon? If the different candidates have strikingly different properties (as is the case here), then they cannot all be adequate semantic systems for the target phenomenon. More generally, the analysis illustrates the need for criteria of adequacy for semantic systems based on mathematical structures. Moreover, taking Aristotelian syllogistic as a case study illustrates the fruitfulness but also the complexity of employing logical tools in historical analyses. (The skeleton of the squeezing argument is written out after this entry.)
    20 April 2019, 6:54 pm
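    The skeleton of the squeezing argument described in the abstract can be written out as follows (a common reconstruction of Kreisel's argument; the predicate V for the informal target notion is a label introduced here, not the speaker's):
    % ⊢_D : derivable in the deductive system; ⊨_S : valid in the semantic system; V(e): e holds according to the informal target phenomenon (e.g. intuitive validity).
    \[
      (1)\ \vdash_D e \;\Rightarrow\; V(e), \qquad
      (2)\ V(e) \;\Rightarrow\; \models_S e, \qquad
      (\text{completeness})\ \models_S e \;\Rightarrow\; \vdash_D e ,
    \]
    % so the three notions coincide in extension: ⊢_D e iff V(e) iff ⊨_S e.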
  • 54 minutes 22 seconds
    The conservativity of truth and the disentanglement of syntax and semantics
    Volker Halbach (Oxford) gives a talk at the MCMP Colloquium titled "The conservativity of truth and the disentanglement of syntax and semantics".
    20 April 2019, 6:54 pm
  • 1 hour 15 minutes
    Cognitive motivations for treating formalisms as calculi
    Catarina Dutilh Novaes (ILLC/Amsterdam) gives a talk at the MCMP Colloquium titled "Cognitive motivations for treating formalisms as calculi". Abstract: In The Logical Syntax of Language, Carnap famously recommended that logical languages be treated as mere calculi, and that their symbols be viewed as meaningless; reasoning with the system is to be guided solely on the basis of its rules of transformation. Carnap's main motivation for this recommendation seems to be related to a concern with precision and exactness. In my talk, I argue that Carnap was right in insisting on the benefits of treating logical formalisms as calculi, but he was wrong in thinking that enhanced precision is the main advantage of this approach. Instead, I argue that a deeper impact of treating formalisms as calculi is of a cognitive nature: by adopting this stance, the reasoner is able to counter some of her "default" reasoning tendencies, which (although advantageous in most practical situations) may hinder the discovery of novel facts in scientific contexts. One of these cognitive tendencies is the constant search for confirmation of the beliefs one already holds, as extensively documented and studied in the psychology of reasoning literature, and often referred to as confirmation bias/belief bias. Treating formalisms as meaningless and relying on their well-defined rules of formation and transformation allows the reasoner to counter her own belief bias for two main reasons: it 'switches off' semantic activation, which is thought to be a largely automatic cognitive process, and it externalizes reasoning processes; they now take place largely through the manipulation of the notation. I argue moreover that the manipulation of the notation engages predominantly sensorimotor processes rather than being carried out internally: the agent is literally 'thinking on the paper'. The analysis relies heavily on empirical data from psychology and cognitive science, and is largely inspired by recent literature on extended cognition (in particular Clark, Menary and Sutton). If I am right, formal languages treated as calculi and viewed as external cognitive artifacts offer a crucial cognitive boost to human agents, in particular in that they seem to produce a beneficial de-biasing effect.
    20 April 2019, 6:54 pm
  • 47 minutes 34 seconds
    Do 'Looks' Reports Reflect the Contents of Perception?
    Berit Brogaard (University of Missouri, St. Louis) gives a talk at the MCMP Colloquium titled "Do 'Looks' Reports Reflect the Contents of Perception?"
    20 April 2019, 6:51 pm