MCMP – Epistemology

Mathematical Philosophy - the application of logical and mathematical methods in philosophy - is about to experience a tremendous boom in various areas of philosophy. At the new Munich Center for Mathematical Philosophy, which is funded mostly by the German Alexander von Humboldt Foundation, philosophical research will be carried out mathematically, that is, by means of methods that are very close to those used by scientists.

  • 59 minutes 34 seconds
    Reasoning biases and non-monotonic logics
    Catarina Dutilh Novaes (Groningen) gives a talk at the MCMP Colloquium (24 January, 2013) titled "Reasoning biases and non-monotonic logics". Abstract: Stenning and van Lambalgen (2008) have argued that much of what is described in the psychology of reasoning literature as ‘reasoning biases’ can more accurately be accounted for by means of the concept of defeasible, non-monotonic reasoning. They rely on the AI framework of closed-world reasoning as the formal background for their investigations. In my talk, I continue the project of reassessing reasoning biases from a non-monotonic point of view, but use instead the semantic approach to non-monotonic logics presented in Shoham (1987), known as preferential semantics. I focus in particular on the so-called belief-bias effect and the Modus Ponens-Modus Tollens asymmetry. The ease with which these reasoning patterns are accounted for from a defeasible reasoning point of view lends support to the claim that (untrained) human reasoning has a strong component of defeasibility. I conclude with some remarks on Marr’s ‘three levels of analysis’ and the role of formal frameworks for the empirical investigation of human reasoning.
    18 April 2019, 10:01 pm
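    For orientation, the preferential semantics of Shoham (1987) mentioned in the abstract above is standardly stated as follows (a reference gloss, not part of the talk): fix a preference order ⊑ on models, and call a model of a premise set Γ preferred if no strictly ⊑-better model also satisfies Γ. Then
        Γ |≈ φ   iff   every preferred model of Γ satisfies φ.
    Since adding premises can change which models count as preferred, |≈ is non-monotonic: Γ |≈ φ does not guarantee Γ ∪ {ψ} |≈ φ.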
  • 58 minutes 13 seconds
    Reasons to Believe and Reasons to not
    Jake Chandler (Leuven) gives a talk at the MCMP Colloquium (6 February, 2013) titled "Reasons to Believe and Reasons to not." Abstract: The provision of a precise, formal treatment of the relation of evidential relevance, i.e. of providing a reason to hold or to withhold a belief, has arguably constituted the principal selling point of Bayesian modeling in contemporary epistemology and philosophy of science. By the same token, the lack of an analogous proposal in so-called AGM belief revision theory, a powerful and elegant qualitative alternative to the Bayesian framework, is likely to have significantly contributed to its relatively marginal status in the philosophical mainstream. In the present talk, I sketch out a corrective to this deficiency, offering a suggestion, within the context of belief revision theory, concerning the relation between beliefs about evidential relevance and commitments to certain policies of belief change. Aside from shedding light on the status of various important evidential ‘transmission’ principles, this proposal also constitutes a promising basis for the elaboration of a logic of so-called epistemic defeaters.
    18 April 2019, 9:59 pm
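    The Bayesian treatment of evidential relevance that the abstract above credits as the framework's principal selling point is standardly cashed out as probabilistic relevance (a reference gloss, not the speaker's formulation):
        E is evidence for H   iff   P(H | E) > P(H),
        E is evidence against H   iff   P(H | E) < P(H),
        E is evidentially irrelevant to H   iff   P(H | E) = P(H).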
  • 48 minutes 41 seconds
    Garber and Field Conditionalization
    Benjamin Bewersdorf (Konstanz) gives a talk at the MCMP Colloquium (23 January, 2013) titled "Garber and Field Conditionalization". Abstract: The most influential formal account of rational belief change is Jeffrey conditionalization. Given a plausible assumption on the role of experiences for belief change, Jeffrey conditionalization turns out to be incomplete. Field tried to complete Jeffrey conditionalization by adding an input law to it. But, as Garber has pointed out, the resulting theory has a serious weakness. In the following, I will both generalize Garber's objection against Field's account and show how Field's account can be modified to avoid it.
    18 April 2019, 9:56 pm
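    For reference, Jeffrey conditionalization in its standard form (Field's input law and the modification discussed in the talk are not reproduced here) says that when experience shifts the probabilities over a partition {E_1, ..., E_n} to new values P_new(E_i), beliefs should be updated by
        P_new(H) = Σ_i P_old(H | E_i) · P_new(E_i).
    When some P_new(E_k) = 1, this reduces to strict conditionalization on E_k.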
  • 53 minutes 21 seconds
    How to be a truthy psychologist about evidence
    Veli Mitova (Vienna) gives a talk at the MCMP Colloquium (6 June, 2013) titled "How to be a truthy psychologist about evidence". Abstract: I defend the view that the only things that count as evidence for belief are factive tokens of psychological states. I first assume that the evidence for p can sometimes be a good reason to believe that p. I then argue, with some help from metaethics 101, that a reason is a beast of two burdens: it must be capable of being both a good reason and a motive. I then show that truthy psychologism is the only position that can honour The Beast of Two Burdens Thesis, without ruffling our pre-101 intuitions about good reasons, motives, and explanations.
    18 April 2019, 9:55 pm
  • 59 minutes 41 seconds
    Structure Induction in Diagnostic Causal Reasoning
    Michael R. Waldmann (Göttingen) gives a talk at the MCMP Colloquium (23 April, 2014) titled "Structure Induction in Diagnostic Causal Reasoning". Abstract: Our research examines the normative and descriptive adequacy of alternative computational models of diagnostic reasoning from single effects to single causes. Many theories of diagnostic reasoning are based on the normative assumption that inferences from an effect to its cause should reflect solely the empirically observed conditional probability of cause given effect. We argue against this assumption, as it neglects alternative causal structures that may have generated the sample data. Our structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure. A key prediction of the model is that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner’s beliefs about the existence and strength of the link between cause and effect. We confirmed this prediction in two studies and showed that our theory better accounts for human judgments than alternative theories of diagnostic reasoning. Overall, our findings support the view that in diagnostic reasoning people go “beyond the information given” and use the available data to make inferences at the (unobserved) causal level, rather than at the (observed) data level.
    18 April 2019, 9:28 pm
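    The "empirical probability of cause given effect" that serves as the normative benchmark criticized in the abstract above is the familiar Bayesian diagnostic probability, stated here for orientation (the structure induction model itself, which additionally averages over candidate causal structures, is not reproduced):
        P(C | E) = P(E | C) P(C) / [ P(E | C) P(C) + P(E | ¬C) P(¬C) ].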
  • 40 minutes 57 seconds
    Epistemically Detrimental Dissent and the Millian Argument against the Freedom of Inquiry
    Anna Leuschner (KIT) gives a talk at the MCMP Colloquium (9 April, 2014) titled "Epistemically Detrimental Dissent and the Millian Argument against the Freedom of Inquiry". Abstract: I'll present joint work that I have been conducting with Justin Biddle. The idea of epistemically problematic dissent is counterintuitive at first glance; as Mill argues, even misguided dissent from a consensus position can be epistemically fruitful, as it can lead to a deeper understanding of consensus positions. Yet, focusing on climate science, we argue that dissent can be epistemically problematic when it leads to a distortion of risk assessment in mainstream science. I'll examine the conditions under which dissent in science is epistemically detrimental, provide empirical support for this finding, and conclude with a discussion of the normative consequences of these findings by considering Philip Kitcher’s "Millian argument against the freedom of inquiry".
    18 April 2019, 9:26 pm
  • 49 minutes 27 seconds
    Models of rationality and the psychology of reasoning (From is to ought, and back)
    Vincenzo Crupi (Turin) gives a talk at the MCMP Colloquium (16 April, 2014) titled "Models of rationality and the psychology of reasoning (From is to ought, and back)". Abstract: Diagnoses of (ir)rationality often arise from the experimental investigation of human reasoning. Relying on joint work with Vittorio Girotto, I will suggest that such diagnoses can be disputed on various grounds, and provide a classification. I will then argue that much fruitful research done with classical experimental paradigms was triggered by normative concerns and yet fostered insight in properly psychological terms. My examples include the selection task, the conjunction fallacy, and so-called pseudodiagnosticity. Conclusion: normative considerations retain a constructive role for the psychology of reasoning, contrary to recent complaints in the literature.
    18 April 2019, 9:25 pm
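    As a reminder of one of the experimental paradigms listed in the abstract above, the conjunction fallacy consists in judgments that violate the conjunction rule of probability theory (an illustrative gloss, not drawn from the talk):
        P(A ∧ B) ≤ P(A),
    so rating a conjunction as more probable than one of its own conjuncts is probabilistically incoherent, whatever the probabilities involved.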
  • 56 minutes 59 seconds
    Epistemic Landscapes and Optimal Search
    Jason McKenzie (LSE) gives a talk at the MCMP Colloquium (30 April, 2014) titled "Epistemic Landscapes and Optimal Search". Abstract: In a paper from 2009, Michael Weisberg and Ryan Muldoon argue that there exist epistemic reasons for the division of cognitive labour. In particular, they claim that a heterogeneous population of agents, in which people use a variety of socially responsive search rules, proves more capable at exploring an “epistemic landscape” than a homogeneous population. We show, through a combination of analytic and simulation results, that this claim is not true, and identify why Weisberg and Muldoon obtained the results they did. We then show that, in the case of arguably more “realistic” landscapes, based on Kauffman’s NK-model of “tunably rugged” fitness landscapes, social learning frequently provides no epistemic benefit whatsoever. Although there surely are good epistemic reasons for the division of cognitive labour, we conclude that Weisberg and Muldoon did not show that “a polymorphic population of research strategies thus seems to be the optimal way to divide cognitive labor”.
    18 April 2019, 9:25 pm
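    In Kauffman's NK-model referred to in the abstract above, a candidate solution is a string of N sites, each site i contributes a fitness component that depends on its own state and on the states of K other sites, and overall fitness is the average of the components (a standard statement of the model, not of the paper's specific landscapes):
        F(x_1, ..., x_N) = (1/N) Σ_{i=1}^{N} f_i(x_i; x_{i_1}, ..., x_{i_K}).
    With K = 0 each site contributes independently and the landscape is smooth and single-peaked; increasing K toward N − 1 makes the landscape increasingly rugged, which is why it is described as “tunably rugged”.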
  • 42 minutes 28 seconds
    Fast, Frugal and Focused: When less information leads to better decisions
    Gregory Wheeler (MCMP) gives a talk at the MCMP Colloquium (25 June, 2014) titled "Fast, Frugal and Focused: When less information leads to better decisions". Abstract: People frequently do not abide by the total evidence norm of classical Bayesian rationality but instead use just a few items of information among the many available to them. Gerd Gigerenzer and colleagues have famously shown that decision-making with less information often leads to objectively better outcomes, which raises an intriguing normative question: if we could say precisely under what circumstances this "less is more" effect occurs, we conceivably could say when people should reason the Fast and Frugal way rather than the classical Bayesian way. In this talk I report on results from joint work with Konstantinos Katsikopoulos that resolve a puzzle in the mathematical psychology literature over attempts to explain the conditions responsible for this "less is more" effect. What is more, there is a surprisingly deep connection between the "less is more" effect and coherentist justification. In short, the conditions that are good for coherentism are lousy for single-reason strategies, and vice versa.
    18 April 2019, 9:19 pm
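    The "single-reason strategies" contrasted with coherentism at the end of the abstract above are lexicographic heuristics of Gigerenzer's Take The Best family: cues are checked in order of validity, and the first cue that discriminates between the options decides, with all remaining information ignored. A minimal sketch in Python, assuming cues are supplied as functions returning +1, -1 or 0 (an illustration of the general strategy type, not of the joint results reported in the talk):
        def take_the_best(cues, a, b):
            # cues: functions ordered by (estimated) validity; each returns +1 if it
            # favours option a, -1 if it favours option b, and 0 if it does not discriminate.
            for cue in cues:
                verdict = cue(a, b)
                if verdict != 0:
                    # decide on the first discriminating cue and ignore the rest
                    return a if verdict > 0 else b
            return None  # no cue discriminates: guess or abstain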
  • 1 hour 1 minute
    The Principal Principle implies the Principle of Indifference
    Jon Williamson (Kent) gives a talk at the MCMP Colloquium (8 October, 2014) titled "The Principal Principle implies the Principle of Indifference". Abstract: I'll argue that David Lewis' Principal Principle implies a version of the Principle of Indifference. The same is true for similar principles which need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism. One might try to avoid this conclusion by disavowing the link between conditional beliefs and conditional probabilities that is almost universally endorsed by Bayesians. I'll explain why this move offers no succour to the subjectivist.
    18 April 2019, 9:17 pm
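    For orientation, the two principles in the title are standardly glossed as follows (textbook formulations, not the speaker's own statements): Lewis's Principal Principle requires credences to defer to known chances in the presence of admissible evidence,
        P(A | ch(A) = x ∧ E) = x   for any admissible E,
    while the Principle of Indifference says that if the evidence does not discriminate between n mutually exclusive and jointly exhaustive alternatives A_1, ..., A_n, then each should receive credence P(A_i) = 1/n.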
  • 52 minutes 22 seconds
    New Responses to Some Purported Counterexamples to Likelihoodist Principles
    Greg Gandenberger (Pittsburgh) gives a talk at the MCMP Colloquium (22 April, 2015) titled "New Responses to Some Purported Counterexamples to Likelihoodist Principles". Abstract: The Likelihood Principle is important because the frequentist statistical methods that are most commonly used in science violate it, while rival likelihoodist and Bayesian methods do not. It is supported by a variety of arguments, including several proofs from intuitively plausible axioms. It also faces many objections, including several purported counterexamples. In this talk, I provide new responses to four purported counterexamples to the Likelihood Principle and its near-corollary, the Law of Likelihood, that are not adequately addressed in the existing literature. I first respond to examples due to Fitelson and Titelbaum that I argue are adequately addressed by restricting the Law of Likelihood to mutually exclusive hypotheses. I then respond to two counterexamples from the statistical literature. My responses to these latter examples are novel in that they do not appeal to prior probabilities, which is important for attempts to use the Likelihood Principle to provide an argument for Bayesian approaches that does not presuppose the permissibility of using prior probabilities in science.
    21 July 2015, 1:00 pm
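    For reference, the likelihoodist principles named in the abstract above are standardly put as follows (a gloss, not the speaker's formulation): the Law of Likelihood says that evidence E favours hypothesis H1 over H2 just in case
        P(E | H1) > P(E | H2),
    with the likelihood ratio P(E | H1) / P(E | H2) measuring the degree of favouring; the Likelihood Principle says that the evidential import of E for a set of hypotheses depends only on the likelihood function, i.e. on P(E | H) as a function of H.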