Catarina Dutilh Novaes (ILLC/Amsterdam) gives a talk at the MCMP Colloquium titled "Cognitive motivations for treating formalisms as calculi". Abstract: In The Logical Syntax of Language, Carnap famously recommended that logical languages be treated as mere calculi, and that their symbols be viewed as meaningless; reasoning with the system is to be guided solely by its rules of transformation. Carnap's main motivation for this recommendation seems to be related to a concern with precision and exactness. In my talk, I argue that Carnap was right in insisting on the benefits of treating logical formalisms as calculi, but wrong in thinking that enhanced precision is the main advantage of this approach. Instead, I argue that a deeper impact of treating formalisms as calculi is of a cognitive nature: by adopting this stance, the reasoner is able to counter some of her 'default' reasoning tendencies, which (although advantageous in most practical situations) may hinder the discovery of novel facts in scientific contexts. One of these cognitive tendencies is the constant search for confirmation of the beliefs one already holds, extensively documented and studied in the psychology of reasoning literature and often referred to as confirmation bias/belief bias. Treating formalisms as meaningless and relying on their well-defined rules of formation and transformation allows the reasoner to counter her own belief bias for two main reasons: it 'switches off' semantic activation, which is thought to be a largely automatic cognitive process, and it externalizes reasoning processes, which now take place largely through the manipulation of the notation. I argue moreover that the manipulation of the notation engages predominantly sensorimotor processes rather than being carried out internally: the agent is literally 'thinking on the paper'. The analysis relies heavily on empirical data from psychology and the cognitive sciences, and is largely inspired by recent literature on extended cognition (in particular Clark, Menary and Sutton). If I am right, formal languages treated as calculi and viewed as external cognitive artifacts offer a crucial cognitive boost to human agents, in particular in that they seem to produce a beneficial de-biasing effect.
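To make concrete what reasoning purely by rules of transformation looks like, here is a minimal sketch (an illustration of the general idea, not taken from the talk) of a string-rewriting calculus in Python, Hofstadter's MIU system, in which derivations proceed without any appeal to what the symbols mean:

    from collections import deque

    # A toy string-rewriting calculus (Hofstadter's MIU system): derivations
    # proceed purely by rules of transformation over meaningless symbols.
    def successors(s):
        out = set()
        if s.endswith("I"):              # Rule 1: xI  -> xIU
            out.add(s + "U")
        if s.startswith("M"):            # Rule 2: Mx  -> Mxx
            out.add("M" + 2 * s[1:])
        for i in range(len(s) - 2):      # Rule 3: III -> U
            if s[i:i+3] == "III":
                out.add(s[:i] + "U" + s[i+3:])
        for i in range(len(s) - 1):      # Rule 4: UU  -> (empty)
            if s[i:i+2] == "UU":
                out.add(s[:i] + s[i+2:])
        return out

    def derivable(axiom, depth):
        """All strings derivable from the axiom in at most `depth` rule applications."""
        seen, frontier = {axiom}, deque([(axiom, 0)])
        while frontier:
            s, d = frontier.popleft()
            if d == depth:
                continue
            for t in successors(s):
                if t not in seen:
                    seen.add(t)
                    frontier.append((t, d + 1))
        return seen

    print(sorted(derivable("MI", 3), key=len))

The point of the illustration: every step is a mechanical, externally checkable manipulation of the notation, with nothing for semantic activation to latch onto.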
Ed Zalta (CSLI Stanford) gives a talk at the MCMP Workshop on Computational Metaphysics titled "Toward Leibniz's Goal of a Computational Metaphysics".
Douglas Patterson (Universität Leipzig) gives a talk at the MCMP Colloquium titled "Theory and Concept in Tarski's Philosophy of Language". Abstract: In this talk I will set out some of the background of Tarski's famous work on truth and semantics by looking at important views of his teachers Tadeusz Kotarbinski and Stanislaw Lesniewski in the philosophy of language and the "methodology of deductive sciences". With the understanding of the assumed philosophy of language and logic of the important articles set out in this manner, I will look at a number of issues familiar from the literature. I will sort out Tarski's conception of "material adequacy", discuss the relationship between a Tarskian definition of truth and a conceptual analysis of a more familiar sort, and consider the consequences of the views presented for the question of whether Tarski was a deflationist or a correspondence theorist.
Volker Halbach (Oxford) gives a talk at the MCMP Colloquium titled "The conservativity of truth and the disentanglement of syntax and semantics".
Berit Brogaard (University of Missouri, St. Louis) gives a talk at the MCMP Colloquium titled "Do 'Looks' Reports Reflect the Contents of Perception?".
Alexandru Baltag (ILLC Amsterdam) gives a talk at the MCMP Colloquium titled "Tracking the Truth Requires a Non-wellfounded Prior! A Study in the Learning Power (and Limits) of Bayesian (and Qualitative) Update". Abstract: The talk is about tracking "full truth" in the limit by iterated belief updates. Unlike the talk by Sonja Smets (which focused on finite models), we now allow the initial model (and thus the initial set of epistemic possibilities) to be infinite. We compare the truth-tracking power of various belief-revision methods, including probabilistic conditioning (also known as Bayesian update) and some of its qualitative, "plausibilistic" analogues (conditioning, lexicographic revision, minimal revision). We focus in particular on the question of whether any of these methods is "universal" (i.e. as good at tracking the truth as any other learning method). We show that this is not the case, as long as we keep the standard probabilistic (or belief-revision) setting. On the positive side, we show that if we consider appropriate generalizations of conditioning in a non-standard, non-wellfounded setting, then universality is achieved for some (though not all) of these learning methods. In the qualitative case, this means that we need to allow the prior plausibility relation to be a non-wellfounded (though total) preorder. In the probabilistic case, this means moving to a generalized conditional probability setting, in which the family of "cores" (or "strong beliefs") may be non-wellfounded (when ordered by inclusion or logical entailment). As a consequence, neither the family of classical probability spaces, nor that of lexicographic probability spaces, nor even the family of all countably additive (conditional) probability spaces, is rich enough to make Bayesian conditioning "universal" from a learning-theoretic point of view! This talk is based on joint work with Nina Gierasimczuk and Sonja Smets.
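A toy illustration of the kind of limit at issue (my own minimal example, not the authors' construction): if the prior assigns probability zero to the true hypothesis, standard Bayesian conditioning can never come to track it, no matter how much truthful evidence arrives.

    # Toy illustration (not the authors' construction): Bayesian conditioning
    # cannot track a truth to which the prior assigns zero probability.
    # Hypotheses (illustrative names): h_n = "n ones, then zeros forever",
    # truncated at N for the sketch; the true hypothesis "ones forever"
    # gets prior weight 0.
    N = 50
    prior = {n: 2.0 ** -(n + 1) for n in range(N)}   # geometric prior over h_0..h_{N-1}

    def likelihood(n, t):
        """P(observing a one at time t | h_n): 1 if t < n, else 0."""
        return 1.0 if t < n else 0.0

    posterior = dict(prior)
    for t in range(10):                  # the (true) stream delivers a one at each t
        posterior = {n: p * likelihood(n, t) for n, p in posterior.items()}
        z = sum(posterior.values())
        posterior = {n: p / z for n, p in posterior.items()}
        best = max(posterior, key=posterior.get)
        print(f"after {t + 1} ones: most credible hypothesis is h_{best}")
    # The posterior forever chases ever-larger n and never settles on the truth.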
Richard Pettigrew (University of Bristol) gives a talk at the MCMP Colloquium titled "Accuracy, Chance, and the Principal Principle".
Catarina Dutilh Novaes (ILLC/Amsterdam) gives a talk at the MCMP Colloquium titled "The 'fitting problem' for logical semantic systems". Abstract: When applying logical tools to study a given extra-theoretical, informal phenomenon, it is now customary to design a deductive system as well as a semantic system based on a class of mathematical structures. The assumption seems to be that each captures specific aspects of the target phenomenon. Kreisel has famously offered an argument showing how, if there is a proof of completeness for the deductive system with respect to the semantic system, the target phenomenon becomes 'squeezed' between the extensions of the two, thus ensuring the extensional adequacy of the technical apparatuses with respect to the target phenomenon: the so-called squeezing argument. However, besides a proof of completeness, for the squeezing argument to go through, two premises must obtain (for a fact e occurring within the range of the target phenomenon): (1) if e is the case according to the deductive system, then e is the case according to the target phenomenon; (2) if e is the case according to the target phenomenon, then e is the case according to the semantic system. In other words, the semantic system provides necessary conditions for e to be the case according to the target phenomenon, while the deductive system provides the relevant sufficient conditions. But clearly, both (1) and (2) rely crucially on the intuitive adequacy of the deductive and the semantic systems for the target phenomenon. In my talk, I focus on the (im)plausibility of instances of (2), and argue that the adequacy of a semantic system for a given target phenomenon must not be taken for granted. In particular, I discuss the results presented in (Andrade-Lotero & Dutilh Novaes forthcoming) on multiple semantic systems for Aristotelian syllogistic, which are all sound and complete with respect to a reasonable deductive system for syllogistic (Corcoran's system D), but which are not extensionally equivalent; indeed, as soon as the language is enriched, they start disagreeing with each other as to which syllogistic arguments (in the enriched language) are valid. A plurality of apparently adequate semantic systems for a given target phenomenon brings to the fore what I describe as the 'fitting problem' for logical semantic systems: what is to guarantee that these technical apparatuses adequately capture significant aspects of the target phenomenon? If the different candidates have strikingly different properties (as is the case here), then they cannot all be adequate semantic systems for the target phenomenon. More generally, the analysis illustrates the need for criteria of adequacy for semantic systems based on mathematical structures. Moreover, taking Aristotelian syllogistic as a case study illustrates the fruitfulness but also the complexity of employing logical tools in historical analyses.
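In symbols, writing \vdash_D for derivability in the deductive system, \models_S for validity in the semantic system, and V for the informal target notion, the squeezing argument can be laid out as follows (a standard reconstruction):

    \begin{align*}
      \text{(1)}\quad & \vdash_D e \;\Rightarrow\; e \in V    && \text{(the deductive system gives sufficient conditions)}\\
      \text{(2)}\quad & e \in V \;\Rightarrow\; \models_S e   && \text{(the semantic system gives necessary conditions)}\\
      \text{(3)}\quad & \models_S e \;\Rightarrow\; \vdash_D e && \text{(completeness theorem)}
    \end{align*}

Chaining (1)-(3) makes all three notions coextensive, so V is 'squeezed' between the two formal ones; premise (2) is exactly the step the talk puts under pressure.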
Charles B. Cross (University of Georgia) gives a talk at the MCMP Colloquium titled "Conclusive Reasons, Transmission, and Epistemic Closure".
Neil Tennant (Ohio State University) gives a talk at the MCMP Colloquium titled "Core Logic".
Branden Fitelson (Rutgers University) gives a talk at the MCMP Workshop on Computational Metaphysics titled "Russellian Descriptions & Gibbardian Indicatives (Two Case Studies Involving Automated Reasoning)". Abstract: The first part of this talk (which is joint work with Paul Oppenheimer) will be about the perils of representing claims involving Russellian definite descriptions in an "automated reasoning friendly" way. I will explain how to eliminate Russellian descriptions, so as to yield logically equivalent (and automated reasoning friendly) statements. This is a special case of a more general problem -- which is representing philosophical theories/explications in a way that automated reasoning tools can understand. The second part of the talk shows how automated reasoning tools can be useful in clarifying the structure (and requisite presuppositions) of well-known philosophical "theorems". Here, the example comes from the philosophy of language, and it involves a certain "triviality result" or "collapse theorem" for the indicative conditional that was first discussed by Gibbard. I show how one can use automated reasoning tools to provide a precise, formal rendition of Gibbard's "theorem". This turns out to be rather revealing about what is (and is not) essential to Gibbard's argument.
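For the first part, the elimination in question can be illustrated by the standard Russellian expansion (the textbook form, with existence and uniqueness made explicit):

    % Standard Russellian expansion of a definite description:
    % "the phi is psi" becomes an explicit existence-and-uniqueness claim.
    \psi\bigl(\iota x\,\varphi(x)\bigr)
      \;\equiv\;
      \exists x\,\bigl(\varphi(x)
        \;\land\; \forall y\,(\varphi(y) \rightarrow y = x)
        \;\land\; \psi(x)\bigr)

The perils arise because the expansion is scope-sensitive: under negation or other operators, the wide- and narrow-scope readings come apart, so an automated-reasoning-friendly encoding has to fix the intended scope explicitly.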
Branden Fitelson (Rutgers University) gives a talk at the MCMP Workshop on Bayesian Methods in Philosophy titled "Accuracy & Coherence". Abstract: In this talk, I will explore a new way of thinking about the relationship between accuracy norms and coherence norms in epistemology (generally). In the first part of the talk, I will apply the basic ideas to qualitative judgments (belief and disbelief). This will lead to an interesting coherence norm for qualitative judgments (but one which is weaker than classical deductive consistency). In the second part of the talk, I will explain how the approach can be applied to comparative confidence judgments. Again, this will lead to coherence norms that are weaker than classical (comparative probabilistic) coherence norms. Along the way, I will explain how evidential norms can come into conflict with even the weaker coherence norms suggested by our approach.
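As a toy rendering of the accuracy-dominance idea behind such norms (a sketch under simple scoring assumptions of my own, not Fitelson's actual framework): score each attitude for inaccuracy at each world, and flag judgment sets that some rival weakly beats at every world and strictly beats at one.

    from itertools import product

    # Toy accuracy-dominance check for qualitative judgments (my own simple
    # 0/1-style score, not Fitelson's actual scoring rule).
    # Agenda: a proposition p and its negation; worlds: p true, p false.
    ATTITUDES = ["believe", "disbelieve", "suspend"]

    def inaccuracy(attitude, truth_value):
        if attitude == "believe":
            return 0 if truth_value else 1
        if attitude == "disbelieve":
            return 1 if truth_value else 0
        return 0.4   # suspending; any value below 0.5 makes contradiction dominated

    def total(judgments, world):
        # judgments = (attitude toward p, attitude toward not-p); world = truth of p
        return inaccuracy(judgments[0], world) + inaccuracy(judgments[1], not world)

    candidates = list(product(ATTITUDES, repeat=2))
    for j in candidates:
        dominated = any(
            all(total(k, w) <= total(j, w) for w in (True, False)) and
            any(total(k, w) < total(j, w) for w in (True, False))
            for k in candidates if k != j
        )
        if dominated:
            print(f"{j} is accuracy-dominated (incoherent on this toy norm)")

On this toy score, only believing both p and its negation, or disbelieving both, comes out dominated, which illustrates how an accuracy-driven coherence norm can be weaker than full deductive consistency.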
Niki Pfeifer (MCMP/LMU) gives a talk at the MCMP Workshop on Bayesian Methods in Philosophy titled "Applying coherence based probability logic to philosophical problems".
Hannes Leitgeb (MCMP/LMU) gives a talk at the MCMP Workshop on Bayesian Methods in Philosophy titled "The Lockean Thesis Revisited".
Charles B. Cross (University of Georgia) gives a talk at the MCMP Workshop on Bayesian Methods in Philosophy titled "Knowledge about Probability in the Monty Hall Problem".
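The announcement gives no abstract, but the puzzle named in the title is easy to check numerically; a standard simulation sketch (not from the talk):

    import random

    # Standard Monty Hall simulation (not from the talk): the host, who knows
    # where the prize is, always opens an unchosen, prize-free door.
    def play(switch, trials=100_000):
        wins = 0
        for _ in range(trials):
            prize = random.randrange(3)
            pick = random.randrange(3)
            opened = random.choice([d for d in range(3) if d != pick and d != prize])
            if switch:
                pick = next(d for d in range(3) if d != pick and d != opened)
            wins += (pick == prize)
        return wins / trials

    print("stay:  ", play(switch=False))   # ~ 1/3
    print("switch:", play(switch=True))    # ~ 2/3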
Peter Schroeder-Heister (Tübingen) gives a two-part talk at the MCMP Colloquium: "Proof-theoretic semantics and the format of deductive reasoning" (first part) and "Prawitz's completeness conjecture (a sketch of some ideas)" (second part).
Richard Bradley (LSE) gives a talk at the MCMP Colloquium titled "Conditionals and Suppositions". Abstract: Adams' Thesis - the claim that the probabilities of indicative conditionals equal the conditional probabilities of their consequents given their antecedents - has proven impossible to accommodate within orthodox possible worlds semantics. This paper considers the approaches to the problem taken by Jeffrey and Stalnaker (1994) and by McGee (1989), but rejects them on the grounds that they imply a false principle, namely that the probability of a conditional is independent of any proposition inconsistent with its antecedent. Instead it is proposed that the semantic contents of conditionals be treated as sets of vectors of worlds, not worlds, where each co-ordinate of a vector specifies the world that is or would be true under some supposition. It is shown that this treatment implies the truth of Adams' Thesis whenever the mode of supposition is evidential.
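In symbols (my rendering of the two claims stated in the abstract), with A -> B the indicative conditional:

    \begin{align*}
      \text{(Adams' Thesis)}\quad & P(A \rightarrow B) = P(B \mid A),
        \quad \text{provided } P(A) > 0;\\
      \text{(Independence)}\quad  & P(A \rightarrow B \mid X) = P(A \rightarrow B)
        \quad \text{for any } X \text{ inconsistent with } A,\ P(X) > 0.
    \end{align*}

The paper's charge is that the Jeffrey-Stalnaker and McGee approaches validate (Independence), which it argues is false.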
Alistair M. C. Isaac (University of Michigan) gives a talk at the MCMP Colloquium titled "Diachronic Dutch Book Arguments for Forgetful Agents". Abstract: I present a general strategy for applying diachronic Dutch book arguments to bounded agents, with particular focus on forgetful agents. Dutch book arguments were introduced by subjectivists about probability to test the consistency of synchronic epistemic norms. Diachronic Dutch book arguments (DDBs) apply this technique to test the consistency of diachronic epistemic norms, norms about how beliefs change in time. Examples like forgetfulness have led some to doubt the relevance of DDBs for evaluating diachronic norms. I argue that there is no problem in applying DDBs to formally specified decision problems involving forgetfulness. The real worry here is whether these formal problems capture the relevant details of real world decision-making situations. I suggest some general criteria for making this assessment and defend the formalization of decision problems involving bounded agents, and their investigation via DDBs, as essential tools for evaluating epistemic norms.
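A standard toy version of such a book against a foreseeably forgetful agent (a generic illustration, not necessarily the example discussed in the talk): suppose the agent is certain of E at t1 but will predictably have forgotten by t2, when her credence in E drops to 1/2.

    % A toy diachronic Dutch book against foreseeable forgetting.
    % Each transaction is fair by the agent's credences at the time it is made.
    \begin{align*}
      t_1:\ & P_1(E) = 1; \text{ she buys a bet paying } 1 \text{ if } E
              \text{ at its fair price } 1.\\
      t_2:\ & P_2(E) = \tfrac{1}{2}; \text{ she sells the bet back at its new fair price }
              \tfrac{1}{2}.\\
            & \text{Net: } -1 + \tfrac{1}{2} = -\tfrac{1}{2}
              \text{ whether or not } E \text{ obtains: a guaranteed loss.}
    \end{align*}

Whether such a formal sure loss indicts the agent's norms or merely her memory is precisely the interpretive question the talk addresses.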
Tomasz Placek (Jagiellonian University, Kraków) gives a talk at the MCMP Colloquium titled "Possibilities without possible worlds/histories". Abstract: Possible worlds have turned out to be a particularly useful tool of modal metaphysics, although their globality makes them philosophically suspect. Hence, it would be desirable to arrive at some local modal notions that could be used instead of possible worlds. In this talk I will focus on what are known as historical (or real) modalities, an example of which is tomorrow's sea-battle. The modalities involved in this example are local since they refer to relatively small chunks of our world: a gathering of inimical fleets in a nearby bay has two alternative possible future continuations, one with a sea-battle and the other without. The objective of this talk is to sketch a theory of such modalities that is framed in terms of possible continuations rather than possible worlds or possible histories. The proposal will be tested as a semantic theory for a language with historical modalities, tenses, and indexicals.
Hannes Leitgeb (MCMP/LMU) gives a lecture at the Carl-Friedrich-von-Siemens-Stiftung titled "Logic and the Brain". Introductory words by Enno Aufderheide (Secretary General, Humboldt Foundation).
Matthew Braham (University of Bayreuth) gives a talk at the MCMP Colloquium titled "The Contradiction in Will Test: A Reconstruction".
Sonja Smets (University of Groningen) gives a talk at the MCMP Colloquium titled "Belief Dynamics under Iterated Revision: Cycles, Fixed Points and Truth-tracking". Abstract: We investigate the long-term behavior of processes of learning by iterated belief-revision with new truthful information. In the case of higher-order doxastic sentences, the iterated revision can even be induced by repeated learning of the same sentence (which conveys new truths at each stage by referring to the agent's own current beliefs at that stage). For a number of belief-revision methods (conditioning, lexicographic revision and minimal revision), we investigate the conditions under which iterated belief revision with truthful information stabilizes: while the process of model-changing by iterated conditioning always leads eventually to a fixed point (and hence all doxastic attitudes, including conditional beliefs, strong beliefs, and any form of "knowledge", eventually stabilize), this is not the case for other belief-revision methods. We show that infinite revision cycles exist (even when the initial model is finite, and even in the case of repeated revision with a single true sentence), but we also give syntactic and semantic conditions ensuring that beliefs stabilize in the limit. Finally, we look at the issue of convergence to truth, giving both sufficient conditions ensuring that revision stabilizes on true beliefs, and (stronger) conditions ensuring that the process stabilizes on "full truth" (i.e. beliefs that are both true and complete). This talk is based on joint work with A. Baltag.
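A minimal sketch of the contrast's easy half (finite and purely propositional, so it omits the higher-order sentences that generate the cycles): conditioning only ever deletes worlds, so iterated truthful conditioning on a finite model must reach a fixed point.

    # Minimal sketch of iterated conditioning on a finite model (purely
    # propositional; the higher-order sentences that drive the cycles in the
    # talk are not representable here). Worlds are bit-pairs (p, q);
    # conditioning deletes the worlds where the learned sentence is false,
    # so the model shrinks monotonically and must hit a fixed point.
    worlds = {(p, q) for p in (0, 1) for q in (0, 1)}
    actual = (1, 1)                         # every learned sentence is true here

    def condition(model, sentence):
        return {w for w in model if sentence(w)}

    stream = [lambda w: w[0] == 1,          # learn p
              lambda w: w[0] == w[1],       # learn p <-> q
              lambda w: w[0] == 1]          # learn p again: a fixed point
    model = worlds
    for i, s in enumerate(stream, 1):
        new = condition(model, s)
        print(f"step {i}: {sorted(model)} -> {sorted(new)}",
              "(fixed point)" if new == model else "")
        model = new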
Once again, a candidate nominated by Ludwig-Maximilians-Universität (LMU) München has been awarded one of the coveted Alexander von Humboldt Professorships. The philosopher and mathematician Hannes Leitgeb, Professor of Mathematical Logic and Philosophy of Mathematics at the University of Bristol (UK), was selected to receive the accolade by an expert committee set up by the Humboldt Foundation. The prize, which is worth 5 million euros, is financed by the Federal Ministry for Education and Research, and is the most richly endowed award of its kind in Germany.

Leitgeb is one of the leading proponents of an approach to problems in logic, philosophy and the foundations of the scientific method that exploits insights from both philosophical analyses and mathematical theories of provability. In effect, he formulates philosophical questions as precisely posed mathematical propositions, allowing him not only to come up with solutions, but also to explain them with the utmost clarity. Hannes Leitgeb becomes LMU's third Humboldt Professor, joining Ulrike Gaul (Systems Biology) and Georgi Dvali (Astrophysics).

Leitgeb is one of the most prominent scholars worldwide who tackle analytical philosophy and the cognitive sciences with the help of mathematical logic. This multi-pronged approach is motivated by the conviction that philosophical investigations can best be advanced if their fundamental assumptions can be recast as mathematical models that make them more transparent and simpler to describe. As a Humboldt Professor at LMU, Leitgeb will provide the basis for the planned Munich Center for Mathematical Philosophy, Language and Cognition, in which postgraduate and postdoctoral students in the fields of Philosophy, Logic and Mathematics will work together on common problems. The new Center will also collaborate with the Munich Center for Neuroscience, Brain and Mind (MCN). This institution was established in 2007 as the result of an internal competition (LMUinnovativ) to identify innovative ways of tackling questions related to the mind-brain problem. Its members draw on the whole spectrum of disciplines relevant to the neurosciences, from molecular biology, through systemic neurobiology, psychology and neurology, to philosophy. By fostering cooperation between widely diverse areas of study, the two Centers hope to make internationally significant contributions to theoretical and empirical brain sciences.

Hannes Leitgeb's interdisciplinary orientation will help further sharpen the profile of LMU's Faculty of Philosophy by renewing its long-standing focus on the intersection between philosophy, logic and the foundations of science, which is closely associated with the work of Wolfgang Stegmüller, and by giving this focus a future-oriented and internationally visible impetus.

Leitgeb first forged a firm link between philosophical logic and the cognitive sciences in his book “Inference on the Low Level. An Investigation into Deduction, Nonmonotonic Reasoning, and the Philosophy of Cognition”. Here he showed that, under certain circumstances, state transitions in neural networks can be understood as simple ‘if ... then’ inferences. These in turn are known to follow laws governing the behaviour of logical systems that have emerged from studies in the philosophy of language and in theoretical computer science. Leitgeb is currently working on a monograph devoted to Rudolf Carnap’s “The Logical Structure of the World”. He hopes to give this classic text a new lease of life by highlighting the relevance of Carnap’s insights for modern scientific research. One of the aims of this latest endeavour is to discover how to transform theoretical scientific models into propositions framed in terms of our immediate sensory perceptions. To this end, Leitgeb is developing a theory of probability that permits valid inferences about systems which are themselves capable of generating statements about their own probability.
Roland Poellinger (MCMP/LMU) gives a talk at the MCMP Colloquium (14 May, 2014) titled "The Mind-Brain Entanglement". Abstract: In "The Nonreductivist’s Troubles with Mental Causation" (1993), Jaegwon Kim suggested that, if we want to uphold the closure of the physical and reject causal overdetermination at the same time, the only remaining alternatives are the eliminativist’s standpoint or a plain denial of the mind’s causal powers. Nevertheless, explaining stock market trends by referring to investors’ fear of loss is a very familiar example of attributing reality to both domains and acknowledging the mind’s interaction with the world: "if you pick a physical event and trace its causal ancestry or posterity, you may run into mental events" (Kim 1993). In this talk I will use the formal framework of Bayes net causal models in an interventionist understanding (as devised, e.g., by Judea Pearl in "Causality", 2000) to make the concept of causal influence precise. Investigating structurally similar cases of conflicting causal intuitions will motivate a natural extension of the interventionist Bayes net framework, Causal Knowledge Patterns, in which our intuition that the mind makes a difference finds an expression.
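A minimal sketch of the interventionist reading the talk presupposes (standard Pearl-style graph surgery with made-up numbers; the extension to Causal Knowledge Patterns is not reproduced here): setting a variable cuts the edges into it, which is what separates P(y | do(x)) from P(y | x).

    # Minimal Pearl-style intervention sketch (standard surgery, illustrative
    # numbers): Z confounds X and Y, so observing X = 1 and *setting* X = 1
    # come apart.
    pZ = 0.5
    pX_given = {0: 0.1, 1: 0.9}            # P(X=1 | Z=z)
    pY_given = {(0, 0): 0.1, (0, 1): 0.5,  # P(Y=1 | Z=z, X=x)
                (1, 0): 0.4, (1, 1): 0.8}

    def joint(z, x, y, do_x=None):
        p = pZ if z else 1 - pZ
        if do_x is None:
            p *= pX_given[z] if x else 1 - pX_given[z]
        elif x != do_x:                    # surgery: X's mechanism replaced by the set value
            return 0.0
        p *= pY_given[(z, x)] if y else 1 - pY_given[(z, x)]
        return p

    def prob(y, x, do=False):
        num = sum(joint(z, x, y, do_x=x if do else None) for z in (0, 1))
        den = sum(joint(z, x, yy, do_x=x if do else None)
                  for z in (0, 1) for yy in (0, 1))
        return num / den

    print("P(Y=1 | X=1)     =", round(prob(1, 1), 3))           # 0.77: Z is informative
    print("P(Y=1 | do(X=1)) =", round(prob(1, 1, do=True), 3))  # 0.65: Z averaged out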
Stephan Hartmann is regarded as one of the leading scholars in the fields of formal epistemology and philosophy of science, and he became a member of the Faculty of Philosophy at LMU last October. Later today (8 May, 2013), at a ceremony in Berlin, Hartmann will officially receive Germany’s most generously endowed prize for distinguished contributions to research, the Alexander von Humboldt Professorship, which brought him back to the land of his birth. Hartmann now holds the Chair of Philosophy of Science at LMU's Munich Center for Mathematical Philosophy (MCMP) and, together with his colleague Hannes Leitgeb, who occupies the Chair of Logic and Philosophy of Language and also holds a Humboldt Professorship, he is actively engaged in extending the interdisciplinary reach of his subject in often surprising directions. The basic goal of the MCMP is to apply advanced mathematical methods to a range of complex philosophical problems. Born in 1968, Hartmann studied Philosophy and Physics, and has held professorships at the London School of Economics and Political Science (LSE) and, prior to his move to LMU, at Tilburg University in the Netherlands, where he served as Founding Director of the Tilburg Center for Logic and Philosophy of Science. Hartmann is the fourth Humboldt Professor at LMU. The honor had previously been accorded to systems biologist Ulrike Gaul, astrophysicist Georgi Dvali and Hannes Leitgeb. The prestigious awards, administered by the Alexander von Humboldt Foundation and financed by the Federal Ministry for Research, are intended to enable internationally recognized scholars and scientists to carry out long-term, groundbreaking projects at research institutions and universities in Germany. (LMU press release, Munich, 8 May 2013)
The philosopher Hannes Leitgeb asks by which logical rules the brain operates.
Roland Poellinger (MCMP/LMU Munich) gives a talk at the MCMP Workshop on Computational Metaphysics titled "Computing Non-Causal Knowledge for Causal Reasoning". Abstract: We use logical and mathematical knowledge to generate causal claims. Inter-definitions or semantic overlap cannot be consistently embedded in standard Bayes net causal models since in many cases the Markov requirement will be violated. These considerations motivate an extension of Bayes net causal models to also allow for the embedding of Epistemic Contours (ECs). Such non-causal functions are consistently computable in Causal Knowledge Patterns (CKPs). An application of the framework can be found, e.g., in the recording of the talk "The Mind-Brain Entanglement".
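A toy numerical check of the kind of Markov violation at issue (my own example, not from the talk): let A cause B, and let C be defined as "A and B". If C is drawn as a child of A alone, the Markov condition requires C to be independent of B given A, which the definition breaks.

    # Toy illustration (not from the talk): a definitional node breaks the
    # Markov condition. A causes B; C is *defined* as "A and B" (semantic
    # overlap, not a further cause). Illustrative numbers.
    pA = 0.5
    pB_given_A = {0: 0.2, 1: 0.7}   # P(B=1 | A=a)

    def p(a, b):
        """Joint P(A=a, B=b) from the causal chain A -> B."""
        pa = pA if a else 1 - pA
        pb = pB_given_A[a] if b else 1 - pB_given_A[a]
        return pa * pb

    # If the graph gives C the single parent A, Markov demands
    # P(C | A, B) = P(C | A); check it at A = 1:
    for b in (0, 1):
        pc1 = (p(1, b) if b == 1 else 0.0) / p(1, b)
        print(f"P(C=1 | A=1, B={b}) = {pc1}")
    # Output: 0.0 and 1.0, so C is not screened off from B by its 'parent' A:
    # naively embedding the definition violates the Markov requirement.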