Mathematical Philosophy - the application of logical and mathematical methods in philosophy - is about to experience a tremendous boom in various areas of philosophy. At the new Munich Center for Mathematical Philosophy, which is funded mostly by the German Alexander von Humboldt Foundation, philoso…
Catarina Dutilh Novaes (Groningen) gives a talk at the MCMP Colloquium (24 January, 2013) titled "Reasoning biases and non-monotonic logics". Abstract: Stenning and van Lambalgen (2008) have argued that much of what is described in the psychology of reasoning literature as 'reasoning biases' can more accurately be accounted for by means of the concept of defeasible, non-monotonic reasoning. They rely on the AI framework of closed-world reasoning as the formal background for their investigations. In my talk, I continue the project of reassessing reasoning biases from a non-monotonic point of view, but use instead the semantic approach to non-monotonic logics presented in Shoham (1987), known as preferential semantics. I focus in particular on the so-called belief-bias effect and the Modus Ponens-Modus Tollens asymmetry. The ease with which these reasoning patterns are accounted for from a defeasible reasoning point of view lends support to the claim that (untrained) human reasoning has a strong component of defeasibility. I conclude with some remarks on Marr's 'three levels of analysis' and the role of formal frameworks for the empirical investigation of human reasoning.
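As background, Shoham-style preferential semantics can be stated compactly (a standard formulation, not the speaker's own notation): a preferential model adds a strict partial order $\prec$ that ranks interpretations by normality, and defeasible consequence is truth in all most-normal models of the premises:

\[ A \mathrel{|\!\sim} B \quad \text{iff every } \prec\text{-minimal model of } A \text{ satisfies } B. \]

Monotonicity fails because adding a premise can change which models count as minimal, which is exactly the behaviour needed to model defeasible inference.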
Jake Chandler (Leuven) gives a talk at the MCMP Colloquium (6 February, 2013) titled "Reasons to Believe and Reasons to not." Abstract: The provision of a precise, formal treatment of the relation of evidential relevance, i.e. of providing a reason to hold or to withhold a belief, has arguably constituted the principal selling point of Bayesian modeling in contemporary epistemology and philosophy of science. By the same token, the lack of an analogous proposal in so-called AGM belief revision theory, a powerful and elegant qualitative alternative to the Bayesian framework, is likely to have significantly contributed to its relatively marginal status in the philosophical mainstream. In the present talk, I sketch out a corrective to this deficiency, offering a suggestion, within the context of belief revision theory, concerning the relation between beliefs about evidential relevance and commitments to certain policies of belief change. Aside from shedding light on the status of various important evidential 'transmission' principles, this proposal also constitutes a promising basis for the elaboration of a logic of so-called epistemic defeaters.
Benjamin Bewersdorf (Konstanz) gives a talk at the MCMP Colloquium (23 January, 2013) titled "Garber and Field Conditionalization". Abstract: The most influential formal account of rational belief change is Jeffrey conditionalization. Given a plausible assumption about the role of experiences in belief change, Jeffrey conditionalization turns out to be incomplete. Field tried to complete Jeffrey conditionalization by adding an input law to it. But, as Garber has pointed out, the resulting theory has a serious weakness. In the following, I will both generalize Garber's objection against Field's account and show how Field's account can be modified to avoid it.
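For reference, Jeffrey conditionalization in its standard form (background, not from the abstract): where experience fixes new probabilities $q_i$ over a partition $\{E_i\}$, the new credence in any proposition $A$ is

\[ P_{\text{new}}(A) = \sum_i q_i \, P_{\text{old}}(A \mid E_i). \]

Classical conditionalization is the special case in which some $q_i = 1$; Field's input law is meant to supply the $q_i$ from the character of the experience itself.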
Veli Mitova (Vienna) gives a talk at the MCMP Colloquium (6 June, 2013) titled "How to be a truthy psychologist about evidence". Abstract: I defend the view that the only things that count as evidence for belief are factive tokens of psychological states. I first assume that the evidence for p can sometimes be a good reason to believe that p. I then argue, with some help from metaethics 101, that a reason is a beast of two burdens: it must be capable of being both a good reason and a motive. I then show that truthy psychologism is the only position that can honour The Beast of Two Burdens Thesis, without ruffling our pre-101 intuitions about good reasons, motives, and explanations.
Michael R. Waldmann (Göttingen) gives a talk at the MCMP Colloquium (23 April, 2014) titled "Structure Induction in Diagnostic Causal Reasoning". Abstract: Our research examines the normative and descriptive adequacy of alternative computational models of diagnostic reasoning from single effects to single causes. Many theories of diagnostic reasoning are based on the normative assumption that inferences from an effect to its cause should reflect solely the empirically observed conditional probability of cause given effect. We argue against this assumption, as it neglects alternative causal structures that may have generated the sample data. Our structure induction model of diagnostic reasoning takes into account the uncertainty regarding the underlying causal structure. A key prediction of the model is that diagnostic judgments should not only reflect the empirical probability of cause given effect but should also depend on the reasoner’s beliefs about the existence and strength of the link between cause and effect. We confirmed this prediction in two studies and showed that our theory better accounts for human judgments than alternative theories of diagnostic reasoning. Overall, our findings support the view that in diagnostic reasoning people go “beyond the information given” and use the available data to make inferences on the (unobserved) causal, rather than on the (observed) data level.
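The contrast at issue can be put compactly (my notation, a sketch of the idea rather than the authors' exact model): a purely empirical account identifies the diagnostic judgment with the observed conditional probability $P(c \mid e)$, whereas a structure-sensitive account averages over candidate causal structures $S$ in the light of the data $D$:

\[ P(c \mid e, D) = \sum_{S} P(c \mid e, S, D)\, P(S \mid D), \]

so that uncertainty about whether a cause-effect link exists at all, encoded in $P(S \mid D)$, modulates the diagnostic judgment.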
Anna Leuschner (KIT) gives a talk at the MCMP Colloquium (9 April, 2014) titled "Epistemically Detrimental Dissent and the Milian Argument against the Freedom of Inquiry". Abstract: I'll present joint work with Justin Biddle. The idea of epistemically problematic dissent is counterintuitive at first glance; as Mill argues, even misguided dissent from a consensus position can be epistemically fruitful, as it can lead to a deeper understanding of consensus positions. Yet, focusing on climate science, we argue that dissent can be epistemically problematic when it leads to a distortion of risk assessment in mainstream science. I'll examine the conditions under which dissent in science is epistemically detrimental, provide empirical support for this finding, and conclude with a discussion of the normative consequences of these findings by considering Philip Kitcher's "Millian argument against the freedom of inquiry".
Vincenzo Crupi (Turin) gives a talk at the MCMP Colloquium (16 April, 2014) titled "Models of rationality and the psychology of reasoning (From is to ought, and back)". Abstract: Diagnoses of (ir)rationality often arise from the experimental investigation of human reasoning. Relying on joint work with Vittorio Girotto, I will suggest that such diagnoses can be disputed on various grounds, and provide a classification. I will then argue that much fruitful research done with classical experimental paradigms was triggered by normative concerns and yet fostered insight in properly psychological terms. My examples include the selection task, the conjunction fallacy, and so-called pseudodiagnosticity. Conclusion: normative considerations retain a constructive role for the psychology of reasoning, contrary to recent complaints in the literature.
Jason McKenzie Alexander (LSE) gives a talk at the MCMP Colloquium (30 April, 2014) titled "Epistemic Landscapes and Optimal Search". Abstract: In a paper from 2009, Michael Weisberg and Ryan Muldoon argue that there exist epistemic reasons for the division of cognitive labour. In particular, they claim that a heterogeneous population of agents, where people use a variety of socially responsive search rules, proves more capable at exploring an "epistemic landscape" than a homogeneous population. We show, through a combination of analytic and simulation results, that this claim is not true, and identify why Weisberg and Muldoon obtained the results they did. We then show that, in the case of arguably more "realistic" landscapes, based on Kauffman's NK-model of "tunably rugged" fitness landscapes, social learning frequently provides no epistemic benefit whatsoever. Although there surely are good epistemic reasons for the division of cognitive labour, we conclude that Weisberg and Muldoon did not show that "a polymorphic population of research strategies thus seems to be the optimal way to divide cognitive labor".
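For readers unfamiliar with Kauffman's NK model, here is a minimal sketch of a "tunably rugged" fitness landscape (an illustration under simplifying assumptions of my own, not the authors' code):

```python
import itertools, random

def make_nk_landscape(N, K, seed=0):
    """Fitness of an N-bit string: mean of N locus contributions,
    each depending on the locus and its K right-hand neighbours."""
    rng = random.Random(seed)
    # One lookup table per locus: (K+1)-bit neighbourhood -> random value.
    tables = [{bits: rng.random() for bits in itertools.product((0, 1), repeat=K + 1)}
              for _ in range(N)]

    def fitness(genotype):
        total = 0.0
        for i in range(N):
            neighbourhood = tuple(genotype[(i + j) % N] for j in range(K + 1))
            total += tables[i][neighbourhood]
        return total / N

    return fitness

f = make_nk_landscape(N=10, K=2)
print(f((0, 1, 0, 0, 1, 1, 0, 1, 0, 1)))  # fitness value in [0, 1]
```

Raising K increases epistasis and hence ruggedness: at K = 0 the landscape has a single peak, while at K = N - 1 fitness values at neighbouring genotypes are effectively uncorrelated.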
Gregory Wheeler (MCMP) gives a talk at the MCMP Colloquium (25 June, 2014) titled "Fast, Frugal and Focused: When less information leads to better decisions". Abstract: People frequently do not abide by the total evidence norm of classical Bayesian rationality but instead use just a few items of information among the many available to them. Gerd Gigerenzer and colleagues have famously shown that decision-making with less information often leads to objectively better outcomes, which raises an intriguing normative question: if we could say precisely under what circumstances this "less is more" effect occurs, we conceivably could say when people should reason the Fast and Frugal way rather than the classical Bayesian way. In this talk I report on results from joint work with Konstantinos Katsikopoulos that resolve a puzzle in the mathematical psychology literature over attempts to explain the conditions responsible for this "less is more" effect. What is more, there is a surprisingly deep connection between the "less is more" effect and coherentist justification. In short, the conditions that are good for coherentism are lousy for single-reason strategies, and vice versa.
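As an illustration of the single-reason strategies at issue, here is a minimal sketch of a Gigerenzer-style Take The Best comparison (the cue names and data are hypothetical):

```python
def take_the_best(cues_a, cues_b, cue_order):
    """Compare two options on cues ordered by validity; decide on the
    first cue that discriminates, ignoring all remaining information."""
    for cue in cue_order:
        a, b = cues_a[cue], cues_b[cue]
        if a != b:
            return "A" if a > b else "B"
    return None  # no cue discriminates: guess

# Hypothetical city-size cues (1 = has the property, 0 = lacks it).
city_a = {"capital": 0, "team": 1, "airport": 1}
city_b = {"capital": 0, "team": 0, "airport": 1}
print(take_the_best(city_a, city_b, ["capital", "team", "airport"]))  # -> "A"
```

The decision rests on a single discriminating cue; the "less is more" question is when this lexicographic stopping rule outperforms weighing all of the available evidence.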
Jon Williamson (Kent) gives a talk at the MCMP Colloquium (8 October, 2014) titled "The Principal Principle implies the Principle of Indifference". Abstract: I'll argue that David Lewis' Principal Principle implies a version of the Principle of Indifference. The same is true for similar principles which need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism. One might try to avoid this conclusion by disavowing the link between conditional beliefs and conditional probabilities that is almost universally endorsed by Bayesians. I'll explain why this move offers no succour to the subjectivist.
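For reference, the Principal Principle in its usual formulation (standard notation, not taken from the talk): where $C$ is a rational initial credence function, $\mathit{ch}_t(A) = x$ says that the objective chance of $A$ at time $t$ is $x$, and $E$ is any admissible evidence,

\[ C(A \mid \mathit{ch}_t(A) = x \wedge E) = x. \]

Williamson's claim is that principles of this form, through their appeal to admissibility, impose indifference-like constraints on $C$.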
Greg Gandenberger (Pittsburgh) gives a talk at the MCMP Colloquium (22 April, 2015) titled "New Responses to Some Purported Counterexamples to Likelihoodist Principles". Abstract: The Likelihood Principle is important because the frequentist statistical methods that are most commonly used in science violate it, while rival likelihoodist and Bayesian methods do not. It is supported by a variety of arguments, including several proofs from intuitively plausible axioms. It also faces many objections, including several purported counterexamples. In this talk, I provide new responses to four purported counterexamples to the Likelihood Principle and its near-corollary the Law of Likelihood that are not adequately addressed in the existing literature. I first respond to examples due to Fitelson and Titelbaum that I argue are adequately addressed by restricting the Law of Likelihood to mutually exclusive hypotheses. I then respond to two counterexamples from the statistical literature. My responses to these latter examples are novel in that they do not appeal to prior probabilities, which is important for attempts to use the Likelihood Principle to provide an argument for Bayesian approaches that does not presuppose the permissibility of using prior probabilities in science.
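For orientation, the two principles in their standard formulations (background, not the speaker's wording): the Law of Likelihood says that evidence $E$ favors $H_1$ over $H_2$ just in case $P(E \mid H_1) > P(E \mid H_2)$, with the likelihood ratio $P(E \mid H_1)/P(E \mid H_2)$ measuring the degree of favoring; the Likelihood Principle says, roughly, that the evidential import of $E$ with respect to a set of hypotheses is exhausted by the likelihood function $P(E \mid \cdot)$.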
Kevin Zollman (CMU) gives a talk at the Conference on Agent-Based Modeling in Philosophy (11-13 December, 2014) titled "The Formation of Epistemic Networks". Abstract: One important area of study for social epistemology is the social structure of epistemic groups -- who communicates their knowledge with whom? Significant research has been done on better and worse communication networks, but less has been done on how a group comes to have one network or another. In this talk, I will present a number of results (some recent) from economics and philosophy about how individuals choose with whom to communicate. Understanding how individuals decide where to gain information can help us to design institutions that lead to epistemically more reliable groups.
Elke Brendel (Bonn) gives a talk at the MCMP Colloquium (17 June, 2015) titled "Disagreement and Epistemic Modality". Abstract: Intuitively, the truth conditions of sentences with epistemic modals, such as "It might be that p", depend on what is epistemically known by a speaker or some contextually relevant group. That is why a contextualist account of epistemic modals seems to provide an adequate semantics for epistemic modal claims. However, contextualism has difficulty accounting for the intuition of disagreement about epistemic modal claims: if A claims (according to his knowledge) "It might be that p" and B claims (according to her knowledge) "It cannot be that p", A and B seem to disagree. They are not merely talking past each other. In my talk, I will first explore the notion of disagreement and present some necessary conditions for two parties disagreeing with each other. Second, I will critically examine the prospects of contextualist semantics as well as truth-relativist accounts in dealing with disagreement about epistemic modal claims. I will finally analyze arguments in favor of an invariantist theory of epistemic modality.
Ben Levinstein (Oxford) gives a talk at the MCMP Colloquium (6 May, 2015) titled "A Pragmatic Vindication of Epistemic Utility Theory". Abstract: Traditionally, probabilism and other norms on partial belief have been motivated from a pragmatic point of view. For instance, as Frank Ramsey long ago showed, if you're probabilistically incoherent, then you're subject to a set of bets each of which you consider fair but which are jointly guaranteed to result in a net loss. Since Joyce's seminal 1998 paper, some epistemologists have shifted course and have tried to establish norms on epistemic states without any recourse to practical rationality. I use a theorem from Schervish to bridge the gap between these two approaches. We can either take standard measures of accuracy to be formalizations of purely epistemic value, or we can generate them from what are at base practical foundations. Even if we opt for this latter approach, I show we can mostly cordon off the epistemic from the practical while ultimately grounding epistemic norms in purely practical rationality.
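For background, the Brier score is the standard accuracy measure in this literature: for credence $c \in [0,1]$ in a proposition that turns out true ($\omega = 1$) or false ($\omega = 0$),

\[ \mathrm{Brier}(c, \omega) = (c - \omega)^2, \]

and a scoring rule is strictly proper when an agent uniquely minimizes her expected score by reporting her actual credence. Schervish's theorem, roughly, represents every proper scoring rule as a weighted mixture of simple practical decision problems, which is the kind of bridge between accuracy and practical stakes that the talk exploits.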
Denis Bonnay (Paris Ouest/IHPST) gives a talk at the MCMP Colloquium (30 April, 2015) titled "An Axiomatization of Individual and Social Updates". Abstract: In this talk, I will consider update rules, which an agent may follow in order to update her subjective probabilities and take into account new information she receives. I will consider two different situations in which this may happen: (1) individual updates: when an agent learns the probability for a particular event to have a certain value; (2) social updates: when an agent learns the probability another agent gives to a particular event. Jeffrey conditioning and weighted averaging are two famous update rules, for individual and social situations respectively. I will show that both can be axiomatized by means of one and the same invariance principle, related to Carnap's use of invariance in his work on probabilities.
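In standard form (my notation): weighted averaging, the social rule mentioned here, sets the updated credence to a convex combination of one's own prior opinion and the other agent's,

\[ P_{\text{new}}(A) = \lambda\, P_{\text{self}}(A) + (1 - \lambda)\, P_{\text{other}}(A), \qquad \lambda \in [0,1], \]

while Jeffrey conditioning is the partition-based rule stated in the Bewersdorf entry above.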
Branden Fitelson (Rutgers) gives a talk at the MCMP Colloquium (16 July, 2014) titled "Coherence".
Scott Page (Michigan) gives a talk at the Conference on Agent-Based Modeling in Philosophy (11-13 December, 2014) titled "Collective Accuracy: Agent Based & Emergent vs Statistical and Assumed". Abstract: In this talk, I describe two broad classes of models that can explain collective accuracy, more commonly referred to as the wisdom of crowds. The first model is based on statistical/law of large numbers logic. Accuracy emerges from the cancellation of random errors. The second model has roots in computer science and psychology. It assumes that predictions come from models. Different predictions arise because of different models. I then describe how, in agent-based models, the amount of model diversity, and therefore the accuracy of the collective, emerges. It is possible to write difference equations that explain average diversity levels. The talk will summarize papers written with Lu Hong, Maria Riolo, PJ Lamberson, and Evan Economo.
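The statistical logic behind the first class of models has a compact expression in Page's Diversity Prediction Theorem: for individual predictions $s_1, \dots, s_n$, crowd average $\bar{s}$, and true value $\theta$,

\[ (\bar{s} - \theta)^2 = \frac{1}{n}\sum_{i=1}^{n} (s_i - \theta)^2 - \frac{1}{n}\sum_{i=1}^{n} (s_i - \bar{s})^2, \]

i.e. collective error equals average individual error minus prediction diversity, so a diverse crowd's average prediction is never less accurate than its typical member's.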
Michael Weisberg (Pennsylvania) gives a talk at the Conference on Agent-Based Modeling in Philosophy (11-13 December, 2014) titled "Agent-based Models and Confirmation Theory". Abstract: Is it possible to develop a confirmation theory for agent-based models? There are good reasons to be skeptical: Classical confirmation theory explains how empirical evidence bears on the truth of hypotheses and theories, while agent-based models are almost always idealized and hence known to be false. Moreover, classical ideas about confirmation have been developed for relatively simple hypotheses, while even the simplest agent-based models have thousands of variables. Nevertheless, we can draw on ideas from confirmation theory in order to develop an account of agent-based model confirmation. Theorists can confirm hypotheses about model/world relations, and they can also use a variety of techniques to investigate the reliability of model results. This paper is an exploration of these possibilities.
Aidan Lyon (Maryland, MCMP) gives a talk at the MCMP Colloquium (13 November, 2014) titled "I Believe I don't Believe. (And So Can You!)". Abstract: Contemporary epistemology offers us two very different accounts of our epistemic lives. According to Traditional epistemologists, the decisions that we make are motivated by our desires and guided by our beliefs, and these beliefs and desires all come in an all-or-nothing form. In contrast, many Bayesian epistemologists say that these beliefs and desires come in degrees and that they should be understood as subjective probabilities and utilities. What are we to make of these different epistemologies? Are the Traditionalists and the Bayesians in disagreement, or are their views compatible with each other? Some Bayesians have challenged the Traditionalists: Bayesian epistemology is more powerful and more general than the Traditional theory, and so we should abandon the notion of all-or-nothing belief as something worthy of philosophical analysis. The Traditionalists have responded to this challenge in various ways. I shall argue that these responses are inadequate and that the challenge lives on.
Alvin I. Goldman (Rutgers) meets Stephan Hartmann (MCMP/LMU) in a joint session on "Bridging the Gap between Informal and Formal Social Epistemology" at the MCMP workshop "Bridges 2014" (2 and 3 Sept, 2014, German House, New York City). The 2-day trans-continental meeting in mathematical philosophy focused on inter-theoretical relations, thereby connecting form and content of this philosophical exchange. Idea and motivation: We use theories to explain, to predict and to instruct, to talk about our world and order the objects therein. Different theories deliberately emphasize different aspects of an object, purposefully utilize different formal methods, and necessarily confine their attention to a distinct field of interest. The desire to enlarge knowledge by combining two theories presents a research community with the task of building bridges between the structures and theoretical entities on both sides. Especially if no background theory is available as yet, this becomes a question of principle and of philosophical groundwork: if there are any, what are the inter-theoretical relations to look like? Will a unified theory possibly adjudicate between monist and dualist positions? Under what circumstances will partial translations suffice? Can the ontological status of inter-theoretical relations inform us about inter-object relations in the world? Find out more about the meeting at www.lmu.de/bridges2014.
Branden Fitelson (Rutgers) meets Hannes Leitgeb (MCMP/LMU) in a joint session on "Belief vs Probability" at the MCMP workshop "Bridges 2014" (2 and 3 Sept, 2014, German House, New York City). For the idea and motivation behind this 2-day trans-continental meeting in mathematical philosophy, see the Goldman-Hartmann entry above; find out more at www.lmu.de/bridges2014.
Teddy Seidenfeld (CMU) gives a talk at the Workshop on Imprecise Probabilities in Statistics and Philosophy (27-28 June, 2014) titled "Dominance and Elicitation in IP Theory". Abstract: I review de Finetti's two coherence criteria for determinate probabilities: coherence1, which is defined in terms of previsions (fair prices) for a set of random variables that are undominated by the status quo (previsions immune to a sure loss), and coherence2, which is defined in terms of forecasts for random variables that are undominated in Brier score by a rival set of forecasts. I review issues of elicitation associated with these two criteria that differentiate them, particularly when generalizing from eliciting determinate to eliciting imprecise probabilities.
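In standard notation (a sketch, not the speaker's slides): previsions $p_1, \dots, p_n$ for random variables $X_1, \dots, X_n$ are coherent1 when no choice of stakes $\alpha_1, \dots, \alpha_n$ makes the net payoff

\[ \sum_i \alpha_i\, (X_i - p_i) \]

strictly negative in every state (no sure loss), and coherent2 when no rival vector of forecasts achieves a strictly smaller Brier penalty $\sum_i (X_i - p_i)^2$ in every state.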
Fabio G. Cozman (Sao Paulo) gives a talk at the Workshop on Imprecise Probabilities in Statistics and Philosophy (27-28 June, 2014) titled "Imprecise (Full Conditional) Probabilities, Graphs and Graphoids Independence Assumptions". Abstract: Research in artificial intelligence systems has often employed graphs to encode multivariate probability distributions. Such graph-theoretical formalisms heavily employ independence assumptions so as to simplify model construction and manipulation. Another line of research has focused on the combination of logical and probabilistic formalisms for knowledge representation, often without any explicit discussion of independence assumptions. In this talk we examine (1) graph-theoretical models, called credal networks, that represent sets of probability distributions and various independence assumptions; and (2) languages that combine logical constructs with graph-theoretical models, so as to provide tractability and flexibility. The challenges in combining these various formalisms are discussed, together with insights on how to make them work together.
Jim Joyce (Michigan) gives a talk at the Workshop on Imprecise Probabilities in Statistics and Philosophy (27-28 June, 2014) titled "Imprecise Priors as Expressions of Epistemic Values". Abstract: As is well known, imprecise prior probabilities can help us model beliefs in contexts where evidence is sparse, equivocal or vague. It is less well-known that they can also provide a useful way of representing certain kinds of indecision or uncertainty about epistemic values and inductive policies. If we use the apparatus of proper scoring rules to model a believer's epistemic values, then we can see her 'choice' of a prior as, partly, an articulation of her values. In contexts where epistemic values and inductive policies are less than fully definite, or where there is unresolved conflict among values, the imprecise prior will reflect this indefiniteness in theoretically interesting ways.
Lev Vaidman (Tel Aviv) gives a talk at the MCMP Colloquium (10 February, 2014) titled "Sleeping Beauty in Quantumland". Abstract: A recent philosophical controversy in the analysis of the probability puzzle about Sleeping Beauty is presented. The issue of probability in the many-worlds interpretation of quantum mechanics is reviewed. It is argued that the analysis of the puzzle in the framework of the many-worlds interpretation helps to solve the Sleeping Beauty puzzle.
Rainer Hegselmann (Bayreuth) gives a talk at the MCMP Colloquium (15 January, 2014) titled "Understanding epistemic grouping, networking and division of labour: What can simple macroscopic models do?". Abstract: In my talk I'll start with a minimalistic model of opinion dynamics, the so-called bounded confidence model. Then I present stepwise extensions. In the end we have a model with cognitive division of labour and different epistemic groups, all of them engaged in networking of all sorts. Some of the groups seek the truth or try to climb upwards in cliffy epistemic landscapes. Other groups simply follow the truth seekers and climbers. As a result we get a simulator that allows us, for instance, to analyse the costs and benefits of networking and grouping, measured in terms of societal distance to the truth.
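For readers who do not know the bounded confidence model, here is a minimal sketch (the parameter values are illustrative, not Hegselmann's): each agent repeatedly moves to the average opinion of all agents within her confidence radius eps.

```python
import random

def hk_step(opinions, eps):
    """One synchronous bounded-confidence update: each agent averages
    the opinions of everyone within distance eps (including herself)."""
    return [
        sum(y for y in opinions if abs(y - x) <= eps) /
        sum(1 for y in opinions if abs(y - x) <= eps)
        for x in opinions
    ]

random.seed(1)
opinions = [random.random() for _ in range(50)]
for _ in range(30):
    opinions = hk_step(opinions, eps=0.15)
print(sorted(round(x, 3) for x in set(round(x, 6) for x in opinions)))
# Small eps -> several stable opinion clusters; large eps -> consensus.
```

The stepwise extensions the talk describes add, on top of this core dynamic, truth seekers, followers, and landscape climbing.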
David Etlin (MCMP/LMU) gives a talk at the MCMP Colloquium (21 November, 2013) titled "Vague Desire: The Sorites and the Money Pump". Abstract: The similarity between the sorites paradox of vagueness and the money pump of decision theory has been noted by Dummett (in "Wang's Paradox"), but the connection has not been widely recognized or developed. We argue that on one plausible philosophical theory of linguistic meaning, the Gricean intention-based account (as developed in Schiffer's "Meaning"), the paradox of vagueness turns out to be a puzzle about intransitive preferences. Given this, one can solve the sorites paradox by diagnosing the appealing but mistaken principle connecting preference and choice that leads to the money pump. We argue for a resolution of the money pump with consequences not only for diachronic principles of rational choice, but also for the standard synchronic principle of rationalizable actions. Our approach to vagueness helps overcome worries about the arbitrariness of rejecting instances of the sorites step, and also supports a treatment of vague expressions that don't immediately lend themselves to soritical reasoning.
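A worked version of the money pump, for concreteness (the standard textbook setup, not specific to the talk): suppose you strictly prefer $A$ to $B$, $B$ to $C$, and $C$ to $A$, and will pay some small $\varepsilon > 0$ to swap an option you hold for one you strictly prefer. Starting with $C$, a trader sells you $B$ for $\varepsilon$, then $A$ for $\varepsilon$, then $C$ for $\varepsilon$, returning you to where you began minus $3\varepsilon$; iterating drains you of arbitrary amounts. The sorites analogy is that each swap, like each step from $n$ grains to $n+1$, looks individually acceptable while the chain of them is disastrous.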
Aidan Lyon (MCMP/LMU) gives a talk at the MCMP Colloquium (6 November, 2013) titled "Measuring Overconfidence with Imprecise Probabilities & The Wisdom of Collective Credences". Abstract: What explains the Wisdom of Crowds effect? Page (2008) has made some initial progress on this question with what he calls the Diversity Prediction Theorem. The upshot of the theorem is that if a collective has some diversity to it, then the collective's estimate is guaranteed to be more accurate than the typical estimate in the collective. This appears to be a simple and very general account of the Wisdom of Crowds effect. However, for the theorem to have explanatory power, it needs to be supplemented with additional assumptions. In this paper, I analyse these assumptions, discuss their drawbacks, and consider how we might overcome them. A consequence of this analysis is that the Wisdom of Crowds effect behaves very differently for probability estimates than it does for regular quantity estimates.
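A quick numerical check of the theorem's content, stated formally in the Scott Page entry above (the estimates here are made up for illustration):

```python
def squared_error(x, theta):
    return (x - theta) ** 2

truth = 100.0
estimates = [80.0, 95.0, 108.0, 120.0]          # four hypothetical forecasters
crowd = sum(estimates) / len(estimates)          # 100.75

collective_error = squared_error(crowd, truth)
avg_individual_error = sum(squared_error(s, truth) for s in estimates) / len(estimates)
diversity = sum(squared_error(s, crowd) for s in estimates) / len(estimates)

print(collective_error)                          # 0.5625
print(avg_individual_error - diversity)          # 0.5625: the identity holds
```

Since diversity is non-negative, the crowd average cannot do worse than the average individual; the explanatory question Lyon raises is what further assumptions make this algebraic fact informative.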
Eleonora Cresto (Buenos Aires) gives a talk at the MCMP Colloquium (7 November, 2013) titled "Group Knowledge and Probability". Abstract: The talk addresses the concept of group knowledge and its role in probability aggregation. I start by acknowledging that we often feel compelled to credit group agents with epistemic attitudes that are not held by any of their members. Nonetheless, I argue that ideally responsible agents also find a deflationary pressure to ground such attitudes in concrete individuals, for purely conceptual reasons, out of coherence. As a result of this, epistemically responsible groups are conceived of as dynamic entities; correspondingly, group knowledge gets defined in terms of a family of possible paths between distributed knowledge and common knowledge. Public Announcement logics can then be endowed with new tasks and goals, among them the task of monitoring the satisfaction of normative requirements by epistemically responsible agents. Next, I argue that this conception of group knowledge provides a principled way to deal with group probabilities. I begin by considering sets of individual measures conditional on common and distributed knowledge, respectively. The proposal is further refined with the addition of a geometric pooling method on the conditional measures. The resulting aggregation procedure is shown to be in partial agreement with some proposals found in the recent literature. I contend, however, that a full-fledged understanding of group probabilities should focus on the distance between the two aggregated sets, and on the possible strategies to reduce it.
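For reference, geometric pooling in its standard form (background, not from the abstract): given individual measures $P_1, \dots, P_n$ and weights $w_i \geq 0$ with $\sum_i w_i = 1$,

\[ P_{\text{pool}}(\omega) = \frac{\prod_i P_i(\omega)^{w_i}}{\sum_{\omega'} \prod_i P_i(\omega')^{w_i}}, \]

a rule which, unlike linear averaging, commutes with Bayesian conditionalization.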
Alexander Bird (Bristol) gives a talk at the MCMP Colloquium (24 July, 2013) titled "Externalism, Internalism, and the KK Principle". Abstract: The KK principle states that if a subject knows some fact then she is in a position to know that she knows that fact. This paper examines the relationship between the KK principle and epistemological externalism and internalism. There is often thought to be a very close relationship between externalism and the rejection of the KK principle, and between internalism and its acceptance. How strong are the connections? The stronger proposals are: externalism entails the denial of the KK principle; internalism entails the truth of the KK principle. We will consider a number of problems for these theses as stated, and examine two ways of amending them so that they avoid these problems.
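One standard way to make the principle precise (a formal gloss, not from the abstract): in epistemic logic the KK principle is the 4 axiom, $K\varphi \rightarrow KK\varphi$, which on Kripke semantics is valid exactly when the epistemic accessibility relation is transitive; this gives the externalism/internalism dispute a sharp formal target.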
Dana Scott (CMU Pittsburgh and Berkeley) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "Stochastic Lambda-Calculi".
Sean Walsh (Irvine) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "Empiricism, Probability, and Knowledge of Arithmetic".
Arthur Paul Pedersen (Max Planck Institute for Human Development) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "Great Expectations. Strictly Coherent Preferences, No Holds Barred".
Glauber de Bona (Sao Paulo) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "Towards Classifying Propositional Probabilistic Logics".
Jon Williamson and Jürgen Landes (University of Kent) give a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "Objective Bayesian epistemology for inductive logic on predicate languages".
Teddy Groves (University of Kent) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "An application of Carnapian inductive logic to philosophy of statistics".
Kevin T. Kelly (CMU Pittsburgh) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "Acceptance without Certainty or Stability".
Hannes Leitgeb (LMU/MCMP) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "The Humean Thesis on Belief. Belief and Stable Probability".
Stanislav O. Speranski (Novosibirsk State University) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "Quantifying over events in probability logic: expressibility vs. computability".
Liam Kofi Bright (CMU Pittsburgh) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "Comparing Degrees of Incoherence".
Hykel Hosni (CPNSS) gives a talk at "Progic 2013", the Sixth Workshop on Combining Probability and Logic (17-18 Sept, 2013), titled "The logical structure of de Finetti's notion of event".
Berit Brogaard (St. Louis) gives a talk at the MCMP workshop "New Perspectives on External World Scepticism" (9-10 July, 2013) titled "Immediate justification, bootstrapping and cognitive penetrability". Abstract: I argue that dogmatism offers an adequate reply to the skeptic and then consider the bootstrapping and cognitive penetrability problems that appear to afflict immediate justification theories. I argue that the cognitive penetrability problem can be avoided by ensuring that only properly grounded seemings provide immediate justification. This move, however, comes at a cost. It introduces a potentially undesirable form of externalism into the theory of immediate justification. I then show that the bootstrapping and cognitive penetrability problems don't arise on a version of dogmatism that takes perceptual seemings to provide immediate justification for belief. Though this move requires an apparently inaccessible notion of rationality, I argue that the resultant view is nonetheless a form of internalism.
Thomas Grundmann (Cologne) gives a talk at the MCMP workshop "New Perspectives on External World Scepticism" (9-10 July, 2013) titled "Rethinking Scepticism about Justification. Don't Forget the Old Evil Demon.". Abstract: Mentalism about justification is typically motivated by the new evil demon intuition. According to it, mental duplicates, i.e., people who share all their non-factive mental states, also share their justificatory properties, no matter whether they are in abnormal conditions or whether they are permanently deceived by an evil demon. At the same time, mentalists often ignore another intuition that drives many traditional sceptical arguments and that I will call the old evil demon intuition. According to this second intuition, demon scenarios are sceptical hypotheses that challenge one's own justification unless one is in a position to rule them out from one's own epistemic perspective. Moreover, there are two different kinds of sceptical hypotheses: hypotheses that conflict with the truth of a target belief, and hypotheses that conflict with the belief's reliability. As I will argue, claiming that one needs to rule out the latter kind of hypothesis in order to achieve first- or second-order justification commits one to the view that reliability is at least a necessary condition for justification and, hence, that mentalism is false. I will conclude with some suggestions as to how both intuitions may fit together within an overall reliabilist framework.
Ralph Wedgwood (Southern California) gives a talk at the MCMP workshop "New Perspectives on External World Scepticism" (9-10 July, 2013) titled "Scepticism and Probabilism". Abstract: Philosophers who work within a broadly probabilistic epistemological framework have extensively explored the significance of inductive scepticism. But what is the significance of external-world scepticism within a probabilistic framework? Presumably, if external-world scepticism is false, then the perceptions or sensory experiences that a believer has at a given time must make some difference to the probability function that measures the degrees of belief that it is rational or justified for the believer to have at that time. Specifically, it must normally be the case that if the believer has an experience as of p’s being the case, the probability of p will be raised in some way. In general, an experience as of p’s being the case will have many other effects on the probabilities of other propositions besides p. Still, there must be some limits to the effects that this experience has on these probabilities. It is proposed here that for every experience, there is some partition of propositions such that the experience cannot change the conditional probability of any proposition conditional on any member of that partition; in other words, the rational impact of experience can be modelled by means of some kind of conditionalization. However, it seems that the effect that experience normally has on a rational believer’s degrees of belief can be defeated in some cases. Some philosophers—most notably, Jonathan Weisberg—have objected that conditionalization cannot be reconciled with a plausible account of defeasibility. A reply to this objection is offered here, including an account of how perceptual justification can be defeated. This account implies that the cases in which perceptual justification is defeated are in a sense exceptional or abnormal: it is only when certain special factors are present that perceptual justification will be defeated. The account has this implication because it relies on a principle that seems likely to play a crucial role in any adequate solution to inductive scepticism—the principle that it is rational for the believer to start out with a set of prior probabilities that are systematically biased in certain ways. In fact, it turns out that the sort of rational antecedent bias that is required to make sense of this fact about the defeasibility of perceptual justification is also enough to provide the core of an account of the rational impact of experience. In short, within a probabilistic framework, external-world scepticism and inductive scepticism turn out to have essentially the same solution.
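The partition proposal in the abstract is, in effect, the standard rigidity condition (stated here in common notation, not the speaker's): experience redistributes credence over a partition $\{Q_j\}$ while leaving credences conditional on its cells untouched,

\[ P_{\text{new}}(A \mid Q_j) = P_{\text{old}}(A \mid Q_j) \quad \text{for all } j, \]

which holds just in case the update is a Jeffrey conditionalization on $\{Q_j\}$.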
C. Ulises Moulines (LMU) gives a talk at the MCMP workshop "New Perspectives on External World Scepticism" (9-10 July, 2013) titled "Is There Anything Besides Me?". Abstract: The question "Is there anything besides me?" appears to be a most basic philosophical question. It is basic in an ontological, an epistemological, and, so it may be argued, also a semantic sense. It challenges our most fundamental categories of thought. To this question, four different answers may, in principle, be given: (1) a realist answer: "Yes"; (2) a solipsist answer: "No"; (3) a sceptical answer: "Perhaps, but I'll never know"; and finally (4) a positivist answer: "This question makes no sense". Most contemporary philosophers maintain either (1) or (4). Drawing on the scenario laid out by Calderón in his Life Is a Dream, I argue in this paper that there is no good reason for maintaining (4), and that the usual arguments for (1) fall short of being fully convincing. Besides being historically prior to Descartes' genius malignus, Calderón's model has at least two epistemological advantages: (a) It has some degree of empirical plausibility; (b) it makes a clear case for a coherent answer of type (3) and even opens the door for a conceivable answer of type (2). Through some minor modifications of the plot in Calderón's play, it will be shown that its main character, Segismund, can rationally choose to take positions (2) or (3).
Igal Kvart (Jerusalem) gives a talk at the MCMP Colloquium (20 June, 2013) titled "The Pragmatics of Knowledge, Counter-Examples to Pragmatic Contaminations of Knowledge, Pragmatic Inconsistencies, and the Knowledge Norm of Assertion". Abstract: In this paper, I start by briefly summarizing my account of the pragmatics of knowledge, contrasting it with the competing accounts favoring the pragmatic contamination of the semantics of knowledge ascriptions, such as Epistemic Contextualism and Pragmatic Encroachment (most notably represented by Subject-Sensitive Invariantism – SSI). The proposed pragmatic account, I claim, explains the examples in question, obviating the need for accounts that invoke pragmatic contamination of the semantics of knowledge ascriptions. I then further propose two different counter-examples to the above Pragmatic Contamination of Knowledge accounts, which in turn hold up in the light of, and are well explained by, the pragmatic account of knowledge I offer. I proceed to offer the notion of Pragmatic Inconsistencies associated with the Steering Role of Knowledge, which is a main feature of the pragmatic account I suggest. I will claim such a pragmatic inconsistency character for Moorean variants, use it against Stanley's Certainty Norm of Assertion, and argue against Williamson's version of the Knowledge Norm of Assertion, arguing for an alternative version of my own.