Branch of mathematics concerning probability
Episode: 2444 Of Wombats and Armadillos: Using Animals to Teach Probability. Today, why did the wombat cross the road?
It's election day! Please make sure you get out and vote! We spent the rest of the episode discussing shaky probability theory, potential GAR standouts, special teams curiosities, mathematical education, solfège, and early-season division standouts.
On episode 214, we welcome Tom Chivers to discuss Bayesian statistics, how their counterintuitive nature tends to turn people off, the philosophical disagreements between the Bayesians and the frequentists, why “priors” aren't purely subjective and why all theories should be considered as priors, the difficulty of quantifying emotional states in psychological research, how priors are used and misused to inform interpretations of new data, our innate tendency toward black and white thinking, the replication crisis, and why statistically significant research is often wrong.

Tom Chivers is an author and the award-winning science writer for Semafor. His writing has appeared in The Times (London), The Guardian, New Scientist, Wired, CNN, and more. He is the co-host of The Studies Show podcast alongside Stuart Ritchie. His books include The Rationalist's Guide to the Galaxy and How to Read Numbers. His newest book, available now, is called Everything Is Predictable: How Bayesian Statistics Explain Our World.

| Tom Chivers |
► Website | https://tomchivers.com
► Twitter | https://x.com/TomChivers
► Semafor | https://www.semafor.com/author/tom-chivers
► Podcast | https://www.thestudiesshowpod.com
► Everything is Predictable Book | https://amzn.to/3UJTOxD

Where you can find us: | Seize The Moment Podcast |
► Facebook | https://www.facebook.com/SeizeTheMoment
► Twitter | https://twitter.com/seize_podcast
► Instagram | https://www.instagram.com/seizethemoment
► TikTok | https://www.tiktok.com/@seizethemomentpodcast
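For readers unfamiliar with the Bayesian updating the episode centers on, here is a minimal illustrative sketch of how a prior shapes the interpretation of new evidence. The base rate and test accuracies below are invented for illustration and are not from Chivers' book:

```python
# A minimal Bayes' rule sketch: how much a positive test result should
# move your belief depends heavily on the prior (base rate).
# All numbers here are illustrative assumptions.
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A 99%-sensitive test with a 1% false-positive rate, applied to a
# 1-in-1000 condition, still leaves the posterior low: the prior dominates.
print(round(posterior(0.001, 0.99, 0.01), 3))  # prints 0.09
```

This is the classic base-rate illustration: "statistically significant" evidence against a very low prior can still leave the hypothesis improbable, which is one way such research ends up wrong.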
A Note from James: I am thrilled to celebrate the 10th anniversary of my podcast. Occasionally, I'll feature some timeless episodes as if they're brand new, sharing those that have greatly impacted me. One such figure is Nassim Taleb, whom I consider one of the smartest people on the planet.

I've learned so much from Nassim, and I'm not sure he realizes or cares just how influential he's been on me. I was extremely grateful when he agreed to appear on my podcast. There's an interesting backstory to his appearance: he joined my show a few years ago, and we are airing that episode now, though he might not be aware of the whole story.

Back in 2002, I was desperate: I was broke, struggling, losing my house, and my family was falling apart. I wrote to 20 influential individuals, including well-known investors and writers like Warren Buffett and Carl Icahn, expressing my desire to meet them. Only three responded.

Jim Cramer was one of them. I had sent him ten ideas for articles he could write for TheStreet.com. To my surprise, he responded positively and encouraged me to write the articles myself, which kickstarted my career as a writer. From financial columns, I expanded into other topics.

Victor Niederhoffer also replied because I sent him software programs tailored to his trading style, offering them for his and his traders' use and my assistance if needed, with no pressure to respond.

Nassim Taleb was another who responded. I had reached out to him because I admired his book "Fooled by Randomness" and wished to meet him. Although he was willing to meet, I never followed up. However, many years later, he came on my podcast, bringing everything full circle, for which I am immensely grateful.

Now, I am honored to reintroduce one of the smartest men in the universe, Nassim Taleb.
Episode Description: In this episode, we explore Nassim Taleb's influential ideas, specifically his thoughts on antifragility, the unpredictability of life, and the beneficial role of trial and error in diverse areas such as technology, health, and business. As we mark ten years of learning, the host shares transformative conversations with Taleb, revealing how chaos and uncertainty can fortify systems, people, and industries. We examine Taleb's key principles: reducing interference, valuing variability, and the necessity of personal investment in outcomes. We also look at concrete examples. Further, we discuss how embracing errors and innovation can lead to breakthroughs in sectors like drug development and business ventures, and address the negative impacts of excessive rescue measures and regulatory constraints. Through a blend of personal anecdotes and theoretical exploration, this episode encapsulates the essence of antifragility as a pathway to resilience and fulfillment.

Episode Summary:
00:00 Celebrating a Decade of Podcasting: A Special Revisit
00:35 The Power of Cold Emails: Life-Changing Connections
02:04 Nassim Taleb: A Mind That Shaped My Worldview
02:51 Exploring the Impact of Technology Through the Lens of Anti-Fragility
04:18 The Evolution of Communication: From TV to Social Media
04:47 The Paradox of Technological Progress: A Historical Perspective
05:43 Disruptive Innovations and the Cycle of Technology
08:41 Personal Anecdotes and the Philosophy of Email Communication
09:24 The Intricacies of Responding to Emails and Setting Boundaries
10:56 Journalism, Social Media, and the Quest for Authenticity
15:27 Understanding Fragility vs. Anti-Fragility: A Deep Dive
26:07 The Role of Variability and Stressors in Evolution and Health
31:49 Applying Anti-Fragility to Diet, Exercise, and Lifestyle
49:43 The Importance of Political Variability and the Unpredictability of Life
51:40 Exploring the Anti-Fragile Lifestyle
52:00 The Power of Walking and Creative Thinking
54:22 Embracing Natural Elements for Health
55:06 Rethinking Medicine and Personal Health Strategies
57:30 Navigating Social Relationships and Disruption
01:00:31 The Essence of Anti-Fragility in Life and Work
01:09:34 Understanding the Financial System and Its Fragilities
01:12:33 The Role of Entrepreneurship and Risk in Society
01:29:51 Reflecting on Writing, Publishing, and Intellectual Pursuits
01:41:37 Closing Thoughts and Future Directions

------------

What do YOU think of the show? Head to JamesAltucherShow.com/listeners and fill out a short survey that will help us better tailor the podcast to our audience!

Are you interested in getting direct answers from James about your question on a podcast? Go to JamesAltucherShow.com/AskAltucher and send in your questions to be answered on the air!

------------

Visit Notepd.com to read our idea lists & sign up to create your own!

My new book, Skip the Line, is out! Make sure you get a copy wherever books are sold!

Join the You Should Run for President 2.0 Facebook Group, where we discuss why you should run for President.

I write about all my podcasts! Check out the full post and learn what I learned at jamesaltuchershow.com

------------

Thank you so much for listening! If you like this episode, please rate, review, and subscribe to “The James Altucher Show” wherever you get your podcasts:
Apple Podcasts
iHeart Radio
Spotify

Follow me on social media:
YouTube
Twitter
Facebook
LinkedIn
Dive headfirst into the unpredictable currents of finance with me, Edward Finley, as we chart a course through the complex seas of probability theory and its indispensable role in understanding capital markets. Venture into the treacherous waters where historical price data is both a beacon and a riddle.

We'll begin by navigating some of the basic assumptions of probability theory, uncovering the meaning of the basic statistical properties in easy-to-understand language and applying them to real-world data to give them life.

The seas can get pretty rough, and we'll measure that market volatility as we dissect the patterns that shape our understanding of investment risk through the lens of variance and standard deviation. Learn how volatility clustering makes volatility simpler to forecast than returns, and why the long-term view of market returns may be clearer than the short-term blur. We'll navigate the stormy spells that rocked the markets in the past and glean insights for weathering future financial squalls.

Wrapping up our odyssey, we confront the harsh truths about the limitations of statistical tools in understanding markets fraught with uncertainty. Join us for a candid discussion on the perils of relying too heavily on statistics in predicting market behavior, illustrated with cautionary tales like the "lost decade." By journey's end, you'll emerge with a fortified understanding of how probability theory can—and cannot—guide your investment decisions in the tumultuous tides of finance.

Notes - https://1drv.ms/p/s!AqjfuX3WVgp8uSLfASdaN8dlmtfE?e=XDpWKX

Thanks for listening! Please be sure to review the podcast or send your comments to me by email at info@not-another-investment-podcast.com. And tell your friends!
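As a taste of the statistical properties the episode walks through, here is a minimal sketch of computing mean, variance, standard deviation, and annualized volatility from a return series. The monthly returns below are invented for illustration and are not from the episode's notes:

```python
# Basic statistical properties of a (made-up) monthly return series,
# of the kind the episode applies to real market data.
returns = [0.02, -0.01, 0.03, -0.04, 0.01, 0.02, -0.02, 0.05]

n = len(returns)
mean = sum(returns) / n
# Sample variance (n - 1 denominator) and standard deviation.
variance = sum((r - mean) ** 2 for r in returns) / (n - 1)
std_dev = variance ** 0.5
# Annualized volatility: scale the monthly std dev by sqrt(12).
# This scaling assumes returns are independent month to month --
# exactly the assumption volatility clustering calls into question.
annualized_vol = std_dev * 12 ** 0.5

print(f"mean return:    {mean:.4f}")
print(f"variance:       {variance:.6f}")
print(f"volatility:     {std_dev:.4f}")
print(f"annualized vol: {annualized_vol:.4f}")
```

The same two summary numbers (mean and standard deviation) are the building blocks of the variance and risk measures discussed throughout the episode.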
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Anthropical Paradoxes are Paradoxes of Probability Theory, published by Ape in the coat on December 7, 2023 on LessWrong. This is the fourth post in my series on Anthropics. The previous one is Anthropical probabilities are fully explained by difference in possible outcomes.

Introduction

If there is nothing special about anthropics, if it's just about correctly applying standard probability theory, why do we keep encountering anthropical paradoxes instead of general probability theory paradoxes? Part of the answer is that people tend to be worse at applying probability theory in some cases than in others. But most importantly, the whole premise is wrong. We do encounter paradoxes of probability theory all the time. We are just not paying enough attention to them, and occasionally attribute them to anthropics.

Updateless Dilemma and Psy-Kosh's non-anthropic problem

As an example, let's investigate the Updateless Dilemma, introduced by Eliezer Yudkowsky in 2009.

Let us start with a (non-quantum) logical coinflip - say, look at the heretofore-unknown-to-us-personally 256th binary digit of pi, where the choice of binary digit is itself intended not to be random. If the result of this logical coinflip is 1 (aka "heads"), we'll create 18 of you in green rooms and 2 of you in red rooms, and if the result is "tails" (0), we'll create 2 of you in green rooms and 18 of you in red rooms. After going to sleep at the start of the experiment, you wake up in a green room. With what degree of credence do you believe - what is your posterior probability - that the logical coin came up "heads"?

Eliezer (2009) argues that updating on the anthropic evidence, and thus answering 90% in this situation, leads to a dynamic inconsistency, and therefore that anthropical updates should be illegal.
I inform you that, after I look at the unknown binary digit of pi, I will ask all the copies of you in green rooms whether to pay $1 to every version of you in a green room and steal $3 from every version of you in a red room. If they all reply "Yes", I will do so. Suppose that you wake up in a green room. You reason, "With 90% probability, there are 18 of me in green rooms and 2 of me in red rooms; with 10% probability, there are 2 of me in green rooms and 18 of me in red rooms. Since I'm altruistic enough to at least care about my xerox-siblings, I calculate the expected utility of replying 'Yes' as (90% * ((18 * +$1) + (2 * -$3))) + (10% * ((18 * -$3) + (2 * +$1))) = +$5.60." You reply yes.

However, before the experiment, you calculate the general utility of the conditional strategy "Reply 'Yes' to the question if you wake up in a green room" as (50% * ((18 * +$1) + (2 * -$3))) + (50% * ((18 * -$3) + (2 * +$1))) = -$20. You want your future selves to reply 'No' under these conditions. This is a dynamic inconsistency - different answers at different times - which argues that decision systems which update on anthropic evidence will self-modify not to update probabilities on anthropic evidence.

However, in the comments Psy-Kosh notices that this situation doesn't have anything to do with anthropics at all. The problem can be reformulated as picking marbles from two buckets with the same betting rule. The dynamic inconsistency doesn't go anywhere, and if previously it was a sufficient reason not to update on anthropic evidence, now it becomes a sufficient reason against the general case of Bayesian updating in the presence of logical uncertainty.

Solving the Problem

Let's solve these problems. Or rather this problem - as they are fully isomorphic and have the same answer. For simplicity, as a first step, let's ignore the betting rule and dynamic inconsistency and just address it in terms of the Law of Conservation of Expected Evidence.
Do I get new evidence while waking up in a green room or picking a green marble? O...
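The two expected-utility calculations in the quoted dilemma can be checked mechanically. A minimal sketch (the function name is mine; the payoffs and probabilities are from the post):

```python
# Reproducing the two expected-utility calculations from the
# Updateless Dilemma quoted above.
def expected_value(p_heads, p_tails):
    # Heads: 18 copies in green rooms, 2 in red; tails: the reverse.
    # Saying "Yes" pays +$1 per green-room copy, -$3 per red-room copy.
    heads_payoff = 18 * 1 + 2 * (-3)   # +$12
    tails_payoff = 2 * 1 + 18 * (-3)   # -$52
    return p_heads * heads_payoff + p_tails * tails_payoff

# After waking in a green room, updating gives 90% credence to heads:
print(round(expected_value(0.9, 0.1), 2))   # prints 5.6 -> updated agent says "Yes"

# Before the experiment, the logical coin is treated as a fair 50/50:
print(round(expected_value(0.5, 0.5), 2))   # prints -20.0 -> prior agent wants "No"
```

The sign flip between the two calls is exactly the dynamic inconsistency the post discusses: the same strategy looks profitable after the update and ruinous before it.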
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Learning-theoretic agenda reading list, published by Vanessa Kosoy on November 9, 2023 on The AI Alignment Forum.

Recently, I've been receiving more and more requests for a self-study reading list for people interested in the learning-theoretic agenda. I created a standard list for that, but before now I limited myself to sending it to individual people in private, out of some sense of perfectionism: many of the entries on the list might not be the best sources for the topics, and I haven't read all of them cover to cover myself. But at this point it seems better to publish a flawed list than to wait for perfection that will never come. Also, commenters are encouraged to recommend alternative sources that they consider better, if they know any.

General math background
- "Introductory Functional Analysis with Applications" by Kreyszig (especially chapters 1, 2, 3, 4)
- "Computational Complexity: A Conceptual Perspective" by Goldreich (especially chapters 1, 2, 5, 10)
- "Probability: Theory and Examples" by Durrett (especially chapters 4, 5, 6)
- "Elements of Information Theory" by Cover and Thomas (especially chapter 2)
- "Lambda-Calculus and Combinators: An Introduction" by Hindley
- "Game Theory: An Introduction" by Tadelis

AI theory
- "Understanding Machine Learning: From Theory to Algorithms" by Shalev-Shwartz and Ben-David (especially part I and chapter 21)
- "Bandit Algorithms" by Lattimore and Szepesvari (especially parts II, III, V, VIII)
  - Alternative/complementary: "Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems" by Bubeck and Cesa-Bianchi (especially sections 1, 2, 5)
  - "Prediction, Learning, and Games" by Cesa-Bianchi and Lugosi (mostly chapter 7)
- "Universal Artificial Intelligence" by Hutter
  - Alternative: "A Theory of Universal Artificial Intelligence based on Algorithmic Complexity" (Hutter, 2000)
  - Bonus: "Nonparametric General Reinforcement Learning" by Jan Leike

Reinforcement learning theory
- "Near-optimal Regret Bounds for Reinforcement Learning" (Jaksch, Ortner and Auer, 2010)
- "Efficient Bias-Span-Constrained Exploration-Exploitation in Reinforcement Learning" (Fruit et al, 2018)
- "Regret Bounds for Learning State Representations in Reinforcement Learning" (Ortner et al, 2019)
- "Efficient PAC Reinforcement Learning in Regular Decision Processes" (Ronca and De Giacomo, 2022)
- "Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient" (Foster, Golowich and Han, 2023)

Agent foundations
- "Functional Decision Theory" (Yudkowsky and Soares, 2017)
- "Embedded Agency" (Demski and Garrabrant, 2019)

Learning-theoretic AI alignment research agenda
- Overview
- Infra-Bayesianism sequence (Bonus: podcast)
- "Online Learning in Unknown Markov Games" (Tian et al, 2020)
- Infra-Bayesian physicalism (Bonus: podcast)
- Reinforcement learning with imperceptible rewards

Bonus materials
- "Logical Induction" (Garrabrant et al, 2016)
- "Forecasting Using Incomplete Models" (Kosoy, 2017)
- "Cartesian Frames" (Garrabrant, Herrman and Lopez-Wild, 2021)
- "Optimal Polynomial-Time Estimators" (Kosoy and Appel, 2016)
- "Algebraic Geometry and Statistical Learning Theory" by Watanabe

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
YouTube link: https://youtu.be/zMPnrNL3zsE

Gregory Chaitin discusses algorithmic information theory, its relationship with Gödel's incompleteness theorems, and the properties of the Omega number (the halting probability).

Listen now early and ad-free on Patreon: https://patreon.com/curtjaimungal

Sponsors:
- Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!)
- Crypto: https://tinyurl.com/cryptoTOE
- PayPal: https://tinyurl.com/paypalTOE
- Twitter: https://twitter.com/TOEwithCurt
- Discord Invite: https://discord.com/invite/kBcnfNVwqs
- iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802
- Pandora: https://pdora.co/33b9lfP
- Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e
- Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything
- TOE Merch: https://tinyurl.com/TOEmerch

LINKS MENTIONED:
- Meta Math and the Quest for Omega (Gregory Chaitin): https://amzn.to/3stCFxH
- Visual math episode on Chaitin's constant: https://youtu.be/WLASHxChXKM
- Podcast w/ David Wolpert on TOE: https://youtu.be/qj_YUxg-qtY
- A Mathematician's Apology (G. H. Hardy): https://amzn.to/3qOEbtL
- The Physicalization of Metamathematics (Stephen Wolfram): https://amzn.to/3YUcGLL
- Podcast w/ Neil deGrasse Tyson on TOE: https://youtu.be/HhWWlJFwTqs
- Proving Darwin (Gregory Chaitin): https://amzn.to/3L0hSbs
- What is Life? (Erwin Schrödinger): https://amzn.to/3YVk8Xm
- "On Computable Numbers, with an Application to the Entscheidungsproblem" (Alan Turing): https://www.cs.virginia.edu/~robins/T...
- "The Major Transitions in Evolution" (John Maynard Smith and Eörs Szathmáry): https://amzn.to/3PdzYci
- "The Origins of Life: From the Birth of Life to the Origin of Language" (John Maynard Smith and Eörs Szathmáry): https://amzn.to/3PeKFeM
- Podcast w/ Stephen Wolfram on TOE: https://youtu.be/1sXrRc3Bhrs
- Incompleteness: The Proof and Paradox of Kurt Gödel (Rebecca Goldstein): https://amzn.to/3Pf8Yt4
- Rebecca Goldstein on TOE on Gödel's Incompleteness: https://youtu.be/VkL3BcKEB6Y
- Gödel's Proof (Ernest Nagel and James R. Newman): https://amzn.to/3QX89q1
- Giant Brains, or Machines That Think (Edmund Callis Berkeley): https://amzn.to/3QXniYj
- An Introduction to Probability Theory and Its Applications (William Feller): https://amzn.to/44tWjXI

TIMESTAMPS:
- 00:00:00 Introduction
- 00:02:27 Chaitin's Unconventional Self-Taught Journey
- 00:06:56 Chaitin's Incompleteness Theorem and Algorithmic Randomness
- 00:12:00 The Infinite Calculation Paradox and the Omega Number's Complexity (Halting Probability)
- 00:27:38 God is a Mathematician: An Ontological Basis
- 00:37:06 Emergence of Information as a Fundamental Substance
- 00:53:10 Evolution and the Modern Synthesis (Physics-Based vs. Computational-Based Life)
- 01:08:43 Turing's Less Known Masterpiece
- 01:16:58 Extended Evolutionary Synthesis and Epigenetics
- 01:21:20 Renormalization and Tractability
- 01:28:15 The Infinite Fitness Function
- 01:42:03 Progress in Mathematics despite Incompleteness
- 01:48:38 Unconventional Academic Approach
- 01:50:35 Gödel's Incompleteness, Mathematical Intuition, and the Platonic World
- 02:06:01 The Enigma of Creativity in Mathematics
- 02:15:37 Dark Matter: A More Stable Form of Hydrogen? (Hydrinos)
- 02:23:33 Stigma and the "Reputation Trap" in Science
- 02:28:43 Cold Fusion
- 02:29:28 The Stagnation of Physics
- 02:41:33 Defining Randomness: The Chaos of 0s and 1s
- 02:52:01 The Struggles For Young Mathematicians and Physicists (Advice)

Learn more about your ad choices.
Visit megaphone.fm/adchoices
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: We Are Less Wrong than E. T. Jaynes on Loss Functions in Human Society, published by Zack M Davis on June 5, 2023 on LessWrong.

These paragraphs from E. T. Jaynes's Probability Theory: The Logic of Science (in §13.12.2, "Loss functions in human society") are fascinating from the perspective of a regular reader of this website:

We note the sharp contrast between the roles of prior probabilities and loss functions in human relations. People with similar prior probabilities get along well together, because they have about the same general view of the world and philosophy of life. People with radically different prior probabilities cannot get along—this has been the root cause of all the religious wars and most of the political repressions throughout history.

Loss functions operate in just the opposite way. People with similar loss functions are after the same thing, and are in contention with each other. People with different loss functions get along well because each is willing to give something the other wants. Amicable trade or business transactions, advantageous to all, are possible only between parties with very different loss functions. We illustrated this by the example of insurance above.

(Jaynes writes in terms of loss functions for which lower values are better, whereas we more often speak of utility functions for which higher values are better, but the choice of convention doesn't matter—as long as you're extremely sure which one you're using.) The passage is fascinating because the conclusion looks so self-evidently wrong from our perspective. Agents with the same goals are in contention with each other? Agents with different goals get along? What!? The disagreement stems from a clash of implicit assumptions.
On this website, our prototypical agent is the superintelligent paperclip maximizer, with a utility function about the universe—specifically, the number of paperclips in it—not about itself. It doesn't care who makes the paperclips. It probably doesn't even need to trade with anyone. In contrast, although Probability Theory speaks of programming a robot to reason as a rhetorical device[1], this passage seems to suggest that Jaynes hadn't thought much about how ideal agents might differ from humans? Humans are built to be mostly selfish: we eat to satisfy our own hunger, not as part of some universe-spanning hunger-minimization scheme. Besides being favored by evolution, selfish goals do offer some conveniences of implementation: my own hunger can be computed as a much simpler function of my sense data than someone else's. If one assumes that all goals are like that, then one reaches Jaynes's conclusion: agents with similar goal specifications are in conflict, because the specified objective (for food, energy, status, whatever) binds to an agent's own state, not a world-model. But ... the assumption isn't true! Not even for humans, really—sometimes people have "similar loss functions" that point to goals outside of themselves, which benefit from more agents having those goals. Jaynes is being silly here. That said—and no offense—the people who read this website are not E. T. Jaynes; if we can get this one right where he failed, it's because our subculture happened to inherit an improved prior in at least this one area, not because of our innate brilliance or good sense. Which prompts the question: what other misconceptions might we be harboring, due to insufficiently general implicit assumptions? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
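The convention point in Jaynes's parenthetical above can be demonstrated in a few lines: minimizing a loss function and maximizing the corresponding (negated) utility function select the same action, so only consistency of convention matters. The action names and numbers here are invented for illustration:

```python
# Minimizing loss and maximizing utility = -loss pick the same action.
# The candidate actions and their losses are made-up illustrative values.
loss = {"act_a": 3.0, "act_b": 1.5, "act_c": 2.2}
utility = {action: -l for action, l in loss.items()}

best_by_loss = min(loss, key=loss.get)
best_by_utility = max(utility, key=utility.get)
print(best_by_loss, best_by_utility)  # prints: act_b act_b
```

The danger Jaynes flags is mixing the conventions: maximizing a loss, or minimizing a utility, silently picks the worst action instead of the best one.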
Train your own AI using this free lab created by Dr Mike Pound. Big thanks to Brilliant for sponsoring this video! Get started with a free 30-day trial and 20% discount: https://brilliant.org/DavidBombal

How do you capitalize on this trend and learn AI? Dr Mike Pound of Computerphile fame shows us practically how to train your own AI. And the great news is that he has shared his Google Colab lab with us so you can start learning for free! If you are into cybersecurity or any other tech field, you probably want to learn about AI and ML. They can really help your resume and help you increase the $$$ you earn. Machine learning / artificial intelligence is a fantastic opportunity for you to get a better job. Start learning this amazing technology today and start learning with one of the best!

// LAB //
Go here to access the lab: https://colab.research.google.com/dri...

// Previous Videos //
Roadmap to ChatGPT and AI mastery: • Roadmap to ChatGP...
I challenged ChatGPT to code and hack: • I challenged Chat...
The truth about AI and why you should learn it - Computerphile explains: • The truth about A...

// Dr Mike's recommended AI book //
Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville: https://amzn.to/3vmu4LP

// Dawid's recommended books //
1. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: https://amzn.to/3IrGCHi
2. Pattern Recognition and Machine Learning: https://amzn.to/3IWVm2v
3. Machine Learning: A Probabilistic Perspective: https://amzn.to/3xYFM05
4. Python Machine Learning: https://amzn.to/3y0r08Q
5. Deep Learning: https://amzn.to/3kxSbVu
6. The Elements of Statistical Learning: https://amzn.to/3Iwuuox
7. Linear Algebra and Its Applications: https://amzn.to/3EGwMAs
8. Probability Theory: https://amzn.to/3IrGeZm
9. Calculus: Early Transcendentals: https://amzn.to/3Z3Eugh
10. Discrete Mathematics with Applications: https://amzn.to/3Zpzpyt
11. Mathematics for Machine Learning: https://amzn.to/3m8jp5N
12. A Hands-On Introduction to Data Science: https://amzn.to/3Szob8c
13. Introduction to Algorithms: https://amzn.to/3xXo50K
14. Artificial Intelligence: https://amzn.to/3Z2fqGv

// Courses and tutorials //
AI For Everyone by Andrew Ng: https://www.coursera.org/learn/ai-for...
PyTorch Tutorial From Research to Production: https://www.infoq.com/presentations/p...
Scikit-learn Machine Learning in Python: https://scikit-learn.org/stable/

// PyTorch //
Github: https://github.com/pytorch
Website: https://pytorch.org/
Documentation: https://ai.facebook.com/tools/pytorch/

// Mike SOCIAL //
Twitter: https://twitter.com/_mikepound
YouTube: / computerphile
Website: https://www.nottingham.ac.uk/research...

// David SOCIAL //
Discord: https://discord.com/invite/usKSyzb
Twitter: https://www.twitter.com/davidbombal
Instagram: https://www.instagram.com/davidbombal
LinkedIn: https://www.linkedin.com/in/davidbombal
Facebook: https://www.facebook.com/davidbombal.co
TikTok: http://tiktok.com/@davidbombal
YouTube: / davidbombal

// MY STUFF //
https://www.amazon.com/shop/davidbombal

// SPONSORS //
Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com

Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!

#chatgpt #computerphile #ai
In this month's podcast, our guest is Catriona Byrne. Catriona Byrne has French and Scottish origins. She obtained her PhD from the University of Edinburgh in 1982 and worked for Springer as a Publishing Editor and later as Director for Mathematics until 2022, working with international teams of editors. In that time she held responsibility for many book series, including the flagship Grundlehren and Ergebnisse, and the Lecture Notes in Mathematics, as well as many journals including Inventiones Mathematicae, Mathematische Annalen, Mathematische Zeitschrift, and Probability Theory and Related Fields. Among other innovations, she initiated the digitisation of the Lecture Notes in Mathematics series, and she launched and developed Springer's successful programme of books and three journals in mathematical finance from 1998. She will be hosted by Bernard Teissier. Bernard Teissier, born 1945, is a French mathematician who has made major contributions to algebraic geometry and commutative algebra, specifically to singularity theory, multiplicity theory and valuation theory. His PhD from the University of Paris VII Denis-Diderot in 1973 was supervised by Heisuke Hironaka. He was a member of Nicolas Bourbaki. He has been a CNRS researcher at the École Polytechnique, the École Normale Supérieure and at Paris Universities. He was an Invited Speaker at the 1983 International Congress of Mathematicians in Warsaw. Bernard Teissier has been one of the Editors of Springer's Lecture Notes in Mathematics series since 1995.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: All the posts I will never write, published by Self-Embedded Agent on August 14, 2022 on LessWrong. This post has been written for the first Refine blog post day, at the end of the week of readings, discussions, and exercises about epistemology for doing good conceptual research. (With courtesy to Adam Shimi, who suggested the title and idea.) Rationality, Probability, Uncertainty, Reasoning

Failures of The Aumann Agreement Theorem

The famous Aumann Agreement Theorem states that rational reasoners can never agree-to-disagree. In day-to-day life we clearly have many situations where apparently rational reasoners do agree-to-disagree. Are people just bad rationalists, or are there more fundamental reasons that the Aumann Agreement Theorem can fail? I review all the ways in which the Aumann Agreement Theorem can fail that I know of - including failures based on indexical information, computational-complexity obstacles, divergent interpretations of evidence, Hansonian non-truth-seeking, and more.

Warren Buffett: The Alpha of Wall Street

If we observe a trader who consistently beats the market, that should be evidence against the Efficient Market Hypothesis. But a trader could also just have been lucky. How much should we update against the EMH, and how much should we expect the trader to beat the market in the future? Can we quantify how much information the market absorbed? This is very reminiscent of Bayesian surprise, measured in 'wows', in Bayesian statistics.

The Bid-Ask Spread and Epistemic Uncertainty / Prediction Markets as Epistemic Fog of War

If you know A will resolve true, you should buy shares on A; if you know not-A will happen, you should buy shares on not-A. If you think A will not resolve, you should sell shares on A. The bid-ask spread measures bet-resolution uncertainty. Suppose an adversary has an interest in showing you A and t. 
There is a -> selective reporting. When an earnings call comes in, the bid-ask spread increases.

Where Forecasting goes Wrong...

Forecasting is now a big deal in the rationalist community. I argue that a slavish adherence to Bayesian Orthodoxy leads to missing most of the value of prediction markets.

What do we mean when we talk about Probability?

Possibility theory is prior to probability theory: probability theory is possibility theory + Cournot's principle. Cournot's principle - that an epsilon/zero-probability possible event will never happen - is the fundamental principle of probability theory (cf. Shafer on the history of Cournot's principle). But what happens when we observe an epsilon/zero-probability event? We obtain a contradiction requiring belief revision.

Wow!1! I made a productive mistake

An exposition of 'Bayesian Surprise Attracts Human Attention', focused on the notion of 'Bayesian surprisal' measured in wows. There has been a ton of interest in Predictive Processing and Friston's Free Energy Principle. The discussion is often hampered by equivocation between different quantities that are the log of something. This post will try to clearly disambiguate between these notions and give both mathematical and intuitive explanations. I argue that the notion of a 'productive mistake' can be formalized as the choice to engage with a high-wow, high-entropy source. Compare risk-seeking behaviour in MaxCausalEnt and Schmidhuber's Artificial Curiosity?

Foundations of Reasoning

Atocha Aliseda's Axioms of Abduction

A book review of Aliseda's underappreciated 'Abductive Reasoning'. From SEP: In the philosophical literature, the term "abduction" is used in two related but different senses. In both senses, the term refers to some form of explanatory reasoning. 
However, in the historically first sense, it refers to the place of explanatory reasoning in generating hypotheses, while in the sense in which it is used most frequently in the modern literature it refers to the place of explan...
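The "Bayesian surprisal" the post sketches has a standard formalization: Itti and Baldi's Bayesian surprise is the KL divergence between the posterior and the prior. A minimal, self-contained sketch under that definition; the two-hypothesis coin model and all numbers are illustrative, not from the post (the paper measures surprise in "wows"; this sketch uses bits, i.e. log base 2):

```python
from math import log2

def bayes_update(prior, likelihood):
    """Posterior over hypotheses after one observation.

    prior: dict hypothesis -> prior probability
    likelihood: dict hypothesis -> P(observation | hypothesis)
    """
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def surprise_bits(prior, posterior):
    """Bayesian surprise: KL(posterior || prior), in bits."""
    return sum(posterior[h] * log2(posterior[h] / prior[h])
               for h in posterior if posterior[h] > 0)

# Two hypotheses about a coin: fair, or biased 0.9 toward heads.
prior = {"fair": 0.5, "biased": 0.5}
lik_heads = {"fair": 0.5, "biased": 0.9}  # we observe one head

posterior = bayes_update(prior, lik_heads)
print(posterior)                          # biased hypothesis gains weight
print(surprise_bits(prior, posterior))    # small positive surprise
```

An observation that leaves the posterior equal to the prior yields zero surprise, which is exactly the "no wow" case; a "productive mistake" in the post's sense would be seeking out observations with a high expected value of this quantity.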
In Probability and Forensic Evidence: Theory, Philosophy, and Applications (Cambridge UP, 2021), Ronald Meester and Klaas Slooten address the role of statistics and probability in the evaluation of forensic evidence, including both theoretical issues and applications in legal contexts. The book discusses what evidence is and how it can be quantified, how it should be understood, and how it is applied (and sometimes misapplied). Ronald Meester is Professor in probability theory at the Vrije Universiteit Amsterdam. He is co-author of the books Continuum Percolation (1996), A Natural Introduction to Probability Theory (2003), and Random Networks for Communication (2008), and has written around 120 research papers on topics including percolation theory, ergodic theory, philosophy of science, and forensic probability. Klaas Slooten works as Statistician at the Netherlands Forensic Institute and at the Vrije Universiteit Amsterdam, where he is Professor by special appointment. He has published around 30 articles on forensic probability and statistics. He is interested in the mathematical, legal, and philosophical evaluation of evidence. Marc Goulet is Professor in mathematics and Associate Dean in the College of Arts and Sciences at the University of Wisconsin-Eau Claire. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network
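The quantification of evidence discussed in forensic statistics typically runs through likelihood ratios: Bayes' rule in odds form, posterior odds = likelihood ratio × prior odds. A minimal sketch of that update; all numbers are illustrative, not taken from the book:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = LR * prior odds."""
    return likelihood_ratio * prior_odds

def odds_to_prob(odds):
    """Convert odds o:1 to a probability o / (1 + o)."""
    return odds / (1 + odds)

# Illustrative: a trace matches the suspect.  P(match | suspect is the
# source) = 1, and the match probability in the general population is
# 1 in 10,000, giving a likelihood ratio of 10,000.
lr = 1 / 1e-4

# Prior: the suspect is one of 1,001 equally plausible sources,
# so prior odds of 1:1000.
prior = 1 / 1000

post = posterior_odds(prior, lr)
print(odds_to_prob(post))  # ≈ 0.909: strong evidence, not certainty
```

The sketch also shows a classic misapplication the genre warns about: the match probability P(evidence | not the source) is 1 in 10,000, but the posterior probability of guilt is only about 91% once the prior is accounted for; conflating the two is the prosecutor's fallacy.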
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: I scraped all public "Effective Altruists" Goodreads reading lists, published by MaxRa on the Effective Altruism Forum. A couple of weeks ago I mentioned the idea of scraping the reading lists of the members of the Effective Altruists Goodreads group. The initial motivation was around the idea that EAs might be reading too much of the same books, and we might improve this by finding out which books are read relatively little compared to how many EAs proclaim that they want to read them. I got some positive feedback and got to work. Besides helping a little with improving our exploration of literature, I think the results also serve as an interesting survey of the reading behavior of EAs. Though we might want to keep in mind a possible selection bias for EAs and EA-adjacent people that share their reading behavior on Goodreads. For those who don't know Goodreads, it's a social network where you can share ratings and reviews of the books you've read, and organize books in shelves like I have read this! or I want to read this!. It's quite fun, many EAs are on there and I wholeheartedly recommend joining. In total, there were 333 349 people in the Effective Altruists Goodreads group, and 257 275 of them had their privacy settings set to completely public, allowing anyone to inspect their reading lists even without being logged in. I checked the Goodreads scraping rules and was good to go. Before you continue, I invite you to predict the following:
- 3 from the 10 most read books, except Doing Good Better
- a book that relatively many EAs want to read, but few have actually read
Finally, if you have any further ideas for analysis, leave a comment and I'll be happy to see what I can do. If you want access to the csv file or the Python script I used, I uploaded them here. 
In this screenshot you see the types of data I have.

Most read books

Here are the books that our community has already explored a bunch. I would not have expected 1984 and Superintelligence to make it to the Top 5. HPMOR being the least read Harry Potter novel is a slight disappointment.

Most planned to read

Many classics on people's I want to read this! lists - maybe overall slightly lengthier & more difficult books? Though Superforecasting is not too long, very readable, and very excellent in my opinion, so feel free to read this one.

Highest planned to read / have read ratio

These are the books that might be more useful to be read by more EAs, as many say they want to read them, but in proportion the fewest people have actually read them. Of course, there are good reasons why some of those books are read less: e.g. some of them, like The Rise and Fall of American Growth, Probability Theory or The Feynman Lectures on Physics, would take me enormously more time to read compared to, say, 1984 (which still took me, a relatively slow reader, something on the order of 10 to 20 hours). Also, the vast majority of the books in this list have only been read by one person, so a score of 11 can be interpreted as one person having read the book and 11 people wanting to read it. Additionally, as of now this list excludes books that have never been read by any EA, as the ratio would be infinite. For those books, see the next section. If we only allow books with at least 2 reads, we get this list:

Most commonly planned to read books that have not been read by anyone yet

I'll consider it a big success of this project if some people will have read Julia Galef's The Scout Mindset or Energy and Civilization next time I check.

Highest rated books

Here the highest rated of all books that were read at least 10 times. Not too many surprises here: EAs know what's good!

Lowest rated books

Here the same with the lowest rated books. 
Before any fandom feels too ostracized (speaking as somebody who absolutely loved the Eragon saga), I should info...
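The ratio table described in the post is easy to reproduce once the shelf counts are scraped. A minimal sketch of the scoring step, with hypothetical counts standing in for the real scraped data (the post links the actual csv file and Python script):

```python
# Hypothetical scraped counts: title -> (num_have_read, num_want_to_read).
# These numbers are made up for illustration, not from the post's data.
shelves = {
    "Superforecasting": (40, 25),
    "Probability Theory": (1, 11),
    "1984": (60, 10),
}

def ratio_list(shelves, min_reads=1):
    """Score each book by want-to-read / have-read, highest first.

    Books with fewer than min_reads reads are excluded; with zero
    reads the ratio would be infinite, as the post notes.
    """
    scored = [(want / read, title)
              for title, (read, want) in shelves.items()
              if read >= min_reads]
    return sorted(scored, reverse=True)

for score, title in ratio_list(shelves):
    print(f"{score:5.1f}  {title}")
```

With these toy counts, Probability Theory tops the list at 11.0, matching the post's reading of the score: one person has read it, eleven want to. Passing `min_reads=2` reproduces the "at least 2 reads" variant.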
The field of physics has brought tremendous advances to modern Bayesian statistics, especially inspiring the current algorithms enabling all of us to enjoy the Bayesian power on our own laptops. I have already had some physicists on the show, like Michael Betancourt in episode 6, but in my legendary ungratefulness I hadn't dedicated a whole episode to talking about physics yet. Well, that's now taken care of, thanks to JJ Ruby. Apart from having really good taste (he's indeed a fan of this very podcast), JJ is currently a postdoctoral fellow for the Center for Matter at Atomic Pressures at the University of Rochester, and will soon be starting as a Postdoctoral Scholar at Lawrence Livermore National Laboratory, a U.S. Department of Energy National Laboratory. JJ did his undergraduate work in Astrophysics and Planetary Science at Villanova University, outside of Philadelphia, and completed his master's degree and PhD in Physics at the University of Rochester, in New York. JJ studies high energy density physics and focuses on using Bayesian techniques to extract information from large scale physics experiments with highly integrated measurements. In his free time, he enjoys playing sports including baseball, basketball, and golf. Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ (https://bababrinkman.com/) ! Thank you to my Patrons for making this episode possible! 
Yusuke Saito, Avi Bryant, Ero Carrera, Brian Huey, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, Adam Bartonicek, William Benton, Alan O'Donnell, Mark Ormsby, Demetri Pananos, James Ahloy, Jon Berezowski, Robin Taylor, Thomas Wiecki, Chad Scherrer, Nathaniel Neitzke, Zwelithini Tunyiswa, Elea McDonnell Feit, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Joshua Duncan, Ian Moran, Paul Oreto, Colin Caprani, George Ho, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Raul Maldonado, Marcin Elantkowski, Tim Radtke, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Matthew McAnear, Michael Hankin and Cameron Smith. Visit https://www.patreon.com/learnbayesstats (https://www.patreon.com/learnbayesstats) to unlock exclusive Bayesian swag ;) Links from the show: Center for Matter at Atomic Pressures: https://www.rochester.edu/cmap/ (https://www.rochester.edu/cmap/) Laboratory for Laser Energetics: https://www.lle.rochester.edu/index.php/about-the-laboratory-for-laser-energetics/ (https://www.lle.rochester.edu/index.php/about-the-laboratory-for-laser-energetics/) Lawrence Livermore National Laboratory: https://www.llnl.gov/ (https://www.llnl.gov/) JJ's thesis -- Bayesian Inference of Fundamental Physics at Extreme Conditions: https://www.lle.rochester.edu/media/publications/documents/theses/Ruby.pdf (https://www.lle.rochester.edu/media/publications/documents/theses/Ruby.pdf) Recent Fusion Breakthrough: https://www.llnl.gov/news/national-ignition-facility-experiment-puts-researchers-threshold-fusion-ignition (https://www.llnl.gov/news/national-ignition-facility-experiment-puts-researchers-threshold-fusion-ignition) LBS #6, A principled Bayesian workflow, with Michael Betancourt: 
https://www.learnbayesstats.com/episode/6-a-principled-bayesian-workflow-with-michael-betancourt (https://www.learnbayesstats.com/episode/6-a-principled-bayesian-workflow-with-michael-betancourt) 20 Best Statistics Podcasts of 2021: https://welpmagazine.com/20-best-statistics-podcasts-of-2021/ (https://welpmagazine.com/20-best-statistics-podcasts-of-2021/) E.T. Jaynes, Probability Theory -- The Logic of Science: https://www.goodreads.com/book/show/151848.Probability_Theory... Support this podcast
EPISODE #8.6 - EXTRA. We give a fuller account of the history of the emergence of probability around 1650, involving Pascal, Fermat, and others. This serves as a supplement to the episode 8 series on statistical thinking. SEND US YOUR COMMENTS, QUESTIONS, AND SUGGESTIONS: Youtube Facebook LinkedIn. To go further in your reading: Oystein Ore, Pascal and the Invention of Probability Theory, 1960 (https://www.jstor.org/stable/2309286) - the main article on which the episode is based. Christian Huygens, De Ratiociniis in Ludo Aleae, 1657 [Translated by W. Browne] (https://math.dartmouth.edu/~doyle/docs/huygens/huygens.pdf) - Huygens's famous treatise, the first to use the term "expectation" for the mathematical expectation. Blaise Pascal, Traité du triangle arithmétique, 1665 - see also the Wikipedia page for several interesting details (https://fr.wikipedia.org/wiki/Triangle_de_Pascal). Ian Hacking, The Emergence of Probability, 1975 - a philosophical analysis of the ideas that led to the concepts of probability around 1650. Anders Hald, A History of Probability and Statistics and Their Applications Before 1750, 1990 - a technical analysis of the history of statistical ideas before 1750, in which the mathematical proofs and calculations are restated in modern notation.
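The Pascal-Fermat correspondence discussed in the episode revolved around the "problem of points": how to split the stakes of a game interrupted before anyone has won. Their answer, the seed of mathematical expectation, is to divide in proportion to each player's probability of winning from the current score. A minimal sketch of that calculation for a fair-coin game (the recursion is a modern restatement, of course, not historical source code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_win(a_needs, b_needs, p=0.5):
    """Probability that player A wins a race where A still needs
    a_needs points and B needs b_needs, each round going to A
    with probability p."""
    if a_needs == 0:
        return 1.0
    if b_needs == 0:
        return 0.0
    return (p * p_win(a_needs - 1, b_needs, p)
            + (1 - p) * p_win(a_needs, b_needs - 1, p))

# Classic case from the correspondence: a game to 3 points is
# interrupted with A leading 2-1, so A needs 1 point and B needs 2.
print(p_win(1, 2))  # 0.75: A should receive 3/4 of the stakes
```

This reproduces the famous 3:1 split for the interrupted game to three points, and the same recursion handles any score or any per-round probability p.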
Probability Theory excerpt, page 44
DeFi has been the big success story in crypto this year, but what does the future of finance actually look like, and how do we get there? Building a bridge between traditional and decentralized finance and seeking to unlock trillions of dollars in capital is AllianceBlock, an organization building the world's first globally compliant, decentralized capital market. Backed by three of Europe's most prestigious incubators (Station F, L39, and Kickstart Innovation in Zurich), AllianceBlock is led by a heavily experienced team of ex-JP Morgan, Barclays, BNP Paribas, and Goldman Sachs investment bankers and quants. Rachid Ajaja is co-founder and CEO of AllianceBlock. With a decade of experience in investment banking, Rachid worked as a Quantitative Analyst at the leading investment banks Barclays and BNP Paribas. He has a degree in Computer Science and Signal Processing from IMT Atlantique, and a Master's degree in Probability Theory, Stochastic Processes, and Quantitative Finance from Université Paris Diderot. The AllianceBlock Protocol is a decentralised, blockchain-agnostic layer 2 that automates the process of converting any digital or crypto asset into a bankable product, and the company is on the path to disrupt the $100 trillion securities market with its state-of-the-art and globally compliant decentralized capital market.
70% NFL Free Picks - ESBC Weekly Against the Spread - Everygame Follow The Money, Week 11 (unprecedented 10 consecutive weeks of profit). #ESBC Podcast finished #NFLWeek8 at 152-96 = 61% #ats, 10 consecutive weeks of profit (breakeven is 52.5%); 92-58 = 61% ATS in #CollegeFootballbetting (breakeven is 52.5%). Josh Abner, MBA, makes you money with picks at a high percentage, but also teaches the "how" that is usually omitted, to help you win consistently. @JPL92 Just coming back from the #SilveradoFire: profit from games was both a blessing and a distraction. We give you winners at an extraordinary rate, for free, against the spread. Also, since we have an MBA and run 3 successful businesses, we teach Decision Science, Probability Theory, and how to make business decisions when you do not have all the information. #Edutainment linktr.ee/esbcpodcastnetwork Link To Post "Top Ten Rules Of Betting": ecosystemsbusinessconcierge.com/2019/08/2…sketball/
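As an aside on the breakeven figure quoted above: at standard -110 pricing (risk 110 to win 100), the breakeven win rate is 110/210 ≈ 52.4%, which is presumably what the show rounds to 52.5%; the exact threshold depends on the vig. A minimal sketch of that arithmetic, plus the quoted record:

```python
def breakeven_win_rate(risk=110, win=100):
    """Win probability at which expected profit per bet is zero:
    p * win - (1 - p) * risk = 0  =>  p = risk / (risk + win)."""
    return risk / (risk + win)

print(round(breakeven_win_rate() * 100, 2))  # 52.38 (% at standard -110)

# The quoted record, 152-96 against the spread:
print(round(152 / (152 + 96) * 100, 1))      # 61.3 (%)
```

At a true 61% win rate against -110 lines, the expected profit per 110 risked would be 0.61 × 100 − 0.39 × 110 = 18.1 units, which is why anything meaningfully above breakeven is notable.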
Show Notes
(2:37) Francesca discussed her educational background in Italy, studying Economics and Institutional Studies at LUISS Guido Carli University for her Master’s and then Economics and Technology Innovation at Sant’Anna University for her Ph.D. She also mentioned her transition to studying in the US at Harvard Business School.
(7:43) Francesca shared the anecdote behind going to HBS to pursue a Postdoc Research Fellowship in Economics. She also revealed the differences in the educational approaches between Italy and the United States.
(15:15) During her Postdoc, Francesca worked on multiple patent data-driven projects to investigate and measure the impact of external knowledge networks on companies’ competitiveness and innovation. She discussed a specific project that analyzed biotech innovation in Boston, San Diego, and San Francisco clusters using social media and citation data.
(24:26) Francesca talked about her decision to join Microsoft as a data scientist in its Cloud and Enterprise division back in 2014, where she first worked on projects for clients from the energy and finance sectors.
(30:00) Francesca discussed the two types of customers who seek Microsoft’s cloud solutions to solve their data problems and explained the learning curves she went through while interacting with them.
(36:11) Francesca unpacked the Healthy Data Science Organization Framework, a portfolio of methodologies, technologies, and resources that assist organizations in becoming more data-driven (read her InfoQ article “The Data Science Mindset: 6 Principles to Build Healthy Data-Driven Organizations”).
(45:31) Francesca shared the challenges of building end-to-end machine learning applications that she has observed from Microsoft Azure AI’s clients.
(49:56) Francesca walked through a typical day in her current leadership role at Microsoft’s Cloud AI Advocates team.
(53:44) Francesca discussed the different components in a typical Azure deployment workflow (read her post “Azure Machine Learning Deployment Workflow”).
(58:44) Francesca explained Automated Machine Learning, a breakthrough from the Microsoft Research division that is essentially a recommender system for machine learning pipelines.
(01:03:50) Francesca went over model interpretability features within Azure AI (as part of the InterpretML package) and touched on Microsoft’s Responsible AI principles.
(01:08:01) Francesca explained the differences between model fairness and model interpretability at both training time and inference time (check out the Fairlearn package).
(01:12:11) Francesca is currently writing a book with Wiley called “Machine Learning for Time Series Forecasting with Python.”
(01:14:39) Francesca shared her advice for undergraduate students looking to get into the field, judging from her experience mentoring Ph.D. and Postdoc students at institutions such as Harvard, MIT, and Columbia.
(01:17:27) Francesca reasoned how her educational backgrounds in economics and operations management contribute to her success in a data science career.
(01:20:09) Closing segment.
Her Contact Info: Twitter, Medium, LinkedIn
Her Recommended Resources:
- People To Follow: Hilary Mason, Andrew Ng, Hannah Wallach
- Book To Read: An Introduction to Probability Theory and Its Applications (by William Feller)
- A Developer’s Introduction to Data Science
- Video series on Data Science and Machine Learning on Azure (and its GitHub repo)
- Azure Machine Learning: Azure Machine Learning Documentation, Azure Machine Learning Service, The Data Science Lifecycle, Algorithm Cheat Sheet, How to Select Machine Learning Algorithms, Azure Machine Learning Designer
- Responsible Machine Learning: Model Interpretability, InterpretML Repo, InterpretML Toolkit, InterpretML Documentation, Fairlearn Service, Fairlearn Documentation
- Automated Machine Learning: Automated Machine Learning, AutoML Featurization, AutoMLConfig Class
The guest of this episode is Dr. Luisa Andreis, a postdoctoral researcher at the Weierstrass Institute for Applied Analysis and Stochastics (WIAS). In this episode, we talk about the importance of mutual support between colleagues in science and whether it's possible to organize a math conference with an all-female speaker program.
Nassim Nicholas Taleb talks about the pandemic with EconTalk host Russ Roberts. Topics discussed include how to handle the rest of this pandemic and the next one, the power of the mask, geronticide, and soul in the game.
Tommy discusses probability theory.
Liviu Nicolaescu is a Professor and the Director of Undergraduate Honors Mathematics at the University of Notre Dame. He is the co-organizer of the Felix Klein Seminar on Geometry and serves on the editorial board of the Journal of Gokova Geometry and Topology. He is currently ranked 61st on the all-time reputation boards of MathOverflow and received his Ph.D. from Michigan State University under the supervision of Thomas H. Parker. Liviu is a geometer "at large" and a probabilist by accident. His field of expertise is global analysis, with emphasis on the geometric applications of elliptic partial differential equations arising from gauge theory, symplectic geometry, and index theory for Dirac operators. He is the author of several books, including "An Invitation to Morse Theory," "The Reidemeister Torsion of 3-Manifolds," "Notes on Seiberg-Witten Theory," and, most recently, "Notes on Elementary Probability Theory," which we discuss at length in the podcast! His website can be found here: https://www3.nd.edu/~lnicolae/ We'd like to thank Liviu for being on our show "Meet a Mathematician" and for sharing his stories and perspective with us! www.sensemakesmath.com TWITTER: @SenseMakesMath PATREON: https://www.patreon.com/sensemakesmath FACEBOOK: https://www.facebook.com/SenseMakesMath STORE: https://sensemakesmath.storenvy.com Support the show (https://www.patreon.com/sensemakesmath)
On this episode, we explore Probability Theory through the story of Blaise Pascal. --- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app --- Send in a voice message: https://anchor.fm/mathematically-speaking/message Support this podcast: https://anchor.fm/mathematically-speaking/support
This episode will appeal to One Direction fans (specifically of their album "Four"), Bruce Springsteen fans, Journey fans, Killers fans, Probability Theory fans, and everyone else in the world!
This lecture is a review of the probability theory needed for the course, including random variables, probability distributions, and the Central Limit Theorem.
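The Central Limit Theorem covered in the lecture can be illustrated with a short simulation (a generic sketch, not the course's own material): averages of many independent draws cluster tightly around the true mean, with a spread that shrinks like 1/√n and a shape that approaches a normal curve.

```python
import random
import statistics

random.seed(0)

# Average n = 100 uniform(0, 1) draws, repeated 2000 times.
# Each draw has mean 0.5 and variance 1/12, so the CLT predicts the
# sample means are approximately normal with mean 0.5 and standard
# deviation sqrt(1/12) / sqrt(100) ≈ 0.029.
means = [statistics.mean(random.random() for _ in range(100))
         for _ in range(2000)]

print(round(statistics.mean(means), 2))   # ≈ 0.5
print(round(statistics.stdev(means), 3))  # ≈ 0.029
```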
Episode Summary: This episode focuses on how an intelligent system can represent beliefs about its environment using fuzzy measure theory. Probability theory is introduced as a special case of fuzzy measure theory which is consistent with classical laws of logical inference.
In his book "Struck by Lightning: The Curious World Of Probabilities", Professor Jeffrey Rosenthal uses his math skills to explain what probability theory is and how it works. Terrorists, car crashes, flu pandemics; Rosenthal says we're afraid of the wrong things and reveals what we should really worry about. (Originally aired January 2006)
Faculty of Physics - Digital Theses of LMU - Part 03/05
In this work a new method for the detection of faint, both point-like and extended, astronomical objects based on the integrated treatment of source and background signals is described. This technique is applied to public data obtained by imaging methods of high-energy observational astronomy in the X-ray spectral regime. These data are usually employed to address current astrophysical problems, e.g. in the fields of stellar and galaxy evolution and the large-scale structure of the universe. The typical problems encountered during the analysis of these data are: spatially varying cosmic background, large variety of source morphologies and intensities, data incompleteness, steep gradients in the data, and few photon counts per pixel. These problems are addressed with the developed technique. Previous methods extensively employed for the analysis of these data are, e.g., the sliding window and the wavelet based techniques. Both methods are known to suffer from: describing large variations in the background, detection of faint and extended sources and sources with complex morphologies. Large systematic errors in object photometry and loss of faint sources may occur with these techniques. The developed algorithm is based on Bayesian probability theory, which is a consistent probabilistic tool to solve an inverse problem for a given state of information. The information is given by a parameterized model for the background and prior information about source intensity distributions quantified by probability distributions. For the background estimation, the image data are not censored. The background rate is described by a two-dimensional thin-plate spline function. The background model is given by the product of the background rate and the exposure time which accounts for the variations of the integration time. 
Therefore, the background as well as effects like vignetting, variations of detector quantum efficiency, and strong gradients in the exposure time are handled properly, which results in improved detections with respect to previous methods. Source probabilities are provided for individual pixels as well as for correlations of neighboring pixels in a multi-resolution analysis. Consequently, the technique is able to detect point-like and extended sources and their complex morphologies. Furthermore, images of different spectral bands can be combined probabilistically to further increase the resolution in crowded regions. The developed method characterizes all detected sources in terms of position, number of source counts, and shape, including uncertainties. The comparison with previous techniques shows that the developed method allows for an improved determination of background and source parameters. The method is applied to data obtained by the ROSAT and Chandra X-ray observatories, where in particular the detection of faint and extended sources is improved with respect to previous analyses. This led to the discovery of new galaxy clusters and quasars in the X-ray band, which were confirmed in the optical regime using additional observational data. The new technique developed in this work is particularly suited to the identification of objects featuring extended emission, like clusters of galaxies.
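The photon-count reasoning underlying such source detection can be sketched in a few lines (a deliberately simplified, hypothetical illustration, not the dissertation's actual algorithm, which uses a full Bayesian model with a spline background): a pixel is a source candidate when pure background is very unlikely to produce as many counts as observed.

```python
import math

def poisson_tail(n, mu):
    """P(N >= n) for N ~ Poisson(mu): the probability that pure
    background with expected rate mu yields at least n counts."""
    # Sum the PMF over 0..n-1 and subtract from 1.
    cdf = sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n))
    return 1.0 - cdf

# A pixel with 12 observed counts over an expected background of 3:
p = poisson_tail(12, 3.0)
# p is well below 1e-3, so a background fluctuation is very unlikely
# and the pixel is flagged as a source candidate.
```

In few-count X-ray images this Poisson treatment matters, since Gaussian approximations break down at a handful of photons per pixel.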
In recent years, probability theory has come to play an increasingly important role in computing. Professor Sahami gives examples of how probability underlies a variety of applications on the Internet including web search and email spam.(October 21, 2010)
Faculty of Mathematics, Computer Science and Statistics - Digital Theses of LMU - Part 01/02
The global information space provided by the World Wide Web has dramatically changed the way knowledge is shared all over the world. To make this unbelievably huge information space accessible, search engines index the uploaded contents and provide efficient algorithmic machinery for ranking the importance of documents with respect to an input query. All major search engines such as Google, Yahoo, or Bing are keyword-based, which is indisputably a very powerful tool for accessing information needs centered around documents. However, this unstructured, document-oriented paradigm of the World Wide Web has serious drawbacks when searching for specific knowledge about real-world entities. When asked for advanced facts about entities, today's search engines are not very good at providing accurate answers. Hand-built knowledge bases such as Wikipedia or its structured counterpart DBpedia are excellent sources that provide common facts. However, these knowledge bases are far from complete, and most knowledge still lies buried in unstructured documents. Statistical machine learning methods have great potential to help bridge the gap between text and knowledge by (semi-)automatically transforming the unstructured representation of today's World Wide Web into a more structured representation. This thesis is devoted to reducing this gap with Probabilistic Graphical Models. Probabilistic Graphical Models play a crucial role in modern pattern recognition, as they merge two important fields of applied mathematics: Graph Theory and Probability Theory. The first part of the thesis presents a novel system called Text2SemRel that is able to (semi-)automatically construct knowledge bases from textual document collections. The resulting knowledge base consists of facts centered around entities and their relations.
An essential part of the system is a novel algorithm for extracting relations between entity mentions that is based on Conditional Random Fields, which are undirected Probabilistic Graphical Models. In the second part of the thesis, we use the power of directed Probabilistic Graphical Models to solve important knowledge discovery tasks in semantically annotated large document collections. In particular, we present extensions of the Latent Dirichlet Allocation framework that are able to learn, in an unsupervised way, the statistical semantic dependencies between unstructured representations such as documents and their semantic annotations. Semantic annotations of documents might refer to concepts originating from a thesaurus or ontology, but also to user-generated informal tags in social tagging systems. These forms of annotations represent a first step towards the conversion of the World Wide Web to a more structured form. In the last part of the thesis, we demonstrate the large-scale applicability of the proposed fact extraction system Text2SemRel. In particular, we extract semantic relations between genes and diseases from a large biomedical textual repository. The resulting knowledge base contains far more potential disease genes than are currently stored in curated databases. Thus, the proposed system is able to unlock knowledge currently buried in the literature. The literature-derived human gene-disease network is the subject of further analysis with respect to existing curated state-of-the-art databases. We analyze the derived knowledge base quantitatively by comparing it with several curated databases with regard to database size and properties of known disease genes, among other things. Our experimental analysis shows that the facts extracted from the literature are of high quality.
Statistics and mathematics underlie the theories of finance. Probability Theory and various distribution types are important to understanding finance. Risk management, for instance, depends on tools such as variance, standard deviation, correlation, and regression analysis. Financial analysis methods such as present values and valuing streams of payments are fundamental to understanding the time value of money and have been in practice for centuries.
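The present-value idea mentioned above can be made concrete in a few lines (a generic illustration with made-up cash flows, not material from the episode): each future payment is discounted back to today by the compounded interest rate.

```python
def present_value(cashflows, rate):
    """Discount a stream of end-of-period payments back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, start=1))

# $100 at the end of each of the next 3 years, discounted at 5% per year:
pv = present_value([100, 100, 100], 0.05)
print(round(pv, 2))  # → 272.32
```

The same discounting logic underlies bond pricing, annuity valuation, and net-present-value analysis of investments.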
Richard T. Durrett was elected to the National Academy of Sciences in 2007 for his work in applied mathematical sciences. Durrett's research in probability theory concerns problems that arise from ecology and genetics. He has developed mathematical models to study the evolution of microsatellites, impacts of selective sweeps on genetic variation, genome rearrangement, gene duplication, and gene regulation.
Chance occurrences often dramatically affect our daily lives. But how can we evaluate randomness and weigh its influence appropriately? On this program, Prof. Jeffrey S. Rosenthal discussed probability theory.