Today's revolutionary idea is something a bit different: David talks to statistician David Spiegelhalter about how an eighteenth-century theory of probability emerged from relative obscurity in the twentieth century to reconfigure our understanding of the relationship between past, present and future. What was Thomas Bayes's original idea about doing probability in reverse: from effect to cause? What happened when this way of thinking passed through the vortex of the French Revolution? How has it come to lie behind recent innovations in political polling, AI, self-driving cars, medical research and so much more? Why does it remain controversial to this day?

The latest edition of our free fortnightly newsletter is available: to get it in your inbox, sign up now at https://www.ppfideas.com/newsletter

Next time: 1848: The Liberal Revolution w/ Chris Clark

Past Present Future is part of the Airwave Podcast Network. Learn more about your ad choices. Visit megaphone.fm/adchoices
Everything Is Predictable: How Bayes' Remarkable Theorem Explains the World is a book about an 18th-century mathematical rule for working out probability, which shapes many aspects of our modern world. Written by science journalist Tom Chivers, the book has made it onto the shortlist for the Royal Society Trivedi Science Book Prize. In the lead-up to the winner's announcement, New Scientist books editor Alison Flood meets all six of the shortlisted authors. In this conversation, Tom explores the life of Thomas Bayes, the man behind the theorem, who had no clue his discovery would have such sweeping implications for humanity. He explains the theorem's many uses, both in practical settings like disease diagnosis and in its ability to explain rational thought and the human brain. And he digs into some of the controversy and surprising conflict that has surrounded Bayes' theorem over the years. The winner of the Royal Society Trivedi Science Book Prize will be announced on 24 October. You can view all of the shortlisted entries here: https://royalsociety.org/medals-and-prizes/science-book-prize/ To read about subjects like this and much more, visit https://www.newscientist.com/ Hosted on Acast. See acast.com/privacy for more information.
Tom Chivers is a journalist who writes a lot about science and applied statistics. We talk about his new book on Bayesian statistics, the biography of Thomas Bayes, the history of probability theory, how Bayes can help with the replication crisis, how Tom became a journalist, and much more.

BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith.

Support the show: https://geni.us/bjks-patreon

Timestamps
0:00:00: Tom's book about Bayes & Bayesian statistics relates to many of my previous episodes and much of my own research
0:03:12: A brief biography of Thomas Bayes (about whom very little is known)
0:11:00: The history of probability theory
0:36:23: Bayesian songs
0:43:17: Bayes & the replication crisis
0:57:27: How Tom got into science journalism
1:08:32: A book or paper more people should read
1:10:05: Something Tom wishes he'd learnt sooner
1:14:36: Advice for PhD students/postdocs/people in a transition period

Podcast links
Website: https://geni.us/bjks-pod
Twitter: https://geni.us/bjks-pod-twt

Tom's links
Website: https://geni.us/chivers-web
Twitter: https://geni.us/chivers-twt
Podcast: https://geni.us/chivers-pod

Ben's links
Website: https://geni.us/bjks-web
Google Scholar: https://geni.us/bjks-scholar
Twitter: https://geni.us/bjks-twt

References and links
Episode with Stuart Ritchie: https://geni.us/bjks-ritchie
Scott Alexander: https://www.astralcodexten.com/
Bayes (1731). Divine benevolence, or an attempt to prove that the principal end of the divine providence and government is the happiness of his creatures. Being an answer to a pamphlet entitled Divine Rectitude or an inquiry concerning the moral perfections of the deity with a refutation of the notions therein advanced concerning beauty and order, the reason of punishment and the necessity of a state of trial antecedent to perfect happiness.
Bayes (1763). An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London.
Bellhouse (2004). The Reverend Thomas Bayes, FRS: a biography to celebrate the tercentenary of his birth. Project Euclid.
Bem (2011). Feeling the future: experimental evidence for anomalous retroactive influences on cognition and affect. Journal of Personality and Social Psychology.
Chivers (2024). Everything is Predictable: How Bayesian Statistics Explain Our World.
Chivers & Chivers (2021). How to Read Numbers: A Guide to Statistics in the News (and Knowing When to Trust Them).
Chivers (2019). The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future.
Clarke [not Black, as Tom said] (2020). Piranesi.
Goldacre (2009). Bad Science.
Goldacre (2014). Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients.
Simmons, Nelson & Simonsohn (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science.
Today our guest Ivan Phillips methodically explains what Bayesianism is and is not. Along the way we discuss the validity of critiques that critical rationalists have made of the worldview derived from Thomas Bayes's 1763 theorem. Ivan is a Bayesian who is very familiar with Karl Popper's writings and even admires Popper's epistemology. Ivan makes his case that Bayesian epistemology is the correct way to reason and that Karl Popper misunderstood some aspects of how to properly apply probability theory to reasoning and inference (due in part to those theories being less well developed in Popper's time). This is a video podcast if you watch it on Spotify, but it should be consumable as just audio, though I found Ivan's slides quite useful. This is by far the best explanation of Bayesianism that I've ever seen, and it does a great job of situating it in a way that makes sense to a critical rationalist like myself. But it still didn't convince me to be a Bayesian. ;) --- Support this podcast: https://podcasters.spotify.com/pod/show/four-strands/support
Back in the 1700s, in a spa town outside of London, Thomas Bayes, a Presbyterian minister and amateur mathematician, invented a formula that lets you figure out how likely something is to happen based on what you already know. It changed the world. Today, pollsters use it to forecast election results and bookies to predict Super Bowl scores. For neuroscientists, it explains how our brains work; for computer scientists, it's the principle behind artificial intelligence. In this episode, we explore the modern-day applications of this game-changing theorem with the help of Tom Chivers, author of the new book "Everything Is Predictable: How Bayesian Statistics Explain Our World."
In this episode, Xavier Bonilla has a dialogue with Tom Chivers about Bayesian probability and the impact Bayesian priors have on ourselves. They discuss Bayesian priors, the life of Thomas Bayes, subjective aspects of Bayes' theorem, and the problematic legacies of statistical figures such as Galton, Pearson, and Fisher. They talk about the replication crisis, p-hacking, where priors come from, AI, Friston's free energy principle, and Bayesian priors in our world today. Tom Chivers is a science writer. He does freelance science writing and also writes for Semafor.com's daily Flagship email. Before joining Semafor, he was a science editor at UnHerd, science writer for BuzzFeed UK, and features writer for the Telegraph. He is the author of several books including the most recent, Everything Is Predictable: How Bayesian Statistics Explain Our World. Website: https://tomchivers.com/ Get full access to Converging Dialogues at convergingdialogues.substack.com/subscribe
Dr. Arturo Casadevall from Johns Hopkins School of Public Health talks about a potential fungal epidemic in his new book, "What if Fungi Win?" Then, what if there were one overarching theory that could help explain much of our modern-day daily lives? Science journalist Tom Chivers explores the concept of the predictability of everything, based on a theorem developed by Thomas Bayes, an 18th-century Presbyterian minister and statistician.
At its simplest, Bayes's theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event. But in Everything Is Predictable, Tom Chivers lays out how it affects every aspect of our lives. He explains why highly accurate screening tests can lead to false positives, and how a failure to account for this in court has put innocent people in jail. A cornerstone of rational thought, Bayes's theorem is, many argue, a description of almost everything. But who was the man who lent his name to this theorem? How did an 18th-century Presbyterian minister and amateur mathematician uncover a theorem that would affect fields as diverse as medicine, law, and artificial intelligence? Fusing biography and intellectual history, Everything Is Predictable is an entertaining tour of Bayes's theorem and its impact on modern life, showing how a single compelling idea can have far-reaching consequences. Tom Chivers is an author and the award-winning science writer for Semafor. Previously he was the science editor at UnHerd.com and BuzzFeed UK. His writing has appeared in The Times (London), The Guardian, New Scientist, Wired, CNN, and more. He was awarded the Royal Statistical Society's "Statistical Excellence in Journalism" award in 2018 and 2020, and was declared science writer of the year by the Association of British Science Writers in 2021. His books include The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future, and How to Read Numbers: A Guide to Stats in the News (and Knowing When to Trust Them). His new book is Everything Is Predictable: How Bayesian Statistics Explain Our World. Shermer and Chivers discuss: Thomas Bayes, his equation, and the problem it solves • Bayesian decision theory vs. statistical decision theory • Popperian falsification vs. 
Bayesian estimation • Sagan's ECREE principle • Bayesian epistemology and family resemblance • paradox of the heap • Reality as controlled hallucination • human irrationality • superforecasting • mystical experiences and religious truths • Replication Crisis in science • Statistical Detection Theory and Signal Detection Theory • Medical diagnosis problem and why most people get it wrong.
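The medical diagnosis problem mentioned above can be made concrete with a few lines of arithmetic. This is a minimal sketch with made-up numbers (a rare condition and a fairly accurate test, not figures from the book): even a 99%-sensitive test produces mostly false positives when the condition itself is rare.

```python
# Hypothetical numbers: 1-in-1000 prevalence, 99% sensitivity, 95% specificity.
prevalence = 0.001
sensitivity = 0.99          # P(positive | disease)
false_positive_rate = 0.05  # 1 - specificity, P(positive | no disease)

# Bayes' theorem: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"P(disease | positive test) = {posterior:.3f}")  # ≈ 0.019
```

Despite the test's accuracy, a positive result here means only about a 2% chance of disease, because the 999 healthy people per 1000 generate far more false positives than the 1 sick person generates true positives.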
In this week's episode, we talk a bit about the life of Thomas Bayes and his contributions to statistics.
The HBS hosts chat with Justin Joque about how we might get Thomas Bayes' robot boot off our necks. Why does Netflix ask you to pick what movies you like when you first sign on, in order to recommend other movies and shows to you? How does Google know what search results are most relevant? Why does it seem as if every tech company wants to collect as much data as they can get from you? It turns out that all of this is because of a shift in the theoretical and mathematical approach to probability. Bayesian statistics, the primary model used by machine learning systems, currently dominates almost everything about our lives: investing, sales at stores, political predictions, and, increasingly, what we think we know about the world. How did the "Bayesian revolution" come about? And how did it come to dominate? And, perhaps more importantly, is this the best mathematical/statistical model available to us? Or is there another, more "revolutionary," mathematics out there? This week we are joined by Justin Joque, visualization librarian at the University of Michigan, who writes at the intersection of philosophy and technology. He is the author of Deconstruction Machines: Writing in the Age of Cyberwar and, most recently, Revolutionary Mathematics: Artificial Intelligence, Statistics and the Logic of Capitalism. Full episode notes available at this link: http://hotelbarpodcast.com/podcast/episode-78-revolutionary-mathematics-with-justin-joque If you enjoy Hotel Bar Sessions podcast, please be sure to subscribe and submit a rating/review! Follow us on Twitter @hotelbarpodcast, on Facebook, and subscribe to our YouTube channel! You can also help keep this podcast going by supporting us financially at patreon.com/hotelbarsessions.
Mathematics has always been considered a difficult subject. Many people even detest it and wonder why they should study topics like arithmetic, integrals and derivatives. How bold ignorance is! Learning the origin of many mathematical principles will probably make you admire this science, for example the origin of the calculus of probabilities, inspired by mathematicians who were passionate about games and gambling. The origins of the calculus of probability go back to the beginning of the 17th century with the work of mathematicians and philosophers such as Blaise Pascal and Pierre de Fermat. These two French thinkers made important contributions to the study of games of chance and laid the foundations for the development of the field of probability theory. In 1654, Pascal and Fermat exchanged a series of letters known as the "correspondence on games of chance." In these letters, they discussed problems related to betting and games of chance, and raised questions about how to divide the stakes when a game is interrupted before a clear victory has been reached. Through their correspondence, Pascal and Fermat developed methods for calculating probabilities and determining mathematical expectations in games of chance. In particular, Fermat developed the concept of mathematical expectation, the expected amount of gain or loss in a game of chance. He also formulated the fundamental principle of probability theory known as the principle of equiprobability, which states that if all possible outcomes of an experiment are equally likely, then the probability of an event is the ratio of the number of favorable outcomes to the total number of possible outcomes. These pioneering ideas laid the foundations for the later development of probability theory.

Over the following centuries, mathematicians such as Jacob Bernoulli, Thomas Bayes and Pierre-Simon Laplace, among others, made important advances in the field, developing more sophisticated methods and theorems for calculating and understanding probabilities. In the 20th century, probability theory was consolidated as a formal branch of mathematics with the work of mathematicians such as Andrei Kolmogorov, who established the axiomatic foundations of modern probability theory. Today, probability theory is applied in a wide range of disciplines, including statistics, economics, physics, the social sciences, engineering and many other areas where decisions must be made under uncertainty and randomness. Published at luisbermejo.com, direct link: https://luisbermejo.com/anticitera-con-nombre-de-podcast-04x48/ You can find me, comment, send your message or ask questions at: WhatsApp: +34 613031122 Paypal: https://paypal.me/Bermejo Bizum: +34613031122 Web: https://luisbermejo.com Facebook: https://www.facebook.com/ConNombredePodcast/ Twitter: https://twitter.com/LuisBermejo Instagram: https://www.instagram.com/luisbermejo/ Canal Telegram: https://t.me/ConNombredePodcast Grupo Signal: https://signal.group/#CjQKIA_PNdKc3-SAGWKoJZjqR3RwMQ7uzo0bW2eBB4QDtJVZEhBc504fpeK4tyETyuwFVAUI Grupo Whatsapp: https://chat.whatsapp.com/FQadHkgRn00BzSbZzhNviT
What?! Is there more than one kind of statistics? Indeed, statistics has several schools. One of those schools is Bayesian statistics, named after Thomas Bayes, the originator of Bayes' rule. In this episode I explain what Bayesian statistics is; in the next episode we will see how it differs from the familiar old "frequentist" statistics. Thomas Bayes had a good friend, as we'll also learn.
Veteran security practitioner Rick Howard shares how Alan Turing's ideas and Thomas Bayes' theorem hold the key to how organizations should forecast risk. Most organizations default to heat maps built on a low/medium/high model, but those aren't reliable. What if you're better off providing risk metrics that offer ballpark answers rather than spurious precision? Is it possible to forecast complex things without a lot of data?
In probability theory and statistics, Bayes' theorem, named after Thomas Bayes, describes the probability of an event, based on prior knowledge of conditions that might be related to the event.
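Stated as code, the theorem is a one-liner. A minimal sketch; the weather numbers are invented for illustration:

```python
def bayes(prior, likelihood, marginal):
    """Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / marginal

# Illustrative example: P(rain) = 0.2, P(clouds | rain) = 0.9, P(clouds) = 0.4.
posterior = bayes(prior=0.2, likelihood=0.9, marginal=0.4)
print(f"P(rain | clouds) = {posterior:.2f}")  # 0.45
```

Observing clouds more than doubles the probability of rain here, because clouds are far more likely under the "rain" hypothesis than overall.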
Throughout the program, in a round-table format, we talk about Thomas Bayes, one of the most enigmatic figures in the history of science. We begin with how little is known about his life and the various enigmas surrounding his existence, and then focus on the theorem that bears his name. A theorem that was long forgotten, but which today is one of the most powerful tools for the study of many scientific disciplines. A theorem, moreover, that goes far beyond calculating a probability, since behind it lies the true essence of human learning. We also analyze how the theorem is taught today and how its teaching could be improved. All of this in the company of Anabel Forte, Pablo Beltrán and Víctor Marco. Listen to the full episode in the iVoox app, or discover the whole iVoox Originals catalogue.
This week's recommendations on iVoox.com, week of 5 to 11 July 2021
Bayes' Theorem: Applying It to Emergency Management

Mental models help us make decisions under stress. They give us a starting point; think of how we teach triage: "start where you stand." This applies to decision-making during a disaster or crisis as well: start with the information that you have. We can make adjustments as more or better information is obtained. This brings me to the concepts behind Bayes' theorem.

Thomas Bayes was an English minister in the 18th century whose most famous work was "An Essay towards Solving a Problem in the Doctrine of Chances." The essay did not contain the theorem as we now know it, but it had the seeds of the idea: it looked at how to adjust our estimates of probabilities when encountering new data that bear on a situation. Later development by the French scholar Pierre-Simon Laplace and others helped codify the theorem and develop it into a useful tool for thinking.

You do not need to be great at math to use this concept. I still need to take off my shoes to count to 19. More critical is your ability and willingness to assign probabilities of truth and accuracy to anything you think you know, and then to update those probabilities when new information comes in.

We talk about making decisions based on the new information that has come in; however, we often ignore prior information, simply called "priors" in Bayesian-speak. We can blame this habit in part on the availability heuristic: we focus on what's readily available. In this case, we focus on the newest information, and the bigger picture gets lost. We fail to adjust the probability of old information to reflect what we have learned.

The big idea behind Bayes' theorem is that we must continuously update our probability estimates as new information arrives. Let's take a hurricane as our crisis. We have all seen the way it tracks and can predict that it may make landfall at a certain time and location. We can use past storms as predictors of how this hurricane may act and the damage it could cause. New information may come to light on the behavior of the storm, but this should not necessarily negate the previous experience and information you have on hand. In his book The Signal and the Noise, Nate Silver gives a contemporary example, reminding us that new information is often most useful when we put it in the larger context of what we already know:

Bayes' theorem is an important reality check on our efforts to forecast the future. How, for instance, should we reconcile a large body of theory and evidence predicting global warming with the fact that there has been no warming trend over the last decade or so? Skeptics react with glee, while true believers dismiss the new information. A better response is to use Bayes' theorem: the lack of recent warming is evidence against recent global warming predictions, but it is weak evidence. This is because there is enough variability in global temperatures to make such an outcome unsurprising. The new information should reduce our confidence in our models of global warming, but only a little.

The same approach can be used in anything from an economic forecast to a hand of poker, and while Bayes' theorem can be a formal affair, Bayesian reasoning also works as a rule of thumb. We tend to either dismiss new evidence or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way. So much of making better decisions hinges on dealing with uncertainty. The most common thing holding people back from the right answer is instinctively rejecting new information, or not integrating the old. To better serve our communities, have a mental model, work with it, and use it to make better decisions.
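The "continuously update" idea can be sketched numerically. All the probabilities below are invented for illustration; the point is the mechanism, where each new observation shifts the belief according to how likely it is under each hypothesis, without throwing away the prior:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """One Bayesian update: return P(H | evidence) from P(H) and the likelihoods."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1 - prior))

# Hypothetical forecast: H = "the hurricane makes landfall at city X".
belief = 0.30  # prior, e.g. from historical storm tracks

# Each observation: (P(obs | landfall), P(obs | no landfall)) -- made-up values.
observations = [(0.8, 0.4), (0.7, 0.5), (0.9, 0.2)]
for like_true, like_false in observations:
    belief = update(belief, like_true, like_false)
    print(f"updated belief: {belief:.2f}")  # 0.46, then 0.55, then 0.84
```

No single observation is decisive, but three pieces of moderately favorable evidence move the estimate from 30% to roughly 84%, which is exactly the weighing of old hypothesis against new evidence described above.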
Podcasts

The Todd De Voe Show: School Shootings and Emergency Management
The K-12 School Shooting Database research project is a widely inclusive database that documents every instance a gun is brandished, is fired, or a bullet hits school property for any reason, regardless of the number of victims, time, or day of the week. The School Shooting Database Project is conducted as part of the Advanced Thinking in Homeland Security (HSx) program at the Naval Postgraduate School's Center for Homeland Defense and Security (CHDS).

Prepare Respond Recover: Saving Lives Through Training
Due to the uptick of mass shootings over the years, many professions outside of law enforcement are now being trained in active shooter response programs. But have you ever thought about who teaches the law enforcement officers themselves? Join prepare.respond.recover. host Todd De Voe as he talks with Erik Franco, the CEO of "High Speed Tac Med", one of the nation's most sought-after active shooter training programs for law enforcement and firefighting. Learn about "Run, Hide, Fight" and how this training is preparing law enforcement officers to tackle an active shooter situation as quickly and efficiently as possible.
HSTM - https://highspeedtacmed.com/
If you would like to learn more about the Natural Disaster & Emergency Management (NDEM) Expo, please visit us on the web - https://www.ndemevent.com

Business Continuity Today: Training for Active Shooters Beyond the Response
Active shooting scenarios focus on the police response, and the larger emergency management role during these complex incidents is often overlooked. However, they are multi-week, multi-jurisdictional incidents requiring command & control, interoperable communications, and a host of other services.

Supporters
https://www.disastertech.com/
https://titanhst.com/
https://www.ndemevent.com/en-us/show-info.html

Get full access to The Emergency Management Network at emnetwork.substack.com/subscribe
This is Variância, a monthly spin-off of the Intervalo de Confiança podcast. The program is shorter and aims to bring news or curiosities about some topic related to science and data journalism, or about some specific piece of data. Being shorter, both its editing and its content are simpler and more direct.

In today's episode, Igor Alcantara talks about a statistics different from the "traditional" kind, in which the probability of an event is updated with each new piece of knowledge added. Do you remember the game "Porta dos Desesperados", a 1990s hit hosted by the eternal child Sérgio Malandro? If you were playing that game, what would your decision be: switch doors or stay with the same one? Does it make any difference?

And how about a Formula 1 problem? In a hypothetical race, what are the chances of Lewis Hamilton beating Max Verstappen? What if the race is in the rain? In the rain on a street circuit? What if a Safety Car comes out on the penultimate lap?

These kinds of problems are not easily solved with classical statistics, but the legacy of Thomas Bayes left us an incredible tool for a whole range of everyday problems. Listen to this episode and get more data for your most important decision: are you #TeamFrequentista or #TeamBayesiano?

The script was written by Igor Alcantara. Editing by Leo Oliveira, and the episode artwork by Júlia Frois. Editorial coordination by Tatiane do Vale and project management by Kézia Nogueira. The theme music for all episodes was composed by Rafael Chino and Leo Oliveira. Visit our site at http://intervalodeconfianca.com.br/
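The door-switching question raised above (Porta dos Desesperados is a Monty Hall-style game) can be settled by simulation. This sketch assumes the standard rules: one prize behind three doors, and the host always opens a losing door the player did not pick:

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win rate of the stay or switch strategy by simulation."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # Host opens a door that is neither the player's choice nor the prize.
        opened = next(d for d in range(3) if d != choice and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")  # ≈ 1/3
print(f"switch: {monty_hall(switch=True):.3f}")   # ≈ 2/3
```

Switching wins about two thirds of the time: the initial pick is right with probability 1/3, and the host's forced reveal concentrates the remaining 2/3 on the other closed door, which is precisely a Bayesian update on the host's behavior.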
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 12 interesting things I learned studying the discovery of nature's laws, published by Ben Pace on February 19, 2022 on LessWrong. I've been thinking about whether I can discover laws of agency and wield them to prevent AI ruin (perhaps by building an AGI myself in a different paradigm than machine learning). So far I've looked into the history of the discovery of physical laws (gravity in particular) and mathematical laws (probability theory in particular). Here are 12 things I've learned or been surprised by. 1. Data-gathering was a crucial step in discovering both gravity and probability theory. One rich dude had a whole island and set it up to have lenses on lots of parts of it, and for like a year he'd go around each day and note down the positions of the stars. Then this data was worked on by others who turned it into equations of motion. 2. Relatedly, looking at the celestial bodies was a big deal. It was almost the whole game in gravity, but also a little helpful for probability theory (specifically the normal distribution was developed in part by noting that systematic errors in celestial measuring equipment followed a simple distribution). It hadn't struck me before, but putting a ton of geometry problems on the ceiling for the entire civilization led a lot of people to try to answer questions about it. (It makes Eliezer's choice in That Alien Message apt.) I'm tempted in a munchkin way to find other ways to do this, like to write a math problem on the surface of the moon, or petition Google to put a prediction market on its home page, or something more elegant than those two. 3. Probability theory was substantially developed around real-world problems! I thought math was all magical and ivory tower, but it was much more grounded than I expected. 
After a few small things like accounting and insurance and doing permutations of the alphabet, games of chance (gambling) was what really kicked it off, with Fermat and Pascal trying to figure out the expected value of games (they didn't phrase it like that, they put it more like “if the game has to stop before it's concluded, how should the winnings be split between the players?“). Other people who consulted with gamblers also would write down data about things like how often different winning hands would come up in different games, and discovered simple distributions, then tried to put equations to them. Later it was developed further by people trying to reason about gases and temperatures, and then again in understanding clinical trials or large repeated biological experiments. Often people discovered more in this combination of “looking directly at nature” and “being the sort of person who was interested in developing a formal calculus to model what was going on”. 4. Thought experiments about the world were a big deal too! Thomas Bayes did most of his math this way. He had a thought experiment that went something like this: his assistant would throw a ball on a table that Thomas wasn't looking at. Then his assistant would throw more balls on the table, each time saying whether it ended up to the right or the left of the original ball. He had this sense that each time he was told the next left-or-right, he should be able to give a new probability that the ball was in any particular given region. He used this thought experiment a lot when coming up with Bayes' theorem. 5. Lots of people involved were full-time inventors, rich people who did serious study into a lot of different areas, including mathematics. This is a weird class to me. (I don't know people like this today. And most scientific things are very institutionalized, or failing that, embedded within business.) 
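The billiard-table thought experiment described in point 4 can be reproduced with a small rejection-sampling sketch (the specific left/right reports below are invented): sample a position for the first ball from the uniform prior, simulate the reports, and keep only the positions consistent with what was observed.

```python
import random

# Bayes's thought experiment: a first ball lands at unknown position p on a
# table; later balls land uniformly at random, and we learn only whether each
# fell to the left of the first. Rejection sampling from a uniform prior on p
# recovers the posterior over the first ball's position.
def posterior_mean(reports, n_samples=20_000):
    accepted = []
    while len(accepted) < n_samples:
        p = random.random()  # uniform prior on the first ball's position
        simulated = [random.random() < p for _ in reports]
        if simulated == reports:  # keep p only if it reproduces the reports
            accepted.append(p)
    return sum(accepted) / len(accepted)

# Four of five balls reported "left": posterior mean is (4+1)/(5+2) = 5/7.
print(round(posterior_mean([True, True, True, True, False]), 2))  # ≈ 0.71
```

Each accepted sample is a position that could have produced exactly the observed sequence, so the surviving samples trace out the posterior Bayes reasoned about, matching the closed-form Beta-distribution answer.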
Here's a quote I enjoyed from one of Pascal's letters to Fermat when they founded the theory of probability. (For context: de Mere was the gam...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 12 interesting things I learned studying the discovery of nature's laws, published by Ben Pace on February 19, 2022 on LessWrong. I've been thinking about out whether I can discover laws of agency and wield them to prevent AI ruin (perhaps by building an AGI myself in a different paradigm than machine learning). So far I've looked into the history of the discovery of physical laws (gravity in particular) and mathematical laws (probability theory in particular). Here are 12 things I've learned or been surprised by. 1. Data-gathering was a crucial step in discovering both gravity and probability theory. One rich dude had a whole island and set it up to have lenses on lots of parts of it, and for like a year he'd go around each day and note down the positions of the stars. Then this data was worked on by others who turned it into equations of motion. 2. Relatedly, looking at the celestial bodies was a big deal. It was almost the whole game in gravity, but also a little helpful for probability theory (specifically the normal distribution was developed in part by noting that systematic errors in celestial measuring equipment followed a simple distribution). It hadn't struck me before, but putting a ton of geometry problems on the ceiling for the entire civilization led a lot of people to try to answer questions about it. (It makes Eliezer's choice in That Alien Message apt.) I'm tempted in a munchkin way to find other ways to do this, like to write a math problem on the surface of the moon, or petition Google to put a prediction market on its home page, or something more elegant than those two. 3. Probability theory was substantially developed around real-world problems! I thought math was all magical and ivory tower, but it was much more grounded than I expected. 
After a few small things like accounting and insurance and doing permutations of the alphabet, games of chance (gambling) was what really kicked it off, with Fermat and Pascal trying to figure out the expected value of games (they didn't phrase it like that; they put it more like “if the game has to stop before it's concluded, how should the winnings be split between the players?”). Other people who consulted with gamblers would also write down data about things like how often different winning hands would come up in different games, discover simple distributions, then try to put equations to them. Later it was developed further by people trying to reason about gases and temperatures, and then again in understanding clinical trials or large repeated biological experiments. Often people discovered more in this combination of “looking directly at nature” and “being the sort of person who was interested in developing a formal calculus to model what was going on”. 4. Thought experiments about the world were a big deal too! Thomas Bayes did most of his math this way. He had a thought experiment that went something like this: his assistant would throw a ball on a table that Thomas wasn't looking at. Then his assistant would throw more balls on the table, each time saying whether it ended up to the right or the left of the original ball. He had this sense that each time he was told the next left-or-right, he should be able to give a new probability that the ball was in any particular given region. He used this thought experiment a lot when coming up with Bayes' theorem. 5. Lots of people involved were full-time inventors, rich people who did serious study into a lot of different areas, including mathematics. This is a weird class to me. (I don't know people like this today. And most scientific things are very institutionalized, or failing that, embedded within business.)
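The table thought experiment above can be sketched in a few lines of code (my illustration, not from the post): the first ball lands uniformly at an unknown position p in [0, 1], and each later throw reports only "left of the first ball" or not, i.e. a Bernoulli(p) observation. Under a uniform prior, k "left" reports out of n throws give a Beta(k + 1, n − k + 1) posterior over p.

```python
import random

def posterior_mean(k, n):
    """Posterior mean of p under a uniform prior: (k + 1) / (n + 2)."""
    return (k + 1) / (n + 2)

random.seed(0)
p_true = random.random()               # hidden position of the first ball
n = 1000
# Each throw lands uniformly; the assistant only reports left-or-right.
k = sum(random.random() < p_true for _ in range(n))
print(p_true, posterior_mean(k, n))    # the estimate closes in on p_true
```

With each left-or-right report, the posterior narrows, which is exactly the "new probability for any given region" Bayes was after.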
Thomas Bayes formulated the theory of conditional probability: the probability that A occurs given that we are in B. What is the probability that an elderly, slightly overweight person dies of Covid? Alarmingly high. What is the probability that a young person with no pathologies dies of Covid? Practically zero. You get vaccinated, then, to protect a third party, whether family or stranger. Vaccination is an emotional debate and, despite the risk of being cancelled, Nacho and I reason through the decision.Listen to the podcast on your usual platform:Spotify — Apple — iVoox — YouTubeFinance articles in blog format:Substack Kapital — Substack CardinalNotes:Filosofía, corporalidad y percepción. Maurice Merleau-Ponty.Piel negra, máscaras blancas. Frantz Fanon.Despertares. Oliver Sacks.Pandemia. Slavoj Žižek.Jugarse la piel. Nassim Nicholas Taleb.Index:0.28. Conflicts of interest in the public Covid debate.12.49. Merleau-Ponty: consciousness arises in the body.27.05. Least common multiple or greatest common divisor.37.57. Nassim Nicholas Taleb and tail risks.1.10.22. The vaccine could be delaying herd immunity.1.18.08. Externalities: effects not incorporated into the price.1.25.05. Should I vaccinate my five-year-old?1.36.10. Taxing the big multinationals.2.19.09. Djokovic's logic: my body, my choice.
What makes a satisfying explanation? Understanding and prediction are two different goals at odds with one another — think fundamental physics versus artificial neural networks — and even what defines a “simple” explanation varies from one person to another. Held in a kind of ecosystemic balance, these diverse approaches to seeking knowledge keep each other honest…but the use of one kind of knowledge to the exclusion of all others leads to disastrous results. And in the 21st Century, the difference between good and bad explanations determines how society adapts as rapid change transforms the world most people took for granted — and sends humankind into the epistemic wilds to find new stories that will help us navigate this brave new world.This week we dive deep with SFI External Professor Simon DeDeo at Carnegie Mellon University to explore his research into intelligence and the search for understanding, bringing computational techniques to bear on the history of science, information processing at the scale of society, and how digital technologies and the coronavirus pandemic challenge humankind to think more carefully about the meaning that we seek, here on the edge of chaos…If you value our research and communication efforts, please subscribe to Complexity Podcast wherever you listen, rate and review us at Apple Podcasts, and/or consider making a donation at santafe.edu/engage. 
Thank you for listening!Join our Facebook discussion group to meet like minds and talk about each episode.Podcast theme music by Mitch Mignano.Follow us on social media:Twitter • YouTube • Facebook • Instagram • LinkedInWorks Discussed:“From Probability to Consilience: How Explanatory Values Implement Bayesian Reasoning”Zachary Wojtowicz & Simon DeDeo (+ SFI press release on this paper)“Supertheories and Consilience from Alchemy to Electromagnetism”Simon DeDeo (SFI lecture video)“From equality to hierarchy”Simon DeDeo & Elizabeth HobsonThe Complex Alternative: Complexity Scientists on the COVID-19 PandemicSFI Press (with “From Virus to Symptom” by Simon DeDeo)“Boredom and Flow: An Opportunity Cost Theory of Attention-Directing Motivational States”Zachary Wojtowicz, Nick Chater, & George Loewenstein“Scale and information-processing thresholds in Holocene social evolution”Jaeweon Shin, Michael Holton Price, David H. Wolpert, Hajime Shimao, Brendan Tracey, & Timothy A. Kohler “Slowed canonical progress in large fields of science”Johan Chu and James Evans“Will A Large Complex System Be Stable?”Robert MayRelated Podcast Episodes:• Andy Dobson on Disease Ecology & Conservation Strategy• Nicole Creanza on Cultural Evolution in Humans & Songbirds• On Coronavirus, Crisis, and Creative Opportunity with David Krakauer• Carl Bergstrom & Jevin West on Calling Bullshit: The Art of Skepticism in a Data-Driven World• Vicky Yang & Henrik Olsson on Political Polling & Polarization: How We Make Decisions & Identities• David Wolpert on The No Free Lunch Theorems and Why They Undermine The Scientific Method• Science in The Time of COVID: Michael Lachmann & Sam Scarpino on Lessons from The Pandemic• Jonas Dalege on The Physics of Attitudes & Beliefs• Tyler Marghetis on Breakdowns & Breakthroughs: Critical Transitions in Jazz & MathematicsMentioned:David Spergel, Zachary Wojtowicz, Stuart Kauffman, Jessica Flack, Thomas Bayes, Claude Shannon, Sean M. 
Carroll, Dan Sperber, David Krakauer, Marten Scheffer, David Deutsch, Jaewon Shin, Stuart Firestein, Bob May, Peter Turchin, David Hume, Jimmy Wales, Tyler Marghetis
There is growing appreciation of the impact of the life's work of the nonconformist minister Thomas Bayes. His theorem, which was published posthumously, has since founded a whole school of thought in the field of statistics.
Bayes’ Rule has been used in AI, genetic studies, translating foreign languages and even cracking the Enigma Code in the Second World War. We find out about Thomas Bayes - the 18th century English statistician and clergyman whose work was largely forgotten until the 20th century.
Tripp is an AI researcher at Amazon and a friend I met through the Paul Vanderklay online community. He is an expert on Aristotelianism, Thomistic theology, and the philosophy of mind, with a master's in philosophy from Duke. We talk about the doctrine of divine simplicity, how that relates to the doctrine of the trinity, Thomas Aquinas, Aristotle, Ray Kurzweil, Alvin Plantinga, the evolutionary argument against naturalism, David Hume, Thomas Bayes, the argument against miracles, Adam Friended, Michael Shermer, Alexander the Great, Tim McGrew, Douglas Murray, Christian Atheists, William Lane Craig, the cosmological argument, the Gospel of John, John the Baptist, and just why was Jesus crucified? Here is our first conversation: https://www.youtube.com/watch?v=hvXOZj4kQGU&t=2s Here is a link to Tripp, Esther and Adam Friended debating the resurrection: https://www.youtube.com/watch?v=V7g9ic521c8 Tim McGrew and the argument from miracles: https://www.youtube.com/watch?v=GSEobV4cHnc
Connor Tan is a physicist and senior data scientist working for a multinational energy company, where he co-founded and leads a data science team. He holds a first-class degree in experimental and theoretical physics from Cambridge University, with a master's in particle astrophysics. He specializes in the application of machine learning models and Bayesian methods. Today we explore the history, practical utility, and unique capabilities of Bayesian methods. We also discuss the computational difficulties inherent in Bayesian methods, along with modern methods for approximate solutions such as Markov chain Monte Carlo. Finally, we discuss how Bayesian optimization in the context of AutoML may one day put data scientists like Connor out of work. Panel: Dr. Keith Duggar, Alex Stenlake, Dr. Tim Scarfe 00:00:00 Duggar's philosophical ramblings on Bayesianism 00:05:10 Introduction 00:07:30 small datasets and prior scientific knowledge 00:10:37 Bayesian methods are probability theory 00:14:00 Bayesian methods demand hard computations 00:15:46 uncertainty can matter more than estimators 00:19:29 updating or combining knowledge is a key feature 00:25:39 Frequency or Reasonable Expectation as the Primary Concept 00:30:02 Gambling and coin flips 00:37:32 Rev. Thomas Bayes's pool table 00:40:37 ignorance priors are beautiful yet hard 00:43:49 connections between common distributions 00:49:13 A curious Universe, Benford's Law 00:55:17 choosing priors, a tale of two factories 01:02:19 integration, the computational Achilles heel 01:35:25 Bayesian social context in the ML community 01:10:24 frequentist methods as a first approximation 01:13:13 driven to Bayesian methods by small sample size 01:18:46 Bayesian optimization with AutoML, a job killer? 01:25:28 different approaches to hyper-parameter optimization 01:30:18 advice for aspiring Bayesians 01:33:59 who would Connor interview next? Connor Tann: https://www.linkedin.com/in/connor-tann-a92906a1/ https://twitter.com/connossor
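The "computational Achilles heel" the panel mentions is the normalizing integral in Bayes' rule, and Markov chain Monte Carlo sidesteps it by using only density ratios. A minimal Metropolis sampler, sketched here as a generic illustration (not code from the episode), with a standard normal as the assumed target:

```python
import math
import random

def log_density(x):
    # Unnormalized standard normal log-density (the assumed demo target).
    return -0.5 * x * x

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0, step)
        # Accept with probability min(1, target(proposal) / target(x)):
        # only the ratio is needed, so the normalizing constant cancels.
        if math.log(rng.random()) < log_density(proposal) - log_density(x):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(20000)
print(round(sum(samples) / len(samples), 2))  # near 0 for this target
```

Real posteriors are higher-dimensional and the log-density comes from a likelihood times a prior, but the acceptance rule is the same trick.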
Thomas Chittenden, chief data science officer at Genuity Science, says what's keeping the genomics revolution from turning into an equivalent revolution in drug discovery is that most of our domain knowledge about the molecular biology of disease has come from a hunt-and-peck approach, focused on one gene at a time. Find some gene relevant to a disease, knock it out, and you see what happens. Such experiments are always revealing, but the reality is that human biology is the product of the interactions of huge networks of thousands of genes—which means most diseases are the product of dysregulation across these networks. Which means, in turn, that to figure out where to intervene with a drug, you really need to identify the patterns that cascade through the whole network. That’s where AI and machine learning come in, and that’s why Genuity has tasked Chittenden to lead R&D at its Advanced Artificial Intelligence Research Laboratory. Chittenden's team is pioneering new applications of old ideas from the world of probability and statistics, including some that go all the way back to the work of the English statistician Thomas Bayes in the eighteenth century, to look at gene expression data from individual cells and predict which genes are at the beginning of the cascade and are the causal drivers of diseases like atherosclerosis or high blood pressure. The hope is that Genuity can help its clients in the drug discovery business make smarter bets about which drug candidates will be most effective. 
And that could help shave years of development and billions of dollars in costs off the drug development process.Chittenden is one of those rare professionals who has more degrees than you can shake a stick at—he has a PhD in Molecular Cell Biology and Biotechnology from Virginia Tech and a DPhil in Computational Statistics from the University of Oxford, and completed postdoctoral training at Dartmouth Medical School, the Dana-Farber Cancer Institute, and the Harvard School of Public Health—but can also explain the actual science in a way that makes sense for a non-expert. On top of that he’s been thinking hard about how to rein in some of the hype around the power of AI and machine learning in drug development and how to set expectations about what computing can and can’t do for the industry.Please rate and review MoneyBall Medicine on Apple Podcasts! Here's how to do that from an iPhone, iPad, or iPod touch:• Launch the “Podcasts” app on your device. If you can’t find this app, swipe all the way to the left on your home screen until you’re on the Search page. Tap the search field at the top and type in “Podcasts.” Apple’s Podcasts app should show up in the search results.• Tap the Podcasts app icon, and after it opens, tap the Search field at the top, or the little magnifying glass icon in the lower right corner.• Type MoneyBall Medicine into the search field and press the Search button.• In the search results, click on the MoneyBall Medicine logo.• On the next page, scroll down until you see the Ratings & Reviews section. Below that, you’ll see five purple stars.• Tap the stars to rate the show.• Scroll down a little farther. You’ll see a purple link saying “Write a Review.”• On the next screen, you’ll see the stars again. You can tap them to leave a rating if you haven’t already.• In the Title field, type a summary for your review.• In the Review field, type your review.• When you’re finished, click Send.• That’s it, you’re done. Thanks!
Today is a day for "Influencers da Ciência", a spin-off of the podcast "Intervalo de Confiança". In this show we highlight influencers who genuinely brought something positive to society, people who expanded the frontiers of scientific knowledge and made possible the development of many fields. In this episode, Igor Alcantara talks about the life and work of the man we might call the Van Gogh of statistics: like the famous Dutch painter, Thomas Bayes had his main work recognized (and published) only after his death. The episode also explains a little of what the famous Bayes' Theorem is, so important and fundamental to science even today, in questions ranging from calculating a team's chance of winning a match to photographing a black hole. Listen to this episode and learn more about this important figure. This episode was hosted by Igor Alcantara and edited by Leo Oliveira. The episode artwork was created by Diego Madeira. Don't forget to visit our website at http://intervalodeconfianca.com.br
Episode: 1876 In which Thomas Bayes mixes prior knowledge with a priori deduction. Today, we learn how to hedge bets.
(EPISODE NOTES HERE: https://www.jaimerodriguezdesantiago.com/kaizen/37-toma-de-decisiones-iv-pensamiento-probabilistico-zipi-y-zape-bebes-aleatorios-y-cosas-asimetricas/) Almost without meaning to, I've already devoted three or four episodes of this podcast to decision-making, and the truth is that as I do, my goal with it becomes a little clearer to me. I know that sounds odd put that way, but that's what happens when I'm also learning along the way and coming to understand better the implications of what I tell you :) I'm obsessed with decisions because I believe our lives are essentially a continuous succession of decisions, from the moment we get up until we go to bed. I also believe few things are more powerful for feeling good about ourselves than taking responsibility for our lives and for what we do. That is, for our decisions. Hence my obsession. And really, it's not that I've devoted three or four episodes to the topic; almost all of kaizen is probably directly or indirectly related to it. Deciding better is the consequence of four things: 1) Having enough knowledge and mental models to understand reality, while being aware of our own limitations and biases. 2) Knowing how to prioritize and choose where to focus our efforts. 3) Having the right tools to anticipate the impact of the different alternatives. 4) Being able to face their outcomes. In short, what we call living. Well, today we're going to focus on a way of understanding the world and making decisions that I find especially interesting, because I think it reflects very well the complexity we live in: probabilistic thinking. Actually, I've talked to you about probabilistic thinking before. Last season of kaizen, I devoted a first episode to decision-making, and in it we began to talk about this topic.
Back then I gave you a quick overview of decision trees, one of the most common tools for applying this kind of reasoning. But today I'd like us to try to dig deeper into what it means, some of its key concepts, and how to apply it to our lives.
When will the world end? How likely is it that intelligent extraterrestrial life exists? Are we living in a simulation like the Matrix? Is our universe but one in a multiverse? How does Warren Buffett continue to beat the stock market? How much longer will your romance last? In this wide ranging conversation with science writer William Poundstone, answers to these questions, and more, will be provided … or at least considered in the framework of Bayesian analysis. In the 18th century, the British minister and mathematician Thomas Bayes devised a theorem that allowed him to assign probabilities to events that had never happened before. It languished in obscurity for centuries until computers came along and made it easy to crunch the numbers. Now, as the foundation of big data, Bayes’ formula has become a linchpin of the digital economy. But here’s where things get really interesting: Bayes’ theorem can also be used to lay odds on the existence of extraterrestrial intelligence; on whether we live in a Matrix-like counterfeit of reality; on the “many worlds” interpretation of quantum theory being correct; and on the biggest question of all: how long will humanity survive? The Doomsday Calculation tells how Silicon Valley’s profitable formula became a controversial pivot of contemporary thought. Drawing on interviews with thought leaders around the globe, it’s the story of a group of intellectual mavericks who are challenging what we thought we knew about our place in the universe. Listen to Science Salon via iTunes, Spotify, Google Play Music, Stitcher, iHeartRadio, TuneIn, and Soundcloud. You play a vital part in our commitment to promote science and reason. If you enjoy the Science Salon Podcast, please show your support by making a donation, or by becoming a patron.
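The doomsday-style reasoning Poundstone describes can be made concrete with Gott's "delta t" formula, a standard result behind these calculations (sketched here from the textbook formula, not from Poundstone's own code): if you observe a phenomenon at a random moment of its lifetime, then with confidence c its future duration lies between past·(1−c)/(1+c) and past·(1+c)/(1−c).

```python
def gott_interval(past_duration, confidence=0.95):
    """95% interval for remaining lifetime: past/39 to 39 * past."""
    c = confidence
    low = past_duration * (1 - c) / (1 + c)
    high = past_duration * (1 + c) / (1 - c)
    return low, high

# Illustrative input: roughly how long Homo sapiens has existed, in years.
low, high = gott_interval(200_000)
print(round(low), round(high))  # about 5,100 to 7.8 million years
```

The controversy the book covers is precisely whether "observed at a random moment" is a legitimate prior; the arithmetic itself is uncontested.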
Last week the 4th General Meeting of the National Biostatistics Network, Biostatnet, took place in Santiago de Compostela. The network is made up of 8 nodes, one of them in Galicia, and brings together some 200 researchers from all over Spain. We talk with Carmen Armero, principal investigator of the Valencian node of the Biostatnet network. One of the surprises of the meeting came at the conference dinner at San Martín Pinario, where none other than Carl Gauss and Thomas Bayes made an appearance; we have the pleasure of hosting them on Efervesciencia to star in a new "Entrevista no alén" (interview from the beyond).
When AI platforms are not busy beating us at Go or showing us how to drive cars properly, they are also changing the way that companies spend and track money. Talking with Manish Singh, an EVP at Oversight Systems, I learned how machine learning is both automating financial operations, and transforming the way we mitigate risk. Although Manish and I had fun talking about some of my favorite geeky AI topics (probabilistic thinking and the influence of Thomas Bayes), what you may find really interesting, is our discussion on how clerical jobs in the company of the future will not be simply automated, but elevated into something altogether new with very different skills and outcomes.
Thomas Bayes was an English mathematician, elected a Fellow of the Royal Society in 1742. He gave us the law according to which the probability of an event varies with the prior knowledge that may be related to that event. How does it work in practice? What does it have to do with our day-to-day lives? And what does it have to do with horoscopes? Find out in this episode of Naruhodo! — a conversation between the curious layman, Ken Fujioka, and the PhD scientist, Altay de Souza. LISTEN (30min 08s) Naruhodo! is the podcast for anyone hungry to learn: science, common sense, curiosities, challenges and much more, with the curious layman, Ken Fujioka, and the PhD scientist, Altay de Souza. Editing: Reginaldo Cursino. http://naruhodo.b9.com.br REFERENCES Naruhodo #51 – Astrologia, horóscopo e mapa astral têm algo de científico? ==> http://www.b9.com.br/70398/naruhodo-51-astrologia-horoscopo-e-mapa-astral-tem-algo-de-cientifico/ APOIA.SE Did you know you can help keep Naruhodo on the air? By contributing, you get access to the closed Facebook group and receive exclusive content. Visit: http://apoia.se/naruhodopodcast
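"Probability varies with prior knowledge" is easiest to see in sequential updating, where yesterday's posterior becomes today's prior. A small sketch with hypothetical numbers (a biased-coin guess, my example rather than the episode's):

```python
def update(prior, likelihood_if_biased, likelihood_if_fair):
    """Posterior probability the coin is biased, after one heads."""
    num = likelihood_if_biased * prior
    return num / (num + likelihood_if_fair * (1 - prior))

belief = 0.5                       # start undecided
for _ in range(3):                 # observe heads three times in a row
    belief = update(belief, likelihood_if_biased=0.9, likelihood_if_fair=0.5)
print(round(belief, 3))            # each heads raises the belief further
```

The same evidence moves a skeptical prior much less than an undecided one, which is the episode's point about horoscopes: weak evidence should barely budge a well-founded prior.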
Show notes at ocdevel.com/mlg/2 Updated! Skip to [00:29:36] for Data Science (new content) if you've already heard this episode. What is artificial intelligence, machine learning, and data science? What are their differences? AI history. Hierarchical breakdown: DS(AI(ML)). Data science: any profession dealing with data (including AI & ML). Artificial intelligence is simulated intellectual tasks. Machine Learning is algorithms trained on data to learn patterns to make predictions. Artificial Intelligence (AI) - Wikipedia Oxford Languages: the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AlphaGo Movie, very good! Sub-disciplines Reasoning, problem solving Knowledge representation Planning Learning Natural language processing Perception Motion and manipulation Social intelligence General intelligence Applications Autonomous vehicles (drones, self-driving cars) Medical diagnosis Creating art (such as poetry) Proving mathematical theorems Playing games (such as Chess or Go) Search engines Online assistants (such as Siri) Image recognition in photographs Spam filtering Prediction of judicial decisions Targeting online advertisements Machine Learning (ML) - Wikipedia Oxford Languages: the use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data. Data Science (DS) - Wikipedia Wikipedia: Data science is an interdisciplinary field that uses scientific methods, processes, algorithms and systems to extract knowledge and insights from noisy, structured and unstructured data, and apply knowledge and actionable insights from data across a broad range of application domains. Data science is related to data mining, machine learning and big data. 
History Greek mythology, Golems First attempt: Ramon Llull, 13th century Da Vinci's walking animals Descartes, Leibniz 1700s-1800s: Statistics & mathematical decision making Thomas Bayes: reasoning about the probability of events George Boole: logical reasoning / binary algebra Gottlob Frege: propositional logic 1832: Charles Babbage & Ada Byron / Lovelace: designed the Analytical Engine (1832), a programmable mechanical calculating machine 1936: Universal Turing Machine; Computing Machinery and Intelligence - explored AI! 1946: John von Neumann Universal Computing Machine 1943: Warren McCulloch & Walter Pitts: cogsci representation of the neuron; Frank Rosenblatt uses it to create the Perceptron (-> neural networks by way of MLPs) 50s-70s: "AI" coined @ Dartmouth workshop 1956 - goal to simulate all aspects of intelligence. John McCarthy, Marvin Minsky, Arthur Samuel, Oliver Selfridge, Ray Solomonoff, Allen Newell, Herbert Simon Newell & Simon: Heuristics -> Logic Theorist, General Problem Solver Selfridge: Computer Vision NLP Stanford Research Institute: Shakey Feigenbaum: Expert systems GOFAI / symbolism: operations research / management science; logic-based; knowledge-based / expert systems 70s: Lighthill report (James Lighthill), big promises -> AI Winter 90s: Data, Computation, Practical Application -> AI back (90s) Connectionism optimizations: Geoffrey Hinton: 2006, optimized backpropagation Bloomberg: 2015 was a whopper for AI in industry AlphaGo & DeepMind
In the late 1700s, English minister Thomas Bayes discovered a simple mathematical rule for calculating probabilities based on different information sources. Since then Bayesian models for describing uncertain events have taken off in a wide variety of fields, not the least of which is psychology. This Bayesian framework has been used to understand far-reaching psychological processes, such as how humans combine noisy sensory information with their prior beliefs about the world in order to come to decisions on how to act. But not everyone is riding the Bayesian train. In this episode, we discuss a published back and forth between scientists arguing over the use and merits of Bayesian modeling in neuroscience and psychology. First, though, we set the stage by describing Bayesian math, how it is used in psychology, and the significance of certain terms such as "optimal" (it may not mean what you think it does) and "utility". We then get into the arguments for and against Bayesian modeling, including its falsifiability and the extent to which Bayesian findings are overstated or outright confused. Ultimately, it seems the expansive power of Bayesian modeling to describe almost anything may in fact be its downfall. Do Bayesian models give us insight on animal brains and behaviors, or just a bunch of "just-so" stories?
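The "combine noisy sensory information with prior beliefs" model discussed here is usually formalized as Gaussian cue combination (a textbook sketch, not the disputants' code): with prior N(mu0, s0²) and a measurement x with noise variance s², the posterior mean is a precision-weighted average of the two.

```python
def combine(prior_mean, prior_var, obs, obs_var):
    """Gaussian prior x Gaussian likelihood -> Gaussian posterior."""
    w = prior_var / (prior_var + obs_var)        # weight on the observation
    post_mean = prior_mean + w * (obs - prior_mean)
    post_var = prior_var * obs_var / (prior_var + obs_var)
    return post_mean, post_var

# A reliable observation (small variance) dominates a vague prior:
mean, var = combine(prior_mean=0.0, prior_var=4.0, obs=10.0, obs_var=1.0)
print(mean, var)  # mean pulled most of the way to 10; variance shrinks
```

The "optimal" in the debate refers to exactly this weighting: an observer is called Bayes-optimal if their reliance on each cue tracks its reliability, which critics argue is flexible enough to fit almost any behavior after the fact.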