Podcasts about priors


  • 96 podcasts
  • 135 episodes
  • 46m average duration
  • 1 new episode monthly
  • Latest episode: Oct 13, 2024
Popularity of “priors” episodes by year, 2017–2024 (chart)


Latest podcast episodes about priors

Grace 242
Temptation Tactics

Oct 13, 2024 · 34:48


Series: Learn to Block. Title: Temptation Tactics. Scripture Reading: Matthew 4:1-11. This is our seventh week learning how to block the fiery arrows of the enemy. Jesus' temptation by Satan in the wilderness reveals several tactics for how Satan tempts us: 1. Satan exploits our priors; 2. Satan exploits our methods; 3. Satan exploits our distinctions.

The Crown Cast
Update Your Priors: Season Reset

Aug 21, 2024 · 29:18


Charlotte FC are WAY ahead of where we thought they would be in the project, and as a result it's only fair we update our expectations for the season. On the docket today: Pep Biel and his fit at Charlotte, whether the new players start on Saturday, and updated points totals and finishing predictions from the cast.

SAGE Psychology & Psychiatry
Why Does Speech Sometimes Sound Like Song? Exploring the Role of Music-Related Priors in the “Speech-to-Song Illusion”

Aug 14, 2024 · 6:07



Rookie Big Board Fantasy Football Podcast
Devy vs. Vet, letting go of priors, and Luther Burden Film Analysis!

Jul 19, 2024 · 33:03


The boys are back! Skip asks Matt when to let go of your previous opinion of a player. They also play a new game of Devy vs. Veteran where they discuss if they would rather have a top devy prospect or a proven veteran player. Then they wrap up with Matt's film analysis of 2025 star wide receiver Luther Burden! Get rankings, personalized advice, and more: patreon.com/rookiebigboard Join the RBB Discord: https://discord.gg/d2dR5Uk6qa  Play Underdog with 100% Deposit Match: https://play.underdogfantasy.com/p-rookie-big-board

Legendary Upside
Legendary Sickos: Training Camp Priors

Jul 14, 2024 · 143:07


Pat and Erik discuss the players they're targeting in preparation for training camp news and which situations they'll be most closely monitoring when camps kick off. FOLLOW: ► Erik ➝ https://twitter.com/erikbeimfohr ► Pat ➝ https://twitter.com/PatKerrane. Sign up for the Legendary Upside newsletter (https://www.legendaryupside.com/). Fill out the form activating a $50 Underdog credit (for yearly LegUp subscribers - while supplies last): https://www.legendaryupside.com/legup-perks/. Sign up for Underdog with promo code LEGUP for a 100% deposit match on your first deposit (up to $100). Legendary Upside subscribers can use promo code LEGUP for 40% off a Spike Week subscription.

Shooting It Podcast
America has a lot of Priors for Stealing - with Coach!

Jul 10, 2024 · 147:51


Episode 188: Construction over the summer · Getting buried in the backyard · Anxiety everywhere all the time · Coach enters · Vasectomy Coach · America has a lot of priors for stealing · Trejo hit with a water balloon · New Lakers Coach · King of the Hill and VIX · Hulk Hogan Selling Beers · 4th of July · Project Baseball and other sports · Gotta recycle all batteries · Mario Bros Cartoon · Fontana tacos and food · NY Post pricey bags story

LessWrong Curated Podcast
“Priors and Prejudice” by MathiasKB

Jul 2, 2024 · 13:24


I. Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora. Let's name this hypothetical movement the Effective Samaritans. Like the EA movement of today, they believe in doing as much good as possible, whatever this means. They began by evaluating existing charities, reading every RCT to find the very best ways of helping. But many effective samaritans were starting to wonder. Is this randomista approach really the most prudent? After all, Scandinavia didn't become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures. The Scandinavian societal model which lifted the working class, brought weekends, universal suffrage, maternity leave, education, and universal healthcare can be traced back all the way to 1870's where the union and social democratic movements got their start. In many developing countries [...] The original text contained 2 footnotes which were omitted from this narration. --- First published: April 22nd, 2024. Source: https://www.lesswrong.com/posts/sKKxuqca9uhpFSvgq/priors-and-prejudice --- Narrated by TYPE III AUDIO.

Converging Dialogues
#352 - Our Bayesian Priors: A Dialogue with Tom Chivers

Jun 20, 2024 · 76:45


In this episode, Xavier Bonilla has a dialogue with Tom Chivers about Bayesian probability and the impact Bayesian priors have on our lives. They discuss Bayesian priors, Thomas Bayes, the subjective aspects of Bayes' theorem, and the problematic legacies of statistical figures such as Galton, Pearson, and Fisher. They talk about the replication crisis, p-hacking, where priors come from, AI, Friston's free energy principle, and Bayesian priors in our world today. Tom Chivers is a science writer. He does freelance science writing and also writes for Semafor.com's daily Flagship email. Before joining Semafor, he was a science editor at UnHerd, a science writer for BuzzFeed UK, and a features writer for the Telegraph. He is the author of several books, including the most recent, Everything Is Predictable: How Bayesian Statistics Explain Our World. Website: https://tomchivers.com/ Get full access to Converging Dialogues at convergingdialogues.substack.com/subscribe
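
As a concrete illustration of the prior-updating Chivers discusses, here is a minimal worked example of Bayes' theorem in Python (not from the episode). The 1% base rate and the test accuracies are made-up numbers chosen only to show how a prior combines with evidence; they are not figures from the episode or the book.

```python
# Minimal Bayes' theorem sketch: posterior = likelihood * prior / evidence.
# All numbers are illustrative placeholders, not taken from the episode.
prior = 0.01              # prior probability of the hypothesis (e.g., having a condition)
p_pos_given_true = 0.90   # probability of a positive test if the hypothesis is true
p_pos_given_false = 0.05  # false-positive rate

# Total probability of seeing a positive test (the evidence term)
p_pos = p_pos_given_true * prior + p_pos_given_false * (1 - prior)

# Bayes' rule: update the prior into a posterior
posterior = p_pos_given_true * prior / p_pos
print(f"posterior = {posterior:.3f}")  # ~0.154: strong evidence, but the low prior still dominates
```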

The Bulwark Podcast
Ritchie Torres and Ben Smith: Pride and Priors

Jun 4, 2024 · 53:01


Rep. Ritchie Torres joins Tim Miller to discuss how to win working-class voters, his Dickensian upbringing, and Israel under the microscope of 24-hour news. Plus, mental health, Pride, and Trump as the GOP's new lord and savior. Then, Ben Smith talks about Americans fragmenting the media universe, and the Epoch Times grift. Show notes: Torres talking about being a Zionist after being heckled; New Yorker piece on Guo Wengui that Ben mentioned.

Effective Altruism Forum Podcast
“Priors and Prejudice” by MathiasKB

Apr 26, 2024 · 13:26


I. Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora. Let's name this hypothetical movement the Effective Samaritans. Like the EA movement of today, they believe in doing as much good as possible, whatever this means. They began by evaluating existing charities, reading every RCT to find the very best ways of helping. But many effective samaritans were starting to wonder. Is this randomista approach really the most prudent? After all, Scandinavia didn't become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures. The Scandinavian societal model which lifted the working class, brought weekends, universal suffrage, maternity leave, education, and universal healthcare can be traced back all the way to 1870's where the union and social democratic movements got their start. In many developing countries [...] --- Outline: (00:03) I · (05:39) II · (10:19) III. The original text contained 2 footnotes which were omitted from this narration. --- First published: April 22nd, 2024. Source: https://forum.effectivealtruism.org/posts/PKotuzY8yzGSNKRpH/priors-and-prejudice --- Narrated by TYPE III AUDIO.

The Nonlinear Library
EA - Priors and Prejudice by MathiasKB

Apr 22, 2024 · 11:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Priors and Prejudice, published by MathiasKB on April 22, 2024 on The Effective Altruism Forum. This post is easily the weirdest thing I've ever written. I also consider it the best I've ever written - I hope you give it a chance. If you're not sold by the first section, you can safely skip the rest. I Imagine an alternate version of the Effective Altruism movement, whose early influences came from socialist intellectual communities such as the Fabian Society, as opposed to the rationalist diaspora. Let's name this hypothetical movement the Effective Samaritans. Like the EA movement of today, they believe in doing as much good as possible, whatever this means. They began by evaluating existing charities, reading every RCT to find the very best ways of helping. But many effective samaritans were starting to wonder. Is this randomista approach really the most prudent? After all, Scandinavia didn't become wealthy and equitable through marginal charity. Societal transformation comes from uprooting oppressive power structures. The Scandinavian societal model which lifted the working class, brought weekends, universal suffrage, maternity leave, education, and universal healthcare can be traced back all the way to 1870's where the union and social democratic movements got their start. In many developing countries wage theft is still common-place. When employees can't be certain they'll get paid what was promised in the contract they signed and they can't trust the legal system to have their back, society settles on much fewer surplus producing work arrangements than is optimal. Work to improve capacity of the existing legal structure is fraught with risk. One risks strengthening the oppressive arms used by the ruling and capitalist classes to stay in power. A safer option may be to strengthen labour unions, who can take up these fights on behalf of their members. Being in inherent opposition to capitalist interests, unions are much less likely to be captured and co-opted. Though there is much uncertainty, unions present a promising way to increase contract-enforcement and help bring about the conditions necessary for economic development, a report by Reassess Priorities concludes. Compelled by the anti-randomista arguments, some Effective Samaritans begin donating to the 'Developing Unions Project', which funds unions in developing countries and does political advocacy to increase union influence. A well-regarded economist writes a scathing criticism of Effective Samaritanism, stating that they are blinded by ideology and that there isn't sufficient evidence to show that increases in labor power leads to increases in contract enforcement. The article is widely discussed on the Effective Samaritan Forum. One commenter writes a highly upvoted response, arguing that absence of evidence isn't evidence of absence. The professor is too concerned with empirical evidence, and fails to engage sufficiently with the object-level arguments for why the Developing Unions Project is promising. Additionally, why are we listening to an economics professor anyways? Economics is completely bankrupt as a science, resting on empirically false ridiculous assumptions, and is filled with activists doing shoddy science to confirm their neoliberal beliefs. 
I sometimes imagine myself trying to convince the Effective Samaritan why I'm correct to hold my current beliefs, many of which have come out of the rationalist diaspora. I explain how I'm not fully bought into the analysis of labor historians, which credits labor unions and the Social Democratic movements for making Scandinavia uniquely wealthy, equitable and happy. If this were a driving factor, how come the descendants of Scandinavians who migrated to the US long before are doing just as well in America? Besides, even if I don't know enough to ...

EconTalk
When Prediction Is Not Enough (with Teppo Felin)

Apr 15, 2024 · 67:10


If the Wright Brothers could have used AI to guide their decision making, it's almost certain they would never have gotten off the ground. That's because, points out Teppo Felin of Utah State University and Oxford, all the evidence said human flight was impossible. So how and why did the Wrights persevere? Felin explains that the human ability to ignore existing data and evidence is not only our Achilles heel, but also one of our superpowers. Topics include the problems inherent in modeling our brains after computers, and the value of not only data-driven prediction, but also belief-driven experimentation.

BookSpeak Network
Brown Posey Press Show: Cleveland Concoction Roundtable, Structuring Your Story

Mar 20, 2024 · 55:00


Recently at Cleveland Concoction in Aurora, Ohio, BPP host Tory Gates was joined by fellow authors for a roundtable discussion, "Structuring Your Story." He was joined in this shoot-style talk by the following authors. Addie King, an attorney by day and author by night, is the author of The Grimm Legacy and The Hochenwalt Files series, along with a collection of short stories, Demons and Heroes and Robots, Oh My! Geoffrey A. Landis is a NASA scientist who develops advanced technologies for spaceflight. He is a Hugo, Nebula, and Robert A. Heinlein Award winner for science fiction and the author of Mars Crossing and the Impact Parameter collection. Marie Vibbert is a Hugo and Nebula Award nominee. Her work includes The Gods Awoke and Galactic Hellcats, along with more than 90 published short stories. Weston Kincade is the author of character-driven fantasy, paranormal, and horror works. These include the A Life of Death trilogy and The Priors. His short stories have appeared in Kevin J. Kennedy's best-selling collections, along with Alucard Press' 50 Shades of Slay. He is also a member of the Horror Writers Association and a founder of CleCon's Author's Alley.

Paperwings Podcast - Der Business-Interview-Podcast mit Danny Herzog-Braune
#161 "Was sind maximal wirksame Interventionen?"- Danny Herzog Braune im Gespräch mit Dr. Manfred Prior

Feb 29, 2024 · 101:36


In an inspiring conversation with Dr. Manfred Prior on the Paperwings Podcast, we dove deep into the world of minimal interventions with maximum impact. Dr. Prior, author of the bestseller "Minimax Intervention", revealed the secrets of 15 effective interventions that can bring about groundbreaking change in counseling and therapy. We also explored Dr. Prior's innovative approach to preparing counseling and therapy sessions optimally, as well as the importance of systemic drawing. His website "Therapiefilm" offers a treasure trove of knowledge for continuous learning and growth. Dr. Prior emphasized the crucial role of client feedback during the counseling process. Through open communication and attentive listening, he makes sure he clearly understands his clients' goals and concerns and works with them toward their success. Our conversation also revealed the many facets of Dr. Prior's approach to coaching, counseling, and therapy, as well as his passion for clear communication, goal setting, and positive experiences of change for his clients. Dive into the world of effective interventions with Dr. Manfred Prior and discover how to unlock your full potential!

The Dana & Parks Podcast
7 years in prison for theft...but a LONG list of priors. Hour 2 2/6/2024

Feb 6, 2024 · 34:56


The Nonlinear Library
EA - Can a war cause human extinction? Once again, not on priors by Vasco Grilo

Jan 25, 2024 · 31:57


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Can a war cause human extinction? Once again, not on priors, published by Vasco Grilo on January 25, 2024 on The Effective Altruism Forum. Summary Stephen Clare's classic EA Forum post How likely is World War III? concludes "the chance of an extinction-level war [this century] is about 1%". I commented that power law extrapolation often results in greatly overestimating tail risk, and that fitting a power law to all the data points instead of the ones in the right tail usually leads to higher risk too. To investigate the above, I looked into historical annual war deaths along the lines of what I did in Can a terrorist attack cause human extinction? Not on priors, where I concluded the probability of a terrorist attack causing human extinction is astronomically low. Historical annual war deaths of combatants suggest the annual probability of a war causing human extinction is astronomically low once again. 6.36*10^-14 according to my preferred estimate, although it is not resilient, and can easily be wrong by many orders of magnitude ( OOMs). One may well update to a much higher extinction risk after accounting for inside view factors (e.g. weapon technology), and indirect effects of war, like increasing the likelihood of civilisational collapse. However, extraordinary evidence would be required to move up sufficiently many orders of magnitude for an AI, bio or nuclear war to have a decent chance of causing human extinction. In the realm of the more anthropogenic AI, bio and nuclear risk, I personally think underweighting the outside view is a major reason leading to overly high risk. I encourage readers to check David Thorstad's series exaggerating the risks, which includes subseries on climate, AI and bio risk. Introduction The 166th EA Forum Digest had Stephen Clare's How likely is World War III? as the classic EA Forum post (as a side note, the rubric is great!). It presents the following conclusions: First, I estimate that the chance of direct Great Power conflict this century is around 45%. Second, I think the chance of a huge war as bad or worse than WWII is on the order of 10%. Third, I think the chance of an extinction-level war is about 1%. This is despite the fact that I put more credence in the hypothesis that war has become less likely in the post-WWII period than I do in the hypothesis that the risk of war has not changed. I view the last of these as a crucial consideration for cause prioritisation, in the sense it directly informs the potential scale of the benefits of mitigating the risk from great power conflict. It results from assuming each war has a 0.06 % (= 2*3*10^-4) chance of causing human extinction. This is explained elsewhere in the post, and in more detail in the curated one How bad could a war get? by Stephen and Rani Martin. In essence, it is 2 times a 0.03 % chance of war deaths of combatants being at least 8 billion: "In Only the Dead, political scientist Bear Braumoeller [I recommend his appearance on The 80,000 Hours Podcast!] uses his estimated parameters to infer the probability of enormous wars. His [ power law] distribution gives a 1 in 200 chance of a given war escalating to be [at least] twice as bad as World War II and a 3 in 10,000 chance of it causing [at least] 8 billion deaths [of combatants] (i.e. human extinction)". 
2 times because the above 0.03 % "may underestimate the chance of an extinction war for at least two reasons. First, world population has been growing over time. If we instead considered the proportion of global population killed per war instead, extreme outcomes may seem more likely. Second, he does not consider civilian deaths. Historically, the ratio of civilian-deaths-to-battle deaths in war has been about 1-to-1 (though there's a lot of variation across wars). So fewer than 8 billion battle deaths would be...
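
To make the arithmetic in the excerpt explicit: the 0.06% per-war figure is simply Braumoeller's 3-in-10,000 tail probability doubled. The sketch below (not from the post) reproduces that step and shows how a power-law survival function assigns probabilities to extreme war sizes; the tail exponent and minimum war size used here are hypothetical placeholders, not Braumoeller's fitted parameters.

```python
# Doubling step from the excerpt: 2 x (3 in 10,000) = 6e-4, i.e. 0.06% per war.
p_extinction_per_war = 2 * 3e-4
print(f"{p_extinction_per_war:.4%}")  # 0.0600%

# Illustrative power-law (Pareto-style) survival function for war sizes:
# P(battle deaths >= x) = (x / x_min) ** (1 - alpha), for x >= x_min.
# alpha and x_min are hypothetical placeholders, not fitted values.
alpha = 1.5      # hypothetical tail exponent of the war-size distribution
x_min = 1_000    # hypothetical minimum war size (battle deaths)

def survival(x: float) -> float:
    """Probability that a war's battle deaths reach at least x, under the toy power law."""
    return (x / x_min) ** (1 - alpha)

wwii_battle_deaths = 2.5e7  # rough order of magnitude for WWII combatant deaths
print(survival(2 * wwii_battle_deaths))  # chance of a war at least twice as bad as WWII
print(survival(8e9))                     # chance of at least 8 billion battle deaths
```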

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

We're looking back on 2023 and sharing a handful of our favorite conversations. Last year was full of insightful conversations that shaped the way we think about the most innovative movements in the AI space. Want to hear more? Check out the full episodes here: What is Digital Life? with OpenAI Co-Founder & Chief Scientist Ilya Sutskever; How AI can help small businesses with Former Square CEO Alyssa Henry; Will Everyone Have a Personal AI? with Mustafa Suleyman, Founder of DeepMind and Inflection; How will AI bring us the future of medicine? with Daphne Koller from Insitro; The case for AI optimism with Reid Hoffman from Inflection AI; Your AI Friends Have Awoken, with Noam Shazeer; Mistral 7B and the Open Source Revolution with Arthur Mensch, CEO Mistral AI; The Computing Platform Underlying AI with Jensen Huang, Founder and CEO NVIDIA. Sign up for new podcasts every week. Email feedback to show@no-priors.com. Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @reidhoffman | @alyssahhenry | @ilyasut | @mustafasuleyman | @DaphneKoller | @arthurmensch | @MrJensenHuang. Show Notes: (0:00) Introduction; (0:27) Ilya Sutskever on the governance structure of OpenAI; (3:11) Alyssa Henry on how AI can help small business owners; (5:25) Mustafa Suleyman on defining intelligence; (8:53) Reid Hoffman's advice for co-working with AI; (11:47) Daphne Koller on probabilistic graphical models; (13:15) Noam Shazeer on the possibilities of LLMs; (14:27) Arthur Mensch on keeping AI open; (17:19) Jensen Huang on how Nvidia decides what to work on.

Learning Bayesian Statistics
How to Choose & Use Priors, with Daniel Lee

Dec 20, 2023 · 9:06


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! My Intuitive Bayes Online Courses | 1:1 Mentorship with me | Listen to the full episode: https://learnbayesstats.com/episode/96-pharma-models-sports-analytics-stan-news-daniel-lee/ | Watch the interview: https://www.youtube.com/watch?v=lnq5ZPlup0E | Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;) Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca, Dante Gates, Matt Niccolls, Maksim Kuznecov, Michael Thomas and Luke Gorrie. This podcast uses the following third-party services for analysis: Podcorn - https://podcorn.com/privacy

Machine Learning Street Talk
MULTI AGENT LEARNING - LANCELOT DA COSTA

Nov 5, 2023 · 49:56


Please support us https://www.patreon.com/mlst https://discord.gg/aNPkGUQtc5 https://twitter.com/MLStreetTalk Lance Da Costa aims to advance our understanding of intelligent systems by modelling cognitive systems and improving artificial systems. He's a PhD candidate with Greg Pavliotis and Karl Friston jointly at Imperial College London and UCL, and a student in the Mathematics of Random Systems CDT run by Imperial College London and the University of Oxford. He completed an MRes in Brain Sciences at UCL with Karl Friston and Biswa Sengupta, an MASt in Pure Mathematics at the University of Cambridge with Oscar Randal-Williams, and a BSc in Mathematics at EPFL and the University of Toronto. Summary: Lance did pure math originally but became interested in the brain and AI. He started working with Karl Friston on the free energy principle, which claims all intelligent agents minimize free energy for perception, action, and decision-making. Lance has worked to provide mathematical foundations and proofs for why the free energy principle is true, starting from basic assumptions about agents interacting with their environment. This aims to justify the principle from first physics principles. Dr. Scarfe and Da Costa discuss different approaches to AI - the free energy/active inference approach focused on mimicking human intelligence vs approaches focused on maximizing capability like deep reinforcement learning. Lance argues active inference provides advantages for explainability and safety compared to black box AI systems. It provides a simple, sparse description of intelligence based on a generative model and free energy minimization. They discuss the need for structured learning and acquiring core knowledge to achieve more human-like intelligence. Lance highlights work from Josh Tenenbaum's lab that shows similar learning trajectories to humans in a simple Atari-like environment. Incorporating core knowledge constrains the space of possible generative models the agent can use to represent the world, making learning more sample efficient. Lance argues active inference agents with core knowledge can match human learning capabilities. They discuss how to make generative models interpretable, such as through factor graphs. The goal is to be able to understand the representations and message passing in the model that leads to decisions. In summary, Lance argues active inference provides a principled approach to AI with advantages for explainability, safety, and human-like learning. Combining it with core knowledge and structural learning aims to achieve more human-like artificial intelligence. https://www.lancelotdacosta.com/ https://twitter.com/lancelotdacosta Interviewer: Dr. Tim Scarfe TOC 00:00:00 - Start 00:09:27 - Intelligence 00:12:37 - Priors / structure learning 00:17:21 - Core knowledge 00:29:05 - Intelligence is specialised 00:33:21 - The magic of agents 00:39:30 - Intelligibility of structure learning #artificialintelligence #activeinference

Post Show Recaps: LIVE TV & Movie Podcasts with Rob Cesternino
The Morning Show Season 3 Episode 9 Recap, ‘Update Your Priors'

Nov 1, 2023 · 47:51


In this podcast, the hosts Grace Leeder (@hifromgrace) and Ariel (@thatotherariel) watch and discuss Season 3 Episode 9, "Update Your Priors".

Apple TV Plus on Post Show Recaps
The Morning Show Season 3 Episode 9 Recap, ‘Update Your Priors'

Nov 1, 2023 · 47:51


In this podcast, the hosts Grace Leeder (@hifromgrace) and Ariel (@thatotherariel) watch and discuss Season 3 Episode 9, "Update Your Priors".

The Morning Show: A Post Show Recap
The Morning Show Season 3 Episode 9 Recap, ‘Update Your Priors'

Nov 1, 2023 · 47:51


In this podcast, the hosts Grace Leeder (@hifromgrace) and Ariel (@thatotherariel) watch and discuss Season 3 Episode 9, "Update Your Priors".

Learning Bayesian Statistics
#94 Psychometrics Models & Choosing Priors, with Jonathan Templin

Oct 24, 2023 · 66:25 · Transcription available


Proudly sponsored by PyMC Labs, the Bayesian Consultancy. Book a call, or get in touch! My Intuitive Bayes Online Courses | 1:1 Mentorship with me. In this episode, Jonathan Templin, Professor of Psychological and Quantitative Foundations at the University of Iowa, shares insights into his journey in the world of psychometrics. Jonathan's research focuses on diagnostic classification models — psychometric models that seek to provide multiple reliable scores from educational and psychological assessments. He also studies Bayesian statistics, as applied in psychometrics, broadly. So, naturally, we discuss the significance of psychometrics in psychological sciences, and how Bayesian methods are helpful in this field. We also talk about challenges in choosing appropriate prior distributions, best practices for model comparison, and how you can use the Multivariate Normal distribution to infer the correlations between the predictors of your linear regressions. This is a deep-reaching conversation that concludes with the future of Bayesian statistics in psychological, educational, and social sciences — hope you'll enjoy it! Our theme music is « Good Bayesian », by Baba Brinkman (feat MC Lars and Mega Ran). Check out his awesome work at https://bababrinkman.com/ ! Thank you to my Patrons for making this episode possible! Yusuke Saito, Avi Bryant, Ero Carrera, Giuliano Cruz, Tim Gasser, James Wade, Tradd Salvo, William Benton, James Ahloy, Robin Taylor, Chad Scherrer, Zwelithini Tunyiswa, Bertrand Wilden, James Thompson, Stephen Oates, Gian Luca Di Tanna, Jack Wells, Matthew Maldonado, Ian Costley, Ally Salim, Larry Gill, Ian Moran, Paul Oreto, Colin Caprani, Colin Carroll, Nathaniel Burbank, Michael Osthege, Rémi Louf, Clive Edelsten, Henri Wallen, Hugo Botha, Vinh Nguyen, Marcin Elantkowski, Adam C. Smith, Will Kurt, Andrew Moskowitz, Hector Munoz, Marco Gorelli, Simon Kessell, Bradley Rode, Patrick Kelley, Rick Anderson, Casper de Bruin, Philippe Labonde, Michael Hankin, Cameron Smith, Tomáš Frýda, Ryan Wesslen, Andreas Netti, Riley King, Yoshiyuki Hamajima, Sven De Maeyer, Michael DeCrescenzo, Fergal M, Mason Yahr, Naoya Kanai, Steven Rowland, Aubrey Clayton, Jeannine Sue, Omri Har Shemesh, Scott Anthony Robson, Robert Yolken, Or Duek, Pavel Dusek, Paul Cox, Andreas Kröpelin, Raphaël R, Nicolas Rode, Gabriel Stechschulte, Arkady, Kurt TeKolste, Gergely Juhasz, Marcus Nölke, Maggi Mackintosh, Grant Pezzolesi, Avram Aelony, Joshua Meehl, Javier Sabio, Kristian Higgins, Alex Jones, Gregorio Aguilar, Matt Rosinski, Bart Trudeau, Luis Fonseca and Dante Gates. Visit https://www.patreon.com/learnbayesstats to unlock exclusive Bayesian swag ;) Links from the show: Jonathan's website: https://jonathantemplin.com/ | Jonathan on Twitter:
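
The episode's point about using a Multivariate Normal to infer the correlations among regression predictors can be sketched roughly as below. This is a hypothetical, minimal example using PyMC (the library of the episode's sponsor) with synthetic data; it is not code from the show, and the LKJ prior, variable names, and numbers are illustrative choices.

```python
# Minimal sketch (not from the episode): model several regression predictors jointly
# as multivariate normal so their correlation matrix is inferred from the data.
import numpy as np
import pymc as pm

# Synthetic stand-in data: 200 observations of 3 predictors with some true correlation.
rng = np.random.default_rng(0)
true_cov = np.array([[1.0, 0.5, 0.2], [0.5, 1.0, 0.3], [0.2, 0.3, 1.0]])
X = rng.multivariate_normal(mean=np.zeros(3), cov=true_cov, size=200)

with pm.Model():
    # LKJ prior on the Cholesky factor of the predictors' covariance matrix.
    chol, corr, stds = pm.LKJCholeskyCov(
        "chol", n=3, eta=2.0, sd_dist=pm.Exponential.dist(1.0), compute_corr=True
    )
    pm.MvNormal("X_obs", mu=np.zeros(3), chol=chol, observed=X)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# Posterior mean of the correlation matrix between the predictors.
print(idata.posterior["chol_corr"].mean(dim=("chain", "draw")).values)
```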

Bulletproof Fantasy Football
Sweatin' Bullets: Week 1 Panic or Patience - Evaluating the Early Performances of Dynasty Players

Sep 15, 2023 · 116:37


In today's episode, we talk about all things Week 1 along with our reactions to the Eagles/Vikings game on Thursday night!   Connect with us: Twitter: @DFBeanCounter, @JakobSanderson YouTube: Bulletproof Fantasy Football Patreon: BulletProof Fantasy Football Thinking About Thinking: https://jakobsanderson.substack.com/ Timestamps: 00:00:07 - Introduction, 00:01:48 - DeAndre Swift's Performance, 00:06:19 - Eagles' Offense and Game Plan, 00:08:24 - Eagles' Pass Attempts, 00:11:35 - Devonta Smith and AJ Brown, 00:13:00 - Thoughts on AJ Brown, 00:14:56 - Preferred types of players in fantasy football, 00:16:47 - Comparison between Chris Olave and AJ Brown, 00:19:19 - Fantasy value of Jordan Addison, 00:26:23 - Analysis of Week 1 Performances, 00:27:16 - Reacting to Week 1 Results, 00:29:01 - Balancing Optimistic and Pessimistic Scenarios, 00:30:09 - Confirmation of Priors and Adjusting Evaluations, 00:34:36 - Biggest Surprise: Anthony Richardson, 00:39:11 - "Anthony Richardson's Performance", 00:41:12 - "Positive Approach and Encouraging Development", 00:42:26 - "Buy Michael Pittman and Jelaney Woods", 00:44:11 - "Rams' Surprising Performance", 00:49:38 - "Puka Nakua's Performance", 00:52:08 - "The Breaking Point in Dynasty", 00:53:55 - "The Magnetism of Rookie Wide Receivers", 00:56:50 - "Matt Stafford and Puka Nakua", 01:00:03 - "Selling Cam Akers and Buying Zach Evans", 01:03:50 - "The Inconsistency of Cam Akers' Usage", 01:05:18 - Concerns about offensive decision-making, 01:07:05 - Worries about Gino Smith's performance, 01:09:56 - Evaluating Christian Kirk's performance, 01:12:43 - Unexpected role for Christian Kirk, 01:17:14 - False panic over Traylon Burks, 01:18:52 - Traylon Burks and DeAndre Hawkins, 01:19:29 - Panic over Atlanta Falcons duo Drake London and Kyle Pitts, 01:21:01 - Holding onto Kyle Pitts, 01:23:30 - Controversial trade involving Kyle Pitts, 01:32:53 - Concerns about Miles Sanders' Touches, 01:33:30 - Sanders' Potential and Trade Value, 01:34:42 - Article Reference on Third Downs for RBs, 01:35:59 - Disagreement on Zay Flowers' Value, 01:40:40 - Trading a Mid-First Round Pick for Zay Flowers, 01:46:26 - 2024 Quarterbacks and Running Backs, 01:48:36 - ACL Injuries and JK Dobbins, 01:51:31 - Strength of the 2024 Class, 01:55:01 - Javonte Williams as RB1, 01:55:39 - Where to Find the Hosts,

The Nonlinear Library
EA - "Dimensions of Pain" workshop: Summary and updated conclusions by Rethink Priorities

Aug 21, 2023 · 24:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Dimensions of Pain" workshop: Summary and updated conclusions, published by Rethink Priorities on August 21, 2023 on The Effective Altruism Forum. Executive Summary Background: The workshop's goal was to leverage expertise in pain to identify strategies for testing whether severity or duration looms larger in the overall badness of negatively valenced experiences. The discussion was focused on how to compare welfare threats to farmed animals. No gold standard behavioral measures: Although attendees did not express confidence in any single paradigm, several felt that triangulating results across several paradigms would increase clarity about whether nonhuman animals are more averse to severe pains or long-lasting pains. Consistent results across different methodologies only strengthens a conclusion if they have uncorrelated or opposing biases. Fortunately, while classical conditioning approaches are probably biased towards severity mattering more, operant conditioning approaches are probably biased towards duration mattering more. Unfortunately, the biases might be too large to produce convergent results. Behavioral experiments may lack external validity: Attendees believed that a realistic experiment would not involve pains of the magnitude that characterize the worst problems farmed animals endure. Thus, instead of prioritizing external validity, we recommend whatever study designs create the largest differences in severity. Studies of laboratory animals and (especially) humans seem more likely to generate large differences in severity than studies of farmed animals. No gold standard biomarkers: Biomarkers could elide the biases that behavioral and self-report data inevitably introduce. However, attendees argued that there are no currently known biomarkers that could serve as an aggregate measure of pain experience over the course of a lifetime. Priors should favor prioritizing duration: Attendees had competing ideas about how to prioritize between severity and duration in the absence of compelling empirical evidence. In cases where long-lasting harms are at least thousands of times longer than more severe harms and are of at least moderate severity, we favor a presumption that long-lasting pains cause more disutility overall. Nevertheless, due to empirical and moral uncertainty, we would recommend putting some credence (~20%) in the most severe harms causing farmed animals at least as much disutility as the longest-lasting harms they experience. Background The Dimensions of Pain workshop was held April 27-28, 2023 at University of British Columbia. Attendees included animal welfare scientists (viz., Dan Weary, Thomas Ede, Leonie Jacobs, Ben Lecorps, Cynthia Schuck, Wladimir Alonso, and Michelle Lavery), pain scientists (Jeff Mogil, Gregory Corder, Fiona Moultrie, Brent Vogt), and philosophers (Bob Fischer, Murat Aydede, Walter Veit). William McAuliffe and Adam Shriver, the authors of this report, guided the discussion. Funders who want to cost-effectively improve animal welfare have to decide whether attenuating brief, severe pains (e.g., live-shackle slaughter) or chronic, milder pains (e.g., lameness) reduces more suffering overall. Farmers also face similar tradeoffs when deciding between multiple methods for achieving the same goal (e.g., single-stage versus multi-stage stunning). 
Our original report exploring the considerations that would favor prioritizing one dimension over another, The Relative Importance of the Severity and Duration of Pain, identified barriers to designing experiments that would provide clear-cut empirical evidence. The goal of the workshop was to ascertain whether an interdisciplinary group of experts could overcome these issues. No gold standard behavioral measures We spent one portion of the workshop reviewing some of the confounds th...

The Nonlinear Library
EA - Book summary: 'Why Intelligence Fails' by Robert Jervis by Ben Stewart

Jun 20, 2023 · 21:59


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book summary: 'Why Intelligence Fails' by Robert Jervis, published by Ben Stewart on June 20, 2023 on The Effective Altruism Forum. Here's a summary of ‘Why Intelligence Fails' by the political scientist Robert Jervis. It's a book analysing two cases where the U.S. intelligence community ‘failed': being slow to foresee the 1979 Iranian Revolution, and the overconfident and false assessment that Saddam Hussein had weapons of mass destruction in 2003. I'm interested in summarising more books that contain valuable insights but are outside the typical EA canon. If you'd like more of this or have a book to suggest, let me know. Key takeaways Good intelligence generally requires the relevant agency and country office to prioritise the topic and direct scarce resources to it. Good intelligence in a foreign country requires a dedicated diplomatic and covert collection corps with language skills and contextual knowledge. Intelligence analysis can be deficient in critical review, external expertise, and social-scientific methodology. Access to classified information only generates useful insight for some phenomena. Priors can be critical in determining interpretation within intelligence, and they can often go unchallenged. Political pressure can have a significant effect on analysis, but is hard to pin down. If the justification of an intelligence conclusion is unpublished, you can still interrogate it by asking: whether the topic would have been given sufficient priority and resources by the relevant intelligence organisation whether classified information, if available, would be likely to yield insight whether pre-existing beliefs are likely to bias analysis whether political pressures could significantly affect analysis Some correctives to intelligence failures which may be useful to EA: demand sharp, explicit, and well-tracked predictions demand early warning indicators, and notice when beliefs can only be disproven at a late stage consider negative indicators - 'dogs that don't bark', i.e. things that the view implies should not happen use critical engagement by peers and external experts, especially by challenging fundamental beliefs that influence what seems plausible and provide alternative hypotheses and interpretations use red-teams, pre-mortems, and post-mortems. Overall, I've found the book to somewhat demystify intelligence analysis. You should contextualise a piece of analysis with respect to the psychology and resources involved, including whether classified information would be of significant benefit. I have become more sceptical of intelligence, but the methodology of focusing on two known failures - selecting on the dependent variable - mean that I hesitate to become too pessimistic about intelligence as a whole and as it functions today. Why it's relevant to EA The most direct application of this topic is to the improvement of institutional decision-making, but there is value for any cause area that depends on conducting or interpreting analysis of state and non-state adversaries, such as in biosecurity, nuclear war, or great power conflict. This topic may also contribute to the reader's sense of when and how much one should defer to the outputs of intelligence communities. Deference is motivated by their access to classified information and presumed analytic capability. 
However, Tetlock's ‘Expert Political Judgment' cast doubt on the value of classified information for improving prediction compared to generalist members of the public. Finally, assessments of the IC's epistemic practices might offer lessons for how an intellectual community should grapple with information hazards, both intellectually and socially. More broadly, the IC is an example of a group pursuing complex, decision-relevant analysis in a high-uncertainty environment. Their successes and ...

The Nonlinear Library
EA - AGI Catastrophe and Takeover: Some Reference Class-Based Priors by zdgroff

May 25, 2023 · 12:40


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AGI Catastrophe and Takeover: Some Reference Class-Based Priors, published by zdgroff on May 24, 2023 on The Effective Altruism Forum. This is a linkpost for I am grateful to Holly Elmore, Michael Aird, Bruce Tsai, Tamay Besiroglu, Zach Stein-Perlman, Tyler John, and Kit Harris for pointers or feedback on this document. Executive Summary Overview In this document, I collect and describe reference classes for the risk of catastrophe from superhuman artificial general intelligence (AGI). On some accounts, reference classes are the best starting point for forecasts, even though they often feel unintuitive. To my knowledge, nobody has previously attempted this for risks from superhuman AGI. This is to a large degree because superhuman AGI is in a real sense unprecedented. Yet there are some reference classes or at least analogies people have cited to think about the impacts of superhuman AI, such as the impacts of human intelligence, corporations, or, increasingly, the most advanced current AI systems. My high-level takeaway is that different ways of integrating and interpreting reference classes generate priors on AGI-caused human extinction by 2070 anywhere between 1/10000 and 1/6 (mean of ~0.03%-4%). Reference classes offer a non-speculative case for concern with AGI-related risks. On this account, AGI risk is not a case of Pascal's mugging, but most reference classes do not support greater-than-even odds of doom. The reference classes I look at generate a prior for AGI control over current human resources anywhere between 5% and 60% (mean of ~16-26%). The latter is a distinctive result of the reference class exercise: the expected degree of AGI control over the world looks to far exceed the odds of human extinction by a sizable margin on these priors. The extent of existential risk, including permanent disempowerment, should fall somewhere between these two ranges. This effort is a rough, non-academic exercise and requires a number of subjective judgment calls. At times I play a bit fast and loose with the exact model I am using; the work lacks the ideal level of theoretical grounding. Nonetheless, I think the appropriate prior is likely to look something like what I offer here. I encourage intuitive updates and do not recommend these priors as the final word. Approach I collect sets of events that superhuman AGI-caused extinction or takeover would be plausibly representative of, ex ante. Interpreting and aggregating them requires a number of data collection decisions, the most important of which I detail here: For each reference class, I collect benchmarks for the likelihood of one or two things: Human extinction AI capture of humanity's available resources. Many risks and reference classes are properly thought of as annualised risks (e.g., the yearly chance of a major AI-related disaster or extinction from asteroid), but some make more sense as risks from a one-time event (e.g., the chance that the creation of a major AI-related disaster or a given asteroid hit causes human extinction). For this reason, I aggregate three types of estimates (see the full document for the latter two types of estimates): 50-Year Risk (e.g. risk of a major AI disaster in 50 years) 10-Year Risk (e.g. risk of a major AI disaster in 10 years) Risk Per Event (e.g. 
risk of a major AI disaster per invention) Given that there are dozens or hundreds of reference classes, I summarise them in a few ways: Minimum and maximum Weighted arithmetic mean (i.e., weighted average) I “winsorise”, i.e. replace 0 or 1 with the next-most extreme value. I intuitively downweight some reference classes. For details on weights, see the methodology. Weighted geometric mean Findings for Fifty-Year Impacts of Superhuman AI See the full document and spreadsheet for further details on how I arrive at these figures. ...
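
As a rough sketch of the aggregation procedure described above (winsorising the most extreme reference-class estimates, then taking weighted arithmetic and geometric means), here is a minimal illustration, not from the post. The probabilities and weights are placeholder values, not the post's actual reference-class data.

```python
import math

# Placeholder reference-class estimates of 50-year risk (not the post's data),
# with subjective weights for how informative each reference class seems.
estimates = [0.0, 0.001, 0.02, 0.2, 1.0]
weights = [0.5, 1.0, 1.0, 1.0, 0.5]

def winsorise(xs):
    """Replace exact 0s and 1s with the next-most extreme value, as the summary describes."""
    inner = [x for x in xs if 0.0 < x < 1.0]
    lo, hi = min(inner), max(inner)
    return [lo if x == 0.0 else hi if x == 1.0 else x for x in xs]

xs = winsorise(estimates)
total_w = sum(weights)
arith = sum(w * x for w, x in zip(weights, xs)) / total_w
geo = math.exp(sum(w * math.log(x) for w, x in zip(weights, xs)) / total_w)

print(f"weighted arithmetic mean: {arith:.4f}")
print(f"weighted geometric mean:  {geo:.4f}")
```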

Code Story
No Priors: Jensen Huang, Founder & CEO of Nvidia

May 12, 2023 · 45:35


This week, we're sharing something special: The No Priors podcast. No Priors is your guide to the AI revolution. At this moment of inflection in technology, co-hosts Elad Gil and Sarah Guo ask the world's leading AI engineers, researchers and founders the biggest questions - people like Cristobal Valenzuela, Founder/CEO RunwayML and Kevin Scott, CTO of Microsoft. They ask questions like: How far away is AGI? What markets are at risk for disruption? How will commerce, culture, and society change? What's happening in state-of-the-art research? In this episode, Jensen Huang, legendary founder/CEO of Nvidia, talks about how Nvidia is powering AI models, their latest chips, how he runs Nvidia, and the AI applications he's most excited about. You can find No Priors wherever you get your podcasts. And, thanks again for listening. Links: Apple: https://podcasts.apple.com/us/podcast/no-priors-artificial-intelligence-machine-learning/id1668002688 | Spotify: https://open.spotify.com/show/0O65xhqvGVhpgdIrrdlEYk | Support this podcast at https://redcircle.com/code-story/donations | Advertising Inquiries: https://redcircle.com/brands | Privacy & Opt-Out: https://redcircle.com/privacy

E21: VC Insights on Investing in Artificial Intelligence with Sarah Guo and Elad Gil of No Priors Podcast

May 2, 2023 · 61:09


Nathan Labenz and Erik Torenberg sit down with Sarah Guo and Elad Gil, notable investors and co-hosts of the AI-focused No Priors podcast. They discuss how Sarah and Elad are approaching AI investment opportunities right now, how that differs from how they've thought about investing in the past, where in the stack from hardware to applications they expect to see value accrue, what modes of human-AI interaction they are most interested in, and more. Sarah is the founder of $100M AI-focused venture fund Conviction VC, which she launched last fall. She was previously General Partner at Greylock. Elad is a serial entrepreneur and a startup investor. He has invested in over 40 companies now worth $1B or more each, and is also author of the High Growth Handbook. This episode is the first in a series centered on talking to rising voices in AI media, people who are not only working overtime to understand everything going on in AI, but also creating thought leadership and educational content meant to help others get up to speed as well. RECOMMENDED PODCAST: The HR industry is at a crossroads. What will it take to construct the next generation of incredible businesses – and where can people leaders have the most business impact? Hosts Nolan Church and Kelli Dragovich have been through it all, the highs and the lows – IPOs, layoffs, executive turnover, board meetings, culture changes, and more. With a lineup of industry vets and experts, Nolan and Kelli break down the nitty-gritty details, trade-offs, and dynamics of constructing high performing companies. Through unfiltered conversations that can only happen between seasoned practitioners, Kelli and Nolan dive deep into the kind of leadership-level strategy that often happens behind closed doors. Check out the first episode with the architect of Netflix's culture deck Patty McCord. https://link.chtbl.com/hrheretics LINKS: No Priors Podcast on Spotify: https://open.spotify.com/show/0O65xhqvGVhpgdIrrdlEYk No Priors on Apple Podcast: https://podcasts.apple.com/us/podcast/no-priors-artificial-intelligence-machine-learning/id1668002688 Elad Gil's blog: https://blog.eladgil.com/ Sarah Guo's blog: https://sarahguo.com/blog TIMESTAMPS: (00:00) Episode preview (04:43) What is software 3.0 (09:14) Disruption coming from startups or incumbents? (13:42) Sarah and Elad identify overlooked investment opportunities in AI (15:19) Sponsor: Omneky (15:46) Future of social media (22:45) AI agents & personal co-pilots (25:32) Where to invest in AI? (31:11) How our kids will interact with AI (34:50) How to gain conviction as an investor in AI (45:07) When should founders raise money and when should they bootstrap? (46:28) How should startups spend their capital now that we have AI capabilities? (48:10) Sarah & Elad's favorite products in AI (51:39) Would Sarah & Elad get a neuralink implant? (53:41) AI hopes and fears TWITTER: @CogRev_Podcast @labenz (Nathan) @eriktorenberg (Erik) @sarahnormous (Sarah) @eladgil (Elad) Thank you Omneky for sponsoring The Cognitive Revolution. Omneky is an omnichannel creative generation platform that lets you launch hundreds of thousands of ad iterations that actually work, customized across all platforms, with a click of a button. Omneky combines generative AI and real-time advertising data. Mention "Cog Rev" for 10% off. More show notes and reading material released in our Substack: https://cognitiverevolution.substack.com/

Machine Learning Street Talk
Unlocking the Brain's Mysteries: Chris Eliasmith on Spiking Neural Networks and the Future of Human-Machine Interaction

Apr 10, 2023 · 109:36


Patreon: https://www.patreon.com/mlst Discord: https://discord.gg/ESrGqhf5CB Twitter: https://twitter.com/MLStreetTalk Chris Eliasmith is a renowned interdisciplinary researcher, author, and professor at the University of Waterloo, where he holds the prestigious Canada Research Chair in Theoretical Neuroscience. As the Founding Director of the Centre for Theoretical Neuroscience, Eliasmith leads the Computational Neuroscience Research Group in exploring the mysteries of the brain and its complex functions. His groundbreaking work, including the Neural Engineering Framework, Neural Engineering Objects software environment, and the Semantic Pointer Architecture, has led to the development of Spaun, the most advanced functional brain simulation to date. Among his numerous achievements, Eliasmith has received the 2015 NSERC Polanyi Award and authored two influential books, "How to Build a Brain" and "Neural Engineering." Chris' homepage: http://arts.uwaterloo.ca/~celiasmi/ Interviewers: Dr. Tim Scarfe and Dr. Keith Duggar TOC: Intro to Chris [00:00:00] Continuous Representation in Biologically Plausible Neural Networks [00:06:49] Legendre Memory Unit and Spatial Semantic Pointer [00:14:36] Large Contexts and Data in Language Models [00:20:30] Spatial Semantic Pointers and Continuous Representations [00:24:38] Auto Convolution [00:30:12] Abstractions and the Continuity [00:36:33] Compression, Sparsity, and Brain Representations [00:42:52] Continual Learning and Real-World Interactions [00:48:05] Robust Generalization in LLMs and Priors [00:56:11] Chip design [01:00:41] Chomsky + Computational Power of NNs and Recursion [01:04:02] Spiking Neural Networks and Applications [01:13:07] Limits of Empirical Learning [01:22:43] Philosophy of Mind, Consciousness etc [01:25:35] Future of human machine interaction [01:41:28] Future research and advice to young researchers [01:45:06] Refs: http://compneuro.uwaterloo.ca/publications/dumont2023.html http://compneuro.uwaterloo.ca/publications/voelker2019lmu.html http://compneuro.uwaterloo.ca/publications/voelker2018.html http://compneuro.uwaterloo.ca/publications/lu2019.html https://www.youtube.com/watch?v=I5h-xjddzlY

Kopfhörer - Vorträge
Father Elias Füllenbach OP: "The Wittenberg 'Judensau' on Trial - Our Handling of Anti-Jewish Church Art"

Feb 12, 2023


Our handling of anti-Jewish church art. A lecture by the prior of the Düsseldorf Dominican convent, in cooperation with the Gesellschaft für Christlich-Jüdische Zusammenarbeit (CJZ) Düsseldorf e.V. (Society for Christian-Jewish Cooperation).

No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

AI is transforming our future, but what does that really mean? In ten years, will humans be forced to please our AGI overlords or will we have unlocked unlimited capacity for human potential? That's why Sarah Guo and Elad Gil started this new podcast, named No Priors. In each episode, Sarah and Elad talk with the leading engineers, researchers and founders in AI, across the stack. We'll talk about the technical state of the art, how that impacts business, and get them to predict what's next. Follow the podcast wherever you listen so you never miss an episode. We'll see you next week with a new episode. Email feedback to show@no-priors.com

Radio Crystal Blue
Radio Crystal Blue 1/31/23 part 1

Jan 31, 2023 · 157:26


Chuck Berry "Roll Over Beethoven" Jerry Lee Lewis "Wild One" The Bobby Fuller Four "Saturday Night" Solomon Burke "It's All Right" David Uosikkinen's Songs In The Pocket "You Can't Sit Down" - Essential Songs Of Philadelphia & "Expressway To Your Heart" - The Philly Special www.songsinthepocket.org ***************** Fleetwood Mac "World In Harmony"- Then Play On Creedence Clearwater Revival - "Born On The Bayou" The Concert Grateful Dead "Truckin" - American Beauty *************************** "Now the gray geese call, and winter finds its voice/And the leaves will fall as if they had a choice/Just another bell unanswered, another unplayed song/If our fate was in the heavens, this time the heavens got it wrong." The Byrds "Draft Morning" - The Notorious Byrd Brothers The Byrds "Mind Gardens" - Younger Than Yesterday Crosby, Stills, & Nash "Guinnevere" - s/t Crosby, Stills, Nash & Young "Deja Vu" - Deja Vu Jefferson Airplane "Wooden Ships" Volunteers Stephen Stills "Sit Yourself Down s/t Phil Collins "That's Just The Way It Is" ...But Seriously Kenny White "One Bell Unanswered" Long List of Priors www.kennywhite.net *************************** Eric Harrison "Astor Place" No Defenses www.ericharrisonmusic.com Maple Run Band "when You're Around" - Used To Be The Next Big Thing www.maplerunband.com Hymn For Her "Scoop" Pop-n-Downers www.hymnforher.com Meg Williams "Feel That Way" - Live & Learn www.megwilliamsmusic.com Miss Tess "The Moon Is An Ashtray" - The Moon Is An Ashtray www.misstessmusic.com Nicholas Edward Williams "Green Rocky Road" - Folk Songs For Old Times www.nicholasedwardwilliams.com Julia Sanders "Place Where We All Meet" - Morning Star www.juliasandersmusic.com Few Miles South "Wiregrass" - Wiregrass www.fewmilessouth.com --- Send in a voice message: https://podcasters.spotify.com/pod/show/radiocblue/message Support this podcast: https://podcasters.spotify.com/pod/show/radiocblue/support

Holy Crap Records Podcast
Ep 242! With music by: Beton, Kissed by an Animal, Priors, Cor de Lux, The Mary Veils, Cor de Lux, The Zelles

Holy Crap Records Podcast

Play Episode Listen Later Dec 27, 2022 44:04


Best of the underground, week of Dec 27, 2022: The youth kind of gets it. John misremembers a Wu-Tang cassette. Luckily there are 7 great songs! (All podcasts and reviews are on www.hlycrp.com, and you can also follow us on Facebook, Instagram, Twitter, Spotify, and Apple Podcasts.)

Bill Handel on Demand
BHS - 7A - 'Catfishing' Virginia Cop Had History of Violence and Gavin Newsom's Electric Car Mandate Examined

Bill Handel on Demand

Play Episode Listen Later Dec 7, 2022 27:27


The Virginia police officer who 'catfished' his victim and killed their family in California was detained in 2016 after having made violent threats. The LAPD took what's being considered highly unusual action after its handling of the CBS sex scandal was called into question. California Governor Gavin Newsom's electric car mandate will either save the world or stall the freeways. And Raphael Warnock's win in Georgia now forges a path for Democrats through the new battlegrounds ahead.

The Nonlinear Library
LW - K-types vs T-types — what priors do you have? by strawberry calm

The Nonlinear Library

Play Episode Listen Later Nov 4, 2022 12:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: K-types vs T-types — what priors do you have?, published by strawberry calm on November 3, 2022 on LessWrong. Summary: There are two types of people, K-types and T-types. K-types want theories with low Kolmogorov complexity and T-types want theories with low time-complexity. This classification correlates with other classifications and with certain personality traits. Epistemic status: I'm somewhat confident that this classification is real and that it will help you understand why people believe the things they do. If there are major flaws in my understanding then hopefully someone will point that out. K-types vs T-types What makes a good theory? There's broad consensus that good theories should fit our observations. Unfortunately there's less consensus about how to compare the different theories that fit our observations — if we have two theories which both predict our observations to the exact same extent then how do we decide which to endorse? We can't shrug our shoulders and say "let's treat them all equally" because then we won't be able to predict anything at all about future observations. This is a consequence of the No Free Lunch Theorem: there are exactly as many theories which fit the seen observations and predict the future will look like X as there are which fit the seen observations and predict the future will look like not-X. So we can't predict anything unless we can say "these theories fitting the observations are better than these other theories which fit the observations". There are two types of people, which I'm calling "K-types" and "T-types", who differ in which theories they pick among those that fit the observations. K-types and T-types have different priors. K-types prefer theories which are short over theories which are long. They want theories you can describe in very few words. But they don't care how many inferential steps it takes to derive our observations within the theory. In contrast, T-types prefer theories which are quick over theories which are slow. They care how many inferential steps it takes to derive our observations within the theory, and are willing to accept longer theories if it rapidly speeds up derivation. Algorithmic characterisation. In computer science terminology, we can think of a theory as a computer program which outputs predictions. K-types penalise the Kolmogorov complexity of the program (also called the description complexity), whereas T-types penalise the time-complexity (also called the computational complexity). The T-types might still be doing perfect Bayesian reasoning even if their prior credences depend on time-complexity. Bayesian reasoning is agnostic about the prior, so there's nothing defective about assigning a low prior to programs with high time-complexity. However, T-types will deviate from Solomonoff inductors, who use a prior which exponentially decays in Kolmogorov complexity. Proof-theoretic characterisation. When translating between proof theory and computer science, (computer program, computational steps, output) is mapped to (axioms, deductive steps, theorems) respectively. Kolmogorov complexity maps to "total length of the axioms" and time-complexity maps to "number of deductive steps". K-types don't care how many steps there are in the proof, they only care about the number of axioms used in the proof. 
T-types do care how many steps there are in the proof, whether those steps are axioms or inferences. Occam's Razor characterisation. Both K-types and T-types can claim to be inheritors of Occam's Razor, in that both types prefer simple theories. But they interpret "simplicity" in two different ways. K-types consider the simplicity of the assumptions alone, whereas T-types consider the simplicity of the assumptions plus the derivation. This is the key idea. Both can accuse the other of ...
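To make the K-type/T-type distinction concrete, here is a toy sketch in Python (not from the post itself; the theories, complexity numbers, and the exact form of each prior are invented for illustration). Two candidate theories fit the data equally well; a description-length prior and a runtime prior then rank them differently.

# Two toy "theories" that fit the observations equally well, but differ in
# description length (bits) and in how many computational steps they need
# to derive the observations. All numbers are made up.
theories = {
    "short_but_slow": {"description_bits": 20, "time_steps": 10_000},
    "long_but_fast":  {"description_bits": 200, "time_steps": 50},
}

def k_type_prior(t):
    # Solomonoff-style prior: decays exponentially in description length.
    return 2.0 ** (-t["description_bits"])

def t_type_prior(t):
    # A speed prior: decays with running time (here simply 1 / time_steps).
    return 1.0 / t["time_steps"]

likelihood = 1.0  # both theories predict the observations equally well

for name, prior in [("K-type", k_type_prior), ("T-type", t_type_prior)]:
    posterior = {k: prior(v) * likelihood for k, v in theories.items()}
    print(name, "reasoner prefers:", max(posterior, key=posterior.get))

Running this prints that the K-type reasoner picks the short-but-slow theory while the T-type reasoner picks the long-but-fast one, which is exactly the divergence the post describes.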

NFL Fantasy Live
NFL FANTASY FOOTBALL SHOW: Breece Hall is the Jets Offense

NFL Fantasy Live

Play Episode Listen Later Oct 17, 2022 48:19


Marcas Grant and Michael F. Florio are back for a special new edition of the NFL Fantasy Football Podcast live from the Fantasy Lounge! The hosts start by discussing the biggest news from around the league, talk about Week 6's top performers, give you their 5 biggest fantasy takeaways from Sunday, go over waiver targets, talk about which players are deserving of Madden ratings, and Re-Evaluate our Priors! The NFL Fantasy Football Podcast is part of the NFL Podcast Network. See omnystudio.com/listener for privacy information.

NFL: Good Morning Football
NFL FANTASY FOOTBALL SHOW: Breece Hall is the Jets Offense

NFL: Good Morning Football

Play Episode Listen Later Oct 17, 2022 48:19 Transcription Available


Marcas Grant and Michael F. Florio are back for a special new edition of the NFL Fantasy Football Podcast live from the Fantasy Lounge! The hosts start by discussing the biggest news from around the league, talk about Week 6's top performers, give you their 5 biggest fantasy takeaways from Sunday, go over waiver targets, talk about which players are deserving of Madden ratings, and Re-Evaluate our Priors! The NFL Fantasy Football Podcast is part of the NFL Podcast Network. See omnystudio.com/listener for privacy information.

The Nonlinear Library
AF - Cataloguing Priors in Theory and Practice by Paul Bricman

The Nonlinear Library

Play Episode Listen Later Oct 13, 2022 12:15


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cataloguing Priors in Theory and Practice, published by Paul Bricman on October 13, 2022 on The AI Alignment Forum. This post is part of my hypothesis subspace sequence, a living collection of proposals I'm exploring at Refine. Preceded by an exploration of Boolean primitives in the context of coupled optimizers. Thanks Alexander Oldenziel and Paul Colognese for discussions which inspired this post. Intro Simplicity prior, speed prior, and stability prior — what do they all have in common? They are all means of tilting an optimization surface towards solutions with certain properties. In other words, they are all heuristics informing the navigation of model space, trainer space, etc. However, what brings them together is also a systematic divide between their theoretical/conceptual/abstract framings and their practical/engineering implementations. All those priors appear to have been used in contemporary ML in one form or another, yet conceptual-heavy researchers are often unaware of those interesting data points, while ML engineers often treat those implementations as mere tricks to improve performance (e.g. generalization). In the language of coupled optimizers I've explored over the few past posts in the sequence, such heuristics are artifacts of meta-optimization (e.g. a human crafting a trainer by building in certain such tendencies), and often tend to be direct human-made artifacts, rather than the result of downstream optimization. Though this need not be the case, as the simplicity prior happens to itself be quite simple... It might emerge naturally from a trainer which itself is trained to be simple. Similarly, the speed prior happens to itself be quite fast, as penalizing a duration is trivial. It might emerge naturally from a trainer, should it be trained to itself be fast. Anyway, let's briefly catalog a few popular priors, describe the rationale for employing them, list instances of their use in ML, and finally document possible failure modes associated with blindly following them. Priors Some members of the list can be better described as heuristics or biases than priors. However, there are some basic connections between those in that they all cause an optimization process to yield certain outcomes more than others. If you start with a prior of possible ML model parametrizations and use training data to update towards your final distribution, your choice of prior will naturally influence the posterior. This prior can be informed by heuristics, such as "we're more interested in simple models than complex ones from the get-go." Bias as in structural bias, inductive bias, and bias-variance trade-off describes a similar process of tailoring the ML model to broadly yield certain types of results efficiently. Simplicity Informally known as Occam's razor, and extremely formally known as Solomonoff prior, the simplicity prior biases optimization towards solutions which are simple. "Simple" here is often operationalized using the minimum description length: what's the shortest description of an algorithm/model/concept/world/etc. required to accurately specify it? Simple candidates are then the ones with a particularly short such shortest length. The rationale behind employing the simplicity prior in an optimization process is that it systematically reduces the variance of the solution. 
This means that it increases the odds that the solution will behave in a similar way in different situations, as opposed to growing too reliant on the idiosyncrasies of your finite/limited/bound optimization process. When training ML models, simplicity tends to yield strong generalization performance. When building world models, simplicity tends to yield theories which hold better against new empirical data. In ML, the use of simplicity is most associated with the bias-variance tr...
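As a rough engineering illustration of the post's claim that these priors already show up in ML practice as regularizers, the PyTorch sketch below adds a simplicity penalty (L1 on the weights, a crude stand-in for description length) and a speed penalty (mean hidden activation, a crude but differentiable stand-in for computation used) to an ordinary task loss. The architecture, penalty choices, and coefficients are placeholders, not anything proposed in the post.

import torch
import torch.nn as nn

# A tiny model; the architecture is arbitrary, just something to regularize.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(64, 10), torch.randn(64, 1)

task_loss = nn.functional.mse_loss(model(x), y)

# "Simplicity prior" as an everyday trick: L1 weight penalty, a crude proxy
# for preferring models with a short description.
simplicity_penalty = sum(p.abs().sum() for p in model.parameters())

# "Speed prior" stand-in: penalize average hidden activation, a rough,
# differentiable proxy for how much computation the forward pass uses.
hidden = torch.relu(model[0](x))
speed_penalty = hidden.mean()

loss = task_loss + 1e-4 * simplicity_penalty + 1e-2 * speed_penalty
loss.backward()  # gradients now reflect task fit plus both penalties

The point is only that each added term tilts the optimization surface toward a certain family of solutions, which is the sense in which the post calls these regularizers priors.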

The Athletic Football Show: A show about the NFL
State of the Patriots, confronting our NFC East priors, and a TNF preview with Mike Jones; Plus, debuting QB Therapy with Daniel Jones and The Ringer's Danny Heifetz

The Athletic Football Show: A show about the NFL

Play Episode Listen Later Sep 29, 2022 69:45


Mike Jones joins Robert Mays on this episode of The Athletic Football show to dig into the latest news and stories around the league. The guys discuss Mac Jones' ankle injury and the state of the Patriots, take a second look at some preseason takes they'd like back, and preview the Thursday night showdown between the Dolphins and Bengals. Follow Robert on Twitter: @robertmays Follow Mike on Twitter: @ByMikeJones Subscribe to The Athletic Football Show... Apple Spotify YouTube 1:45 News and notes 10:30 Mac Jones' injury and the state of the Patriots 21:00 Take Two on the Cowboys and Jalen Hurts 33:15 Washington and Carson Wentz 40:24 Dolphins-Bengals preview 43:36 QB Therapy starring Daniel Jones and The Ringer's Danny Heifetz Learn more about your ad choices. Visit megaphone.fm/adchoices

The Taekcast: A (mostly) Sports Podcast
Ep. 279 - Adjusting Our Priors w/ Kevin Cole From PFF

The Taekcast: A (mostly) Sports Podcast

Play Episode Listen Later Sep 29, 2022 64:31


Davis Mattek is joined by Kevin Cole to discuss what narratives have been proven wrong this NFL season, shifts in the betting market, the five greatest QBs ever, and much more. www.patreon.com/taekcast

The Nonlinear Library
AF - Attempts at Forwarding Speed Priors by james.lucassen

The Nonlinear Library

Play Episode Listen Later Sep 24, 2022 27:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Attempts at Forwarding Speed Priors, published by james.lucassen on September 24, 2022 on The AI Alignment Forum. This post summarizes research conducted under the mentorship of Evan Hubinger, and was assisted by collaboration with Pranav Gade, discussions with Adam Jermyn, and draft feedback from Yonadav Shavit. Summary: Forwarding priors is a subproblem of deceptive alignment, because if we want to use regularization to create a prior for our search over models that will disincentivize deception, we need to identify a prior that not only gives us some useful guarantees but also induces inner searches to have similar guarantees. I tried a bunch of stuff this summer to find priors that forward, and roughly none of it worked. So I'm just sharing the avenues I explored in roughly chronological order, to explain where each thread left off. Using dovetailing as a toy model of what an inner search over algorithms might look like, we can write down some rough formulas for the prior implemented by a dovetailer, and (kind of) the cost of a dovetailer on such a prior. But this suggests very discontinuous behavior, and requires a bunch of strong and specific assumptions, so maybe it's not the most useful model in general. Minimum boolean circuit tree size does seem to forward, but at the cost of probably forbidding all generalization ability. We can offer our models cheap tools to try and get object-level algorithms to occupy a greater fraction of the overall runtime cost, but this quickly runs into a variety of problems. If we incentivize the model to do explicit instead of implicit meta-learning, we can access the code and runtime for lower-level algorithms that were previously inaccessible when run implicitly. However, this still leaves us with some problems, including a (relaxed) version of the original forwarding problem. Average-case speed priors have a bias against large hypothesis classes which makes them favor lookup-table-like strategies, but worst-case speed priors leave all computations except the limiting case highly unconstrained. It seems hard to prove that a fixed-point must exist, because the map from priors to priors that we are using is so discontinuous. Motivation: Deceptive alignment seems like a really big problem, especially because we can't use behavioral incentives to prevent it. One alternative to behavioral incentives is regularization, AKA mechanistic priors - we look at the structure of the model to try and figure out if it's deceptive or not, then penalize models accordingly. In particular, there are some hopes that a speed prior might be anti-deceptive. This is because in the extreme case, the fastest way to do a particular task never involves deception. This is because it's just extra steps: spending the extra compute to model your current situation, understanding that you are being trained and that you need to protect your goal, and figuring out that you should comply with the training objective for now. All that takes more computation than just being inner-aligned and completing the training objective because you want to. The deceptive agent saves on complexity by having a simple value function and reasoning its way to the training objective - the non-deceptive agent does the exact opposite and saves on computation time by just storing the training objective internally. 
So what if we formalize “speed” as boolean circuit size, and pick the smallest circuit that performs well on our task? Do we get a guarantee that it's not deceptive? Well, no. In short, this is because the fastest search over algorithms does not necessarily find the fastest algorithm. For example, imagine you're tasked to solve a problem as fast as possible. You take a moment to think about the fastest way out, conducting an inner search over object-level algorithms. Would you sit around and th...
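The "pick the smallest/fastest program that performs well" idea at the outer level can be pictured with a deliberately tiny Python sketch (illustrative only; the post's actual formalization is in terms of boolean circuit size, and its whole point is that this outer-level guarantee need not forward to any search the selected program runs internally). Two candidate programs compute the same function; after filtering to the ones that fit the data, the faster one is selected.

import time

# Training data for a trivial task: f(x) = 2 * x.
data = [(x, 2 * x) for x in range(10)]

def direct(x):
    return x + x

def wasteful(x):
    # Same function, computed with a lot of useless extra work.
    total = 0
    for _ in range(1_000):
        total = x + x
    return total

def fits(program):
    return all(program(x) == y for x, y in data)

def runtime(program):
    start = time.perf_counter()
    for x, _ in data * 100:
        program(x)
    return time.perf_counter() - start

candidates = [direct, wasteful]
good = [p for p in candidates if fits(p)]  # programs that perform well on the task
selected = min(good, key=runtime)          # "speed prior": the fastest one wins
print("selected program:", selected.__name__)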

The Dori Monson Show
Hour 1: Man with 28 priors rapes a woman in Greenwood

The Dori Monson Show

Play Episode Listen Later Sep 2, 2022 32:44


12pm - The Big Lead @ Noon // Biden's divisive speech // GUEST: Rachel Farley, a Tacoma resident who visited Seattle - had her car broken into AND later that night set on fire // Man with 28 priors rapes a woman in Greenwood. See omnystudio.com/listener for privacy information.

Bet The Edge
Changing our NFC Priors: 49ers, Bucs, Vikings and More

Bet The Edge

Play Episode Listen Later Sep 2, 2022 44:29


Vaughn Dalzell (@VmoneySports) and Jay Croucher (@croucherJD) look back on Thursday night in College football before welcoming in Sam Panayotovich (@spshoot) to handicap teams they've changed their opinions on for better and worse in the NFC since the start of the preseason. Among the teams discussed: 49ers, Eagles, Vikings, Bucs and Giants. (04:10) – Recapping Thursday Night in College Football (10:50) – Why we're back backing the 49ers, Vikings and others right now (26:10) – Analyzing if the Giants, Saints and Bucs are prime fade candidates (39:45) – Vaughn and Jay's Edge of the Day

The Double Pivot: Soccer analysis, analytics, and commentary

It's been four games, are we rethinking anything? On Aston Villa, West Ham, Spurs, Chelsea, Arsenal and Brighton. Support the show

Bet The Edge
Changing our AFC Priors: Steelers, Patriots, Dolphins and More

Bet The Edge

Play Episode Listen Later Aug 26, 2022 53:34


August 26: Vaughn Dalzell (@VmoneySports) and Drew Dinsick (@whale_capper) analyze what teams they're tracking in Week 0 of College Football and Week 3 of the Preseason before welcoming PointsBet Head Trader Jay Croucher (@croucherJD) and Lawrence Jackson (@LordDontLose) to handicap teams they've changed their priors on in the AFC since the start of the preseason. Among the teams discussed: Steelers, Raiders, Patriots, Ravens and Dolphins. (4:26) – Can the Bills (+6) at Panthers and Ravens (-6) at Commanders continue their stellar preseason record ATS? (9:46) – Why we're higher on the Steelers, Raiders and others right now (28:12) – Time to fade the Bengals, Dolphins and Titans? (46:06) – Vaughn and Drew's Edge of the Day

Song Surfing
E64 • Oh my! Music by Deca feat. Homeboy Sandman, Mild Horses, R3id, Lady Moonbeam, and PRIORS

Song Surfing

Play Episode Listen Later Aug 23, 2022 32:08


Song Surfing is a music podcast featuring a playlist of the best indie and underground music from around the world.  On this episode we'll hear wall-of-fuzz shoegaze, we'll be astounded by clever word play, and we'll hear adventurous storytelling songwriting at its finest. Excellent tunes from New York City, London, Chicago, Lakeland (Florida), and Montreal. Music By: Deca feat. Homeboy Sandman, Mild Horses, R3id, Lady Moonbeam, and PRIORS Visit the https://songsurfingpodcast.com/episode-64/ (Show Notes Page) for links to the music featured on this episode. Mentioned in this episode: Plugin Boutique Use our referral link next time you're shopping for plugins at pluginboutique.com https://pluginboutique.com/?a_aid=songsurfing

Cast Iron Brains -- A Podcast
Tickling Our Own Priors Bone

Cast Iron Brains -- A Podcast

Play Episode Listen Later Jul 26, 2022 112:43


This week CIB is here to yap about how ill-equipped we are for Our Bogus Future, why animals attack, when a recession is definitely not a recession, and all the things Congress will and won't get done in the next couple of months, among plenty of other diversions. Listen, if you must! Has something we said, or failed to say, made you FEEL something? You can tell us all about it on Facebook or Twitter, leave a comment on the show's page on our website, or you can send us an email here. Enjoy! Show Rundown: Open — Scattergories at the bar, New Jersey's curious plan for helping crackheads 9:05 — Our Bogus Future, brought to you by the lithium-ion battery 16:17 — WHEN ANIMALS ATTACK!!!, as explained everywhere but FOX, curiously enough 35:28 — Janet Yellen explains how a recession isn't actually a recession 46:38 — What will the Democratically-controlled Congress get done before the end of the year? 1:20:38 — Study concludes that the chemical imbalance theory of depression is bogus 1:37:59 — Wrap up! Fire of Love, Nope, M. Night Shyamalan, and other stuff we've been watching. Relevant Linkage can be found at the webpage for this episode: www.brainiron.com/podcast/episode0104

This Week in Machine Learning & Artificial Intelligence (AI) Podcast
Equivariant Priors for Compressed Sensing with Arash Behboodi - #584

This Week in Machine Learning & Artificial Intelligence (AI) Podcast

Play Episode Listen Later Jul 25, 2022 39:30


Today we're joined by Arash Behboodi, a machine learning researcher at Qualcomm Technologies. In our conversation with Arash, we explore his paper Equivariant Priors for Compressed Sensing with Unknown Orientation, which proposes using equivariant generative models as a prior, shows that signals with unknown orientations can be recovered with iterative gradient descent on the latent space of these models, and provides additional theoretical recovery guarantees. We discuss the differences between compression and compressed sensing, how he was able to evolve a traditional VAE architecture to understand equivariance, and some of the research areas where he's applying this work, including cryo-electron microscopy. We also discuss a few of the other papers that his colleagues have submitted to the conference, including Overcoming Oscillations in Quantization-Aware Training, Variational On-the-Fly Personalization, and CITRIS: Causal Identifiability from Temporal Intervened Sequences. The complete show notes for this episode can be found at twimlai.com/go/584
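The "iterative gradient descent on the latent space" step mentioned in the episode can be sketched in a few lines of PyTorch. This is only a toy stand-in: a fixed random linear map plays the role of the (pre-trained, equivariant) generative model, the measurement matrix is random, and the unknown-orientation part of the paper is not modeled at all.

import torch

torch.manual_seed(0)

# Stand-ins: G plays the role of a pre-trained generative model's decoder,
# A is the compressed-sensing measurement matrix. Both are just random here.
latent_dim, signal_dim, num_measurements = 8, 64, 20
G = torch.randn(signal_dim, latent_dim)
A = torch.randn(num_measurements, signal_dim)

z_true = torch.randn(latent_dim)
y = A @ (G @ z_true)  # observed compressed measurements

# Recover the signal by gradient descent on the latent code z.
z = torch.zeros(latent_dim, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(2000):
    opt.zero_grad()
    loss = torch.sum((A @ (G @ z) - y) ** 2)
    loss.backward()
    opt.step()

print("reconstruction error:", torch.norm(G @ z - G @ z_true).item())

With more measurements than latent dimensions, the recovered signal G @ z ends up close to the true one; the paper's actual contribution is doing this with learned equivariant generators and unknown signal orientation, along with the accompanying recovery guarantees.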

Australia in the World
Ep. 93: The invasion of Ukraine and updating priors

Australia in the World

Play Episode Listen Later Mar 6, 2022 38:31


With the world watching in shock at Russia's invasion of Ukraine, Allan and Darren describe how the crisis, and in particular the world's response, are (and are not) causing them to reconsider their priors about how politics and international affairs works. Allan describes how impressed he has been with Europe's response, while Darren is completely surprised at the speed and magnitude of the economic and financial sanctions imposed on Russia, in particular its central bank. Meanwhile, Allan reflects on the contingency of the Biden presidency, wondering how things would have been different had Donald Trump been president and what that says about the variability of the United States as a factor in world politics. Darren considers the responses of regional powers such as China, India and the ASEAN countries. Finally, they discuss early implications for Australia. Relevant links Anne Applebaum, “The impossible suddenly became possible”, The Atlantic, 2 March 2022: https://www.theatlantic.com/ideas/archive/2022/03/putins-war-dispelled-the-worlds-illusions/623335/ China Talk (podcast), “The new old cold war with Tooze and Klein”, 1 March 2022: https://chinatalk.substack.com/p/the-new-old-cold-war-with-tooze-and?s=r Adam Tooze, “Chartbook #89 Russia's financial meltdown and the global dollar system”, 28 February 2022: https://adamtooze.substack.com/p/chartbook-89-russias-financial-meltdown?s=r Robert Keohane and Joseph Nye (1987) “Power and interdependence revisited”, International Organization 41(4): 725-753. http://www.rochelleterman.com/ir/sites/default/files/Keohane%20Nye%201987.pdf Patrick McKenzie, “Moving money internationally”, Bits about money (newsletter), 2 March 2022: https://bam.kalzumeus.com/archive/moving-money-internationally/ Paul Kelly, “Morrison's Mission: A Lowy Institute Paper”, Penguin Specials, February 2022: https://www.penguin.com.au/books/morrisons-mission-a-lowy-institute-paper-penguin-special-9780143778042 The Ezra Klein Show, “Can the West stop Russia by strangling its economy (with Adam Tooze), 1 March 2022: https://www.nytimes.com/2022/03/01/opinion/ezra-klein-podcast-adam-tooze.html