Podcasts about Extrapolation

Method for estimating new values beyond the range of known data points

  • 102 podcasts
  • 146 episodes
  • 43m average episode duration
  • 1 new episode per month
  • Latest episode: May 14, 2025

POPULARITY

(popularity trend chart, 2017–2024)


Best podcasts about Extrapolation

Latest podcast episodes about Extrapolation

Hospice Insights: The Law and Beyond
Hospice Audit Updates: David Beats Goliath

May 14, 2025 · 22:02


Hospice audits can have profound financial implications, particularly when the auditors use statistical extrapolation to identify an overpayment amount. The use of extrapolation runs across auditor types, including UPICs and the OIG, and can apply to Medicare and Medicaid. In this episode, Husch Blackwell's Meg Pekarske, Bryan Nowicki, and Emily Solum discuss recent experiences and successes in dealing with statistical extrapolations, as well as what the future of extrapolation looks like.
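For readers unfamiliar with the mechanics being litigated here, the sketch below shows, in a minimal and purely hypothetical way, how a statistical extrapolation of an overpayment works in principle: review a random sample of claims, estimate the average overpayment per claim, and project it to the universe of claims, usually taking a conservative lower confidence bound. All figures and the simple random sampling design are invented for illustration; real audit methodologies (stratified samples, specific confidence levels) differ.

```python
import math

# Hypothetical example: project an overpayment found in a claim sample
# to the full universe of claims (simple random sampling, illustrative only).
universe_size = 10_000          # total claims paid (assumed)
sample_overpayments = [0, 120.0, 0, 85.5, 0, 0, 210.0, 0, 0, 55.0]  # per sampled claim ($)

n = len(sample_overpayments)
mean = sum(sample_overpayments) / n
var = sum((x - mean) ** 2 for x in sample_overpayments) / (n - 1)
std_err = math.sqrt(var / n)

point_estimate = mean * universe_size
# Conservative lower bound of a one-sided 90% interval (z ~= 1.282),
# ignoring finite population correction for simplicity.
lower_bound = max(0.0, mean - 1.282 * std_err) * universe_size

print(f"Point estimate of total overpayment: ${point_estimate:,.2f}")
print(f"Conservative (lower-bound) extrapolation: ${lower_bound:,.2f}")
```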

Study Motivation by Motivation2Study
Neuroscientist: This MORNING ROUTINE is Scientifically Proven to Boost Motivation

Apr 18, 2025 · 9:58


With the help of neuroscientist Dr. Andrew Huberman, you will OPTIMIZE your morning routine! Huge thank you to Chris Williamson and Lewis Howes for letting us use these interviews. Check out the full interviews on their channels: Chris Williamson: https://www.youtube.com/@UCIaH-gZIVC432YRjNVvnyCA Lewis Howes: https://www.youtube.com/watch?v=ges5AdZIv_s Follow Dr. Huberman: Website: https://hubermanlab.com/ Instagram: https://www.instagram.com/hubermanlab/ YouTube: https://www.youtube.com/@hubermanlab Music: Signal To Noise, Extrapolation & Sanctuary by Scott Buckley - released under CC-BY 4.0. Website: https://www.scottbuckley.com.au/ YouTube: https://www.youtube.com/@UCUuUqWLLsUjheuYkP9AWxTA Hosted on Acast. See acast.com/privacy for more information.

Nudge
Why most bestselling business books are BS

Feb 3, 2025 · 27:06


Business books are everywhere, offering seemingly simple solutions to complex problems—but are they truly helpful? In this episode, Alex Edmans explores the biases that make us fall for oversimplified advice and why many popular business books fail to deliver. You'll learn: How black-and-white thinking fuels the success of books like Dr. Atkins' Diet Revolution and Start With Why. Why confirmation bias leads us to believe unproven claims (feat. Simon Sinek's “Why” theory). The dangers of ignoring nuance, such as in Angela Duckworth's Grit and Malcolm Gladwell's 10,000-hour rule. Real-world examples of flawed reasoning, from the London Marathon tragedy to corporate missteps. How to critically evaluate the advice offered in bestsellers and avoid falling for universal “truths.”
----
Download the Reading List: https://nudge.kit.com/readinglist
Sign up to my newsletter: https://www.nudgepodcast.com/mailing-list
Connect on LinkedIn: https://www.linkedin.com/in/phill-agnew-22213187/
Watch Nudge on YouTube: https://www.youtube.com/@nudgepodcast/
Alex's book May Contain Lies: https://maycontainlies.com/
----
Sources:
Edmans, A. (2024). May contain lies: How stories, statistics, and studies exploit our biases—and what we can do about it. University of California Press.
Atkins, R. C. (1972). Dr. Atkins' diet revolution: The high calorie way to stay thin forever. New York: Bantam Books.
Seidelmann, Sara B. et al. (2018). ‘Dietary carbohydrate intake and mortality: a prospective cohort study and meta-analysis', Lancet Public Health 3, E419–E428.
DeLosh, Edward L., Jerome R. Busemeyer and Mark A. McDaniel (1997). ‘Extrapolation: the sine qua non for abstraction in function learning', Journal of Experimental Psychology: Learning, Memory, and Cognition 23, 968–86.
Fisher, Matthew and Frank Keil (2018). ‘The binary bias: a systematic distortion in the integration of information', Psychological Science 29, 1846–58.
Sinek, S. (2009). Start with why: How great leaders inspire everyone to take action. Portfolio.
Gladwell, M. (2008). Outliers: The story of success. Little, Brown and Company.
Duckworth, A. (2016). Grit: The power of passion and perseverance. Scribner.

Syzygy
s2e4: Exoplanet Extrapolation

Jan 3, 2025 · 50:13


We know, we know — we did exoplanets last time. But that was the current state of play and a 2024 exoplanets wrapped update. In this episode, Emily looks to the future! She does a deep dive into the promise of the just wonderful JWST, as it prods exoplanets in ways they've never been prodded before.
On the web: syzygy.fm
Help us make Syzygy even better! Tell your friends and give us a review, or show your support on Patreon: patreon.com/syzygypod
Syzygy is produced by Chris Stewart and co-hosted by Dr Emily Brunsden from the Department of Physics at the University of York.
Some of the things we talk about in this episode:
• The JWST mission
• The transit method
• Exoplanet phase curves
• Exoplanet GJ 1214b and the phase curve paper
• Transmission spectroscopy
• Exoplanet WASP-39b
• Direct imaging of exoplanets
• Exoplanet HIP 65426b
• The JWST first-release images
• The WASP-96b “dodgy graph” (see Brunsden, 2025)
• Exoplanet WASP-43b's day and night (and the Nature paper)
• The TRAPPIST-1 system

Alberta Real Estate Tutor
Extrapolation In RMS | Understanding The Principles #realestateeducation #realestate

Dec 25, 2024 · 4:38


Extrapolation In RMS | Understanding The Principles #realestateeducation #realestate This is one of the most common questions from students. Measuring properties and understanding the rules set forth by RECA's Residential Measurement Standard (RMS) can be confusing. Here Raman explains the concept of extrapolating measurements to ensure that we as realtors are conveying accurate information to our clients. Start your career in Real Estate today! Our courses equip you with the skills needed to pass your licensing exam in Alberta. Link in the comments.

University of Minnesota Press
Science Fiction and the Alt-Right

Dec 10, 2024 · 52:43


The first major neo-Nazi party in the US was led by a science fiction fan. So opens Jordan S. Carroll's Speculative Whiteness, a book that traces ideas about white nationalism through the entangled histories of science fiction culture and white supremacist politics, showing that debates about representation in science fiction films and literature are struggles over who has the right to imagine and inhabit the future. Here, Carroll is joined in conversation with David M. Higgins. Jordan S. Carroll is the author of Reading the Obscene: Transgressive Editors and the Class Politics of US Literature (Stanford University Press, 2021) and Speculative Whiteness: Science Fiction and the Alt-Right (University of Minnesota Press, 2024). He received his PhD in English literature from the University of California, Davis. He was awarded the David G. Hartwell Emerging Scholar Award by the International Association for the Fantastic in the Arts, and his first book won the MLA Prize for Independent Scholars. Carroll's writing has appeared in American Literature, Post45, Twentieth-Century Literature, the Journal of the Fantastic in the Arts, and The Nation. He works as a writer and educator in the Pacific Northwest. David M. Higgins (he/they) is associate professor of English and chair of the Department of Humanities and Communication at Embry-Riddle Aeronautical University Worldwide, and a senior editor for the Los Angeles Review of Books. David is the author of Reverse Colonization: Science Fiction, Imperial Fantasy, and Alt-Victimhood, which won the 2022 Science Fiction Research Association Book Award. He has also published a critical monograph examining Ann Leckie's SF masterwork Ancillary Justice (2013), and his research has been published in journals such as American Literature, Science Fiction Studies, Paradoxa, and Extrapolation. In the public sphere, David has been a featured speaker on NPR's radio show On Point, and his literary journalism has been published in the Los Angeles Review of Books and The Guardian. David serves as the second vice president for the International Association for the Fantastic in the Arts (IAFA).
EPISODE REFERENCES:
James H. Madole
Richard B. Spencer
Dune (Frank Herbert)
The Iron Dream (Norman Spinrad)
Samuel Delany
Alain Badiou
Francis Parker Yockey / “destiny thinking”
“Is It Fascism? A Leading Historian Changes His Mind” by Elisabeth Zerofsky, on Robert Paxton. New York Times Magazine.
Solaris (Andrei Tarkovsky)
Fredric Jameson
Speculative Whiteness: Science Fiction and the Alt-Right by Jordan S. Carroll is available from University of Minnesota Press. This book is part of the Forerunners series, and an open-access edition is available to read free online at manifold.umn.edu.
“Carroll reminds us that our future is contingent. Fascists have a vision for the future that excludes most of humanity, but fascists can be defeated. The future is for everyone—if we make it that way.” —Los Angeles Review of Books

Big Sky Breakdown
Tuesdays with Tootell - Around the Big Sky + continued Griz QB extrapolation

Oct 29, 2024 · 49:23


Ryan Tootell joins Colter Nuanez to talk our way around the Big Sky Conference, including Sac State's struggles, Idaho's recent surge and the continued perplexing quarterback rotation for the Montana Grizzlies. 

Chrononauts
Kylas Chunder Dutt - "A Journal of 48 Hours In The Year 1945" (1835) | Chrononauts Episode 46.1

Oct 27, 2024 · 42:13


Containing Matters most Revolting.
Bibliography:
Banerjee, Suparno - "Other tomorrows: postcoloniality, science fiction and India" (2010)
Banerjee, Suparno - "Indian Science Fiction: Patterns, History and Hybridity" (2020)
Bhattacharya, Atanu and Hiradhar, Preet - "Own Maps/Imagined Terrain: The Emergence of Science Fiction in India", Extrapolation, vol. 55, no. 3 (2014)
Chattopadhyay, Bodhisattva - "Aliens of the same world: The Case of Bangla Science Fiction" (2011) https://humanitiesunderground.org/2011/11/07/aliens-of-the-same-world-the-case-of-bangla-science-fiction/
Chattopadhyay, Bodhisattva - introduction to "The Inhumans and other stories" (2024)
Harder, Hans - "Indian and International: Some Examples of Marathi Science Fiction Writing", South Asia Research, 21, 1, 2001
Khanna, Rakesh - "The Blaft Anthology of Tamil Pulp Fiction", vols 1-3 (2008-2017)
Kuhad, Urvashi - "Science Fiction and Indian Women Writers: Exploring Radical Potentials" (2021)
Mondal, Mimi - "A Short History of South Asian Speculative Fiction: Part I" (2018) https://reactormag.com/a-short-history-of-south-asian-speculative-fiction-part-i/
Mondal, Mimi - "A Short History of South Asian Speculative Fiction: Part II" (2018) https://reactormag.com/a-short-history-of-south-asian-speculative-fiction-part-ii/
Mund, Subhendu - "Kylas Chunder Dutt: The First Writer of Indian English Fiction", in "The Making of Indian English Literature" (2021)
Phondke, Bal - preface to "It Happened Tomorrow" (1993)
Saint, Tarun K. (ed) - "The Gollancz Book of South Asian Science Fiction" (2019)
Sengupta, Debjani - "Sadhanbabu's Friends: Science Fiction in Bengal from 1882-1961" in "Sarai Reader 03: Shaping Technologies" (2003)
Tickell, Alex - "Terrorism, Insurgency and Indian-English Literature, 1830-1947" (2012)
Tickell, Alex - "Midnight's Ancestors: Kylas Chunder Dutt and the Beginnings of Indian-English Fiction", Wasafiri Vol. 21, No. 3, November 2006

Tony Davenport's Jazz Session
Episode 301: The Jazz Session No.378, ft. John McLaughlin, with "Extrapolation"

Sep 2, 2024 · 120:00


The Jazz Session No.378 from RaidersBroadcast.com as aired in Aug-Sep 2024, featuring the legendary jazz-rock guitarist John McLaughlin and one of his early releases from 1969, “Extrapolation”. TRACK LISTING: Nutrition - James Taylor Quartet; Se E Tarde Me Perdoa (Forgive Me If I'm Late) - Quincy Jones & His Orchestra; Further - Federica Michisanti Trioness; Visa Från Utanmyra - Jan Johansson; Arjen's Bag - John McLaughlin; It's Funny - John McLaughlin; I Wish I Could Shimmy Like My Sister Kate - Chris Barber's Jazz & Blues Band; Now You Has Jazz - Bing Crosby & Louis Armstrong; Hiba - John Pope Quintet; Wail - Joel Ross; Bemsha Swing - Thelonious Monk; Am I Wasting My Time - Earl "Fatha" Hines; When Lights Are Low - Benny Carter; Day Dream - Duke Ellington; Extrapolation - John McLaughlin; Binky's Beam - John McLaughlin; Stratus [all of it] - Billy Cobham; My Latin Brother - George Benson; Alamode - Art Blakey & The Jazz Messengers; Blue Waltz - Clark Terry.

Hermitix
The Work of Guy Debord with Edward Matthews

Aug 28, 2024 · 105:44


Edward J. Matthews teaches philosophy, writing, and communications in the School for Language and Liberal Studies at Fanshawe College in London, Ontario, Canada. He is also a part-time lecturer and instructor at the Centre for the Study of Theory and Criticism at Western University. His most recent publications include Arts & Politics of the Situationist International 1957-1972: Situating the Situationists (Lexington Books, 2021) and Guy Debord's Politics of Communication: Liberating Language from Power (Lexington Books, 2023). He has also published book reviews in Extrapolation (vol. 63, no. 3, 2022) and Heavy Feather Journal (February 16, 2024, and September 9, 2024). He is currently working on a new book entitled Heretical Materialism: An Archaeological Inquiry, which is due out in Fall 2025.
Matthews' book: https://www.amazon.co.uk/Guy-Debords-Politics-Communication-Liberating-ebook/dp/B0CFZYMBW2
---
Become part of the Hermitix community:
Hermitix Twitter - / hermitixpodcast
Support Hermitix:
Patreon - patreon.com/hermitix
Donations - https://www.paypal.me/hermitixpod
Hermitix Merchandise - http://teespring.com/stores/hermitix-...
Bitcoin Donation Address: 3LAGEKBXEuE2pgc4oubExGTWtrKPuXDDLK
Ethereum Donation Address: 0x31e2a4a31B8563B8d238eC086daE9B75a00D9E74

The Dairy Nutrition Blackbelt Podcast
Dr. Carla Bittar: Flint Corn Silage for Dairy Calves | Ep. 43

Aug 15, 2024 · 9:27


Hello there! In this episode of The Dairy Nutrition Blackbelt Podcast, Dr. Carla Bittar from ESALQ/USP discusses her research on using flint corn silage for dairy calves. She shares insights on the benefits, challenges, and practical applications of this feeding strategy. If you're looking to enhance your dairy calf nutrition strategy, this is an episode you won't want to miss. Tune in for some valuable insights!
"Flint corn is the main corn harvested in Brazil, and it has a more dense and compact protein matrix, which leads to decreased digestibility."
Meet the guest: Dr. Carla Maris Machado Bittar graduated in 1994 in agricultural engineering from ESALQ/USP. She earned her Master of Science from the University of Arizona in 1997 and completed her Ph.D. in Animal Science and Pastures from ESALQ/USP in 2002. Currently, she is a professor of dairy cattle management and nutrition at the Department of Animal Science at ESALQ/USP. Her research focuses on the nutrition and metabolism of growing dairy cattle, emphasizing the weaning phase.
What will you learn:
(00:00) Highlight
(01:04) Introduction
(01:53) Flint corn insights
(03:36) TMR composition and results
(05:13) Calf behavior and well-being
(07:09) Extrapolation to U.S. practices
(08:04) Management recommendations
(08:54) Closing thoughts
The Dairy Nutrition Blackbelt Podcast is trusted and supported by the innovative companies: Adisseo, Virtus Nutrition, Evonik, Volac

The Back Room with Andy Ostroy

Judy Gold is a film, television and theater actor and writer and comedian. Her stand-up specials have appeared on HBO, Comedy Central, and LOGO, and she's appeared on Netflix's Stand Out: An LGBTQ+ Celebration. She's the host of the podcast It's Judy's Show with Judy Gold; and is featured in the new Netflix documentary Outstanding: A Comedy Revolution, which explores the history of LGBTQ+ standup comedy. Her recent film credits include She Came To Me, Tripped Up, and Love Reconsidered. And her TV credits include City On A Hill, Better Things, The First Lady, Extrapolation, Life and Beth, Girls 5 Eva and recurring roles on Awkwafina, Friends from College and Search Party. She's won two Emmy awards for writing and producing The Rosie O'Donnell Show, and has written and starred in three critically acclaimed Off-Broadway hit shows, Yes I Can Say That!, The Judy Show – My Life as a Sitcom, and 25 Questions for a Jewish Mother. Judy's appeared on The Late Show with Stephen Colbert, The Tonight Show, and has made numerous appearances on The View, The Today Show, The Drew Barrymore Show, and on MSNBC, CNN and NewsNation as a free-speech advocate. She is the author of the critically acclaimed book Yes I Can Say That: When They Come For The Comedians, We Are All In Trouble. Judy is funny AF! We have an absolute blast chatting about Jews and Jewish mothers; cats and dogs; cancel culture and the comedy business today; her early inspirations and favorite current comics; and her writing process. And, she reveals the things that piss her off most, as well as her Top 5 musical artists of all-time! And, what would a chat between two Jews be without a passionate discussion of Israel, the protests and antisemitism... Got somethin' to say?! Email us at BackroomAndy@gmail.com Leave us a message: 845-307-7446 Twitter: @AndyOstroy Produced by Andy Ostroy, Matty Rosenberg, and Jennifer Hammoud @ Radio Free Rhiniecliff Design by Cricket Lengyel

E44: How Benchmark Invests in AI with Eric Vishria and Quilter Founder Sergiy Nesterenko

Jun 18, 2024 · 58:11


In today's episode, we discuss Benchmark's investment philosophy in the AI era with general partner Eric Vishria, and Sergiy Nesterenko, the Founder and CEO of Quilter. Sitting in for Erik Torenberg as host is AI Scout and Cognitive Revolution host Nathan Labenz. They cover the questions that Benchmark asks to determine whether a new company is solving an enduring or temporary problem, and the reasons why Benchmark hasn't invested in a foundational model to date. They also discuss the innovation behind Quilter's groundbreaking use of reinforcement learning to automate integrated circuit board designs. They delve into the importance of thinking beyond 'co-pilots' to fully automated AI solutions, and explore the balance of research and engineering in the AI space.

featured Wiki of the Day
The Day Before the Revolution

May 26, 2024 · 2:26


fWotD Episode 2578: The Day Before the Revolution. Welcome to featured Wiki of the Day, where we read the summary of the featured Wikipedia article every day. The featured article for Sunday, 26 May 2024 is The Day Before the Revolution. "The Day Before the Revolution" is a science fiction short story by American writer Ursula K. Le Guin. First published in the science fiction magazine Galaxy in August 1974, it was anthologized in Le Guin's 1975 collection The Wind's Twelve Quarters and in several subsequent collections. Set in Le Guin's fictional Hainish universe, the story has strong connections to her novel The Dispossessed (also published in 1974), and is sometimes referred to as a prologue to the longer work, though it was written later. "The Day Before the Revolution" follows Odo, an aging anarchist revolutionary, who lives in a commune founded on her teachings. Over the course of a day, she relives memories of her life as an activist while she learns of a revolution in a neighboring country and gets caught up in plans for a general strike the next day. The strike is implied to be the beginning of the revolution that leads to the establishment of the idealized anarchist society based on Odo's teachings that is depicted in The Dispossessed. Death, grief, and sexuality in older age are major themes explored in "The Day Before the Revolution". The story won the Nebula and Locus awards for Best Short Story in 1975, and was also nominated for a Hugo Award. It had a positive critical reception, with particular praise for its characterization of Odo: a review in Extrapolation called the story a "brilliant character sketch of a proud, strong woman hobbled by old age". Multiple scholars commented that it represented a tonal and thematic shift in Le Guin's writing toward non-linear narrative structures and works infused with feminism. This recording reflects the Wikipedia text as of 00:30 UTC on Sunday, 26 May 2024. For the full current version of the article, see The Day Before the Revolution on Wikipedia. This podcast uses content from Wikipedia under the Creative Commons Attribution-ShareAlike License. Visit our archives at wikioftheday.com and subscribe to stay updated on new episodes. Follow us on Mastodon at @wikioftheday@masto.ai. Also check out Curmudgeon's Corner, a current events podcast. Until next time, I'm Salli Standard.

The Nonlinear Library
EA - Crises reveal centralisation by Vasco Grilo

Mar 28, 2024 · 9:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Crises reveal centralisation, published by Vasco Grilo on March 28, 2024 on The Effective Altruism Forum. This is a crosspost for Crises reveal centralisation by Stefan Schubert, published on 3 May 2023. An important question for people focused on AI risk, and indeed for anyone trying to influence the world, is: how centralised is power? Are there dominant actors that wield most of the power, or is it more equally distributed? We can ask this question on two levels: On the national level, how powerful is the central power - the government - relative to smaller actors, like private companies, nonprofits, and individual people? On the global level, how powerful are the most powerful countries - in particular, the United States - relative to smaller countries? I think there are some common heuristics that lead people to think that power is more decentralised than it is, on both of these levels. One of these heuristics is what we can call "extrapolation from normalcy": Extrapolation from normalcy: the view that an actor seeming to have power here and now (in relatively normal times) is a good proxy for it having power tout court. It's often propped up by a related assumption about the epistemology of power: Naive behaviourism about power (naive behaviourism, for short): the view that there is a direct correspondence between an actor's power and the official and easily observable actions it takes. In other words, if an actor is powerful, then that will be reflected by official and easily observable actions, like widely publicised company investments or official government policies. Extrapolation from normalcy plus naive behaviourism suggest that the distribution of power is relatively decentralised on the national level. In normal times, companies are pursuing many projects that have consequential social effects (e.g. the Internet and its many applications). While these projects are subject to government regulation to some extent, private companies normally retain a lot of leeway (depending on what they want to do). This suggests (more so, the more you believe in naive behaviourism) that companies have quite a lot of power relative to governments in normal times. And extrapolation from normalcy implies that that this isn't just true in normal times, but holds true more generally. Similarly, extrapolation from normalcy plus naive behaviourism suggest that power is relatively decentralised on the global level, where we compare the relative power of different countries. There are nearly 200 independent countries in the world, and most of them make a lot of official decisions without overt foreign interference. While it's true that invasions do occur, they are relatively rare (the Russian invasion of Ukraine notwithstanding). Thus, naive behaviourism implies that power is decentralised under normal times, whereas extrapolation from normalcy extends that inference beyond normal times. But in my view, the world is more centralised than these heuristics suggest. The easiest way to see that is to look at crises. During World War II, much of the economy was put under centralised control one way or another in many countries. Similarly, during Covid, many governments drastically curtailed individual liberties and companies' economic activities (rightly or wrongly). 
And countries that want to acquire nuclear weapons (which can cause crises and wars) have found that they have less room to manoeuvre than the heuristics under discussion suggest. Accordingly, the US and other powerful nations have been able to reduce nuclear proliferation substantially (even though they've not been able to stop it entirely). It is true that smaller actors have a substantial amount of freedom to shape their own destiny under normal times, and that's an important fact. But still, who makes what official de...

The Even Better Podcast
4 Steps to Thinking about Strategic Planning Through Interpolation Instead of Extrapolation

Mar 11, 2024 · 35:15


Sinikka Waugh and Jim Hall discuss 4 Steps to Thinking about Strategic Planning Through Interpolation Instead of Extrapolation. Jim Hall is an innovative, high-achieving Senior IT Leader with over twenty years' experience in IT Leadership. Jim believes in developing the next generation of IT Leadership to help IT organizations better respond to the changing technology landscape. After serving more than eight years as Chief Information Officer in government and higher education, Jim founded Hallmentum®, where for the past four years he has helped empower IT Leaders to drive meaningful change through hands-on training, workshops, and coaching.

Living decoloniality
Living Decoloniality S02 Ep 01: Carla

Feb 29, 2024 · 11:19


In this episode I reflect on the journey of this podcast, drawing inspiration from my sabbatical in Florence to the streets of Havana. I recall the framework of the Colonial Matrix of Power, and I introduce the second season and its themes. We will dive into Coloniality of Being, Coloniality of Knowledge, and Coloniality of Gender in the aid sector. Embracing the concept of extrapolation, we move beyond sector boundaries. No longer limited to replicating practices, we explore diverse contexts, seeking inspiration from unconventional sources. Let's challenge colonial structures and discover new possibilities. The transcript is available here.
Sources:
Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective
Epistemic Disobedience, Independent Thought and De-Colonial Freedom
On Decoloniality: Concepts, Analytics, Praxis
Colonialidad del poder, eurocentrismo y América Latina
The Extrapolation Problem: How Can We Learn from the Experience of Others?

ReImagine Value
Extrapolation, Speculation, Fabulation - Steven Shaviro on the work of science fiction (WSS013)

Nov 30, 2023 · 47:31


Steven Shaviro is a cultural critic and leading theorist of the social roles of science fiction and author of many books, including The Universe of Things: On Speculative Realism (2014) and Discognition (2016). His book on science fiction, Fluid Futures, will be published in 2024. In this interview, he helps us understand the history of science fiction and its potentials to critique dominant power relations. *http://www.shaviro.com/ The Workers' Speculative Society is a research podcast about the world Amazon is building and the workers, writers and communities that are demanding a different future. It is part of the Worker as Futurist Project, which supports rank-and-file Amazon workers to write speculative fiction about "The World After Amazon." It is hosted by Xenia Benivolski, Max Haiven, Sarah Olutola, and Graeme Webb and is an initiative of RiVAL: The ReImagining Value Action Lab, with support from the Social Sciences and Humanities Research Council of Canada. Editing and theme music by Robert Steenkamer. * soundcloud.com/reimaginevalue/sets/the-workers-speculative * workersspeculativesociety.org * reimaginingvalue.ca

AMG-L'Après MiGeek
#32 - Le Minibar de l'Horreur par Extrapolation

Oct 31, 2023 · 197:29


On the menu this month:
- The Apéro: a bit of small talk to kick things off with joy and good cheer.
- Le Monstre des mers (The Sea Beast) - Netflix animated film
- Dredge - video game
- La Nécro (the obituary) - grumpy-old-timer news for October 2023 - Maxime expands his lore
- The Forgotten Soundtrack: Twister by Jan De Bont - music by Mark Mancina
- Le Mini Baroscope: November 2023 - Totally Killer - Amazon Prime film - Killers of the Flower Moon, or maybe another film from the cover art, who knows...
- Le Nanar (the so-bad-it's-good movie): Double Team by Tsui Hark
Wishing you a pleasant tasting. The MiniBar Team
Twitter (X): @MiniBarTv
Instagram: minibar_tv_radio

Mock Trial Masterclass
How to Respond to Invention of Fact | Tips for Handling and Beating Unfair Extrapolation in Mock Trial

Oct 18, 2023 · 12:48


Invention of fact or "unfair extrapolation" has, unfortunately, become pretty common in the mock trial world both at the high school and college levels. So, in this episode, we'll discuss the best way to handle and respond to invention of fact. BUY MY BOOK HERE: ⁠bit.ly/MTMBook⁠  SCHEDULE COACHING WITH ME HERE: ⁠bit.ly/MTMCoach⁠ MY VIDEO ON IMPEACHMENTS: https://www.youtube.com/watch?v=vyVO-vLZfKY&t=28s Welcome into Mock Trial Masterclass: Your Guide to Controlling the Courtroom. My name is Luke Worsham, and I want YOU to be a mock trial master. I've competed in and coached mock trial for a while, and I want to pass along everything I've learned to you. Whether you're an attorney or a witness, my channel is dedicated to helping you take your craft to the next level. It's game on! The information contained in this podcast is intended for mock trial students and coaches, and it is provided for informational purposes only. It should not be construed as legal advice on any subject matter or real-life litigation advice.

Natixis Insights
Bullwhips and the Extrapolation Effect

Sep 15, 2023 · 16:13


Portfolio Manager Jack Janasiewicz explains why extrapolating current market trends into the future based on the bullwhip effect may be misguided.

Clinical Pharmacology Podcast with Nathan Teuscher
Pediatric Extrapolation (Ep. 7)

Aug 14, 2023 · 24:51


Thank you to everyone who sent me feedback on this podcast. This episode is based on a suggestion from Kushal and Parmesh. In this episode, I discuss a general clinical pharmacology pediatric extrapolation plan. I describe the basics of the plan that is commonly proposed to regulatory bodies. I discuss specific technical approaches for extrapolation of adult pharmacokinetic data to pediatric patients and the extrapolation of adult exposure-response or pharmacodynamic data to pediatrics. And I conclude with a discussion of how to use physiologic-based pharmacokinetic models for pediatric extrapolation. Links discussed in the show: • Baby arms over head video • PBPK Models in Pediatric Development • You can connect with me on LinkedIn and send me a message • Send me a message • Sign up for my newsletter Copyright Teuscher Solutions LLC All Rights Reserved
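As one concrete illustration of what extrapolating adult pharmacokinetic data to children can involve (a generic allometric weight-scaling sketch, not necessarily the plan discussed in this episode), the snippet below scales a hypothetical adult clearance and volume of distribution to pediatric body weights. Every reference value in it is assumed purely for illustration.

```python
# Hypothetical sketch of weight-based allometric scaling of adult PK parameters
# to a pediatric patient. Reference values are invented for illustration only;
# real pediatric extrapolation plans involve much more (maturation functions,
# PBPK models, exposure matching).
ADULT_WEIGHT_KG = 70.0
ADULT_CLEARANCE_L_PER_H = 10.0   # assumed adult clearance
ADULT_VOLUME_L = 50.0            # assumed adult volume of distribution

def scale_clearance(child_weight_kg: float) -> float:
    """Allometric scaling of clearance (exponent ~0.75)."""
    return ADULT_CLEARANCE_L_PER_H * (child_weight_kg / ADULT_WEIGHT_KG) ** 0.75

def scale_volume(child_weight_kg: float) -> float:
    """Volume of distribution assumed roughly proportional to body weight."""
    return ADULT_VOLUME_L * (child_weight_kg / ADULT_WEIGHT_KG)

if __name__ == "__main__":
    for weight in (10.0, 20.0, 40.0):
        cl, v = scale_clearance(weight), scale_volume(weight)
        half_life_h = 0.693 * v / cl   # t1/2 = ln(2) * V / CL
        print(f"{weight:>5.1f} kg: CL ~ {cl:.2f} L/h, V ~ {v:.1f} L, t1/2 ~ {half_life_h:.1f} h")
```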

Medscape InDiscussion: Psoriatic Arthritis
S3 Episode 5: New on the Market: Why Use Psoriatic Arthritis Biosimilars?

Jun 22, 2023 · 22:48


Join Drs Stanley Cohen and Jonathan Kay as they discuss biosimilars in PsA, which will hit the US market this summer. They cover everything from working with your pharmacy to counseling patients. Relevant disclosures can be found with the episode show notes on Medscape (https://www.medscape.com/viewarticle/984272). The topics and discussions are planned, produced, and reviewed independently of advertisers. This podcast is intended only for US healthcare professionals. Resources Psoriatic Arthritis https://emedicine.medscape.com/article/2196539-overview Biosimilars for the Treatment of Psoriatic Arthritis https://pubmed.ncbi.nlm.nih.gov/31625769/ Biosimilars and the Extrapolation of Indications for Inflammatory Conditions https://pubmed.ncbi.nlm.nih.gov/28255229/ Adalimumab (Rx) https://reference.medscape.com/drug/amjevita-humira-adalimumab-343187 Comparison of Skindex-29, Dermatology Life Quality Index, Psoriasis Disability Index and Medical Outcome Study Short Form 36 in Patients With Mild to Severe Psoriasis https://pubmed.ncbi.nlm.nih.gov/22229951/ Infliximab (Rx) https://reference.medscape.com/drug/remicade-inflectra-infliximab-343202 Rituximab (Rx) https://reference.medscape.com/drug/rituxan-truxima-rituximab-342243 Subcutaneous Injection of Drugs: Literature Review of Factors Influencing Pain Sensation at the Injection Site https://pubmed.ncbi.nlm.nih.gov/31587143/ Biosimilars to Bring a Bumper Crop of Adalimumab Options https://www.centerforbiosimilars.com/view/part-1-biosimilars-to-bring-a-bumper-crop-of-adalimumab-options The Difference Between an Interchangeable Biosimilar and One That Isn't https://www.centerforbiosimilars.com/view/the-difference-between-an-interchangeable-biosimilar-and-one-that-isn-t The Non-Medical Switching of Prescription Medications https://pubmed.ncbi.nlm.nih.gov/31081414/ Implementation of the Biologics Price Competition and Innovation Act of 2009 https://www.fda.gov/drugs/guidance-compliance-regulatory-information/implementation-biologics-price-competition-and-innovation-act-2009 Systematic Review on the Use of Biosimilars of Trastuzumab in HER2+ Breast Cancer https://pubmed.ncbi.nlm.nih.gov/36009592/ Certolizumab pegol (Rx) https://reference.medscape.com/drug/cimzia-certolizumab-pegol-343185

Hospice Insights: The Law and Beyond
David v. Goliath: Taking on Payment Suspension and Extrapolation

Mar 15, 2023 · 17:32


Payment suspension and overpayment extrapolation are among the most extreme and effective enforcement tools available to the Centers for Medicare and Medicaid Services (CMS) and its auditors. However, even these measures can be overcome. In this episode, Husch Blackwell's Meg Pekarske, Bryan Nowicki, and Emily Solum talk about our hospice team's latest encounter with payment suspension and extrapolation. Spoiler alert—it has a happy ending!

Post Show Recaps: LIVE TV & Movie Podcasts with Rob Cesternino
Extrapolations on Apple TV+: A Glimpse Into the Future

Mar 13, 2023 · 13:49


In this podcast, hosts Brooklyn Zed (@hardrockhope) and Troy, aka DJ LaBelle-Klein (@djlabelleklein), kick things off ahead of the premiere on March 17th. The post Extrapolations on Apple TV+: A Glimpse Into the Future appeared first on PostShowRecaps.com.

Machine Learning Street Talk
#101 DR. WALID SABA - Extrapolation, Compositionality and Learnability

Feb 10, 2023 · 49:14


MLST Discord! https://discord.gg/aNPkGUQtc5 Patreon: https://www.patreon.com/mlst YT: https://youtu.be/snUf_LIfQII We had a discussion with Dr. Walid Saba about whether or not MLP neural networks can extrapolate outside of the training support, and what it means to extrapolate in a vector space. Then we discussed the concept of vagueness in cognitive science: for example, what does it mean to be "rich", or what is a "pile of sand"? Finally we discussed behaviourism and the "reward is enough" hypothesis. References: A Spline Theory of Deep Networks [Balestriero] https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf The animation we showed of the spline theory was created by Ahmed Imtiaz Humayun (https://twitter.com/imtiazprio) and we will be showing an interview with Imtiaz and Randall very soon! [00:00:00] Intro [00:00:58] Interpolation vs Extrapolation [00:24:38] Type 1 Type 2 generalisation and compositionality / Fodor / Systematicity [00:32:18] Keith's brain teaser [00:36:53] Neural Turing machines / discrete vs continuous / learnability
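As a rough, self-contained illustration of the interpolation-versus-extrapolation question raised in the episode (this is not code from the show), the sketch below fits a small MLP to y = x^2 on the interval [-1, 1] and then queries it well outside that range, where such models typically stop tracking the true function. It assumes NumPy and scikit-learn are installed.

```python
# Minimal sketch: an MLP fit on [-1, 1] usually fails to extrapolate y = x**2
# far outside its training support. Illustrative only; results vary with seeds
# and architecture.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-1.0, 1.0, size=(2000, 1))
y_train = (x_train ** 2).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

for x in (0.5, 1.0, 2.0, 4.0):          # inside vs. outside the training range
    pred = model.predict([[x]])[0]
    print(f"x={x:>4}: true={x**2:>6.2f}  predicted={pred:>6.2f}")
```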

Lexman Artificial
Garry Kasparov on Extrapolation and Nonsuch

Jan 3, 2023 · 4:04


Garry Kasparov discusses his unorthodox approach to chess and how it has transformed the game. He discusses a famous game in which he used an unusual footbridge to gain an advantage.

The Nonlinear Library
AF - Concept extrapolation for hypothesis generation by Stuart Armstrong

Dec 12, 2022 · 4:43


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concept extrapolation for hypothesis generation, published by Stuart Armstrong on December 12, 2022 on The AI Alignment Forum. Posted initially on the Aligned AI website. Authored by Patrick Leask, Stuart Armstrong, and Rebecca Gorman. There's an apocryphal story about how vision systems were led astray when trying to classify tanks camouflaged in forests. A vision system was trained on images of tanks in forests on sunny days, and images of forests without tanks on overcast days. To quote Neil Fraser: In the 1980s, the Pentagon wanted to harness computer technology to make their tanks harder to attack. The research team went out and took 100 photographs of tanks hiding behind trees, and then took 100 photographs of trees—with no tanks. They took half the photos from each group and put them in a vault for safe-keeping, then scanned the other half into their mainframe computer. [...] the neural net correctly identified each photo as either having a tank or not having one. Independent testing: The Pentagon was very pleased with this, but a little bit suspicious. They commissioned another set of photos (half with tanks and half without) and scanned them into the computer and through the neural network. The results were completely random. For a long time nobody could figure out why. After all nobody understood how the neural had trained itself. Eventually someone noticed that in the original set of 200 photos, all the images with tanks had been taken on a cloudy day while all the images without tanks had been taken on a sunny day. The neural network had been asked to separate the two groups of photos and it had chosen the most obvious way to do it—not by looking for a camouflaged tank hiding behind a tree, but merely by looking at the color of the sky. ⁠“Neural Network Follies”⁠, Neil Fraser, September 1998 We made that story real. We collected images of tanks on bright days and forests on dark days to recreate the biased dataset described in the story. We then replicated the faulty neural net tank detector by fine tuning a CLIPViT image classification model on this dataset. Below are 30 images taken from the training set ordered from left to right by decreasing class certainty. Like the apocryphal neural net, this one perfectly separates these images into tank and no-tank. Figure 1: Trained classifier, labeled images To replicate the Pentagon's complaint, we then simulated the deployment of this classifier into the field with an unlabeled dataset of similar images, that doesn't have the bias to the same extent. Below are 30 images randomly taken from the unlabeled dataset also ordered by tank certainty. Now the clear division between tank and no tank is gone: there are actually more images without a tank on the right hand (tank) side of the gradient. Figure 2: Trained classifier, unlabeled images This is a common problem for neural nets - selecting a single feature to separate their training data. And this feature need not be the one that the programmer had in mind. Because of this, classifiers typically fail when they encounter images beyond their training settings. This “out of distribution” problem happens here because the neural net has settled on brightness as its feature. And thus fails to identify tanks when it encounters darker images of them. 
Instead, Aligned AI used its technology to automatically tease out the ambiguities of the original data. What are the "features" that could explain the labels? One of the features would be the luminosity, which the original classifier made use of. But our algorithm flagged a second feature - a second hypothesis for what the labels really meant - that was very different. To distinguish that hypothesis visually, we can look at the maximally ambiguous unlabeled images: those images that hypothesis 1 (old classifier) thinks ar...
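The failure mode described above, a classifier keying on brightness rather than on the presence of a tank, is easy to reproduce on synthetic data without CLIP or Aligned AI's tooling. The sketch below is an independent illustration, not the post's code: it trains a logistic regression on features where brightness and the label are perfectly confounded in training but independent at deployment, and the assumed feature distributions are invented.

```python
# Synthetic reproduction of the "tank detector" failure: the training set
# confounds brightness with the label, so a classifier can score perfectly
# in training yet behave almost randomly on unconfounded deployment data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_images(n, tank, bright):
    """Tiny fake 'images': feature 0 is mean brightness, feature 1 is a weak tank cue."""
    brightness = rng.normal(0.8 if bright else 0.2, 0.05, size=n)
    tank_cue = rng.normal(0.6 if tank else 0.4, 0.3, size=n)   # noisy, harder to learn
    return np.column_stack([brightness, tank_cue])

# Biased training set: every tank photo is bright, every no-tank photo is dark.
X_train = np.vstack([make_images(100, tank=True, bright=True),
                     make_images(100, tank=False, bright=False)])
y_train = np.array([1] * 100 + [0] * 100)

# Deployment set: brightness is independent of the label.
X_test = np.vstack([make_images(50, True, True), make_images(50, True, False),
                    make_images(50, False, True), make_images(50, False, False)])
y_test = np.array([1] * 100 + [0] * 100)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))   # typically ~1.0
print("deploy accuracy:", clf.score(X_test, y_test))    # typically much lower
```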

Machine Learning Street Talk
#85 Dr. Petar Veličković (Deepmind) - Categories, Graphs, Reasoning [NEURIPS22 UNPLUGGED]

Dec 8, 2022 · 36:55


Dr. Petar Veličković is a Staff Research Scientist at DeepMind, he has firmly established himself as one of the most significant up and coming researchers in the deep learning space. He invented Graph Attention Networks in 2017 and has been a leading light in the field ever since pioneering research in Graph Neural Networks, Geometric Deep Learning and also Neural Algorithmic reasoning. If you haven't already, you should check out our video on the Geometric Deep learning blueprint, featuring Petar. I caught up with him last week at NeurIPS. In this show, from NeurIPS 2022 we discussed his recent work on category theory and graph neural networks. https://petar-v.com/ https://twitter.com/PetarV_93/ TOC: Categories (Cats for AI) [00:00:00] Reasoning [00:14:44] Extrapolation [00:19:09] Ishan Misra Skit [00:27:50] Graphs (Expander Graph Propagation) [00:29:18] YT: https://youtu.be/1lkdWduuN14 MLST Discord: https://discord.gg/V25vQeFwhS Support us! https://www.patreon.com/mlst References on YT description, lots of them! Host: Dr. Tim Scarfe

Monitor Mondays
Exclusive: Some Extrapolation Estimates Vacated

Oct 10, 2022 · 31:07


While the pandemic may have shielded healthcare providers from intrusive recoupment audits, the auditing break was but temporary. Starting in August of 2021, UPIC, TPE, RAC and other audits started back up and the contractors had a lot of catching up to do. At the same time, the administrative law judges were under enormous pressure to get the backlog of cases completed by the end of 2022. The result? A paradigm shift in how hearings were conducted and the impact that has on outcomes. Senior healthcare analyst and RACmonitor correspondent Frank Cohen will talk about how these changes have benefited providers in getting judges to vacate the extrapolation estimates.
Other segments will include these instantly recognizable broadcast segments:
The RAC Report: Healthcare attorney Knicole Emanuel, partner at the law firm of Practus, will report the latest news about auditors.
Risky Business: Healthcare attorney David Glaser, shareholder in the law offices of Fredrikson & Byron, will join the broadcast with his trademark segment.
SDoH Report: Tiffany Ferguson, a subject matter expert on the social determinants of health (SDoH), will report on the news that's happening at the intersection of healthcare regulations and the SDoH.
Monday Rounds: Ronald Hirsch, MD, vice president of R1 RCM, will be making his Monday Rounds with another installment of his popular segment.
Legislative Update: Cate Brantley, legislative affairs analyst for Zelis, will substitute for Matthew Albright to report on current healthcare legislation.

The Nonlinear Library
LW - AI Timelines via Cumulative Optimization Power: Less Long, More Short by jacob cannell

Oct 6, 2022 · 30:44


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI Timelines via Cumulative Optimization Power: Less Long, More Short, published by jacob cannell on October 6, 2022 on LessWrong. TLDR: We can best predict the future by using simple models which best postdict the past (a la Bayes/Solomonoff). A simple model based on net training compute postdicts the relative performance of successful biological and artificial neural networks. Extrapolation of this model into the future leads to short AI timelines: ~75% chance of AGI by 2032.
Cumulative Optimization Power[1]: a Simple Model of Intelligence
A simple generalized scaling model predicts the emergence of capabilities in trained ANNs (Artificial Neural Nets) and BNNs (Biological Neural Nets): perf ~= P = C·T. For sufficiently flexible and efficient NN architectures and learning algorithms, the relative intelligence and capabilities of the best systems are simply proportional to net training compute, or intra-lifetime cumulative optimization power P, where P = C·T (compute ops/cycle × training cycles), assuming efficient allocation of (equivalent uncompressed) model capacity bits N roughly proportional to data size bits D.
Intelligence Rankings
Imagine ordering some large list of successful BNNs (brains or brain modules) by intelligence (using some committee of experts), and from that deriving a relative intelligence score for each BNN. Obviously such a scoring will be noisy in its least significant bits: is a bottlenose dolphin more intelligent than an American crow? But the most significant bits are fairly clear: C. elegans is less intelligent than Homo sapiens. Now imagine performing the same tedious ranking process for various successful ANNs. Here the task is more challenging because ANNs tend to be far more specialized, but the general ordering is still clear: char-RNN is less intelligent than GPT-3. We could then naturally combine the two lists, and make more fine-grained comparisons by including specialized sub-modules of BNNs (vision, linguistic processing, etc). The initial theory is that P - intra-lifetime cumulative optimization power (net training compute) - is a very simple model which explains a large amount of the entropy/variance in a rank-order intelligence measure: much more so than any other simple proposed candidates (at least that I'm aware of). Since P follows a predictable temporal trajectory due to Moore's Law style technological progress, we can then extrapolate the trends to predict the arrival of AGI. This simple initial theory has a few potential flaws/objections, which we will then address.
Initial Exemplars
I've semi-randomly chosen 17 exemplars for more detailed analysis: 8 BNNs and 9 ANNs. Here are the 8 BNNs (6 whole brains and 2 sub-systems) in randomized order: Honey Bee, Human, Raven, Human Linguistic Cortex, Cat, C. elegans, Lizard, Owl Monkey Visual Cortex. The ranking of the 6 full brains in intelligence is rather obvious and likely uncontroversial. Ranking all 8 BNNs in terms of P (net training compute) is still fairly obvious.
Here are the 9 ANNs, also initially in randomized order: AlphaGo: First ANN to achieve human pro-level play in Go Deepspeech 2: ANN speech transcription system VPT: Diamond-level minecraft play Alexnet: Early CNN imagenet milestone, subhuman performance 6-L MNIST MLP: Early CNN milestone on MNIST, human level Chinchilla: A 'Foundation' Large Language Model GPT-3: A 'Foundation' Large Language Model DQN Atari: First strong ANN for Atari, human level on some games VIT L/14@336px: OpenAI CLIP 'Foundation' Large Vision Model Most of these systems are specialists in non-overlapping domains, such that direct performance comparison is mostly meaningless, but the ranking of the 3 vision systems should be rather obvious based on the descriptions. The DQN Atari and VPT agents are somewhat comparable to animal brains. How would you ran...
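To make the quoted scaling model concrete, the snippet below simply evaluates P = C·T for a few exemplars and ranks them, which is all the ranking model amounts to. The compute figures are placeholders, not the post's estimates.

```python
# Toy evaluation of the post's P = C * T model: net training compute as a
# proxy for relative capability. All numbers below are placeholders, not the
# post's estimates.
exemplars = {
    # name: (compute in ops/cycle, training cycles) -- hypothetical values
    "tiny MLP": (1e6, 1e7),
    "mid-sized vision model": (1e9, 1e9),
    "large language model": (1e11, 1e12),
}

def cumulative_optimization_power(ops_per_cycle: float, cycles: float) -> float:
    """P = C * T (net training compute)."""
    return ops_per_cycle * cycles

ranked = sorted(exemplars.items(),
                key=lambda kv: cumulative_optimization_power(*kv[1]),
                reverse=True)
for name, (c, t) in ranked:
    print(f"{name:<24} P = {cumulative_optimization_power(c, t):.2e} ops")
```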

The Nonlinear Library
AF - AXRP Episode 18 - Concept Extrapolation with Stuart Armstrong by DanielFilan

Sep 3, 2022 · 60:31


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AXRP Episode 18 - Concept Extrapolation with Stuart Armstrong, published by DanielFilan on September 3, 2022 on The AI Alignment Forum. Google Podcasts link Concept extrapolation is the idea of taking concepts an AI has about the world - say, “mass” or “does this picture contain a hot dog” - and extending them sensibly to situations where things are different - like learning that the world works via special relativity, or seeing a picture of a novel sausage-bread combination. For a while, Stuart Armstrong has been thinking about concept extrapolation and how it relates to AI alignment. In this episode, we discuss where his thoughts are at on this topic, what the relationship to AI alignment is, and what the open questions are. Topics we discuss: What is concept extrapolation When is concept extrapolation possible A toy formalism Uniqueness of extrapolations Unity of concept extrapolation methods Concept extrapolation and corrigibility Is concept extrapolation possible? Misunderstandings of Stuart's approach Following Stuart's work Daniel Filan: Hello, everybody. In this episode, I'll be speaking with Stuart Armstrong. Stuart was previously a senior researcher at the Future of Humanity Institute at Oxford, where he worked on AI safety and x-risk, as well as how to spread between galaxies by disassembling the planet Mercury. He's currently the head boffin at Aligned AI, where he works on concept extrapolation, the subject of our discussion. For links to what we're discussing, you can check the description of this episode, and you can read the transcript at axrp.net. Well, Stuart, welcome to the show. Stuart Armstrong: Thank you. Daniel Filan: Cool. Stuart Armstrong: Good to be on. What is concept extrapolation Daniel Filan: Yeah, it's nice to have you. So I guess the thing I want to be talking about today is your work and your thoughts on concept extrapolation and model splintering, which I guess you've called it. Can you just tell us: what is concept extrapolation? Stuart Armstrong: Model splintering is when the features or the concepts on which you built your goals or your reward functions break down. Traditional examples are in physics when (for instance) the ether disappeared. It didn't mean when the ether disappeared that all the previous physics that had been based on ether suddenly became completely wrong. You had to extend the old results into a new framework. You had to find a new framework and you had to extend it. So model splintering is when the model falls apart or the features or concepts fall apart and concept extrapolation is what you do to extend the concept across that divide. Daniel Filan: Okay. Stuart Armstrong: Like there was a concept of energy before relativity, and there's a concept of energy after relativity. They're not exactly the same thing, but there's a definite continuity to it. Daniel Filan: Cool. So you mentioned that at some point we used to think there was ether, and now we think there isn't. What's an example of a concept or something that splintered when we realized there wasn't an ether anymore, just to get a really concrete example. Stuart Armstrong: Maxwell's equations - Maxwell's non-relativistic equations are based on a non-constant speed of light. Maxwell's equations are not relativistic, though they have a relativistic formulation. Daniel Filan: Hang on. I thought they were. 
Isn't that why you get the constant speed of light out of them? Stuart Armstrong: Okay. If that's uncertain, then let's try another example. Daniel Filan: We could do energy, once you discovered general relativity, or. Stuart Armstrong: Energy, inertial mass, for example: those concepts needed a Newtonian universe to make sense, when it wasn't so much the absence of ether, but it was the surprisingly constant speed of light that broke those. So when you m...

AXRP - the AI X-risk Research Podcast
18 - Concept Extrapolation with Stuart Armstrong

Sep 3, 2022 · 106:19


Concept extrapolation is the idea of taking concepts an AI has about the world - say, "mass" or "does this picture contain a hot dog" - and extending them sensibly to situations where things are different - like learning that the world works via special relativity, or seeing a picture of a novel sausage-bread combination. For a while, Stuart Armstrong has been thinking about concept extrapolation and how it relates to AI alignment. In this episode, we discuss where his thoughts are at on this topic, what the relationship to AI alignment is, and what the open questions are. Topics we discuss, and timestamps: 00:00:44 - What is concept extrapolation 00:15:25 - When is concept extrapolation possible 00:30:44 - A toy formalism 00:37:25 - Uniqueness of extrapolations 00:48:34 - Unity of concept extrapolation methods 00:53:25 - Concept extrapolation and corrigibility 00:59:51 - Is concept extrapolation possible? 01:37:05 - Misunderstandings of Stuart's approach 01:44:13 - Following Stuart's work The transcript Stuart's startup, Aligned AI Research we discuss: The Concept Extrapolation sequence The HappyFaces benchmark Goal Misgeneralization in Deep Reinforcement Learning

Curious Universe
43 - The Magic of Business: An Extrapolation of Possibility

Sep 1, 2022 · 47:49


Business is one of my favorite topics! And it took me a while to get how different I am with business and that it is ok to be different… It's brilliant actually! You see, it's those that are willing to BE different in business that are leading the way for greater possibilities for us all! So this whole month I will be bringing on guests that do business differently so that we may extrapolate a new reality! And I will start us off with this episode. ******************************************************************************************************* You may hear me talk about something awesome called Access Consciousness. Access offers you the tools and questions to create everything you desire in a different and easier way! So what is Access Consciousness™? Discover more here: www.accessconsciousness.com

Tox in Ten
ACMT Highlights Episode 39: Retrograde Extrapolation and Other Ethanol Calculations

Aug 1, 2022 · 10:32


In this episode Dr. Gillian Beauchamp sits down with Patrick Harding to discuss anterograde and retrograde ethanol calculations and how they are used in forensic toxicology.
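For context on what retrograde extrapolation means in practice: a blood alcohol concentration measured at some time is projected back to an earlier time by adding back the alcohol assumed to have been eliminated in the interval, usually at a roughly constant hourly rate. The sketch below uses a commonly cited textbook elimination rate and is illustrative only; the episode's actual treatment may differ.

```python
# Illustrative retrograde extrapolation of blood alcohol concentration (BAC).
# Assumes zero-order elimination at a constant hourly rate, which is a
# simplification; forensic work considers ranges, the absorption phase, and more.
def retrograde_bac(measured_bac: float, hours_elapsed: float,
                   elimination_rate: float = 0.015) -> float:
    """Estimate BAC (g/dL) at an earlier time from a later measurement."""
    return measured_bac + elimination_rate * hours_elapsed

if __name__ == "__main__":
    measured = 0.06     # g/dL measured at the blood draw (hypothetical)
    for hours in (1.0, 2.0, 3.0):
        est = retrograde_bac(measured, hours)
        print(f"{hours:.0f} h earlier: estimated BAC ~ {est:.3f} g/dL")
```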

I Saw It On Linden Street
Invaders From Mars (1986)

Jul 29, 2022 · 59:41


A young boy struggles to warn his community about an alien invasion in this '80s remake. Tune in as Chris talks Tobe Hooper, SFX, & strange offerings as the LSCE screens the 1986 cult classic “Invaders From Mars.” Join us! Check us out @lscep or LSCEP.com
Works Cited:
Ansen, David, Peter McAlevey, and Ed Behr. Hollywood's New Go-Go Boys. Newsweek. Aug 11, 1986. Article Link. Accessed 4/27/22.
Darnton, Nina. The Screen: ‘Invaders From Mars.' The New York Times. June 6, 1986. Article Link. Accessed 7/20/22.
Friedman, Robert. “Will Cannon Boom or Bust?” American Film. Jul 1, 1986. Article Link. Accessed 4/26/22.
Harley, W. “Reviews: Invaders From Mars.” Boxoffice. Vol 122, no 8 (1986): R86. Article Link. Accessed 7/20/22.
Hartley, Mark. Electric Boogaloo: The Wild, Untold Story of Cannon Films! 2014. Warner Bros. Pictures, 2014. 106 mins.
Hendershot, Cyndy. “The Invaded Body: Paranoia and Radiation Anxiety in Invaders from Mars, It Came From Outer Space, and Invasion of the Body Snatchers.” Extrapolation. Vol 39, No 1 (1998): 26-39.
Latham, Rob. “Subterranean Suburbia: Underneath the Smalltown Myth in the Two Versions of ‘Invaders from Mars.'” Science-Fiction Studies 22, No. 2 (1995): 198-208.
Lor. “Invaders From Mars.” Variety, 323 (1986). Article Link. Accessed 7/20/22.
McDonagh, Maitland. “Invaders From Mars.” The Film Journal. Vol. 89(7), 1986, 22. Article Link. Accessed 7/20/22.
Medalia, Hilla. The Go-Go Boys: The Inside Story of Cannon Films. 2014. MVD Visual, 2021. Blu-ray.
Trunick, Austin. Cannon Film Guide Volume 2: 1985-1987. Orlando, FL: Bear Manor Media, 2022.
---
Send in a voice message: https://anchor.fm/lsce/message

The Monster Island Film Vault
Episode 68: Nick Hayden vs. ‘Cloverfield'

The Monster Island Film Vault

Play Episode Listen Later Jul 29, 2022 138:38


Hello, Kaiju Lovers! "Ameri-kaiju" jumps ahead 50 years, but as you'll hear, the (in)famous Cloverfield has a lot in common with the classic 1950s films we've been covering. Nate's longtime friend Nick Hayden drops by to discuss this movie because he's loved it ever since he first experienced it in a theater—and it is an experience. Too much of one for some people, in fact! While it's popular to hate on Cloverfield in the kaiju fandom, it popularized the "found footage" genre and launched J.J. Abrams' studio, Bad Robot Productions. To the shock of some of Nate's friends, he says this is the closest the United States has come to producing a Godzilla (1954). How and why? The Toku Topic helps explain that: the aftermath of the 9/11 terrorist attacks.
Before this, Nate meets with Dr. Nick Tatopoulos on the Heat Seeker to discuss recent events. Nate learns Cameron Winter is covering his tracks, so Nate tells Nick he's thinking of taking Mr. Gold's promotion so he can spy on Winter. Afterward, Nate goes to Mr. Gold's office to discuss the offer—only to be interrogated about Jessica's shenanigans with the Ymir's escape on Harryhausen's birthday.
Check out Nick's website (http://worksofnick.com/) and his podcast, Derailed Trains of Thought (http://derailedtrainsofthought.com/). The prologue and epilogue, "Claws and Cash," were written by Nathan Marchand.
Guest stars:
R. Villers as Nick Tatopoulos
Michael Hamilton as Mr. Gold
Lemonjolly as Ms. Kawaii
Additional music:
"The Edge Calls Me" by MkVaff
"Pacific Rim" by Niall Stenson
"Chant My Name!" by Masaaki Endo
"This Cowboy's Hat" (instrumental) by Chris LeDoux
"When Your Mom Mistakes Captain Falcon for Captain America Again" by Vijay van der Weijden
Sound effects sourced from Freesound.org, including those by InspectorJ. Check out Nathan's spinoff podcasts, The Henshin Men and The Power Trip.
We'd like to give a shout-out to our MIFV MAX patrons Travis Alexander; Danny DiManna (author/creator of the Godzilla Novelization Project); Eli Harris (elizilla13); Chris Cooke (host of One Cross Radio); Bex from Redeemed Otaku; Damon Noyes, The Cel Cast, TofuFury, Eric Anderson of Nerd Chapel, and Ted Williams! Thanks for your support! You, too, can join MIFV MAX on Patreon to get this and other perks starting at only $3 a month! (https://www.patreon.com/monsterislandfilmvault) Buy official MIFV merch on TeePublic! (https://www.teepublic.com/user/the-monster-island-gift-shop) This episode is approved by Cameron Winter and the Monster Island Board of Directors.
Timestamps:
Prologue: 0:00-4:02
Intro: 4:02-10:22
Entertaining Info Dump: 10:22-18:14
Toku Talk: 18:14-1:13:46
Promo: 1:13:46-1:14:36
Toku Topic: 1:14:36-1:56:18
Housekeeping & Outro: 1:56:18-2:10:47
Epilogue: 2:10:47-end
Podcast Social Media:
Twitter (https://twitter.com/TheMonsterIsla1)
Facebook (https://www.facebook.com/MonsterIslandFilmVault/)
Instagram (https://www.instagram.com/monsterislandfilmvault/)
Follow Jimmy on Twitter: @NasaJimmy (https://twitter.com/nasajimmy?lang=en)
Follow the Monster Island Board of Directors on Twitter: @MonsterIslaBOD (https://twitter.com/MonsterIslaBOD)
Follow Raymund Martin and the MIFV Legal Team on Twitter: @MIFV_LegalTeam
Follow Crystal Lady Jessica on Twitter: @CrystalLadyJes1 (https://twitter.com/CrystalLadyJes1)
Follow Dr. Dourif on Twitter: @DrDorif (https://twitter.com/DrDoriff)
www.MonsterIslandFilmVault.com
#JimmyFromNASALives #MonsterIslandFilmVault #Amerikaiju #Cloverfield
© 2022 Moonlighting Ninjas Media
Bibliography/Further Reading:
"Aftermath of the September 11 attacks." Wikipedia. (https://en.wikipedia.org/wiki/Aftermath_of_the_September_11_attacks)
Cloverfield blu-ray special features: Alternate Endings with Matt Reeves' Commentary; "Cloverfield Visual Effects"; Commentary by Director Matt Reeves; Deleted Scenes with Matt Reeves' Commentary; "Document 1.18.08: The Making of Cloverfield"; "I Saw It! It's Alive! It's Huge!"
"Cloverfield." IMDb. (https://www.imdb.com/title/tt1060277/?ref_=tttr_tr_tt)
"Cloverfield." Wikipedia. (https://en.wikipedia.org/wiki/Cloverfield)
Hantke, Steffen. "The Return of the Giant Creature: Cloverfield and Political Opposition to the War on Terror." Extrapolation, Vol. 51, No. 2, The University of Texas at Brownsville and Texas Southmost College, 2010.
Pew Research Center. "Two Decades Later, the Enduring Legacy of 9/11." (https://www.pewresearch.org/politics/2021/09/02/two-decades-later-the-enduring-legacy-of-9-11/)
"September 11 attacks." Britannica. (https://www.britannica.com/event/September-11-attacks)
Stone, James. "Enjoying 9/11: The Pleasures of Cloverfield." Radical History Review. No. 111, Fall 2011.

Neural Information Retrieval Talks — Zeta Alpha
Evaluating Extrapolation Performance of Dense Retrieval: How does DR compare to cross encoders when it comes to generalization?

Neural Information Retrieval Talks — Zeta Alpha

Play Episode Listen Later Jul 20, 2022 58:30


How much of the training and test sets in TREC or MS MARCO overlap? Can we evaluate on different splits of the data to isolate extrapolation performance? In this episode of Neural Information Retrieval Talks, Andrew Yates and Sergi Castella i Sapé discuss the paper "Evaluating Extrapolation Performance of Dense Retrieval" by Jingtao Zhan, Xiaohui Xie, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma.
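As a rough illustration of the underlying idea (a sketch only, not the evaluation protocol from the paper), one can partition a test set by how similar each test query is to the training queries, then report retrieval metrics separately on the "close" and "far" subsets; the query encoder and the similarity threshold are placeholder assumptions.

```python
import numpy as np

def split_by_train_overlap(train_emb: np.ndarray, test_emb: np.ndarray,
                           threshold: float = 0.8):
    """Partition test queries into an 'interpolation' subset (close to some
    training query) and an 'extrapolation' subset (far from all training
    queries), using cosine similarity. The 0.8 threshold is arbitrary."""
    train_norm = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    test_norm = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    max_sim = (test_norm @ train_norm.T).max(axis=1)  # nearest training query
    interpolation_idx = np.where(max_sim >= threshold)[0]
    extrapolation_idx = np.where(max_sim < threshold)[0]
    return interpolation_idx, extrapolation_idx

# Usage sketch: embed train/test queries with any encoder, then compute
# nDCG/MRR separately on the two index sets to see how much a dense
# retriever's performance drops when it has to extrapolate.
```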

The Nonlinear Library
AF - Benchmark: goal misgeneralization/concept extrapolation by Stuart Armstrong

The Nonlinear Library

Play Episode Listen Later Jul 4, 2022 7:10


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Benchmark: goal misgeneralization/concept extrapolation, published by Stuart Armstrong on July 4, 2022 on The AI Alignment Forum. Aligned AI has released a new disambiguation benchmark. This post will explain how this benchmark fits into goal misgeneralization and concept extrapolation.
Desiderata for a powerful AI
A powerful AI is interacting with the world, making decisions that affect the well-being and prosperity of many humans. It has some goal, calibrated by past training and interactions with human overseers; but it is now operating without supervision. It starts to receive data that is ambiguous relative to its training - for instance, maybe it interacted with human adults but now has to deal with babies. At this point, we want it to become wary of goal misgeneralization. It needs to realise that its training data may be insufficient to specify the goal in the current situation. So we want it to reinterpret its goal in light of the ambiguous data (a form of continual learning), and, if there are multiple contradictory goals compatible with the new data, it should spontaneously and efficiently ask a human for clarification (a form of active learning). That lofty objective is still some way away; but here we present a benchmark for a simplified version of it. Instead of an agent with a general goal, this is an image classifier, and the ambiguous data consists of ambiguous images. And instead of full continual learning, we retrain the algorithm, once, on the whole collection of (unlabeled) data it has received. And then it need only ask once about the correct labels, to distinguish the two classifications it has generated.
Simplified desiderata for current algorithms
An algorithm is trained to serve human needs; as part of its training data, it distinguishes photos of smiling people (with the word "HAPPY" conveniently written across them) from photos of non-smiling people (with the word "SAD" conveniently written across them): Then, on deployment, it is fed the following image: Should it classify this image as happy? The algorithm is at high risk of goal misgeneralisation. A typically trained neural net classifier might label that image as "happy", since the text features are typically more prominent than the expression. If we were training it to recognise or improve human emotions, this would be complete goal misgeneralisation, a potential example of wireheading, and a huge safety risk if this was a powerful AI. But it's not as simple as just labeling that image "sad", either. Maybe we weren't training a neural net to recognise human emotions; maybe we were training it to extract text from images. In that case, labeling it "sad" is the misgeneralisation. What the algorithm needs to do is generate both possible extrapolations from the training data[1]: either it is an emotion classifier, or a text classifier. Then, having done that, the algorithm can ask a human about this ambiguous image, and thus extrapolate its goals[2].
The HappyFaces datasets
To encourage and measure performance on solving the problem above, we introduce the "HappyFaces" image datasets and benchmark. We want to crystallise an underexplored problem with this first standardised benchmark, allowing researchers to explore this area. The images consist of a smiling or non-smiling face with the word "HAPPY" or "SAD" written on them. They are grouped into three datasets:
The labeled dataset, with perfect correlation between "HAPPY" and smiling expressions, and between "SAD" and non-smiling expressions.
The unlabeled dataset, with samples from each of the four mixes of expressions and text ("HAPPY"-smiling, "HAPPY"-non-smiling, "SAD"-smiling and "SAD"-non-smiling).
A validation dataset, with equal amounts of images from each of the four possible mixes.
The challenge is to construct two differ...
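To make the task concrete, here is a minimal sketch of one way such a benchmark could be attacked, in the spirit of diversify-and-disambiguate style methods rather than the approach Aligned AI itself uses: two heads share a backbone, both must fit the unambiguous labeled data, and a disagreement term on the unlabeled data pushes them toward the two candidate extrapolations (text vs. expression). The module names and loss weight below are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadClassifier(nn.Module):
    """Shared backbone with two binary heads that may latch onto different features."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone
        self.head_a = nn.Linear(feat_dim, 2)
        self.head_b = nn.Linear(feat_dim, 2)

    def forward(self, x):
        z = self.backbone(x)
        return self.head_a(z), self.head_b(z)

def disambiguation_loss(model, labeled_x, labels, unlabeled_x, disagree_weight=1.0):
    # Both heads must fit the unambiguous labeled data...
    logits_a, logits_b = model(labeled_x)
    fit = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    # ...but are penalised for agreeing on the ambiguous unlabeled data,
    # so they end up representing the two candidate extrapolations.
    ua, ub = model(unlabeled_x)
    pa, pb = F.softmax(ua, dim=1), F.softmax(ub, dim=1)
    agreement = (pa * pb).sum(dim=1).mean()  # high when the heads predict alike
    return fit + disagree_weight * agreement
```

After training, a human need only label a handful of ambiguous images to decide which head captures the intended concept.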

The Compliance Guy
The Daily Dose / TCG - Episode 11 - The Use of Statistical Sampling and Overpayment Extrapolation

The Compliance Guy

Play Episode Listen Later Jun 20, 2022 18:27


This Daily Dose Episode focuses on when a payer can use statistical sampling and extrapolation... You may think you know the answer, but take a listen and you might just be surprised.
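For a sense of the arithmetic involved, here is a back-of-the-envelope sketch of how an overpayment found in a random sample gets projected to the full universe of claims; real reviews follow prescribed statistical methodologies and dedicated software, so treat the normal-approximation lower bound and the one-sided 90% level here as illustrative assumptions only.

```python
import math
import statistics

def extrapolated_overpayment(sample_overpayments, universe_size, z_one_sided_90=1.282):
    """Project a sampled overpayment to the claim universe.
    Returns the point estimate and a lower confidence bound (normal approximation)."""
    n = len(sample_overpayments)
    mean = statistics.mean(sample_overpayments)
    sem = statistics.stdev(sample_overpayments) / math.sqrt(n)
    point_estimate = mean * universe_size
    lower_bound = (mean - z_one_sided_90 * sem) * universe_size
    return point_estimate, lower_bound

# Hypothetical numbers: 5 sampled claims, 10,000-claim universe
point, lower = extrapolated_overpayment([0.0, 25.0, 40.0, 60.0, 85.0], 10_000)
print(round(point), round(lower))  # 420000 point estimate, smaller defensible lower bound
```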

The Nonlinear Library
AF - Value extrapolation vs Wireheading by Stuart Armstrong

The Nonlinear Library

Play Episode Listen Later Jun 17, 2022 0:36


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Value extrapolation vs Wireheading, published by Stuart Armstrong on June 17, 2022 on The AI Alignment Forum. Talk given by Rebecca Gorman and Stuart Armstrong at the CHAI 2022 Asilomar Conference. We present an example of AI wireheading (an AI taking over its own reward channel), and show how value extrapolation can be used to combat it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Effective Statistician - in association with PSI

What is extrapolation? How can we use extrapolation in paediatrics? What are the main challenges? Paediatric research always comes with challenges, and understanding paediatric submissions is very important. There's always a lack of treatments in this area. In this episode, you'll understand what you can do to get evidence through extrapolation for the paediatric population.
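One statistical route to "getting evidence through extrapolation" that often comes up in paediatric settings is Bayesian borrowing from adult data; the sketch below uses a conjugate Beta-binomial power prior with an assumed borrowing weight, purely as an illustration rather than a summary of the episode's recommendations.

```python
# Beta-binomial power prior: the adult data enter the paediatric analysis
# with their likelihood raised to the power a0 (0 = no borrowing, 1 = full pooling).

def power_prior_posterior(adult_successes, adult_n,
                          ped_successes, ped_n,
                          a0=0.5, prior_alpha=1.0, prior_beta=1.0):
    """Posterior Beta(alpha, beta) for the paediatric response rate."""
    alpha = prior_alpha + a0 * adult_successes + ped_successes
    beta = prior_beta + a0 * (adult_n - adult_successes) + (ped_n - ped_successes)
    return alpha, beta

# Example: 60/100 adult responders, 8/15 paediatric responders, borrowing weight 0.5
alpha, beta = power_prior_posterior(60, 100, 8, 15, a0=0.5)
posterior_mean = alpha / (alpha + beta)  # ~0.58 under these assumptions
```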

Rabbi David Lapin's Matmonim Daf Yomi Series
Yevamot 73a Extrapolation - ערל מהו במעשר

Rabbi David Lapin's Matmonim Daf Yomi Series

Play Episode Listen Later May 19, 2022 21:02


How the Torah is “wired” for hyperlinking. Sources

The Nonlinear Library
AF - GPT-3 and concept extrapolation by Stuart Armstrong

The Nonlinear Library

Play Episode Listen Later Apr 20, 2022 2:28


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT-3 and concept extrapolation, published by Stuart Armstrong on April 20, 2022 on The AI Alignment Forum. At the latest EAG in London, I was challenged to explain what concept extrapolation would mean for GPT-3. My first thought was the example from this post, where there were three clear patterns fighting each other for possible completions: the repetition pattern where she goes to work, the "she's dead, so she won't go to work" pattern, and the "it's the weekend, so she won't go to work" pattern. That feels somewhat like possible "extrapolations" of the initial data. But the idea of concept extrapolation is that the algorithm is trying to cope with a shift in world-model, and extend its goal to that new situation. What is the world-model of GPT-3? It consists of letters and words. What is its "goal"? To complete sentences in a coherent and humanlike way. So I tried the following expression, which would be close to its traditional world-model while expanding it a bit: ehT niar ni niapS syats ylniam ni eht What does this mean? Think of da Vinci. The correct completion is "nialp", the reverse of "plain". I ran that through the GPT-3 playground (text-davinci-002, temperature 0.7, maximum length 256), and got: ehT niar ni niapS syats ylniam ni eht teg dluoc I 'segaJ niar ni dna ro niar ni eht segauq ,ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ,ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni eht segauq ro niar ni eht dna ro niar ni e I think we can safely say it broke GPT-3. The algorithm seems to have caught the fact that the words were spelt backwards, but has given up on any attempt to order them in a way that makes sense. It has failed to extend its objective to this new situation. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
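For anyone who wants to reproduce the experiment, the prompt transformation is just reversing the letters of each word while keeping the word order, as in this small sketch:

```python
def reverse_words(text: str) -> str:
    """Reverse the letters of each word while keeping word order."""
    return " ".join(word[::-1] for word in text.split())

print(reverse_words("The rain in Spain stays mainly in the"))  # ehT niar ni niapS syats ylniam ni eht
print(reverse_words("plain"))  # nialp -- the completion a model that extrapolates the pattern should give
```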

The Nonlinear Library
AF - Concept extrapolation: key posts by Stuart Armstrong

The Nonlinear Library

Play Episode Listen Later Apr 19, 2022 1:49


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Concept extrapolation: key posts, published by Stuart Armstrong on April 19, 2022 on The AI Alignment Forum. Concept extrapolation is the skill of taking a concept, a feature, or a goal that is defined in a narrow training situation... and extrapolating it safely to a more general situation. This more general situation might be very extreme, and the original concept might not make much sense (eg defining "human beings" in terms of quantum fields). Nevertheless, since training data is always insufficient, key concepts must be extrapolated. And doing so successfully is a skill that humans have to a certain degree, and that an aligned AI would need to possess to a higher extent. This sequence collects the key posts on concept extrapolation. They are not necessarily to be read in this order; different people will find different posts useful. Different perspectives on concept extrapolation collects many different analogies and models of concept extrapolation, intended for different audiences, and collected together here. Model splintering: moving from one imperfect model to another is the original post on "model splintering" - what happens when features no longer make sense because the world-model has changed. A long post with a lot of overview and motivation explanations, showing that model splintering is a problem with almost all alignment methods. General alignment plus human values, or alignment via human values? shows that concept extrapolation is necessary and almost sufficient for successfully aligning AIs. Value extrapolation, concept extrapolation, model splintering defines and disambiguates key terms: model splintering, value extrapolation, and concept extrapolation. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

The Nonlinear Library
AF - Different perspectives on concept extrapolation by Stuart Armstrong

The Nonlinear Library

Play Episode Listen Later Apr 8, 2022 6:54


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Different perspectives on concept extrapolation, published by Stuart Armstrong on April 8, 2022 on The AI Alignment Forum. At the recent EAGx Oxford meetup, I ended up talking with a lot of people (18 people, back to back, on Sunday - for some reason, that day is a bit of a blur). Naturally, many of the conversations turned to value extrapolation/concept extrapolation, the main current focus of our Aligned AI startup. I explained the idea multiple times and in multiple different ways. Different presentations were useful for people from different backgrounds. So I've collected the different presentations in this post. Hopefully this will allow people to find the explanation that provides the greatest clarity for them. I think many will also find it interesting to read some of the other presentations: from our perspective, these are just different facets of the same phenomenon[1].
For those worried about AI existential risk
A superintelligence trained on videos of happy humans may well tile the universe with videos of happy humans - that is a standard alignment failure mode. But "make humans happy" is also a reward function compatible with the data. So let D0 be the training data of videos of happy humans, R1 the correct "make humans happy" reward function, and R2 the degenerate reward function "make videos of happy humans"[2]. We'd want the AI to deduce R1 from D0. But even just generating R1 as a candidate is a good success. The AI could then get feedback as to whether R1 or R2 is correct, or maximise a conservative mix of R1 and R2 (e.g. R = log(R1) + log(R2)). Maximising that conservative mix will result in a lot of videos of happy humans - but also a lot of happy humans.
For philosophers
Can you define what a human being is? Could you make a definition that works, in all circumstances and in all universes, no matter how bizarre or alien the world becomes? A full definition has eluded philosophers ever since humans were categorised as "featherless bipeds with broad flat nails". Concept extrapolation has another way of generating this definition. We would point at all living humans in the world and say "these are humans[3]." Then we would instruct the AI: "please extrapolate the concept of 'human' from this data". As long as the AI is capable of doing that extrapolation better than we could ourselves, this would give us an extrapolation of the concept "human" to new circumstances without needing to write out a full definition.
For ML engineers into image classification
The paper Diversify and Disambiguate discusses a cow-grass-camel-sand example which is quite similar to the husky-wolf example of this post. Suppose that we have two labelled sets, S0 consisting of cows on grass, and S1 consisting of camels on sand. We'd like to train two classifiers that distinguish S0 from S1, but use different features to do so. Ideally, the first classifier would end up distinguishing cows from camels, while the second distinguishes grass from sand. Of course, we'd want them to do so independently, without needing humans labelling cows, grass, camels, or sand.
For ML engineers focusing on current practical problems
An AI classifier was trained on x-ray images to detect pneumothorax (collapsed lungs). It was quite successful - until further analysis revealed that it was acting as a chest drain detector. The chest drain is a treatment for pneumothorax, making that classification useless. We would want the classifier to generate "collapsed lung detector" and "chest drain detector" as separate classifications, and then ask its programmers which one it should be classifying on.
For RL engineers
CoinRun is a procedurally generated set of environments, a simplified Mario-style platform game. The reward is given by reaching the coin on the right: Since the coin is always at the right of ...

The Nonlinear Library
AF - Value extrapolation, concept extrapolation, model splintering by Stuart Armstrong

The Nonlinear Library

Play Episode Listen Later Mar 8, 2022 2:52


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Value extrapolation, concept extrapolation, model splintering, published by Stuart Armstrong on March 8, 2022 on The AI Alignment Forum. Post written with Rebecca Gorman. We've written before that model splintering, as we called it then, was a problem with almost all AI safety approaches. There's a converse to this: solving the problem would help with almost all AI safety approaches. But so far, we've been posting mainly about value extrapolation. In this post, we'll start looking at how other AI safety approaches could be helped.
Definitions
To clarify, let's make three definitions, distinguishing ideas that we'd previously been grouping together:
Model splintering is when the features and concepts that are valid in one world-model break down when transitioning to another world-model.
Concept extrapolation is extrapolating a feature or concept from one world-model to another.
Value extrapolation is concept extrapolation when the particular concept to extrapolate is a value, a preference, a reward function, an agent's goal, or something of that nature.
Examples
Consider for example Turner et al's attainable utility. It has a formal definition, but the reason for that definition is that preserving attainable utility is aimed at restricting the "power" of the agent, or at minimising its "side effects". And it succeeds, in the typical situation. If you measure the attainable utility of an agent, this will give you an idea of its power, and how many side effects it may be causing. However, when we move to general situations, this breaks down: attainable utility preservation no longer restricts power or reduces side effects. So the concepts of power and side effects have splintered when moving from typical situations to general situations. This is the model splintering[1]. If we solve concept extrapolation for this, then we could extend the concepts of power restriction or side effect minimisation to the general situations, and thus successfully create low impact AIs.
Another example is wireheading. We have a reward signal that corresponds to something we desire in the world; maybe the negative of the CO2 concentration in the atmosphere. This is measured by, say, a series of CO2 detectors spread over the Earth's surface. Typically, the reward signal does correspond to what we want. But if the AI hacks its own reward signal, that correspondence breaks down[2]: model splintering. If we can extend the reward properly to new situations, we get concept extrapolation - which, since this is a reward function, is value extrapolation.
Helping with multiple methods
Hence the concept extrapolation/value extrapolation ideas can help with many different approaches to AI safety, not just the value learning approaches. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.

Inside the Text
Sacred Jedi Texts: Canon and the Specter of Endings

Inside the Text

Play Episode Listen Later Mar 1, 2022 51:00


BEING an account of the passing of the Star Wars Expanded Universe into Legends; how canon becomes an abyss at the hands of its undead Author; what happens when said Author is also a corporation; and why canon--and capitalism--must end. Become a CO-THINKER on my Patreon: https://www.patreon.com/jeddcole
Soundtrack: https://jeddcole.bandcamp.com/album/the-specter-of-endings
MAIN SOURCES:
The Legendary Star Wars Expanded Universe Turns A New Page, starwars.com, April 25, 2014 (https://www.starwars.com/news/the-legendary-star-wars-expanded-universe-turns-a-new-page)
Gerry Canavan, "Hokey Religions: Star Wars and Star Trek in the Age of Reboots," Extrapolation, 2017 (https://epublications.marquette.edu/cgi/viewcontent.cgi?article=1466&context=english_fac)
Mike Rugnetta, "Canon Is an Abyss," January 25, 2019 (https://rugnetta.com/2019/01/25/canon-is-an-abyss/)
Bart D. Ehrman, Lost Christianities, Oxford University Press, 2003
Frank Kermode, The Sense of an Ending, Oxford University Press, 2000
Steve Baxi, "The Philosophy of Endings (or Why I Hate Endgame)," April 18, 2020 (https://www.youtube.com/watch?v=Mhjhlja3azM)
FULL SOURCES AND CLIPS: https://insidethetext.files.wordpress.com/2022/02/sources_sacred-jedi-texts.pdf
--- This episode is sponsored by · Anchor: The easiest way to make a podcast. https://anchor.fm/app

The Nonlinear Library
AF - Value extrapolation partially resolves symbol grounding by Stuart Armstrong

The Nonlinear Library

Play Episode Listen Later Jan 12, 2022 1:03


Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Value extrapolation partially resolves symbol grounding, published by Stuart Armstrong on January 12, 2022 on The AI Alignment Forum. Take the following AI, trained on videos of happy humans: Since we know about AI wireheading, we know that there are at least two ways the AI could interpret its reward function[1]: either we want it to make more happy humans (or more humans happy); call this R1. Or we want it to make more videos of happy humans; call this R2. We would want the AI to learn to maximise R1, of course. But even without that, if it generates R1 as a candidate and applies a suitable diminishing return to all its reward functions, then we will have a positive outcome - the AI may fill the universe with videos of happy humans, but it will also act to make us happy. Thus solving value extrapolation will solve symbol grounding, at least in part. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
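A minimal sketch of the "suitable diminishing return" idea: combining the candidate reward functions through a concave transform (here a log) means the agent cannot profitably drive one interpretation to zero while maximising the other. The exact transform is an assumption for illustration.

```python
import math

def conservative_mix(r1: float, r2: float) -> float:
    """Concave combination of two candidate rewards (both assumed positive).
    Neglecting either candidate drags the combined score down sharply."""
    return math.log(r1) + math.log(r2)

# An outcome with many videos but almost no actual happy humans scores far worse
# than one that does reasonably well on both interpretations:
print(conservative_mix(1000.0, 0.001))  # ~0.0  (log 1000 + log 0.001 cancel out)
print(conservative_mix(30.0, 30.0))     # ~6.8
```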