POPULARITY
Philosopher Philip Goff explores the nature of consciousness, reality, and the complexities of the human experience. He dives deep into panpsychism and its surprising intersection with mystic religions. Drawing from his decades of research, he speaks on our finely tuned universe, the meaning and purpose of life, quantum mechanics, and the Many-Worlds theory. The discussion culminates in a call to move beyond polarization and dogma in the pursuit of truth.

André's Book Recs: https://www.knowthyself.one/books
___________
0:00 Intro
2:07 Where the Great Thinkers of the Past Went Wrong
5:53 Does the Brain Produce Consciousness?
9:50 The Problem with the Verification Principle
12:20 Do Our Senses Mislead Us?
15:26 Panpsychism & The Hard Problem of Consciousness
22:56 Complexity of Human Consciousness
26:26 Can Mathematics Ever Explain Consciousness?
30:47 How the Brain Correlates to Conscious Experience
34:46 Why It's So Hard to Solve This (Science is Asking the Wrong Questions)
38:33 Purpose & Meaning of Life
40:26 Sentience in Objects, Plants, and Animals
43:09 Where Panpsychism Meets Spirituality
46:38 An Ethical Structure to Reality
49:42 Examples of How The Universe is Finely Tuned
56:08 Making Sense of Life's Mystery
58:38 Teleological Laws of Nature
1:02:13 Facing the Uncertainty of Reality
1:06:05 Examining Truth & Religion
1:13:27 Addressing His Beliefs Around Christianity
1:25:13 The Mystical Side of Religions
1:32:53 Many-Worlds Theory & Multiverses
1:44:19 Going Beyond Quantum Physics
1:50:25 Beyond Polarization & Dogma, Seeking Truth
1:56:28 Conclusion
___________
Episode Resources:
https://philipgoffphilosophy.com
https://amzn.eu/d/0O1FI6O
https://www.instagram.com/andreduqum/
https://www.instagram.com/knowthyself/
https://www.youtube.com/@knowthyselfpodcast
https://www.knowthyself.one

Listen to the show:
Spotify: https://spoti.fi/4bZMq9l
Apple: https://apple.co/4iATICX
Join our Mailing List - https://www.mapitforward.coffee/mailinglist

"Introduction to Regenerative Coffee Farming" is now available On-Demand for as little as $10 - https://mapitforward.coffee/workshops
"Biochar for Coffee" is open for pre-registration - https://mapitforward.coffee/workshops
"It's Time to Become a Coffee Consultant" is available now with additional new bonus material, including the coffee consultant career map. Get more details on how you can create an alternative revenue stream today at https://mapitforward.coffee/workshops

Looking for business advisors or consultants for your business? Get in touch with us here: support@mapitforward.org
••••••••••••••••••••••••••••••••
This is the 5th conversation in a 5-part series on the Daily Coffee Pro Podcast by Map It Forward between host Lee Safar and guest Paul Stewart, the Global Coffee Director at the NGO TechnoServe.

This series focuses on poverty amongst coffee farmers, particularly smallholder coffee farmers.

The 5 episodes in this series are:
1. Tariffs and the Role of NGO's in Coffee - https://youtu.be/1vURyMyi2BA
2. Smallholder Coffee Farmers Are Getting Poorer - https://youtu.be/uMVR5nMDM6Q
3. Why is Poverty Such a Hard Problem to Solve in Coffee? - https://youtu.be/WvLIGQY2CRo
4. Are There More Places For Farmers To Sell Coffee? - https://youtu.be/haAonaxIPIk
5. Solutions To Get Coffee Farmers Out of Poverty - https://youtu.be/TC7XIoeGfc8

In the final episode of this series on The Daily Coffee Pro by Map It Forward, host Lee Safar discusses where to buy coffee and the importance of choosing high-quality coffee to support smallholder farmers.

The episode delves into the challenges faced by coffee farmers, including the lack of knowledge and market access, and explores potential solutions such as increasing yield, improving quality, diversifying crops, and adopting regenerative agriculture practices. Paul shares insights on how organizations like TechnoServe assist farmers in leveraging these opportunities to escape poverty and enhance their livelihoods.

00:00 Introduction: Where to Buy Quality Coffee
00:27 Support The Daily Coffee Pro Podcast
00:50 Final Episode: Discussing Poverty Among Coffee Farmers
02:17 Challenges Faced by Smallholder Farmers
03:32 Solutions for Smallholder Farmers
06:37 Diversification and Regenerative Agriculture
13:44 Advice for Farmers and Buyers
18:35 Conclusion and Contact Information
19:50 Closing Remarks and Call to Action

Connect with TechnoServe and Paul Stewart here:
• https://www.linkedin.com/in/paul-stewart-1165826/
• https://www.technoserve.org/
••••••••••••••••••••••••••••••••
Connect with Map It Forward here: Website | Instagram | Mailing List
Join our Mailing List - https://www.mapitforward.coffee/mailinglist

"Introduction to Regenerative Coffee Farming" is now available On-Demand for as little as $10 - https://mapitforward.coffee/workshops
"Biochar for Coffee" is open for pre-registration - https://mapitforward.coffee/workshops
"It's Time to Become a Coffee Consultant" is available now with additional new bonus material, including the coffee consultant career map. Get more details on how you can create an alternative revenue stream today at https://mapitforward.coffee/workshops

Looking for business advisors or consultants for your business? Get in touch with us here: support@mapitforward.org
••••••••••••••••••••••••••••••••
This is the 4th conversation in a 5-part series on the Daily Coffee Pro Podcast by Map It Forward between host Lee Safar and guest Paul Stewart, the Global Coffee Director at the NGO TechnoServe.

This series focuses on poverty amongst coffee farmers, particularly smallholder coffee farmers.

The 5 episodes in this series are:
1. Tariffs and the Role of NGO's in Coffee - https://youtu.be/1vURyMyi2BA
2. Smallholder Coffee Farmers Are Getting Poorer - https://youtu.be/uMVR5nMDM6Q
3. Why is Poverty Such a Hard Problem to Solve in Coffee? - https://youtu.be/WvLIGQY2CRo
4. Are There More Places For Farmers To Sell Coffee? - https://youtu.be/haAonaxIPIk
5. Solutions To Get Coffee Farmers Out of Poverty - https://youtu.be/TC7XIoeGfc8

In this episode of The Daily Coffee Pro by Map It Forward, host Lee Safar engages in an insightful discussion with Paul Stewart, Global Coffee Director for TechnoServe, exploring the significant increase in coffee prices and how market access impacts smallholder farmers.

Paul delves into the complexities of coffee sales, highlighting farmers' increasing choices and the challenges faced in regions like Nicaragua. They also examine how technology and AI can play a transformative role in providing market access and resources to farmers.

Tune in to understand the current market dynamics and the potential for a more sustainable coffee future.

00:00 Introduction: Coffee Market Surge
00:37 Regenerative Coffee Farming Workshops
01:23 Series Introduction: Poverty Among Coffee Farmers
01:36 Do Farmers Have a Choice?
02:49 Challenges in Coffee Supply Chains
05:08 Market Dynamics and Farmer Choices
06:42 Impact of Price Increases on Farmers
09:13 Specialty vs. Commercial Coffee Markets
10:50 Weather Impact on Coffee Production
15:48 Access to New Markets for Smallholders
18:47 Role of Technology in Coffee Farming
24:12 Exciting Times Ahead for Coffee Farmers
24:51 Conclusion and Next Episode Teaser

Connect with TechnoServe and Paul Stewart here:
• https://www.linkedin.com/in/paul-stewart-1165826/
• https://www.technoserve.org/
••••••••••••••••••••••••••••••••
Connect with Map It Forward here: Website | Instagram | Mailing List
Join our Mailing List - https://www.mapitforward.coffee/mailinglist

"Introduction to Regenerative Coffee Farming" is now available On-Demand for as little as $10 - https://mapitforward.coffee/workshops
"Biochar for Coffee" is open for pre-registration - https://mapitforward.coffee/workshops
"It's Time to Become a Coffee Consultant" is available now with additional new bonus material, including the coffee consultant career map. Get more details on how you can create an alternative revenue stream today at https://mapitforward.coffee/workshops

Looking for business advisors or consultants for your business? Get in touch with us here: support@mapitforward.org
••••••••••••••••••••••••••••••••
This is the 3rd conversation in a 5-part series on the Daily Coffee Pro Podcast by Map It Forward between host Lee Safar and guest Paul Stewart, the Global Coffee Director at the NGO TechnoServe.

This series focuses on poverty amongst coffee farmers, particularly smallholder coffee farmers.

The 5 episodes in this series are:
1. Tariffs and the Role of NGO's in Coffee - https://youtu.be/1vURyMyi2BA
2. Smallholder Coffee Farmers Are Getting Poorer - https://youtu.be/uMVR5nMDM6Q
3. Why is Poverty Such a Hard Problem to Solve in Coffee? - https://youtu.be/WvLIGQY2CRo
4. Are There More Places For Farmers To Sell Coffee? - https://youtu.be/haAonaxIPIk
5. Solutions To Get Coffee Farmers Out of Poverty - https://youtu.be/TC7XIoeGfc8

In this episode of The Daily Coffee Pro by Map It Forward, Lee and Paul delve deep into the challenges faced by coffee farmers globally, with a particular focus on Brazil and Vietnam as low-cost producers.

They also discuss the significant issues impacting coffee prices, farm yields, and the tough economic decisions smallholder farmers have to make, and explore how shifting demographics, technological advancements, and local consumption influence the coffee market.

Paul also sheds light on the critical role of NGOs in providing the knowledge and resources needed to support farmers in these volatile times. Tune in to understand why solving these problems is so complex and the potential future of coffee farming.

00:00 Introduction to Coffee Market Dynamics
00:45 Support Our Podcast
01:13 Welcome and Episode Overview
01:37 Challenges Faced by Coffee Farmers
05:06 The Impact of Brazil and Vietnam on Coffee Prices
08:16 Local Coffee Consumption Trends
10:13 Tariffs and Their Effects on Coffee Trade
12:58 The Role of NGOs in Solving Coffee Industry Problems
21:11 Conclusion and Next Episode Teaser

Connect with TechnoServe and Paul Stewart here:
• https://www.linkedin.com/in/paul-stewart-1165826/
• https://www.technoserve.org/
••••••••••••••••••••••••••••••••
Connect with Map It Forward here: Website | Instagram | Mailing List
Join our Mailing List - https://www.mapitforward.coffee/mailinglist

"Introduction to Regenerative Coffee Farming" is now available On-Demand for as little as $10 - https://mapitforward.coffee/workshops
"Biochar for Coffee" is open for pre-registration - https://mapitforward.coffee/workshops
"It's Time to Become a Coffee Consultant" is available now with additional new bonus material, including the coffee consultant career map. Get more details on how you can create an alternative revenue stream today at https://mapitforward.coffee/workshops

Looking for business advisors or consultants for your business? Get in touch with us here: support@mapitforward.org
••••••••••••••••••••••••••••••••
This is the 2nd conversation in a 5-part series on the Daily Coffee Pro Podcast by Map It Forward between host Lee Safar and guest Paul Stewart, the Global Coffee Director at the NGO TechnoServe.

This series focuses on poverty amongst coffee farmers, particularly smallholder coffee farmers.

The 5 episodes in this series are:
1. Tariffs and the Role of NGO's in Coffee - https://youtu.be/1vURyMyi2BA
2. Smallholder Coffee Farmers Are Getting Poorer - https://youtu.be/uMVR5nMDM6Q
3. Why is Poverty Such a Hard Problem to Solve in Coffee? - https://youtu.be/WvLIGQY2CRo
4. Are There More Places For Farmers To Sell Coffee? - https://youtu.be/haAonaxIPIk
5. Solutions To Get Coffee Farmers Out of Poverty - https://youtu.be/TC7XIoeGfc8

In this episode of The Daily Coffee Pro by Map It Forward, host Lee Safar and guest Paul Stewart, Global Coffee Director of TechnoServe, discuss the increasing poverty among smallholder coffee farmers, comparing today's conditions to those of 50 years ago.

They explore various factors impacting profitability, including land size, yield, costs, and coffee prices. They highlight that while costs and land sizes have changed drastically, yields have remained relatively steady, contributing to today's economic challenges for these farmers.

The discussion concludes by addressing the urgent question of whether smallholder coffee farms can sustain their families and workers given these financial strains.

00:00 Introduction: The Decline of Coffee Farming Livelihoods
00:39 Support the Podcast
01:01 Series Overview and Guest Introduction
01:33 Defining Smallholder Farmers
02:21 Challenges of Sustaining Small Coffee Farms
04:13 Exploring the Scale of Poverty Among Coffee Farmers
06:13 Components of Profitability in Coffee Farming
09:04 Impact of Coffee Prices Over Time
10:59 The Rising Costs of Coffee Farming
14:30 Stagnant Yields and Land Size Reduction
20:47 Conclusion and Next Episode Preview

Connect with TechnoServe and Paul Stewart here:
• https://www.linkedin.com/in/paul-stewart-1165826/
• https://www.technoserve.org/
••••••••••••••••••••••••••••••••
Connect with Map It Forward here: Website | Instagram | Mailing List
Join our Mailing List - https://www.mapitforward.coffee/mailinglist

"Introduction to Regenerative Coffee Farming" is now available On-Demand for as little as $10 - https://mapitforward.coffee/workshops
"Biochar for Coffee" is open for pre-registration - https://mapitforward.coffee/workshops
"It's Time to Become a Coffee Consultant" is available now with additional new bonus material, including the coffee consultant career map. Get more details on how you can create an alternative revenue stream today at https://mapitforward.coffee/workshops

Looking for business advisors or consultants for your business? Get in touch with us here: support@mapitforward.org
••••••••••••••••••••••••••••••••
This is the first conversation in a 5-part series on the Daily Coffee Pro Podcast by Map It Forward between host Lee Safar and guest Paul Stewart, the Global Coffee Director at the NGO TechnoServe.

This series focuses on poverty amongst coffee farmers, particularly smallholder coffee farmers.

The 5 episodes in this series are:
1. Tariffs and the Role of NGO's in Coffee - https://youtu.be/1vURyMyi2BA
2. Smallholder Coffee Farmers Are Getting Poorer - https://youtu.be/uMVR5nMDM6Q
3. Why is Poverty Such a Hard Problem to Solve in Coffee? - https://youtu.be/WvLIGQY2CRo
4. Are There More Places For Farmers To Sell Coffee? - https://youtu.be/haAonaxIPIk
5. Solutions To Get Coffee Farmers Out of Poverty - https://youtu.be/TC7XIoeGfc8

In this episode of The Daily Coffee Pro by Map It Forward, host Lee Safar discusses the significant challenges posed by recent tariffs on coffee imports to the U.S. Paul and Lee explore the implications of tariffs on importers, roasters, and ultimately consumers.

The conversation dives into the crucial role NGOs play in the coffee value chain and provides historical context for TechnoServe's impactful work.

Tune in to understand the current volatile market conditions and their potential impact on the coffee industry.

00:00 Introduction to Coffee Tariffs
00:52 Business Advisory Services for Coffee Entrepreneurs
01:28 Welcome and Series Introduction
01:48 Introducing Paul Stewart from TechnoServe
02:00 The Impact of Tariffs on Coffee Producers
02:57 Understanding the Role of NGOs in Coffee
04:14 TechnoServe's Work in the Coffee Sector
07:04 Funding and Challenges for NGOs
09:26 Tariffs and Their Broader Implications
15:05 The Role of NGOs in the Coffee Value Chain
20:26 Conclusion and Next Episode Preview

Connect with TechnoServe and Paul Stewart here:
• https://www.linkedin.com/in/paul-stewart-1165826/
• https://www.technoserve.org/
••••••••••••••••••••••••••••••••
Connect with Map It Forward here: Website | Instagram | Mailing List
Steve speaks with ARX-Han, an anonymous writer, about his book "Incel."

(00:00) - Introduction
(02:09) - Discussing the Novel 'Incel'
(06:08) - Character Analysis and Literary Influences
(13:32) - Themes of Evolutionary Psychology and Nihilism
(18:38) - Historical Context and Modern Inceldom
(26:18) - Impact of Dating Apps on Modern Relationships
(32:47) - Representation and Character Dynamics
(40:21) - Literary Comparisons and Philosophical Depth
(45:38) - Philosophical Underpinnings of Meaning
(48:14) - The Hard Problem of Consciousness
(50:38) - Free Will and Determinism
(52:53) - Darwinian Nihilism and Nick Land
(58:17) - Historical Perspectives on East Asian Civilization
(01:03:11) - The State of Literary Fiction
(01:16:45) - AI and Literature
(01:19:44) - AI and Human Meaning

Music used with permission from Blade Runner Blues Livestream improvisation by State Azure.
–
Steve Hsu is Professor of Theoretical Physics and of Computational Mathematics, Science, and Engineering at Michigan State University. Previously, he was Senior Vice President for Research and Innovation at MSU and Director of the Institute of Theoretical Science at the University of Oregon. Hsu is a startup founder (SuperFocus.ai, SafeWeb, Genomic Prediction, Othram) and advisor to venture capital and other investment firms. He was educated at Caltech and Berkeley, was a Harvard Junior Fellow, and has held faculty positions at Yale, the University of Oregon, and MSU.

Please send any questions or suggestions to manifold1podcast@gmail.com or Steve on X @hsu_steve.
In a time of seemingly growing division on the planet, what if the most unifying truth is one both sages and scientists agree on: that we're inseparable on the most fundamental level?

In today's special compilation episode, we weave together the voices of some of the world's most thoughtful explorers of consciousness, from neuroscientists and physicists to mystics and sages, to examine the illusion of the separate self, the mystery of awareness, and what becomes possible when we remember our true nature.

André's Book Recs: https://www.knowthyself.one/books
___________
0:00 Intro
2:26 Annaka Harris - Defining the Hard Problem of Consciousness
18:04 Donald Hoffman - Seeing the Truth of Reality
30:05 Federico Faggin - Why Computers Will Never Be Conscious
39:33 Sam Harris - The Illusion of the Separate Self
47:36 Dr. Lisa Miller - Science of the Awakened Brain
53:23 Dr Joe Dispenza - Phenomenon of Emergence & Collective Healing
1:02:06 Rupert Spira - Love is the Basis of Our Existence
1:10:42 Deepak Chopra - Waking Up to Your True Nature
1:21:17 Mooji - A Guide to Expanding Your Awareness
1:42:29 Conclusion
___________
Watch the Full Episodes:
Annaka Harris: https://youtu.be/Kabwgbq9Fhg?si=QVzirSwJ0w17uiHl
Donald Hoffman: https://youtu.be/ffgzkHCGZGE?si=ymkJyaAAV4Ftb2hR
Federico Faggin: https://youtu.be/d6NHRB5V1eE?si=FBVSfLghu464cncS
Sam Harris: https://youtu.be/gqA-ZRpl1jQ?si=MvZChSaW4_JZtNVO
Dr. Lisa Miller: https://youtu.be/DUe0oaH7GtQ?si=0JB6W6V_5HEEKGWM
Dr Joe Dispenza: https://youtu.be/QQIwZ41Ro1w?si=PnPK9nnMn2RCEw1D
Rupert Spira: https://youtu.be/Smqgkab8HZI?si=kZml0JSJX77Cxy1d
Deepak Chopra: https://youtu.be/ZhIQ5bzv-0w?si=53WtiTaIluYdB_h3
Mooji: https://youtu.be/RIjTlEBwrHQ?si=8iZVNv5eU0wLzUwz

https://www.instagram.com/andreduqum/
https://www.instagram.com/knowthyself/
https://www.youtube.com/@knowthyselfpodcast
https://www.knowthyself.one

Listen to the show:
Spotify: https://spoti.fi/4bZMq9l
Apple: https://apple.co/4iATICX
Katie checks in with actor (To Kill a Mockingbird and The Iceman Cometh on Broadway; The Hard Problem and Nina Off-Broadway; Bad Monkey on Apple TV) and pop musician (Softee), Nina Ross.
Brian Schmidt is absolutely one of these extraordinary people - a normal person who's lived an entirely abnormal life. Besides being awarded a Nobel Prize, Brian was also the Vice Chancellor of the Australian National University for eight years, including during the Covid pandemic. He's a physicist, astronomer and astrophysicist by training, receiving his undergrad from the University of Arizona and then his PhD from Harvard. For more than 30 years he's called Australia home, making the move here and becoming one of the most significant figures in the history of humanity's understanding of the universe and our extremely small place in it.

This was a brilliant conversation - covering everything from Brian's research to his time at ANU, his views on Australia's education system, the role of universities, and how we can all approach learning, education and lifelong development. We also chat about how Brian thinks about hard problems, and his own calculus for making a difference today on some of the world's greatest challenges, from food shortages to nuclear proliferation.

This was a real honour. Brian is a global leader in his field - one of the greatest that's ever lived - so to spend some time with him was remarkable, and I hope you take from this how within reach the extraordinary is.

The newsletter is out this week on the theme of choice. With Brian on the show to explore and understand cosmology, astronomy and physics, the great Carl Sagan is appropriate this week: "We are the legacy of 15 billion years of cosmic evolution. We can enhance life and come to know the universe that made us. Or we can squander our 15 billion year heritage in meaningless self destruction."

Til next time, thanks for listening.

Events are live and more are coming - follow on Humanitix.
Follow on LinkedIn, Substack and Instagram.

Today's show is delivered with Altiorem. Use the code FindingNature25 to get your first month free on their gold and platinum plans.
Today's show is delivered with Gilay Estate. Add Finding Nature to your booking reservation for free food bundles.

Send me a message

Thanks for listening. Follow Finding Nature on Instagram
Send us a text

We discussed the challenges of working with time series data, particularly in the context of machine learning and AI, highlighting the complexity and the need for automation in feature engineering. The importance of balancing accuracy and complexity in model creation was emphasized, with a focus on avoiding overfitting and ensuring models remain effective in real-world applications.

The potential integration of business context data, such as sales data, with cloud consumption data to enhance anomaly detection and forecasting models was proposed. The discussion touched on the economic value of anomaly detection, with a focus on proving that early detection can lead to significant cost savings.

The target audience for the anomaly detection system was identified as FinOps managers, who would use the system to manage cloud-related financial topics and coordinate with engineers to address anomalies.
No one knows what it feels like to be you. No one can see into your soul, and yet people have been trying to fathom it for millennia. Today Leon and Atze get to the bottom of it: What did philosophers of the past think about the soul, and what do researchers think today? Is there even a separation between mind and body? For this, the two bring in scientific expertise from Dr. Johannes Kleiner, who researches consciousness at the Ludwig-Maximilians-Universität München.

Feel well looked after,
Leon & Atze

Today's topic starts at: 13:31 min.

Advance tickets, Münster 2025: https://betreutes-fuehlen.ticket.io/

Instagram:
https://www.instagram.com/leonwindscheid/
https://www.instagram.com/atzeschroeder_offiziell/

The Instagram account for Betreutes Fühlen:
https://www.instagram.com/betreutesfuehlen/

You can find more about our advertising partners here: https://linktr.ee/betreutesfuehlen

Tickets:
Atze: https://www.atzeschroeder.de/#termine
Leon: https://leonwindscheid.de/tour/

Sources:
Here you can read more about the experiment on the weight of the soul: https://www.sueddeutsche.de/leben/seele-gewicht-21-gramm-existenz-1.6516411
In ZEITmagazin, Sabine Rückert writes about the soul: https://www.zeit.de/zeit-magazin/2017/53/seele-psychologie-existenz-suche
An overview of what ancient philosophers thought about the soul: https://plato.stanford.edu/entries/ancient-soul/#3
Here you can read what Prof. Paul Bloom writes about mind-body dualism: https://minddevlab.yale.edu/sites/default/files/files/homers-soul.pdf
A philosophical explanation of the "Hard Problem": https://iep.utm.edu/hard-problem-of-conciousness/

Editing: Mia Mertens
Production: Murmel Productions
Andrés Gómez Emilsson, director of the Qualia Research Institute and a pioneering researcher in the mathematical study of consciousness, explores the nature of awareness from both philosophical and scientific angles. He recounts his journey that led to founding the Qualia Research Institute, aimed at reducing suffering and enhancing experience. He discusses the transformative potential of psychedelics, the view of the self as a series of experiences, and how sensory resonance ("impedance matching") shapes our awareness.

As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe

Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
Listen on Spotify: https://tinyurl.com/SpotifyTOE
Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join

Timestamps:
00:00 The Most Important Problem
01:49 The Hard Problem and Ontologies
05:15 Journey into Consciousness
09:06 The Qualia Research Institute
10:49 Shattering Realities
17:12 Happiness vs. Meaning
19:15 Defining Happiness
25:28 Psychological Egoism
33:45 Understanding Consciousness
38:01 The Qualia Research Institute's Goals
49:21 Exploring Impedance Matching
58:25 The Dance of Dissonance
1:03:22 The Nature of Suffering
1:10:32 The Concept of Oneness
1:17:02 Zero Ontology Explained
1:27:20 The Nature of Reality
1:28:57 The Self and Sense of Self
1:30:53 Stages of Consciousness
1:37:35 Transformations in Consciousness
1:46:20 The Role of Psychedelics
2:02:01 Exploring 5-MeO-DMT
2:21:03 Psychedelics and Buddhist Philosophy
2:45:32 Insights on Jhana States
2:51:31 Conclusion and Reflections

Links Mentioned:
- Qualia Research Institute (website): https://qri.org/
- Andrés's Qualia profile: https://qri.org/people/andr%C3%A9s-g%C3%B3mez-emilsson
- David Chalmers's 2024 presentation at Mindfest: https://www.youtube.com/watch?v=5r9V1ryksnw
- Bernardo Kastrup on TOE: https://www.youtube.com/watch?v=lAB21FAXCDE
- The Hedonistic Imperative (book): https://amzn.to/3Di7Xx5
- Why Does Anything Exist? (book): https://www.amazon.com/Why-Does-Anything-Exist-Mysteries-ebook/dp/B0DKBHMC3C
- Mastering the Core Teachings of the Buddha (book): https://www.amazon.com/Mastering-Core-Teachings-Buddha-Unusually/dp/1911597108
- Andrés on The DemystifySci Podcast: https://www.youtube.com/watch?v=jjWDURKNe2Q
- 'Replications' group on Reddit: https://www.reddit.com/r/replications/
- Stuart Hameroff's Mindfest presentation: https://www.youtube.com/watch?v=0_bQwdJir1o
- The Doors of Perception (book): https://amzn.to/4ijduTb
- From Neural Activity to Field Topology (article): https://qualiacomputing.com/2025/02/09/from-neural-activity-to-field-topology-how-coupling-kernels-shape-consciousness/
- Seeing That Frees (book): https://amzn.to/43fm15m
- Practicing the Jhanas (meditation series): https://www.youtube.com/playlist?list=PLO6hhaAzLmiqUzBYuLLJQ8FexOTRxz8xF
- The Dalai Lama, Psychedelics & Cher (article): https://tricycle.org/article/dalai-lama-psychedelics-cher/
- Advice to Sigālaka (article): https://suttacentral.net/dn31/en/sujato?lang=en&layout=plain&reference=none&notes=asterisk&highlight=false&script=latin

Support TOE on Patreon: https://patreon.com/curtjaimungal
Twitter: https://twitter.com/TOEwithCurt
Discord Invite: https://discord.com/invite/kBcnfNVwqs

#science #consciousness #mind

Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to the complete Iceberg of Consciousness.

As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe

Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
Listen on Spotify: https://tinyurl.com/SpotifyTOE
Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join

---------------------
LAYER 1
01:31 – Introduction to Layer 1
01:38 – What Is Consciousness?
04:20 – The Mind-Body Problem
06:02 – Sleep, Dreams, and Altered States
08:53 – Free Will vs. Determinism
10:58 – The Self and Identity

LAYER 2
12:56 – Introduction to Layer 2
13:02 – The Hard Problem of Consciousness
16:59 – Qualia and Phenomenal Consciousness
19:27 – Advaita Vedanta (Non-Dualism)
22:59 – John Vervaeke's Relevance Realization
24:45 – Panpsychism and the Combination Problem
26:58 – Buddhist Consciousness (Yogācāra & Madhyamaka)
29:04 – Global Workspace Theory
31:59 – Carl Jung's Explanation for Consciousness

LAYER 3
36:03 – Introduction to Layer 3
36:47 – Heidegger's Concept of Dasein
39:28 – Attention Schema Theory (Michael Graziano)
42:53 – EM-Field Topology & Boundary Problem (Andrés Gómez Emilsson)
46:49 – Joscha Bach's Theory
53:41 – Donald Hoffman's Theory
57:47 – Nir Lahav's Relativistic Consciousness

LAYER 4
01:05:46 – Introduction to Layer 4
01:06:25 – Douglas Hofstadter's Strange Loops
01:11:50 – Penrose's Quantum Consciousness
01:16:04 – Christopher Langan's CTMU
01:20:31 – Johnjoe McFadden's CEMI Field Theory
01:24:24 – David Chalmers' Extended Mind Hypothesis
01:29:18 – Iain McGilchrist's Relational Dual-Aspect Monism

LAYER 5
01:33:04 – Introduction to Layer 5
01:34:35 – Bernardo Kastrup's Analytic Idealism
01:38:54 – Karl Friston's Enactive Approach / Free Energy Principle
01:42:12 – Alfred North Whitehead's Pan-Experientialism
01:46:56 – Mark Solms' Felt Uncertainty & Affective Theory
01:51:20 – Thomas Metzinger's Minimal Phenomenal Selfhood
---------------------

Support TOE on Patreon: https://patreon.com/curtjaimungal
Twitter: https://twitter.com/TOEwithCurt
Discord Invite: https://discord.com/invite/kBcnfNVwqs

#science #consciousness

Learn more about your ad choices. Visit megaphone.fm/adchoices
As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe

In this episode, Jacob Barandes, a theoretical physicist and philosopher from Harvard, and computational biologist Manolis Kellis from MIT explore the connections between quantum physics, biology, and consciousness. Enjoy.

Shout out to the authors of the following. Check out their books:
- "The Mending of Broken Bones: A Modern Guide to Classical Algebra" by Paul Lockhart https://amzn.to/3EmfDP9
- "Dreaming Reality: How Neuroscience and Mysticism Can Unlock the Secrets of Consciousness" by Vladimir Miskovic and Steven Jay Lynn https://amzn.to/42y1KYi

Join My New Substack (Personal Writings): https://curtjaimungal.substack.com
Enjoy on Spotify (with video!): https://tinyurl.com/SpotifyTOE
Become a YouTube Member (Early Access Videos): https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join

Timestamps:
0:00 Introduction
6:23 Metaphysics and Physics: Defining the Boundaries
8:53 Does Existence Matter?
15:35 Quantum Physics: The Nature of Reality
21:35 Understanding Life Through Physics
27:56 The Observer Effect in Quantum Theory
36:02 Gauge Potentials and Their Reality
46:11 The Birth of Quantum Mechanics
54:42 Interpreting Quantum Superposition and Action at a Distance
1:01:20 Decoherence Explained
1:02:18 The Observer's Role
1:03:43 Size and Decoherence
1:04:35 Quantum Computing and Investments
1:07:33 Practical Applications of Quantum Theory
1:10:14 Quantum Computers: What Are They Good For?
1:11:24 The Markov Process Debate
1:15:18 Causal Modeling in Medicine
1:16:56 Quantum Effects in Biology
1:21:15 Consciousness and Quantum Mechanics
1:27:03 Non-locality and Quantum Theory
1:31:46 The Historical Shift in Physics
1:35:15 Beables and Their Nature
1:47:17 The Hard Problem of Consciousness
1:51:41 The P-Zombie Concept

Support TOE on Patreon: https://patreon.com/curtjaimungal
Twitter: https://twitter.com/TOEwithCurt
Discord Invite: https://discord.com/invite/kBcnfNVwqs

#science #philosophy #theoreticalphysics #physics #debate #lecture

Learn more about your ad choices. Visit megaphone.fm/adchoices
Dr James Cooke is a PhD-trained neuroscientist, speaker, and writer. He holds three degrees from Oxford University (a PhD and Masters in Neuroscience & a BA in Experimental Psychology). He has conducted scientific research for over a decade at institutions such as Oxford University, University of California, Berkeley, University College London, Trinity College Dublin, and Riken Brain Sciences Institute in Tokyo. James is the author of The Dawn of Mind: How Matter Became Conscious and Alive (2024), which synthesizes science and spiritual insight to offer a radical solution to the Hard Problem of Consciousness. He is the founder of the "Inner Space Institute", which aims to help more people access spiritual growth without any of the unscientific beliefs that are common in spiritual circles.

TIMESTAMPS:
(0:00) - Introduction
(1:25) - Defining Consciousness, The Self, & Life
(5:37) - The Mind-Body Problem
(9:00) - Biopsychism
(13:16) - Human-exceptionalism
(16:56) - Science & Spirituality
(19:40) - Purpose/Meaning
(24:24) - Living Mirrors Theory
(29:50) - Thoughts on other theories of Consciousness
(39:38) - Limits of Language
(43:35) - Separation, Self & Substance
(47:52) - Spirituality without Dogma
(55:29) - The Dawn of Mind & Living Mirrors Theory
(1:07:59) - James' Journey (from Living Mirrors to Inner Space Institute)
(1:17:02) - Conclusion

EPISODE LINKS:
- James' Website: https://drjamescooke.com
- James' YouTube: https://youtube.com/@DrJamesCooke
- James' Substack: https://drjamescooke.substack.com
- Inner Space Institute: https://www.innerspaceinstitute.org

CONNECT:
- Website: https://tevinnaidu.com
- Podcast: https://creators.spotify.com/pod/show/mindbodysolution
- YouTube: https://youtube.com/mindbodysolution
- X: https://twitter.com/drtevinnaidu
- Facebook: https://facebook.com/drtevinnaidu
- Instagram: https://instagram.com/drtevinnaidu
- LinkedIn: https://linkedin.com/in/drtevinnaidu

=============================
Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor does it substitute, professional or medical advice. We do not accept any liability for any loss or damage incurred from you acting or not acting as a result of listening/watching any of our contents. You acknowledge that you use the information provided at your own risk. Listeners/viewers are advised to conduct their own research and consult with their own experts in the respective fields.
There are 2 hard problems in computer science: cache invalidation, naming things, and off-by-1 errors. (Leon Bambrick)

In this week's episode, the crew discusses the complexities and nuances of naming conventions in software projects. The team reflects on their own practices, shared challenges, and the real-world impact of terminology and structure on software development and maintenance.

Follow the show and be sure to join the discussion on Discord! Our website is workingcode.dev and we're @WorkingCodePod on Twitter and Instagram. New episodes drop weekly on Wednesday.

And, if you're feeling the love, support us on Patreon.

With audio editing and engineering by ZCross Media.

Full show notes and transcript here.
Agris Kipurs is the Co-Founder and CEO of Origin Robotics, a pioneering defense tech startup specializing in autonomous drone systems for precision strikes in challenging environments. The company recently raised a €4M pre-seed round led by Change Ventures, with support from the EU and the Latvian Ministry of Defense. Agris was also a co-founder of Airdog, the first commercial application drone to fly without a remote control, which was exited to Alarm.com.

On this episode we talk about:
The Evolution of Drone Technology
Building a Defense Tech Startup
The Role of Reputation in Defense Tech
Navigating Funding in Defense
Lessons from Ukraine's Defense Innovations
==
If you liked this episode or simply want to support the work we do, buy us a coffee or two, or a hundred, with just a few clicks at: https://buymeacoffee.com/pursuitofscrappiness

Find all episodes on > https://www.pursuitofscrappiness.co/
Watch select full-length episodes on our YouTube channel > https://www.youtube.com/channel/UCP6ueaLnjS-CQfrMCm2EoTA
Connect with us on Linkedin > https://www.linkedin.com/company/pursuit-of-scrappiness/
===============
Support the show
The hardest problems often lead to the biggest growth opportunities, but only if you're willing to face them head-on.

In this episode of Reveal, host Dana Feldman chats with Brian Fields, Chief Revenue Officer at Mindbody and ClassPass, to discuss how his passion for taking on challenges has shaped his career and leadership approach. Known for his supportive yet firm leadership, Brian shares insights on navigating high-stakes circumstances, empowering teams, and fostering resilience in times of change.

Don't miss this conversation to learn how to turn discomfort into growth.
Anand Vaidya is Professor of Business Ethics and the Philosophy of Artificial Intelligence at San Jose State University, and Visiting Professor of Indian Philosophy of Mind and Knowledge at the University of California, Los Angeles. He graduated from UCLA in 1998, where he studied logic, language, metaphysics, Kant, and Wittgenstein. He then went on to UCSB to study epistemology and philosophy of mind, writing a dissertation on knowledge of possibility and necessity via two-dimensional modal logic. Since his graduation he has expanded his research out to the cross-cultural and multi-disciplinary study of mind and epistemology. He now does work in Indian philosophy as well as the philosophy of artificial intelligence and teaches courses in business ethics and critical thinking.

Lecture Title: "Vedanta and the Hard Problem of Consciousness"

Special thanks to Anand for allowing me to share this lecture with the MBS audience.

EPISODE LINKS:
- Anand's Website: https://anandvaidya.weebly.com/
- Anand's Work: https://tinyurl.com/bdzm87x9
- Anand's Publications: https://tinyurl.com/3e3h7uum
- Anand's Round 1: https://youtu.be/dpMoGXCJxUY

CONNECT:
- Website: https://tevinnaidu.com
- Podcast: https://podcasters.spotify.com/pod/show/drtevinnaidu
- Twitter: https://twitter.com/drtevinnaidu
- Facebook: https://www.facebook.com/drtevinnaidu
- Instagram: https://www.instagram.com/drtevinnaidu
- LinkedIn: https://www.linkedin.com/in/drtevinnaidu

=============================
Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor does it substitute, professional or medical advice. We do not accept any liability for any loss or damage incurred from you acting or not acting as a result of listening/watching any of our contents. You acknowledge that you use the information provided at your own risk. Listeners/viewers are advised to conduct their own research and consult with their own experts in the respective fields.
#336 In this episode, Guy interviewed Dr. James Cooke, a neuroscientist who has experienced a profound spiritual awakening. They delved into Dr. Cooke's journey that reshaped his understanding of the mind, body, and reality. The conversation explored mental health, trauma, and healing, challenging conventional ideas. Dr. Cooke shared insights on the nature of consciousness, ancestral trauma, and the embodied experience of healing, while also discussing the importance of surrender, community, and the future of humanity. The episode finished with Dr. Cooke introducing his upcoming book 'The Dawn of Mind' and the launch of his retreat center and other projects.

About James:
James Cooke PhD trained as a neuroscientist after an awakening as a teenager that showed him the reality of spiritual states of consciousness. He holds three degrees from Oxford University (a PhD and Masters in Neuroscience & a BA in Experimental Psychology). He has conducted scientific research for over a decade at institutions such as Oxford University, University of California, Berkeley, University College London, Trinity College Dublin, and Riken Brain Sciences Institute in Tokyo. James is the author of The Dawn of Mind: How Matter Became Conscious and Alive (coming December 2024), which synthesizes science and spiritual insight to offer a radical solution to the Hard Problem of Consciousness.

Key Points Discussed:
(00:00) - TOP Neuroscientist REDEFINES the Future of Humanity Through Consciousness
(01:04) - Meet Dr. James Cooke: Neuroscientist and Spiritual Seeker
(01:14) - The Intersection of Science and Spirituality
(04:46) - Understanding Neuroscience and Consciousness
(07:52) - The Impact of Trauma on Body and Mind
(16:27) - Exploring Ancestral Trauma
(20:21) - A Personal Awakening at 13
(30:36) - The Illusion of Hard Stops in Reality
(33:38) - The Healing Power of Surrender
(34:17) - The Role of Trauma in Spirituality
(36:03) - Collective Healing and Social Change
(36:44) - Personal Journey and Writing Process
(39:14) - Facing Reality and Building Community
(49:37) - Living a Life of Surrender
(53:57) - Upcoming Projects and Final Thoughts

How to Contact Dr. James Cooke:
www.drjamescooke.com
www.innerspaceinstitute.org

About me:
My Instagram: www.instagram.com/guyhlawrence/?hl=en

Guy's websites:
www.guylawrence.com.au
www.liveinflow.co
Dr James Cooke is a consciousness researcher, meditation teacher, and author of The Dawn of Mind. His work blends science and spirituality to offer a fascinating perspective on reality and our place within it. We dive into nondual naturalism, the nature of consciousness, and how understanding these ideas can transform how we live.

Here's what we cover:
— What nondual naturalism is and why it matters for everyday life.
— The story of James's awakening experience as a teenager and how it shaped his path.
— The benefits of embracing the nondual path, how it naturally leads to feeling more at home in the world, and also why it might not be for everyone.
And more.

You can learn more about James's work and access his meditation classes at innerspaceinstitute.org. His book, The Dawn of Mind, is available now.
---
James Cooke PhD trained as a neuroscientist after an awakening as a teenager that showed him the reality of spiritual states of consciousness. He holds three degrees from Oxford University (a PhD and Masters in Neuroscience & a BA in Experimental Psychology). He has conducted scientific research for over a decade at institutions such as Oxford University, University of California, Berkeley, University College London, Trinity College Dublin, and Riken Brain Sciences Institute in Tokyo. James is the author of The Dawn of Mind: How Matter Became Conscious and Alive (coming December 2024), which synthesizes science and spiritual insight to offer a radical solution to the Hard Problem of Consciousness.

BA in Experimental Psychology, Oxford University
MSc in Neuroscience, Oxford University
PhD in Neuroscience, Oxford University
---
Interview Links:
— Dr Cooke's website - https://www.drjamescooke.com
Whether you believe that humans and other species have evolved or you believe in a creator of living things, this episode is going to excite you or challenge you to think outside the box. Both scenarios are worthwhile, as Tom is joined by the world-renowned evolutionary biologist Richard Dawkins.

Trying to fully grasp how the human mind works and what role evolution plays in our emotions, thought processes, and sexual selection can be overwhelming. Tom highlights Richard's inspiring work and discusses some complex ideas from his latest book, Books Do Furnish A Life. This is a deep dive into what evolution is, and it raises the question of whether science, technology, and the human search for meaning and exploration have surpassed our basic evolutionary need for survival. Where does that leave humanity, and what options are potential solutions worth exploring?

Order Richard Dawkins' new book, Books Do Furnish A Life: https://amzn.to/39fEeSU

[Original air date: 9-21-21]

SHOW NOTES:
0:00 | Introduction Richard Dawkins
1:34 | How The Mind Works
7:28 | Nature of Thought & Emotion
14:01 | Emergent Properties Beyond Survival
21:13 | Lack Of Evolving Creativity
29:30 | The Great Leap Forward
30:46 | Evolution of Sexual Selection
41:25 | The Handicap Principle
45:17 | Human Sexual Selection
57:13 | Genetic Variance
1:04:07 | Finding Origin of Life
1:10:55 | Natural Selection & DNA
1:27:29 | Writing Sci-Fi & Morality
1:37:58 | Hard Problem of Consciousness
1:41:32 | Memes + Hyper Connectivity

CHECK OUT OUR SPONSORS
Range Rover: Explore the Range Rover Sport at https://landroverUSA.com
Miro: Bring your teams to Miro's revolutionary Innovation Workspace and be faster from idea to outcome at https://miro.com.
ButcherBox: Get your choice of a free protein in every box for a year, plus $20 off your first order with code IMPACT at https://butcherbox.com/impact.

What's up, everybody? It's Tom Bilyeu here. If you want my help...
STARTING a business: join me here at ZERO TO FOUNDER
SCALING a business: see if you qualify here.
Get my battle-tested strategies and insights delivered weekly to your inbox: sign up here.

If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook, a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you.

Join me live on my Twitch stream. I'm live daily from 6:30 to 8:30 am PT at www.twitch.tv/tombilyeu

LISTEN TO IMPACT THEORY AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory

FOLLOW TOM:
Instagram: https://www.instagram.com/tombilyeu/
Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en
Twitter: https://twitter.com/tombilyeu
YouTube: https://www.youtube.com/@TomBilyeu

Learn more about your ad choices. Visit megaphone.fm/adchoices
Welcome to The W2 Prison Break Show where we help you launch your online business on your first try without having to learn a brand new skill. You're already good at something, it's time to get paid for it. Business is hard enough, we aim to make it less hard. Follow Brian O'Neill's Socials:W2PB Nation | Instagram | Facebook | YouTube | Threads | LinkedIn
What exactly is consciousness, and why is it such a hard problem to solve? Neil deGrasse Tyson and co-hosts Chuck Nice and Gary O'Reilly take you deep into the mysteries of consciousness and objective reality with David Chalmers, a philosopher and cognitive scientist. NOTE: StarTalk+ Patrons can listen to this entire episode commercial-free here: https://startalkmedia.com/show/the-hard-problem-of-consciousness-with-david-chalmers Thanks to our Patrons Jay, Gregory Aronoff, Tom B. Night, Barnsley, Glenn, Hibachi Flamethrower, Crescencio Maximilian joseph Martinez, Micheal Gomez, Matthew Deane, James, Joe Chillemi, Thomas van Cleave, Kelsey Plugge, Jeff Jones, William Hamilton, and Kevin Cosg. for supporting us this week. Subscribe to SiriusXM Podcasts+ on Apple Podcasts to listen to new episodes ad-free and a whole week early.
Professor Michael Levin explores the revolutionary concept of diverse intelligence, demonstrating how cognitive capabilities extend far beyond traditional brain-based intelligence. Drawing from his groundbreaking research, he explains how even simple biological systems like gene regulatory networks exhibit learning, memory, and problem-solving abilities. Levin introduces key concepts like "cognitive light cones" - the scope of goals a system can pursue - and shows how these ideas are transforming our approach to cancer treatment and biological engineering. His insights challenge conventional views of intelligence and agency, with profound implications for both medicine and artificial intelligence development. This deep discussion reveals how understanding intelligence as a spectrum, from molecular networks to human minds, could be crucial for humanity's future technological development. Contains technical discussion of biological systems, cybernetics, and theoretical frameworks for understanding emergent cognition. Prof. Michael Levin https://as.tufts.edu/biology/people/faculty/michael-levin https://x.com/drmichaellevin Sponsor message: DO YOU WANT TO WORK ON ARC with the MindsAI team (current ARC winners)? Interested? Apply for an ML research position: benjamin@tufa.ai TOC 1. Intelligence Fundamentals and Evolution [00:00:00] 1.1 Future Evolution of Human Intelligence and Consciousness [00:03:00] 1.2 Science Fiction's Role in Exploring Intelligence Possibilities [00:08:15] 1.3 Essential Characteristics of Human-Level Intelligence and Relationships [00:14:20] 1.4 Biological Systems Architecture and Intelligence 2. Biological Computing and Cognition [00:24:00] 2.1 Agency and Intelligence in Biological Systems [00:30:30] 2.2 Learning Capabilities in Gene Regulatory Networks [00:35:37] 2.3 Biological Control Systems and Competency Architecture [00:39:58] 2.4 Scientific Metaphors and Polycomputing Paradigm 3. 
Systems and Collective Intelligence [00:43:26] 3.1 Embodiment and Problem-Solving Spaces [00:44:50] 3.2 Perception-Action Loops and Biological Intelligence [00:46:55] 3.3 Intelligence, Wisdom and Collective Systems [00:53:07] 3.4 Cancer and Cognitive Light Cones [00:57:09] 3.5 Emergent Intelligence and AI Agency Shownotes: https://www.dropbox.com/scl/fi/i2vl1vs009thg54lxx5wc/LEVIN.pdf?rlkey=dtk8okhbsejryiu2vrht19qp6&st=uzi0vo45&dl=0 REFS: [0:05:30] A Fire Upon the Deep - Vernor Vinge sci-fi novel on AI and consciousness [0:05:35] Maria Chudnovsky - MacArthur Fellow, Princeton mathematician, graph theory expert [0:14:20] Bow-tie architecture in biological systems - Network structure research by Csete & Doyle [0:15:40] Richard Watson - Southampton Professor, evolution and learning systems expert [0:17:00] Levin paper on human issues in AI and evolution [0:19:00] Bow-tie architecture in Darwin's agential materialism - Levin [0:22:55] Philip Goff - Work on panpsychism and consciousness in Galileo's Error [0:23:30] Strange Loop - Hofstadter's work on self-reference and consciousness [0:25:00] The Hard Problem of Consciousness - Van Gulick [0:26:15] Daniel Dennett - Theories on consciousness and intentional systems [0:29:35] Principle of Least Action - Light path selection in physics [0:29:50] Free Energy Principle - Friston's unified behavioral framework [0:30:35] Gene regulatory networks - Learning capabilities in biological systems [0:36:55] Minimal networks with learning capacity - Levin [0:38:50] Multi-scale competency in biological systems - Levin [0:41:40] Polycomputing paradigm - Biological computation by Bongard & Levin [0:45:40] Collective intelligence in biology - Levin et al. [0:46:55] Niche construction and stigmergy - Torday [0:53:50] Tasmanian Devil Facial Tumor Disease - Transmissible cancer research [0:55:05] Cognitive light cone - Computational boundaries of self - Levin [0:58:05] Cognitive properties in sorting algorithms - Zhang, Goldstein & Levin
What is reality? What is the nature of consciousness? How do we know that what we are experiencing is base reality and not a simulation? These may seem like the kind of questions that you'd associate with modern concepts like The Matrix and simulation theory, but the fact is that every ancient philosophical tradition has wrestled with these problems in some form or another. And with the advent of rich, complex VR worlds and the nascent metaverse, even more philosophers are turning toward these deep questions of consciousness and the human experience. One of the most interesting thinkers in this space is David Chalmers, Professor of Philosophy and Neural Science at New York University, and co-director of the Center for Mind, Brain, and Consciousness. In his latest book, Reality+: virtual worlds and the problems of philosophy, David investigates not only the nature of reality, but how we should conceptualize virtual reality, the idea that we can actually live a meaningful life in VR, how we know there's an external world, and much more. We explore these topics and more in today's wide-ranging conversation, covering everything from the hard problem of consciousness to the probability that we're actually living in a computer simulation. You don't have to be a student of philosophy to enjoy today's conversation - especially if you're as excited as I am about the possibilities being unlocked by virtual reality and the metaverse. [Original air date: March 8, 2022]. And if you want to dive deeper into David's work, you can order his new book, Reality+, by clicking here: https://amzn.to/3vMSS0v SHOW NOTES: 00:00 | Introduction 01:41 | The Hard Problem of Consciousness 10:42 | Consciousness as a Fundamental Law of Nature 17:38 | The Foundations of Simulation Theory 27:33 | Is Reality Made of Information? 39:03 | How to Live a Meaningful Virtual Life 45:10 | The Philosopher's Zombie 51:59 | Orderable States of Consciousness 58:23 | Zhuangzi and the Butterfly 1:05:20 | The Experience Machine 1:14:40 | GPT3 and Deepfakes 1:19:08 | The Future of “Technophilosophy” CHECK OUT OUR SPONSORS ButcherBox: Get your choice of a free protein in every box for a year, plus that $20 off your first order with code IMPACT at https://butcherbox.com/impact. Tonal: Go to https://tonal.com and get $200 off with promo code IMPACT. Huel: Try Huel with 15% OFF today using code IMPACT at https://huel.com/impact. Miro: Bring your teams to Miro's revolutionary Innovation Workspace and be faster from idea to outcome at https://miro.com. Design.com: Ready to transform your brand? Head to https://design.com/impacttheory and get up to 88% off. FOLLOW TOM: Instagram: https://www.instagram.com/tombilyeu/ Tik Tok: https://www.tiktok.com/@tombilyeu?lang=en Twitter: https://twitter.com/tombilyeu YouTube: https://www.youtube.com/@TomBilyeu What's up, everybody? It's Tom Bilyeu here. If you're serious about leveling up your life, I urge you to check out my new podcast, Tom Bilyeu's Mindset Playbook —a goldmine of my most impactful episodes on mindset, business, and health. Trust me, your future self will thank you. LISTEN AD FREE + BONUS EPISODES on APPLE PODCASTS: apple.co/impacttheory Learn more about your ad choices. Visit megaphone.fm/adchoices
Get your emotional intelligence score: https://sankalp-ua94japp.scoreapp.com What is better: psychology or neuroscience?
WATCH: https://youtu.be/3WLdL5zT6eY Professor David Papineau is a British academic philosopher. He works as Professor of Philosophy of Science at King's College London and the City University of New York Graduate Center, and previously taught for several years at Cambridge University, where he was a fellow of Robinson College. He did a BSc in Mathematics at the University of Natal, followed by a BA and PhD in philosophy at Cambridge. After academic posts at Reading, Macquarie, Birkbeck, and Cambridge, he joined King's College London in 1990. From 2015-21 he spent half of each year at the Graduate Center of CUNY in New York. He was President of the Mind Association in 2009 and the Aristotelian Society in 2014. He has written widely on epistemology, metaphysics and the philosophy of science and mind. His books include: For Science in the Social Sciences (1979), Theory and Meaning (1990), Reality and Representation (1987), Philosophical Naturalism (1992), Thinking about Consciousness (2002), Philosophical Devices (2012), Knowing the Score (2017), and The Metaphysics of Sensory Experience (2021). TIMESTAMPS: (0:00) - Introduction (0:23) - History of the Mind-Body Problem (5:14) - Robert Lawrence Kuhn's Landscape of Consciousness and Physicalism (9:43) - Illusionism (14:32) - Emergentism (16:46) - David's current thoughts about Consciousness (22:33) - Intelligence vs Consciousness (25:30) - Panpsychism (34:40) - Consciousness & Moral Standing (41:12) - Hard Problem or Easy Problems? (45:32) - Mary Thought Experiment Explained (58:59) - David's definition of Consciousness (1:05:37) - Will we ever solve the mind-body problem? (1:10:15) - David on Free Will & Daniel Dennett (1:15:25) - David's upcoming book: "Causes" (about causation, probabilities, etc.) (1:18:50) - Conclusion EPISODE LINKS: - David's Website: https://www.davidpapineau.co.uk/ - David's Books: https://tinyurl.com/4e55a6k9 - David's Publications: https://tinyurl.com/47sdussx - David's X: https://twitter.com/davidpapineau CONNECT: - Website: https://tevinnaidu.com - Podcast: https://podcasters.spotify.com/pod/show/drtevinnaidu - Twitter: https://twitter.com/drtevinnaidu - Facebook: https://www.facebook.com/drtevinnaidu - Instagram: https://www.instagram.com/drtevinnaidu - LinkedIn: https://www.linkedin.com/in/drtevinnaidu ============================= Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor does it substitute, professional or medical advice. We do not accept any liability for any loss or damage incurred from you acting or not acting as a result of listening/watching any of our content. You acknowledge that you use the information provided at your own risk. Listeners/viewers are advised to conduct their own research and consult with their own experts in the respective fields.
Dr. Joscha Bach introduces a surprising idea called "cyber animism" in his AGI-24 talk - the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, teasing us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible. Dr. Joscha Bach https://x.com/Plinz This is video 2/9 from our coverage of AGI-24 in Seattle https://agi-conf.org/2024/ Watch the official MLST interview with Joscha which we did right after this talk on our Patreon now on early access - https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private discord and biweekly calls) TOC: 00:00:00 Introduction: AGI and Cyberanimism 00:03:57 The Nature of Consciousness 00:08:46 Aristotle's Concepts of Mind and Consciousness 00:13:23 The Hard Problem of Consciousness 00:16:17 Functional Definition of Consciousness 00:20:24 Comparing LLMs and Human Consciousness 00:26:52 Testing for Consciousness in AI Systems 00:30:00 Animism and Software Agents in Nature 00:37:02 Plant Consciousness and Ecosystem Intelligence 00:40:36 The California Institute for Machine Consciousness 00:44:52 Ethics of Conscious AI and Suffering 00:46:29 Philosophical Perspectives on Consciousness 00:49:55 Q&A: Formalisms for Conscious Systems 00:53:27 Coherence, Self-Organization, and Compute Resources YT version (very high quality, filmed by us live) https://youtu.be/34VOI_oo-qM Refs: Aristotle's work on the soul and consciousness Richard Dawkins' work on genes and evolution Gerald Edelman's concept of Neural Darwinism Thomas Metzinger's book "Being No One" Yoshua Bengio's concept of the "consciousness prior" Stuart Hameroff's theories on microtubules and consciousness Christof Koch's work on consciousness Daniel Dennett's "Cartesian Theater" concept Giulio Tononi's Integrated Information Theory Mike Levin's work on organismal intelligence The concept of animism in various cultures Freud's model of the mind Buddhist perspectives on consciousness and meditation The Genesis creation narrative (for its metaphorical interpretation) California Institute for Machine Consciousness
A middle-aged man was able to fully reverse erectile dysfunction after decades by changing his diet. We present the remarkable case and the foods that help ED with Dr. Robert Ostfeld. He is the Director of Preventative Cardiology at Montefiore Health System. The New York-based cardiologist joins "The Weight Loss Champion" Chuck Carroll on The Exam Room Live. In This Episode - The best foods for ED - The foods that cause ED - How quickly a diet change can help ED - ED is an early warning sign for heart disease - And more! — — SHOW LINKS — — Erectile Dysfunction Study 1 Register: https://redcap.link/erectilefunction — — — Erectile Dysfunction Study 2 Register: https://redcap.link/erectilefunction2 — — — Dr. Robert Ostfeld Website: https://bit.ly/CardiacProgramNYC IG: https://www.instagram.com/drostfeld X: https://twitter.com/DrOstfeld Foods That Cause ED Interview: https://youtu.be/QkCp89owJUo — — EVENTS — — Wellness Weekend Where: Davis, WV Date: September 27-28 Tickets: https://www.brendaworkmanspeaks.com/wellness-weekend — — THIS IS US — — The Exam Room Podcast Instagram: https://www.instagram.com/theexamroompodcast — — — Chuck Carroll Instagram: https://www.instagram.com/ChuckCarrollWLC X: https://www.twitter.com/ChuckCarrollWLC Facebook: http://wghtloss.cc/ChuckFacebook — — — Physicians Committee Instagram: https://www.instagram.com/physicianscommittee Facebook: https://www.facebook.com/PCRM.org X: https://www.twitter.com/pcrm — — BECOME AN EXAM ROOM VIP — — Sign up: https://www.pcrm.org/examroomvip — — SUBSCRIBE & SHARE — — 5-Star Success: Share Your Story Apple: https://apple.co/2JXBkpy Spotify: https://spoti.fi/2pMLoY3 Please subscribe and give the show a 5-star rating on Apple Podcasts, Spotify, or many other podcast providers. Don't forget to share it with a friend for inspiration!
Today, we dive deeper into the theories of consciousness in Layer 2 of The Consciousness Iceberg, exploring the Hard Problem, Carl Jung's insights, Non-Dualism, and Buddhism. Layer 1: https://youtu.be/GDjnEiys98o Listen on Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e Become a YouTube Member Here: https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) Join TOEmail at https://www.curtjaimungal.org Support TOE: - Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - TOE Merch: https://tinyurl.com/TOEmerch Follow TOE: - NEW Get my 'Top 10 TOEs' PDF + Weekly Personal Updates: https://www.curtjaimungal.org - Instagram: https://www.instagram.com/theoriesofeverythingpod - TikTok: https://www.tiktok.com/@theoriesofeverything_ - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything Join this channel to get access to perks: https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join #science #consciousness #carljung #buddhism Learn more about your ad choices. Visit megaphone.fm/adchoices
Rachel Carrell is the founder & CEO of Koru Kids, Europe's largest childcare platform— training, matching, and providing ongoing support to over 10,000 nannies across London. Rachel is one of the special ones. She was driven to found the company by what she describes as a righteous anger over how childcare has been disrespected as infrastructure in society. Today, Koru Kids takes care of taxes, payroll, pension, holiday, nanny communications, activity ideas and a dozen other things that can come up when you're dealing with nannies… bringing down the cost for families, and helping kids have more engaging, enriching experiences. Joined by co-host Phoebe Harrop, we had a lot of fun with this episode: tracing Rachel's story from hustling chocolates and home-grown lottery schemes on the streets of Invercargill … to strategising her way into a Rhodes scholarship that helped her land a place— and finally "find her people" at the University of Oxford in the UK; and on to London, where she's been ever since… 22 years and counting, consulting with McKinsey, raising a family, building, investing in, and advising startups. We explore how, counterintuitively, Rachel feels driven to consistently choose the path of most resistance … seeking the right kind of crazy … this shows up in stories from the founding of Koru Kids, to the time she had a baby in the middle of her Series A fundraise and was back out pitching the next day … to how she thinks about talent development, scaling demand and supply in marketplaces, and more. Make sure you subscribe for more stories from the diaspora every Friday!
Because of the nature of SAM, this is more video heavy than usual. See our YouTube!

Because vision is first among equals in multimodality, and yet SOTA vision language models are closed, we've always had an interest in learning what's next in vision. Our first viral episode was Segment Anything 1, and we have since covered LLaVA, IDEFICS, Adept, and Reka. But just like with Llama 3, FAIR holds a special place in our hearts as the New Kings of Open Source AI.

The list of sequels better than the originals is usually very short, but SAM 2 delighted us by not only being a better image segmentation model than SAM 1; it also conclusively and inexpensively solved video segmentation in just as elegant a way as SAM 1 did for images, and released everything to the community as Apache 2.0/CC BY 4.0.

"In video segmentation, we observe better accuracy, using 3x fewer interactions than prior approaches. In image segmentation, our model is more accurate and 6x faster than the Segment Anything Model (SAM)."

Surprisingly Efficient

The paper reports that SAM 2 was trained on 256 A100 GPUs for 108 hours (59% more than SAM 1). Taking the upper-end $2 A100 cost off gpulist.ai, that is roughly 27,600 A100-hours, or about $55k to train if it had an external market-rate cost - surprisingly cheap for adding video understanding!

The newly released SA-V dataset is also the largest video segmentation dataset to date, with careful attention given to scene/object/geographical diversity, including that of annotators. In some ways, we are surprised that SOTA video segmentation can be done on only ~50,000 videos (and 640k masklet annotations).

Model-in-the-loop Data Engine for Annotations and Demo-first Development

Similar to SAM 1, a 3-phase Data Engine helped greatly in bootstrapping this dataset. As Nikhila says in the episode, the demo you see wasn't just for show; they actually used this same tool to do annotations for the model that is now demoed in the tool:

"With the original SAM, we put a lot of effort in building a high-quality demo. And the other piece here is that the demo is actually the annotation tool. So we actually use the demo as a way to improve our annotation tool. And so then it becomes very natural to invest in building a good demo because it speeds up your annotation and improves the data quality, and that will improve the model quality. With this approach, we found it to be really successful."

A roughly 90% speedup in annotation happened due to this virtuous cycle, which helped SA-V reach its incredible scale.

Building the demo also helped the team live the context that their own downstream users, like Roboflow, would experience, and forced them to make choices accordingly. As Nikhila says:

"It's a really encouraging trend for not thinking about only the new model capability, but what sort of applications folks want to build with models as a result of that downstream. I think it also really forces you to think about many things that you might postpone. For example, efficiency. For a good demo experience, making it real time is super important. No one wants to wait. And so it really forces you to think about these things much sooner and actually makes us think about what kind of image encoder we want to use or other hardware efficiency improvements. So those kind of things, I think, become a first-class citizen when you put the demo first."

Indeed, the team swapped out standard ViT-H Vision Transformers for Hiera (Hierarchical) Vision Transformers as a result of efficiency considerations.

Memory Attention

Speaking of architecture, the model design is probably the sleeper hit of a project filled with hits. The team adapted SAM 1 to video by adding streaming memory for real-time video processing: specifically a memory attention module, a memory encoder, and a memory bank, which surprisingly ablated better than more intuitive but complex architectures like Gated Recurrent Units.

One has to wonder if streaming memory can be added to pure language models with a similar approach… (pls comment if there's an obvious one we haven't come across yet!)
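For readers who want the shape of that idea in code, here is a deliberately minimal sketch of streaming memory attention. It is our illustration of the concept, not Meta's implementation: every module below (the toy encoder, the single attention layer, the linear mask decoder) is a stand-in, and the real model also conditions on prompted frames and object pointer tokens. The point is just that each new frame cross-attends to a bounded bank of past-frame memories, so per-frame cost stays constant no matter how long the video runs.

```python
# Minimal sketch of streaming memory for video segmentation (illustrative only,
# not Meta's SAM 2 code). Per frame: encode -> cross-attend to a bounded bank of
# past-frame memories -> decode a mask -> fuse (features + mask) into a new
# memory that is appended to the bank.
from collections import deque

import torch
import torch.nn as nn


class StreamingMemorySegmenter(nn.Module):
    def __init__(self, dim=256, num_memories=7):
        super().__init__()
        self.image_encoder = nn.Conv2d(3, dim, kernel_size=16, stride=16)  # stand-in for Hiera
        self.memory_attention = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.memory_encoder = nn.Linear(dim + 1, dim)  # fuses frame features with the predicted mask
        self.mask_decoder = nn.Linear(dim, 1)          # stand-in for the prompt/mask decoder
        self.memory_bank = deque(maxlen=num_memories)  # bounded FIFO: old frames fall out

    def forward(self, frame):  # frame: (B, 3, H, W)
        feats = self.image_encoder(frame).flatten(2).transpose(1, 2)   # (B, N, dim) tokens
        if self.memory_bank:  # condition the current frame on past-frame memories
            memories = torch.cat(list(self.memory_bank), dim=1)        # (B, M*N, dim)
            feats, _ = self.memory_attention(feats, memories, memories)
        mask_logits = self.mask_decoder(feats)                         # (B, N, 1) per-token logits
        new_memory = self.memory_encoder(torch.cat([feats, mask_logits.sigmoid()], dim=-1))
        self.memory_bank.append(new_memory.detach())                   # remember this frame
        return mask_logits


# Streaming usage: masks come out frame by frame while memory stays bounded.
model = StreamingMemorySegmenter()
video = torch.randn(8, 1, 3, 64, 64)  # 8 frames, batch of 1
masks = [model(frame) for frame in video]
```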
Video Podcast

Tune in to Latent Space TV for the video demos mentioned in this video podcast!

Timestamps

* [00:00:00] The Rise of SAM by Udio (David Ding Edit)
* [00:03:07] Introducing Nikhila
* [00:06:38] The Impact of SAM 1 in 2023
* [00:12:15] Do People Finetune SAM?
* [00:16:05] Video Demo of SAM
* [00:20:01] Why the Demo is so Important
* [00:23:23] SAM 1 vs SAM 2 Architecture
* [00:26:46] Video Demo of SAM on Roboflow
* [00:32:44] Extending SAM 2 with other models
* [00:35:00] Limitations of SAM: Screenshots
* [00:38:56] SAM 2 Paper
* [00:39:15] SA-V Dataset and SAM Data Engine
* [00:43:15] Memory Attention to solve Video
* [00:47:24] "Context Length" in Memory Attention
* [00:48:17] Object Tracking
* [00:50:52] The Future of FAIR
* [00:52:23] CVPR, Trends in Vision
* [01:02:04] Calls to Action

Transcript

[00:00:00] [music intro][00:02:11] AI Charlie: Happy Yoga! This is your AI co host Charlie. Thank you for all the love for our special 1 million downloads Wins of AI Winter episode last week, especially Sam, Archie, Trellis, Morgan, Shrey, Han, and more. For this episode, we have to go all the way back to the first viral episode of the podcast Segment Anything Model and the Hard Problems of Computer Vision, which we discussed with Joseph Nelson of Roboflow.[00:02:39] AI Charlie: Since Meta released SAM 2 last week, we are delighted to welcome Joseph back as our fourth guest co host to chat with Nikhila Ravi, Research Engineering Manager at Facebook AI Research and lead author of SAM 2. Just like our SAM 1 podcast, this is a multimodal pod because of the vision element, so we definitely encourage you to hop over to our YouTube at least for the demos, if not our faces.[00:03:04] AI Charlie: Watch out and take care.[00:03:10] Introducing Nikhila[00:03:10] swyx: Welcome to the latest podcast. I'm delighted to do segment anything to our first, one of our very first viral podcasts was segment anything one with Joseph. Welcome back. Thanks so much. And this time we are joined by the lead author of Segment Anything 2, Nikki Ravi, welcome.[00:03:25] Nikhila Ravi: Thank you. Thanks for having me.[00:03:26] swyx: There's a whole story that we can refer people back to episode of the podcast way back when for the story of Segment Anything, but I think we're interested in just introducing you as a researcher, as a, on the human side what was your path into AI research? Why, you know, why did you choose computer vision coming out of your specialization at Cambridge?[00:03:46] Nikhila Ravi: So I did my undergraduate degree in engineering at Cambridge University. The engineering program is very general. 
So first couple of years, you sort of study everything from mechanical engineering to fluid mechanics, structural mechanics, material science, and also computer science.[00:04:04] Nikhila Ravi: Towards the end of my degree, I started taking more classes in machine learning and computational neuroscience, and I really enjoyed it. And actually after graduating from undergrad, I had a place at Oxford to study medicine. And so I was. Initially planning on becoming a doctor, had everything planned and then decided to take a gap year after finishing undergrad.[00:04:28] Nikhila Ravi: And actually that was around the time that sort of deep learning was emerging. And in my machine learning class in undergrad, I remember one day our professor came in and that was when Google acquired DeepMind. And so that became like a huge thing. We talked about it for the whole class. It kind of really stuck.[00:04:48] Nikhila Ravi: And I was kicked off thinking about, okay, maybe I want to try something different other than medicine. Maybe this is a different path I want to take. And then in the gap year, I did a bunch of coding, worked on a number of projects. Did some sort of freelance contracting work. And then I got a scholarship to come and study in America.[00:05:06] Nikhila Ravi: So I went to Harvard for a year, took a bunch of computer science classes at Harvard and MIT, worked on a number of AI projects, especially in computer vision. I really, really enjoyed working in computer vision. I applied to Facebook and got this job at Facebook, and I've now at Facebook at the time, now Meta, and I've been here for seven years, so very circuitous path, probably not a very unconventional, I didn't do a PhD, I'm not like a research, typical research scientist, definitely came from more of an engineering background, but since being at Meta, Have had amazing opportunities to work across so many different interesting problems in computer vision from 3D computer vision.[00:05:50] Nikhila Ravi: How can you go from images of objects to 3D structures and then going back to 2D computer vision and actually understanding the objects and the pixels and the images themselves. So it's been a very interesting journey over the past seven years.[00:06:05] swyx: It's weird because like, I guess with segment anything too, it's like 4D because you solve time, you know, you started with 3D and now you're solving the 4D.[00:06:14] Nikhila Ravi: Yeah, it's just going from 3D to images to video. It's really covering the full spectrum. And actually, one of the nice things has been, so I think I mentioned I, Wanted to become a doctor, but actually Sam is having so much impact in medicine, probably more than I could have ever had as a doctor myself. So I think, you know, hopefully Sam too can also have a similar sort of impact in medicine and other fields.[00:06:39] The Impact of SAM 1 in 2023[00:06:39] swyx: Yeah. I want to give Joseph a chance to comment. Does that also mirror your, we know your story about going into, into vision, but like in the past year, since we did our podcast on Sam what's been the impact that you've seen?[00:06:51] Joseph Nelson: Segment anything. 
Set a new standard in computer vision, you know, recapping from the first release to present: Sam introduces the ability for models to, near zero shot, meaning without any training, identify kind of perfect polygons and outlines of items and objects inside images, and that capability previously required a lot of manual labeling, lots of manual preparation, clicking very meticulously to create outlines of individuals and people.[00:07:25] Joseph Nelson: And there were some models that attempted to do zero shot segmentation of items inside images, though none were as high quality as segment anything. And with the introduction of segment anything, you can pass an image with SAM1, SAM2 videos as well, and get perfect pixel perfect outlines of most everything inside the images.[00:07:52] Joseph Nelson: Now there are some edge cases across domains and, similar to the human eye, sometimes you need to say, like, which item maybe you most care about for the downstream task and problem you're working on. Though, SAM has accelerated the rate at which developers are able to use computer vision in production applications.[00:08:13] Joseph Nelson: So, at RoboFlow, we were very quick to enable the community of computer vision developers and engineers to use SAM and apply it to their problems. The principal ways of using SAM, you could kind of use SAM as is to like pass an image and receive back masks. Another use case for SAM is in preparation of data for other types of problems.[00:08:37] Joseph Nelson: So, for example, in the medical domain, let's say that you're working on a problem where you have a bunch of images from a wet lab experiment. And from each of those images, you need to count the presence of a particular protein that reacts to some experiment. To count all the individual protein reactions, you can go in and lab assistants to this day will still like kind of individually count and say what are the presence of all those proteins.[00:09:07] Joseph Nelson: With Segment Anything, it's able to identify all of those individual items correctly. But often you may need to also add like a class name to what the protein is. Or you may need to say, hey, like, I care about the protein portion of this. I don't care about the rest of the portion of this in the image.[00:09:26] Joseph Nelson: And, or what it encourages and asks for the user to do is to provide some visual prompting to say, hey, which part, like, Sam says, hey, I can find segments of anything, but which segments do you care about? And so you can do visual prompting, which is kind of a new primitive that Sam introduced. And so at RoboFlow, we have one portion of our tool stack enables users to very quickly label data.[00:09:48] Joseph Nelson: With segment anything, Sam can already provide, hey, here's where I see the outlines of objects. Or a user can click to prompt to say, Hey, here's where the outlines of objects matter. And I recently pulled statistics from the usage of SAM in RoboFlow over the course of the last year. And users have labeled about 49 million images using segment anything on the hosted side of the RoboFlow platform.[00:10:12] Joseph Nelson: And that's like 5 million in the last 30 days alone. And of those images, we did kind of like a rough back-of-the-napkin calculation of like how much time that has saved. 
Because, again, the alternative is you're clicking individual points to create a polygon, and with SAM you just click once and it guesses where the polygon is.[00:10:32] Joseph Nelson: And I'm sure in a bit we can maybe screen share and show some examples of what this experience is like. And in that time estimation, it's like, on average it saves, you know, maybe a dozen or so seconds. And we estimate that this has probably saved on the order of magnitude of 35 years of time for users.[00:10:53] Nikhila Ravi: That's incredible.[00:10:54] Joseph Nelson: So, I mean, basically like in the first, the first year of a model being available, not only can you say, Hey, I'm just going to go use this model, those numbers, that like 49 million images, is an estimate directly related to just the hosted side. So imagine all of the users that are self hosting or using SAM for robotics applications or out in the field or offline where it's not even, like, the time or the image counts are tabulated.[00:11:20] Joseph Nelson: And we're probably talking about, you know, just a fraction of the amount of value that's actually being produced for a number of downstream tasks. So to say that the impact has been, you know, people use terms like game changing and these sorts of things. It has changed the industry. It's set a new standard.[00:11:36] Joseph Nelson: And with the release of SAM 2, I think we're about to see an acceleration of those capabilities for a lot of reasons.[00:11:42] Nikhila Ravi: That's really great to hear. I think one of the things with SAM 1 was, how many fields actually rely on manual segmentation? I think we're not really exposed to that. Maybe you are at Roboflow because you get to see all the users of these tools.[00:11:57] Nikhila Ravi: But for me, it was, you know, people working on understanding coral reef bleaching or farmers counting their cows and so many different applications that as a researcher you never get exposed to, but you can have impact towards. So I think that was really awesome to hear.[00:12:15] Do People Finetune SAM?[00:12:15] swyx: So as sort of audience surrogate, who knows less than the two of you, I'm going to ask a really dumb question maybe, but is everyone using stock Segment Anything?[00:12:23] swyx: Are they fine tuning for the medical domain? Like how on earth could it work for the medical field without fine tuning, right? Like, is that a thing?[00:12:32] Nikhila Ravi: So I mean, I can give a quick perspective from the research side. So one of the things, design decisions we made in SAM was to not have class labels. And so all the data is annotated in a class agnostic way.[00:12:48] Nikhila Ravi: So anything that has a boundary, we consider to be an object. So for example, in any image, there's lots of small objects. We might not know what the names of them are, but if you can draw a boundary around it, it's an object. So you can imagine that we have 11 million images in the SA-1B dataset, we annotated all the objects, there's many, many small objects.[00:13:12] Nikhila Ravi: And so if you think about cells, they're also kind of small objects, there's probably things in the training data that looked like it, but we didn't have to label it. 
And so that means that even when you use SAM for applications that it wasn't really trained for, because we didn't restrict it to a certain set of categories, you can actually use it out of the box without custom adaptation.[00:13:35] Nikhila Ravi: But having said that, there's probably certain domains where you need some expertise in order to be able to segment something properly. And for those use cases, Having some extra fine tuning data would probably help, and we've sort of seen that there's some papers that have come out that do this, and, you know, we'd love to hear, Joseph, how people are collecting data with SAM and fine tuning for their use cases.[00:13:59] Joseph Nelson: Once SAM came out, there were adaptations that said, could we use SAM to be, you know, like, efficient SAM? Like, basically take SAM and maybe accelerate it. And then there were domain adapted SAMs, like CellSAM, for example, out of the UC system. Now, what's interesting is, there's, like, adapting SAM to a domain, there's kind of two ways by which that's done.[00:14:21] Joseph Nelson: One is, as you mentioned, like, potentially SAM doesn't have a good concept of The objects of interest. And so you need to do domain adaptation and increase the accuracy for zero shot prediction. The second way though, is it's not fine tuning. It's actually just prompting. It's just guiding the model existing knowledge.[00:14:42] Joseph Nelson: to say which segments you care about. And both those are actually kind of equally important on the application side. You need to, like, a priori ensure that the objects of interest can be correctly segmented and maybe collect data to do that. But even if you had, like, a perfect SAM, like an omniscient SAM that could see every segment in every domain with all pixels perfectly outlined, in production, you would still need some way to Almost like signal to the model what you care about like to paint this picture if you are like a retailer and you are providing Photos of models wearing your clothing on your retail site You may care about you know only the shirt and Sam by default might segment the full person And so there's you know visual prompting that you can do to ensure that you only outline Maybe the shirt for the purposes of swapping in and out different shirts for displaying a given model on a retail page You And so I think what's interesting is that's where, like I wouldn't call it domain adaptation, but that's where, like, when you apply to industry, like, one thing that's particularly important with tooling and enabling SAM to reach its full potential.[00:15:51] swyx: That's really encouraging to hear. I should also think, like, you know, the last time we talked about this, we wanted to, the very natural addition on the class labeling side is the grounding Dino work, right? So I think people, built a grounding SAM and all the other extensions.[00:16:05] Video Demo of SAM[00:16:05] swyx: I think it's, it's probably a good time to cut to a quick demo of SAM2 for people who are, who are tuning in for SAM2 and who better to demo SAM2 than Nikki.[00:16:15] Nikhila Ravi: Sure. So I'll try to narrate what I'm what I'm doing. So audio listeners can also understand. So we have a web demo where anyone can try SAM2 on a video. Here we have a video of someone kicking a football, and I'm going to click on the football to select the object in the first frame. 
But you can actually select the object in any frame of the video, and this will work.[00:16:40] Nikhila Ravi: The next step is to hit track. So the model's now tracking this in real time. We don't save any of this, it's all running in real time. And now you can see the ball has been tracked throughout the entire video. There's even like a little bit of a challenging case here where the shoe covers the football.[00:16:59] Nikhila Ravi: And actually, you know, the model makes a little bit of a mistake, but that's okay. Because we can actually, here, the model makes a little bit of a mistake here. But you know, we can actually add a refinement click. You can add negative clicks until we get the mask that we want on this frame. And then you can hit track again, and the model will track the object, taking into account the additional information I've provided at that frame.[00:17:25] Nikhila Ravi: We've also added a couple of other fun things you can do on top of the track, like add effects. We can add you know, foreground effects, background effects. And these are just ways of showing how we can use the output from SAM2 as part of other tools like video editing tools. Other systems, so this is just a preview of what you can do with SAM2, but the really cool use cases are places where we might not have even imagined SAM2 being useful.[00:17:54] Nikhila Ravi: So we have a number of examples of things you might want to use it for. There's like underwater videos that it works actually really well for even though we, models never really seen an octopus before and octopus have a lot of moving parts that SAM2 can actually quite effectively. Keep track of all the different tentacles and we can probably see it more clearly if I desaturate the background.[00:18:18] Nikhila Ravi: We can see that actually the tracking of all the different tentacles is Quite accurate. Another challenge with video is that objects can actually become occluded. They can disappear from view and reappear. And a really fun example here is the shuffling cup game, which many of you might have seen. And so here I can click on the ball in the first frame.[00:18:41] Nikhila Ravi: I can also, You know, click on a different cup. And so here, the additional challenge is that there's three cups that look exactly the same. And then there's the ball that will get occluded by the cup. So the ball's no longer visible, the cups are all moving around, they all look the same. But the model actually keeps track of the cup that we selected.[00:19:02] Nikhila Ravi: And, as you can see at the end, here I'll jump to the end so you can see. It actually finds the cup again. I wanted to point out a couple of fun demo UX features that we added that actually really helped with this. So if you can see at the bottom, there's these swim lanes and then the swim lanes, actually the thickness of the swim lane tells you if the object's visible or not.[00:19:22] Nikhila Ravi: So at the beginning, the object's visible,[00:19:25] swyx: the object[00:19:26] Nikhila Ravi: disappears, and then the object comes back. So you can actually visually tell. When the object's being occluded and when it's not, and so it's a nice way of like, knowing if you need to go in and fix the model prediction or not. And so these are some of the UX innovations that we came up with, as well as the model innovations.[00:19:46] Joseph Nelson: One thing that I think is really notable here, there's two things. 
One is that like, I'd love to have a little bit of a discussion about how the models keeping track of the embedded scene to keep track of the ball and the cup in different places. Put a pause on that for a second.[00:19:59] Why the Demo is so Important[00:19:59] Joseph Nelson: One thing that Meta has put an emphasis on here in a much greater degree than other model releases is the demo experience of recognizing that in addition to having a model that can do zero shot segmentation, you've created a web experience that allows folks to kind of experience both the video effects but the types of UX innovations that encourage usage and adoption.[00:20:23] Joseph Nelson: It's actually kind of reminiscent of The underlying technology of ChatGPT was available prior to the web experience of ChatGPT. Can you talk a bit about why that was a consideration to your team and how you thought about the creation of The demo experience in tandem with training and releasing a new model.[00:20:41] Nikhila Ravi: Yeah, absolutely. I think that's a really great example of how, you know, Chad, GPT was really more of a UX innovation. Obviously it was like a number of research innovations that helped to get to this point. But as you said, like the underlying technology was around for a while. And, you know, putting this UX around as a chat interface helped tremendously with the.[00:21:03] Nikhila Ravi: Adoption and people understanding how it could be useful for real world use cases. And in computer vision, especially, it's so visual. The best way to show how these models work. Is by trying it on your own image or your own video with the original SAM, we put a lot of effort in building like a high quality demo.[00:21:23] Nikhila Ravi: And the other piece here is that the demo is actually the annotation tool. So we actually. Use the demo as a way to improve our annotation tool. And so then it becomes very natural to invest in building a good demo because it speeds up your annotation and improves the data quality and that will improve the model quality.[00:21:43] Nikhila Ravi: With this approach, we found it to be really successful. And obviously externally, people really liked being able to try it. I think, you know, people in fields outside of machine learning would never have tried SAM if we didn't have that demo. And I think that definitely led to a lot of the adoption in, like, diverse fields.[00:22:05] Nikhila Ravi: And so because we saw that with SAM 2, like, the demo was a priority first class citizen from day one. And so we really invested in making that. And I think with SAM2 as well, we wanted to have like a step change in the demo experience. Interactive video segmentation, I think that experience is something that maybe has not had much thought given to it.[00:22:27] Nikhila Ravi: And we really wanted to be like, okay, if we are to design a step changing video segmentation experience, what would that look like? And that really did influence our model. And annotation design as well.[00:22:40] Joseph Nelson: It's a really encouraging trend for not thinking about only the new model capability, but what sort of applications folks want to build with models as a result of that downstream.[00:22:49] Nikhila Ravi: I think it also really forces you to think about many things that you might postpone, for example, efficiency.[00:22:55] Joseph Nelson: Yes.[00:22:55] Nikhila Ravi: For a good demo experience. Making it real time is super important. No one wants to wait. 
And so it really forces you to think about these things much sooner and actually makes us think about how to, what kind of image encoder we want to use or like other hardware efficiency improvements.[00:23:13] Nikhila Ravi: So those kinds of things, I think, become a first class citizen when you put the demo first.[00:23:19] SAM 1 vs SAM 2 Architecture[00:23:19] Joseph Nelson: That's one thing I was going to ask about, and this is related to the architecture change. So SAM1 and the SAM1 demo experience. You have the encoder that's creating the embeddings of all the potential spaces.[00:23:31] Joseph Nelson: That needs to be run on a GPU. That's a relatively intensive operation. But then the query of those embeddings can be run independently and on a cheaper process. So in the SAM1 demo, the way that it was structured, and also this is the way that we have our SAM tool structured in Roboflow as well, is images go to a GPU to get all the SAM based embeddings.[00:23:53] Joseph Nelson: But then for querying those embeddings, we do that client side, in the browser, so that the user can very quickly, you know, you can move your mouse over and you get the proposed candidate masks that Sam found for that region of the image. In SAM 2 you dropped that in the web demo. And I think that's because you made some notable improvements to the rate at which encoding happens.[00:24:16] Joseph Nelson: Can you talk a bit about what led to those speed increases and, again, how that interplays with providing a fast user experience for interacting with the model.[00:24:29] Nikhila Ravi: Yeah. So the SAM2 web demo is primarily focused on video. We, we decided to just keep it simple and focus on video and on GitHub, we have a Colab notebook that shows how to run SAM2 on images.[00:24:41] Nikhila Ravi: So if you're interested in using, replacing SAM with SAM2 for images, check out GitHub, but on the SAM2 demo, it's not as straightforward to adopt the same architecture as SAM for video, because we can't send the per frame image embeddings for an entire video back to the front end. In SAM, each frame embedding was like four megabytes, but if you have a long video and that's like per frame, it would become impossible to send that back to the front end.[00:25:11] Nikhila Ravi: So, SAM 2 actually, in terms of the architecture details, I was actually just looking at this earlier, but the SAM1 model was around 630 million parameters. It's a fraction of the size of these large language models, but very small. Actually, SAM2, the largest model, is around 224 million parameters. So it's actually one third the size of the original SAM model.[00:25:38] Nikhila Ravi: So we changed the image encoder from the ViT-H in SAM to a Hiera model, which was also developed by Meta. So that definitely was something that helped. And in terms of the efficiency compared to SAM, if we were to run SAM per frame on a video versus run SAM 2, it's around six times faster to run SAM 2 than to run SAM per frame.[00:26:03] Nikhila Ravi: A number of things improved the efficiency of SAM2 such that we were actually able to run this entirely on the server and not have any component in the front end. 
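A quick bit of napkin math on why shipping per-frame embeddings to the browser was off the table, using the roughly four-megabytes-per-frame figure Nikhila mentions above; the frame rate and clip length below are our own assumptions, just to put a number on it:

```python
# Why per-frame SAM-style embeddings can't be shipped to a web front end for video.
# The ~4 MB/frame figure is from the conversation above; fps and duration are assumed.
embedding_mb_per_frame = 4
fps = 24
duration_s = 60  # a one-minute clip

total_mb = embedding_mb_per_frame * fps * duration_s
print(f"~{total_mb:,} MB (~{total_mb / 1024:.1f} GB) of embeddings for one minute of video")
# ~5,760 MB (~5.6 GB) -- hence the fully server-side design, and the push for a
# smaller, faster image encoder rather than SAM 1's cache-embeddings-in-the-browser approach.
```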
But I am very curious to see who puts this on device, like I'm pretty sure soon we'll see like an on device SAM2 or, you know, maybe even running in the browser or something, so.[00:26:25] Nikhila Ravi: I think that could definitely unlock some of these edge use cases that we were able to make a compelling web demo without having to do that.[00:26:34] swyx: Hugging Face is probably already working on a Transformers.js version of it, but totally makes sense. I want to talk about more about things from the paper, but I think we're still in this sort of demo section.[00:26:42] Video Demo of SAM on Roboflow[00:26:42] swyx: And so I want to hand it to Joseph for his demo to see what the RoboFlow site looks like.[00:26:47] Joseph Nelson: So I can, I can give some context into one key area that Nikhila, you mentioned earlier, which is: Sam has made the decision, both Sam 1 and Sam 2, to be class agnostic in terms of its predictions. And that, you then have the ability to have a generalizable, model for zero shot capability.[00:27:05] Joseph Nelson: However, in a lot of domain applications, you do want the class wise name. And so a lot of the challenge can be adding that class wise name for the, at least the annotation to an experience that we've created. That's one of the key considerations. So I will similarly share my screen and show an example.[00:27:27] Joseph Nelson: Here, I have a bunch of images, and there's a number of ways that I could annotate things, like I could prompt a large multimodal model with like grounding capabilities, you know, you could outsource it, or I can do manual labeling. And with the manual labeling, this is where we make use of models like segment anything.[00:27:45] Joseph Nelson: to propose candidate masks and make it faster. So we have, you know, this annotation pane and what we call the smart poly tool, which is powered by Segment Anything. This is currently Segment Anything 1. We're accelerating and seeing improvements from similar to what the paper shows of Segment Anything 2 performed better on E3.[00:28:06] Joseph Nelson: Images as well as video, but with a segment, anything I'm able to basically prompt regions of my image of interest. So for example, if like, I wanted to say, I want to like add the drum set. You'll see here that like, the original candidate proposal is just the bass drum, but let's say I wanted the whole drum set.[00:28:26] Joseph Nelson: So the UX primitive of being able to add and subtract candidate regions of interest is really intuitive here. And now, great, I have this outline, but in fact what I want is, I want to name that as a class. Because maybe for the model that I'm building, I want to build like a task specific model, you know, like an object detection model or an instance segmentation model.[00:28:50] Joseph Nelson: Or, you know, maybe I'm even using like a multimodal model and I want that multimodal model to refer to regions of interest in the images as a specific thing. And so I think what's, you know, really powerful is, of course, like, I get this really rich zero shot prediction. And here we have our friend Rick.[00:29:10] Joseph Nelson: So I get this really rich candidate set of predictions. But then by adding the class wise label, I can, you know, very quickly make sure that any downstream tasks are aware not just of the segment, but also of the, what is inside that segment. 
Which actually takes me to A separate point of something that I predict that's probably going to happen and Nikhil, I'm actually kind of interested why maybe your team made a conscious decision to not do this initially with SAM2.[00:29:40] Joseph Nelson: There's been an emergent set of models that are also adding open text prompting capabilities to grounding models. So for example, like you've seen models like Grounding Dino or Owlvit, which, you know, you can do. Even image to image or text to image based prompting to find regions of interest. And maybe maybe I can actually give an example of that even in the context of this same data.[00:30:05] Joseph Nelson: So if I wanted to try out, you know, grounding dino on this same set of images, I could try out, you know, prompting grounding dino for a set of different classes. And what's notable is let's do, I don't know, let's prompt for person and we'll prompt for person and prompt for I don't know, microphone.[00:30:26] Joseph Nelson: NLASC or microphone. Here I can text prompt the image and then the understanding, in this case Grounding Dino's understanding, of where people are in this image allows me to create, in this case, bounding boxes, but, you know, soon you can do segmentations or in tandem with SAM do segmentations. And, you know, we've already seen applications of using SAM2 in tandem with models like Grounding Dino or Florence 2.[00:30:54] Joseph Nelson: So that people can basically text prompt and then get the benefits of the zero shot segmentation at the same time as getting the open form querying. And in doing so, you know, we maintain a framework called like autodistill so like folks can very quickly, you know, bring some images and then using autodistill to find some ontology and then prompt and say what you want from that ontology.[00:31:19] Nikhila Ravi: So you already do this for video as well?[00:31:21] Joseph Nelson: You can apply videos or groups of images, yes. So this is using a project called Autodistill. And the concept of Autodistill is, use a base model, like a big base model, which could be like SAM or Grounding Dino, and then you pass a directory of images, which also could be video, broken into individual frames, and you pass an ontology as well.[00:31:43] Joseph Nelson: So an example I was just showing was like the hello world we have, which is like a shipping container. And then the combination of the grounding capabilities of, in the example I was showing, Florence 2 plus SAM, looks for the concept of container, and then SAM does the rich segmentation of turning that concept of container into the candidate proposal of the region, so that a user could just say, hey, I want all the shipping containers, run this across a bunch of images or video frames, And then get back the class wise labels plus the regions of interest.[00:32:17] Joseph Nelson: And this feels like a natural extension. And in fact, like the open form grounding capabilities between SAM1 and SAM2 became something the field was broadly doing. So I'm curious, like, from your perspective, one of the things I thought maybe SAM2 would do is actually add this capability natively. So I'm curious to hear, like, the conscious decision to say, hey, we want to continue to be class agnostic.[00:32:39] Extending SAM 2 with other models[00:32:39] Joseph Nelson: We don't want to add yet maybe open form text prompting as a part of finding the segments and parts of images. And I'd love to hear about like the decision to think about it that way. 
And if you are encouraged or if you want kind of like what's happening here where people are naturally combining these capabilities as something that you would expect and encourage to happen despite not having it.[00:33:00] Joseph Nelson: In the base model itself.[00:33:02] Nikhila Ravi: Yeah, it's a great question. So I think it's really cool that the community is taking SAM and taking SAM 2 and building on top of it and coming up with cool applications. We love to see that. That's exactly why we open source our work. And then in terms of why we didn't put it into SAM 2, so as you've probably seen with SAM and SAM 2, it's a fairly narrow problem.[00:33:25] Nikhila Ravi: But we really tried to make it a step change in the capability. And so with each version, we are trying to limit the focus on one thing that we can know we can do really well. And in this case, like the first SAM, it was class agnostic segmentation, but can we do it so well that it's effectively solved?[00:33:47] Nikhila Ravi: And similarly, can we do that same thing, but with video segmentation? So one step at a time, we are working on each of these problems one at a time so that we can actually deliver something that's really world class and step changing.[00:34:03] Joseph Nelson: So does that mean SAM 3 will have the text prompting problem as, like, the next challenge?[00:34:09] Nikhila Ravi: Who knows, who knows? Maybe the community will, will we'll build that too. So[00:34:15] Joseph Nelson: it makes sense to like very narrowly do something very well. And that's, I think, proven to be well accomplished.[00:34:21] Nikhila Ravi: It's like taking the, the, both the data, the model and the demo, and how can we push all three towards solving one thing really well?[00:34:30] Nikhila Ravi: So we found that. That's like a good recipe and that's what we've limited the focus of these, of each of these models.[00:34:38] swyx: This development reminds me of how, you know, when you do, and you break out the interpretability of ConvNets and you can see like, Oh, this is the edge detection one. I feel like SAM is the edge detection version equivalent.[00:34:51] swyx: And then you build up to whatever the next feature is on top of that.[00:34:54] Limitations of SAM: Screenshots[00:34:54] Joseph Nelson: Can I bring up one limitation of SAM? So like we've like even SAM one, SAM two, and the model was released at 4 PM Pacific on Monday. We're recording this at 11 AM Pacific on, on, on Thursday. So the, it's very fresh for a lot of the capabilities and.[00:35:09] Joseph Nelson: It is so clear that it is a stepwise change in the capability that, Nikhila, you mentioned your team wants to do, which is extend SAM's zero shot class agnostic capability to video, like, A plus, kind of mission accomplished. One thing that's interesting is finding, like, domain problems where there might be still domain applicability and domain adaptation that is available.[00:35:32] Joseph Nelson: One benchmark that we introduced at CVPR is this thing called RF100, which is like, seven different domain type problems that the industry commonly is working on in vision, like underwater, document processing, aerial examples, medicine examples. 
And one place where, interestingly, Segment Anything is maybe less performant than other models is handling screenshots.[00:35:57] Joseph Nelson: For example, a lot of folks that are building agents to interact with the web are particularly interested in that challenge of: given a screenshot of a computer, what are all the buttons, and how could I autonomously navigate and prompt and tell it to click? And I can show an example of how SAM kind of performs on this challenge, just to outline some of the context of this problem.[00:36:23] Joseph Nelson: But I'm curious how you think about limitations like this and what you would expect to want to be the case. So here I just have a notebook where I run SAM on the source image on the left, and then the SAM output is on the right. And this is just a screenshot of a website, where we just grab the top 100 websites by traffic and grab screenshots from them.[00:36:42] Joseph Nelson: One example of a place where I could see the community improving on SAM, and I'm curious how you think about this challenge and maybe why SAM is less well adapted for this type of problem, is processing screenshots. So I'll share my screen to give an example. For viewers that are participating here, you see an example, a screenshot of a website on the left, and then on the right is SAM 2 running on that image.[00:37:06] Joseph Nelson: And in the context of agents, folks usually want to have, like, hey, tell me all of the buttons that an agent could press, tell me maybe the headlines of the articles, tell me the individual images. And SAM 2 behaves perhaps predictably, where it outlines people in the images and some of the screen text.[00:37:22] Joseph Nelson: I'm curious how you think about a challenge like this for a model that sees everything in the world: what about handling digital contexts, and why maybe it could perform better here, and how you would expect to see improvement for domains that might have been out of distribution from the training data?[00:37:40] Nikhila Ravi: Yeah, this is a good question. So at FAIR, we don't really build with a specific use case in mind. We try to build these foundational models that can be applied to lots of different use cases out of the box. So I think in this kind of example, potentially people might want to annotate some data and[00:37:59] Nikhila Ravi: fine-tune on top of what we release. I think we probably won't build things that are very custom for different use cases. I think that's not a direction we'll go in. But as you said, the model is an annotation tool to improve the model, and so I think that's definitely the approach we want to take: we provide the tools for you to improve the model, as well as the model itself.[00:38:27] Joseph Nelson: That makes sense. Focus on as many multi- or zero-shot problems as possible, and then allow the community to pick up the torch for domain adaptation.[00:38:34] Nikhila Ravi: Yeah, absolutely. Like, we can't solve all the problems ourselves. We can't solve all the different domains. But if we can provide a sort of base hammer tool, then people can apply it to all their different problems.[00:38:48] SAM 2 Paper[00:38:48] swyx: If you don't mind, I guess we want to transition to asking a little bit more about the paper.[00:38:53] Udio AI: Sure.[00:38:54] swyx: There's a lot in here.
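As a rough illustration of the screenshot experiment described above, the sketch below runs segment-anything's automatic mask generator on a single screenshot; the checkpoint and file names are assumptions. The output makes the limitation visible: the masks are class-agnostic regions with no notion of "button" or "headline", which is exactly what agent builders would still have to layer on top.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Assumed checkpoint path; any SAM checkpoint from the segment-anything repo works.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

screenshot = cv2.cvtColor(cv2.imread("top_site_screenshot.png"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(screenshot)  # one dict per proposed region

# Each record carries only geometry and confidence, no semantic label.
for region in sorted(masks, key=lambda r: r["area"], reverse=True)[:10]:
    x, y, w, h = region["bbox"]
    print(f"region at ({x}, {y}) size {w}x{h}, predicted IoU {region['predicted_iou']:.2f}")
```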
I love the transparency from Meta recently, with Llama 3 last week, and, was it last week? Maybe a little bit less than a week ago. But it's just really, really well written, with a lot of disclosures, including about the dataset as well.[00:39:08] SA-V Dataset and SAM Data Engine[00:39:08] swyx: I think the top question that people had on the dataset: you know, you released a diverse set of videos, and there's a lot of discussion about the data engine as well, which I really love and think is innovative. I think the top question is, how do you decide the size of the dataset?[00:39:22] swyx: You know, what were you constrained by? People are asking about scaling laws. You had some ablations, but as a research manager for this whole thing, how do you decide what you need?[00:39:32] Nikhila Ravi: Yeah, I mean, it's a great question. I think, as with all papers, you write them at the end of the project, so we can put these nice plots at the end, but going into it, I think, you know, the data engine design really follows[00:39:47] Nikhila Ravi: the model design: how we thought about the task, how we thought about the model capabilities. You can really see it reflected in the different phases of the data engine. We started with just SAM: we apply SAM per frame. That's the most basic way of extending SAM to video. Then the most obvious thing to do is to take the output masks from SAM and then provide them as input into a video object segmentation model that takes the mask as the first-frame input.[00:40:19] Nikhila Ravi: And that's exactly what we did. We had SAM plus a version of SAM 2 that only had a mask as input. And then in the last phase, we got rid of SAM entirely and just had this one unified model that can do both image and video segmentation, and can do everything in just one model. And we found that, you know, going from each phase, it both improved the efficiency and it improved the data quality.[00:40:46] Nikhila Ravi: And in particular, when you get rid of this two-part model, one of the advantages is with refinement clicks. So you prompt the model in one frame to select an object, then you propagate those predictions to all the other frames of the video to track the object. But if the model makes a mistake and you want to correct it, when you have this unified model, you only need to provide refinement clicks.[00:41:14] Nikhila Ravi: So you can provide maybe a negative click to remove a region or a positive click to add a region. But if you had this decoupled model, you would have to delete that frame's prediction and re-annotate from scratch. And so you can imagine, for more complex objects, this is actually adding a lot of extra time to redefine that object every time you want to make a correction.[00:41:39] Nikhila Ravi: So both the data and the data engine phases really follow how we thought about the model design and the evolution of the capabilities, because it really helped us to improve the data quality and the annotation efficiency as well.[00:41:54] swyx: Yeah, you had a really nice table with the time taken to annotate, and it was just going down and down. I think it was down by like 90 percent by the time you hit stage[00:42:02] Joseph Nelson: three, which is kind of cool. We joke that when SAM 1 came out, at Roboflow we were like, was this purpose-built for our software?
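The refinement-click workflow Nikhila describes maps onto the interactive video API in the open-source sam2 repository. The sketch below is approximate rather than authoritative: the config and checkpoint names are assumptions, and the exact function names (for example `add_new_points` versus `add_new_points_or_box`) differ between releases, so treat it as the shape of the loop rather than copy-paste code.

```python
import numpy as np
import torch
from sam2.build_sam import build_sam2_video_predictor

# Assumed config/checkpoint names; use whatever your sam2 installation ships with.
predictor = build_sam2_video_predictor("sam2_hiera_l.yaml", "sam2_hiera_large.pt")

with torch.inference_mode():
    state = predictor.init_state(video_path="./video_frames")  # directory of JPEG frames

    # A single positive click on frame 0 selects the target object (object id 1).
    predictor.add_new_points(
        inference_state=state, frame_idx=0, obj_id=1,
        points=np.array([[210, 350]], dtype=np.float32),
        labels=np.array([1], dtype=np.int32))  # 1 = positive click

    # Propagate the mask through the whole video via the memory mechanism.
    video_masks = {}
    for frame_idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
        video_masks[frame_idx] = (mask_logits[0] > 0.0).cpu().numpy()

    # If the track drifts, one corrective click on the offending frame is enough;
    # there is no need to delete and re-annotate the object as with a decoupled pipeline.
    predictor.add_new_points(
        inference_state=state, frame_idx=150, obj_id=1,
        points=np.array([[275, 340]], dtype=np.float32),
        labels=np.array([0], dtype=np.int32))  # 0 = negative click
```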
Like, you have the embedding take a big model, and the querying of the embeddings happens with a smaller model in the browser, which felt remarkably aligned.[00:42:18] Joseph Nelson: Now, hearing you talk about how you think about building models with a demo in mind, it makes sense. Like, you're thinking about the ways that folks downstream are going to be consuming and creating value. So what felt like maybe a coincidence was perhaps a deliberate choice by Meta to take into account how industry is going to take seminal advances and apply them.[00:42:36] Nikhila Ravi: Yeah. And it's not just humans. It could also be a model that outputs boxes that then get fed into this model. So really thinking about this as a component that could be used by a human, or as a component that's part of a larger AI system. And that has, you know, a number of design requirements: it needs to be promptable,[00:42:56] Nikhila Ravi: it needs to have the zero-shot generalization capability, we need it to be real time, and those requirements really are very core to how we think about these models.[00:43:08] Memory Attention to solve Video[00:43:08] swyx: I cannot end this podcast without talking about the architecture, because this is effectively the research-level, architecture-level innovation that enabled what I've been calling object permanence for SAM.[00:43:22] swyx: And it's memory attention. What was the inspiration going into it? And, you know, what did you find?[00:43:27] Nikhila Ravi: Yeah, so at a high level, the way we think about extending SAM to video is that an image is just a special case of a video that has one frame. With that idea in mind, we can extend the SAM architecture to be able to support segmentation across videos.[00:43:45] Nikhila Ravi: So this is a quick video that shows how this works. In the SAM architecture, we have the image encoder, we have a prompt encoder, we have a mask decoder. You can click on an image, and that basically is a prompt; we use that prompt along with the image embedding to make a mask prediction for that image. Going to SAM 2, we can also apply SAM 2 to images, because we can, as I said, treat an image as a video with a single frame.[00:44:15] Nikhila Ravi: And so in the SAM 2 architecture, we introduce this new memory mechanism that consists of three main components. There's memory attention, there's a memory encoder, and then there's a memory bank. And when we apply SAM 2 to images, these are effectively not used, and the architecture just collapses down to the original SAM architecture.[00:44:35] Nikhila Ravi: But when we do apply this to video, the memory components become really useful, because they provide the context of the target object from other frames. And so this could be from past frames. There's two types of memory: there's the conditioned frames, or the prompted frames, which are basically the frames at which a user or a model provides input like clicks.[00:45:01] Nikhila Ravi: And then there's the surrounding frames, and say we use six frames around the current frame as memory of the object. So it's both those types of memory that we use to make the prediction. Going into a little bit more detail about that, there's two kinds of memory that we use.[00:45:18] Nikhila Ravi: So one is spatial memory: this high-resolution memory that captures the spatial details.
And then we also have this longer-term object pointer memory that captures some of the higher-level concepts. And I think, Swyx, you had a comment about how this relates to the context window in LLMs.[00:45:37] Nikhila Ravi: And both of these types of memory have some relation to a context window: they both provide different types of information, on the spatial side or in terms of the concept of the objects that we want to track. And so we found that having a six-frame length for the spatial memory, coupled with this longer period of the object pointer memory, provides strong video segmentation accuracy at high speed.[00:46:01] Nikhila Ravi: So, as I mentioned, the real-time aspect is really important. We have to find this speed-accuracy trade-off. And one way in which we sort of circumvent this is by allowing additional prompts on subsequent frames. So even if the model makes a mistake, maybe it loses the object after an occlusion, you can provide another prompt, which actually goes into the memory.[00:46:24] Nikhila Ravi: And so the prompted frames are always in the memory. And so if you provide a prompt on a frame, the model will always remember what you provided. And so that's a way in which we can avoid some of the model failure cases. That's actually a big limitation of current video object segmentation models: they[00:46:45] Nikhila Ravi: don't allow any way to recover if the model makes a mistake. And so, Joseph, going back to your point about the demo, that's something that we found just by playing with these models. There's no way to make a correction, and in many real-world use cases, it's not going to be a one-time prediction, but you actually want to be able to intervene. Like, if an LLM makes a mistake, you can actually be like, no, actually do it this way, and provide feedback. And so we really want to bring some of that thinking into how we build these computer vision models as well.[00:47:16] "Context Length" in Memory Attention[00:47:16] swyx: Amazing. My main reaction to finding out about the context length of eight input frames and six past frames as the default is: why not 60? Why not 600? In text language models, we're very used to severely extending context windows. And what does that do to the memory of your model?[00:47:35] Nikhila Ravi: So I think maybe one thing that's different is that the object in video, it is challenging:[00:47:41] Nikhila Ravi: objects can, you know, change in appearance, there's different lighting conditions, they can deform. But I think a difference to language models is probably that the amount of context that you need is significantly less than maintaining a long multi-turn conversation. And so, you know, coupling this short-term spatial memory with these longer-term object pointers, we found, was enough.[00:48:03] Nikhila Ravi: So I think that's probably one difference between vision models and LLMs.[00:48:09] Object Tracking[00:48:09] Joseph Nelson: I think so. If one wanted to be really precise with how the literature refers to object re-identification, object re-identification is not only what SAM does for identifying that an object is similar across frames, it's also assigning a unique ID.[00:48:25] Joseph Nelson: How do you think about models keeping track of occurrences of objects, in addition to seeing that the same-looking thing is present in multiple places?[00:48:37] Nikhila Ravi: Yeah, it's a good question.
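To make the two kinds of memory concrete, here is a deliberately simplified, hypothetical sketch of a memory bank along the lines described above: a small rolling window of recent spatial features plus an ever-growing set of prompted frames, both of which the current frame cross-attends over. All names, shapes, and the attention module are illustrative assumptions, not the SAM 2 implementation.

```python
from collections import deque

import torch
import torch.nn as nn


class ToyMemoryBank(nn.Module):
    """Illustrative only: stores spatial features for the last `window` unprompted
    frames plus every prompted frame, and lets the current frame attend over both."""

    def __init__(self, dim: int = 256, window: int = 6):
        super().__init__()
        self.recent = deque(maxlen=window)  # rolling short-term spatial memory
        self.prompted = []                  # prompted frames are never evicted
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def write(self, frame_feats: torch.Tensor, was_prompted: bool) -> None:
        # frame_feats: (tokens, dim) features produced by some memory encoder.
        if was_prompted:
            self.prompted.append(frame_feats)
        else:
            self.recent.append(frame_feats)

    def read(self, current_feats: torch.Tensor) -> torch.Tensor:
        # current_feats: (tokens, dim) features of the frame being segmented.
        if not self.recent and not self.prompted:
            return current_feats
        memory = torch.cat(list(self.recent) + self.prompted, dim=0).unsqueeze(0)
        query = current_feats.unsqueeze(0)
        fused, _ = self.attn(query, memory, memory)  # "memory attention"
        return fused.squeeze(0)
```

Because prompted frames are never evicted in this sketch, a later corrective click stays in memory for the rest of the video, which is the recovery behaviour discussed above.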
I think, you know, SAM 2 definitely isn't perfect, and there are many limitations that we'd love to see people in the community help us address. But one definitely challenging case is where there are multiple similar-looking objects, especially in a crowded scene with multiple similar-looking objects: keeping track of the target object is a challenge.[00:49:03] Nikhila Ravi: That's still something that I don't know if we've solved perfectly, but again, the ability to provide refinement clicks, that's one way to sort of circumvent that problem. In most cases, when there are lots of similar-looking objects, if you add enough refinement clicks, you can get the perfect track throughout the video.[00:49:22] Nikhila Ravi: So definitely that's one way to solve that problem. You know, we could have better motion estimation, we could do other things in the model to be able to disambiguate similar-looking objects more effectively.[00:49:35] swyx: I'm just interested in leaving breadcrumbs for other researchers, anyone interested in this kind of architecture.[00:49:41] swyx: Like, are there papers that you would refer people to that are influential in your thinking, or, you know, have other interesting alternative approaches?[00:49:49] Nikhila Ravi: I think there are other ways in which you can do tracking in video. You might not even need the full mask. There are some other works that just track points on objects.[00:49:59] Nikhila Ravi: It really, really depends on what your application is. If you don't care about the entire mask, you could just track a bounding box, you could just track a point on an object. And so having the high-fidelity mask might not actually be necessary for certain use cases. From that perspective, you might not need the full capabilities[00:50:19] Nikhila Ravi: of SAM or SAM 2. There are many different approaches to tracking. I would encourage people to think about what they actually need for their use case and then try to find something that fits, versus, yeah, maybe SAM 2 is too much; you know, maybe you don't even need the full mask.[00:50:37] swyx: Makes total sense. But you have solved the problem that you set out to solve, which is no mean feat, and which is something that we're still appreciating even today.[00:50:44] The Future of FAIR[00:50:44] swyx: If there are no further questions, I would just transition to sort of forward-looking, future-looking stuff. Joseph already hinted at our interest in SAM and the future of SAM, and obviously you're the best person to ask about that. I'm also interested in how external people should think about FAIR. You know, there's all this stuff going on: Llama, Chameleon, Voicebox, ImageBind. How are things organized?[00:51:09] swyx: And, you know, where are things trending?[00:51:11] Nikhila Ravi: Yeah, so in FAIR, we have a number of different research areas. I work in an area called perception. So we build vision systems that basically look at all the fundamental problems in computer vision: can we build a step change in all of these different capabilities?[00:51:29] Nikhila Ravi: SAM was one example. SAM 2 is another example. There are tons of other problems in computer vision where we've made a lot of progress, but can we really say that they're solved? And so that's really the area I work on.
And then there are a number of other research areas in language and in embodied AI,[00:51:49] Nikhila Ravi: and more efficient models, and various other topics. So FAIR in general is still very much pushing the boundaries on solving these foundational problems across different domains.[00:52:07] swyx: Well, fair enough. Maybe just outside of FAIR, just the future of computer vision, right?[00:52:10] CVPR, Trends in Vision[00:52:10] swyx: Like, you are very involved in the community. What's the talk of the town at CVPR? Both of you went. Who's doing the most interesting work? It's a question for both of you.[00:52:19] Joseph Nelson: I think the trend we're seeing towards more zero-shot capability for common examples will accelerate. I think multimodality, meaning using, you know, images in tandem with text for richer understanding, or images and video in tandem with audio and other mixed media, will be a continued acceleration trend.[00:52:43] Joseph Nelson: The way I kind of see the field continuing to progress: the problem statement of computer vision is making sense of visual input. And I think about the world as, the things that need to be observed follow your traditional bell curve, where the things that most frequently exist out in the world are in the center of that bell curve.[00:53:05] Joseph Nelson: And then there are things that are less frequently occurring, which are in those long tails. For example, you know, as far back as 2014, you have the COCO dataset, which sets out to say, hey, can we find 80 common objects in context, like silverware and fridges and these sorts of things. And we also conceptualized the challenge of computer vision in terms of breaking it down into individual task types, because those were the tools we had for the day.[00:53:29] Joseph Nelson: So that's why, you know, you have the origination of classification, object detection, instance segmentation. And then, as you see things continue to progress, you have models and things that need to observe areas in the long tails. And so if you think of the COCO dataset as the center of that bell curve, I think of the long tails as, like, really edge-case problems.[00:53:49] Joseph Nelson: Some of our customers, like Rivian, for example: only Rivian knows what the inside of a Rivian should look like as it's assembled and put together before it makes its way to a customer, and they're making custom parts, right? So how could a model have been trained on the things that go inside the componentry of producing a vehicle? And what's kind of happening with computer vision is you're seeing models that generalize in the middle of the bell curve push outward faster.[00:54:17] Joseph Nelson: That's where you see the advent of open-text models, or the richness of understanding of multimodal models, to allow richer understanding without perhaps any training, or maybe just using pre-training and applying it to a given problem. And then there's, you know, kind of the messy middle in between those two, right?[00:54:38] Joseph Nelson: So, Nikhila kind of talked about examples where SAM does well out of distribution, where it finds an octopus even though there weren't octopi in the training data.
I showed an example with screenshots, where SAM isn't yet super great, so maybe that's in the messy middle or in the longer tails for now.[00:54:54] Joseph Nelson: But what's going to happen is there need to be systems of validating. The point of view that I think about is, like, tooling to validate that models are doing what we want them to do, adapting to datasets that we want them to adapt to. And so there are a lot of things, on a forward-looking basis, that allow propelling that expansion of generalizability.[00:55:14] Joseph Nelson: That's for open-text problems. That's where scaling up of training, of dataset curation, continues to play a massive role. Something that's notable, I think, about SAM 2 is, it's, what, 57,000 videos?[00:55:30] Nikhila Ravi: 51,000 videos. About 51,000, yeah.[00:55:32] Joseph Nelson: And 100,000 internal datasets. That's, like, not massive, right? And the model size also isn't, you know, the largest, the largest model being a couple hundred million parameters.[00:55:43] Joseph Nelson: The smallest model is 38 million parameters and can run at 45 FPS on an A100, right? So we're going to see more capable, more generalizable models being able to run on a wider array of problems with zero- or multi-shot capability at a faster rate. And I think the architecture innovations in things like SAM 2, like memory, and increasingly transformers making their way into vision, and probably blended architectures increasingly too.[00:56:15] Joseph Nelson: So my viewpoint, on a go-forward basis, is we will have that bell curve of what humans can see, both in the center of that curve and the long tails, and architectural changes allow richer understanding, multi- and zero-shot, and putting those into systems, into industry, and into contexts that allow using them in practical and pragmatic ways.[00:56:38] Joseph Nelson: Nikhila, I'd love to hear your thoughts and perspective on how you think the research trends map or don't map to that, and maybe some of the key innovations that you saw at CVPR this year that got you excited about the direction, and maybe some promising early directions that you're thinking about researching or pushing the boundaries of further.[00:56:56] Nikhila Ravi: Yeah, I just wanted to actually reply to a couple of things that you said. So actually, in video object segmentation, the number of classes that are annotated, and the size of these datasets, are really small. So with SAM, you know, we had a billion masks, we had 11 million images, and they didn't have class labels.[00:57:17] Nikhila Ravi: But even before that, there were a lot of image datasets that are annotated with a lot of class labels, whereas in video datasets the number of class labels is very small. So there's YouTube-VOS, which has 94 object categories, and there's MOSE, which has around 30 or so object categories.[00:57:38] Nikhila Ravi: And they're usually, like, people, cars, dogs and cats, and all these common objects, but they don't really cover a very large number of object categories. And so while SAM learned this general notion of what an object is in an image,
these video tracking models actually don't have that knowledge at all.[00:58:01] Nikhila Ravi: And so that's why having this dataset is really important for the segment anything capability in video, because if you just provide the mask as the input to an off-the-shelf video object segmentation model, it might not actually be able to track that arbitrary object mask as effectively as a SAM 2 model that's actually trained to track any object across the entire video.[00:58:24] Nikhila Ravi: So combining two models together to try to get that capability will actually only get you so far, and being able to actually create the dataset to enable that anything capability was really important. And we can actually see that when we do comparisons with baselines, where we provide SAM 2 with the same input mask and the baseline model with the same input mask.[00:58:53] Nikhila Ravi: For example, the t-shirt of a person: SAM 2 can track the t-shirt effectively across the entire video, whereas these baselines might actually start tracking the entire person, because that's what they're used to doing, and isolating it to just one part of the person is not something they were ever trained to do. And so those are some of the limitations.
The Rational Egoist: The Hard Problem of Consciousness and an Integrated Theory of Psychology with Gregg Henriques In this episode of The Rational Egoist, host Michael Liebowitz is joined by Gregg Henriques, a psychologist and professor for the Combined-Integrated Doctoral Program at James Madison University in Harrisonburg, VA. Henriques delves into the "hard problem" of consciousness, exploring the complexities of understanding subjective experience. He also discusses his vision for moving toward an integrated theory of psychology, aiming to bridge gaps between various psychological disciplines. This episode promises to offer profound insights into the nature of consciousness and the future of psychological science. Tune in for an intellectually stimulating conversation that tackles some of the most challenging questions in psychology. Michael Leibowitz, host of The Rational Egoist podcast, is a philosopher and political activist who draws inspiration from Ayn Rand's philosophy, advocating for reason, rational self-interest, and individualism. His journey from a 25-year prison sentence to a prominent voice in the libertarian and Objectivist communities highlights the transformative impact of embracing these principles. Leibowitz actively participates in political debates and produces content aimed at promoting individual rights and freedoms. He is the co-author of “Down the Rabbit Hole: How the Culture of Correction Encourages Crime” and “View from a Cage: From Convict to Crusader for Liberty,” which explore societal issues and his personal evolution through Rand's teachings.Explore his work and journey further through his books:“Down the Rabbit Hole”: https://www.amazon.com.au/Down-Rabbit-Hole-Corrections-Encourages/dp/197448064X“ View from a Cage”: https://books2read.com/u/4jN6xj join our Ayn Rand Adelaide Meetups here for some seriously social discussions on Freedom https://www.meetup.com/adelaide-ayn-rand-meetup/
WATCH: https://youtu.be/rZX7hSK8-u4 Tony Nader is a Medical Doctor trained at Harvard University and Massachusetts Institute of Technology (PhD in Neuroscience), and a globally recognised Vedic Scholar. @DrTonyNader is an author and fellow podcaster, with his book titled: "One Unbounded Ocean of Consciousness" and his podcast titled: "Consciousness Is All There Is", which also happens to be the name of his NEW book (link below). As Maharishi Mahesh Yogi's successor, Dr Nader is head of the international Transcendental Meditation® organisations in over 100 countries. From the Americas to Asia, from Europe to Africa, Dr Nader guides the Transcendental Meditation program and its advanced practices, and the practical applications of this technology in all areas of national life – education, health, business, defense, agriculture, and more. TIMESTAMPS: (0:00) - Introduction (2:01) - Consciousness vs consciousness (big "C" vs small "c") (4:38) - How does Monistic Idealism explain concepts like Space, Time, & Matter? (10:25) - The Hard Problem of Physicality (16:18) - The Canvas of Manifestation (20:29) - Other theories of consciousness (25:05) - David Lynch's (filmmaker and best-selling author) interest in Tony's Work (29:03) - The Consciousness Paradigm (Simplest Idea with the Highest Explanatory Power) (33:44) - Higher States of Consciousness (40:00) - Altered States of Consciousness (47:15) - Pragmatic & Ethical Implications of Monistic Idealism (52:45) - What is the Nature of Consciousness? (57:48) - Freedom & Choice (1:02:20) - Human Flourishing (1:08:37) - Join Tony's Transcendental Experience (1:13:37) - Final thoughts (1:16:13) - Conclusion EPISODE LINKS: - Tony's Website: https://www.drtonynader.com/ - Pre-order Tony's New Book: https://www.drtonynader.com/preorder/ - Tony's Podcast: https://www.youtube.com/@DrTonyNader - Tony's Round 1: https://youtu.be/PFHiMYKubrU - Tony's Interview: https://youtu.be/hHBBpsF-K3U - Consciousness Is All There Is: https://youtu.be/T2_GAcVb8yE?feature=shared CONNECT: - Website: https://tevinnaidu.com - Podcast: https://podcasters.spotify.com/pod/show/drtevinnaidu - Twitter: https://twitter.com/drtevinnaidu - Facebook: https://www.facebook.com/drtevinnaidu - Instagram: https://www.instagram.com/drtevinnaidu - LinkedIn: https://www.linkedin.com/in/drtevinnaidu ============================= Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor does it substitute, professional or medical advice. We do not accept any liability for any loss or damage incurred from you acting or not acting as a result of listening/watching any of our contents. You acknowledge that you use the information provided at your own risk. Listeners/viewers are advised to conduct their own research and consult with their own experts in the respective fields.
Annaka Harris dives deep into some of the most profound and perplexing questions about the nature of consciousness, perception, free will, AI, and the underlying meaning of love and existence. Annaka begins by defining consciousness and exploring the "hard problem". She discusses neuroscientific insights into how the brain processes conscious experiences, and how our intuitions about the nature of the self and decision-making can often mislead us. The conversation then ventures into the realms of plant consciousness, the criteria for discerning whether something is truly conscious and capable of suffering, and the idea that consciousness may be a fundamental feature of the universe. She shares her personal experience with using meditation to transcend the illusory nature of the self. Red light therapy: Go to https://BonCharge.com/KnowThyself and use code KNOWTHYSELF to save 15% André's Book Recommendations: https://www.knowthyself.one/books ___________ Timecodes: 0:00 Intro 2:20 Defining Consciousness 6:25 Why the 'Hard Problem' is Hard 14:48 How the Brain Processes Conscious Experiences 19:30 You're Not Crazy, You're Waking Up 25:37 How Your Intuitions May Lead You Astray 29:12 Are Plants Conscious? 39:28 Discerning What Makes Something Conscious or Able to Suffer 47:51 Boncharge: Red Lights 15% Off 49:01 Pan-psychism & Consciousness as Fundamental 1:02:27 Consciousness at a Molecular Level 1:15:35 Illusory Nature of Self 1:21:57 Transcending the Self Through Meditation 1:32:31 Decision Making & The Readiness potential 1:43:10 Free Will vs Conscious Will 1:44:52 The Love Underneath it All 1:50:46 Experimental Science & the Language Barrier to Describing This 1:53:58 Annaka's Personal Path to Studying Consciousness 2:01:40 Life's Inherent Intelligence & Meaning 2:08:51 Artificial Intelligence 2:14:16 Do Aliens Exist? 2:18:40 Seeing the Bigger Picture 2:23:05 Conclusion ___________ Annaka Harris is the New York Times bestselling author of CONSCIOUS: A Brief Guide to the Fundamental Mystery of the Mind. Her work has appeared in The New York Times, Nautilus Magazine, the Journal of Consciousness Studies, and IAI Magazine. She is also an editor and consultant for science writers, specializing in neuroscience and physics. Annaka is the author of the children's book I Wonder, coauthor of the Mindful Games Activity Cards, and a volunteer mindfulness teacher for the organization Inner Kids. Website: https://annakaharris.com Instagram: https://www.instagram.com/annakaharrisprojects/ ___________ Know Thyself Instagram: https://www.instagram.com/knowthyself/ Website: https://www.knowthyself.one Clips Channel: https://www.youtube.com/channel/UCJ4wglCWTJeWQC0exBalgKg Listen to all episodes on Audio: Spotify: https://open.spotify.com/show/4FSiemtvZrWesGtO2MqTZ4?si=d389c8dee8fa4026 Apple: https://podcasts.apple.com/us/podcast/know-thyself/id1633725927 André Duqum Instagram: https://www.instagram.com/andreduqum/
Ogi Ogas is a Mathematical Neuroscientist and Author. He attained his PhD in Computational Neuroscience at Boston University. He was a United States Department of Homeland Security Fellow during his graduate studies, and is the director of the Dark Horse Project in the Laboratory for the Science of Individuality at the Harvard Graduate School of Education. He is the author of several books including “A Billion Wicked Thoughts” (2011), “This is What It Sounds Like: What the Music You Love Says About You” (2022), “Journey of the Mind: How Thinking Emerged from Chaos” (2022), and "Consciousness: How It's Made: A Super-Simple Explanation for Everyone" (2024), among several others. His work focuses on a unified account of the mind that explains how consciousness, language, the Self, and civilization emerged incrementally out of chaos. A Grand Unified Theory of Consciousness. TIMESTAMPS: (0:00) - Introduction (0:43) - "The Hard Problem" (10:50) - Subjectivity vs Objectivity (17:00) - IIT & Panpsychism (21:13) - Stephen Grossberg's Adaptive Resonance Theory (28:55) - Ogi's take on Autism (50:55) - Different States of Consciousness (1:08:05) - UAPs/UFOs (1:13:43) - Intex (1:28:40) - Steve Grossberg's Incredible Work (1:36:35) - Final thoughts on Reality & the Universe (1:42:20) - What's on Ogi's mind right now? (1:48:40) - Conclusion EPISODE LINKS: - Ogi's Round 1: https://youtu.be/TmNqg1vssoo - Ogi's Website: https://www.ogiogas.com/ - Ogi's Books: https://tinyurl.com/5csz8y2m - Journey of the Mind: https://thejourneyofthemind.com/ CONNECT: - Website: https://tevinnaidu.com/ - Podcast: https://podcasters.spotify.com/pod/show/drtevinnaidu - Twitter: https://twitter.com/drtevinnaidu/ - Facebook: https://www.facebook.com/drtevinnaidu - Instagram: https://www.instagram.com/drtevinnaidu/ - LinkedIn: https://www.linkedin.com/in/drtevinnaidu/ ============================= Disclaimer: The information provided on this channel is for educational purposes only. The content is shared in the spirit of open discourse and does not constitute, nor does it substitute, professional or medical advice. We do not accept any liability for any loss or damage incurred from you acting or not acting as a result of listening/watching any of our contents. You acknowledge that you use the information provided at your own risk. Listeners/viewers are advised to conduct their own research and consult with their own experts in the respective fields.
This episode, first released on the “Into the Impossible” channel with Dr. Brian Keating, brings together the brilliant minds of John Vervaeke and Shawn Coyne to discuss the advent of artificial general intelligence and its potential consequences. The conversation starts with the motivations behind major tech figures' drive towards AI development and touches upon the issues of trust, adaptation, and the inherent human susceptibility to self-deception. Vervaeke and Coyne, through their book "Mentoring the Machines: Surviving the Deep Impact of the Artificially Intelligent Tomorrow," advocate for a nuanced understanding of AI, urging for a mentorship approach to machine development that could ensure AI's alignment with human flourishing. Their dialogue also ventures into the realms of psychology, cognitive science, and the philosophical underpinnings of AI, making a compelling case for the transformative power of AI, not only technologically but also existentially for humanity. Bios and Links: Dr. Brian Keating is the Chancellor's Distinguished Professor of Physics at UC San Diego, specializing in cosmic microwave background research to explore the universe's origins. An acclaimed writer, his book "Losing the Nobel Prize" is an Amazon Editors' favorite. He excels as a public speaker, inventor, and podcaster. Explore more at his website, follow him on Twitter, or watch his insights on YouTube. Shawn Coyne, creator of Story Grid, brings over three decades of publishing expertise, notably with the Big Five publishers, as an independent publisher, literary agent, and head of Genre Management Inc. Dive into his editing method and explore more at Story Grid. Embark on a journey with us to tackle the Meaning Crisis by joining our exclusive Patreon group: John Vervaeke | Responding to The Meaning Crisis with The Vervaeke Foundation. Connect with John: Website | YouTube | Patreon | X Resources: The Vervaeke Foundation Awaken to Meaning Mentoring the Machines: Orientation - Part One: Surviving the Deep Impact of the Artificially Intelligent Tomorrow - John Vervaeke, Shawn Coyne Mentoring the Machines: Origins - Part 2: Surviving the Deep Impact of the Artificially Intelligent Tomorrow - John Vervaeke, Shawn Coyne John Vervaeke Video Essay: AI: The Coming Thresholds and The Path We Must Take | Internationally Acclaimed Cognitive Scientist Quotes: "We should really be framing artificial intelligence as a mentoring of intelligent beings who have the capability and potentialities of becoming even perhaps better than we are." - Shawn Coyne [00:05:52] "It's only when you have genuine intelligence for the actual system or entity itself—an autopoietic system—a system that cares about information because it's taking care of itself in a moment by moment basis. Only then could you have something that would actually care about what's going on—the true, the good, or the beautiful." - John Vervaeke [00:15:05] Glossary of Terms: AGI (Artificial General Intelligence): A level of artificial intelligence that can understand, learn, and apply knowledge across a wide range of tasks at a level of competence comparable to or surpassing that of a human. Relevance Realization: The process by which cognitive beings determine what information is relevant to their goals and what is not. Autopoiesis: The property of a living system (such as a bacterial cell or a multicellular organism) that allows it to maintain and renew itself. 
Chapters: 00:00:00 - Introduction 00:02:45 - The Genesis of "Mentoring the Machines" 00:08:50 - AI, Psychology, and the Alignment Problem 00:16:40 - The Evolution of Editing and Publishing in the AI Era 00:21:00 - Bridging Knowledge and Wisdom 00:29:00 - Einstein, Imagination, and AI's Emotional Depth 00:37:30 - Deciphering Consciousness: AI and the Hard Problem 00:44:40 - Educational Evolution: AI, Pedagogy, and the Future of Teaching 00:53:50 - AI's Impact on Personalized Storytelling 00:58:30 - AI, Psychology, and the Future of Psychotherapy 01:04:20 - Conclusion
Prof. Joel Pearson (Neuroscientist; AI and intuition expert) developed the first scientific test to measure intuition, dragging it out of the woo-woo realm and into a cognitive framework. He's now written The Intuition Toolkit: The New Science of Knowing What without Knowing Why to show us how and when to use this mysterious superpower in our lives (not while rock-climbing on a date, not at a casino!).Joel is the founder and Director of Future Minds Lab which applies neuroscience findings to art, AI, media, advertising and various philosophical quandaries. He's also a National Health and Medical Research Council fellow and Professor of Cognitive Neuroscience at the University of New South Wales, Australia.In this chat we cover when and how to use intuition, why intuition is hijacked by anxiety and depression, whether AI will ever be able to have intuition, aphantasia and a bunch of deep, wide questions about what it means to be human, including the Hard Problem of Consciousness. Mostly, Joel is a great conversationalist, someone you'd want to sit next to at a dinner party.SHOW NOTESGet Joel's book The Intuition Toolkit: The New Science of Knowing What without Knowing WhyFollow Joel on his Future Minds Lab Substack You might also like to listen to my WILD chat with Sheena Iyengar, the scientist who first ran those “paradox of choice” studiesAnd with George Paxinos, regarded as the world's leading brain expert on whether our brains are “good” enough to save the planetI mention the book Klara and the Sun by Kazuo IshiguroIf you need to know a bit more about me… head to my "about" pageFor more such conversations subscribe to my Substack newsletter, it's where I interact the most!Get your copy of my book, This One Wild and Precious LifeLet's connect on Instagram and WeAre8 Hosted on Acast. See acast.com/privacy for more information.
In this episode, Caitlin Ornitz, Vice President of Strategy at Champagne Hospitality, draws from her background in management consulting with McKinsey & Company to share how she pushes through tough industry problems to stay ahead of competitors, innovate, and attract guests. Listen to gain insights on how to empower your team, make big bets, and position your hospitality business to be durable and relevant for years to come. You may also enjoy: Our other episode with Caitlin: How Our Hotels Become Community Pillars, Delighting Neighbors and Guests Alike Building Sustainable Luxury with Denise Dupré, Founder of Champagne HospitalityOur other episodes on strategyIf you care about hospitality, check out the Masters of Moments podcast where Jake Wurzak interviews top leaders in hospitality. His conversations with Bashar Wali and Matt Marquis are a great place to start, but also check out his solo episodes such as how he underwrites investment deals and a deep dive into GP fees you know about.Music by Clay Bassford of Bespoke Sound: Music Identity Design for Hospitality Brands
Adam Butler, author of Butler's DMT Field Guide, joins InnerVerse to talk about how the most powerful (and natural) of psychedelics was part of his path of transformation from living with a stressed out deathwish to experiencing the wonder of life with awestruck gratitude. We discuss what DMT does endogenously in the body, what the blast off, hyperspace beings, and cosmic consciousness are like, the eternal question of "who am I?" and much more. Get the Plus+ Extension to continue the trip reports and enlightened speculations.Support InnerVerse Rokfin and Patreon for extended episodes!https://rokfin.com/stream/44279https://www.patreon.com/posts/96982079 EPISODE LINKSVideo Episode - https://youtu.be/cGu6j_rDGaoRead Butler's DMT Field Guide - https://www.amazon.com/dp/B0BY5KSYBKOutro Music by Acid Katz - https://soundcloud.com/acidkatzhttps://www.innerversepodcast.com/season-10/adam-butler-dmt TELEGRAM LINKShttps://t.me/innerversepodcasthttps://t.me/innerversepodcastchat GET TUNEDhttps://www.innerversepodcast.com/sound-healing SUPPORT INNERVERSEInnerVerse Merch - https://www.innerversemerch.comTippecanoe Herbs - Use INNERVERSE code at checkout - https://tippecanoeherbs.com/Check out the Spirit Whirled series, narrated by Chance - https://www.innerversepodcast.com/audiobooksDonate on CashApp at $ChanceGartonBuy from Clive de Carle with this link to support InnerVerse with your purchase - https://clivedecarle.ositracker.com/197164/11489The Aquacure AC50 (Use "innerverse" as a coupon code for a discount) - https://eagle-research.com/product/ac50TT InnerVerse intro theme by Conspiracy Music Guru - https://www.conspiracymusicguru.com Hosted on Acast. See acast.com/privacy for more information.
CEO, Matt Devost, describes many firsts in his career including hacking into systems on an aircraft carrier at sea. He shares how he enjoys solving hard problems and the red teamer perspective, and how he was able to translate those into a career. For those interested in cybersecurity, Matt advises opportunities for self-directed learning including heading down to your basement and building your own lab. Our thanks to Matt for sharing his story with us. Learn more about your ad choices. Visit megaphone.fm/adchoices
Michael Hoffman, or “Hoff” as most know him, is the co-founder and CEO at IQXR, a company solving the hardest problems facing global-scale, enterprise XR deployments, and doing so with an open source approach.Previously, Michael spent nearly a decade working with the Microsoft HoloLens team. He was a Principal Engineering Lead at Microsoft for a couple of years, then left to be the founding partner of Object Theory, where he and his team worked with enterprises to leverage AR and VR technologies, often in combination with IoT and AI/Machine Learning. And then he went back to Microsoft for a couple of years to lead the development of the Mixed Reality Toolkit (MRTK) project.Earlier in his career, Michael worked in software engineering roles at Google, Nike, and several startups.In this conversation, Hoff describes how 3D visualization, with AR and VR technologies, changes our comprehension of digital information, contributes to the value of having your hands free to interact with the world, and enables better efficiency and better insights.Within the enterprise setting, Hoff notes it's relatively easy to get to a pilot and prove value, but it's really difficult to deliver that value at scale.We go on to talk about making AR/VR solutions viable within an enterprise setting at scale, including challenges around visual and audio haptics, working both online and offline, and other key bits of plumbing, as well as the misconceptions that many enterprises have about the technology.We also discuss:- the rationale and corporate strategy for building open source solutions, - the role of AI in accelerating software and content development,- the art of the AI prompt, and- how Apple Vision Pro accelerates the market.Hoff wraps up by discussing neurodivergence and his own growing awareness and acceptance of the challenges and benefits of neurodivergence for both his children and himself.You can find all of the show notes at thearshow.com. Please consider contributing to my Patreon at https://www.patreon.com/theARshow.Links From The Episode- Press Release: [Microsoft Talent Joins The Mesmerise Group to Drive Growth of Immersive Technology Solutions for the Enterprise](https://www.prnewswire.com/news-releases/microsoft-talent-joins-the-mesmerise-group-to-drive-growth-of-immersive-technology-solutions-for-the-enterprise-301856247.html)- Article: [What is ikigai and how can it change my life?](https://www.betterup.com/blog/what-is-ikigai) by Elizabeth Perry for BetterUp- Book: [Ikigai: The Japanese Secret to a Long and Happy Life](https://amzn.to/48pf15s) by Héctor García and Francesc Miralles - Book: [The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers](https://amzn.to/2ZpiQ8m) by Ben Horowitz- Book: [Ready Player One](https://amzn.to/2X9Eu2t) by Ernest Cline