Podcasts about Seminal

  • 306 PODCASTS
  • 446 EPISODES
  • 48m AVG DURATION
  • 1 EPISODE EVERY OTHER WEEK
  • Apr 4, 2025 LATEST

POPULARITY

(Popularity chart, 2017-2024)


Best podcasts about Seminal

Latest podcast episodes about Seminal

Discograffiti
201. THE BEST SHOW'S TOM SCHARPLING ON THE BEACH BOYS' PARTY! (PLUS THE SEMINAL STEPPING STONES EN ROUTE TO PET SOUNDS)

Discograffiti

Play Episode Listen Later Apr 4, 2025 82:24


The Best Show's Tom Scharpling has been a Beach Boys obsessive for over 40 years, and the idea behind bringing him on for Party!, The Beach Boys' oft-dismissed 10th LP that also served as a stopgap measure allowing Brian Wilson to buy time to shape Pet Sounds, was to feature a guest who takes this record seriously. In addition, the various unreleased experiments Brian was engaging in during this era are some of the most fascinating studio visits of his life, and each and every one is covered in forensic detail (especially in The Director's Cut edit). Greatness was not just around the corner… it was already here.

Here are just a few of the many things that Tom discusses with Discograffiti in this podcast:
How Tom initially became deeply obsessed with The Beach Boys in the early 1980s, and the burnout he eventually incurred through oversaturation;
The time in the late 1980s that a young Tom contacted a radio call-in show that featured Brian, only to harangue him about when Smile was coming out;
A fascinating series of unreleased studio experiments that paved the way for both Pet Sounds and Smile;
A discussion about whether The Beach Boys are an inherently funny band, or not at all;
A debate over whether covering their own material on Party! was proto-Weird Al or just a complete misfire;
And a track-by-track rundown of the Party! LP.

Listen: linktr.ee/discograffiti

I support a wife and a six-year-old son with Discograffiti as my sole source of income. If you're a Beach Boys superfan like me, you'll want The Director's Cut of this episode. It's ad-free and features 13 additional minutes of essential material. Purchase it as a one-off, get the entire Season 2 series as a bundle (listed under Collections), or better yet… subscribe to Discograffiti's Patreon and receive a ceaseless barrage (4 shows a week!) of must-hear binge listening.
And now with our 2025 Patreon Membership Drive, you'll also get an episode all about YOU and a FREE copy of Metal Machine Muzak at the Lieutenant Tier or higher: Patreon.com/Discograffiti

CONNECT
Join our Soldiers of Sound Facebook Group: https://www.facebook.com/groups/1839109176272153
Patreon: www.Patreon.com/Discograffiti
Podfollow: https://podfollow.com/1592182331
YouTube Channel: https://www.youtube.com/channel/UClyaQCdvDelj5EiKj6IRLhw
Instagram: https://www.instagram.com/discograffitipod/
Facebook: https://www.facebook.com/Discograffiti/
Twitter: https://twitter.com/Discograffiti
Order the Digital version of the METAL MACHINE MUZAK 2xLP (feat. Lou Barlow, Cory Hanson, Mark Robinson, & W. Cullen Hart): www.patreon.com/discograffiti/shop/197404
Order the $11 Digital version of the MMM 2xLP on Bandcamp: https://discograffiti.bandcamp.com/album/metal-machine-muzak
Order the METAL MACHINE MUZAK Double Vinyl + Digital package: www.patreon.com/discograffiti/shop/169954
Merch Shop: https://discograffitipod.myspreadshop.com/all
Venmo Dave a Tip: @David-Gebroe
Web site: http://discograffiti.com/

CONTACT DAVE
Email: dave@discograffiti.com
Facebook: https://www.facebook.com/hooligandave
Instagram: https://www.instagram.com/davidgebroe/
Twitter: https://twitter.com/DaveGebroe

There is no other Patreon in existence where you get more for your money. 4 shows a week is what it takes these days to successfully blot out our unacceptable reality… so do yourself a favor and give it a shot for at least one month to see what I'm talking about. If you're already a member, please comment below about your experience.
www.Patreon.com/discograffiti

#tomscharpling #thebestshow #thebestshowwithtomscharpling #thebeachboysparty #barbaraann #davidmarks #thebeachboys #brianwilson #beachboys #denniswilson #mikelove #carlwilson #music #vinyl #aljardine #thebeatles #brucejohnston #rock #petsounds #goodvibrations #surf #rocknroll #surfing #california #beach #surfrock #discograffiti #metalmachinemuzak #soldiersofsound #andyourdreamscometrue

Machine Learning Street Talk
Eiso Kant (CTO poolside) - Superhuman Coding Is Coming!

Machine Learning Street Talk

Play Episode Listen Later Apr 2, 2025 96:28


Eiso Kant, CTO of poolside AI, discusses the company's approach to building frontier AI foundation models, particularly focused on software development. Their distinctive strategy is reinforcement learning from code execution feedback, an important axis for scaling AI capabilities beyond just increasing model size or data volume. Kant predicts human-level AI in knowledge work could be achieved within 18-36 months, outlining poolside's vision to dramatically increase software development productivity and accessibility.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

Eiso Kant:
https://x.com/eisokant
https://poolside.ai/

TRANSCRIPT:
https://www.dropbox.com/scl/fi/szepl6taqziyqie9wgmk9/poolside.pdf?rlkey=iqar7dcwshyrpeoz0xa76k422&dl=0

TOC:
1. Foundation Models and AI Strategy
[00:00:00] 1.1 Foundation Models and Timeline Predictions for AI Development
[00:02:55] 1.2 Poolside AI's Corporate History and Strategic Vision
[00:06:48] 1.3 Foundation Models vs Enterprise Customization Trade-offs
2. Reinforcement Learning and Model Economics
[00:15:42] 2.1 Reinforcement Learning and Code Execution Feedback Approaches
[00:22:06] 2.2 Model Economics and Experimental Optimization
3. Enterprise AI Implementation
[00:25:20] 3.1 Poolside's Enterprise Deployment Strategy and Infrastructure
[00:26:00] 3.2 Enterprise-First Business Model and Market Focus
[00:27:05] 3.3 Foundation Models and AGI Development Approach
[00:29:24] 3.4 DeepSeek Case Study and Infrastructure Requirements
4. LLM Architecture and Performance
[00:30:15] 4.1 Distributed Training and Hardware Architecture Optimization
[00:33:01] 4.2 Model Scaling Strategies and Chinchilla Optimality Trade-offs
[00:36:04] 4.3 Emergent Reasoning and Model Architecture Comparisons
[00:43:26] 4.4 Balancing Creativity and Determinism in AI Models
[00:50:01] 4.5 AI-Assisted Software Development Evolution
5. AI Systems Engineering and Scalability
[00:58:31] 5.1 Enterprise AI Productivity and Implementation Challenges
[00:58:40] 5.2 Low-Code Solutions and Enterprise Hiring Trends
[01:01:25] 5.3 Distributed Systems and Engineering Complexity
[01:01:50] 5.4 GenAI Architecture and Scalability Patterns
[01:01:55] 5.5 Scaling Limitations and Architectural Patterns in AI Code Generation
6. AI Safety and Future Capabilities
[01:06:23] 6.1 Semantic Understanding and Language Model Reasoning Approaches
[01:12:42] 6.2 Model Interpretability and Safety Considerations in AI Systems
[01:16:27] 6.3 AI vs Human Capabilities in Software Development
[01:33:45] 6.4 Enterprise Deployment and Security Architecture

CORE REFS (see show notes for URLs/more refs):
[00:15:45] Research demonstrating how training on model-generated content leads to distribution collapse in AI models, Ilia Shumailov et al. (key finding on synthetic-data risk)
[00:20:05] Foundational paper introducing Word2Vec for computing word vector representations, Tomas Mikolov et al. (seminal NLP technique)
[00:22:15] OpenAI O3 model's breakthrough performance on the ARC Prize Challenge, OpenAI (significant AI reasoning benchmark achievement)
[00:22:40] Seminal paper proposing a formal definition of intelligence as skill-acquisition efficiency, François Chollet (influential AI definition/philosophy)
[00:30:30] Technical documentation of DeepSeek's V3 model architecture and capabilities, DeepSeek AI (details on a major new model)
[00:34:30] Foundational paper establishing optimal scaling laws for LLM training, Jordan Hoffmann et al. (key paper on LLM scaling)
[00:45:45] Seminal essay arguing that scaling computation consistently trumps human-engineered solutions in AI, Richard S. Sutton (influential "Bitter Lesson" perspective)
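Poolside hasn't published its training pipeline, so purely as an illustration of the idea named in the description above, here is a minimal sketch of reinforcement learning from code execution feedback: instead of scoring a model's code by token likelihood, you run it against tests and feed the pass/fail result back as a reward. The function name `execution_reward` and the toy `add` task are hypothetical.

```python
import os
import subprocess
import sys
import tempfile

def execution_reward(candidate_code: str, test_code: str, timeout: float = 5.0) -> float:
    """Score a model-generated snippet by actually executing it against tests.

    Returns 1.0 if the tests pass and 0.0 otherwise -- the kind of binary
    signal a policy-gradient or rejection-sampling loop can optimize.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0  # non-terminating candidates earn no reward
    finally:
        os.unlink(path)

# A model-proposed solution and a held-out test case:
candidate = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5"
print(execution_reward(candidate, tests))  # 1.0
```

In a real system the reward would come from sandboxed execution of whole repositories rather than a single file, but the scaling axis is the same: more executions yield more training signal without more human labels.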

Training Data
The AI Product Going Viral With Doctors: OpenEvidence, with CEO Daniel Nadler

Training Data

Play Episode Listen Later Mar 4, 2025 64:52


OpenEvidence is transforming how doctors access medical knowledge at the point of care, from the biggest medical establishments to small practices serving rural communities. Founder Daniel Nadler explains his team's insight that training smaller, specialized AI models on peer-reviewed literature outperforms large general models for medical applications. He discusses how making the platform freely available to all physicians led to widespread organic adoption and strategic partnerships with publishers like the New England Journal of Medicine. In an industry where organizations move glacially, 10-20% of all U.S. doctors began using OpenEvidence almost overnight to find information buried deep in the long tail of new medical studies, to validate edge cases, and to improve diagnoses. Nadler emphasizes the importance of accuracy and transparency in AI healthcare applications.

Hosted by: Pat Grady, Sequoia Capital

Mentioned in this episode:
Do We Still Need Clinical Language Models?: paper from the OpenEvidence founders showing that small, specialized models outperformed large models for healthcare diagnostics
Chinchilla paper: seminal 2022 paper about scaling laws in large language models
Understand: Ted Chiang sci-fi novella published in 1991
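The Chinchilla scaling-laws paper mentioned in these show notes can be stated compactly: with training compute C ≈ 6·N·D FLOPs (N parameters, D tokens), the compute-optimal model uses roughly 20 tokens per parameter. A back-of-the-envelope helper using those approximations (the constants are the commonly quoted rules of thumb, not exact fitted values from the paper):

```python
def chinchilla_optimal(compute_flops: float) -> tuple[float, float]:
    """Split a FLOP budget between parameter count N and training tokens D.

    Uses the approximations C ~= 6 * N * D and the Chinchilla rule of
    thumb that compute-optimal training uses D ~= 20 * N tokens.
    """
    # C = 6 * N * (20 * N)  =>  N = sqrt(C / 120)
    n_params = (compute_flops / 120.0) ** 0.5
    n_tokens = 20.0 * n_params
    return n_params, n_tokens

# Chinchilla itself: ~70B parameters trained on ~1.4T tokens
n, d = chinchilla_optimal(5.88e23)  # 6 * 70e9 * 1.4e12 FLOPs
print(f"{n:.1e} params, {d:.1e} tokens")  # ~7.0e+10 params, ~1.4e+12 tokens
```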

The Main Thing Podcast
Ep. 123 - A Journalist's Wisdom Journey with Hoppy Kercheval

The Main Thing Podcast

Play Episode Listen Later Feb 14, 2025 33:54


Hoppy Kercheval, the esteemed “dean of broadcasting” in West Virginia, brings his wealth of experience in journalism to this dynamic episode, offering listeners valuable insights into the art of showing up every day and the transformative power of wisdom. A Broadcaster's Path to Lifelong Learning and Legacy Hoppy, renowned for his work in covering public affairs, politics, and sports, shares personal stories that illuminate the importance of being present and recognizing the pivotal moments that define our paths.   We explore Hoppy's unexpected start in journalism during high school, which set him on a remarkable career trajectory, and discuss the subtle signals that might indicate when it's time to step back or retire. Throughout the episode, we underscore the value of lifelong learning and stepping out of comfort zones to embrace diverse perspectives.   Today you will discover Hoppy's main thing, the most important wisdom lesson he wants to share from his lifetime and his career. In this rich, authentic wisdom conversation you'll also learn about:  Power of seminal moments to shape life's direction and course; What can happen when we're fully present and listening; How beautiful results can flow from simply showing up every day. More About Our Special Guest Hoppy Kercheval Hoppy Kercheval joined West Virginia Radio Corporation in 1976. A founder of MetroNews, Kercheval served as news director until assuming the role of vice president of operations in 1991. In 1993, he created “Metro News Talk Line,” which became the signature program of the network. Hoppy has received a number of honors over the years, including the Mel Burka Award, which is given to the state's top broadcaster. An avid traveler, Hoppy's adventures have taken him to 19 different countries around the world. He and his wife, Karin, live in Morgantown, West Virginia.   
Resources
Link to Hoppy's company website WV MetroNews
Link to podcast site for 3 Guys Before the Game
Hoppy's treasured interview with Anthony Bourdain
RAGBRAI website - the bike ride across Iowa

Credits
Editor + Technical Advisor: Bob Hotchkiss
Brand + Strategy Advisor: Andy Malinoski
PR + Partnerships Advisor: Rachel Bell
Marketing, Social Media and Graphic Design: Chloe Lineberg

Stay Connected with Us on Social
YouTube: @themainthingpod
Twitter: @themainthingpod
Instagram: @themainthingpod
Facebook: @TheMainThingPod
LinkedIn

Help Support and Sustain This Podcast
Become a subscriber.
Share the podcast with one or two friends.
Follow us on social media @TheMainThingPod.
Buy some Main Thing merch from our Merchandise Store.
Buy a book from our curated wisdom collection on bookshop.org.
Become a patron and support us on Patreon.

Episode Chapters
[0:04:53] Radio careers and riding a bicycle across Iowa
[0:07:33] How Hoppy and Skip are connected; Eastern Panhandle roots
[0:10:03] An opportunity, a nudge and a seminal moment
[0:12:46] A deciding factor for Hoppy; knowing when it's time to move on
[0:14:34] Hoppy shares his Main Thing
[0:18:49] Reflecting on interviews: the good, the bad and the key to it all
[0:24:33] Hoppy's next chapter; future endeavors; and his legacy
[0:28:15] A parting thought from Hoppy on the value of being fully present

Episode Keywords
Wisdom, Fairness, Transitions, Hoppy, Kercheval, Journalism, WVU, Mountaineers, Broadcasting, Showing Up, Radio, Podcasting, Talk, TalkLine, Seminal, Retirement, Learning, Growth, Understanding, Curiosity, LGBTQ, Interviews, Anthony, Bourdain, McGraw, Legacy, Balance, Consistency, Dedication, Network, WAJR, WRNR, WXVA, Jefferson, WV, West Virginia, Credibility, Walter, Cronkite, RAGBRAI, cycling, Iowa, MetroNews

Airtalk
Checking in on the SoCal storm, Valentine's Day fails, TV Talk and more

Airtalk

Play Episode Listen Later Feb 13, 2025 100:33


Today on AirTalk, as the heaviest storm of the season rains down on Los Angeles, we're looking at the potential threat of mudslides in the wildfire burn areas, and we'll tell you how to stay prepared. An SF Bay Area city becomes the first in the state to ban ‘aiding’ or ‘abetting’ homeless encampments; we're talking to officials to get the details. A seminal work of jazz and protest music is reimagined for one night only at the Eli and Edythe Broad Stage. As wildfire cleanup gets underway, renters want to know what, if anything, they will be held responsible for. With Valentine's Day on its way, we want to hear some V-day horror stories from listeners. Stay until the end of the show for TV Talk, where we'll look at 'The White Lotus,' 'Mo,' 'Harlem Ice,' and more!

Today on AirTalk:
Weather conditions threaten mudslides in LA (0:15)
CA city bans ‘aiding’ or ‘abetting’ homeless encampments (19:47)
Seminal jazz album gets reimagined for one night (39:52)
Wildfire cleanup for renters (51:87)
Valentine's Day horror stories (1:11:19)
TV Talk: 'Mo,' and more (1:25:48)

Everything Epigenetics
Breaking Down Epigenetics: Sperm, Seminal Plasma, and Generational Impact

Everything Epigenetics

Play Episode Listen Later Jan 29, 2025 48:46


Epigenetics offers fascinating insights into how our genes are influenced by lifestyle and environmental factors. In this week's episode of the Everything Epigenetics podcast, Dr. Raffaele Teperino and I delve into groundbreaking research on epigenetic inheritance and how reproductive fitness impacts long-term health. From the transfer of epigenetic material during conception to the role of paternal health in childhood obesity and diabetes risk, we discuss how these factors shape generational health outcomes.

You'll learn about:
• The role of epigenetics in reproductive fitness and how it goes beyond reproductive capacity.
• How sperm and eggs transfer more than just DNA, influencing offspring development through epigenetic material.
• The surprising impact of paternal health at conception on childhood obesity and metabolic disorders.
• The importance of lifestyle changes before conception to improve offspring health.
• Practical insights into integrating epigenetics into preventive medicine and public health.

Chapters:
00:00 Welcome & Introduction
01:09 Dr. Teperino's journey into epigenetics and its link to complex diseases
03:56 Understanding reproductive fitness and its connection to epigenetics
08:33 The transfer of epigenetic material from sperm to oocyte
17:01 The role of paternal health in shaping offspring health risks
24:07 Longitudinal studies on early-life risks of obesity and diabetes
37:44 The overlooked role of seminal plasma in reproductive health
42:00 The future of epigenetics: neurodevelopmental disorders and preventive health
45:32 Closing thoughts and where to connect with Dr. Teperino

Support the show

Where to Find Us: Instagram, Twitter, Facebook
Follow us on: Apple Podcasts, Spotify, YouTube
Visit our website for more information and resources: everythingepigenetics.com

Thank you for joining us at the Everything Epigenetics Podcast, and remember: you have control over your epigenetics, so tune in next time to learn more about how to harness this knowledge for your benefit.

Machine Learning Street Talk
Nicholas Carlini (Google DeepMind)

Machine Learning Street Talk

Play Episode Listen Later Jan 25, 2025 81:15


Nicholas Carlini from Google DeepMind offers his view of AI security, emergent LLM capabilities, and his groundbreaking model-stealing research. He reveals how LLMs can unexpectedly excel at tasks like chess and discusses the security pitfalls of LLM-generated code.

SPONSOR MESSAGES:
***
CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focused on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? Go to https://tufalabs.ai/
***

Transcript: https://www.dropbox.com/scl/fi/lat7sfyd4k3g5k9crjpbf/CARLINI.pdf?rlkey=b7kcqbvau17uw6rksbr8ccd8v&dl=0

TOC:
1. ML Security Fundamentals
[00:00:00] 1.1 ML Model Reasoning and Security Fundamentals
[00:03:04] 1.2 ML Security Vulnerabilities and System Design
[00:08:22] 1.3 LLM Chess Capabilities and Emergent Behavior
[00:13:20] 1.4 Model Training, RLHF, and Calibration Effects
2. Model Evaluation and Research Methods
[00:19:40] 2.1 Model Reasoning and Evaluation Metrics
[00:24:37] 2.2 Security Research Philosophy and Methodology
[00:27:50] 2.3 Security Disclosure Norms and Community Differences
3. LLM Applications and Best Practices
[00:44:29] 3.1 Practical LLM Applications and Productivity Gains
[00:49:51] 3.2 Effective LLM Usage and Prompting Strategies
[00:53:03] 3.3 Security Vulnerabilities in LLM-Generated Code
4. Advanced LLM Research and Architecture
[00:59:13] 4.1 LLM Code Generation Performance and O(1) Labs Experience
[01:03:31] 4.2 Adaptation Patterns and Benchmarking Challenges
[01:10:10] 4.3 Model Stealing Research and Production LLM Architecture Extraction

REFS:
[00:01:15] Nicholas Carlini's personal website & research profile (Google DeepMind, ML security) - https://nicholas.carlini.com/
[00:01:50] CentML AI compute platform for language model workloads - https://centml.ai/
[00:04:30] Seminal paper on neural network robustness against adversarial examples (Carlini & Wagner, 2016) - https://arxiv.org/abs/1608.04644
[00:05:20] Computer Fraud and Abuse Act (CFAA), the primary U.S. federal law on computer hacking liability - https://www.justice.gov/jm/jm-9-48000-computer-fraud
[00:08:30] Blog post: Emergent chess capabilities in GPT-3.5-turbo-instruct (Nicholas Carlini, Sept 2023) - https://nicholas.carlini.com/writing/2023/chess-llm.html
[00:16:10] Paper: “Self-Play Preference Optimization for Language Model Alignment” (Yue Wu et al., 2024) - https://arxiv.org/abs/2405.00675
[00:18:00] GPT-4 Technical Report: development, capabilities, and calibration analysis - https://arxiv.org/abs/2303.08774
[00:22:40] Historical shift from descriptive to algebraic chess notation (FIDE) - https://en.wikipedia.org/wiki/Descriptive_notation
[00:23:55] Analysis of distribution shift in ML (Hendrycks et al.) - https://arxiv.org/abs/2006.16241
[00:27:40] Nicholas Carlini's essay “Why I Attack” (June 2024) on motivations for security research - https://nicholas.carlini.com/writing/2024/why-i-attack.html
[00:34:05] Google Project Zero's 90-day vulnerability disclosure policy - https://googleprojectzero.blogspot.com/p/vulnerability-disclosure-policy.html
[00:51:15] Evolution of Google search syntax & user behavior (Daniel M. Russell) - https://www.amazon.com/Joy-Search-Google-Master-Information/dp/0262042878
[01:04:05] Rust's ownership & borrowing system for memory safety - https://doc.rust-lang.org/book/ch04-00-understanding-ownership.html
[01:10:05] Paper: “Stealing Part of a Production Language Model” (Carlini et al., March 2024), extraction attacks on ChatGPT and PaLM-2 - https://arxiv.org/abs/2403.06634
[01:10:55] First model stealing paper (Tramèr et al., 2016), attacking ML APIs via prediction - https://arxiv.org/abs/1609.02943
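The model-stealing paper referenced at [01:10:05] rests on a simple linear-algebra fact: a model's output head maps a hidden state of dimension h to vocabulary-sized logits, so every logit vector lies in an h-dimensional subspace, and the rank of a stack of logit vectors reveals h. A toy numpy simulation of that observation, using a stand-in random model rather than any real API:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim, vocab_size = 64, 1000

# Stand-in for a production model's output head: logits = W @ hidden_state
W = rng.normal(size=(vocab_size, hidden_dim))

def query_logits() -> np.ndarray:
    """Simulated API call returning the full logit vector for one prompt."""
    h = rng.normal(size=hidden_dim)  # hidden state varies per prompt...
    return W @ h                     # ...but always lies in W's column space

# Stack logit vectors from many queries; the matrix rank is capped by hidden_dim
Q = np.stack([query_logits() for _ in range(200)])
s = np.linalg.svd(Q, compute_uv=False)
recovered_dim = int((s > 1e-6 * s[0]).sum())
print(recovered_dim)  # 64 -- the hidden dimension leaks through the logits
```

The real attack has to work through top-k logprobs and logit-bias tricks rather than full logit vectors, but the rank argument is the same.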

The Best of the Bible Answer Man Broadcast
The Seminal Issue of Apologetics, and Q&A

The Best of the Bible Answer Man Broadcast

Play Episode Listen Later Jan 17, 2025 28:01


On today's Bible Answer Man broadcast (01/17/25), the late Elliot Miller, former Editor-in-Chief of the Christian Research Journal, joins Hank in answering questions. Hank talks about the three key issues of Christian apologetics: origins, the incarnation and resurrection of Christ, and the reliability of the Bible.

Hank and Elliot also answer the following questions:
In 1 Corinthians 11:10, what does “because of the angels” mean? Stanley - Honey Grove, TX (5:24)
The man who teaches adult Sunday school at our church is a convicted child molester. What should we do? Charlotte - Townsend, VA (9:11)
Can you explain what happened with Jephthah and his daughter in Judges 11? Ronald - Springfield, MO (15:16)
I have a problem with the Law of Moses concerning rapists paying a fine or marrying the woman they raped. Can you help me? Leia - Blue Springs, MO (18:31)

The Gramophone podcast
Pianists Yevgeny Sudbin and Jean-Efflam Bavouzet in conversation with James Jolly

The Gramophone podcast

Play Episode Listen Later Jan 3, 2025 38:21


Last August, Gramophone's James Jolly travelled to Montana in the USA to sample the musical, artistic and architectural wonders of Tippet Rise, an arts centre created by Peter and Cathy Halstead on a 12,000-acre working ranch. As well as possessing a wonderful concert hall, Tippet Rise also plays host to numerous large sculptures, some of which can also be used as performance spaces. And for a number of weekends each year, musicians from all over the world come to perform at Tippet Rise. In 2024, pianists Yevgeny Sudbin and Jean-Efflam Bavouzet were among the performers, and James took the opportunity, made considerably easier by Tippet Rise's state-of-the-art recording facilities, to sit down with them to talk about pianos, recording, repertoire and the place in which they all found themselves… The photograph was taken in The Olivier Music Barn, Tippet Rise's concert hall, in front of Mark di Suvero's painting Seminal (1978-82, acrylic on linen).

Steinmetz and Guru
Dennis Schröder Trade is "Seminal Moment" for Warriors...

Steinmetz and Guru

Play Episode Listen Later Dec 18, 2024 11:02


Steiny & Guru welcome Dennis Schröder to the Bay Area and discuss why his presence alongside Stephen Curry is something we've never seen before in Golden State.

Radio HombreAlfa.top
273: Benefits of NoFap and Semen Retention (My Experience After 365+ Days of NoFap)

Radio HombreAlfa.top

Play Episode Listen Later Dec 7, 2024 45:13


[Infinite Conversations: https://www.hombrealfa.top/minicurso-conversaciones-infinitas/ ] [Social Circles: https://www.hombrealfa.top/minicurso-circulos-sociales/ ] [Join the Email Community: https://www.hombrealfa.top/comunidad/ ]

What you'll learn in this episode: 1) A Red Pill view of NoFap. 2) A Red Pill view of semen retention. 3) Testosterone, hormones, and pheromones with the practice of NoFap and seminal retention (studies discussed). 4) My personal experience practicing NoFap at different stages of my life. 5) What biology and evolution say about masturbation and ejaculation in men.

In today's episode we analyze in depth, from a Red Pill and evolutionary-biology point of view, what benefits NoFap, semen retention, and the other practices so common today hold for a man. We also discuss my experience and how the feminine conditioning of a society that seeks to wipe masculinity off the map affects these concepts. Subscribe and like if you find it valuable!

Principal Center Radio Podcast – The Principal Center
Carl Hendrick—How Learning Happens: Seminal Works in Educational Psychology and What They Mean in Practice

Principal Center Radio Podcast – The Principal Center

Play Episode Listen Later Dec 2, 2024 37:24


Get the book, How Learning Happens: Seminal Works in Educational Psychology and What They Mean in Practice Follow Carl on X @C_Hendrick About The Author Carl Hendrick is Professor at Academica University of Applied Sciences in Amsterdam, the Netherlands, where he translates research findings into practical teaching strategies. He taught English at the secondary level for 18 years, and holds a PhD in education from King's College London. He is the author of three books.

Charlas ninja
"Seminal retention" makes you more attractive (according to 50+ studies)

Charlas ninja

Play Episode Listen Later Nov 14, 2024 32:15


#651. It's said that one of the powers of not touching yourself is a magnetic aura of attraction that intoxicates everyone around us. So today I decided to dig into the scientific evidence on seminal retention, and it seems we may really be onto something revealing, although in a less direct way than many people think. • Notes for this episode: https://podcast.pau.ninja/651 • Community + exclusive episodes: https://sociedad.ninja/ (00:00) Introduction (4:43) Is it true that seminal retention makes you more attractive? (10:52) Energy, vitality and creativity (14:00) Skin (15:58) Face (19:23) Vision (23:16) Virility, muscle and endurance (27:23) Masculinity

Charlas ninja
Seminal retention: the power of NOT ejaculating sacred liquor (30+ studies)

Charlas ninja

Play Episode Listen Later Nov 5, 2024 28:27


#647. Is it true that deliberately avoiding ejaculation brings physiological benefits in both the short and the long term? The pseudoscience of NoFap is all the rage, but there may be more mental and physical advantages than we think, with studies that would confirm powers obtained indirectly. • Notes for this episode: https://podcast.pau.ninja/647 • Community + exclusive episodes: https://sociedad.ninja/ (00:00) Introduction (7:20) Testosterone (11:30) Energy and vitality (13:10) Intelligence (25:38) Attractiveness

Rolling Stone's 500 Greatest Songs
Nicki Minaj's Seminal Pivot to Pop with “Super Bass”

Rolling Stone's 500 Greatest Songs

Play Episode Listen Later Aug 21, 2024 31:16 Transcription Available


It took two years for Nicki Minaj to take over the world. Following her 2009 mixtape Beam Me Up Scotty, she caught the attention of Lil Wayne, who signed her. Minaj quickly became ubiquitous, taking over the charts and winning over rap heavyweights and pop divas with her next-level guest verses. It was Nicki's debut album Pink Friday and single “Super Bass” that made her a force to be reckoned with across the board. Her pop pivot was a huge risk, especially as she pulled double duty singing and rapping on the hit. But it paid off: the song was her first Top 10 single and became the highest-charting song by a female rapper since Missy Elliott's “Work It.” Joining us to discuss the song's impact and the ups and downs of Minaj's legacy is Rolling Stone staff writer Mankaprr Conteh. See omnystudio.com/listener for privacy information.

Welcome to Wellness
#61 The Anti-Aging Vitamin, Plus Reverse Lupus & Rheumatoid Arthritis

Welcome to Wellness

Play Episode Listen Later Aug 9, 2024 70:17


Struggling with Lupus, Rheumatoid Arthritis, and/or Hypothyroidism? Or are you interested in what scientists are calling the number one anti-aging vitamin to improve memory, hair growth, and suppleness of your skin? Then this episode is for you. Welcome to Wellness releases new episodes every Friday. Not listening on Spotify? Find additional show notes on my website: https://www.ashleydeeley.com/w2w/spermadine

Episode brought to you by the world's best organic, regenerative sheets: AIZOME. Discount code: DEELEY15
Episode brought to you by the world's best water purifier: MyPureWater. Code: Ashley5

5:16: Doctor told her she had arthritis and Lupus in her 30s
9:37: Immunoglobulin (IVIG) to reboot her system
16:54: Food-derived molecule rejuvenating the immune system in elderly mice
20:35: As we get older, we stop making spermidine
22:04: Want to avoid secreting the odor an elderly person emits? Eat mushrooms!
25:56: Spermidine can normalize hormones
26:53: What is autophagy?
29:48: Spermidine can ease perimenopause symptoms and increase the health of your hair, skin, and nails
30:19: Spermidine also increases keratin!
34:03: Spermidine is best for women approaching or in perimenopause as well as menopause, and great for men approaching 40 or over (great for men trying to re-grow hair as well)
37:38: Best time to take Primeadine (with your last meal or at night!)
39:18: Spermidine can help you sleep better!
45:01: Natto naturally contains spermidine
48:36: Why Primeadine is the best version of spermidine on the market (it's food-derived, not synthetic!)
54:10: Stay away from supplements labeled spermidine HCL (hydrochloride) or spermidine TCL (tetrahydrochloride); these are the synthetic versions and do not give you the benefits you would get from a food-derived supplement like Primeadine
56:42: Seminal retention
1:01:01: How men can produce spermidine
1:02:21: Lovemaking for Longevity
1:03:36: Katsu (Blood Flow Restriction Training)

Where to find Primeadine:
Website: https://oxford-healthspan.myshopify.com/ashleydeeley
Instagram: https://www.instagram.com/primeadine/

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

Because of the nature of SAM, this is more video heavy than usual. See our YouTube!Because vision is first among equals in multimodality, and yet SOTA vision language models are closed, we've always had an interest in learning what's next in vision. Our first viral episode was Segment Anything 1, and we have since covered LLaVA, IDEFICS, Adept, and Reka. But just like with Llama 3, FAIR holds a special place in our hearts as the New Kings of Open Source AI.The list of sequels better than the originals is usually very short, but SAM 2 delighted us by not only being a better image segmentation model than SAM 1, it also conclusively and inexpensively solved video segmentation in just an elegant a way as SAM 1 did for images, and releasing everything to the community as Apache 2/CC by 4.0.“In video segmentation, we observe better accuracy, using 3x fewer interactions than prior approaches. In image segmentation, our model is more accurate and 6x faster than the Segment Anything Model (SAM).”Surprisingly EfficientThe paper reports that SAM 2 was trained on 256 A100 GPUs for 108 hours (59% more than SAM 1). Taking the upper end $2 A100 cost off gpulist.ai means SAM2 cost ~$50k to train if it had an external market-rate cost - surprisingly cheap for adding video understanding!The newly released SA-V dataset is also the largest video segment dataset to date, with careful attention given to scene/object/geographical diversity, including that of annotators. In some ways, we are surprised that SOTA video segmentation can be done on only ~50,000 videos (and 640k masklet annotations). Model-in-the-loop Data Engine for Annotations and Demo-first DevelopmentSimilar to SAM 1, a 3 Phase Data Engine helped greatly in bootstrapping this dataset. 
As Nikhila says in the episode, the demo you see wasn't just for show; they actually used this same tool to do annotations for the model that is now demoed in the tool:

"With the original SAM, we put a lot of effort in building a high-quality demo. And the other piece here is that the demo is actually the annotation tool. So we actually use the demo as a way to improve our annotation tool. And so then it becomes very natural to invest in building a good demo because it speeds up your annotation and improves the data quality, and that will improve the model quality. With this approach, we found it to be really successful."

An incredible 90% speedup in annotation happened due to this virtuous cycle, which helped SA-V reach this incredible scale.

Building the demo also helped the team live the context that their own downstream users, like Roboflow, would experience, and forced them to make choices accordingly. As Nikhila says:

"It's a really encouraging trend for not thinking about only the new model capability, but what sort of applications folks want to build with models as a result of that downstream. I think it also really forces you to think about many things that you might postpone. For example, efficiency. For a good demo experience, making it real time is super important. No one wants to wait. And so it really forces you to think about these things much sooner, and actually makes us think about what kind of image encoder we want to use or other hardware efficiency improvements. So those kinds of things, I think, become a first-class citizen when you put the demo first."

Indeed, the team swapped out standard ViT-H Vision Transformers for Hiera (Hierarchical) Vision Transformers as a result of efficiency considerations.

Memory Attention

Speaking of architecture, the model design is probably the sleeper hit of a project filled with hits.
The team adapted SAM 1 to video by adding streaming memory for real-time video processing: specifically, a memory attention module, a memory encoder, and a memory bank, which surprisingly ablated better than more intuitive but complex architectures like Gated Recurrent Units. One has to wonder if streaming memory can be added to pure language models with a similar approach… (pls comment if there's an obvious one we haven't come across yet!)

Video Podcast

Tune in to Latent Space TV for the video demos mentioned in this podcast!

Timestamps

* [00:00:00] The Rise of SAM by Udio (David Ding Edit)
* [00:03:07] Introducing Nikhila
* [00:06:38] The Impact of SAM 1 in 2023
* [00:12:15] Do People Finetune SAM?
* [00:16:05] Video Demo of SAM
* [00:20:01] Why the Demo is so Important
* [00:23:23] SAM 1 vs SAM 2 Architecture
* [00:26:46] Video Demo of SAM on Roboflow
* [00:32:44] Extending SAM 2 with other models
* [00:35:00] Limitations of SAM: Screenshots
* [00:38:56] SAM 2 Paper
* [00:39:15] SA-V Dataset and SAM Data Engine
* [00:43:15] Memory Attention to solve Video
* [00:47:24] "Context Length" in Memory Attention
* [00:48:17] Object Tracking
* [00:50:52] The Future of FAIR
* [00:52:23] CVPR, Trends in Vision
* [01:02:04] Calls to Action

Transcript

[00:00:00] [music intro]

[00:02:11] AI Charlie: Happy Yoga! This is your AI co-host Charlie. Thank you for all the love for our special 1 million downloads Wins of AI Winter episode last week, especially Sam, Archie, Trellis, Morgan, Shrey, Han, and more. For this episode, we have to go all the way back to the first viral episode of the podcast, Segment Anything Model and the Hard Problems of Computer Vision, which we discussed with Joseph Nelson of Roboflow.

[00:02:39] AI Charlie: Since Meta released SAM 2 last week, we are delighted to welcome Joseph back as our fourth guest co-host to chat with Nikhila Ravi, Research Engineering Manager at Facebook AI Research and lead author of SAM 2.
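For the curious, the streaming-memory idea described in the architecture notes above can be illustrated with a toy sketch: keep a FIFO bank of features from recent frames, and let each new frame's features cross-attend to that bank. This is our own simplified NumPy illustration, not SAM 2's actual implementation (which uses learned transformer memory-attention layers and a memory encoder); every name below is made up:

```python
import numpy as np
from collections import deque

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class StreamingMemory:
    """Toy memory bank: the current frame's features cross-attend to
    features retained from the last `capacity` frames."""
    def __init__(self, capacity=6, dim=16):
        self.bank = deque(maxlen=capacity)  # FIFO memory bank
        self.dim = dim

    def step(self, frame_feats):
        # frame_feats: (tokens, dim) features for the current frame
        if self.bank:
            mem = np.concatenate(list(self.bank), axis=0)      # (M, dim)
            attn = softmax(frame_feats @ mem.T / np.sqrt(self.dim))
            frame_feats = frame_feats + attn @ mem             # "memory attention"
        self.bank.append(frame_feats)  # "memory encoder" step: store this frame
        return frame_feats

mem = StreamingMemory()
rng = np.random.default_rng(0)
out = [mem.step(rng.normal(size=(4, 16))) for _ in range(10)]
print(out[-1].shape)  # each frame's conditioned features stay (4, 16)
```

The deque's `maxlen` is what caps the memory bank, keeping per-frame compute roughly constant for arbitrarily long videos — conceptually the property that lets the model stream in real time.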
Just like our SAM 1 podcast, this is a multimodal pod because of the vision element, so we definitely encourage you to hop over to our YouTube at least for the demos, if not our faces.[00:03:04] AI Charlie: Watch out and take care.[00:03:10] Introducing Nikhila[00:03:10] swyx: Welcome to the latest podcast. I'm delighted to do Segment Anything 2. One of our very first viral podcasts was Segment Anything 1 with Joseph. Welcome back. Thanks so much. And this time we are joined by the lead author of Segment Anything 2, Nikhila Ravi, welcome.[00:03:25] Nikhila Ravi: Thank you. Thanks for having me.[00:03:26] swyx: There's a whole story that we can refer people back to the episode of the podcast way back when for the story of Segment Anything, but I think we're interested in just introducing you as a researcher, as a, on the human side. What was your path into AI research? Why, you know, why did you choose computer vision coming out of your specialization at Cambridge?[00:03:46] Nikhila Ravi: So I did my undergraduate degree in engineering at Cambridge University. The engineering program is very general. So first couple of years, you sort of study everything from mechanical engineering to fluid mechanics, structural mechanics, material science, and also computer science.[00:04:04] Nikhila Ravi: Towards the end of my degree, I started taking more classes in machine learning and computational neuroscience, and I really enjoyed it. And actually after graduating from undergrad, I had a place at Oxford to study medicine. And so I was initially planning on becoming a doctor, had everything planned, and then decided to take a gap year after finishing undergrad.[00:04:28] Nikhila Ravi: And actually that was around the time that sort of deep learning was emerging. And in my machine learning class in undergrad, I remember one day our professor came in, and that was when Google acquired DeepMind. And so that became like a huge thing. We talked about it for the whole class.
It kind of really stuck.[00:04:48] Nikhila Ravi: And I was kicked off thinking about, okay, maybe I want to try something different other than medicine. Maybe this is a different path I want to take. And then in the gap year, I did a bunch of coding, worked on a number of projects. Did some sort of freelance contracting work. And then I got a scholarship to come and study in America.[00:05:06] Nikhila Ravi: So I went to Harvard for a year, took a bunch of computer science classes at Harvard and MIT, worked on a number of AI projects, especially in computer vision. I really, really enjoyed working in computer vision. I applied to Facebook and got this job at Facebook, and I've now at Facebook at the time, now Meta, and I've been here for seven years, so very circuitous path, probably not a very unconventional, I didn't do a PhD, I'm not like a research, typical research scientist, definitely came from more of an engineering background, but since being at Meta, Have had amazing opportunities to work across so many different interesting problems in computer vision from 3D computer vision.[00:05:50] Nikhila Ravi: How can you go from images of objects to 3D structures and then going back to 2D computer vision and actually understanding the objects and the pixels and the images themselves. So it's been a very interesting journey over the past seven years.[00:06:05] swyx: It's weird because like, I guess with segment anything too, it's like 4D because you solve time, you know, you started with 3D and now you're solving the 4D.[00:06:14] Nikhila Ravi: Yeah, it's just going from 3D to images to video. It's really covering the full spectrum. And actually, one of the nice things has been, so I think I mentioned I, Wanted to become a doctor, but actually Sam is having so much impact in medicine, probably more than I could have ever had as a doctor myself. 
So I think, you know, hopefully SAM 2 can also have a similar sort of impact in medicine and other fields.[00:06:39] The Impact of SAM 1 in 2023[00:06:39] swyx: Yeah. I want to give Joseph a chance to comment. Does that also mirror your, we know your story about going into, into vision, but like in the past year, since we did our podcast on SAM, what's been the impact that you've seen?[00:06:51] Joseph Nelson: Segment Anything set a new standard in computer vision. You know, recapping from the first release to present, SAM introduces the ability for models to, near zero-shot, meaning without any training, identify kind of perfect polygons and outlines of items and objects inside images, and that capability previously required lots of manual labeling, lots of manual preparation, clicking very meticulously to create outlines of individuals and people.[00:07:25] Joseph Nelson: And there were some models that attempted to do zero shot segmentation of items inside images, though none were as high quality as Segment Anything. And with the introduction of Segment Anything, you can pass an image (with SAM 1; with SAM 2, videos as well) and get perfect, pixel-perfect outlines of most everything inside the images.[00:07:52] Joseph Nelson: Now there are some edge cases across domains, and, similar to the human eye, sometimes you need to say, like, which item maybe you most care about for the downstream task and problem you're working on. Though, SAM has accelerated the rate at which developers are able to use computer vision in production applications.[00:08:13] Joseph Nelson: So, at Roboflow, we were very quick to enable the community of computer vision developers and engineers to use SAM and apply it to their problems. The principal ways of using SAM: you could kind of use SAM as is, to like pass an image and receive back masks.
Another use case for SAM is in preparation of data for other types of problems.[00:08:37] Joseph Nelson: So, for example, in the medical domain, let's say that you're working on a problem where you have a bunch of images from a wet lab experiment. And from each of those images, you need to count the presence of a particular protein that reacts to some experiment. To count all the individual protein reactions, lab assistants to this day will still, like, kind of individually count and say what the presence of all those proteins is.[00:09:07] Joseph Nelson: With Segment Anything, it's able to identify all of those individual items correctly. But often you may need to also add like a class name to what the protein is. Or you may need to say, hey, like, I care about the protein portion of this. I don't care about the rest of the portion of this in the image.[00:09:26] Joseph Nelson: And, or what it encourages and asks for the user to do is to provide some visual prompting to say, hey, which part, like, SAM says, hey, I can find segments of anything, but which segments do you care about? And so you can do visual prompting, which is kind of a new primitive that SAM introduced. And so at Roboflow, we have one portion of our tool stack that enables users to very quickly label data.[00:09:48] Joseph Nelson: With Segment Anything, SAM can already provide, hey, here's where I see the outlines of objects. Or a user can click to prompt to say, hey, here's where the outlines of objects matter. And I recently pulled statistics from the usage of SAM in Roboflow over the course of the last year. And users have labeled about 49 million images using Segment Anything on the hosted side of the Roboflow platform.[00:10:12] Joseph Nelson: And that's like 5 million in the last 30 days alone. And of those images, we did kind of like a rough back-of-the-napkin calculation of how much time that has saved.
Because, again, the alternative is you're clicking individual points to create a polygon, and with SAM you just click once and it guesses where the polygon is.[00:10:32] Joseph Nelson: And I'm sure in a bit we can maybe screen share and show some examples of what this experience is like. And in that time estimation, it's like, on average it saves, you know, maybe a dozen or so seconds. And we estimate that this has probably saved on the order of magnitude of 35 years of time for users.[00:10:53] Nikhila Ravi: That's incredible.[00:10:54] Joseph Nelson: So, I mean, basically like in the first year of a model being available, not only can you say, hey, I'm just going to go use this model, those numbers, that like 49 million images, is an estimate directly related to just the hosted side. So imagine all of the users that are self hosting or using SAM for robotics applications or out in the field or offline, where it's not even like the time or the image counts are tabulated.[00:11:20] Joseph Nelson: And we're probably talking about, you know, just a fraction of the amount of value that's actually being produced for a number of downstream tasks. So to say that the impact has been, you know, people use terms like game changing and these sorts of things. It has changed the industry. It's set a new standard.[00:11:36] Joseph Nelson: And with the release of SAM 2, I think we're about to see an acceleration of those capabilities for a lot of reasons.[00:11:42] Nikhila Ravi: That's really great to hear. I think one of the really surprising things with SAM 1 was how many fields actually rely on manual segmentation. I think we're not really exposed to that. Maybe you are at Roboflow, because you get to see all the users of these tools.[00:11:57] Nikhila Ravi: But for me, it was, you know, people working on understanding coral reef bleaching or farmers counting their cows, and so many different applications that, as a researcher, you never get exposed to, but you can have impact towards.
So I think that was really awesome to hear.[00:12:15] Do People Finetune SAM?[00:12:15] swyx: So as sort of audience surrogate, who knows less than the two of you, I'm going to ask a really dumb question maybe, but is everyone using stock Segment Anything?[00:12:23] swyx: Are they fine tuning for the medical domain? Like, how on earth could it work for the medical field without fine tuning, right? Like, is that a thing?[00:12:32] Nikhila Ravi: So I mean, I can give a quick perspective from the research side. So one of the design decisions we made in SAM was to not have class labels. And so all the data is annotated in a class agnostic way.
And for those use cases, Having some extra fine tuning data would probably help, and we've sort of seen that there's some papers that have come out that do this, and, you know, we'd love to hear, Joseph, how people are collecting data with SAM and fine tuning for their use cases.[00:13:59] Joseph Nelson: Once SAM came out, there were adaptations that said, could we use SAM to be, you know, like, efficient SAM? Like, basically take SAM and maybe accelerate it. And then there were domain adapted SAMs, like CellSAM, for example, out of the UC system. Now, what's interesting is, there's, like, adapting SAM to a domain, there's kind of two ways by which that's done.[00:14:21] Joseph Nelson: One is, as you mentioned, like, potentially SAM doesn't have a good concept of The objects of interest. And so you need to do domain adaptation and increase the accuracy for zero shot prediction. The second way though, is it's not fine tuning. It's actually just prompting. It's just guiding the model existing knowledge.[00:14:42] Joseph Nelson: to say which segments you care about. And both those are actually kind of equally important on the application side. You need to, like, a priori ensure that the objects of interest can be correctly segmented and maybe collect data to do that. 
But even if you had, like, a perfect SAM, like an omniscient SAM that could see every segment in every domain with all pixels perfectly outlined, in production you would still need some way to almost, like, signal to the model what you care about. Like, to paint this picture: if you are, like, a retailer and you are providing photos of models wearing your clothing on your retail site, you may care about, you know, only the shirt, and SAM by default might segment the full person. And so there's, you know, visual prompting that you can do to ensure that you only outline maybe the shirt, for the purposes of swapping in and out different shirts for displaying a given model on a retail page. And so I think what's interesting is that's where, like, I wouldn't call it domain adaptation, but that's where, like, when you apply it to industry, tooling is one thing that's particularly important for enabling SAM to reach its full potential.[00:15:51] swyx: That's really encouraging to hear. I should also think, like, you know, the last time we talked about this, we wanted to, the very natural addition on the class labeling side is the Grounding DINO work, right? So I think people built Grounded SAM and all the other extensions.[00:16:05] Video Demo of SAM[00:16:05] swyx: I think it's, it's probably a good time to cut to a quick demo of SAM2 for people who are, who are tuning in for SAM2, and who better to demo SAM2 than Nikki.[00:16:15] Nikhila Ravi: Sure. So I'll try to narrate what I'm, what I'm doing, so audio listeners can also understand. So we have a web demo where anyone can try SAM2 on a video. Here we have a video of someone kicking a football, and I'm going to click on the football to select the object in the first frame. But you can actually select the object in any frame of the video, and this will work.[00:16:40] Nikhila Ravi: The next step is to hit track. So the model's now tracking this in real time. We don't save any of this, it's all running in real time.
And now you can see the ball has been tracked throughout the entire video. There's even like a little bit of a challenging case here where the shoe covers the football.[00:16:59] Nikhila Ravi: And actually, you know, the model makes a little bit of a mistake here, but that's okay, because we can actually add a refinement click. You can add negative clicks until we get the mask that we want on this frame. And then you can hit track again, and the model will track the object, taking into account the additional information I've provided at that frame.[00:17:25] Nikhila Ravi: We've also added a couple of other fun things you can do on top of the track, like add effects. We can add, you know, foreground effects, background effects. And these are just ways of showing how we can use the output from SAM2 as part of other tools, like video editing tools or other systems. So this is just a preview of what you can do with SAM2, but the really cool use cases are places where we might not have even imagined SAM2 being useful.[00:17:54] Nikhila Ravi: So we have a number of examples of things you might want to use it for. There's like underwater videos that it works actually really well for, even though the model's never really seen an octopus before, and octopus have a lot of moving parts that SAM2 can actually quite effectively keep track of, all the different tentacles, and we can probably see it more clearly if I desaturate the background.[00:18:18] Nikhila Ravi: We can see that actually the tracking of all the different tentacles is quite accurate. Another challenge with video is that objects can actually become occluded. They can disappear from view and reappear. And a really fun example here is the shuffling cup game, which many of you might have seen. And so here I can click on the ball in the first frame.[00:18:41] Nikhila Ravi: I can also, you know, click on a different cup.
And so here, the additional challenge is that there's three cups that look exactly the same. And then there's the ball that will get occluded by the cup. So the ball's no longer visible, the cups are all moving around, they all look the same. But the model actually keeps track of the cup that we selected.[00:19:02] Nikhila Ravi: And, as you can see at the end, here I'll jump to the end so you can see, it actually finds the cup again. I wanted to point out a couple of fun demo UX features that we added that actually really helped with this. So if you can see at the bottom, there's these swim lanes, and the thickness of the swim lane tells you if the object's visible or not.[00:19:22] Nikhila Ravi: So at the beginning, the object's visible,[00:19:25] swyx: the object[00:19:26] Nikhila Ravi: disappears, and then the object comes back. So you can actually visually tell when the object's being occluded and when it's not, and so it's a nice way of, like, knowing if you need to go in and fix the model prediction or not. And so these are some of the UX innovations that we came up with, as well as the model innovations.[00:19:46] Joseph Nelson: One thing that I think is really notable here, there's two things. One is that, like, I'd love to have a little bit of a discussion about how the model's keeping track of the embedded scene to keep track of the ball and the cup in different places.
Put a pause on that for a second.[00:19:59] Why the Demo is so Important[00:19:59] Joseph Nelson: One thing that Meta has put an emphasis on here, to a much greater degree than other model releases, is the demo experience: recognizing that in addition to having a model that can do zero shot segmentation, you've created a web experience that allows folks to kind of experience both the video effects and the types of UX innovations that encourage usage and adoption.[00:20:23] Joseph Nelson: It's actually kind of reminiscent of how the underlying technology of ChatGPT was available prior to the web experience of ChatGPT. Can you talk a bit about why that was a consideration to your team, and how you thought about the creation of the demo experience in tandem with training and releasing a new model?[00:20:41] Nikhila Ravi: Yeah, absolutely. I think that's a really great example of how, you know, ChatGPT was really more of a UX innovation. Obviously there were a number of research innovations that helped to get to this point. But as you said, like, the underlying technology was around for a while. And, you know, putting this UX around it as a chat interface helped tremendously with the[00:21:03] Nikhila Ravi: adoption, and people understanding how it could be useful for real world use cases. And in computer vision, especially, it's so visual. The best way to show how these models work is by trying it on your own image or your own video. With the original SAM, we put a lot of effort in building like a high quality demo.[00:21:23] Nikhila Ravi: And the other piece here is that the demo is actually the annotation tool. So we actually use the demo as a way to improve our annotation tool. And so then it becomes very natural to invest in building a good demo, because it speeds up your annotation and improves the data quality, and that will improve the model quality.[00:21:43] Nikhila Ravi: With this approach, we found it to be really successful.
And obviously externally, people really liked being able to try it. I think, you know, people in fields outside of machine learning would never have tried SAM if we didn't have that demo. And I think that definitely led to a lot of the adoption in, like, diverse fields.[00:22:05] Nikhila Ravi: And so because we saw that with SAM 2, like, the demo was a priority, first class citizen from day one. And so we really invested in making that. And I think with SAM2 as well, we wanted to have like a step change in the demo experience. Interactive video segmentation, I think that experience is something that maybe has not had much thought given to it.[00:22:27] Nikhila Ravi: And we really wanted to be like, okay, if we are to design a step-changing video segmentation experience, what would that look like? And that really did influence our model and annotation design as well.[00:22:40] Joseph Nelson: It's a really encouraging trend for not thinking about only the new model capability, but what sort of applications folks want to build with models as a result of that downstream.[00:22:49] Nikhila Ravi: I think it also really forces you to think about many things that you might postpone, for example, efficiency.[00:22:55] Joseph Nelson: Yes.[00:22:55] Nikhila Ravi: For a good demo experience, making it real time is super important. No one wants to wait. And so it really forces you to think about these things much sooner, and actually makes us think about what kind of image encoder we want to use, or like other hardware efficiency improvements.[00:23:13] Nikhila Ravi: So those kinds of things, I think, become a first class citizen when you put the demo first.[00:23:19] SAM 1 vs SAM 2 Architecture[00:23:19] Joseph Nelson: That's one thing I was going to ask about, and this is related to the architecture change. So SAM1 and the SAM1 demo experience.
You have the encoder that's creating the embeddings of all the potential spaces.[00:23:31] Joseph Nelson: That needs to be run on a GPU. That's a relatively intensive operation. But then the query of those embeddings can be run independently and on a cheaper process. So in the SAM1 demo, the way that it was structured, and also this is the way that we have our SAM tool structured in Roboflow as well, is images go to a GPU to get all the SAM based embeddings.[00:23:53] Joseph Nelson: But then for querying those embeddings, we do that client side, in the browser, so that the user can very quickly, you know, you can move your mouse over and you get the proposed candidate masks that SAM found for that region of the image. In SAM 2 you dropped that in the web demo. And I think that's because you made some notable improvements to the rate at which encoding happens.[00:24:16] Joseph Nelson: Can you talk a bit about what led to those speed increases and, again, how that interplays with providing a fast user experience for interacting with the model?[00:24:29] Nikhila Ravi: Yeah. So the SAM2 web demo is primarily focused on video. We, we decided to just keep it simple and focus on video, and on GitHub, we have a Colab notebook that shows how to run SAM2 on images.[00:24:41] Nikhila Ravi: So if you're interested in using, replacing SAM with SAM2 for images, check out GitHub. But on the SAM2 demo, it's not as straightforward to adopt the same architecture as SAM for video, because we can't send the per frame image embeddings for an entire video back to the front end. In SAM, each frame embedding was like four megabytes, but if you have a long video and that's like per frame, it would become impossible to send that back to the front end.[00:25:11] Nikhila Ravi: So, SAM 2 actually, in terms of the architecture details, I was actually just looking at this earlier, but the SAM1 model was around 630 million parameters.
It's a fraction of the size of these large language models, but very small. Actually, SAM2, the largest model, is around 224 million parameters. So it's actually one third the size of the original SAM model.[00:25:38] Nikhila Ravi: So we changed the image encoder from a ViT-H in SAM to a Hiera model, which was also developed by Meta. So that definitely was something that helped. And in terms of the efficiency compared to SAM, if we were to run SAM per frame on a video versus run SAM 2, it's around six times faster to run SAM 2 than to run SAM per frame.[00:26:03] Nikhila Ravi: A number of things improved the efficiency of SAM2 such that we were actually able to run this entirely on the server and not have any component in the front end. But I am very curious to see who puts this on device, like I'm pretty sure soon we'll see like an on-device SAM2 or, you know, maybe even running in the browser or something, so.[00:26:25] Nikhila Ravi: I think that could definitely unlock some of these edge use cases, that we were able to make a compelling web demo without having to do that.[00:26:34] swyx: Hugging Face is probably already working on a Transformers.js version of it, but totally makes sense. I want to talk more about things from the paper, but I think we're still in this sort of demo section.[00:26:42] Video Demo of SAM on Roboflow[00:26:42] swyx: And so I want to hand it to Joseph for his demo to see what the Roboflow site looks like.[00:26:47] Joseph Nelson: So I can, I can give some context into one key area that, Nikhila, you mentioned earlier, which is: SAM has made the decision, both SAM 1 and SAM 2, to be class agnostic in terms of its predictions. And with that, you then have the ability to have a generalizable model for zero shot capability.[00:27:05] Joseph Nelson: However, in a lot of domain applications, you do want the class wise name.
And so a lot of the challenge can be adding that class wise name for, at least, the annotation, to an experience that we've created. That's one of the key considerations. So I will similarly share my screen and show an example.[00:27:27] Joseph Nelson: Here, I have a bunch of images, and there's a number of ways that I could annotate things, like I could prompt a large multimodal model with like grounding capabilities, you know, you could outsource it, or I can do manual labeling. And with the manual labeling, this is where we make use of models like Segment Anything[00:27:45] Joseph Nelson: to propose candidate masks and make it faster. So we have, you know, this annotation pane and what we call the smart poly tool, which is powered by Segment Anything. This is currently Segment Anything 1. We're accelerating and seeing improvements similar to what the paper shows, of Segment Anything 2 performing better on[00:28:06] Joseph Nelson: images as well as video. But with Segment Anything, I'm able to basically prompt regions of my image of interest. So for example, if like, I wanted to say, I want to like add the drum set, you'll see here that like, the original candidate proposal is just the bass drum, but let's say I wanted the whole drum set.[00:28:26] Joseph Nelson: So the UX primitive of being able to add and subtract candidate regions of interest is really intuitive here. And now, great, I have this outline, but in fact what I want is, I want to name that as a class. Because maybe for the model that I'm building, I want to build like a task specific model, you know, like an object detection model or an instance segmentation model.[00:28:50] Joseph Nelson: Or, you know, maybe I'm even using like a multimodal model and I want that multimodal model to refer to regions of interest in the images as a specific thing. And so I think what's, you know, really powerful is, of course, like, I get this really rich zero shot prediction.
And here we have our friend Rick.[00:29:10] Joseph Nelson: So I get this really rich candidate set of predictions. But then by adding the class wise label, I can, you know, very quickly make sure that any downstream tasks are aware not just of the segment, but also of the, what is inside that segment. Which actually takes me to A separate point of something that I predict that's probably going to happen and Nikhil, I'm actually kind of interested why maybe your team made a conscious decision to not do this initially with SAM2.[00:29:40] Joseph Nelson: There's been an emergent set of models that are also adding open text prompting capabilities to grounding models. So for example, like you've seen models like Grounding Dino or Owlvit, which, you know, you can do. Even image to image or text to image based prompting to find regions of interest. And maybe maybe I can actually give an example of that even in the context of this same data.[00:30:05] Joseph Nelson: So if I wanted to try out, you know, grounding dino on this same set of images, I could try out, you know, prompting grounding dino for a set of different classes. And what's notable is let's do, I don't know, let's prompt for person and we'll prompt for person and prompt for I don't know, microphone.[00:30:26] Joseph Nelson: NLASC or microphone. Here I can text prompt the image and then the understanding, in this case Grounding Dino's understanding, of where people are in this image allows me to create, in this case, bounding boxes, but, you know, soon you can do segmentations or in tandem with SAM do segmentations. And, you know, we've already seen applications of using SAM2 in tandem with models like Grounding Dino or Florence 2.[00:30:54] Joseph Nelson: So that people can basically text prompt and then get the benefits of the zero shot segmentation at the same time as getting the open form querying. 
And in doing so, you know, we maintain a framework called Autodistill, so like folks can very quickly, you know, bring some images and then use Autodistill to define some ontology and then prompt and say what you want from that ontology.[00:31:19] Nikhila Ravi: So you already do this for video as well?[00:31:21] Joseph Nelson: You can apply it to videos or groups of images, yes. So this is using a project called Autodistill. And the concept of Autodistill is: use a base model, like a big base model, which could be like SAM or Grounding Dino, and then you pass a directory of images, which also could be video, broken into individual frames, and you pass an ontology as well.[00:31:43] Joseph Nelson: So an example I was just showing was like the hello world we have, which is like a shipping container. And then the combination of the grounding capabilities of, in the example I was showing, Florence 2 plus SAM, looks for the concept of container, and then SAM does the rich segmentation of turning that concept of container into the candidate proposal of the region, so that a user could just say, hey, I want all the shipping containers, run this across a bunch of images or video frames, and then get back the class-wise labels plus the regions of interest.[00:32:17] Joseph Nelson: And this feels like a natural extension. And in fact, like the open form grounding capabilities between SAM1 and SAM2 became something the field was broadly doing. So I'm curious, like, from your perspective, one of the things I thought maybe SAM2 would do is actually add this capability natively. So I'm curious to hear, like, the conscious decision to say, hey, we want to continue to be class agnostic.[00:32:39] Extending SAM 2 with other models[00:32:39] Joseph Nelson: We don't want to add, yet maybe, open form text prompting as a part of finding the segments and parts of images. And I'd love to hear about like the decision to think about it that way.
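The Autodistill pattern Joseph outlines, a big base model plus a user-supplied ontology labeling a directory of images, can be sketched roughly as below. This is a simplified stand-in, not the real Autodistill API: `detect` is a hypothetical stub playing the role of a grounded base model like Grounding Dino or Florence 2 plus SAM, and the ontology maps open-text prompts to the class names you want emitted:

```python
# Rough sketch of the Autodistill pattern: a base model labels data against
# an ontology, producing class-wise annotations a smaller task-specific
# model could then be trained on. detect() is a hypothetical stub standing
# in for a real grounding model; boxes are hand-written placeholders.
def detect(image, prompt):
    # Pretend grounding: return candidate boxes whose tag matches the prompt.
    return [box for tag, box in image["regions"] if tag == prompt]

def label_dataset(images, ontology):
    """ontology maps an open-text prompt -> the class name to emit."""
    dataset = []
    for image in images:
        annotations = []
        for prompt, class_name in ontology.items():
            for box in detect(image, prompt):
                annotations.append({"class": class_name, "box": box})
        dataset.append({"file": image["file"], "annotations": annotations})
    return dataset

images = [{"file": "dock.jpg",
           "regions": [("shipping container", (10, 10, 80, 40)),
                       ("truck", (0, 50, 30, 70))]}]
labeled = label_dataset(images, {"shipping container": "container"})
print(labeled[0]["annotations"])  # [{'class': 'container', 'box': (10, 10, 80, 40)}]
```

In the real pipeline the boxes would additionally be passed to SAM to turn each grounded concept into a rich segmentation mask.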
And if you're encouraged, or if you want, kind of, what's happening here, where people are naturally combining these capabilities, as something that you would expect and encourage to happen despite not having it[00:33:00] Joseph Nelson: in the base model itself.[00:33:02] Nikhila Ravi: Yeah, it's a great question. So I think it's really cool that the community is taking SAM and taking SAM 2 and building on top of it and coming up with cool applications. We love to see that. That's exactly why we open source our work. And then in terms of why we didn't put it into SAM 2, so as you've probably seen with SAM and SAM 2, it's a fairly narrow problem.[00:33:25] Nikhila Ravi: But we really tried to make it a step change in the capability. And so with each version, we are trying to limit the focus to one thing that we know we can do really well. And in this case, like the first SAM, it was class agnostic segmentation, but can we do it so well that it's effectively solved?[00:33:47] Nikhila Ravi: And similarly, can we do that same thing, but with video segmentation. So one step at a time, we are working on each of these problems one at a time so that we can actually deliver something that's really world class and step changing.[00:34:03] Joseph Nelson: So does that mean SAM 3 will have the text prompting problem as, like, the next challenge?[00:34:09] Nikhila Ravi: Who knows, who knows? Maybe the community will build that too.[00:34:15] Joseph Nelson: So it makes sense to like very narrowly do something very well. And that's, I think, proven to be well accomplished.[00:34:21] Nikhila Ravi: It's like taking both the data, the model and the demo, and how can we push all three towards solving one thing really well?[00:34:30] Nikhila Ravi: So we found that.
That's like a good recipe, and that's how we've limited the focus of each of these models.[00:34:38] swyx: This development reminds me of how, you know, when you break out the interpretability of ConvNets and you can see like, oh, this is the edge detection one. I feel like SAM is the edge detection version equivalent.[00:34:51] swyx: And then you build up to whatever the next feature is on top of that.[00:34:54] Limitations of SAM: Screenshots[00:34:54] Joseph Nelson: Can I bring up one limitation of SAM? So like we've like even SAM one, SAM two, and the model was released at 4 PM Pacific on Monday. We're recording this at 11 AM Pacific on Thursday. So it's very fresh for a lot of the capabilities.[00:35:09] Joseph Nelson: And it is so clear that it is a stepwise change in the capability that, Nikhila, you mentioned your team wants to do, which is extend SAM's zero shot class agnostic capability to video, like, A plus, kind of mission accomplished. One thing that's interesting is finding, like, domain problems where there might be still domain applicability and domain adaptation that is available.[00:35:32] Joseph Nelson: One benchmark that we introduced at CVPR is this thing called RF100, which is like, seven different domain type problems that the industry commonly is working on in vision, like underwater, document processing, aerial examples, medicine examples. And one place where, interestingly, Segment Anything is maybe less performant than other models is handling screenshots.[00:35:57] Joseph Nelson: For example, like a lot of folks that are building agents to interact with the web are particularly interested in that challenge of given a screenshot of a computer, what are all the buttons? And how could I autonomously navigate and prompt and tell it to click?
And I can show an example of, like, how SAM kind of performs on this challenge just to outline some of the context of this problem.[00:36:23] Joseph Nelson: But I'm curious like how you think about limitations like this and what you would expect to want to be the case. So here I just have a notebook where I run SAM on the source image on the left, and then SAM output is on the right. And this is just a screenshot of a website, where we just grab like the top 100 websites by traffic and grab screenshots from them.[00:36:42] Joseph Nelson: One example of a place where I could see the community improving on SAM, and I'm curious how you think about this challenge and maybe why SAM is less well adapted for this type of problem, is processing screenshots. So I'll share my screen to give an example. For viewers that are participating here, you see like an example, a screenshot of a website on the left, and then right is SAM 2 running on that image.[00:37:06] Joseph Nelson: And in the context of agents, folks usually want to have like, hey, tell me all of the buttons that an agent could press. Tell me like maybe the headlines of the articles, tell me the individual images. And SAM 2 behaves perhaps predictably, where it outlines like people in the images and like some of the screen text.[00:37:22] Joseph Nelson: I'm curious, like, how you think about a challenge like this for a model that sees everything in the world, what about handling digital contexts? And why maybe it could perform better here and how you would expect to see improvement for domains that might have been out of distribution from the training data?[00:37:40] Nikhila Ravi: Yeah, this is a good question. So at FAIR, we don't really build with a specific use case in mind. We try to build like these foundational models that can be applied to lots of different use cases out of the box.
So I think in this kind of example, potentially people might want to annotate some data[00:37:59] Nikhila Ravi: and fine tune on top of what we release. I think we probably won't build things that are very custom for different use cases. I think that's not a direction we'll go in, but as you said, like the model is an annotation tool to improve the model. And so I think that's definitely the approach we want to take: we provide the tools for you to improve the model as well as the model itself.[00:38:27] Joseph Nelson: That makes sense. Focus on, like, as many multi or zero shot problems, and then allow the community to pick up the torch for domain adaptation.[00:38:34] Nikhila Ravi: Yeah, absolutely. Like, we can't solve all the problems ourselves. Like, we can't solve all the different domains. But if we can provide a sort of base hammer tool, then people can apply it to all their different problems.[00:38:48] SAM 2 Paper[00:38:48] swyx: If you don't mind, I guess we want to transition to a little bit on like asking more questions about the paper.[00:38:53] Udio AI: Sure.[00:38:54] swyx: There's a lot in here. I love the transparency from Meta recently with like Llama 3 last week and then, and was it last week? Maybe a little bit less than last week. But just really, really well written and a lot of disclosures, including the dataset as well.[00:39:08] SA-V Dataset and SAM Data Engine[00:39:08] swyx: I think the top question that people had on the dataset, you know, you released diverse videos, and there's a lot of discussion about the data engine as well, which I really love, and I think it's innovative. I think the top question is like, how do you decide the size of the dataset?[00:39:22] swyx: You know, what were you constrained by? People are asking about scaling laws. You had some ablations, but as a research manager for this whole thing, like how do you decide what you need?[00:39:32] Nikhila Ravi: Yeah.
I mean, it's a great question. I think it's, as with all papers, you write them at the end of the project, so we can put these nice plots at the end, but going into it, I think, you know, the data engine design really follows[00:39:47] Nikhila Ravi: the model design. So, this is sort of how we thought about the task, how we thought of the model capabilities. You can really see it's reflected in the different phases of the data engine. We started with just SAM, we apply SAM per frame. That's like the most basic way of extending SAM to video. Then the most obvious thing to do is to take the output masks from SAM and then provide it as input into a video object segmentation model that takes the mask as the first frame input.[00:40:19] Nikhila Ravi: And that's exactly what we did. We had SAM plus a version of SAM2 that only had mask as input. And then in the last phase, we got rid of SAM entirely and just had this one unified model that can do both image and video segmentation, and can do everything in just one model. And we found that, you know, going from each phase, it both improved the efficiency and it improved the data quality.[00:40:46] Nikhila Ravi: And in particular, when you get rid of this two part model, one of the advantages is that when you make refinement clicks, so, you prompt the model in one frame to select an object, then you propagate those predictions to all the other frames of the video to track the object. But if the model makes a mistake and you want to correct it, when you have this unified model, you only need to provide refinement clicks.[00:41:14] Nikhila Ravi: So you can provide maybe a negative click to remove a region or a positive click to add a region. But if you had this decoupled model, you would have to delete that frame prediction and re-annotate from scratch.
And so you can imagine for more complex objects, this is actually adding like a lot of extra time to redefine that object every time you want to make a correction.[00:41:39] Nikhila Ravi: So both the data and the data engine phases really follow, like, how we thought about the model design and the evolution of the capabilities, because it really helped us to improve the data quality and the annotation efficiency as well.[00:41:54] swyx: Yeah, you had a really nice table with like time taken to annotate and it was just going down and down.[00:41:58] swyx: I think it was like down by like 90 percent by the time you hit stage[00:42:02] Joseph Nelson: three, which is kind of cool. We joke that when SAM 1 came out at Roboflow, we're like, was this purpose built for our software? Like you have the embedding, take like a big model, and the querying of the embeddings, a smaller model that happens in browser, which felt remarkably aligned.[00:42:18] Joseph Nelson: Now hearing you talk about how you think about building models with a demo in mind, it makes sense. Like, you're thinking about the ways that folks downstream are going to be consuming and creating value. So, what felt like maybe a coincidence was perhaps a deliberate choice by Meta to take into account how industry is going to take seminal advances and apply them.[00:42:36] Nikhila Ravi: Yeah. And it's not just humans. Like it could also be a model that outputs boxes that then get fed into this model. So really thinking about this as a component that could be used by a human, or as a component of a larger AI system. And that has, you know, a number of design requirements. It needs to be promptable.[00:42:56] Nikhila Ravi: It needs to have the zero shot generalization capability. We, you know, need it to be real time.
Those requirements really are very core to how we think about these models.[00:43:08] Memory Attention to solve Video[00:43:08] swyx: I cannot end this podcast without talking about the architecture, because this is effectively the research level, architecture level innovation that enabled what I've been calling object permanence for SAM.[00:43:22] swyx: And that's memory attention. What was the inspiration going into it? And you know, what did you find?[00:43:27] Nikhila Ravi: Yeah, so at a high level, the way we think about extending SAM to video is that an image is just a special case of a video that just has one frame. With that idea in mind, we can extend the SAM architecture to be able to support segmentation across videos.[00:43:45] Nikhila Ravi: So this is a quick video that shows how this works. So in the SAM architecture, we have the image encoder, we have a prompt encoder, we have a mask decoder. You can click on an image, and that basically is a prompt; we use that prompt along with the image embedding to make a mask prediction for that image. Going to SAM2, we can also apply SAM2 to images because we can, you know, as I said, treat an image as a video with a single frame.[00:44:15] Nikhila Ravi: And so in the SAM2 architecture, we introduce this new memory mechanism that consists of three main components. There's memory attention, there's a memory encoder, and then there's a memory bank. And when we apply SAM2 to images, these are effectively not used, and the architecture just collapses down to the original SAM architecture.[00:44:35] Nikhila Ravi: But when we do apply this to video, the memory components become really useful because they provide the context of the target object from other frames. And so this could be from past frames. There's two types of memory.
So there's like the conditional frames, or the prompted frames, which are basically the frames at which a user or a model provides input like clicks.[00:45:01] Nikhila Ravi: And then there's the surrounding frames. And, say, we use six frames around the current frame as memory of the object. So there's both those types of memory that we use to make the prediction. Going into a little bit more detail about that, there's like two kinds of memory that we use.[00:45:18] Nikhila Ravi: So one is like spatial memory. So it's like this high resolution memory that captures the spatial details. And then we also have this like longer term object pointer memory that captures some of the sort of higher level concepts. And I think, Swyx, you had a comment about how does this relate to sort of context windows in LLMs.[00:45:37] Nikhila Ravi: And both of these types of memories have some relation to context window; they both provide different types of information on the spatial side or in terms of the concept of the objects that we want to track. And so we found that having like six frame length for the spatial memory, coupled with this longer period of the object pointer memory, provides strong video segmentation accuracy at high speed.[00:46:01] Nikhila Ravi: So, as I mentioned, the real time aspect is really important. We have to find this speed accuracy trade off. And one way in which we sort of circumvent this is by allowing additional prompts on subsequent frames. So even if the model makes a mistake, maybe it loses the object after an occlusion, you can provide another prompt, which actually goes into the memory.[00:46:24] Nikhila Ravi: And so the prompted frames are always in the memory. And so if you provide a prompt on a frame, the model will always remember what you provided.
And so that's a way in which we can sort of avoid some of the model failure cases that actually is a big limitation of current models. Current video object segmentation models[00:46:45] Nikhila Ravi: don't allow any way to recover if the model makes a mistake. And so, Joseph, going back to your point about the demo, that's something that we found just by playing with these models. There's no way to make a correction, and in many real world use cases, like, it's not going to be a one time prediction, but you actually want to be able to intervene. Like, if an LLM makes a mistake, you can actually be like, no, actually do it this way, and provide feedback, and so we really want to bring some of that thinking into how we build these computer vision models as well.[00:47:16] "Context Length" in Memory Attention[00:47:16] swyx: Amazing. My main reaction to finding out about the context length of eight input frames and six past frames as the default is: why not 60? Why not 600? In text language models, we're very used to severely extending context windows. And what does that do to the memory of your model?[00:47:35] Nikhila Ravi: So I think maybe one thing that's different is that the object in video, it is challenging.[00:47:41] Nikhila Ravi: Objects can, you know, change in appearance. There's different lighting conditions. They can deform. But I think a difference to language models is probably the amount of context that you need is significantly less than maintaining a long multi-turn conversation. And so, you know, coupling this short term spatial memory with this, like, longer term object pointer, we found, was enough.[00:48:03] Nikhila Ravi: So, I think that's probably one difference between vision models and LLMs.[00:48:09] Object Tracking[00:48:09] Joseph Nelson: I think so.
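The memory bookkeeping Nikhila describes, prompted frames retained permanently plus a rolling window of six recent frames, can be sketched as a small data structure. This is a hypothetical illustration of the bookkeeping only: the real SAM 2 memory bank stores memory-encoder features and object pointer tokens, not frame indices as here:

```python
from collections import deque

# Sketch of SAM2-style memory bank bookkeeping: prompted (conditional)
# frames are always remembered, while unprompted frames live in a rolling
# window of the six most recent. Entries here are bare frame indices as
# placeholders for encoded spatial features and object pointers.
class MemoryBank:
    def __init__(self, window=6):
        self.prompted = []                  # frames where a user/model gave clicks
        self.recent = deque(maxlen=window)  # surrounding-frame spatial memory

    def add(self, frame_idx, prompted=False):
        if prompted:
            self.prompted.append(frame_idx)  # never evicted
        else:
            self.recent.append(frame_idx)    # oldest evicted once past the window

    def context(self):
        # Memory attention would condition on all of these entries.
        return self.prompted + list(self.recent)

bank = MemoryBank()
bank.add(0, prompted=True)      # initial click frame
for t in range(1, 10):
    bank.add(t)                 # tracked frames; only the last six survive
bank.add(10, prompted=True)     # refinement click, e.g. after an occlusion
print(bank.context())  # [0, 10, 4, 5, 6, 7, 8, 9]
```

Note how frames 1 through 3 have been evicted from the rolling window, while both prompted frames (0 and 10) remain, which is why a refinement click is "always remembered."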
If one wanted to be really precise with how the literature refers to object re-identification: object re-identification is not only what SAM does for identifying that an object is similar across frames, it's also assigning a unique ID.[00:48:25] Joseph Nelson: How do you think about models keeping track of occurrences of objects in addition to seeing that the same looking thing is present in multiple places?[00:48:37] Nikhila Ravi: Yeah, it's a good question. I think, you know, SAM2 definitely isn't perfect and there's many limitations that, you know, we'd love to see people in the community help us address. But one definitely challenging case is where there are multiple similar looking objects, especially if that's like a crowded scene with multiple similar looking objects; keeping track of the target object is a challenge.[00:49:03] Nikhila Ravi: That's still something that I don't know if we've solved perfectly, but again, the ability to provide refinement clicks, that's one way to sort of circumvent that problem. In most cases, when there's lots of similar looking objects, if you add enough refinement clicks, you can get the perfect track throughout the video.[00:49:22] Nikhila Ravi: So definitely that's one way to solve that problem. You know, we could have better motion estimation. We could do other things in the model to be able to disambiguate similar looking objects more effectively.[00:49:35] swyx: I'm just interested in leaving breadcrumbs for other researchers, anyone interested in this kind of architecture.[00:49:41] swyx: Like, are there papers that you would refer people to that are influential in your thinking or, you know, have other interesting alternative approaches?[00:49:49] Nikhila Ravi: I think there's other ways in which you can do tracking in video. You might not even need the full mask.
There are some other works that just track, like, points on objects.[00:49:59] Nikhila Ravi: It really, really depends on what your application is. Like if you don't care about the entire mask, you could just track a bounding box. You could just track a point on an object. And so having the high fidelity mask might not actually be necessary for certain use cases. From that perspective, you might not need the full capabilities[00:50:19] Nikhila Ravi: of SAM or SAM2. There's many different approaches to tracking. I think I would encourage people to think about what they actually need for their use case and then try to find something that fits, versus, yeah, maybe SAM2 is too much, you know, maybe you don't even need the full mask.[00:50:37] swyx: Makes total sense, but you have solved the problem that you set out to solve, which is no mean feat, which is something that we're still appreciating even today.[00:50:44] The Future of FAIR[00:50:44] swyx: If there are no further questions, I would just transition to sort of forward looking, future looking stuff. Joseph already hinted at, like, you know, our interest in SAM and the future of SAM, and obviously you're the best person to ask about that. I'm also interested in, like, how should external people think about FAIR, you know, like there's this stuff going on, this Llama, this Chameleon, this Voicebox, this ImageBind, like, how are things organized?[00:51:09] swyx: And, you know, where are things trending?[00:51:11] Nikhila Ravi: Yeah, so in FAIR, we, you know, we have a number of different research areas. I work in an area called perception. So we build vision systems that solve, basically, all the fundamental problems in computer vision. Can we build a step change in all of these different capabilities?[00:51:29] Nikhila Ravi: SAM was one example. SAM2 is another example.
There are tons of other problems in computer vision where we've made a lot of progress, but can we really say that they're solved? And so that's really the area in which I work. And then there's a number of other research areas in language and in embodied AI,[00:51:49] Nikhila Ravi: and more efficient models and various other topics. So FAIR in general is still very much pushing the boundaries on solving these foundational problems across different domains.[00:52:07] swyx: Well, fair enough. Maybe just outside of FAIR, just the future of computer vision, right?[00:52:10] CVPR, Trends in Vision[00:52:10] swyx: Like you are very involved in the community. What's the talk of the town at CVPR? Both of you went. Who's doing the most interesting work? It's a question for both of you.[00:52:19] Joseph Nelson: I think the trends we're seeing towards more zero shot capability for common examples will accelerate. I think multimodality, meaning using, you know, images in tandem with text for richer understanding, or images and video in tandem with audio and other mixed media, will be a continued acceleration trend.[00:52:43] Joseph Nelson: The way I kind of see the field continuing to progress, the problem statement of computer vision is making sense of visual input. And I think about the world as the things that need to be observed follow your traditional bell curve, where like things that most frequently exist out in the world are on the center of that bell curve.[00:53:05] Joseph Nelson: And then there's things that are less frequently occurring that are in those long tails. For example, you know, as far back as like 2014, you have the COCO dataset, which sets out to say, hey, can we find 80 common objects in context, like silverware and fridge and these sorts of things.
And we also conceptualized the challenge of computer vision in terms of breaking it down into individual task types, because that's like the tools we had for the day.[00:53:29] Joseph Nelson: So that's why, you know, you have the origination of classification, object detection, instance segmentation. And then as you see things continue to progress, you have models and things that need to observe areas in the long tails. And so if you think of the COCO dataset as the center of that bell curve, I think of like the long tails, like really edge case problems.[00:53:49] Joseph Nelson: Some of our customers, like Rivian, for example: only Rivian knows what the inside of like a Rivian should look like as it's assembled and put together before it makes its way to a customer, and they're making custom parts, right? So how could a model have been trained on the things that go inside the componentry of producing a vehicle? And in essence, what's kind of happening with computer vision is you're seeing models that generalize in the middle of the bell curve push outward faster.[00:54:17] Joseph Nelson: That's where you see the advent of like open text models or the richness of understanding of multimodal models, to allow richer understanding without perhaps any training, or maybe just using pre training and applying it to a given problem. And then there's like, you know, kind of like the messy middle in between those two, right?[00:54:38] Joseph Nelson: So like, Nikhila kind of talked about examples where SAM does well out of distribution, where like, it finds an octopus, even though there wasn't octopi in the training data.
I showed an example with screenshots, where SAM isn't yet super great at screenshots, so maybe that's, like, in the messy middle or in the longer tails for now.[00:54:54] Joseph Nelson: But what's going to happen is there needs to be systems of validating, the point of view that I think about, like, tooling to also validate that models are doing what we want them to do, adapting to datasets that we want them to adapt to. And so there's a lot of things on a forward looking basis that allow propelling that expansion of generalizability.[00:55:14] Joseph Nelson: That's for open text problems. That's where scaling up of training, of dataset curation, continues to play a massive role. Something that's notable, I think, about SAM2 is it's, what, 57,000 videos? 51,000?[00:55:30] Nikhila Ravi: About 51,000, yeah.[00:55:32] Joseph Nelson: And 100,000 internal datasets. That's, like, not massive, right? And the model size also isn't, you know, the largest, the largest model being a couple hundred million parameters.[00:55:43] Joseph Nelson: The smallest model is 38 million parameters and can run at 45 FPS on an A100, right? Like, we're going to see more capable, more generalizable models being able to run on a wider array of problems with zero or multi shot capability at a faster rate. And I think the architecture innovations in things like SAM2's memory, of increasingly like transformers making their way into vision, and probably blended architectures increasingly too.[00:56:15] Joseph Nelson: So my viewpoint on a go forward basis is we will have that bell curve of what humans can see, both in the center of that curve and the long tails.
And architectural changes allow richer understanding, multi and zero shot, and putting those into systems and putting those into industry and putting those into contexts that allow using them in practical and pragmatic ways.[00:56:38] Joseph Nelson: Nikhila, I'd love to hear your thoughts and perspective on how you think the research trends map, or don't map, to that, and maybe some of the key innovations that you saw at CVPR this year that, you know, got you excited about the direction, and maybe some promising early directions that you're thinking about researching or pushing the boundaries of further.[00:56:56] Nikhila Ravi: Yeah, I just wanted to actually reply to a couple of things that you said. So actually in video object segmentation, the number of classes that are annotated in these, and the size of these datasets, are really small. So with SAM, it's, you know, we had a billion masks, we had 11 million images, didn't have class labels.[00:57:17] Nikhila Ravi: But even before that, there were a lot of datasets that have class labels and are annotated with significantly more, with like a lot of class labels. Whereas in video datasets, the number of class labels are very small. So there's like YouTube-VOS, which has 94 object categories, there's MOSE, which has around like 30 or so object categories.[00:57:38] Nikhila Ravi: And they're usually like people, there's cars, there's dogs and cats and all these common objects, but they don't really cover a very large number of object categories. And so while SAM learned this general notion of what an object is in an image, these video tracking models actually don't have that knowledge at all.[00:58:01] Nikhila Ravi: And so that's why having this dataset is really important for the segment anything capability in video, because if you just provide the mask as the input to an off the shelf video object segmentation model.
It might not actually be able to track that arbitrary object mask as effectively as a SAM2 model that's actually trained to track[00:58:24] Nikhila Ravi: any object across the entire video. So combining two models together to try to get that capability will actually only get you so far, and being able to actually create the dataset to enable that segment anything capability was actually really important. And we can actually see that when we do comparisons with baselines, where we provide SAM2 with the same input mask and the baseline model with the same input mask.[00:58:53] Nikhila Ravi: For example, the t-shirt of a person: SAM2 can track the t-shirt effectively across the entire video, whereas these baselines might actually start tracking the entire person, because that's what they're used to doing, and isolating it to just one part of the person is not something they were ever trained to do. And so those are sort of some of the limitations.

The Retrospectors
Meet The Addams Family (Aug 6, 2024 · 12:07)

The Addams Family debuted as a one-panel cartoon in The New Yorker on 6th August, 1938. Created by Charles Addams, the family (who for decades were essentially archetypes, without character names) were a satirical inversion of the ideal postwar American middle-class nuclear family, delighting in the macabre, and seemingly unaware or unconcerned that other people find them bizarre or frightening. In this episode, Arion, Rebecca and Olly explain how the Addamses finally got their TV names; consider the on-screen rivalry between their show and the similarly-themed The Munsters; and recall MC Hammer's seminal interpretation of their iconic theme tune… Further Reading: • ‘Charles Addams Cartoons Are Far Darker Than The Addams Family Films' (Den of Geek, 2021): https://www.denofgeek.com/culture/charles-addams-cartoons-darker-than-the-addams-family/ • ‘Charles Addams' (The New Yorker, 2010); https://www.newyorker.com/cartoons/bob-mankoff/charles-addams • ‘The Addams Family: Wednesday Leaves Home' (MGM, 1964): https://www.youtube.com/watch?v=qxZgUp-E0fo&list=PLwwhtOnMyjuxQy81h7uJMCdsR-bS-uVaD&index=5 This episode first premiered in 2023, for members of

Lung Cancer Considered
Seminal Trial Series: Crizotinib in PROFILE 1001 (Aug 6, 2024 · 56:03)

Seminal Trial Series: Crizotinib in PROFILE 1001 by IASLC

The Jake Feinberg Show
The Pat Thrall Interview

The Jake Feinberg Show

Play Episode Listen Later Jul 27, 2024 61:33


Seminal guitarist talks about the great players who influenced him growing up in the Bay Area and the stunning stories of his career on the bandstand.

Hearts of Space Promo Podcast
PGM 1377 'SEQUENCER AIRLINES' : jul 26-aug 2

Hearts of Space Promo Podcast

Play Episode Listen Later Jul 27, 2024


THE ARRIVAL OF PRACTICAL ELECTRONIC SYNTHESIZERS in the late 1960s and early 1970s caused a sensation around the world, but nowhere more than Germany. Postwar German artists were restless, intent on leaving behind all forms of traditional German music, as well as the Rhythm & Blues roots and song structure of popular Anglo-American rock. An innocent feature of early modular synthesizers called a “step sequencer” provided a tool that led to the development of an original style called “Kosmische Musik” or "Cosmic Music" in Germany, and—more playfully—“KrautRock.” The step sequencer made it easy to create hypnotic rhythm loops with up to 32 notes or steps, set a tempo, and mix them over flowing electronic drones. The effect was to “float” the listener through endless terrestrial or cosmic space: it was addictive. Seminal groups and individuals like CAN, KLAUS SCHULZE, TANGERINE DREAM, KRAFTWERK, CLUSTER, ASH RA TEMPEL, HARMONIA and others, created an enduring style that has influenced genres from Minimalism, Ambient and Electronic Dance Music, to New Age and Techno. Today we call it the "Berlin School." On this transmission of Hearts of Space, another timeless flight on electronic rhythms, on a program called SEQUENCER AIRLINES. Music is by ALPHA WAVE MOVEMENT, STATE AZURE, STEVE HAUSCHILDT, STARTERRA, MARTIN STURTZER, SYNTH REPLICANTS, STRAY THEORIES, NILS FRAHM, and EDGAR FROESE. [ view playlist ] [ view Flickr image gallery ] [ play 30 second MP3 promo ]

Keystone Bible Church
Exodus 9:13-10:29 - Learning from Seminal Moments in HIStory - Jim Bargfeldt

Keystone Bible Church

Play Episode Listen Later Jul 7, 2024 65:54


Fertility and Sterility On Air
Fertility and Sterility On Air - Seminal Article: Ernest Ng, and Zhi Chen

Fertility and Sterility On Air

Play Episode Listen Later Jun 30, 2024 17:55


Fertility & Sterility on Air brings you a deep dive into the June issue Seminal Contribution: a randomized controlled trial studying the use of progestins for ovulation suppression in predicted high responders. With Micah Hill, Ernest Ng, and Zhi Chen. Read the article: https://www.fertstert.org/article/S0015-0282(24)00030-X/abstract View Fertility and Sterility at https://www.fertstert.org/

The Best of the Money Show
Bruce Whitfield recalls seminal moment that led to his radio career

The Best of the Money Show

Play Episode Listen Later Jun 27, 2024 4:12


See omnystudio.com/listener for privacy information.

Rounding Down with Chid
Rounding Down Top 100 Records: @somanybadtweets Guest Submission - The Colour and The Shape by the Foo Fighters

Rounding Down with Chid

Play Episode Listen Later Jun 19, 2024 55:29


When compiling our definitive list of the greatest 100 records of all time, it's important to carefully curate and ensure we're including records we like... Other lists shouldn't influence your decision-making. So of course Chid & Sigh started with the 13 specific records we've discussed on the show in the last 4 years that they may or may NOT like.

Records discussed in detail thus far:
Thrice - The Artist in the Ambulance
Wilco - Cruel Country
Glocca Morra - Just Married
Sufjan Stevens - Age of Adz
Rilo Kiley - The Execution of All Things
Red Hot Chili Peppers - Californication
The Postal Service - Give Up
The Lawrence Arms - Oh, Calcutta!
Santana - Supernatural
Pink Floyd - Animals
Third Eye Blind - Third Eye Blind
Fastball - All the Pain Money Can Buy
The Rolling Stones - Let It Bleed

Plus our dear friend @somanybadtweets joins us to submit a record he thought shouldn't be left off our list: 1997's Foo Fighters - The Colour and The Shape. Join us to see how and where we rank 13 previously discussed records, and where Brew's SEMINAL record choice lands in Chid & Sigh's rankings.

Support the Show.

Follow us on Twitter: @CHIDSPIN / @SighFieri / @RoundingDown
Rate and review us on Apple Podcasts (it's not called iTunes anymore, no one calls it that!)
Tell 25 friends about the show! Actually, don't even tell them about it--just borrow their phones and subscribe them to it!
$RoundingDown on the CashApp--we only need $5 million, that's all we ask!

Surfing the Nash Tsunami
EASL Congress 2024: Interviews From A Seminal Meeting For MASLD & MASH

Surfing the Nash Tsunami

Play Episode Listen Later Jun 13, 2024 70:16


00:00:00 - Surf's Up: Season 5 Episode 19
During EASL Congress 2024, US-based Roger Green conducted interviews with Mike Betel, Louise Campbell (twice) and Sven Francque from Milano. These interviews focused on the major MASLD themes and presentations at the event.

00:04:14 - Conversation with Mike Betel begins
On Wednesday, the first afternoon of the meeting, Mike Betel joined Roger from the convention center. The first part of the conversation centered on the Patient Advocate session that Mike chaired with Shira Zelber-Sagi. The session's goal was to discuss barriers to addressing unmet needs in a clinical setting and explore potential solutions. Mike's key takeaway: patients around the world are having challenges getting personal attention and time from their treaters. The rest of this interview touched briefly on other sessions Mike attended.

00:15:23 - First conversation with Louise Campbell
Roger's first interview with Louise took place late on Thursday. She described the "really nice vibe" of the meeting, dampened by the fact that Stephen Harrison is no longer with us. The first session Louise chose to discuss was the previous day's Patient Advocate session. To her, the key point was to learn a key question that every provider should share with every patient once a year. She briefly mentioned the one presentation from the day's General Session she was able to attend: an analysis of the predictive value of VCTE.

00:19:53 - Philosophically important presentations
Louise discussed two sessions that delivered powerful, somewhat novel messages. The first was a symposium sponsored by Novo Nordisk about how SLD treatment could "manage the cardiometabolic side...rather than focusing on liver disease." The second was the "Healthy Livers, Healthy Lives" presentation, which presented "very startling figures" about SLD impact on US healthcare costs and productivity, and how and why India has targeted this disease aggressively.

00:26:29 - Building momentum and energy around AI
Louise and Roger both observed that momentum is building in MASLD and mentioned why they believe this is happening.

00:33:05 - Second conversation with Louise begins
Two days later, Louise and Roger conducted a second conversation, which focused on her enthusiasm for the updated Clinical Practice Guidelines and their practical implications.

00:36:12 - CPG session implications
Louise said this session had "blown her mind" with its forward-thinking style and recommendations. Her favorite point? The guidelines mentioned resmetirom even before it was approved in Europe.

00:44:07 - Thoughts about medications
Roger suggested that the CPG aligned broadly with the drug presentations in the Late Breaker and General sessions. Collectively, those highlighted drugs with an array of modes of action and strengths across the metabolic continuum.

00:46:09 - Thoughts about devices
Roger asked whether Louise believed that, over time, the diagnostic focus would shift from liver stiffness and CAP to in-office PDFF. Louise discusses why this might be difficult.

00:51:48 - Conversation with Sven begins
This conversation, which took place 90 minutes after the final gavel, started with Sven praising the "vibrant hepatology community" evident at the meeting. From there, the discussion covered the Clinical Practice Guidelines, major drug development presentations and other categories. The conversation is fairly short, but packed with information and insight.

01:06:42 - Question of the Week
Roger asks what kinds of support and education primary care will need to step into a leading role in treating SLD.

01:07:13 - Business Report
Plans for the next month, growth of the SurfingMASH Community, a special surprise instead of the Vault.

One F*cking Hour
SLACKER (1990): Richard Linklater's seminal 90s time capsule

One F*cking Hour

Play Episode Listen Later Jun 10, 2024 76:44


Episode 105: It's Marcus' birthday episode and we're going One Fucking Hour on Richard Linklater's seminal 90s time capsule SLACKER. We're also kicking off a new '90s summer series covering 1 film per year of the 90s! One Fucking Hour t-shirts and merchandise are available to order here: https://onefuckinghour.myshopify.com/ Join the OFH Patreon for just $5 a month and gain instant access to all of our bonus episodes and audio commentaries: https://www.patreon.com/onefuckinghour

Cider Voice
Cider Voice 38 – 'You're Stealing My Show Now' – Seminal Ciders 4: Natalia Wszelaki (Cider Explorer)

Cider Voice

Play Episode Listen Later Jun 7, 2024 64:26


In the latest instalment of 'Seminal Ciders' Adam catches up with someone who's been writing about cider even longer than he has – Natalia of @ciderexplorer. Based in Germany, Natalia's been covering the continental cider scene since 2017, shining a light on central European cider cultures as well as countries whose cider doesn't always get a spotlight. We talk about the cider that got her started, the current state of German cider, the magical event that is @CiderWorld, the first cider she ever gave full marks, the bottle that persuaded her perry could be good and more! Featuring ciders from Poland, Germany, Austria, France and Croatia: @cydr_ignacow @gutshof_kraatz @blakstoc @Jerome.forget61 @kertelreiter_cider @buzdovan_craft_cider Albert explores apples @rosscider @adamhwells charts words @cider_review Justin may be some time @justinwellsjustin

YUTORAH: R' Moshe Taragin -- Recent Shiurim
Keriyas Hatorah: Encountering the Divine Word; Recreating the Seminal Moment of Jewish Faith

YUTORAH: R' Moshe Taragin -- Recent Shiurim

Play Episode Listen Later Jun 6, 2024 0:02


Bookey App 30 mins Book Summaries Knowledge Notes and More
Understanding Solitude: An In-Depth Exploration in Anthony Storr's Seminal Work

Bookey App 30 mins Book Summaries Knowledge Notes and More

Play Episode Listen Later May 30, 2024 10:01


Chapter 1: Summary of Solitude
"Solitude: A Return to the Self" by Anthony Storr, published in 1988, explores the concept of solitude and its psychological significance in personal development and creativity. Contrary to the common view that regards excessive solitude as undesirable and typically associated with mental health issues like depression and anxiety, Storr proposes that solitude can also be beneficial and crucial for self-discovery and inner growth.

The main thesis of the book is that periods of solitude are essential for individual differentiation and can be equally as important as interpersonal relationships in contributing to personal development. Storr argues that the capacity to be alone is vital for self-realization and innovation. Through examining the lives and works of various novelists, poets, musicians, and scientists, he illustrates how many creative individuals have utilized solitude to enhance their creativity and deepen their understanding of themselves.

Storr discusses the balance between solitude and interpersonal relationships, suggesting that while relationships are important for validation and feedback, solitude provides a unique space for reflection and the formation of one's own thoughts and values. He addresses societal misconceptions about solitude, aiming to shift the negative perceptions and highlight its positive aspects.

Through a blend of psychological theory, biographical sketches, and insightful observations, "Solitude" encourages readers to reconsider the role of solitude in a balanced life, suggesting that spending time alone is not just acceptable, but essential for some people in fostering their creativity and emotional wellbeing.

Chapter 2: The Theme of Solitude
"Solitude: A Return to the Self" by Anthony Storr, first published in 1988, is an insightful exploration of the concept of solitude and its role in personal development and creativity. Storr, a British psychiatrist, argues against the prevailing notion that constant interpersonal relationships are the optimal condition for mental health. Instead, he posits that solitude can be equally vital for psychological development and well-being. Here are key elements of the book:

Key Plot Points
"Solitude" isn't a narrative book with a plot but rather a psychological and philosophical examination. Key points in the book include:
1. Definition and Understanding of Solitude: Storr delves into what solitude actually means and distinguishes it from loneliness, a negative state associated with lack of companionship.
2. Historical Perspectives: The book discusses how views on solitude have changed over time and what historical figures and thinkers have said about it.
3. Case Studies: Storr provides analysis of significant figures such as Ludwig van Beethoven, Wolfgang Amadeus Mozart, and Sigmund Freud to illustrate how solitude has played a key role in creativity and introspection.
4. Psychological Examinations: There is an in-depth exploration of how solitary periods can foster personal growth, helping individuals to come to terms with their own identity without the influence of others.

Character Development
Since "Solitude" is non-fiction, it doesn't feature character development in the traditional literary sense. However, by investigating the lives of historical figures, Storr paints detailed psychological profiles and shows how these individuals used solitude:
- Beethoven and Mozart: Their creative processes are examined, showing how isolation contributed to their musical innovations.
- Freud: His methodical introspection and...

Ecomonics
Chris Wane — From Side Hustle to Seminal Enterprise

Ecomonics

Play Episode Listen Later May 24, 2024 63:42


Chris Wane has faced some significant challenges on his journey into e-commerce, but he has taken his problems head on. His goal is one I think we can all strive for: freedom. Like many of the minds we've talked to, he's willing to offer his expertise to like-minded people as a mentor. As an expert dropshipper, he talked to us today about how he sailed past his modest goal of 200 pounds, his strategy for dropshipping, and how to handle criticism, as just a few examples. No time to waste. Let's hop to it.

Chris Wane started his business with just £250 in his pocket. Fast forward to today and he's now generated over $1,000,000 and has acquired 5 different income streams. He's also CEO & Founder of The Advanced Dropshipping Academy, where he teaches people how to build and scale their own dropshipping businesses from scratch. Join us as he talks with Joseph about his journey to the top.

Cider Voice
Cider Voice 35 - 'A Very Sexy Solero' - Seminal Ciders 3: Rachel Hendry of J'Adore Le Plonk

Cider Voice

Play Episode Listen Later May 17, 2024 55:06


For the third instalment of our 'this is your life, but in cider' series, Adam finally stops just talking to co-hosts and sits down with drinks writer Rachel Hendry (@ratchellle), who is definitely appearing in a full-length cider voice episode for the first time. With absolutely no technical hitches whatsoever we cover accidentally becoming an award-winning beer writer, the relationships between pub regulars and the staff who work in those pubs, ciders so good they make you scream and the trials and tribulations of fact retention. Have a listen and discover Rachel's five all-time most important picks, from the perry that 'did it' to the 'most fun drink ever'. Albert makes cider @rosscider Adam (@adamhwells) writes on @cider_review (and has written a book!) Justin is London's hottest new cider tasting impresario (@justinwellsjustin)

The LIFERS Podcast
167. LIFERS - Jake Burns

The LIFERS Podcast

Play Episode Listen Later Apr 19, 2024 80:35


It's legend time again around here at the LIFERS podcast. Of all the bands to emerge from the punk explosion of the late ‘70s, few were as adept at fusing melody and fury in quite the same way that the great Stiff Little Fingers were. Seminal in every way, SLF would go on to influence Naked Raygun, Green Day, and pretty much every punk band that ever cared to carry a tune. This week we welcome the voice of that band —a one Mr. Jake Burns— to tell us about growing up in Belfast, seeing Rory Gallagher on TV, soccer, getting a call from Pete Townsend, touring with the Tom Robinson Band, Eric Clapton and Rock Against Racism, soccer, KT's Kids, Sir Kenny Branagh, moving to Chicago, moving to West Virginia, soccer, and how to cut down on touring without quitting.

Bull & Fox
MAC commissioner Jon Steinbrecher joins Afternoon Drive: Women's Final Four could be seminal moment for sport; TBD what a 'Super League' could mean for conference

Bull & Fox

Play Episode Listen Later Apr 4, 2024 14:23


Jon Steinbrecher talks about whether the conference could add another member school, the idea of a Super League being discussed by university presidents, the momentum around the Women's Final Four, Cleveland's proven ability to host big events like this and more.

Your Mileage May Vary
Where To Place Your Semen? Hard Penis Feel, 69 Positioning, Inaccurate Tinder Pics

Your Mileage May Vary

Play Episode Listen Later Mar 29, 2024 63:45


Keith wanted to discuss where to place his semen in an early sexual encounter. It's not as simple a problem as it sounds. It requires forethought for each position. And, since the woman may often simply express the desire that he "put it inside," it's a lonely road the man must travel to find a proper exterior setting. As I sit here in my bed in Dublin, Ireland, contemplating Keith's semen situation, I can't help wondering if this is the same kind of dilemma faced by such writers as James Joyce when trying to frame his life in terms comprehensible to others. The expulsion of semen is a momentary event, but it can be seminal. And, where it's put can set the tone for years to come with a partner. Speaking of literary masterpieces, we discuss a Reddit user's comment which encapsulates well the feeling of a hard penis, described for the benefit of a young woman who has never touched one. And, we explain why the 69 position usually places the man on the bottom, not the top. We get a lot of our questions from Reddit, so for our listeners' enjoyment, here are links to some of the questions we discussed this week: https://ymmv.me/162/69 https://ymmv.me/162/hardness Twitter: @ymmvpod Facebook: ymmvpod Email: ymmvpod@gmail.com

Fertility and Sterility On Air
Fertility and Sterility On Air - Seminal Article: Dr. Jeremy Applebaum

Fertility and Sterility On Air

Play Episode Listen Later Mar 24, 2024 10:49


Listen to this interview featuring Dr. Jeremy Applebaum, who recently published "Impact of coronavirus disease 2019 vaccination on live birth rates after in vitro fertilization." Read the article: https://www.fertstert.org/article/S0015-0282(23)02029-0/abstract View Fertility and Sterility at https://www.fertstert.org/

Bookey App 30 mins Book Summaries Knowledge Notes and More
The revolutionary ideas within Le Corbusier's seminal work

Bookey App 30 mins Book Summaries Knowledge Notes and More

Play Episode Listen Later Mar 22, 2024 2:00


Chapter 1: What Is Towards a New Architecture by Le Corbusier?
"Towards a New Architecture" is a book written by the famous Swiss architect Charles-Édouard Jeanneret, better known as Le Corbusier. First published in 1923, the book outlines Le Corbusier's ideas and principles on architecture, design, and urban planning. It is considered a seminal work in the field of modern architecture and has had a significant influence on architectural theory and practice. In the book, Le Corbusier discusses the need for a new approach to architecture that is functional, efficient, and in tune with the modern industrial age. He advocates for simple geometric forms, open floor plans, and the use of modern materials such as concrete and steel. The book also touches on topics such as urban planning, the relationship between architecture and nature, and the role of the architect in society. Overall, "Towards a New Architecture" is a key text for anyone interested in modern architecture and design.

Chapter 2: Is Towards a New Architecture a Good Book?
Many people consider "Towards a New Architecture" by Le Corbusier to be a seminal work in the field of architecture. The book presents Le Corbusier's ideas and principles on modern architecture, including concepts such as the use of geometric shapes, open floor plans, and the importance of functionality in design. However, some critics argue that the book can be overly idealistic or dogmatic in its approach, and may not offer a complete or balanced view of architecture. Additionally, some readers may find Le Corbusier's writing style dense or difficult to follow. Ultimately, whether "Towards a New Architecture" is a good book will depend on the reader's interest in architecture and their willingness to engage with Le Corbusier's ideas. It is recommended as a foundational text in the field of architecture, but readers should approach it with a critical eye and an awareness of its limitations.

Chapter 3: Summary of Towards a New Architecture
Towards a New Architecture, also known as Vers une Architecture, is a book written by the famous architect Le Corbusier. The book was first published in 1923 and has since become a classic in the field of architecture. In the book, Le Corbusier discusses his ideas and theories on architecture, design, and urban planning. He advocates for a more functional and efficient approach to architecture, arguing that buildings should be designed with the needs of the inhabitants in mind. Le Corbusier also emphasizes the importance of simplicity, efficiency, and honesty in design, stating that "a house is a machine for living in." He criticizes the decorative and ornamental styles of the past, calling for a new, modern approach to architecture that is based on logic and functionality. Throughout the book, Le Corbusier presents his famous five points of architecture: pilotis (supports), flat roofs, open floor plans, horizontal windows, and free facades. These principles have become foundational in modern architecture and have had a lasting impact on the field. Overall, Towards a New Architecture is a groundbreaking work that has influenced generations of architects and designers. Le Corbusier's ideas and principles continue to shape the way we think about and create buildings today, making this book a must-read for anyone interested in the history and theory of architecture.

Chapter 4: About the Author
Le Corbusier, whose real name was Charles-Édouard Jeanneret-Gris, was a Swiss-French architect,...

Mind the Gap: Making Education Work Across the Globe
The Seminal Albums of Educational Research - and how they apply in the classroom with Carl Hendrick, Mind the Gap, Ep.74 (S4,E11)

Mind the Gap: Making Education Work Across the Globe

Play Episode Listen Later Mar 18, 2024 53:29


On this episode of Mind The Gap, Tom Sherrington and Emma Turner are joined by Carl Hendrick, author of two books about the science of teaching and learning and a third about bridging the gap between research and practice. Carl said he approached finding the research papers for his books in the same way that he would have compiled an album of seminal classic rock tracks, but with the criterion of having the greatest use for teachers and school leaders. The discussion turned to how education research is conducted and how "a lot of debates in education are people in different stages talking past one another". Carl also says that we now have a good understanding of the science of learning, but the three agree that especially in the early years and early primary education, even research-proven pedagogical practices like interleaving can't take the place of play-based learning, for example. Listen now to hear more on how teachers can really engage with educational research.

Carl Hendrick works at the Academica University of Applied Sciences in Amsterdam, where his focus is on bridging the gap between research and practice. Carl was a secondary English teacher for 18 years in a range of different contexts and completed his PhD in education at King's College London. He is the co-author of How Learning Happens, How Teaching Happens, and What Does This Look Like in the Classroom? Follow Carl on Twitter @C_Hendrick

Tom Sherrington has worked in schools as a teacher and leader for 30 years and is now a consultant specialising in teacher development and curriculum & assessment planning. He regularly contributes to conferences and CPD sessions locally and nationally and is busy working in schools and colleges across the UK and around the world. Follow Tom on Twitter @teacherhead

Emma Turner joined Discovery Schools Academy Trust as the Research and CPD lead after 20 years in primary teaching. She founded ‘NewEd – Joyful CPD for early-career teachers,' a not-for-profit approach to CPD to encourage positivity amongst the profession and help retain teachers in post. Follow Emma on Twitter @emma_turner75.

This podcast is produced by Haringey Education Partnership. Find out more at https://haringeyeducationpartnership.co.uk/

---

Send in a voice message: https://podcasters.spotify.com/pod/show/mindthegap-edu/message

All For Literacy
Fostering Educational Equity With Dr. Gretchen Givens Generett

All For Literacy

Play Episode Listen Later Mar 5, 2024 61:00


“I can't talk about how I understand the research without first going into…the history and the experiences of the communities that I'm looking to serve,” Dr. Gretchen Givens Generett says in Season 2, Episode 4 of All For Literacy. Host Dr. Liz Brooke has a compassionate discussion with Generett about understanding the lived experiences of students and educators and how to provide support so both can thrive.

Generett currently serves as dean, professor, and the Noble J. Dick Endowed Chair of Community Outreach at the School of Education at Duquesne University. Her teaching and research work aim to enhance educators' skills and habits so they can effectively teach diverse populations of students.

Gain thoughtful and research-backed insight into how educators can create truly equitable systems, understand education as a human system, and foster meaningful learning and relationships while considering diverse histories and lived experiences. Educational leaders will especially benefit from Generett's deep look into her co-authored book, Five Practices for Equity-Focused School Leadership. Strengthen your classrooms with useful tips for navigating challenging moments, especially those often exacerbated by the realities of power, privilege, and different lived experiences.

Episode Breakdown
(01:58) – How Generett's own educational experience influenced her professional career
(11:24) – Leading during challenging times (i.e., the pandemic)
(16:08) – Seminal studies on leadership in education
(25:03) – Five Practices for Equity-Focused School Leadership
(26:21) – Education as a human system
(27:51) – Building teams with good relationships
(31:25) – The importance of stories
(38:17) – Supporting leaders in creating equitable systems
(43:22) – Flipping deficit-oriented stories to create change
(53:42) – How districts are embracing the work that needs to be done

Join our community of listeners and never miss an episode. Subscribe to All For Literacy today!

Lung Cancer Considered
Seminal Trials: Nivolumab Phase I Trial & Immunotherapy for NSCLC

Lung Cancer Considered

Play Episode Listen Later Feb 29, 2024 32:42


Today's episode of Lung Cancer Considered is part of our special series on seminal trials in thoracic oncology, focusing on the phase I study of the PD-1 inhibitor nivolumab. Hosts Dr. Narjust Florez and Dr. Stephen Liu discuss that study with guest Dr. Julie Brahmer from Johns Hopkins University. This podcast features surprise guest interviews with several doctors who were mentored by Dr. Brahmer.

#plugintodevin - Your Mark on the World with Devin Thorpe
Seminal Work ‘Reclaiming Our Democracy' Updated for Today's ‘Trouble' - s11 ep44

#plugintodevin - Your Mark on the World with Devin Thorpe

Play Episode Listen Later Jan 9, 2024 25:28


Remember, you can watch the Superpowers for Good show on e360tv. To watch the episode, download the #e360tv channel app to your streaming device–Roku, AppleTV or AmazonFireTV–or your mobile device. You can even watch it on the web.When you purchase an item after clicking a link here, we may earn a commission. It's an easy way to support our work.Devin: What is your superpower?Sam: My superpower is that I'm in on the joke, and I want to say it this way. We know we make a difference in our family, and we make a difference on our block. And yes, we make a difference in our community. But I say that when we look to our state and our nation and our world, people don't see the difference they make. It's just crystal clear to me. I'm in on the joke that it's not accurate that we don't make a difference. We do. I just see it in a crystal clear way.Sam Daley-Harris has updated his seminal work, Reclaiming Our Democracy, Every Citizen's Guide to Transformational Advocacy. Originally written decades ago, it reflected the success of the anti-poverty lobby RESULTS. Since then, it has helped guide that effort.RESULTS has earned respect for being a critical piece in lobbying on behalf of the global population of people living in or near extreme poverty. The result has been a 66 percent reduction in infant mortality, saving tens of millions of lives.“This 66 percent decline wasn't only RESULTS, but when it comes to the advocacy, RESULTS was a central leader in the US and then Britain and Canada and Australia and elsewhere in this tremendous progress that volunteers had a real hand in,” Sam says.Why update the book? The answer seems simple to anyone whose read a newspaper in the past decade. 
Sam matter-of-factly summarizes the context, saying, “Our democracy is in trouble.” He adds, “We need to really get our act together.”Sam explains the meaning of the book's cover image, saying, “We're saying that there's a missing piece in this puzzle, and the missing piece is us.”To do this work on behalf of those living in poverty, Sam deploys a superpower he describes as being in on the joke that what you do to change the world doesn't matter–because it does.AI Episode Summary1. Devin Thorpe, the host of the Superpowers for Good show, introduces his guest, Sam Daley-Harris, an influential figure in social change, the author of Reclaiming Our Democracy, and the founder of RESULTS, an anti-poverty lobby.2. Sam shares his origin story, beginning with a background in music and two impactful deaths that occurred during his high school and college graduations, leading him to ponder his life's purpose.3. His journey shifted when he attended a presentation on ending world hunger organized by The Hunger Project, which inspired him to get involved after realizing hunger solutions existed, but people were not acting on them.4. Sam's advocacy work started with educating high school students about political will and engaging citizenship, subsequently creating RESULTS based on the disconnect he saw between public awareness and political action.5. RESULTS focuses on child survival issues and has been a significant influence in reducing the global child death rate by 66% over the past 40 years through continuous lobbying and advocacy.6. A personal story of success is recounted, where Sam received written gratitude from Jim Grant, the then-head of UNICEF, for the advocacy work RESULTS volunteers did to increase the Child Survival Fund.7. Sam details the reasons for updating his book "Reclaiming Our Democracy" in response to the current challenges faced by democracy and the public's eagerness for ideas on making a difference.8. 
The puzzle piece on the cover of his book symbolizes that the "missing piece" in democracy is the citizens themselves, awakening to their power to make a difference.

9. Sam's superpower, "being in on the joke," refers to his awareness and conviction that individuals do have power and can make a significant impact on state, national, and global issues despite common skepticism.

10. The book Reclaiming Our Democracy provides guidance on transformational advocacy, highlighting the importance of organizational enrollment and community building, skill development, and enabling individuals to experience breakthroughs in advocacy.

If you want to help reclaim our democracy, please share this post!

How to Develop Knowing We Can Change the World As a Superpower

Sam cleverly describes his superpower, knowing that we can change the world, as being in on the joke. He explains, “You do make a difference. Don't believe all that stuff that you read or you think that you don't.”

Sam shared an impressive example of how volunteers organized using the principles of his book to make a big difference in the U.S. federal budget for international aid.

By 2019, the Global Fund to Fight AIDS, Tuberculosis and Malaria had saved 38 million lives since its inception in 2002. It was up for a three-year replenishment, and President Trump called for a 29 percent cut to the fund. Now, most people would go, “Well, what can you do? You can't fight city hall. President's calling for…” No, no, no, no! 
RESULTS volunteers–there were others, but RESULTS volunteers led–got hundreds of Republicans and Democrats to sign letters to Secretary of State Mike Pompeo and to the leaders of the appropriating subcommittees in the House and Senate, and to co-sponsor resolutions, all in support of full funding for the Global Fund to Fight AIDS, Tuberculosis and Malaria.

At the end of 2019, two Republicans and two Democrats stood on a stage in Lyon, France, at the Global Fund replenishment and announced that the US Congress would increase the Global Fund by 16 percent. By the end of the year, President Trump signed into law legislation that didn't cut the fund by 29 percent but increased it by 16 percent.

That incredible achievement alone will save millions from the triple health threat fought by the Global Fund.

By following Sam's example–and the guidance in his book–you can learn to turn your confidence in knowing we can change the world into a superpower that will allow you to do more good.

Remember, however, that research into success suggests that building on your own superpowers is more important than creating new ones or overcoming weaknesses. You do you!

Guest Profile

Sam Daley-Harris (he/him): Founder and Senior Partner, Civic Courage

About Civic Courage: Civic Courage trains nonprofit organizations to create structures of support that allow their members to make a profound difference as advocates on issues they care about.

Website: civiccourage.org and reclaimingourdemocracy.com

X/Twitter Handle: @civiccourage

Biographical Information: After a career in music, Sam Daley-Harris founded the anti-poverty lobby RESULTS in 1980, co-founded the Microcredit Summit Campaign with Nobel Peace Prize Laureate Muhammad Yunus and FINCA founder John Hatch, and founded Civic Courage in 2012.  
The completely revised and updated edition of Sam's book, Reclaiming Our Democracy: Every Citizen's Guide to Transformational Advocacy, will be released on January 9, 2024. Publishers Weekly BookLife has made it an “Editor's Pick” and called it “[A] rousing guide to…enacting change in cynical times.” Kirkus Reviews has said, “Overall, the author's analysis of effective action is as persuasive as it is accessible, and his call to democratic participation is inspiring.”

X/Twitter Handle: http://twitter.com/samdaleyharris
Personal Facebook Profile: facebook.com/sam.daleyharris/
LinkedIn: linkedin.com/in/sam-daley-harris

Superpowers for Good is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. Get full access to Superpowers for Good at www.superpowers4good.com/subscribe

The Death Of Journalism
Episode One Hundred Sixteen: Seminole Moment, Seminal Moment

The Death Of Journalism

Play Episode Listen Later Jan 4, 2024 112:06


Zig takes a deep dive into the state of college football following this year's slate of bowl games and concludes that NIL has been even worse for the sport than he anticipated. Florida State's epic loss of players prior to the Orange Bowl makes his point. Zig has some fun with the NFL as well, and the absurdity of the Detroit Lions' loss to the Cowboys takes center stage. We've also got Claudine Gay's resignation and more fun with Tucker Carlson. Happy New Year!

This show is part of the Spreaker Prime Network. If you are interested in advertising on this podcast, contact us at https://www.spreaker.com/show/5691723/advertisement

Datey Ladies with Barbara Ann & Vera Duffy
DL063 – The Five Love Languages – part 2 “Seminal Works”

Datey Ladies with Barbara Ann & Vera Duffy

Play Episode Listen Later Dec 18, 2023 26:15


We continue on, with guest Jaime Sena!

The Dershow
Is advocating genocide of Jews protected speech? A 1st Amendment seminar for university presidents

The Dershow

Play Episode Listen Later Dec 6, 2023 31:53


Alan Dershowitz's podcast. Dershow media APPLE PODCAST: https://podcasts.apple.com/us/podcast/the-dershow/id1531775772 SPOTIFY: https://open.spotify.com/show/7Cx3Okc9mMNWtQyKJZoqVO?si=1164392dd4144a99 _________________________________________________________ FOLLOW ME: TWITTER: https://twitter.com/AlanDersh RUMBLE: https://rumble.com/user/Sav_says LOCALS: https://dershow.locals.com/ YOUTUBE: https://www.youtube.com/c/TheDershowWithAlanDershowitz _________________________________________________________ SUPPORT MY WORK: SUBSTACK: https://dersh.substack.com/   -- 

History of South Africa podcast
Episode 145 - The seminal Battle on the Ncome known as Blood River

History of South Africa podcast

Play Episode Listen Later Nov 18, 2023 26:00


This is episode 145 - we're joining the AmaZulu and the Voortrekkers at the apocalyptic clash on the River Ncome, which was soon renamed Blood River. This battle has seared its way into South African consciousness — it is so symbolic that its reference frames modern politics. Just when someone comes along and pooh-poohs Blood River's importance, events conspire against them.

And so, to the matter at hand. We join the two forces preparing for battle on the evening of 15th December 1838, the amaButho arrayed in their units below the Mkhonjane Mountain east of the Ncome, and the 464 Voortrekker men waiting inside their 64 wagons. Joining them were Alexander Biggar, the Port Natal trader, and 60 black levies; Biggar wanted revenge for the death of his son Robert, killed by the AmaZulu at the Battle of Thukela. Also at hand were Robert Joyce and Edward Parker, aiding Voortrekker commander Andries Pretorius as intelligence officers. Both were fluent in Zulu and had already passed on vital information to Pretorius about Prince Mpande, who had to flee into exile. Dingane had tried to have his half-brother assassinated - the paranoid Zulu king thought Mpande was planning to oust him as he had done to his half-brother, Shaka.

The scene was set, folks, for this seminal battle at a picturesque place. The laager had been drawn up in an oval shape on the western bank of the Ncome river. To its south, about fifty meters away, was a deep donga that had been scoured by rain, and this ran into the Ncome with banks that were over two meters high. While AmaZulu warriors could hide in this donga, it really worked in the trekkers' favour because it broke up the ground - they could not charge the wagons but had to clamber over the trenchlike ledge and were then easy pickings for the Boer sharpshooters. The eastern side of the laager faced the Ncome River, about 80 meters away, and this was regarded as even more difficult to assault. 
The riverbank was muddy and covered in reeds, making the approach almost impossible to achieve with any speed. Almost half a kilometer upstream, the river broadened into a marsh dotted with deep pools, and crossing at that point would be almost impossible. Downstream from the laager was a very deep hippo pool, or seekoeigat as it was known, so deep that the Boers couldn't feel its bottom with their long whipstocks. No AmaZulu warrior would be crossing there either. More than half a kilometer downstream was a well-used drift, and south east of the Ncome was a broad open plain dotted with small marshes and pools; further south east lies the Shogane ridge, more than a kilometer away.

It was summer, and the rains had come. The river was flooding, which was to further complicate the AmaZulu assault. On the other side of the river, near Mthonjane mountain, Zulu commander Ndlela kaSompisi and his second-in-command, Nzobo, were finalising their plans on the night of 15th December 1838. It was well before dawn on the 16th December that Ndlela ordered his warriors to rise and prepare.

Midwife Monday
Seminal Fluid Matters to YOU

Midwife Monday

Play Episode Listen Later Oct 30, 2023 24:56


If you plan on getting a good "weinering"...perhaps think about how beneficial it could be. We don't mean the sensations, although that matters. We want to dive into the rise and downfall of seminal fluid (see what we did there). We have been fed a diet of all the scary ways unprotected sex can get us, but have you ever considered the wins? Let's go! Winner Weiners and their fluids.

The Ricochet Audio Network Superfeed
Matters of Policy & Politics: Matters Of Policy & Politics: Silicon Triangle: Semiconductors and Seminal Moments Across the Pacific | Bill Whalen, James Ellis, and Glenn Tiffert | Hoover Institution (#394)

The Ricochet Audio Network Superfeed

Play Episode Listen Later Aug 31, 2023


Can America re-create a vibrant domestic semiconductor industry and, if so, what does that portend for an already strategically-vulnerable Taiwan? Glenn Tiffert, a Hoover Institution distinguished research fellow and co-chair of Hoover's Project on China's Global Sharp Power, and Retired Admiral James Ellis, Hoover's Annenberg Distinguished Visiting Fellow and a carrier battle group commander during […]

Area 45
Matters Of Policy & Politics: Silicon Triangle: Semiconductors and Seminal Moments Across the Pacific | Bill Whalen, James Ellis, and Glenn Tiffert | Hoover Institution

Area 45

Play Episode Listen Later Aug 31, 2023 51:34


Can America re-create a vibrant domestic semiconductor industry and, if so, what does that portend for an already strategically-vulnerable Taiwan? Glenn Tiffert, a Hoover Institution distinguished research fellow and co-chair of Hoover's Project on China's Global Sharp Power, and Retired Admiral James Ellis, Hoover's Annenberg Distinguished Visiting Fellow and a carrier battle group commander during 1996's “Third Taiwan Strait Crisis”, discuss Silicon Triangle: The United States, Taiwan, China and Global Semiconductor Security – a joint Hoover Institution report examining the Pacific Rim's geopolitics.

The Tim Ferriss Show
#657: Professor John Vervaeke — On Cultivating Wisdom, Finding Flow States, The Power and Perils of Intuition, The Four Ways of Knowing, Learning to Fall in Love with Reality, and More

The Tim Ferriss Show

Play Episode Listen Later Feb 23, 2023 155:49


Brought to you by Wealthfront high-yield savings account, Basecamp refreshingly simple project management, and Eight Sleep's Pod Cover sleeping solution for dynamic cooling and heating.

John Vervaeke (@vervaeke_john) is a professor of psychology at the University of Toronto. He currently teaches courses on thinking and reasoning with an emphasis on cognitive development, intelligence, rationality, mindfulness, and the psychology of wisdom.

Vervaeke is the director of UToronto's Consciousness and Wisdom Studies Laboratory and its Cognitive Science program, where he teaches Introduction to Cognitive Science and The Cognitive Science of Consciousness, emphasizing the 4E model, which contends that cognition and consciousness are embodied, embedded, enacted, and extended beyond the brain.

Vervaeke has taught courses on Buddhism and Cognitive Science in the Buddhism, Psychology, and Mental Health program for 15 years. He is the author and presenter of the YouTube series “Awakening from the Meaning Crisis” and his brand-new series, “After Socrates.”

Please enjoy!

This episode is brought to you by Basecamp! Basecamp combines everything you need to manage your team and projects into one simple platform. Optimize your business with Basecamp and cut your inboxes and calendars in half. You can save time and money. Right now, Basecamp is offering a free 30-day trial. Plus, listeners of The Tim Ferriss Show get an exclusive discount: get 10% off your first year's annual subscription when you sign up at Basecamp.com/Tim.

*This episode is also brought to you by Wealthfront! Wealthfront is an app that helps you save and invest your money. Right now, you can earn 4.05% APY—that's the Annual Percentage Yield—with the Wealthfront Cash Account. That's more than twelve times more interest than if you left your money in a savings account at the average bank, according to FDIC.gov. It takes just a few minutes to sign up, and then you'll immediately start earning 3.8% interest on your savings. 
And when you open an account today, you'll get an extra fifty-dollar bonus with a deposit of five hundred dollars or more. Visit Wealthfront.com/Tim to get started.

*This episode is also brought to you by Eight Sleep! Eight Sleep's Pod Cover is the easiest and fastest way to sleep at the perfect temperature. It pairs dynamic cooling and heating with biometric tracking to offer the most advanced (and user-friendly) solution on the market. Simply add the Pod Cover to your current mattress and start sleeping as cool as 55°F or as hot as 110°F. It also splits your bed in half, so your partner can choose a totally different temperature.

Go to EightSleep.com/Tim and save $250 on the Eight Sleep Pod Cover. Eight Sleep currently ships within the USA, Canada, the UK, select countries in the EU, and Australia.

*

[05:31] The four ways of knowing (4P).
[10:15] Affordances.
[13:04] Semantic memory.
[13:37] Flow.
[27:03] Did John find Tai Chi, or did Tai Chi find him?
[29:46] Leaving Christianity.
[34:42] Wisdom vs. knowledge.
[36:54] Self-deception.
[41:53] When is logic the illogical choice for solving a problem?
[46:05] The powers and perils of intuition.
[55:05] Spotting patterns that need breaking.
[59:18] Meditation vs. contemplation.
[1:05:30] Misunderstanding love.
[1:06:36] Circling.
[1:12:28] “God is related to the world the way the mind is related to the body.”
[1:14:34] A non-theist in the no-thingness.
[1:24:03] Responsive poiesis and Sufism.
[1:27:31] Neoplatonism.
[1:29:16] Seminal moments.
[1:31:36] Pierre Hadot.
[1:32:43] Two books.
[1:34:38] Potent poetry.
[1:37:40] The four Es.
[1:42:38] Two bonus Es.
[1:45:24] Heretical beliefs.
[1:54:12] Panpsychism.
[2:00:56] Most unusual modes of cognition.
[2:02:37] Jordan Peterson.
[2:10:27] Opponent processing.
[2:13:53] How to support friends endeavoring to lead meaningful lives.
[2:17:50] After Socrates.
[2:21:44] Western words.
[2:25:11] John's changing perspective of experienced reality.
[2:28:01] Something old, something new.

*

For show notes and past guests on The Tim Ferriss Show, please visit tim.blog/podcast.
For deals from sponsors of The Tim Ferriss Show, please visit tim.blog/podcast-sponsors.
Sign up for Tim's email newsletter (5-Bullet Friday) at tim.blog/friday.
For transcripts of episodes, go to tim.blog/transcripts.
Discover Tim's books: tim.blog/books.

Follow Tim:
Twitter: twitter.com/tferriss
Instagram: instagram.com/timferriss
YouTube: youtube.com/timferriss
Facebook: facebook.com/timferriss
LinkedIn: linkedin.com/in/timferriss

Past guests on The Tim Ferriss Show include Jerry Seinfeld, Hugh Jackman, Dr. Jane Goodall, LeBron James, Kevin Hart, Doris Kearns Goodwin, Jamie Foxx, Matthew McConaughey, Esther Perel, Elizabeth Gilbert, Terry Crews, Sia, Yuval Noah Harari, Malcolm Gladwell, Madeleine Albright, Cheryl Strayed, Jim Collins, Mary Karr, Maria Popova, Sam Harris, Michael Phelps, Bob Iger, Edward Norton, Arnold Schwarzenegger, Neil Strauss, Ken Burns, Maria Sharapova, Marc Andreessen, Neil Gaiman, Neil de Grasse Tyson, Jocko Willink, Daniel Ek, Kelly Slater, Dr. Peter Attia, Seth Godin, Howard Marks, Dr. Brené Brown, Eric Schmidt, Michael Lewis, Joe Gebbia, Michael Pollan, Dr. Jordan Peterson, Vince Vaughn, Brian Koppelman, Ramit Sethi, Dax Shepard, Tony Robbins, Jim Dethmer, Dan Harris, Ray Dalio, Naval Ravikant, Vitalik Buterin, Elizabeth Lesser, Amanda Palmer, Katie Haun, Sir Richard Branson, Chuck Palahniuk, Arianna Huffington, Reid Hoffman, Bill Burr, Whitney Cummings, Rick Rubin, Dr. Vivek Murthy, Darren Aronofsky, Margaret Atwood, Mark Zuckerberg, Peter Thiel, Dr. Gabor Maté, Anne Lamott, Sarah Silverman, Dr. Andrew Huberman, and many more.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
