Electrically excitable cell that communicates via synapses
In this enlightening episode, I’m joined by Dr. Nathan Bryan to discuss the multifaceted role of nitric oxide in human health. Dr. Bryan, a pioneering scientist in nitric oxide research, offers an in-depth exploration of its function as a crucial signalling molecule, its impact on cardiovascular health, and its potential therapeutic applications for various diseases. Through this conversation, the importance of dietary choices, the dangers of common health misconceptions, and the power of simple lifestyle changes come to light.
What You'll Learn:
Origins and Discovery: Learn about the groundbreaking discovery of nitric oxide as a signalling molecule in the cardiovascular system, a discovery that earned its pioneers a Nobel Prize.
Physiological Roles: Understand how nitric oxide functions as a neurotransmitter and its significance in long-term memory and nerve communication.
Connection to Diet: Discover the vital relationship between dietary nitrates, nitric oxide production, and the importance of a healthy microbiome.
Impact of Lifestyle Choices: Explore how lifestyle choices, like diet and oral hygiene practices, can profoundly affect nitric oxide production and overall health.
Current and Future Research: Gain insight into Dr. Bryan’s ongoing research into nitric oxide-based therapies for diseases like Alzheimer's and cardiovascular issues.
Key Takeaways:
Diet and Health: Consuming a diet rich in green leafy vegetables can optimise nitric oxide production, supporting cardiovascular and immune health.
Oral Hygiene: Antiseptic mouthwashes and fluoride toothpaste can disrupt the oral microbiome, negatively impacting nitric oxide production and raising blood pressure.
The Microbiome's Role: A diverse and healthy microbiome supports nitric oxide production, critical for numerous bodily functions.
Therapeutic Potential: Nitric oxide has the potential to revolutionise treatments for diseases like heart disease and Alzheimer’s.
Holistic Health Approaches: Combining dietary changes, careful oral care, and appropriate supplements with conventional medical treatments can significantly enhance health outcomes.
Resources
Discover more through Dr. Nathan Bryan’s book, "The Secret of Nitric Oxide: Bringing the Science to Life," available at major retailers.
Explore Dr. Bryan's educational videos on YouTube to deepen your understanding of nitric oxide and its health impacts.
Connect with Dr. Bryan on LinkedIn for ongoing insights and information.
Connect with Dr. Bryan on Instagram.
Website: https://n1o1.com/about-dr-nathan-bryan/
Through this conversation, Dr. Bryan reveals that understanding and optimising nitric oxide production can play a significant role in improving health outcomes. By making informed lifestyle choices and staying updated with scientific advances, we can harness this powerful molecule for better health and longevity. If you're interested in health innovation and the body's complex biochemistry, this discussion is a must-listen.
Support the Podcast
If you found this conversation enriching and beneficial, subscribe, rate, and review us on your favourite podcast platform. Your support spreads these essential dialogues, offering more individuals the opportunity to embark on their healing journeys.
01:53 Understanding Nitric Oxide: Basics and Discoveries
02:49 Nitric Oxide in the Cardiovascular System
08:28 Nitric Oxide's Role in the Immune System
10:33 Nitric Oxide in Neurons and Brain Health
16:37 Diet, Microbiome, and Nitric Oxide Production
31:49 Impact of Medications on Nitric Oxide Production
42:29 Optimising Nitric Oxide Through Diet and Supplements
46:15 The Impact of Dental Hygiene on Nitric Oxide Production
46:37 The Controversy of Fluoride in Water and Toothpaste
48:48 The Dangers of Antiseptic Mouthwash
53:06 The Benefits of Tongue Scraping
54:05 Optimising Nitric Oxide Through Lifestyle Practices
56:26 The Science of Photobiomodulation and Infrared Saunas
59:00 The Debate Between Traditional and Infrared Saunas
01:02:40 Morning Routine for Optimal Health
01:04:45 Scepticism Around Biohacking Supplements
01:12:02 Exciting Future of Nitric Oxide Research
See omnystudio.com/listener for privacy information.
Today's episode explores the role of inhibitory neurons & the Sonic Hedgehog (SHH) gene in shaping the Autistic phenotype, focusing on the excitation-inhibition imbalance that drives sensory hypersensitivity and cognitive challenges. Through a neuroscience lens, the episode connects these mechanisms to heightened gamma activity.
Ben Ari Episode https://youtu.be/jo-ffwF9u0Y
Parvalbumin Interneurons episode https://youtu.be/PBHVssvoQkM?si=t8WYGlcHcv7WiE-T
Visual Thinking Part 1 https://youtu.be/XqQ8jCvWzYc?si=lffUEjGHjWj4mGOM
Neurulation Part 1 https://youtu.be/gZdg9bX3Nuw?si=xvwtlz-p1hPHI8FA
Daylight Computer Company, use "autism" for $50 off at https://buy.daylightcomputer.com/autism
Chroma Light Devices, use "autism" for 10% discount at https://getchroma.co/?ref=autism
Use "autism" for 10% discount at Fig Tree Christian Golf Apparel & Accessories https://figtreegolf.com/?ref=autism
Cognity AI for Autistic Social Skills, use "autism" for 10% discount at https://thecognity.com
00:00 - Autistic phenotype, excitation-inhibition imbalance, sensory hypersensitivity, cognitive deficits
03:40 - Inhibitory neurons, GABA receptors, GABA-A, GABA-B, GABA-C, tonic inhibition
07:24 - Tonic firing, burst firing, phasic firing, neural oscillations, sensory processing
08:31 - Sonic Hedgehog gene, neural development, GABAergic identity, thalamic reticular nucleus (TRN)
14:48 - Parvalbumin interneurons, fast-spiking, gamma oscillations, sensory gating, dorsolateral prefrontal cortex
18:22 - Parvalbumin dysfunction, sensory hypersensitivity, visual cortex (V1), EI imbalance, brain-derived neurotrophic factor (BDNF)
22:02 - Somatostatin interneurons, feedback inhibition, dendritic modulation, sensory adaptation, hippocampus
25:43 - Vasoactive intestinal peptide (VIP) interneurons, disinhibition, pyramidal activity, attention, social processing
29:30 - Calbindin interneurons, calretinin interneurons, dendritic inhibition, sensory processing, anterior insula
33:15 - Purkinje cells, cerebellum, motor control, cognitive timing, cell loss in autism
36:00 - Evolutionary perspective, parvalbumin density, neural circuit stabilization, sensory-cognitive processing
39:25 - Gamma activity, visual processing, retina, lateral geniculate nucleus, attention to detail, autistic self
39:52 Daylight Computer Company, use "autism" for $50 discount
42:13 Chroma Light Devices, use "autism" for 10% discount
45:17 Reviews & Contact Info
X: https://x.com/rps47586
YT: https://www.youtube.com/channel/UCGxEzLKXkjppo3nqmpXpzuA
email: info.fromthespectrum@gmail.com
Emily is an Assistant Professor at MDI Biological Laboratory where she studies neurodegenerative disease-associated genes using super-resolution imaging of living, adult worms. Emily earned her Ph.D. at the University of Maine while embedded in the Jackson Laboratory, and during her post-doc at MDI Bio Lab she was recognized by the National Institutes of Health as an “Outstanding Scholar in Neuroscience”. This conversation was recorded in September 2025.
~~~~~
The Maine Science Podcast is a production of the Maine Discovery Museum. It is recorded at Discovery Studios, at the Maine Discovery Museum, in Bangor, ME. The Maine Science Podcast is hosted and executive produced by Kate Dickerson; edited and produced by Scott Loiselle. The Discover Maine theme was composed and performed by Nick Parker.
To support our work: https://www.mainediscoverymuseum.org/donate.
Find us online:
Maine Discovery Museum
Maine Discovery Museum on social media: Facebook Instagram LinkedIn Bluesky
Maine Science Festival on social media: Facebook Instagram LinkedIn
Maine Science Podcast on social media: Facebook Instagram
© 2025 Maine Discovery Museum
Nociceptors tamp down the production of reactive oxygen species in response to heat, chemical irritants or toxins.
Hello and welcome to my book on: Habit Stacking, Synaptic Pruning, and Diabetes Naturally. I hope you enjoy and find this book useful.
Our life is what our thoughts make it. Do every act of your life as if it were your last. In a word, your life is short. You must make the most of the present with the aid of reason and justice. Since it is possible that you may be quitting life at this very moment, govern every act and thought accordingly.
Marcus Aurelius, Emperor of Rome, 121 AD - 180 AD
Is your glass half-empty or half-full? How you answer this age-old question about positive thinking may reflect your outlook on life, your attitude toward yourself, and whether you are optimistic or pessimistic—and it may even affect your health. [1]
In 2007, researchers at Oxford University compared the number of neurons in newborn babies with that of adults and found the newborn babies had a significantly higher number of neurons than adults.[2] As we mature, synapses or connections between neurons in the brain are pruned if they are not used, and the energy is redirected to maintaining synapses that are used.
You likely have strong habits and connections that you take for granted each day. There are many thousands of daily habits our brain is good at repeating over and over. When developing new habits, you can leverage the interconnectedness of behaviour to your advantage. One of the most effective methods for establishing a new habit is to identify a current habit and then stack your new behaviour on top of it. This technique is known as habit stacking. I have consciously developed habits that help me avoid consuming foods containing highly refined carbohydrates, sugar, and sugar alcohols. You can learn to turn negative thinking into positive thinking. The process is simple, but it does take time and practice because you are creating a new habit, after all. It is important to learn the value of positive thinking and to focus on having a positive outlook.
Habit Stacking is the foundation of Lifestyle Changes to Prevent and Reverse Type 2 Diabetes Naturally.
Mark Ashford
information@markaashford.com
[1] Mayo Clinic, "Stress Management," (2025).
[2] Nielsen RD, Abitz M, Jones EG, Laursen H, Graem N, Pakkenberg B. Excess of neurons in the human newborn mediodorsal thalamus compared with that of the adult. Cereb Cortex. 2007 Nov;17(11):2573-8. doi: 10.1093/cercor/bhl163. Epub 2007 Jan 11. PMID: 17218480.
Join The Deep Dive (Life-changing teachings for spiritual mastery, guided sound journeys, and access to live community gatherings to share your most authentic self) https://iamemilioortiz.com/the-deep-d... In this powerful interview, Emilio Ortiz and Anne Tucker explore the activations and new energy transforming our planet and our lives. Anne channels angelic insights on the earth changes underway, describing how the Earth must crack open to allow light to enter and why humanity's role in witnessing this release is essential. Together they discuss the rise of self-awareness, the experience of loneliness in both humanity and the Earth, and how our thoughts and presence contribute to collective healing. This dialogue also reveals how we are moving beyond the karmic construct into a new earth shaped by unity, self-acceptance, and manifestation through presence. Anne describes the coming flash of light as a turning point that will awaken a higher octave of self, allowing us to co-create reality in alignment with our soul's truth. By embracing transformation, practicing self-love, and allowing what is, we become more dynamically open than ever before. Anne Tucker is a healer & channel, supporting awakening souls on their journey through community, spiritual connection, and healing, so they can take advantage of this time of rapid growth and ascension. As a trance channel, Anne enters a meditative state where she connects to the Angelic realm and allows Angels to speak directly through her. She is a channel for Angelic guidance and healing frequencies that will prepare us and help to usher us through the planetary shift in consciousness. ___________________ Read Anne Tucker's book: What's Coming Next: A Channeled Guide to Navigating the Greatest Shift of our Time
Are you ready to explore the truth of human origins, evolution & consciousness from the perspective of universal beings so you can reclaim your true identity? Episode 146 of The Authentic Human Podcast is part 71 of The Story of Creation. Michelle Vickers invites you to explore the universal truth of human origins that has been strategically hidden from our collective memory. In this candid conversation with The Creator and Universal Beings, they reveal how universal communication (energy language) - the most ancient language in existence - is the key to decoding the truth of human history and restoring the evolution of consciousness. You will discover: • How Michelle communicates with the Universal Beings who shaped existence to retrieve the true story of creation • Why modern systems are designed to trap us in classifications (labels, diagnoses, identities) • How the neurons in your brain communicate with your energy and vibrations, revealing the higher truth of how they were designed to work for you • The difference between living by society's rules and choosing universal truth • Why surrendering old belief systems (that comfy, stained sweater) is the doorway to freedom, joy, and peace This episode is for the truth seekers, for those who feel a deep pull toward spiritual awakening, for those who feel there must be more than what they've been told. You don't need to be an expert — you only need a curious heart and courage to question. Michelle shares universal level truth in simple, accessible terms so every human can reclaim the wisdom of their soul. She combines the best of both science and spirit — proving that the real history of humanity is encoded inside you and is accessible through universal communication, once we drop the lies. You're invited to step beyond the classification system into the fulfillment system. Stay curious. You are not alone. 0:00 Why The Story of Creation Matters 5:30 Neurons, Soul Energy & Universal Communication 8:49 Why Classification Limits You 12:12 Neuronal Vibrations & Universal Communication 15:43 Letting Go of the Comforting Beliefs 17:50 Overcoming Panic, Resistance & the Comfort Zone 20:26 Reclaiming Your Power Over Labels 24:06 Energy Moves, You Are Movement 28:11 Break Up with the Old To-Do List 📌 Subscribe to The Authentic Human Podcast to unlock more episodes of universal truth and cosmic discovery. Michelle Vickers is an explorer, researcher, and intuitive guide bridging the deepest realms of consciousness and universal communication. She reveals how creation, existence, and human origins are encoded in you and they're all accessible through the universal language itself. Michelle translates that knowledge into simple terms so every human can access the truth of their soul, their purpose, and their connection to the universe. ________________________________________ #universaltruth #selfawareness #consciousness #humanorigins #universalperspective #spiritualawakening #ancientwisdom #hiddenknowledge #alchemy #spiritualdevelopment #higherconsciousness #spiritualpodcast #authentichuman #massawakening #quantumintelligence
A protective screen of spurious transcriptional activity enables each olfactory neuron to express exactly one out of hundreds of olfactory receptors.
A researcher's theory hangs muscle degeneration on a broken neural circuit.
“I'm someone who's building life one day at a time—and loving it.” In this episode, coach, facilitator, TEDx speaker, and podcast host Utkarsh Narang checks in from Melbourne—“in the future,” as he jokes—and lays out a life stitched together by courage, curiosity, and conscious impact. This episode is sponsored by my new book BRANDJITSU, helping you find, shape, and share your story with the world.
In This Episode
Raised in a family of doctors, he followed a straight path into physiotherapy, running three clinics in Delhi. But a restless question kept tugging: what's my larger impact? That question pulled him into a start-up tied to Columbia Business School, where “I'll figure it out” turned a three-month experiment into eight years of filmmaking, production, and later sales and operations. Then came another leap. In the middle of the 2020 pandemic, Utkarsh quit a stable corporate role to become a coach—despite understandable pushback from family. Those pivots, he says, weren't acts of recklessness; they were experiments run in partnership with fear. Understand the fear, test it with small actions, then move. Utkarsh shares the values that anchor his days—freedom, growth, love, discipline—and why he recently replaced excellence with presence after a powerful retreat. He talks parenting two boys, starting the Ignite Neurons podcast to spark deeper conversations, and committing to an 800-plus-day meditation streak as a practice of simply “being.” Travel, too, shapes his worldview: Melbourne's four seasons in a day, the energy of New York, and a romantic fascination with Paris. There's lighter fare—kangaroos, garage spiders named Spidey, and table tennis battles in the backyard—but the thread is consistent: define success on your terms, keep learning, and cultivate the courage to choose your own road—even when you can't see it to the end. His long-game ambition? Support a billion lives to create more clarity, courage, and conscious impact—one day at a time.
Quotable Quotes
“I'm someone who's building life one day at a time—and loving it.” — Utkarsh Narang
“If you can see your path till the end, you're walking someone else's path.” — Utkarsh Narang
“Courage to express love, to leave what's not serving you, to try—courage is the point.” — Utkarsh Narang
“Sometimes you just have to let shit go to make space for what's next.” — Michael Dargie
“Fear can be a guide if you run small experiments instead of freezing.” — Utkarsh Narang
“What's the most rebellious thing you've done?” — Michael Dargie
Episode Highlights
Melbourne Check-In | “I'm in the future” and yes, the lotto joke.
Many Hats, One Line | Coach, facilitator, speaker, author—“building life one day at a time.”
Doctor's Son → Physio | Three clinics in Delhi and a fitness centre.
Bigger Impact Itch | From local care to wondering about global reach.
The Three-Month Bet | A start-up gig linked to Columbia Business School turns into eight years.
Filmmaking to Ops & Sales | Stacking skills across creative and commercial lanes.
Pandemic Pivot | Quitting a stable job in 2020 to coach—family shock included.
Values Upgrade | Swapping excellence for presence after deep inner work.
Coaching Focus | Clients across Australia, the US, and India—especially in tech.
Define “Rebel” | Freedom to choose your days; a path you can't fully see.
Face Fear Thoughtfully | Identify the real trigger, then test with small actions.
Meditation Streak | 800+ consecutive days; “not about success—just being.”
Travel & Empathy | Melbourne's weather, NYC's energy, and a Paris daydream.
Backyard Life | Kangaroos in the Grampians, a respectful détente with “Spidey.”
Sport & Play | Table tennis with his kids; tennis fandom lives on.
Rebels-in-Waiting | Know your why, keep learning, define success for yourself.
The Big Goal | Touch a billion lives with clarity, courage, and conscious impact.
LINKS FROM EPISODE
Utkarsh Narang Website (http://www.utkarshnarang.com/)
Utkarsh on LinkedIn (https://www.linkedin.com/in/utkarsh-narang/)
Utkarsh Narang TedTalk (https://youtu.be/5aj0btxnBLo)
Utkarsh on Instagram (https://www.instagram.com/ignitedneurons?igsh=MTB4aGJyMTdsdmMwdw==)
Get Your Copy of Michael's Book: "BrandJitsu™: Move Your Brand From 'Meh' To Memorable"
Indigo | Barnes & Noble | MichaelDargie.com
Functional synapses between brain cells and cancer cells are key to metastatic growth, according to new findings from two independent teams.
Health technologist and science communicator Caitlin Shure (NextSense) joins me to unpack brainwave-measuring earbuds that can both read and change your mental state—starting with sleep. We dive into why “Gen-1” wearables overwhelm you with charts but don't help you in the moment, how earbud-EEG works (yes, with just three electrodes), the messy truth on EMF fears, and why women—especially moms—need “do-it-for-me” tools, not just dashboards. This episode is for women—especially busy moms and high-achievers—who want better sleep, calmer days, and science-backed wearables that actually help (not just track). WE TALK ABOUT: 05:00 - Caitlin's origin story: From “not a science person” to neuroscience and science communication 06:20 - Making complex brain science click for everyday life (and why wearables help) 09:55 - Why “one new idea at a time” beats info overload in health education 10:20 - Early neurotech (Muse), then Apple Watch, Oura, CGM—what worked and what didn't 13:20 - Why “wearable must be wearable”: Rings, watches, now earbuds near your brain 14:30 - Starting with sleep: The lowest-hanging lever for memory, mood, and cognition 19:40 - Neurons, synchrony, and how three earbud electrodes capture EEG 24:50 - Women's health use-cases: Pregnancy, postpartum, perimenopause, menopause 29:05 - “Numbers, not judgments”: Why NextSense won't label your sleep ‘good' or ‘bad' 34:30 - What's next: Adaptive Focus and Relaxation sessions (neurofeedback-like audio) 36:30 - Binaural beats: Mixed evidence, likely responders vs non-responders SPONSORS: CaloCurb (get 10% OFF) is my go-to, 100% plant-based alternative to Ozempic—helping you feel full sooner, snack less, and finally trust your body again without needles, drugs, or yo-yo diets. Join me in Costa Rica for Optimize Her, a 5-night luxury women's retreat in Costa Rica with yoga, healing rituals, and biohacking workshops—only 12 spots available. RESOURCES: Trying to conceive? Join my Baby Steps Course to optimize your fertility with biohacking. Free gift: Download my hormone-balancing, fertility-boosting chocolate recipe. Explore my luxury retreats and wellness events for women. Shop my faves: Check out my Amazon storefront for wellness essentials. NextSense website and Instagram Caitlin Shure's website LET'S CONNECT: Instagram, TikTok, Facebook Shop my favorite health products Listen on Spotify, Apple Podcasts, YouTube Music
Host Doug Stephan and Dr. Ken Kronhaus of Lake Cardiology (352-735-1400) cover a number of topics affecting our health. First up, Doug and Dr. Ken discuss the biggest news stories in the medical world, starting with targeted itch relief for the various types of bee/wasp/hornet stings, raising the question of what is in bee/hornet/wasp venom that causes pain and itching, as well as the more severe allergic reactions that can lead to anaphylaxis. Next up, new research linking prenatal exposure to acetaminophen (like in Tylenol) with Autism and ADHD, and why some medical professionals are pushing back. Then, Doug and Dr. Ken continue the pain-reliever conversation with the benefits of routinely taking low-dose Aspirin, and why a routine like this should be done under a doctor's supervision, as there can be adverse effects if you are not someone who needs to be taking low-dose Aspirin regularly. Moving on, the focus shifts to brain injuries and how they affect young athletes, college-age and younger, with early neuron loss and CTE. It's a finding that highlights long-term risk associated with sports, like football and soccer, leading to new guidelines for player safety and concussion protocols. More warnings on pregnancy and cannabis use, as use has climbed due to wider legalization and social acceptance, with many turning to marijuana to curb morning sickness and pregnancy pain symptoms. Lastly, new research explains how exercise suppresses appetite through a previously unknown molecular pathway, plus the question of why bugs bite us and the latest on the screwworm. Website: GoodDayHealthrShow.com Social Media: @GoodDayNetworks
By David Stephen, who looks at Human Intelligence in this article.
How does learning new things, regularly, become easier? What is the mix for mastery - or its variations - to boost problem-solving, creativity, innovation and understanding? Why are adequate facilities and instructors no guarantee of universal education? What are the limits of human intelligence that AI tutors of the present day may not change?
There is a new [September 12, 2025] story by The Associated Press, Google's top AI scientist says 'learning how to learn' will be next generation's most needed skill, stating that, "One thing we'll know for sure is you're going to have to continually learn … throughout your career."
AI Tutor
Learning new and unfamiliar things, for humans, is generally tough. Why? Probably because of how human intelligence [or its source] works. Even the best explanations, from the best instructors, with the best examples, facilities, practicals and so forth, do not guarantee the emergence of expertise in the same instance. They may also not be followed by creativity, problem-solving, innovation, understanding or even the ability to later recall. This means that the challenges of learning go beyond another new facility, while [the way] human intelligence [works] remains unknown.
Even if the pace of learning - or other aspects of it - does not change, if there is, at least, a probable chart of how human intelligence works, it could lead to learning designs, for what to optimize, for certain outcomes. How does an individual learn if the focus is problem-solving? Would it be different if the focus is to be certified? How might that contrast with learning for creativity, innovation or understanding? How should human intelligence pivot now, with AI having the capacity to do much of what human intelligence can do? How should learning be sculpted, not only for some outcomes, but to outcompete [if that's even possible] AI?
Sketching Human Intelligence
What are the components of human intelligence in the brain? How do they interact and relay for outcomes like problem-solving, creativity, innovation and understanding? What activities are likelier to result in those, and how much time is necessary?
First, it is postulated that electrical and chemical signals are the configurators of intelligence in the human brain. This postulation is derived from empirical neuroscience, establishing that all functions of the brain [for human experiences] are mechanized by neurons - with their electrical and chemical signals. Neurons, according to neuroscience, are also organized in clusters. Secondly, it is postulated that electrical and chemical signals interact, in sets, to result in functions. Sets of signals are proposed to be available in clusters of neurons. There are also states that signals can be in, which determine the extent of their interactions. There are specifications for interactions - and attributes - for intelligence. This is where progress lies: to have it displayed, and used to prospect pathways for prioritized learning outcomes.
Advancing education in the AI era
There is a new [September 12, 2025] report by CBC, N.L.'s 10-year education action plan cites sources that don't exist, stating that, "A major report on modernizing the education system in Newfoundland and Labrador is peppered with fake sources some educators say were likely fabricated by generative artificial intelligence (AI).
Released last month, the Education Accord NL final report, a 10-year roadmap for improving the province's public schools and post-secondary institutions, includes at least 15 citations for non-existent journal articles and documents. In one case, the report references a 2008 movie from the National Film Board called Schoolyard Games. The film doesn't exist, according to a spokesperson for the board. But the exact citation used in the report can be found in a University of Victoria style guide - a document that clearly lists fake references designed as templates for researchers writing a...
Today we have Dr. Ewelina Kurtys on the show. Ewelina has a background in neuroscience and is currently working at FinalSpark. FinalSpark is using live neurons for computation instead of traditional electronic CPUs. The advantage is that live neurons are significantly more energy-efficient than traditional computing, and given all the energy concerns right now with regard to running AI workloads and data centers, this seems quite relevant, even though bioprocessors are still very much in the research phase.
My guest is Dr. Christof Koch, PhD, a pioneering researcher on the topic of consciousness, an investigator at the Allen Institute for Brain Science and the chief scientist at the Tiny Blue Dot Foundation. We discuss the neuroscience of consciousness—how it arises in our brain, how it shapes our identity and how we can modify and expand it. Dr. Koch explains how we all experience life through a unique “perception box,” which holds our beliefs, our memories and thus our biases about reality. We discuss how human consciousness is changed by meditation, non-sleep deep rest, psychedelics, dreams and virtual reality. We also discuss neuroplasticity (rewiring the brain), flow states and the ever-changing but also persistent aspect of the “collective consciousness” of humanity. Read the episode show notes at hubermanlab.com. Thank you to our sponsors AGZ by AG1: https://drinkagz.com/huberman BetterHelp: https://betterhelp.com/huberman Our Place: https://fromourplace.com/huberman Helix: https://helixsleep.com/huberman LMNT: https://drinklmnt.com/huberman Timestamps (0:00) Christof Koch (2:31) Consciousness; Self, Flow States (8:02) NSDR, Yoga Nidra, Liminal States; State of Being, Intelligence vs Consciousness (13:14) Sponsors: BetterHelp & Our Place (15:53) Self, Derealization, Psychedelics; Selflessness & Flow States (19:53) Transformative Experience, VR, Racism & Self; Perception Box, Bayesian Model (28:29) Oliver Sacks, Empathy & Animals (34:01) Changing Outlook on Life, Tool: Belief & Agency (37:48) Sponsors: AGZ by AG1 & Helix Sleep (40:23) Alcoholics Anonymous (AA) & Higher Power (42:09) Neurobiology of Consciousness; Accidents, Covert Consciousness (51:09) Non-Responsive State; Disability Bias, Will to Live, Resilience (55:34) Will to Live, Akinetic Mutism, Neural Correlates of Consciousness (57:43) Conflicting Perception Boxes, Meta Prior, Religion, AI (1:06:47) AI, Violence, Swapping Perception Boxes, Video (1:12:19) 5-MeO-DMT, Psychedelics, Light, Consciousness & Awe; Loss of Self (1:20:54) Death, Mystical Experience, Ocean Analogy; Physicalism & Observer (1:27:57) Sponsor: LMNT (1:29:29) Meditation, Tool: Spacetime Bridging; Ball-bearing Analogy; Digital Twin (1:36:16) Mental Health Decline, Social Media, Pandemic, Family & Play, Tool: Body-Awareness Exercises (1:41:34) Dog Breeds; Movement, Cognitive Flexibility & Longevity (1:47:17) Cynicism, Ketamine, Tool: Belief Effect; Heroes & Finding Flaws (1:52:46) Cynicism vs Curiosity, Compassion; Deaths of Despair, Mental Health Crisis (1:57:26) Jennifer Aniston, Recognition & Neurons; Grandmother Hypothesis (2:03:20) Book Recommendation; Meaning of Life (2:09:10) Zero-Cost Support, YouTube, Spotify & Apple Follow & Reviews, Sponsors, YouTube Feedback, Protocols Book, Social Media, Neural Network Newsletter Disclaimer & Disclosures Learn more about your ad choices. Visit megaphone.fm/adchoices
Support the show: https://www.paypal.me/Truelifepodcast?locale.x=en_US
Show Notes:
Story 1: Map of 600,000 brain cells rewrites the textbook on how the brain makes decisions
Source: LiveScience.com
Link: https://www.livescience.com/health/neuroscience/map-of-600-000-brain-cells-rewrites-the-textbook-on-how-the-brain-makes-decisions
Story 2: Super solar cells withstand 10,000 bends, retain over 85% output after 2,800 hours
Source: InterestingEngineering.com via MSN
Link: https://www.msn.com/en-us/technology/renewable-energy/super-solar-cells-withstand-10-000-bends-retain-over-85-output-after-2-800-hours/ar-AA1JEC2z
Story 3: Soft skin, sharp senses: New robotic 'touch' sees danger before it hits
Source: TechExplore.com
Link: https://techxplore.com/news/2025-08-soft-skin-sharp-robotic-danger.html
Research Paper: https://ieeexplore.ieee.org/document/11097357
Story 4: Soft wearable robot jacket responds to movements, supports upper-limb impairments
Source: InterestingEngineering.com
Link: https://interestingengineering.com/innovation/us-engineers-robot-worn-like-jackets
Research Paper: https://www.nature.com/articles/s41467-025-62538-8
Video: https://www.youtube.com/watch?v=dFHTyUrAaz0&t=12s
Honorable Mentions:
Story: Some People Are Immune to All Viruses. Scientists Now Want to Replicate This Ability for a Universal Antiviral
Source: ZME Science
Link: https://www.zmescience.com/medicine/universal-antiviral/
Story: Generative AI designs compounds that can kill drug-resistant bacteria
Source: Phys.org
Link: https://phys.org/news/2025-08-generative-ai-compounds-drug-resistant.html
Research Paper: https://www.cell.com/cell/abstract/S0092-8674(25)00855-4
Story: This New Indoor Solar Cell Could Power the Entire Internet of Things Using Only the Light from Your Ceiling
Source: ZME Science
Link: https://www.zmescience.com/science/news-science/this-new-indoor-solar-cell-could-power-the-entire-internet-of-things-using-only-the-light-from-your-ceiling/
Story: Researchers Explore ‘Biomining' Seaweed for Critical Minerals
Source: Pacific Northwest National Laboratory
Link: https://www.pnnl.gov/news-media/researchers-explore-biomining-seaweed-critical-minerals
In this Huberman Lab Essentials episode, I discuss science-supported nutrients that directly support brain structure, function and long-term cognitive health. I highlight specific nutrients, including omega-3 fatty acids, creatine, phosphatidylserine, anthocyanins, choline and glutamine, and discuss their recommended doses and dietary sources. I also describe how taste perception, gut-brain signaling and learned associations shape what foods we prefer and crave. Finally, I share practical behavioral tools to help rewire your food preferences toward healthier choices. Read the episode show notes at hubermanlab.com. Thank you to our sponsors AGZ by AG1: https://drinkagz.com/huberman David: https://davidprotein.com/huberman LMNT: https://drinklmnt.com/huberman Timestamps 00:00:00 Food & Brain Health 00:01:32 Structural Fat & Neurons, Tools: Omega-3s, EPA; Phosphatidylserine 00:05:11 Acetylcholine & Focus, Tool: Dietary Choline 00:06:48 Creatine & Brain Health, Tool: Creatine Supplementation 00:08:26 Sponsor: David 00:09:41 Anthocyanins & Brain Function, Tool: Blueberries & Berries 00:10:52 Glutamine & Brain, Tool: Glutamine & Offset Sugar Cravings 00:12:48 Brain-Boosting Nutrients, Foods & Supplements 00:15:03 Food Preference; Yum, Yuck or Meh; Mouth, Taste & Palatability 00:19:30 Gut, Nutrients & Subconscious Signaling 00:21:56 Learned Food Preferences 00:23:36 Sponsors: AGZ by AG1 & LMNT 00:26:16 Food & Learned Associations, Tool: Unpair Artificial Sweeteners 00:30:18 Belief Effects, Satiety, Tool: Rewiring Food Preferences 00:35:48 Recap & Key Takeaways Disclaimer & Disclosures Learn more about your ad choices. Visit megaphone.fm/adchoices
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
BASE64 Over DNS
The base64 character set exceeds what is allowable in DNS. However, some implementations will work even with these invalid characters.
https://isc.sans.edu/diary/BASE64%20Over%20DNS/32274
Google Chrome Update
Google released an update for Google Chrome, addressing two vulnerabilities. One of the vulnerabilities is rated critical and may allow code execution.
https://chromereleases.googleblog.com/2025/09/stable-channel-update-for-desktop_9.html
Ivanti Updates
Ivanti patched a number of vulnerabilities, several of them critical, across its product portfolio.
https://forums.ivanti.com/s/article/September-Security-Advisory-Ivanti-Connect-Secure-Policy-Secure-ZTA-Gateways-and-Neurons-for-Secure-Access-Multiple-CVEs
Sophos Patches
Sophos resolved an authentication bypass vulnerability in Sophos AP6 series wireless access point firmware (CVE-2025-10159).
https://www.sophos.com/en-us/security-advisories/sophos-sa-20250909-ap6
Apple Introduces Memory Integrity Enforcement
With the new hardware promoted in yesterday's event, Apple also introduced new memory integrity features based on this new hardware.
https://security.apple.com/blog/memory-integrity-enforcement/
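The BASE64-over-DNS item is about a character-set mismatch; purely as an illustration (not from the diary itself), here is a short Python sketch, standard library only, of why raw base64 does not fit the letters-digits-hyphen set of a DNS label and why DNS-tunneling payloads are commonly base32-encoded instead:

```python
import base64
import string

# Characters allowed in a classic DNS label (the "LDH" rule: letters, digits, hyphen).
dns_label_chars = set(string.ascii_letters + string.digits + "-")

# Standard base64 alphabet plus its '=' padding character.
b64_chars = set(string.ascii_uppercase + string.ascii_lowercase + string.digits + "+/=")

print("base64 characters not valid in a DNS label:", sorted(b64_chars - dns_label_chars))
# -> ['+', '/', '='] (and base64 is case-sensitive, while DNS names are not)

payload = b"exfiltrated data"
print("base64:", base64.b64encode(payload).decode())  # may contain '+', '/', '=' and mixed case
print("base32:", base64.b32encode(payload).decode())  # A-Z and 2-7 only, safe inside a DNS label
```

As the diary notes, some resolvers and servers tolerate the invalid characters anyway, which is exactly what makes the base64 variant worth watching for.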
Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership: https://www.thetransmitter.org/partners/ Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/ To explore more neuroscience news and perspectives, visit thetransmitter.org.
What changes and what stays the same as you scale from single neurons up to local populations of neurons up to whole brains? How does tuning parameters like the gain in some neural populations affect the dynamical and computational properties of the rest of the system? Those are the main questions my guests today discuss. Michael Breakspear is a professor of Systems Neuroscience and runs the Systems Neuroscience Group at the University of Newcastle in Australia. Mac Shine is back; he was here a few years ago. Mac runs the Shine Lab at the University of Sydney in Australia. Michael and Mac have been collaborating on the questions I mentioned above, using a systems approach to studying brains and cognition. The short summary of what they discovered in their first collaboration is that turning up or down the gain across broad networks of neurons in the brain affects integration - working together - and segregation - working apart. They map this gain modulation on to the ascending arousal pathway, in which the locus coeruleus projects widely throughout the brain distributing noradrenaline. At a certain sweet spot of gain, integration and segregation are balanced near a bifurcation point, near criticality, which maximizes properties that are good for cognition. In their recent collaboration, they used a coarse-graining procedure inspired by physics to study the collective dynamics of various sizes of neural populations, going from single neurons to large populations of neurons. Here they found that despite different coding properties at different scales, there are also scale-free properties that suggest neural populations of all sizes, from single neurons to brains, can do cognitive stuff useful for the organism. And they found this is a conserved property across many different species, suggesting it's a universal principle of brain dynamics in general. So we discuss all that, but to get there we talk about what a systems approach to neuroscience is, how systems neuroscience has changed over the years, and how it has inspired the questions Michael and Mac ask.
Breakspear: Systems Neuroscience Group. @DrBreaky.
Shine: Shine Lab. @jmacshine.
Related papers
Dynamic models of large-scale brain activity
Metastable brain waves
The modulation of neural gain facilitates a transition between functional segregation and integration in the brain
Multiscale Organization of Neuronal Activity Unifies Scale-Dependent Theories of Brain Function
The brain that controls itself
Metastability demystified — the foundational past, the pragmatic present and the promising future
Generation of surrogate brain maps preserving spatial autocorrelation through random rotation of geometric eigenmodes
Related episodes
BI 212 John Beggs: Why Brains Seek the Edge of Chaos
BI 216 Woodrow Shew and Keith Hengen: The Nature of Brain Criticality
BI 121 Mac Shine: Systems Neurobiology
0:00 - Intro
4:28 - Neuroscience vs neurobiology
8:01 - Systems approach
26:52 - Physics for neuroscience
33:15 - Gain and bifurcation: earliest collaboration
55:32 - Multiscale organization
1:17:54 - Roadblocks
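Not one of Breakspear or Shine's actual models, just a back-of-the-envelope illustration of the gain idea discussed above: two mutually coupled firing-rate units whose sigmoid slope is the gain. Sweeping the gain in this toy sketch (numpy only) shows coordination between the units building up toward the bifurcation at gain times coupling near one and weakening again well beyond it.

```python
import numpy as np

def simulate(gain, coupling=0.9, steps=4000, dt=0.1, noise=0.05, seed=1):
    """Two mutually coupled firing-rate units; 'gain' sets the slope of the sigmoid."""
    rng = np.random.default_rng(seed)
    x = np.zeros((steps, 2))
    for t in range(1, steps):
        drive = coupling * x[t - 1, ::-1]        # each unit is driven by the other one
        rate = np.tanh(gain * drive)             # sigmoid response with adjustable gain
        x[t] = x[t - 1] + dt * (rate - x[t - 1]) + noise * np.sqrt(dt) * rng.standard_normal(2)
    return x[steps // 4:]                        # drop the initial transient

for gain in (0.3, 0.8, 1.1, 2.0):
    a, b = simulate(gain).T
    print(f"gain={gain:.1f}  correlation between units = {np.corrcoef(a, b)[0, 1]:+.2f}")

# In this cartoon the inter-unit correlation roughly tracks gain * coupling below the
# bifurcation at gain * coupling ~ 1, is strongest near that critical point, and falls
# again once both units saturate at a joint fixed point: a toy version of how changing
# gain can move a circuit between more segregated and more integrated regimes.
```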
Professor Barbara Webb from the University of Edinburgh in Scotland uses physical robots to validate neural mechanisms in crickets, ants, and bees. In this episode of Brains and Machines, she talks to Dr. Sunny Bains of University College London about her work. Discussion follows with Dr. Giulia D'Angelo from the Czech Technical University in Prague and Professor Ralph Etienne-Cummings of Johns Hopkins University.
#STRANGESCIENCE – This bright orange shark has shocked and delighted scientists.
Neurons. Immune systems. MRIs. Weed gummies? One of the greats in neurology, Dr. Aaron Boster, takes time to chat all about Multiple Sclerosis, a neurological autoimmune disease close to our hearts. Alie's mom, your grammapod a.k.a. Fancy Nancy, was diagnosed with MS over two decades ago, and this episode explores in depth the factors that can cause MS, therapies that do – and don't – show promise, how diet, exercise and mindfulness actually can help folks who have MS, the oftentimes agonizing journey to a diagnosis, and advice for those who've had MS for a while – or are newly in the community. Also: yeah, weed.
Visit the Boster Center for Multiple Sclerosis and follow Dr. Boster on YouTube
A donation went to MS Views and News
More episode sources and links
Other episodes you may enjoy: Disability Sociology (DISABILITY PRIDE), Diabetology (BLOOD SUGAR), Post-Viral Epidemiology (LONG COVID), Dolorology (PAIN), Salugenology (WHY HUMANS REQUIRE HOBBIES), Psychedeliology (HALLUCINOGENS), Cardiology (THE HEART), Somnology (SLEEP)
400+ Ologies episodes sorted by topic
Smologies (short, classroom-safe) episodes
Sponsors of Ologies
Transcripts and bleeped episodes
Become a patron of Ologies for as little as a buck a month
OlogiesMerch.com has hats, shirts, hoodies, totes!
Follow Ologies on Instagram and Bluesky
Follow Alie Ward on Instagram and TikTok
Editing by Mercedes Maitland of Maitland Audio Productions and Jake Chaffee
Managing Director: Susan Hale
Scheduling Producer: Noel Dilworth
Transcripts by Aveline Malek
Website by Kelly R. Dwyer
Theme song by Nick Thorburn
Greg and Matias interview Brendon Boot of Skin2Neuron. Brendon is a neurologist at Harvard and Mayo Clinic with prior experience as Medical Director at Biogen overseeing their Phase 1b Alzheimer's trials.
In this episode, we discuss:
How skin-derived cells can be turned into neurons to repair the brain.
Why replacing lost neurons may succeed where drugs have failed in Alzheimer's.
What makes autologous (self-derived) cell therapy safer and more effective.
How testing in dogs brought stronger proof than traditional mouse models.
Why this approach could shift Alzheimer's treatment from slowing decline to restoring memory.
Credits:
Created by Greg Kubin and Matias Serebrinsky
Host: Matias Serebrinsky & Greg Kubin
Produced by Caitlin Ner & Nico V. Rey
Find us at businesstrip.fm and psymed.ventures
Follow us on Instagram and Twitter!
Theme music by Dorian Love
Additional Music: Distant Daze by Zack Frank
In the aging brain, neurons begin to lose a hidden currency - not just ATP, but GTP - that powers their ability to clear away toxic proteins. Without it, the cleanup crews stall, and amyloid builds up. A team at UC Irvine may have uncovered a way to recharge that system using two familiar compounds. In aged and Alzheimer's model neurons, this pairing restored GTP, reactivated trafficking pathways, and swept away protein aggregates. In this episode, we follow the trail from dwindling cellular energy to revived cleanup machinery, and explore how these findings fit with human evidence.
00:00 Introduction: The Overlooked Clue in Aging Brains
00:47 The Energy Crisis in Aging Neurons
01:21 Natural Compounds to the Rescue
01:55 The UC Irvine Study: A Closer Look
03:05 Mechanisms Behind Nicotinamide and EGCG
04:37 Human Data: What Do We Know?
06:59 Comparing Strategies: Drugs vs. Natural Compounds
08:11 Challenges and Future Directions
09:27 Conclusion: A Promising but Unproven Strategy
PMID: 40661491
Send me a message!
Stop your diet. Stop counting calories. Your body's intelligence is far superior to any tracking app, and it's time you learned how to listen to it. This is how you move from fighting your body with rules to aligning with its natural wisdom for effortless, sustainable health.
✅ EAT System | Get Your FREE Action Plan:
► https://craigvan.com/eat/poster
––– THE CRAIG VAN CAST –––
✉️ Newsletter: https://craigvan.com
Neurologist Michelle Monje studies the close relationship between cancer and the nervous system, particularly in an aggressive brain cancer that often strikes in childhood. Her research shows that the cancer cells are electrically integrated into the brain itself and these connections actually help the cancer to grow. Monje and collaborators have now developed an immunotherapy that has shown great promise in mice and early human trials. One patient had a “complete response” and is cancer-free four years after treatment, Monje tells host Russ Altman on this episode of Stanford Engineering's The Future of Everything podcast.
Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.
Episode Reference Links:
Stanford Profile: Michelle Monje
Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook
Chapters:
(00:00:00) Introduction: Russ Altman introduces guest Michelle Monje, a professor of pediatric neurology at Stanford University.
(00:03:39) Focus on Cancer Research: Monje's clinical observations led to exploring cancer-neuron interactions.
(00:05:28) Neurons and Glial Cells: The role of neurons and glial cells in brain function and disease.
(00:08:32) Gliomas in Children: An overview of gliomas and their origins in glial precursor cells.
(00:10:12) Rethinking Brain Cancer Behavior: How gliomas don't just grow—they integrate with brain circuits.
(00:14:49) Mechanisms of Tumor Growth: Two primary mechanisms by which cancer exploits the nervous system.
(00:16:32) Synaptic Integration of Cancer Cells: The discovery that glioma cells form synapses with neurons.
(00:20:06) CAR T-Cell Therapy: Adapting CAR T-cell immunotherapy to target brain tumors.
(00:22:52) Targeting GD2 Antigen: Identification of a surface marker enables precision CAR T-cell therapy.
(00:24:35) Immune Access to the Brain: The ability of CAR T-cells to reach the brain, despite prior understanding.
(00:26:16) First Clinical Trial Results: The significant tumor reduction and response from CAR T-cell therapy.
(00:28:21) Combined Therapies: Pairing immune therapy with neural signaling blockers for better outcomes.
(00:30:35) Conclusion
Dr. Ewelina Kurtys is leading the way in biocomputing at FinalSpark where she is working on the next evolutionary leap for AI and neuron-powered computing. It's a brave new world, just 10 years in the making. We discuss lab-grown human brain organoids connected to electrodes, the possibility to solve AI's massive energy consumption challenge, post-silicon approach to computing, biological vs quantum physics and more.
The role of neuronal influences on cancer pathogenesis and progression in the nervous system is increasingly appreciated. Neurons have been shown to enhance the proliferation and migration of gliomas, a glial-derived tumor of the CNS, via diffusible paracrine factors or synaptic inputs onto tumor cells. In glioblastomas, a highly aggressive glioma, mostly glutamatergic inputs have been identified. While the potential for glioblastomas to receive projections from neurons of other neurotransmitter subtypes, such as from cholinergic neurons, has recently been discovered in xenotransplantation models, whether synapses can form between human cholinergic neurons and glioblastoma cells, and the consequences of these inputs and other non-synaptic mechanisms, are still unknown. Human induced pluripotent stem cell-based models have been emerging as a powerful platform for studying human-specific disease mechanisms. Today's guests developed a co-culture model for the study of neuron-tumor interactions by combining patient-derived glioblastoma organoids and hiPSC-derived cholinergic neurons. They will discuss their recent findings and what they mean for understanding and potentially treating a tumor for which there is no known cure.
Guests
Guo-li Ming, MD, PhD, Department of Neuroscience and Mahoney Institute for Neurosciences, Perelman School of Medicine, University of Pennsylvania
Hongjun Song, PhD, Department of Neuroscience and Mahoney Institute for Neurosciences, Perelman School of Medicine, University of Pennsylvania
Yusha Sun, Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania
Host
Janet Rossant, Editor-in-Chief, Stem Cell Reports and The Gairdner Foundation
Supporting Content
Paper link: Cholinergic neuron-to-glioblastoma synapses in a human iPSC-derived co-culture model, Stem Cell Reports
About Stem Cell Reports
Stem Cell Reports is the open access, peer-reviewed journal of the International Society for Stem Cell Research (ISSCR) for communicating basic discoveries in stem cell research, in addition to translational and clinical studies. Stem Cell Reports focuses on original research with conceptual or practical advances that are of broad interest to stem cell biologists and clinicians.
X: @StemCellReports
About ISSCR
With nearly 5,000 members from more than 80 countries, the International Society for Stem Cell Research (@ISSCR) is the preeminent global, cross-disciplinary, science-based organization dedicated to stem cell research and its translation to the clinic. The ISSCR mission is to promote excellence in stem cell science and applications to human health.
ISSCR Staff
Keith Alm, Chief Executive Officer
Yvonne Fisher, Managing Editor, Stem Cell Reports
Kym Kilbourne, Director of Media and Strategic Communications
Megan Koch, Senior Marketing Manager
Jack Mosher, Scientific Director
Hunter Reed, Senior Marketing Coordinator
Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they dive deeper into the world of artificial intelligence, analyzing the types of machine learning. They also discuss deep learning, including how it works, its applications, and its advantages and challenges. From chatbot assistants to speech-to-text systems and image recognition, they explore how deep learning is powering the tools we use today. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we went through the basics of artificial intelligence. If you missed it, I really recommend listening to that episode before you start this one. Today, we're going to explore some foundational AI concepts, starting with machine learning. After that, we'll discuss the two main machine learning models: supervised learning and unsupervised learning. And we'll close with deep learning. Lois: Himanshu Raj, our Principal AI/ML Instructor, joins us for today's episode. Hi Himanshu! Let's dive right in. What is machine learning? 01:12 Himanshu: Machine learning lets computers learn from examples to make decisions or predictions without being told exactly what to do. They help computers learn from past data and examples so they can spot patterns and make smart decisions just like humans do, but faster and at scale. 01:31 Nikita: Can you give us a simple analogy so we can understand this better? Himanshu: When you train a dog to sit or fetch, you don't explain the logic behind the command. Instead, you give this dog examples and reinforce correct behavior with rewards, which could be a treat, a pat, or a praise. Over time, the dog learns to associate the command with the action and reward. Machine learning learns in a similar way, but with data instead of dog treats. We feed a mathematical system called models with multiple examples of input and the desired output, and it learns the pattern. It's trial and error, learning from the experience. Here is another example. Recognizing faces. Humans are incredibly good at this, even as babies. We don't need someone to explain every detail of the face. We just see many faces over time and learn the patterns. Machine learning models can be trained the same way. We showed them thousands or millions of face images, each labeled, and they start to detect patterns like eyes, nose, mouth, spacing, different angles. So eventually, they can recognize faces they have seen before or even match new ones that are similar. So machine learning doesn't have any rules, it's just learning from examples. 
This is the kind of learning behind things like face ID on your smartphone, security systems that recognizes employees, or even Facebook tagging people in your photos. 03:05 Lois: So, what you're saying is, in machine learning, instead of telling the computer exactly what to do in every situation, you feed the model with data and give it examples of inputs and the correct outputs. Over time, the model figures out patterns and relationships within the data on its own, and it can make the smart guess when it sees something new. I got it! Now let's move on to how machine learning actually works? Can you take us through the process step by step? Himanshu: Machine learning actually happens in three steps. First, we have the input, which is the training data. Think of this as showing the model a series of examples. It could be images of historical sales data or customer complaints, whatever we want the machine to learn from. Next comes the pattern finding. This is the brain of the system where the model starts spotting relationships in the data. It figures out things like customer who churn or leave usually contacts support twice in the same month. It's not given rules, it just learns patterns based on the example. And finally, we have output, which is the prediction or decision. This is the result of all this learning. Once trained, the computer or model can say this customer is likely to churn or leave. It's like having a smart assistant that makes fast, data-driven guesses without needing step by step instruction. 04:36 Nikita: What are the main elements in machine learning? Himanshu: In machine learning, we work with two main elements, features and labels. You can think of features as the clues we provide to the model, pieces of information like age, income, or product type. And the label is the solution we want the model to predict, like whether a customer will buy or not. 04:55 Nikita: Ok, I think we need an example here. Let's go with the one you mentioned earlier about customers who churn. Himanshu: Imagine we have a table with data like customer age, number of visits, whether they churned or not. And each of these rows is one example. The features are age and visit count. The label is whether the customer churned, that is yes or no. Over the time, the model might learn patterns like customer under 30 who visit only once are more likely to leave. Or frequent visitors above age 45 rarely churn. If features are the clues, then the label is the solution, and the model is the brain of the system. It's what's the machine learning builds after learning from many examples, just like we do. And again, the better the features are, the better the learning. ML is just looking for patterns in the data we give it. 05:51 Lois: Ok, we're with you so far. Let's talk about the different types of machine learning. What is supervised learning? Himanshu: Supervised learning is a type of machine learning where the model learns from the input data and the correct answers. Once trained, the model can use what it learned to predict the correct answer for new, unseen inputs. Think of it like a student learning from a teacher. The teacher shows labeled examples like an apple and says, "this is an apple." The student receives feedback whether their guess was right or wrong. Over time, the student learns to recognize new apples on their own. And that's exactly how supervised learning works. It's learning from feedback using labeled data and then make predictions. 
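A minimal sketch of the supervised workflow just described, using scikit-learn (assumed available) and a made-up six-row churn table; the numbers and the churn pattern are invented purely for illustration, not taken from the course:

```python
# Minimal supervised-learning sketch with hypothetical churn data.
from sklearn.tree import DecisionTreeClassifier

# Features (the clues): [age, number_of_visits]
X = [
    [22, 1], [27, 1], [24, 2],   # young, infrequent visitors...
    [48, 6], [52, 8], [45, 5],   # ...older, frequent visitors
]
# Labels (the answers): 1 = churned, 0 = stayed
y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)                       # "pattern finding" step: learn from labeled examples

new_customer = [[29, 1]]              # unseen input: 29 years old, visited once
print(model.predict(new_customer))    # -> [1], i.e. likely to churn in this toy example
```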
06:38 Nikita: Ok, so supervised learning means we train the model using labeled data. We already know the right answers, and we're essentially teaching the model to connect the dots between the inputs and the expected outputs. Now, can you give us a few real-world examples of supervised learning? Himanshu: First, house price prediction. In this case, we give the model features like square footage, location, and number of bedrooms, and the label is the actual house price. Over time, it learns how to predict prices for new homes. The second one is email: spam or not. In this case, features might include words in the subject line, the sender, or links in the email. The label is whether the email is spam or not. The model learns patterns to help us filter our inbox, as you would have seen in your Gmail inboxes. The third one is cat versus dog classification. Here, the features are the pixels in an image, and the label tells us whether it's a cat or a dog. After seeing many examples, the model learns to tell the difference on its own. Let's now focus on one very common form of supervised learning: regression. Regression is used when we want to predict a numerical value, not a category. In simple terms, it helps answer questions like, how much will it be? Or what will the value be? For example, predicting the price of a house based on its size, location, and number of rooms, or estimating next quarter's revenue based on marketing spend. 08:18 Lois: Are there any other types of supervised learning? Himanshu: While regression is about predicting a number, classification is about predicting a category or type. You can think of it as the model answering, is this a yes or a no, or which group does this belong to? Classification is used when the goal is to predict a category or a class. Here, the model learns patterns from historical data where both the input variables, known as features, and the correct categories, called labels, are already known. 08:53 Ready to level up your cloud skills? The 2025 Oracle Fusion Cloud Applications Certifications are here! These industry-recognized credentials validate your expertise in the latest Oracle Fusion Cloud solutions, giving you a competitive edge and helping drive real project success and customer satisfaction. Explore the certification paths, prepare with MyLearn, and position yourself for the future. Visit mylearn.oracle.com to get started today. 09:25 Nikita: Welcome back! So that was supervised machine learning. What about unsupervised machine learning, Himanshu? Himanshu: Unlike supervised learning, here the model is not given any labels or correct answers. It's just handed the raw input data and left to make sense of it on its own. The model explores the data and discovers hidden patterns, groupings, or structures on its own, without being explicitly told what to look for. It's more like a student learning from observations and making their own inferences. 09:55 Lois: Where is unsupervised machine learning used? Can you take us through some of the use cases? Himanshu: The first one is product recommendation. Customers are grouped based on shared behavior even without knowing their intent. This helps show what other users like you also prefer. The second one is anomaly detection. Unusual patterns, such as fraud, network breaches, or manufacturing defects, can stand out, all without needing thousands of labeled examples. And the third one is customer segmentation. 
Customers can be grouped by purchase history or behavior to tailor experiences, pricing, or marketing campaigns. 10:32 Lois: And finally, we come to deep learning. What is deep learning, Himanshu? Himanshu: Humans learn from experience by seeing patterns repeatedly. The brain learns to recognize an image by seeing it many times. The human brain contains billions of neurons. Each neuron is connected to others through synapses. Neurons communicate by passing signals. The brain adjusts connections based on repeated stimuli. Deep learning was inspired by how the brain works, using artificial neurons and connections. Just like our brains need a lot of examples to learn, so do deep learning models. The more layers and connections there are, the more complex the patterns it can learn. The brain is not hard-coded. It learns from patterns. Deep learning follows the same idea. Metaphorically speaking, a deep learning model can have over a billion neurons, more than a cat's brain, which has around 250 million neurons. Here, the neurons are mathematical units, often called nodes, or simply units. Layers of these units are connected, mimicking how biological neurons interact. So deep learning is a type of machine learning where the computer learns to understand complex patterns. What makes it special is that it uses neural networks with many layers, which is why we call it deep learning. 11:56 Lois: And how does deep learning work? Himanshu: Deep learning is all about finding high-level meaning from low-level data, layer by layer, much like how our brains process what we see and hear. A neural network is a system of connected artificial neurons, or nodes, that work together to learn patterns and make decisions. 12:15 Nikita: I know there are different types of neural networks, with ANNs or Artificial Neural Networks being the one for general learning. How is it structured? Himanshu: There is an input layer, which holds the raw data (an image, a sentence, or numbers), a hidden layer where the patterns are detected or the features are learned, and an output layer where the final decision is made. For example, given an image, is this a dog? A neural network is like a team of virtual decision makers, called artificial neurons, or nodes, working together. It takes input data, like a photo, and passes it through layers of neurons. Each neuron makes a small judgment and passes its result to the next layer. This process happens across multiple layers, learning more and more complex patterns as it goes, and the final layer gives the output. Imagine a factory assembly line where each station, or layer, refines the input a bit more. By the end, you have turned raw parts into something meaningful. And this is a very simple analogy. This structure forms the foundation of many deep learning models. More advanced architectures, like convolutional neural networks (CNNs) for images or recurrent neural networks (RNNs) for sequences, build upon this basic idea. So, what I meant is that the ANN is the base structure, like LEGO bricks. CNNs and RNNs use those same bricks, but arrange them in ways that are better suited for images, videos, or sequences like text or speech. 13:52 Nikita: So, why do we call it deep learning? Himanshu: The word deep in deep learning does not refer to how profound or intelligent the model is. It actually refers to the number of layers in the neural network. It starts with an input layer, followed by hidden layers, and ends with an output layer. 
The layers are called hidden in the sense that they are black boxes: their data is not visible or directly interpretable to the user. A model with only one hidden layer is an example of shallow learning. As data moves, each layer builds on what the previous layer has learned. So layer one might detect a very basic feature, like edges or colors in an image. Layer two takes those edges and starts forming shapes, like curves or lines. And layer three uses those shapes to identify complete objects, like a face, a car, or a person. This hierarchical learning is what makes deep learning so powerful. It allows the model to learn abstract patterns and generalize across complex data, whether it's visual, audio, or even language. And that's the essence of deep learning. It's not just about layers. It's about how each layer refines the information and moves one step closer to understanding. 15:12 Nikita: Himanshu, where does deep learning show up in our everyday lives? Himanshu: Deep learning is not just about futuristic robots; it's already powering the tools we use today. So think of when you interact with a virtual assistant on a website. Whether you are booking a hotel, resolving a banking issue, or asking customer support questions, behind the scenes, deep learning models understand your text, interpret your intent, and respond intelligently. There are many real-life examples: ChatGPT, Google's Gemini, airline website chatbots, or a bank's virtual agent. The next one is speech-to-text systems. For example, if you have ever used voice typing on your phone, dictated a message to Siri, or used Zoom's live captions, you have seen this in action already. The system listens to your voice and instantly converts it into text. And this saves time, enhances accessibility, and helps automate tasks, like meeting transcriptions. Again, you would have seen real-life examples, such as Siri, Google Assistant, autocaptioning on Zoom, or YouTube Live subtitles. And lastly, image recognition. For example, hospitals today use AI to detect early signs of cancer in X-rays and CT scans that might be missed by the human eye. Deep learning models can analyze visual patterns, like a suspicious spot on a lung X-ray, and flag abnormalities faster and more consistently than humans. Self-driving cars recognize stop signs, pedestrians, and other vehicles using the same technology. So cancer detection in medical imaging, Tesla's self-driving navigation, and security systems that recognize faces are very prominent examples of image recognition. 17:01 Lois: Deep learning is one of the most powerful tools we have today to solve complex problems. But like any tool, I'm sure it has its own set of pros and cons. What are its advantages, Himanshu? Himanshu: The first is high accuracy. When trained with enough data, deep learning models can outperform humans. For example, again, spotting early signs of cancer in X-rays with higher accuracy. The second is handling unstructured data. Deep learning shines when working with messy real-world data, like images, text, and voice. That's why your phone can recognize your face or transcribe your speech into text. The third one is automatic pattern learning. Unlike traditional models that need hand-coded features, deep learning models figure out important patterns by themselves, making them extremely flexible. And the fourth one is scalability. 
Once trained, deep learning systems can scale easily, serving millions of customers, like Netflix recommending movies personalized to each one of us. 18:03 Lois: And what about its challenges? Himanshu: The first one is that it is data and resource intensive. Deep learning demands huge amounts of labeled data and powerful computing hardware, which means high costs, especially during training. The second is a lack of explainability. These models often act like a black box. We know the output, but it's hard to explain exactly how the model reached that decision. This becomes a problem in areas like health care and finance, where transparency is critical. The third challenge is vulnerability to bias. If the data contains biases, like favoring certain groups, the model will learn and amplify those biases unless we manage them carefully. The fourth and last challenge is that it's harder to debug and maintain. Unlike a traditional software program, it's tough to manually correct a deep learning model if it starts behaving unpredictably. It requires retraining with new data. So deep learning offers powerful opportunities to solve complex problems using data, but it also brings challenges that require careful strategy, resources, and responsible use. 19:13 Nikita: We're taking away a lot from this conversation. Thank you so much for your insights, Himanshu. Lois: If you're interested in learning more, make sure you log into mylearn.oracle.com and look for the AI for You course. Join us next week for part 2 of the discussion on AI Concepts & Terminology, where we'll focus on Data Science. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 19:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
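For readers who want to see the layered structure described in the episode above (input layer, hidden layer, output layer) in code, here is a minimal forward-pass sketch in Python with NumPy. It is not from the episode; the layer sizes, random weights, and sigmoid activation are illustrative assumptions, and in practice the weights would be learned by training on labeled data rather than drawn at random.

```python
# Minimal sketch of a tiny neural network forward pass: input -> hidden -> output.
# Layer sizes, random weights, and the sigmoid activation are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = rng.random(4)                # input layer: 4 raw feature values

W1 = rng.normal(size=(8, 4))     # weights into the hidden layer (8 units)
hidden = sigmoid(W1 @ x)         # hidden layer: each unit makes a small judgment

W2 = rng.normal(size=(1, 8))     # weights into the output layer (1 unit)
output = sigmoid(W2 @ hidden)    # output layer: final decision, e.g. "is this a dog?"

print(output)                    # a value between 0 and 1; training would adjust W1 and W2
```

Stacking more hidden layers between W1 and W2 is what makes such a network "deep": each extra layer refines the previous layer's output before the final decision.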
In this episode I'm interviewing a scientist who is trying to revolutionize computing by replacing power-hungry silicon gates with highly efficient biological neurons. For a present-day silicon-based computer to approach the processing power of a human brain would take megawatts of power; we already see these inefficiencies in AI systems that require massive amounts of energy to run. A human brain can do the same thing on less than 20 watts. My guest today is a scientist who is working on the interface between brain and machine. Dr. Ewelina Kurtys is an entrepreneur with a PhD in neuroscience. After academia, she transitioned into business development and technology commercialization, advising tech companies on sales, partnerships, and market strategy. She founded Ekai Ltd to support innovation and scale in science-driven companies. Her work spans advisory roles, go-to-market planning, and translating complex R&D into real-world impact. She also speaks publicly on innovation, neuroscience, and the intersection of science and entrepreneurship. Check out the video versions on my YouTube channel. Chat with me on Facebook.
UCLA's Avishek Adhikari, PhD, presents new research on the role of GABAergic neurons in the brain's periaqueductal gray (PAG) region. Previously studied for their involvement in fear and defensive behaviors, these neurons were found to promote food-seeking behavior when activated—even in fully fed mice. Using calcium imaging and optogenetics, Adhikari's team discovered that these neurons are active during food approach but suppressed during eating. The effect is stronger for high-value foods like chocolate or crickets and depends on the mouse's prior experience with that food. A key finding is that these neurons influence behavior through a specific projection to the zona incerta, a subthalamic region. Rather than signaling hunger, this pathway appears to drive food seeking based on reward value, highlighting a new motivational circuit in the brain. Series: "Stem Cell Channel" [Health and Medicine] [Science] [Show ID: 40444]
How brain synapses work and fuel themselves with fat. Episode Summary: Dr. Timothy Ryan talks about the high energy costs of synapses and the role of mitochondria and glycolysis, and challenges the long-held view that the brain relies solely on glucose by discussing new evidence that neurons burn fats from lipid droplets for fuel, especially during activity. The talk touches on metabolic flexibility, links to epilepsy treatments like ketogenic diets, neurodegenerative diseases, and future research on brain energy sources.
About the guest: Timothy Ryan, PhD, is a professor of biochemistry at Cornell University. His research focuses on the molecular mechanisms of synaptic transmission, particularly vesicle recycling and the bioenergetics that power neural communication.
Discussion Points:
Synapses are tiny structures with ~100 vesicles per site (on average), converting electrical signals to chemical ones.
Brain tissue is energy-hungry due to trillions of synapses (in humans), relying on local mitochondria (present in only ~half of synapses) and glycolysis.
Vesicles use proton pumps and transporters to concentrate neurotransmitters, requiring ATP to maintain gradients.
Neurons are metabolically fragile; cutting fuel supply quickly impairs synapses.
Dogma held that brains don't burn fats, but new work shows neurons form lipid droplets (fat stores) that are invisible because they are constantly used for energy via beta-oxidation.
Silencing neurons builds lipid droplets, like resting muscle; activity speeds their breakdown, indicating demand-driven fat use.
Inhibiting neuron-specific fat-processing enzymes accumulates droplets and induces torpor (a hibernation-like state) in animals, signaling metabolic stress.
Ketogenic diets aid epilepsy by shifting to ketones; fats may explain this, with potential ties to aging and neurodegeneration like Alzheimer's.
The brain may be a "hybrid" fuel user (glucose + fats), with open questions on fat sources and roles in cognition or disease protection.
Related episode: M&M 158: Ketosis & Ketogenic Diet: Brain & Mental Health, Metabolism, Diet & Exercise, Cancer, Diabetes | Dominic D'Agostino
*Not medical advice.
Support the show
All episodes, show notes, transcripts, and more at the M&M Substack
Affiliates:
KetoCitra—Ketone body BHB + potassium, calcium & magnesium, formulated with kidney health in mind. Use code MIND20 for 20% off any subscription (cancel anytime)
Lumen device to optimize your metabolism for weight loss or athletic performance. Code MIND for 10% off
Readwise: Organize and share what you read. 60 days FREE through link
SiPhox Health—Affordable at-home blood testing. Key health markers, visualized & explained. Code TRIKOMES for a 20% discount.
MASA Chips—delicious tortilla chips made from organic corn & grass-fed beef tallow. No seed oils or artificial ingredients. Code MIND for 20% off
For all the ways you can support my efforts
In the second episode of this two-part series, Dr. Jeff Ratliff and Dr. Claire Henchcliffe discuss how she advises patients and families on the future of stem cell therapy and Parkinson disease. Show reference: https://www.nature.com/articles/s41586-025-08845-y
In the first part of this two-part series, Dr. Jeff Ratliff and Dr. Claire Henchcliffe discuss the important lessons learned from these results. Show reference: https://www.nature.com/articles/s41586-025-08845-y
Dr. Jeff Ratliff talks with Dr. Claire Henchcliffe about the study's key findings while emphasizing the importance of educating patients about stem cell therapies and the ongoing advancements in this field. Read the related article in Nature. Disclosures can be found at Neurology.org.
Have you ever wondered what actually makes insomnia recovery possible? In this episode, I'm going to talk about exactly that. We'll explore one of the most hopeful principles of neuroplasticity — and why it's the reason getting beyond insomnia isn't just possible… but inevitable when you understand how the brain works. You see, the human brain has the amazing capacity to change associations and shift links. And truly, is there anything more miraculous than that?
In this episode, you'll learn how:
Your responses play a key role in rewiring the brain
Insomnia isn't a sign that something is broken, but a conditioned pattern rooted in hyperarousal
Changing your brain's association with wakefulness can be a turning point in recovery
I also share a personal story that illustrates how learned fear responses can shift, and why that same shift is completely possible for you, too. Enjoy!
Mentioned Resources: Ep 32: "Neurons that fire together, wire together."
Connect with Beth:
In this Huberman Lab Essentials episode, I explore the sensations of pain and pleasure, explaining how they are sensed in the body and interpreted by the brain as well as methods to control their intensity. I discuss both the hardwired mechanisms and subjective factors that shape an individual's perception of pain and pleasure. I also explain why pain thresholds vary from person to person and discuss various treatments for pain management such as acupuncture and supplements. Finally, I explain the role of key neurochemicals like dopamine and serotonin in mediating our experience of pain and pleasure. Read the episode show notes at hubermanlab.com. Thank you to our sponsors AG1: https://drinkag1.com/huberman LMNT: https://drinklmnt.com/huberman Eight Sleep: https://eightsleep.com/huberman Timestamps 00:00:00 Pain & Pleasure 00:00:39 Skin, Appetitive vs Aversive Behaviors 00:02:10 Skin, Neurons & Brain 00:04:46 Brain Interpretation, Homunculus, Two-Point Discrimination Test 00:07:43 Pain & Pleasure, Subjective Interpretation 00:09:53 Sponsor: AG1 00:11:30 Tool: Pain & Expectation 00:13:08 Pain Threshold 00:14:46 Heat & Cold, Tool: Moving into Cold or Hot Environments 00:16:37 Subjective Pain, Psychosomatic, Fibromyalgia, Whole Body Pain, Acetyl-L-carnitine 00:20:54 Acupuncture, Electroacupuncture, Pain Management 00:23:44 Sponsors: LMNT & Eight Sleep 00:26:36 Red Heads & Pain Threshold, Endorphins 00:28:32 Improving Pain Threshold, Dopamine 00:30:00 Pleasure, Dopamine, Serotonin; Depression, Anti-depressants 00:34:12 Pleasure & Pain Balance, Dopamine, Addiction 00:36:08 Recap & Key Takeaways Disclaimer & Disclosures Learn more about your ad choices. Visit megaphone.fm/adchoices
What happens in your brain when Cupid's arrow strikes? As a teenager, Alison developed an intense crush on George Harrison from the Beatles. But, she wants to know, why do we develop these feelings for pop stars we've never actually met? And what potent swirl of neurochemistry drives those fierce emotions? With neuroscientist Dr. Dean Burnett and evolutionary anthropologist Dr. Anna Machin as their guides, Hannah and Dara investigate everything from the brain's chemical fireworks during a crush to the evolutionary perks of love and bonding. Along the way, they dissect teenage infatuations, lifelong love affairs with football teams, and why love can feel as addictive as heroin. There's even a guest appearance from two cute rodents: the monogamous prairie voles and their more, shall we say, commitment-phobic cousins, the montane voles, who gave us early clues about the role of the 'cuddle' hormone oxytocin. Whether you're a hopeless romantic or a hard-nosed skeptic, prepare to fall head over heels for the science of love.
Contributors:
Dr Anna Machin - evolutionary anthropologist and author of Why We Love
Dr Dean Burnett - honorary research fellow at Cardiff Psychology School, author of The Idiot Brain and The Happy Brain
Carmine Pariante - Professor of Biological Psychiatry at King's College London
Producer: Ilan Goodman
Executive Producer: Alexandra Feachem
A BBC Studios Audio Production
Kimberly Snyder, author of 'The Hidden Power of the Five Hearts,' shares fascinating insights on the concept of heart coherence and its transformative power. Discover how emotions like appreciation and love can align your heart, brain, and nervous system. Kimberly also explores the 5 stages of heart coherence, shares personal experiences with emotional healing, and provides practical tools for achieving a more coherent state. As a special bonus, the episode includes a guided heart coherence meditation to help you experience the profound effects of these practices on your well-being. Don't miss out on this enlightening episode! To view full show notes, more information on our guests, resources mentioned in the episode, discount codes, transcripts, and more, visit https://drmindypelz.com/ep285 Kimberly Snyder is the 3-time New York Times bestselling author of The Hidden Power of the Five Hearts, who is the change-maker of the heart-led living and wellness movement. The founder of the holistic lifestyle brand Solluna and host of the top-rated Feel Good Podcast, Kimberly is a wellness expert, creator of the research-based HeartAlign Meditation, nutritionist and international speaker. She co-authored Radical Beauty with Deepak Chopra, and has been the go-to expert to help celebrities feel their best, including Drew Barrymore, Reese Witherspoon and Channing Tatum. Kimberly's work is featured on Good Morning America, Today, The Wall Street Journal, Vogue, The New York Times, and many other publications. Check out our fasting membership at resetacademy.drmindypelz.com. Please note our medical disclaimer.
In this Huberman Lab Essentials episode, I explain how to build endurance and describe targeted protocols to enhance different types of endurance. I discuss how endurance—the ability to sustain effort—requires the coordination of physical and mental systems driven by energy availability, brain willpower, and specific training adaptations in the muscles, heart, lungs and neurons. I explain conditioning protocols designed to enhance four types of endurance, from long-duration steady state to muscular endurance and high-intensity intervals, and how each training style triggers unique adaptations in the body and brain, such as improved mitochondrial function and oxygen utilization. Additionally, I highlight the crucial role of hydration and electrolytes, which are essential for neural function and influence the brain's willpower to sustain effort. Read the episode show notes at hubermanlab.com. Thank you to our sponsors AG1: https://drinkag1.com/huberman LMNT: https://drinklmnt.com/huberman Eight Sleep: https://eightsleep.com/huberman Function: https://functionhealth.com/huberman Follow Huberman Lab Instagram Threads X Facebook TikTok LinkedIn Timestamps 00:00:00 Huberman Lab Essentials; Build Endurance 00:00:50 Energy Sources, ATP, Oxygen 00:04:14 Neurons & Willpower, Glucose & Electrolytes 00:09:19 Heart, Lungs; Physiology & Performance Limiting Factors 00:10:35 Sponsor: AG1 00:12:30 Muscular Endurance, Protocol, Concentric Movements, Mitochondria 00:19:10 Sponsors: LMNT & Eight Sleep 00:22:00 Long-Duration Endurance, Efficiency, Mitochondria, Capillaries 00:25:54 High-Intensity Interval Training (HIIT), Anaerobic Endurance, Protocol 00:32:33 High-Intensity Aerobic Endurance, Adaptations 00:35:30 Sponsor: Function 00:37:26 Brain & Body Adaptations, Heart 00:40:40 Hydration, Tool: Galpin Equation 00:42:21 Supplements, Stimulants, Magnesium Malate 00:43:11 Recap & Key Takeaways Disclaimer & Disclosures