Neuron: an electrically excitable cell that communicates via synapses
In this Huberman Lab Essentials episode, I discuss science-supported nutrients that directly support brain structure, function and long-term cognitive health. I highlight specific nutrients, including omega-3 fatty acids, creatine, phosphatidylserine, anthocyanins, choline and glutamine, and discuss their recommended doses and dietary sources. I also describe how taste perception, gut-brain signaling and learned associations shape what foods we prefer and crave. Finally, I share practical behavioral tools to help rewire your food preferences toward healthier choices. Read the episode show notes at hubermanlab.com.
Thank you to our sponsors
AGZ by AG1: https://drinkagz.com/huberman
David: https://davidprotein.com/huberman
LMNT: https://drinklmnt.com/huberman
Timestamps
00:00:00 Food & Brain Health
00:01:32 Structural Fat & Neurons, Tools: Omega-3s, EPA; Phosphatidylserine
00:05:11 Acetylcholine & Focus, Tool: Dietary Choline
00:06:48 Creatine & Brain Health, Tool: Creatine Supplementation
00:08:26 Sponsor: David
00:09:41 Anthocyanins & Brain Function, Tool: Blueberries & Berries
00:10:52 Glutamine & Brain, Tool: Glutamine & Offset Sugar Cravings
00:12:48 Brain-Boosting Nutrients, Foods & Supplements
00:15:03 Food Preference; Yum, Yuck or Meh; Mouth, Taste & Palatability
00:19:30 Gut, Nutrients & Subconscious Signaling
00:21:56 Learned Food Preferences
00:23:36 Sponsors: AGZ by AG1 & LMNT
00:26:16 Food & Learned Associations, Tool: Unpair Artificial Sweeteners
00:30:18 Belief Effects, Satiety, Tool: Rewiring Food Preferences
00:35:48 Recap & Key Takeaways
Disclaimer & Disclosures
Learn more about your ad choices. Visit megaphone.fm/adchoices
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
BASE64 Over DNS: The base64 character set exceeds what is allowable in DNS. However, some implementations will work even with these invalid characters. https://isc.sans.edu/diary/BASE64%20Over%20DNS/32274
Google Chrome Update: Google released an update for Google Chrome, addressing two vulnerabilities. One of the vulnerabilities is rated critical and may allow code execution. https://chromereleases.googleblog.com/2025/09/stable-channel-update-for-desktop_9.html
Ivanti Updates: Ivanti patched a number of vulnerabilities, several of them critical, across its product portfolio. https://forums.ivanti.com/s/article/September-Security-Advisory-Ivanti-Connect-Secure-Policy-Secure-ZTA-Gateways-and-Neurons-for-Secure-Access-Multiple-CVEs
Sophos Patches: Sophos resolved an authentication bypass vulnerability in Sophos AP6 series wireless access point firmware (CVE-2025-10159). https://www.sophos.com/en-us/security-advisories/sophos-sa-20250909-ap6
Apple Introduces Memory Integrity Enforcement: With the new hardware promoted in yesterday's event, Apple also introduced new memory integrity features based on this new hardware. https://security.apple.com/blog/memory-integrity-enforcement/
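To make the BASE64-over-DNS point concrete, here is a small, purely illustrative Python sketch (not taken from the ISC diary entry) showing which characters of the standard base64 alphabet fall outside the conservative letters-digits-hyphen set that DNS hostnames traditionally allow, and why base32 is the more DNS-friendly encoding.

```python
# Illustrative only: compare the base64 and base32 alphabets against the
# conservative "LDH" (letters, digits, hyphen) set that DNS hostnames
# traditionally permit. Not taken from the ISC diary entry.
import string

ldh = set(string.ascii_letters + string.digits + "-")

base64_alphabet = set(string.ascii_letters + string.digits + "+/=")
base32_alphabet = set(string.ascii_uppercase + "234567=")

print("base64 chars outside LDH:", sorted(base64_alphabet - ldh))  # ['+', '/', '=']
print("base32 chars outside LDH:", sorted(base32_alphabet - ldh))  # ['='] (padding only)
```

Tunneling raw base64 inside a DNS label therefore depends on resolvers that tolerate non-LDH bytes, which is the lenient behavior the diary entry describes.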
Support the show to get full episodes, full archive, and join the Discord community. The Transmitter is an online publication that aims to deliver useful information, insights and tools to build bridges across neuroscience and advance research. Visit thetransmitter.org to explore the latest neuroscience news and perspectives, written by journalists and scientists. Read more about our partnership: https://www.thetransmitter.org/partners/ Sign up for the “Brain Inspired” email alerts to be notified every time a new “Brain Inspired” episode is released: https://www.thetransmitter.org/newsletters/ To explore more neuroscience news and perspectives, visit thetransmitter.org. What changes and what stays the same as you scale from single neurons up to local populations of neurons up to whole brains? And how does tuning a parameter like the gain in some neural populations affect the dynamical and computational properties of the rest of the system? Those are the main questions my guests today discuss. Michael Breakspear is a professor of Systems Neuroscience and runs the Systems Neuroscience Group at the University of Newcastle in Australia. Mac Shine is back; he was here a few years ago. Mac runs the Shine Lab at the University of Sydney in Australia. Michael and Mac have been collaborating on the questions I mentioned above, using a systems approach to studying brains and cognition. The short summary of what they discovered in their first collaboration is that turning up or down the gain across broad networks of neurons in the brain affects integration - working together - and segregation - working apart. They map this gain modulation onto the ascending arousal pathway, in which the locus coeruleus projects widely throughout the brain distributing noradrenaline. At a certain sweet spot of gain, integration and segregation are balanced near a bifurcation point, near criticality, which maximizes properties that are good for cognition. In their recent collaboration, they used a coarse graining procedure inspired by physics to study the collective dynamics of various sizes of neural populations, going from single neurons to large populations of neurons. Here they found that despite different coding properties at different scales, there are also scale-free properties that suggest neural populations of all sizes, from single neurons to brains, can do cognitive stuff useful for the organism. And they found this is a conserved property across many different species, suggesting it's a universal principle of brain dynamics in general. So we discuss all that, but to get there we talk about what a systems approach to neuroscience is, how systems neuroscience has changed over the years, and how it has inspired the questions Michael and Mac ask. Breakspear: Systems Neuroscience Group. @DrBreaky. Shine: Shine Lab. @jmacshine.
Related papers:
Dynamic models of large-scale brain activity
Metastable brain waves
The modulation of neural gain facilitates a transition between functional segregation and integration in the brain
Multiscale Organization of Neuronal Activity Unifies Scale-Dependent Theories of Brain Function
The brain that controls itself
Metastability demystified — the foundational past, the pragmatic present and the promising future
Generation of surrogate brain maps preserving spatial autocorrelation through random rotation of geometric eigenmodes
Related episodes:
BI 212 John Beggs: Why Brains Seek the Edge of Chaos
BI 216 Woodrow Shew and Keith Hengen: The Nature of Brain Criticality
BI 121 Mac Shine: Systems Neurobiology
0:00 - Intro
4:28 - Neuroscience vs neurobiology
8:01 - Systems approach
26:52 - Physics for neuroscience
33:15 - Gain and bifurcation: earliest collaboration
55:32 - Multiscale organization
1:17:54 - Roadblocks
Professor Barbara Webb from the University of Edinburgh in Scotland uses physical robots to validate neural mechanisms in crickets, ants, and bees. In this episode of Brains and Machines, she talks to Dr. Sunny Bains of University College London about her work. Discussion follows with Dr. Giulia D'Angelo from the Czech Technical University in Prague and Professor Ralph Etienne-Cummings of Johns Hopkins University.
#STRANGESCIENCE – This bright orange shark has shocked and delighted scientists.
Neurons. Immune systems. MRIs. Weed gummies? One of the greats in neurology, Dr. Aaron Boster, takes time to chat all about Multiple Sclerosis, a neurological autoimmune disease close to our hearts. Alie's mom, your grammapod a.k.a. Fancy Nancy, was diagnosed with MS over two decades ago, and this episode explores in depth the factors that can cause MS, therapies that do – and don't – show promise, how diet, exercise and mindfulness actually can help folks who have MS, the oftentimes agonizing journey to a diagnosis, and advice for those who've had MS for a while – or are newly in the community. Also: yeah, weed.
Visit the Boster Center for Multiple Sclerosis and follow Dr. Boster on YouTube
A donation went to MS Views and News
More episode sources and links
Other episodes you may enjoy: Disability Sociology (DISABILITY PRIDE), Diabetology (BLOOD SUGAR), Post-Viral Epidemiology (LONG COVID), Dolorology (PAIN), Salugenology (WHY HUMANS REQUIRE HOBBIES), Psychedeliology (HALLUCINOGENS), Cardiology (THE HEART), Somnology (SLEEP)
400+ Ologies episodes sorted by topic
Smologies (short, classroom-safe) episodes
Sponsors of Ologies
Transcripts and bleeped episodes
Become a patron of Ologies for as little as a buck a month
OlogiesMerch.com has hats, shirts, hoodies, totes!
Follow Ologies on Instagram and Bluesky
Follow Alie Ward on Instagram and TikTok
Editing by Mercedes Maitland of Maitland Audio Productions and Jake Chaffee
Managing Director: Susan Hale
Scheduling Producer: Noel Dilworth
Transcripts by Aveline Malek
Website by Kelly R. Dwyer
Theme song by Nick Thorburn
Greg and Matias interview Brendon Boot of Skin2Neuron. Brendon is a neurologist at Harvard and Mayo Clinic with prior experience as Medical Director at Biogen overseeing their Phase 1b Alzheimer's trials.
In this episode, we discuss:
How skin-derived cells can be turned into neurons to repair the brain.
Why replacing lost neurons may succeed where drugs have failed in Alzheimer's.
What makes autologous (self-derived) cell therapy safer and more effective.
How testing in dogs brought stronger proof than traditional mouse models.
Why this approach could shift Alzheimer's treatment from slowing decline to restoring memory.
Credits:
Created by Greg Kubin and Matias Serebrinsky
Host: Matias Serebrinsky & Greg Kubin
Produced by Caitlin Ner & Nico V. Rey
Find us at businesstrip.fm and psymed.ventures
Follow us on Instagram and Twitter!
Theme music by Dorian Love
Additional Music: Distant Daze by Zack Frank
In the aging brain, neurons begin to lose a hidden energy currency: not just ATP, but GTP, which powers their ability to clear away toxic proteins. Without it, the cleanup crews stall, and amyloid builds up. A team at UC Irvine may have uncovered a way to recharge that system using two familiar compounds. In aged and Alzheimer's model neurons, this pairing restored GTP, reactivated trafficking pathways, and swept away protein aggregates. In this episode, we follow the trail from dwindling cellular energy to revived cleanup machinery, and explore how these findings fit with human evidence.
00:00 Introduction: The Overlooked Clue in Aging Brains
00:47 The Energy Crisis in Aging Neurons
01:21 Natural Compounds to the Rescue
01:55 The UC Irvine Study: A Closer Look
03:05 Mechanisms Behind Nicotinamide and EGCG
04:37 Human Data: What Do We Know?
06:59 Comparing Strategies: Drugs vs. Natural Compounds
08:11 Challenges and Future Directions
09:27 Conclusion: A Promising but Unproven Strategy
PMID: 40661491
Send me a message!Stop your diet. Stop counting calories. Your body's intelligence is far superior to any tracking app, and it's time you learned how to listen to it. This is how you move from fighting your body with rules to aligning with its natural wisdom for effortless, sustainable health.✅ EAT System | Get Your FREE Action Plan:► https://craigvan.com/eat/poster––– THE CRAIG VAN CAST –––✉️ Newsletter: https://craigvan.com
Neurologist Michelle Monje studies the close relationship between cancer and the nervous system, particularly in an aggressive brain cancer that often strikes in childhood. Her research shows that the cancer cells are electrically integrated into the brain itself and these connections actually help the cancer to grow. Monje and collaborators have now developed an immunotherapy that has shown great promise in mice and early human trials. One patient had a “complete response” and is cancer-free four years after treatment, Monje tells host Russ Altman on this episode of Stanford Engineering's The Future of Everything podcast.
Have a question for Russ? Send it our way in writing or via voice memo, and it might be featured on an upcoming episode. Please introduce yourself, let us know where you're listening from, and share your question. You can send questions to thefutureofeverything@stanford.edu.
Episode Reference Links:
Stanford Profile: Michelle Monje
Connect With Us:
Episode Transcripts >>> The Future of Everything Website
Connect with Russ >>> Threads / Bluesky / Mastodon
Connect with School of Engineering >>> Twitter/X / Instagram / LinkedIn / Facebook
Chapters:
(00:00:00) Introduction: Russ Altman introduces guest Michelle Monje, a professor of pediatric neurology at Stanford University.
(00:03:39) Focus on Cancer Research: Monje's clinical observations led to exploring cancer-neuron interactions.
(00:05:28) Neurons and Glial Cells: The role of neurons and glial cells in brain function and disease.
(00:08:32) Gliomas in Children: An overview of gliomas and their origins in glial precursor cells.
(00:10:12) Rethinking Brain Cancer Behavior: How gliomas don't just grow—they integrate with brain circuits.
(00:14:49) Mechanisms of Tumor Growth: Two primary mechanisms by which cancer exploits the nervous system.
(00:16:32) Synaptic Integration of Cancer Cells: The discovery that glioma cells form synapses with neurons.
(00:20:06) CAR T-Cell Therapy: Adapting CAR T-cell immunotherapy to target brain tumors.
(00:22:52) Targeting GD2 Antigen: Identification of a surface marker enables precision CAR T-cell therapy.
(00:24:35) Immune Access to the Brain: The ability of CAR T-cells to reach the brain, despite prior understanding.
(00:26:16) First Clinical Trial Results: The significant tumor reduction and response from CAR T-cell therapy.
(00:28:21) Combined Therapies: Pairing immune therapy with neural signaling blockers for better outcomes.
(00:30:35) Conclusion
Dr. Ewelina Kurtys is leading the way in biocomputing at FinalSpark where she is working on the next evolutionary leap for AI and neuron-powered computing. It's a brave new world, just 10 years in the making. We discuss lab-grown human brain organoids connected to electrodes, the possibility to solve AI's massive energy consumption challenge, post-silicon approach to computing, biological vs quantum physics and more.
In this episode, we welcome Professor Vivek Singal, author of the book "Dominion and Dharma," to discuss his critique of modern capitalism and his vision for a more conscious and equitable future. A graduate of IIT Delhi and the University of Michigan, with a background in management advisory and liberal arts, Professor Singal offers a unique perspective on the intersection of capitalism, spirituality, and civilizational values.He shares the personal anguish and series of events that led him to write his book, from observing the hypocrisy at the World Economic Forum to a powerful encounter at the Age of Discovery monument in Lisbon. Professor Singal delves into the fundamental incompatibility between the Western capitalist worldview, which sees the world as a marketplace, and the dharmic worldview, which views the world as an interconnected mandala.The conversation explores the historical roots of extractive capitalism, using the island of Madeira as a prototype of a "minimum viable plantation model" that combined monoculture and slave labor. Professor Singal draws a compelling analogy between old colonial models and modern institutions like corporations and NGOs, which he argues perpetuate a form of "colonialism with a compassion filter."The discussion also touches on the Industrial Revolution's role in turning humans into surplus labor and the potential dangers and promises of emerging technologies like AI. Professor Singal concludes with a call to action for the younger generation to ask critical questions, challenge existing paradigms, and reconnect with a sense of sacred responsibility to create a future our descendants can be proud of.Chapters:[00:00] Introduction to Professor Vivek Singal[01:10] The Premise of "Dominion and Dharma"[02:10] The World Economic Forum and the Spark for the Book[03:32] The Age of Discovery Monument and Civilizational Grief[04:20] Reconnecting with Dharmic Roots in Kochi[05:15] The Goal: Not to Reject the West, but to Remember the East[06:23] Capitalism's "World as a Market" vs. the Dharmic "World as a Mandala"[07:49] The Dangers of a "Win-Lose" Mentality[09:34] Identity: "What Do I Do?" vs. "Who Am I?"[12:21] The Island of Madeira: A Prototype for Capitalist Exploitation[13:59] The "Minimum Viable Plantation Model"[17:02] "Humanitarian Capitalism": Colonialism with a Compassion Filter[18:38] The Threat of Exponentially Growing Technologies[20:17] The Hypocrisy of Modern Diversity and Inclusion Initiatives[24:35] The Industrial Revolution: Transforming Souls into Surplus Labor[27:31] The Advent of AI and the "Phygital Civilization"[30:21] The Unseen Consequences of the Automobile[31:07] The Neurons in Our Heart and Gut[33:43] A Call for Young People to Challenge the Status Quo[36:26] How to Reconnect with Dharma and Sacred Responsibility[37:25] The Power of Asking Questions[38:42] Closing Remarks
The role of neuronal influences on cancer pathogenesis and progression in the nervous system is increasingly appreciated. Neurons have been shown to enhance the proliferation and migration of gliomas, a glial-derived tumor of the CNS, via diffusible paracrine factors or synaptic inputs onto tumor cells. In glioblastomas, a highly aggressive glioma, mostly glutamatergic inputs have been identified. While the potential for glioblastomas to receive projections from neurons of other neurotransmitter subtypes, such as from cholinergic neurons, has recently been discovered in xenotransplantation models, whether synapses can form between human cholinergic neurons and glioblastoma cells, and the consequences of these inputs and other non-synaptic mechanisms, are still unknown. Human induced pluripotent stem cell-based models have been emerging as a powerful platform for studying human-specific disease mechanisms. Today's guests developed a co-culture model for the study of neuron-tumor interactions by combining patient-derived glioblastoma organoids and hiPSC-derived cholinergic neurons. They will discuss their recent findings and what it means for understanding and potentially treating a tumor for which there is no known cure.
Guests:
Guo-li Ming, MD, PhD, Department of Neuroscience and Mahoney Institute for Neurosciences, Perelman School of Medicine, University of Pennsylvania
Hongjun Song, PhD, Department of Neuroscience and Mahoney Institute for Neurosciences, Perelman School of Medicine, University of Pennsylvania
Yusha Sun, Neuroscience Graduate Group, Perelman School of Medicine, University of Pennsylvania
Host:
Janet Rossant, Editor-in-Chief, Stem Cell Reports and The Gairdner Foundation
Supporting Content:
Paper link: Cholinergic neuron-to-glioblastoma synapses in a human iPSC-derived co-culture model, Stem Cell Reports
About Stem Cell Reports:
Stem Cell Reports is the open access, peer-reviewed journal of the International Society for Stem Cell Research (ISSCR) for communicating basic discoveries in stem cell research, in addition to translational and clinical studies. Stem Cell Reports focuses on original research with conceptual or practical advances that are of broad interest to stem cell biologists and clinicians.
X: @StemCellReports
About ISSCR:
With nearly 5,000 members from more than 80 countries, the International Society for Stem Cell Research (@ISSCR) is the preeminent global, cross-disciplinary, science-based organization dedicated to stem cell research and its translation to the clinic. The ISSCR mission is to promote excellence in stem cell science and applications to human health.
ISSCR Staff:
Keith Alm, Chief Executive Officer
Yvonne Fisher, Managing Editor, Stem Cell Reports
Kym Kilbourne, Director of Media and Strategic Communications
Megan Koch, Senior Marketing Manager
Jack Mosher, Scientific Director
Hunter Reed, Senior Marketing Coordinator
Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they dive deeper into the world of artificial intelligence, analyzing the types of machine learning. They also discuss deep learning, including how it works, its applications, and its advantages and challenges. From chatbot assistants to speech-to-text systems and image recognition, they explore how deep learning is powering the tools we use today. AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500 Oracle University Learning Community: https://education.oracle.com/ou-community LinkedIn: https://www.linkedin.com/showcase/oracle-university/ X: https://x.com/Oracle_Edu Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Hello and welcome to the Oracle University Podcast. I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services. Nikita: Hi everyone! Last week, we went through the basics of artificial intelligence. If you missed it, I really recommend listening to that episode before you start this one. Today, we're going to explore some foundational AI concepts, starting with machine learning. After that, we'll discuss the two main types of machine learning: supervised learning and unsupervised learning. And we'll close with deep learning. Lois: Himanshu Raj, our Principal AI/ML Instructor, joins us for today's episode. Hi Himanshu! Let's dive right in. What is machine learning? 01:12 Himanshu: Machine learning lets computers learn from examples to make decisions or predictions without being told exactly what to do. Machine learning models help computers learn from past data and examples so they can spot patterns and make smart decisions just like humans do, but faster and at scale. 01:31 Nikita: Can you give us a simple analogy so we can understand this better? Himanshu: When you train a dog to sit or fetch, you don't explain the logic behind the command. Instead, you give the dog examples and reinforce correct behavior with rewards, which could be a treat, a pat, or praise. Over time, the dog learns to associate the command with the action and reward. Machine learning learns in a similar way, but with data instead of dog treats. We feed a mathematical system called a model with multiple examples of inputs and the desired outputs, and it learns the pattern. It's trial and error, learning from experience. Here is another example. Recognizing faces. Humans are incredibly good at this, even as babies. We don't need someone to explain every detail of the face. We just see many faces over time and learn the patterns. Machine learning models can be trained the same way. We show them thousands or millions of face images, each labeled, and they start to detect patterns like eyes, nose, mouth, spacing, and different angles. So eventually, they can recognize faces they have seen before or even match new ones that are similar. So machine learning doesn't have any rules; it's just learning from examples. 
This is the kind of learning behind things like face ID on your smartphone, security systems that recognize employees, or even Facebook tagging people in your photos. 03:05 Lois: So, what you're saying is, in machine learning, instead of telling the computer exactly what to do in every situation, you feed the model with data and give it examples of inputs and the correct outputs. Over time, the model figures out patterns and relationships within the data on its own, and it can make a smart guess when it sees something new. I got it! Now let's move on to how machine learning actually works. Can you take us through the process step by step? Himanshu: Machine learning actually happens in three steps. First, we have the input, which is the training data. Think of this as showing the model a series of examples. It could be images, historical sales data, or customer complaints, whatever we want the machine to learn from. Next comes the pattern finding. This is the brain of the system where the model starts spotting relationships in the data. It figures out things like: customers who churn or leave usually contact support twice in the same month. It's not given rules, it just learns patterns based on the examples. And finally, we have output, which is the prediction or decision. This is the result of all this learning. Once trained, the computer or model can say this customer is likely to churn or leave. It's like having a smart assistant that makes fast, data-driven guesses without needing step-by-step instructions. 04:36 Nikita: What are the main elements in machine learning? Himanshu: In machine learning, we work with two main elements, features and labels. You can think of features as the clues we provide to the model, pieces of information like age, income, or product type. And the label is the solution we want the model to predict, like whether a customer will buy or not. 04:55 Nikita: Ok, I think we need an example here. Let's go with the one you mentioned earlier about customers who churn. Himanshu: Imagine we have a table with data like customer age, number of visits, whether they churned or not. And each of these rows is one example. The features are age and visit count. The label is whether the customer churned, that is yes or no. Over time, the model might learn patterns like: customers under 30 who visit only once are more likely to leave. Or frequent visitors above age 45 rarely churn. If features are the clues, then the label is the solution, and the model is the brain of the system. It's what the machine learning system builds after learning from many examples, just like we do. And again, the better the features are, the better the learning. ML is just looking for patterns in the data we give it. 05:51 Lois: Ok, we're with you so far. Let's talk about the different types of machine learning. What is supervised learning? Himanshu: Supervised learning is a type of machine learning where the model learns from the input data and the correct answers. Once trained, the model can use what it learned to predict the correct answer for new, unseen inputs. Think of it like a student learning from a teacher. The teacher shows labeled examples like an apple and says, "this is an apple." The student receives feedback whether their guess was right or wrong. Over time, the student learns to recognize new apples on their own. And that's exactly how supervised learning works. It's learning from feedback using labeled data and then making predictions. 
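For readers who want to see the churn example as code, here is a minimal, purely illustrative sketch of supervised learning. The numbers and the use of scikit-learn are assumptions made for illustration; the episode itself names no library or dataset.

```python
# Illustrative sketch of supervised learning on the churn example.
# Features (the clues) are [age, number_of_visits]; the label (the answer)
# is 1 if the customer churned, 0 if they stayed. Data and library choice
# are assumptions, not taken from the episode.
from sklearn.linear_model import LogisticRegression

X = [
    [25, 1],   # young customer who visited once
    [28, 1],
    [31, 2],
    [47, 6],   # older, frequent visitor
    [52, 8],
    [60, 10],
]
y = [1, 1, 1, 0, 0, 0]   # 1 = churned, 0 = stayed

model = LogisticRegression()
model.fit(X, y)                     # learn the pattern from labeled examples

print(model.predict([[26, 1]]))     # expected to lean toward churn  -> [1]
print(model.predict([[55, 9]]))     # expected to lean toward staying -> [0]
```

Swapping the label for a continuous number, such as a house price, turns the same recipe into regression, which Himanshu describes next.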
06:38 Nikita: Ok, so supervised learning means we train the model using labeled data. We already know the right answers, and we're essentially teaching the model to connect the dots between the inputs and the expected outputs. Now, can you give us a few real-world examples of supervised learning? Himanshu: First, house price prediction. In this case, we give the model features like square footage, location, and number of bedrooms, and the label is the actual house price. Over time, it learns how to predict prices for new homes. The second one is email: spam or not. In this case, features might include words in the subject line, sender, or links in the email. The label is whether the email is spam or not. The model learns patterns to help us filter our inbox, as you may have seen in your Gmail inbox. The third one is cat versus dog classification. Here, the features are the pixels in an image, and the label tells us whether it's a cat or a dog. After seeing many examples, the model learns to tell the difference on its own. Let's now focus on one very common form of supervised learning, that is regression. Regression is used when we want to predict a numerical value, not a category. In simple terms, it helps answer questions like, how much will it be? Or what will the value be? For example, predicting the price of a house based on its size, location, and number of rooms. Or estimating next quarter's revenue based on marketing spend. 08:18 Lois: Are there any other types of supervised learning? Himanshu: While regression is about predicting a number, classification is about predicting a category or type. You can think of it as the model answering: is this yes or no, or which group does this belong to? Classification is used when the goal is to predict a category or a class. Here, the model learns patterns from historical data where both the input variables, known as features, and the correct categories, called labels, are already known. 08:53 Ready to level-up your cloud skills? The 2025 Oracle Fusion Cloud Applications Certifications are here! These industry-recognized credentials validate your expertise in the latest Oracle Fusion Cloud solutions, giving you a competitive edge and helping drive real project success and customer satisfaction. Explore the certification paths, prepare with MyLearn, and position yourself for the future. Visit mylearn.oracle.com to get started today. 09:25 Nikita: Welcome back! So that was supervised machine learning. What about unsupervised machine learning, Himanshu? Himanshu: Unlike supervised learning, here, the model is not given any labels or correct answers. It is just handed the raw input data and left to make sense of it on its own. The model explores the data and discovers hidden patterns, groupings, or structures on its own, without being explicitly told what to look for. And it's more like a student learning from observations and making their own inferences. 09:55 Lois: Where is unsupervised machine learning used? Can you take us through some of the use cases? Himanshu: The first one is product recommendation. Customers are grouped based on shared behavior even without knowing their intent. This helps show what other users like you also prefer. The second one is anomaly detection. Unusual patterns, such as fraud, network breaches, or manufacturing defects, can stand out, all without needing thousands of labeled examples. And the third one is customer segmentation. 
Customers can be grouped by purchase history or behavior to tailor experiences, pricing, or marketing campaigns. 10:32 Lois: And finally, we come to deep learning. What is deep learning, Himanshu? Himanshu: Humans learn from experience by seeing patterns repeatedly. The brain learns to recognize an image by seeing it many times. The human brain contains billions of neurons. Each neuron is connected to others through synapses. Neurons communicate by passing signals. The brain adjusts connections based on repeated stimuli. Deep learning was inspired by how the brain works, using artificial neurons and connections. Just like our brains need a lot of examples to learn, so do the deep learning models. The more layers and connections there are, the more complex the patterns it can learn. The brain is not hard-coded. It learns from patterns. Deep learning follows the same idea. Metaphorically speaking, a deep learning model can have over a billion neurons, more than a cat's brain, which has around 250 million neurons. Here, the neurons are mathematical units, often called nodes, or simply units. Layers of these units are connected, mimicking how biological neurons interact. So deep learning is a type of machine learning where the computer learns to understand complex patterns. What makes it special is that it uses neural networks with many layers, which is why we call it deep learning. 11:56 Lois: And how does deep learning work? Himanshu: Deep learning is all about finding high-level meaning from low-level data layer by layer, much like how our brains process what we see and hear. A neural network is a system of connected artificial neurons, or nodes, that work together to learn patterns and make decisions. 12:15 Nikita: I know there are different types of neural networks, with ANNs or Artificial Neural Networks being the one for general learning. How is it structured? Himanshu: There is an input layer, which is the raw data, which could be an image, a sentence, or numbers; a hidden layer where the patterns are detected or the features are learned; and the output layer where the final decision is made. For example, given an image, is this a dog? A neural network is like a team of virtual decision makers, called artificial neurons, or nodes, working together, which takes input data, like a photo, and passes it through layers of neurons. And each neuron makes a small judgment and passes its result to the next layer. This process happens across multiple layers, learning more and more complex patterns as it goes, and the final layer gives the output. Imagine a factory assembly line where each station, or the layer, refines the input a bit more. By the end, you have turned raw parts into something meaningful. And this is a very simple analogy. This structure forms the foundations of many deep learning models. More advanced architectures, like convolutional neural networks, CNNs, for images, or recurrent neural networks, RNNs, for sequences, build upon this basic idea. So, what I meant is that the ANN is the base structure, like LEGO bricks. CNNs and RNNs use those same bricks, but arrange them in ways that are better suited for images, videos, or sequences like text or speech. 13:52 Nikita: So, why do we call it deep learning? Himanshu: The word deep in deep learning does not refer to how profound or intelligent the model is. It actually refers to the number of layers in the neural network. It starts with an input layer, followed by hidden layers, and ends with an output layer. 
The layers are called hidden, in the sense that these are black boxes and their data is not visible or directly interpretable to the user. A model with only one hidden layer is called shallow learning. As data moves, each layer builds on what the previous layer has learned. So layer one might detect a very basic feature, like edges or colors in an image. Layer two takes those edges and starts forming shapes, like curves or lines. And layer three uses those shapes to identify complete objects, like a face, a car, or a person. This hierarchical learning is what makes deep learning so powerful. It allows the model to learn abstract patterns and generalize across complex data, whether it's visual, audio, or even language. And that's the essence of deep learning. It's not just about layers. It's about how each layer refines the information and moves one step closer to understanding. 15:12 Nikita: Himanshu, where does deep learning show up in our everyday lives? Himanshu: Deep learning is not just about futuristic robots; it's already powering the tools we use today. So think of when you interact with a virtual assistant on a website. Whether you are booking a hotel, resolving a banking issue, or asking customer support questions, behind the scenes, deep learning models understand your text, interpret your intent, and respond intelligently. There are many real-life examples, for example, ChatGPT, Google's Gemini, any airline website's chatbot, or a bank's virtual agent. The next one is speech-to-text systems. For example, if you have ever used voice typing on your phone, dictated a message to Siri, or used Zoom's live captions, you have seen this in action already. The system listens to your voice and instantly converts it into text. And this saves time, enhances accessibility, and helps automate tasks, like meeting transcriptions. Again, you may have seen real-life examples, such as Siri, Google Assistant, autocaptioning on Zoom, or YouTube Live subtitles. And lastly, image recognition. For example, hospitals today use AI to detect early signs of cancer in X-rays and CT scans that might be missed by the human eye. Deep learning models can analyze visual patterns, like a suspicious spot on a lung X-ray, and flag abnormalities faster and more consistently than humans. Self-driving cars recognize stop signs, pedestrians, and other vehicles using the same technology. So, for example, cancer detection in medical imaging, Tesla's self-driving navigation, and security systems that recognize faces are very prominent examples of image recognition. 17:01 Lois: Deep learning is one of the most powerful tools we have today to solve complex problems. But like any tool, I'm sure it has its own set of pros and cons. What are its advantages, Himanshu? Himanshu: The first is high accuracy. When trained with enough data, deep learning models can outperform humans. For example, again, spotting early signs of cancer in X-rays with higher accuracy. Second is handling of unstructured data. Deep learning shines when working with messy real-world data, like images, text, and voice. And it's why your phone can recognize your face or transcribe your speech into text. The third one is automatic pattern learning. Unlike traditional models that need hand-coded features, deep learning models figure out important patterns by themselves, making them extremely flexible. And the fourth one is scalability. 
Once trained, deep learning systems can scale easily, serving millions of customers, like Netflix recommending movies personalized to each one of us. 18:03 Lois: And what about its challenges? Himanshu: The first one is that it is data and resource intensive. So deep learning demands huge amounts of labeled data and powerful computing hardware, which means high cost, especially during training. The second is a lack of explainability. These models often act like a black box. We know the output, but it's hard to explain exactly how the model reached that decision. This becomes a problem in areas like health care and finance where transparency is critical. The third challenge is vulnerability to bias. If the data contains biases, like favoring certain groups, the model will learn and amplify those biases unless we manage them carefully. The fourth and last challenge is that it's harder to debug and maintain. Unlike a traditional software program, it's tough to manually correct a deep learning model if it starts behaving unpredictably. It requires retraining with new data. So deep learning offers powerful opportunities to solve complex problems using data, but it also brings challenges that require careful strategy, resources, and responsible use. 19:13 Nikita: We're taking away a lot from this conversation. Thank you so much for your insights, Himanshu. Lois: If you're interested in learning more, make sure you log into mylearn.oracle.com and look for the AI for You course. Join us next week for part 2 of the discussion on AI Concepts & Terminology, where we'll focus on Data Science. Until then, this is Lois Houston… Nikita: And Nikita Abraham signing off! 19:39 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.
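As a companion to the layered structure Himanshu describes in this episode (an input layer, hidden layers, and an output layer), here is a minimal, untrained forward pass written in plain NumPy. The layer sizes, random weights, and activation choices are illustrative assumptions, not anything specified in the episode; a real network would learn its weights from data rather than start from random ones.

```python
# Illustrative only: a tiny "input -> hidden -> hidden -> output" forward pass.
# Weights are random, so the network is untrained; the point is to show how
# each layer transforms the previous layer's output, as described in the episode.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)          # simple non-linearity applied between layers

# Layer sizes: 4 raw inputs -> 8 hidden units -> 8 hidden units -> 1 output
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

x = np.array([0.2, -1.3, 0.7, 0.05])        # raw input (e.g., pixel values)

h1 = relu(x @ W1 + b1)                      # hidden layer 1: basic features
h2 = relu(h1 @ W2 + b2)                     # hidden layer 2: combinations of features
score = 1 / (1 + np.exp(-(h2 @ W3 + b3)))   # output layer: a 0-to-1 score

print(score)   # e.g., an "is this a dog?" probability-like number
```

Stacking more hidden layers, or arranging the same units into convolutional or recurrent patterns, is what turns a sketch like this into the CNNs and RNNs mentioned in the episode.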
In this episode I'm interviewing a scientist who is trying to revolutionize computing by replacing power hungry silicon gates with highly efficient biological neurons. For a present-day silicon-based computer to approach the processing power of a human brain would take megawatts of power—we are seeing some of these inefficiencies in AI systems that require massive amounts of energy to run. A human brain can do the same thing on less than 20 watts. My guest today is a scientist who is working on the interface between brain and machine. Dr. Ewelina Kurtys is an entrepreneur with a PhD in neuroscience. After academia, she transitioned into business development and technology commercialization, advising tech companies on sales, partnerships, and market strategy. She founded Ekai Ltd to support innovation and scale in science-driven companies. Her work spans advisory roles, go-to-market planning, and translating complex R&D into real-world impact. She also speaks publicly on innovation, neuroscience, and the intersection of science and entrepreneurship. Check out the video versions on my YouTube channel. Chat with me on Facebook.
Ashley Clark discusses her paper, “Oculomotor Contributions to Foveal Crowding,” published in Vol. 44, Issue 48 of JNeurosci, with Megan Sansevere from SfN's Journals' staff. Then, Sara Patterson discusses her paper, “Cone-Opponent Ganglion Cells in the Primate Fovea Tuned to Noncardinal Color Directions,” published in Vol. 44 Issue 18 of JNeurosci. Find the rest of the Spotlight collection here. With special guests: Ashley Clark and Sara Patterson Hosted by: Megan Sansevere On Neuro Current, we delve into the stories and conversations surrounding research published in the journals of the Society for Neuroscience. Through its publications, JNeurosci, eNeuro, and the History of Neuroscience in Autobiography, SfN promotes discussion, debate, and reflection on the nature of scientific discovery, to advance the understanding of the brain and the nervous system. Find out more about SfN and connect with us on BlueSky, X, Instagram, and LinkedIn.
UCLA's Avishek Adhikari, PhD, presents new research on the role of GABAergic neurons in the brain's periaqueductal gray (PAG) region. Previously studied for their involvement in fear and defensive behaviors, these neurons were found to promote food-seeking behavior when activated—even in fully fed mice. Using calcium imaging and optogenetics, Adhikari's team discovered that these neurons are active during food approach but suppressed during eating. The effect is stronger for high-value foods like chocolate or crickets and depends on the mouse's prior experience with that food. A key finding is that these neurons influence behavior through a specific projection to the zona incerta, a subthalamic region. Rather than signaling hunger, this pathway appears to drive food seeking based on reward value, highlighting a new motivational circuit in the brain. Series: "Stem Cell Channel" [Health and Medicine] [Science] [Show ID: 40444]
By David Stephen, who looks at digital therapeutics in this article. A priority for AI innovation in medicine is not simply drug discovery or protein structures, but a solution for the side-effects of medications. There are already several useful medications for different conditions, but some side-effects can be so cataclysmic that eradicating them would be a core leap of AI for humanity. Why do medications have side-effects? If an answer is that targeting [say] a receptor could be therapeutic but would derail some other functions, then could a problem be the lack of an overall molecular configuration for functions? Can Digital Therapeutics help? Simply, say the functions of a particular segment of digestion have a molecular configuration of A1B2C3D4, where the letters are respective molecules and the figures are their volumes. If, say, the receptors for molecule A are inhibited elsewhere, making it unavailable at the right volume for the digestive function, could it be possible to map the general configuration of molecules to functions and then prospect how to mitigate or eradicate side-effects? While it is true that the basis of functions is much more complicated, sometimes involving enzymes, proteins, ions, and so forth, it is possible to seek a rough map of functions - by ions and molecules - from several empirical data in biology. This could begin with brain science for psychiatric medications, such that data of conditions, targets, side-effects are explored for patterns, with AI. Why the brain? The brain is complex but can be functionally reductive. While there are several components of the brain for functions, neurons are correlated with most known functions - and experiences. But it is neurons plus their electrical and chemical signals. Electrical signals are ions. Chemical signals are molecules. Neurons and their signals are not just responsible for memory, feelings and emotions; they also mechanize regulation of internal senses [or bodily control]. They are also responsible for measures of those functions, or attributes like attention, subjectivity, intent and so forth, conceptually. Since signals are far more flexible than neurons [which are cells], and they [signals] transport [and are directly correlated with] functions, it can be theorized that signals are the basis for functions. So, it is possible to develop a conceptual model of how the mind works by ions and molecules. These could then be useful to develop the basis for several functions, with AI, placing therapies and potential side-effects. Simply, to mitigate the side-effects of drugs [starting with psychiatry] using AI, it is possible to zero in [conceptually] on the ionic-molecular configurations that specify functions. Then to look at where some of those ions or molecules are present elsewhere in the body or in other brain functions, to predict what or where else might be affected, to determine how to increase the drug candidates for some conditions since configurations are already prognosticated, shaping far-reaching pre-clinical progress with AI. This can later be extended to other conditions and other parts of the body. However, psychiatry is a medical opacity, needing biomarkers that could be initially computational, and medications with fewer or less intense side-effects. There is a recent [July, 15, 2025] open question in The New Yorker, Can A.I. 
Find Cures for Untreatable Diseases - Using Drugs We Already Have?, stating that, "Doctors have long prescribed off-label medications, usually through trial and error or in clinical trials, but now A.I. appears poised to supercharge the practice. Earlier models needed examples of effective drugs to learn, so they were unlikely to identify promising candidates unless treatments already existed. But more advanced models can conduct what's called "zero-shot inference," nominating drug candidates for conditions without any known treatments. The tally of conceivable drug-disease combinations numbers in the tens of milli...
Send us a text
How brain synapses work and fuel themselves with fat.
Episode Summary: Dr. Timothy Ryan talks about the high energy costs of synapses and the role of mitochondria and glycolysis, and challenges the long-held view that the brain relies solely on glucose by discussing new evidence that neurons burn fats from lipid droplets for fuel, especially during activity. The talk touches on metabolic flexibility, links to epilepsy treatments like ketogenic diets, neurodegenerative diseases, and future research on brain energy sources.
About the guest: Timothy Ryan, PhD, is a professor of biochemistry at Cornell University. His research focuses on the molecular mechanisms of synaptic transmission, particularly vesicle recycling and the bioenergetics that power neural communication.
Discussion Points:
Synapses are tiny structures with ~100 vesicles per site (on average), converting electrical signals to chemical ones.
Brain tissue is energy-hungry due to trillions of synapses (in humans), relying on local mitochondria (present in only ~half of synapses) and glycolysis.
Vesicles use proton pumps and transporters to concentrate neurotransmitters, requiring ATP to maintain gradients.
Neurons are metabolically fragile; cutting fuel supply quickly impairs synapses.
Dogma held that brains don't burn fats, but new work shows neurons form lipid droplets (fat stores) that are invisible because they are constantly used for energy via beta-oxidation.
Silencing neurons builds lipid droplets like resting muscle; activity speeds their breakdown, indicating demand-driven fat use.
Inhibiting neuron-specific fat-processing enzymes accumulates droplets and induces torpor (a hibernation-like state) in animals, signaling metabolic stress.
Ketogenic diets aid epilepsy by shifting to ketones; fats may explain this, with potential ties to aging and neurodegeneration like Alzheimer's.
The brain may be a "hybrid" fuel user (glucose + fats), with open questions on fat sources and roles in cognition or disease protection.
Related episode: M&M 158: Ketosis & Ketogenic Diet: Brain & Mental Health, Metabolism, Diet & Exercise, Cancer, Diabetes | Dominic D'Agostino
*Not medical advice.
Support the show
All episodes, show notes, transcripts, and more at the M&M Substack
Affiliates:
KetoCitra—Ketone body BHB + potassium, calcium & magnesium, formulated with kidney health in mind. Use code MIND20 for 20% off any subscription (cancel anytime)
Lumen device to optimize your metabolism for weight loss or athletic performance. Code MIND for 10% off
Readwise: Organize and share what you read. 60 days FREE through link
SiPhox Health—Affordable at-home blood testing. Key health markers, visualized & explained. Code TRIKOMES for a 20% discount.
MASA Chips—delicious tortilla chips made from organic corn & grass-fed beef tallow. No seed oils or artificial ingredients. Code MIND for 20% off
For all the ways you can support my efforts
In the second episode of this two-part series, Dr. Jeff Ratliff and Dr. Claire Henchcliffe discuss how she advises patients and families on the future of stem cell therapy and Parkinson disease. Show reference: https://www.nature.com/articles/s41586-025-08845-y
In the first part of this two-part series, Dr. Jeff Ratliff and Dr. Claire Henchcliffe discuss the important lessons learned from these results. Show reference: https://www.nature.com/articles/s41586-025-08845-y
Dr. Jeff Ratliff talks with Dr. Claire Henchcliffe about the study's key findings while emphasizing the importance of educating patients about stem cell therapies and the ongoing advancements in this field. Read the related article in Nature. Disclosures can be found at Neurology.org.
What if the future of computing isn't silicon... but neurons?
In this episode, we explore a radical new frontier where biology and technology come together — programming living neurons as processors.
I am joined by Dr. Ewelina Kurtys of FinalSpark, a pioneering scientist bridging neuroscience and AI. We unpack the astonishing potential of bioprocessors—miniaturised clusters of human neurons that can process and store data while consuming a fraction of the energy traditional systems demand.
Ewelina reveals how her team is learning to "program" neurons using electrical and chemical signals to store information — not to replicate the brain, but to unlock new, energy-efficient forms of computing.
We cover the challenges of working with living cells, how neuron-based processors may outpace silicon chips in energy efficiency, and what it means for AI, medicine, and beyond.
From sci-fi to serious science, this episode pushes the limits of what's possible in computational neuroscience.
“We're building processors from living cells.” – Dr. Ewelina Kurtys
You'll hear about:
· Neurons being significantly more energy efficient than traditional computing methods.
· How FinalSpark aims to program neurons to perform computational tasks.
· Why maintaining the health of neurons in a lab setting is crucial for research.
· Investment being needed to accelerate the development of neuronal computing.
· Ways neuronal computing has potential applications in drug development and brain interfaces.
· Ethical considerations that arise when discussing programming living neurons.
· How the future of computing may involve a blend of biological and digital technologies.
Connect with Dr. Ewelina Kurtys
LinkedIn - https://www.linkedin.com/in/ewelinakurtys/
Website - https://www.ewelinakurtys.com/
Connect with me:
LinkedIn: https://www.linkedin.com/in/markdavison100/
If you need any lab equipment:
Grant Instruments: https://www.grantinstruments.com/
Grant Instruments on LinkedIn: https://www.linkedin.com/company/grant-instruments-cambridge-ltd/
In this episode of Spinal Cast, we're joined by Dr. Murray Blackmore – renowned for making complex biology accessible. He breaks down the incredible science behind spinal cord repair, from neurons and axons to the genes that guide regeneration. His groundbreaking research explores how mimicking gene activity could help severed axons reconnect, offering real hope for recovery after spinal cord injury. We also discuss a critical issue: potential cuts to NIH and federal research funding. Dr. Blackmore shares what's at stake if these cuts go through – and why the future of innovation depends on continued support. This is a must listen!

A professional bio for Dr. Blackmore can be found here: https://www.marquette.edu/biomedical-sciences/directory/murray-blackmore.php

Special thanks to Dr. Blackmore for being our guest! This production is a collaborative effort of volunteers working to create a quality audio and visual experience around the subject of spinal cord injury. A special shout out of appreciation to Clientek for providing studio space and top-notch recording equipment. Most importantly, thank YOU for being part of the Spinal Cast audience!

Interested in watching these episodes?! Check out our YouTube playlist! - https://youtube.com/playlist?list=PL40rLlxGS4VzgAjW8P6Pz1mVWiN0Jou3v

If you'd like to learn more about the Morton Cure Paralysis Fund you can visit our website at - https://mcpf.org/
Donations are always appreciated - https://mcpf.org/you-can-help/
Have you ever wondered what actually makes insomnia recovery possible? In this episode, I'm going to talk about exactly that. We'll explore one of the most hopeful principles of neuroplasticity — and why it's the reason getting beyond insomnia isn't just possible… but inevitable when you understand how the brain works.

You see, the human brain has the amazing capacity to change associations and shift links. And truly, is there anything more miraculous than that?

In this episode, you'll learn how:
- Your responses play a key role in rewiring the brain
- Insomnia isn't a sign that something is broken, but a conditioned pattern rooted in hyperarousal
- Changing your brain's association with wakefulness can be a turning point in recovery

I also share a personal story that illustrates how learned fear responses can shift, and why that same shift is completely possible for you, too.

Enjoy!

Mentioned Resources:
Ep 32: “Neurons that fire together, wire together.”

Connect with Beth:
We are excited to host Stephanie and Giancarlo Canavesio on this episode of the Mangu.tv podcast, interviewed by long-term friend and poet, Alexcia Panay. Stephanie is a psychotherapist devoted to deep healing and conscious transformation. Through her work with presence embodied, she weaves compassionate inquiry with meditative practices to support clients in cultivating self-awareness and authentic expression. Trained by Dr. Gabor Maté, Stephanie brings a trauma-informed lens to her work, specialising in trauma, emotional patterns, and addiction. She empowers individuals to reconnect with their inner truth and vitality. Stephanie's presence is intuitive and grounded, making her a quiet but powerful force for lasting change.

Giancarlo is a multidisciplinary entrepreneur focused on meaningful impact. He produced Neurons to Nirvana and 2012: Time for Change through Mangusta Productions, exploring psychedelics and sustainability. He co-founded Terra Viva, a regenerative farm, and founded Difuso Ibiza, revitalising historic Sa Penya homes into spaces for community and transformation. Giancarlo bridges media, land stewardship and regenerative living, combining creativity, discipline, and purpose. His ventures foster innovation, conscious living, and resilience. He's not just building businesses, but nurturing cultural and ecological renewal.

Giancarlo and Stephanie delve into their story, the things they most admire about each other, tools for a healthy relationship, and how they alchemised difficult moments in their marriage to grow both together and individually. They discuss ISTA, Compassionate Inquiry, and other modalities, and how these approaches have enabled them to expand in their relationship and as parents.
In this Huberman Lab Essentials episode, I explore the sensations of pain and pleasure, explaining how they are sensed in the body and interpreted by the brain as well as methods to control their intensity. I discuss both the hardwired mechanisms and subjective factors that shape an individual's perception of pain and pleasure. I also explain why pain thresholds vary from person to person and discuss various treatments for pain management such as acupuncture and supplements. Finally, I explain the role of key neurochemicals like dopamine and serotonin in mediating our experience of pain and pleasure. Read the episode show notes at hubermanlab.com. Thank you to our sponsors AG1: https://drinkag1.com/huberman LMNT: https://drinklmnt.com/huberman Eight Sleep: https://eightsleep.com/huberman Timestamps 00:00:00 Pain & Pleasure 00:00:39 Skin, Appetitive vs Aversive Behaviors 00:02:10 Skin, Neurons & Brain 00:04:46 Brain Interpretation, Homunculus, Two-Point Discrimination Test 00:07:43 Pain & Pleasure, Subjective Interpretation 00:09:53 Sponsor: AG1 00:11:30 Tool: Pain & Expectation 00:13:08 Pain Threshold 00:14:46 Heat & Cold, Tool: Moving into Cold or Hot Environments 00:16:37 Subjective Pain, Psychosomatic, Fibromyalgia, Whole Body Pain, Acetyl-L-carnitine 00:20:54 Acupuncture, Electroacupuncture, Pain Management 00:23:44 Sponsors: LMNT & Eight Sleep 00:26:36 Red Heads & Pain Threshold, Endorphins 00:28:32 Improving Pain Threshold, Dopamine 00:30:00 Pleasure, Dopamine, Serotonin; Depression, Anti-depressants 00:34:12 Pleasure & Pain Balance, Dopamine, Addiction 00:36:08 Recap & Key Takeaways Disclaimer & Disclosures Learn more about your ad choices. Visit megaphone.fm/adchoices
Cells in the bogong moth brain respond to celestial landmarks to orient the insects in the direction they need to go.
Movement-sensing neurons that target the striatum influence a mouse's choice of action by favoring routine.
In this episode we explore spindle neurons, also known as von Economo neurons (VENs), discovered by Constantine von Economo in the 1920s. These large, cylinder-shaped cells, found in the anterior cingulate cortex (ACC), the anterior insula (also called the frontoinsula), and rarely the dorsolateral prefrontal cortex (DLPFC), are crucial for rapid communication in social behaviors. Three key points stand out: first, spindle neurons are located almost exclusively in the ACC and insula, the core hubs of the salience network, which is disrupted in autism as early as six weeks of age, as shown in a UCLA study. Second, these neurons are primarily involved in social behaviors, underpinning complex interactions in humans and other social species. Third, they are rare, found only in highly social animals like great apes, cetaceans, and possibly elephants, highlighting their evolutionary significance. In autism, increased spindle neuron density in the insula of children aged 4–14 is linked to early brain overgrowth, though this normalizes in adulthood due to pruning.

The salience network, driven by the ACC and anterior insula, relies on spindle neurons to process relevant stimuli, integrate sensory data, and guide social-emotional responses. In autism, hyper-connectivity between the insula and sensory regions causes sensory hypersensitivity, while hypo-connectivity with prefrontal areas, including the DLPFC, impairs social cognition and adaptive behaviors. This disrupts the network's ability to switch between the default mode network and executive functions, often trapping autistic individuals in an internal world. Spindle neurons are also linked to disorders like frontotemporal dementia, schizophrenia, Alzheimer's, and emotional dysregulation.

Daylight Computer Company
use "autism" for $25 off at https://buy.daylightcomputer.com/RYAN03139

Chroma Light Devices
use "autism" for 10% discount at https://getchroma.co/?ref=autism

0:00 Chroma Light Devices, use "autism" for 10% discount
3:10 Spindle Neurons; Interesting Point #1: Salience Network
6:03 Interesting Point #2: Rapid Communication & Social Behaviors
7:11 "Rediscovery" of Spindle Neurons in 1990s by John Allman
8:30 Interesting Point #3: Highly Social Species
10:57 Neuropeptide Y & Monoamines: Dopamine & Serotonin
11:45 Morphology of Spindle Neurons & In Utero-Children; Social Learning
12:54 Species with Spindle Neurons
14:47 Small Clusters within the dlPFC
15:08 Salience Network, ACC, Anterior Insula; Fulcrum for Default Mode and Executive Networks
22:48 Daylight Computer Company, use "autism" for $25 discount
27:08 Diseases/Conditions associated with Spindle Neurons
29:26 The Autistic Phenotype (!!) -- The Whole Reason for the Podcast
32:25 Reviews/Ratings, Contact Info

X: https://x.com/rps47586
YT: https://www.youtube.com/channel/UCGxEzLKXkjppo3nqmpXpzuA
email: info.fromthespectrum@gmail.com
Streamlining the problem from 3D to 1D eases the expedition—a strategy the study investigators deployed to rewire an olfactory circuit in flies.
There is a river of wisdom that's been flowing through human history in every culture since the beginning of civilization. It's the wise understandings of those rare individuals who were able to remember something we so often forget: who we really are as human beings. In this episode, we'll dip into that timeless stream, that many have called the Wisdom of the Ages, and explore how it flows directly into the heart of our new project, NeuroHarmonics. This work is about more than just ideas; it's a guide that you can use to bring harmony into your own intelligence, an inner alignment that can quietly, yet profoundly transform your life.

So, basically, what is the Wisdom of the Ages all about? Well, although the term may sound a bit lofty or poetic, it's far from just a throwaway line. It points to something deeply real – an enduring thread of understanding that weaves through cultures, continents and centuries. It's timeless not because it ignores the changes of history, but because it speaks to something that never changes with the times: the essence of being human.

Let's look at it this way - throughout the long arc of human history, from the ancient river valleys of India and Mesopotamia to the mountains of China, the deserts of the Middle East, and the forests of the Americas, there have always been extraordinary men and women who saw the essence of life more clearly than the rest of us. They pierced the surface of things. And even while living in the ever-shifting world of change and impermanence, they were able to reach something that they said was eternal. Their lives, their words, and often just their very presence spoke of something greater than themselves - something vast and invisible, and yet deeply and intimately known. Some became renowned spiritual teachers: Krishna, Ram, Buddha, Jesus, Moses, Mohammed. Others appeared as philosophers, sages, mystics, and shamans. Many left poems, stories, songs, and scriptures, depicting their glimpses into the higher realms of understanding. Some of their names may be less well known, but their examples are no less luminous.

Of course, on the surface, these wise ones seemed vastly different. They spoke different languages, wore different clothes, lived in different lands, and practiced different rituals. But remarkably, the essence of their message was the same. To put their vast teachings into a few simple words: there is something beyond this world. Something infinite. You can call it God. Or the Divine. Or Truth, Spirit, Source, or just the Infinite. There may be a thousand names for it—but the names don't matter. In fact, many of these teachers insisted that words simply can't capture it. Because this Presence—this vast, formless essence—is beyond concept, beyond logic, and beyond the reach of ordinary human comprehension.

Why? It's actually quite simple. For all its brilliance, human intelligence is still finite. And the finite, by its very nature, can never truly grasp the Infinite. It just can't be done. Test your own mind as an example. Try picturing a box that is so large, there is no space outside of it. Or try describing something that has no beginning and no end. Or tell me the biggest number in the world. You can't, because there isn't one. Whatever number you come up with, you can always add one more and it gets bigger. So there's no such thing. That's the thing about infinity. There's no edge, no boundary, no final point. And when we try to wrap our minds around it, the circuits start to smoke and the brain just watts out.
Because it's not built to contain the Infinite. So according to the wisdom tradition, this thing that can't be understood or even named - exists. For our sake here, let's just call it the Infinite, a term that is relatively devoid of the tarnishing nature of human religious tribalism. But that's just one of the five thousand opinions my mind has churned out in the last hour.

Now, what does the Wisdom of the Ages tell us about our relationship with this infinite presence, this reality that you can't define, draw or capture in a book? Well, in the simplest terms, it tells us this: we can experience it. And not only that, we can grow into it. Like a plant reaching toward sunlight, we are drawn toward that light, that warmth and that truth that seems to emanate from it. And this idea of growth is where the Wisdom of the Ages begins. Because it's not just about belief – it's about transformation. It speaks of a journey toward inner realization, where you begin to see that you are not merely a body, not just a personality, not your thoughts, opinions, or accomplishments. You are something deeper. In essence, something sacred, something eternal that has temporarily taken human form—for the purpose of learning, of growing, of remembering. And ultimately, in a very real and quiet way... for returning.

With that being said, the Wisdom of the Ages does rest on a set of core understandings—fundamental insights that form its foundation. And while these truths have appeared in every form of civilization throughout history, they are not relics of the past. Far from it. This wisdom is alive. It's woven from the highest human understandings about life—ancient, modern, and everything in between. So, let's take a brief look at some of its basic tenets. We'll touch on nine of them here, though the first one needs a little more attention than the rest.

The first core teaching is Impermanence—and at first, it can be a hard one to face. It simply says this: everything changes, and everything ends. Ourselves included. Look around with clear eyes, and it becomes obvious. From the rise and fall of empires to fleeting joys and sorrows, nothing stays the same. As the Buddha put it: “All conditioned things are impermanent. Work out your own salvation with diligence.” When we cling to what must pass, we suffer. But when we accept impermanence, we begin to live with grace.

Why does this matter? Because so much of our pain comes from forgetting that. We build our identities on outer things—titles, possessions, opinions, appearances, money—and we defend those identities as if they were permanent. But they're not. And this is what the sages warned us about. It's not that success, wealth, or recognition are wrong. It's that when we make them the foundation of who we are, we build on sand. All of it—status, stuff, praise, even the ego itself—rises and falls. Sparkles, then fades. Seems solid, then vanishes. And when our sense of self is tied to what vanishes, we suffer. We become anxious, greedy, and ultimately disappointed. We chase mirages, hoping they'll fulfill us—only to find out that they never really could. And worst of all, we miss the deeper reason we came here: to remember who we truly are, and to feel the joy that naturally comes with that remembrance. At least, that's what the wisdom says. Personally, I think it has a nice ring to it.

2. You Are Not Your Thoughts

The second insight is deceptively simple, but not so easy to live: You are not your thoughts.
From the Upanishads to modern psychology, the message is clear: You are not the mental chatter, the looping stories, or the voices of doubt and fear. You're not your résumé, your wounds, or the roles you've played. Beneath all that noise lives a deeper awareness that is luminous, spacious, and free. Most of what runs through our minds didn't originate with us anyway. We picked it up from parents, culture, trauma, media. But we end up thinking that these random thoughts are who we really are. And some of the major experts in the field haven't had such an elevated opinion of our abilities in navigating the thought field. William James, for example, who is often called the father of modern psychology, once said: “Most people think they are thinking when they are merely rearranging their prejudices.” The deeper self lives in a realm beyond all that. And rediscovering it is part of waking up.

3. Stillness Is the Gateway to Higher Knowing

Stillness isn't just the absence of noise. It's the presence of something greater. As the old biblical phrase goes, “Be still, and know…” In the Taoist tradition, stillness is the source of right action—what they call wu wei, or effortless movement that flows from deep inner alignment. This knowing is not vague spirituality. It's direct experience. Not a theory, but a felt presence. Stillness is where insight arises—gently, precisely, and often when we least expect it. And the Wisdom of the Ages doesn't ask for blind belief. It invites direct experience. We all know that there is a world of difference between actually eating a delicious meal and just reading the menu.

4. Love Is the Deepest Truth and Kindness Is the Highest Power

At the heart of every great tradition is this simple truth: Love is the essence of life. Not just romance or sentiment, but love as a radiant, unshakable presence. When that love moves into action, it manifests as kindness. And kindness doesn't mean weakness—it's strength under wisdom's guidance. To grow in this kind of love is to become more fully human, which is another term for more connected to the divine.

5. You Become What You Practice

This one is carved into every tradition—and now verified by neuroscience: What you repeat gets stronger. In ancient terms: “As you sow, so shall you reap.” Modern neuroscience puts it this way: Neurons that fire together, wire together. This means that your attention—and your habits—literally shape the architecture of your brain. So, what you practice daily—whether it's judgement and fear, or gratitude and patience - becomes the blueprint of your inner life.

6. Gratitude Opens the Heart and Expands Consciousness

The ancient ones knew it. And now neuroscience confirms it. When you begin to see life as a gift, everything starts to shift; and the more grateful you become, the more it shifts. Gratitude isn't just a virtue—it's a form of perception. It quiets the craving mind and awakens a deeper presence. Suddenly, you're not waiting for “more” to feel whole. You begin to see how much has already been given. And that soft opening of the heart that you feel within? That's consciousness expanding.

7. Life Is a School for the Soul

This one can change your life. When you see life as a school, everything becomes part of the curriculum. Adversity isn't punishment—it's instruction. Each loss, betrayal, or hardship carries within it a hidden message, a deeper lesson. The soul came here to grow. And when you see that, you start to see that life isn't happening to you. It's happening for you.
And nothing—absolutely nothing—is wasted on the soul.

8. Everything Is Connected—There Is No Real Separation

We live in a world of apparent separateness. But beneath the surface, everything is woven together. Ancient mystics knew this. So did the early Native Americans. As Chief Seattle said: “Whatever befalls the Earth befalls the sons of the Earth. Man did not weave the web of life—he is merely a strand in it.” Modern science agrees. From quantum physics to ecology to trauma healing, it all points to one truth: There is no such thing as alone. Only all one.

9. Your Attention Is Your Greatest Power

The final tenet is the hidden key: Where you place your attention determines what grows for you. In a world full of noise, mastering your attention is an act of power—and peace. When you begin to master your attention, you begin to master your life. Whatever you feed with your focus becomes stronger. Fear? It grows. Anger? It grows. Gratitude? It grows. Love? Presence? Joy? These grow too. The game of life changes when you realize: You are the gardener. And your attention is the sun.

So, in essence, the Wisdom of the Ages tells us that everything outside is temporary. But what is real within you does not fade. It was never born. It will never die. It simply is. And the bottom line is that as human beings, we have far more intelligence, love, happiness, and joyful sense of purpose than we have been taught by our current culture. And the wise ones would tell us that the way to access it all is to pause, breathe and listen. The universal power of this wisdom is not far away. It is within you right now, right here. You don't have to become anyone else. You just have to become who you already are. Tune yourself in, stop chasing the illusion, and start honoring the real. The path is ancient. But that's not what matters. What matters is that it is alive within us now. It begins wherever we are, and whenever we are ready. At least that's what they say. For me, this wisdom has been in the winner's circle since the beginning of recorded history so – I'll take the odds…

Well, I guess this is a good time for us to stop this episode. As always, keep your eyes, mind and heart open. And let's get together in the next one.
Oh heyyy there, fellow parent-people struggling through the beautiful chaos! In this episode, I spill ALL the tea about how that nasty little voice in your head (you know the one that's all "you're failing at this parenting gig") isn't just making YOU feel like garbage—it's actually rewiring your kid's emotional thermometer too!
What happens in your brain when Cupid's arrow strikes? As a teenager, Alison developed an intense crush on George Harrison from the Beatles. But, she wants to know, why do we develop these feelings for pop stars we've never actually met? And what potent swirl of neurochemistry drives those fierce emotions?

With neuroscientist Dr. Dean Burnett and evolutionary anthropologist Dr. Anna Machin as their guides, Hannah and Dara investigate everything from the brain's chemical fireworks during a crush to the evolutionary perks of love and bonding. Along the way, they dissect teenage infatuations, lifelong love affairs with football teams, and why love can feel as addictive as heroin. There's even a guest appearance from two cute rodents: the monogamous prairie voles and their more, shall we say, commitment-phobic cousins, the montane voles, who gave us early clues about the role of the ‘cuddle’ hormone oxytocin. Whether you're a hopeless romantic or a hard-nosed skeptic, prepare to fall head over heels for the science of love.

Contributors:
Dr Anna Machin - evolutionary anthropologist and author of Why We Love
Dr Dean Burnett - honorary research fellow at Cardiff Psychology School, author of The Idiot Brain and The Happy Brain
Carmine Pariante - Professor of Biological Psychiatry at King's College London

Producer: Ilan Goodman
Executive Producer: Alexandra Feachem
A BBC Studios Audio Production
Kimberly Snyder, author of 'The Hidden Power of the Five Hearts,' shares fascinating insights on the concept of heart coherence and its transformative power. Discover how emotions like appreciation and love can align your heart, brain, and nervous system. Kimberly also explores the 5 stages of heart coherence, shares personal experiences with emotional healing, and provides practical tools for achieving a more coherent state. As a special bonus, the episode includes a guided heart coherence meditation to help you experience the profound effects of these practices on your well-being. Don't miss out on this enlightening episode! To view full show notes, more information on our guests, resources mentioned in the episode, discount codes, transcripts, and more, visit https://drmindypelz.com/ep285 Kimberly Snyder is the three-time New York Times bestselling author of The Hidden Power of the Five Hearts and a change-maker in the heart-led living and wellness movement. The founder of the holistic lifestyle brand Solluna and host of the top-rated Feel Good Podcast, Kimberly is a wellness expert, creator of the research-based HeartAlign Meditation, nutritionist and international speaker. She co-authored Radical Beauty with Deepak Chopra, and has been the go-to expert to help celebrities feel their best, including Drew Barrymore, Reese Witherspoon and Channing Tatum. Kimberly's work is featured on Good Morning America, Today, The Wall Street Journal, Vogue, The New York Times, and many other publications. Check out our fasting membership at resetacademy.drmindypelz.com. Please note our medical disclaimer.
In this Huberman Lab Essentials episode, I explain how to build endurance and describe targeted protocols to enhance different types of endurance. I discuss how endurance—the ability to sustain effort—requires the coordination of physical and mental systems driven by energy availability, brain willpower, and specific training adaptations in the muscles, heart, lungs and neurons. I explain conditioning protocols designed to enhance four types of endurance, from long-duration steady state to muscular endurance and high-intensity intervals, and how each training style triggers unique adaptations in the body and brain, such as improved mitochondrial function and oxygen utilization. Additionally, I highlight the crucial role of hydration and electrolytes, which are essential for neural function and influence the brain's willpower to sustain effort. Read the episode show notes at hubermanlab.com. Thank you to our sponsors AG1: https://drinkag1.com/huberman LMNT: https://drinklmnt.com/huberman Eight Sleep: https://eightsleep.com/huberman Function: https://functionhealth.com/huberman Follow Huberman Lab Instagram Threads X Facebook TikTok LinkedIn Timestamps 00:00:00 Huberman Lab Essentials; Build Endurance 00:00:50 Energy Sources, ATP, Oxygen 00:04:14 Neurons & Willpower, Glucose & Electrolytes 00:09:19 Heart, Lungs; Physiology & Performance Limiting Factors 00:10:35 Sponsor: AG1 00:12:30 Muscular Endurance, Protocol, Concentric Movements, Mitochondria 00:19:10 Sponsors: LMNT & Eight Sleep 00:22:00 Long-Duration Endurance, Efficiency, Mitochondria, Capillaries 00:25:54 High-Intensity Interval Training (HIIT), Anaerobic Endurance, Protocol 00:32:33 High-Intensity Aerobic Endurance, Adaptations 00:35:30 Sponsor: Function 00:37:26 Brain & Body Adaptations, Heart 00:40:40 Hydration, Tool: Galpin Equation 00:42:21 Supplements, Stimulants, Magnesium Malate 00:43:11 Recap & Key Takeaways Disclaimer & Disclosures
In this episode, I am joined by Adam Safron, an independent researcher with a wild range of interests—from consciousness and free will to psychedelics, artificial life, and AI alignment. With affiliations at the Institute for Advanced Consciousness Studies and Michael Levin's lab at Tufts University, Adam brings a rare interdisciplinary lens to questions about mind, matter, and meaning. We explore how agency arises, what it means to align intelligent systems (both artificial and biological), and how compassion might be the ultimate protocol for thriving.
Scientific Sense ® by Gill Eapen: Prof. Ziv Williams is an Associate Professor of Neurosurgery in the Harvard Division of Medical Sciences. The major goal of his lab has been to investigate the neural computations that underlie motor and cognitive behavior. Please subscribe to this channel: https://www.youtube.com/c/ScientificSense?sub_confirmation=1
SANS Internet Stormcenter Daily Network/Cyber Security and Information Security Stormcast
In this episode, we discuss critical vulnerabilities in Ivanti Connect Secure and Policy Secure, command injection risks in Aviatrix Network Controllers, and the risks posed by hijacked abandoned backdoors. Episode Links and Topics: More Governments Backdoors in Your Backdoors https://labs.watchtowr.com/more-governments-backdoors-in-your-backdoors/ Researchers reveal how expired domains linked to abandoned backdoors can be hijacked, exposing systems to further compromise. Security Update: Ivanti Connect Secure, Policy Secure, and Neurons for ZTA Gateways https://www.ivanti.com/blog/security-update-ivanti-connect-secure-policy-secure-and-neurons-for-zta-gateways Ivanti addresses critical vulnerabilities (CVE-2025-0282, CVE-2025-0283) in their secure gateway products, with active exploitation in the wild. CVE-2024-50603: Aviatrix Network Controller Command Injection Vulnerability https://www.securing.pl/en/cve-2024-50603-aviatrix-network-controller-command-injection-vulnerability/ A command injection vulnerability in Aviatrix Network Controllers allows unauthenticated code execution, posing severe risks to network environments.