Theoretical Neuroscience Podcast


The podcast focuses on topics in theoretical/computational neuroscience and is primarily aimed at students and researchers in the field.

Gaute Einevoll


    • Latest episode: Apr 26, 2025
    • New episodes: monthly
    • Average duration: 1h 35m
    • Episodes: 27



    Latest episodes from Theoretical Neuroscience Podcast

    On construction and clinical use of multipurpose neuron models - with Etay Hay - #27

    Apr 26, 2025 · 73:21


    Numerous neuron models have been constructed, but most of them are "single-purpose" in that they are built to address a single scientific question. In contrast, multipurpose neuron models are meant to be used to address many scientific questions. In 2011, the guest published a multipurpose rodent pyramidal-cell model which has been actively used by the community ever since. We talk about how such models are made, and how his group later built human neuron models to explore network dynamics in the brains of depressed patients.

    On the population code in visual cortex - with Kenneth Harris - #26

    Mar 29, 2025 · 84:49


    With modern electrical and optical measurement techniques, we can now measure neural activity in hundreds or thousands of neurons simultaneously. This allows for the investigation of population codes, that is, of how groups of neurons together encode information. In 2019, today's guest and collaborators at UCL in London published a seminal paper in which analysis of optophysiological data from 10,000 neurons in mouse visual cortex revealed an intriguing population code balancing the needs for efficient and robust coding. We discuss the paper and (towards the end) also how new AI tools may be a game-changer for neuroscience data analysis.
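    As a loose illustration of what a population-level analysis can look like, here is a minimal sketch using synthetic data (not the recordings or the cross-validated estimators of the 2019 paper): the eigenspectrum of the population activity covariance shows how variance is spread across population dimensions, and how quickly it decays relates to the trade-off between efficient and robust (smooth) coding.

```python
# Minimal sketch with synthetic data: how fast the variance per principal
# component falls off is one window on the efficiency/robustness trade-off.
# All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stimuli, alpha = 400, 2000, 1.0

# Latent signals with a power-law variance spectrum, embedded in the population
# through a random orthonormal basis.
variances = np.arange(1, n_neurons + 1) ** (-alpha)
latents = rng.standard_normal((n_stimuli, n_neurons)) * np.sqrt(variances)
basis, _ = np.linalg.qr(rng.standard_normal((n_neurons, n_neurons)))
activity = latents @ basis.T                     # shape: (stimuli, neurons)

activity -= activity.mean(axis=0)
eigvals = np.linalg.eigvalsh(activity.T @ activity / n_stimuli)[::-1]
print("variance of first 5 PCs relative to PC1:", np.round(eigvals[:5] / eigvals[0], 3))
```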

    On growing synthetic dendrites – with Hermann Cuntz - #25

    Mar 1, 2025 · 94:34


    The observed variety of dendritic structures in the brain is striking. Why are they so different, and what determines the branching patterns? Following the dictum "if you understand it, you can build it", the guest's lab builds dendritic structures in a computer and explores the underlying principles. Two key principles seem to be to minimize (i) the overall length of dendrites and (ii) the path length from the synapses to the soma.
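    In the spirit of these two principles (loosely inspired by, but not identical to, the guest's published growth algorithms), one can grow a toy dendrite over random target points by greedily attaching each point where the combined cost of extra wiring and path length back to the soma is smallest, with a balancing factor weighting the two:

```python
# Toy dendrite growth: greedily attach target points to the tree, trading off
# wiring length against path length to the soma (balancing factor bf).
# Target points and parameters are made up for illustration.
import numpy as np

rng = np.random.default_rng(1)
targets = rng.uniform(0.0, 100.0, size=(200, 2))   # hypothetical synapse locations (µm)
soma = np.array([50.0, 50.0])
bf = 0.5                                           # 0 = only wiring cost; larger = favor short paths

nodes = [soma]        # positions of nodes already in the tree
parent = [-1]         # index of each node's parent
path_len = [0.0]      # path length from each node back to the soma
remaining = set(range(len(targets)))

while remaining:
    best = None
    node_arr, path_arr = np.asarray(nodes), np.asarray(path_len)
    for ti in remaining:
        dist = np.linalg.norm(node_arr - targets[ti], axis=1)   # wiring cost of each attachment
        cost = dist + bf * path_arr                             # plus weighted path-to-soma cost
        j = int(np.argmin(cost))
        if best is None or cost[j] < best[0]:
            best = (cost[j], ti, j, dist[j])
    _, ti, j, dj = best
    nodes.append(targets[ti])
    parent.append(j)
    path_len.append(path_len[j] + dj)
    remaining.remove(ti)

total_length = sum(np.linalg.norm(nodes[i] - nodes[parent[i]]) for i in range(1, len(nodes)))
print(f"grew a tree over {len(targets)} targets; total wiring length {total_length:.1f} µm")
```

    Raising the balancing factor yields trees with shorter, more direct paths to the soma at the cost of more total wiring, which is one way to explore where real dendrites sit on this trade-off.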

    On neuroscience foundation models - with Andreas Tolias - #24

    Feb 1, 2025 · 91:43


    The term "foundation model" refers to machine learning models that are trained on vast datasets and can be applied to a wide range of situations. The large language model GPT-4 is an example. The guest's group has recently presented a foundation model for optophysiological responses in mouse visual cortex, trained on recordings from 135,000 neurons in mice watching movies. We discuss the design, validation, and use of this and future neuroscience foundation models.

    On human whole-brain models - with Viktor Jirsa - #23

    Jan 4, 2025 · 115:21


    A holy grail of the multiscale approach to physical brain modelling is to link the different scales from molecules, via cells and local neural networks, up to whole-brain models. The goal of the Virtual Brain Twin project, led by today's guest, is to use personalized human whole-brain models to aid clinicians in treating brain ailments. The podcast discusses how such models are presently made using neural field models, starting with neuron population dynamics rather than molecular dynamics.
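    As a deliberately stripped-down illustration of the whole-brain modelling idea (not the actual node dynamics or connectomes used in The Virtual Brain), each brain region can be reduced to a population-rate variable and regions coupled through a structural connectivity matrix:

```python
# Toy whole-brain model: one rate variable per region, coupled by a random,
# hypothetical structural connectivity matrix. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_regions = 68                                    # e.g. a cortical parcellation of ~68 areas
connectivity = rng.random((n_regions, n_regions)) * (rng.random((n_regions, n_regions)) < 0.1)
np.fill_diagonal(connectivity, 0.0)

tau, coupling, dt = 10.0, 0.5, 0.1                # time constant (ms), global coupling, step (ms)
rate = rng.random(n_regions)

for _ in range(20000):                            # two seconds of activity
    drive = coupling * connectivity @ rate + 0.1 * rng.standard_normal(n_regions)
    rate += dt / tau * (-rate + np.tanh(drive))

print("mean regional activity after 2 s:", float(rate.mean()))
```

    Personalization then amounts to constraining such connectivity and node parameters with an individual patient's brain-imaging data, which is where the clinical use comes in.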

    On 50 years with the Hopfield network model - with Wulfram Gerstner - #22

    Dec 7, 2024 · 87:49


    In 1982 John Hopfield published the paper "Neural networks and physical systems with emergent collective computational abilities", describing a simple network model functioning as an associative and content-addressable memory. The paper started a new subfield in computational neuroscience and led to an influx of theoretical scientists, in particular physicists, into the field. The podcast guest wrote his PhD thesis on the model in the early 1990s, and we talk about the history and present impact of the model.
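    For readers who have never played with the model, a minimal sketch of the 1982 network: binary units, a Hebbian weight matrix storing a handful of patterns, and asynchronous updates that let a corrupted cue relax back to the nearest stored memory.

```python
# Minimal Hopfield network: Hebbian storage plus asynchronous recall.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_patterns = 100, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian learning rule: weights proportional to pairwise correlations of the patterns.
W = patterns.T @ patterns / n_units
np.fill_diagonal(W, 0.0)

def recall(cue, steps=2000):
    """Asynchronous dynamics: update one randomly chosen unit at a time."""
    state = cue.copy()
    for _ in range(steps):
        i = rng.integers(n_units)
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Content-addressable memory: corrupt 10% of a stored pattern and recover it.
cue = patterns[0].copy()
cue[rng.choice(n_units, size=10, replace=False)] *= -1
print("overlap with the stored memory:", recall(cue) @ patterns[0] / n_units)
```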

    On models for short-term memory - with Pawel Herman - #21

    Nov 9, 2024 · 107:42


    The leading theory for learning and memorization in the brain is that learning is implemented by synaptic learning rules and that memories are stored in the synaptic weights between neurons. But this is for long-term memory. What about short-term, or working, memory, where objects are kept in memory for only a few seconds? The traditional theory held that here the mechanism is different, namely persistent firing of select neurons in areas such as prefrontal cortex. But this view is challenged by recent synapse-based models explored by today's guest and others.
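    One classical synapse-based mechanism (in the spirit of facilitation-based models such as Mongillo, Barak and Tsodyks, not necessarily the specific models discussed in the episode) is sketched below: short-term facilitation leaves a slowly decaying trace in the synaptic state, so an item can be held for a second or two without any persistent firing.

```python
# Toy short-term-plasticity synapse: a loading burst facilitates the synapse, and a
# probe spike ~1.6 s later is still transmitted more strongly than the first spike
# of the burst, even though no neuron kept firing in between. Parameters illustrative.
import numpy as np

U, tau_f, tau_d = 0.2, 1.5, 0.2        # baseline release prob.; facilitation (s); depression (s)
dt = 0.001
spike_times = np.concatenate([np.arange(0.1, 0.4, 0.02), [2.0]])   # burst, then one probe spike

u, x = U, 1.0                           # utilisation (facilitation) and available resources
for t in np.arange(0.0, 2.5, dt):
    u += dt * (U - u) / tau_f           # facilitation decays slowly back to baseline
    x += dt * (1.0 - x) / tau_d         # resources recover quickly towards 1
    if np.any(np.abs(spike_times - t) < dt / 2):
        u += U * (1.0 - u)              # each presynaptic spike boosts utilisation
        release = u * x                 # efficacy of this spike
        x -= release                    # and consumes resources
        print(f"spike at t = {t:.2f} s, efficacy relative to baseline: {release / U:.2f}")
```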

    On neuro-AI on the boat - part 2 of 2 - with Paul Middlebrooks, Cristina Savin, Tim Vogels, Mikkel Lepperød - #20

    Oct 11, 2024 · 81:03


    In September Paul Middlebrooks, the producer of the podcast BrainInspired, and I were both at a neuro-AI workshop on a coastal liner cruising the Norwegian fjords. We decided to make two joint podcasts with some of the participants where we discuss the role of AI in neuroscience. In this second part we discuss the topic with Cristina Savin and Tim Vogels and round off with a brief discussion with Mikkel Lepperød, the main organizer of the workshop, about what he learned from it.

    On neuro-AI on the boat - part 1 of 2 - with Paul Middlebrooks, Ken Harris, Andreas Tolias, Mikkel Lepperod - #19

    Oct 8, 2024 · 79:26


    In September Paul Middlebrooks, the producer of the podcast BrainInspired, and I were both at a neuro-AI workshop on a coastal liner cruising the Norwegian fjords. We decided to make two joint podcasts with some of the participants where we discuss the role of AI in neuroscience. In this first part we talk with Mikkel Lepperod, the main organizer, about the goals of the workshop, and with Ken Harris and Andreas Tolias about how AI has affected their research as neuroscientists and their thoughts about the future of neuro-AI.

    On electric brain signals - solo episode - #18

    Sep 15, 2024 · 108:22


    Most of what we have learned about the functioning of the living brain has come from extracellular electrical recordings, like the measurement of spikes, LFP, ECoG and EEG signals. And most analysis of these recordings has been statistical, looking for correlations between the recorded signals and what the animal/human is doing or being exposed to. However, starting with the neuron rather than the data, these electrical brain signals can also be computed from biophysics-based forward models, and this is the topic of this episode.
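    The core of such a forward model is simple volume-conductor physics: given the transmembrane currents of the neurons, the extracellular potential at an electrode follows (in the point-source approximation) V_e(r, t) = Σ_k I_k(t) / (4πσ|r − r_k|), with σ the extracellular conductivity. The sketch below uses made-up currents; in a real workflow they would come from a biophysical neuron simulation (tools such as LFPy wrap NEURON to do exactly this for detailed morphologies).

```python
# Point-source forward model of the extracellular potential from hypothetical
# transmembrane currents. Numbers are illustrative, not from a real simulation.
import numpy as np

sigma = 0.3                                         # extracellular conductivity (S/m)
compartment_pos = np.array([[0.0, 0.0, z] for z in np.linspace(0.0, 500e-6, 20)])   # m
electrode_pos = np.array([50e-6, 0.0, 100e-6])                                      # m

rng = np.random.default_rng(0)
currents = 1e-9 * rng.standard_normal((20, 1000))   # (compartments, time steps), in A
currents -= currents.mean(axis=0)                   # currents over a closed membrane sum to zero

distances = np.linalg.norm(compartment_pos - electrode_pos, axis=1)
v_extra = (currents / (4 * np.pi * sigma * distances[:, None])).sum(axis=0)
print("peak amplitude of the simulated extracellular trace (V):", float(np.abs(v_extra).max()))
```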

    On dendritic function - with Yiota Poirazi - #17

    Aug 17, 2024 · 87:39


    The most prominent visual characteristic of neurons is their dendrites. Even more than 100 years after their first observation by Cajal, their function is not fully understood. Biophysical modeling based on cable theory is a key research tool for exploring putative functions, and today's guest is one of the leading researchers in this field. We talk about passive and active dendrites, the kind of filtering of synaptic inputs they support, the key role of synapse placement, and how the inclusion of dendrites may facilitate AI.
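    A back-of-the-envelope example of the cable theory mentioned above: for a passive dendrite the length constant λ = sqrt(R_m·d / (4·R_a)) sets how strongly a steady synaptic signal attenuates with distance, V(x) = V_0·exp(−x/λ) in an infinite cable. The parameter values below are typical textbook numbers, not measurements from the episode.

```python
# Passive cable length constant and steady-state attenuation (textbook values).
import numpy as np

R_m = 1.0     # specific membrane resistance (ohm*m^2), i.e. ~10 kOhm*cm^2
R_a = 1.0     # axial resistivity (ohm*m), i.e. ~100 ohm*cm
d = 2e-6      # dendrite diameter (m)

lam = np.sqrt(R_m * d / (4.0 * R_a))
print(f"length constant: {lam * 1e6:.0f} µm")

for x_um in (50, 200, 500):
    print(f"steady-state attenuation over {x_um} µm: {np.exp(-x_um * 1e-6 / lam):.2f}")
```

    Active conductances and transient (high-frequency) inputs change this picture considerably, which is part of what the episode is about.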

    On consciousness - with Christof Koch - #16

    Aug 3, 2024 · 124:59


    The greatest mystery of all is why a group of atoms, like the ones constituting me, can feel anything.  The mind-brain problem has puzzled philosophers for millennia. Thanks to pioneers like Christof Koch, consciousness studies have recently become a legitimate field of scientific inquiry.   In this vintage episode, recorded in February 2021, we discuss many aspects of the phenomenon, including an intriguing candidate theory: Integrated Information Theory.

    On the simulation tool NEURON - with Michael Hines - #15

    Jul 20, 2024 · 86:35


    Computational neuroscientists use many software tools, and NEURON has become the leading tool for biophysical modeling of neurons and neural networks. Today's guest has been the leading developer of NEURON since its infancy almost 50 years ago. We talk about how the tool got started and its development up until today's modern version of the software, including CoreNEURON, optimized for parallel execution of large-scale network models on multicore supercomputers.
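    For readers who have not used the tool, a minimal example of what driving NEURON from its Python interface looks like: a single-compartment soma with Hodgkin-Huxley channels and a current-clamp stimulus (assumes NEURON with Python support is installed).

```python
# Minimal NEURON/Python example: one HH soma, one current clamp, record and run.
from neuron import h
h.load_file("stdrun.hoc")                      # standard run system (h.continuerun etc.)

soma = h.Section(name="soma")
soma.L = soma.diam = 20                        # µm
soma.insert("hh")                              # Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))
stim.delay, stim.dur, stim.amp = 5, 50, 0.2    # ms, ms, nA

v = h.Vector(); v.record(soma(0.5)._ref_v)     # membrane potential (mV)
t = h.Vector(); t.record(h._ref_t)             # time (ms)

h.finitialize(-65)
h.continuerun(60)
print("peak membrane potential (mV):", v.max())
```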

    On the molecular memory code - with Sam Gershman - #14

    Jun 22, 2024 · 80:44


    The idea that memories are stored in molecules was popular in the middle of the 20th century. However, since the discovery of long-term potentiation (LTP) in the 1970s, the dominant view has been that our memories are stored in synapses, that is, in the connections between neurons. Today, there are signs that the interest in molecular memory is returning, and the guest has presented a theory suggesting that molecular and synaptic memory might serve complementary needs for animals. 

    On quantum biology - with Johnjoe McFadden - #13

    Jun 8, 2024 · 77:20


    Is quantum physics important in determining how living systems, including brains, work? Today's guest is a professor of molecular genetics at the University of Surrey in England and explores this question in the book "Life on the Edge: The Coming of Age of Quantum Biology". In this "vintage" episode, recorded in late 2019, we talk about how quantum physics is, or may be, key in photosynthesis, smelling, navigation, evolution and even thinking. We also touch on the development of new antibiotics, another of McFadden's areas of expertise.

    On modeling of signaling pathways inside the neuron - with Avrama Blackwell - #12

    May 25, 2024 · 91:40


    Most computational neuroscientists investigate electric dynamics in neurons or neural networks, but there are also computations going on inside neurons. Here the key dynamical variables are the concentrations of numerous different molecules, and the signaling is typically done in cascades of chemical reactions, called signaling pathways. Today's guest is an expert in this kind of modelling and is particularly interested in the signaling role of calcium.
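    At the bottom of such pathway models sits ordinary mass-action kinetics. A minimal sketch (a generic binding reaction A + B ⇌ C, think calcium binding a buffer or effector; rate constants and concentrations are made up):

```python
# Mass-action kinetics for A + B <-> C, integrated with forward Euler.
k_on, k_off = 5.0, 1.0            # 1/(µM*s) and 1/s (illustrative values)
A, B, C = 1.0, 10.0, 0.0          # concentrations (µM)
dt = 1e-4

for _ in range(int(5.0 / dt)):               # simulate 5 s
    flux = k_on * A * B - k_off * C          # net rate of complex formation (µM/s)
    A -= dt * flux
    B -= dt * flux
    C += dt * flux

# At equilibrium, C/(A*B) should approach k_on/k_off = 5 per µM.
print(f"A = {A:.3f}, B = {B:.3f}, C = {C:.3f} µM;  C/(A*B) = {C / (A * B):.2f}")
```

    Real signaling-pathway models chain dozens of such reactions (and often add diffusion and stochasticity), but the building block is the same.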

    On synaptic learning rules for spiking neurons - with Friedemann Zenke - #11

    Apr 27, 2024 · 90:44


    Today's AI is largely based on supervised learning of neural networks using the backpropagation-of-error synaptic learning rule. This learning rule relies on differentiation of continuous activation functions and is thus not directly applicable to spiking neurons. Today's guest has developed the algorithm SuperSpike to address the problem. He has also recently developed a biologically more plausible learning rule based on self-supervised learning. We talk about both.  
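    The core trick can be stated in a few lines: keep the non-differentiable spike in the forward pass, but use a smooth surrogate for its derivative in the backward pass. The sketch below uses a fast-sigmoid-style surrogate of the kind associated with SuperSpike, but the shapes and numbers are illustrative, not the published implementation.

```python
# Surrogate-gradient idea: Heaviside spike forward, smooth derivative backward.
import numpy as np

def spike(u, threshold=1.0):
    """Forward pass: non-differentiable spike nonlinearity on membrane potential u."""
    return np.where(u >= threshold, 1.0, 0.0)

def surrogate_grad(u, threshold=1.0, beta=10.0):
    """Backward pass: smooth stand-in for d(spike)/du (fast-sigmoid derivative)."""
    return 1.0 / (1.0 + beta * np.abs(u - threshold)) ** 2

# Gradient of a toy loss L = (spike(w*x) - target)^2 w.r.t. the weight w,
# with the surrogate replacing the true (zero almost everywhere) derivative.
x, w, target = 0.8, 1.1, 1.0
u = w * x
dL_dw = 2 * (spike(u) - target) * surrogate_grad(u) * x
print("surrogate gradient w.r.t. w:", dL_dw)
```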

    On large-scale modeling of mouse primary visual cortex - with Anton Arkhipov - #10

    Mar 30, 2024 · 122:08


    Over the last ten years or so, the MindScope project at the Allen Institute in Seattle has pursued an industry-lab-like approach to studying the mouse visual cortex in unprecedented detail using electrophysiology, optophysiology, optical imaging and electron microscopy. Together with collaborators at Allen, today's guest has worked to integrate these data into large-scale neural network models, and in the podcast he talks about their ambitious endeavor.

    On origins of computational neuroscience and AI as scientific fields - with Terrence Sejnowski (vintage) - #9

    Mar 16, 2024 · 115:27


    Today's guest is a pioneer both in the fields of computational neuroscience and artificial intelligence (AI) and has had a front seat during their development.  His many contributions include, for example, the invention of the Boltzmann machine with Ackley and Hinton in the mid 1980s.  In this “vintage” episode recorded in late 2019 he describes the joint births of these adjacent scientific fields and outlines how they came about.

    On reverse engineering of the roundworm C.elegans - with Konrad Kording - #8

    Mar 2, 2024 · 94:14


    Today's guest has argued that the present dominant way of doing systems neuroscience in mammals (large-scale electric or optical recordings of neural activity combined with data analysis) will be inadequate for understanding how their brains work. Instead, he proposes to focus on the simple roundworm C. elegans, with only 302 neurons, and to try to reverse engineer it by means of optical stimulation and recordings, and modern machine-learning techniques.

    On topological data analysis and Hopfield-like network models - with Carina Curto - #7

    Feb 3, 2024 · 134:41


    Over the last decade topological data analysis has been established as a new tool for the analysis of spiking data. Today's guest has been a pioneer in adapting this mathematical technique for use in our field and explains concepts and example applications. We also talk about so-called threshold-linear network models, a generalization of Hopfield networks exhibiting much richer dynamics, for which Carina has done some exciting mathematical explorations.
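    For the curious, threshold-linear network dynamics fit in a few lines: each unit relaxes towards a rectified linear function of its input, dx/dt = −x + [Wx + b]_+. The weights and inputs below are arbitrary illustrations, not a network from the guest's papers.

```python
# Threshold-linear network (TLN): dx/dt = -x + max(W x + b, 0), forward Euler.
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.uniform(-1.5, 0.0, size=(n, n))       # inhibition-dominated coupling (illustrative)
np.fill_diagonal(W, 0.0)
b = np.ones(n)                                # constant external drive

x = rng.uniform(0.0, 1.0, size=n)
dt = 0.01
for _ in range(5000):
    x = x + dt * (-x + np.maximum(W @ x + b, 0.0))

# Depending on the sign pattern of W, TLNs can settle into fixed points or show
# much richer dynamics (limit cycles, multistability, sequences).
print("final rates:", np.round(x, 3))
```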

    On central pattern generators in the spinal cord - with Henrik Lindén - #6

    Jan 6, 2024 · 86:33


    Not all interesting network activity occurs in cortex. Networks in the spinal cord, the long thin tubular structure extending downwards from the neck, are responsible for setting up the rhythmic motor activity needed for moving around. How do these so-called central pattern generators work? Today's guest has, together with colleagues in Copenhagen, developed a neuron-based network theory for how these rhythmic oscillations may arise even without pacemaker neurons driving the collective.
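    A classical toy version of a pacemaker-free rhythm generator (a half-centre oscillator in the style of Matsuoka, not the guest's specific model) shows the basic idea: two populations that inhibit each other and adapt will burst in alternation even though no single cell is intrinsically rhythmic.

```python
# Two mutually inhibiting, adapting populations produce alternating bursts
# without any pacemaker cell. Parameters are illustrative.
import numpy as np

tau_r, tau_a = 1.0, 12.0        # fast "membrane" vs slow adaptation time constants
w, beta, drive = 2.0, 2.5, 1.0  # mutual inhibition, adaptation strength, constant drive
x = np.array([0.1, 0.0])        # state of the two populations
a = np.zeros(2)                 # adaptation variables
dt = 0.01

rates = []
for _ in range(20000):
    y = np.maximum(x, 0.0)                                    # rectified firing rates
    x += dt / tau_r * (-x - w * y[::-1] - beta * a + drive)   # each population inhibits the other
    a += dt / tau_a * (-a + y)                                # slow activity-dependent adaptation
    rates.append(y.copy())

rates = np.array(rates)
print("fraction of time each population is active:", np.round((rates > 0).mean(axis=0), 2))
```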

    On how vision works - with Li Zhaoping - #5

    Dec 9, 2023 · 81:57


    We know a lot about how neurons in the primary visual cortex (V1) of mammals respond to visual stimuli. But how does the vast information contained in the spiking of millions of neurons in V1 give rise to our visual percepts? The guest's theory is that V1 acts as a "saliency detector" directing the gaze to the most important object in the visual scene. Then V1, in collaboration with higher visual areas, determines what this object is in an iterative feedforward-feedback loop.

    On multi-area cortex models - with Sacha van Albada - #4

    Nov 18, 2023 · 92:51


    A key goal of computational neuroscience is to build mathematical models linking single-neuron activity to systems-level activity. The guest has taken some bold steps in this direction by developing and exploring a multi-area model for the macaque visual cortex, and later also a model for the human cortex, using millions of simplified spiking neuron models.   We discuss the many design choices, the challenge of running the models, and what has been learned so far.

    On the neural code - with Arvind Kumar - #3

    Nov 4, 2023 · 85:46


    It is widely thought that spikes (action potentials) are the main carrier of information in the brain. But what is the neural code, that is, what aspects of the spike trains carry the information? The detailed temporal structure or maybe only the average firing rate?  And is there information in the correlation between spike trains in populations of similar neurons?   The guest has thought about these and other coding questions throughout his career.
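    Two of the simplest quantities in this discussion are easy to make concrete: the average firing rate of a spike train (a pure rate code ignores everything else) and the spike-count correlation between two neurons. The spike trains below are synthetic Poisson trains, purely for illustration.

```python
# Firing rate and spike-count correlation for two synthetic Poisson spike trains.
import numpy as np

rng = np.random.default_rng(0)
duration, rate = 10.0, 20.0          # seconds, spikes per second
spikes_a = np.sort(rng.uniform(0, duration, rng.poisson(rate * duration)))
spikes_b = np.sort(rng.uniform(0, duration, rng.poisson(rate * duration)))

# Rate code: only the number of spikes per unit time carries information.
print("mean rate of neuron A (Hz):", len(spikes_a) / duration)

# Spike-count correlation: bin both trains (here 100-ms bins) and correlate counts.
bins = np.arange(0.0, duration + 0.1, 0.1)
counts_a, _ = np.histogram(spikes_a, bins)
counts_b, _ = np.histogram(spikes_b, bins)
print("spike-count correlation:", np.corrcoef(counts_a, counts_b)[0, 1])
```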

    On biophysics of computation – with Christof Koch - #2

    Oct 28, 2023 · 80:10


    Starting from the pioneering work of Hodgkin, Huxley and Rall in the 1950s and 60s, we have a well-founded biophysics-based mathematical understanding of how neurons integrate signals from other neurons and generate action potentials. Today's guest wrote the classic book “Biophysics of Computation” on the subject in 1998. We discuss its contents, what has changed in the last 25 years, and also touch on his other main research interest: consciousness research.  

    On models of the mind - with Grace Lindsay - #1

    Oct 13, 2023 · 111:39


    The book “Models of the Mind” published in 2021 gives an excellent popular account of the history and questions of interest in theoretical neuroscience. I could think of no other person more suitable to invite for the inaugural episode of the podcast than its author Grace Lindsay. In the podcast we discuss highlights from the book as well as recent developments and the future of our field.  
