A podcast about neuroscience, artificial intelligence, and science more broadly, run by a group of computational neuroscientists.
Grace wrote a book! And she talked to Brain Inspired host Paul Middlebrooks about it. The book is about the many different ways mathematical methods have influenced neuroscience, from models of single cells all the way up to equations to explain behavior. You can learn more about the book and how to get it in ebook, audiobook, and hardcover worldwide by visiting tinyurl.com/h9dn4bw7. On this cross-posting of Brain Inspired, Grace talks about the book and the field of computational neuroscience more generally. Give it a listen and go check out other episodes of Brain Inspired for more great conversations.
To some neuroscientists, the brain exists to produce movement and everything else it does should be understood in light of this goal. On this episode, we talk about these "motor chauvinists" and the broader topic of how motor control is studied in neuroscience and artificial intelligence. First we briefly discuss the tangled anatomy of motor control in animals. Then we get into how artificial motor control is done, including optimal feedback control, reinforcement learning, and the six core principles of hierarchical motor control. Finally we relate these principles back to the biology and talk about what the future of the study of motor control needs. Throughout we conduct an experiment on ourselves, reflect on what makes motor outputs different from other tasks, and hear what Alex thinks is undeniably true or undeniably false about the motor system.
Most neuroscience research takes place in a full, live animal. But brain organoids are different. Brain organoids are three-dimensional blobs of brain grown from human stem cells and they offer novel access to the study of human brain development. On this episode we go beyond our computational comfort zone to talk about the history of stem cells, the potion of chemicals needed to get these little blobs to grow, and the extent to which they mimic features of the human brain when they do. We also discuss the promise of studying and treating disease through personalized organoids, and how this gets hard for higher level disorders like schizophrenia. Then we get into questions of embodiment and if giving these organoids more means to interact with the world would make them better models of the brain and of information processing. Finally we get to the ethics of it all, and find that bioethicists these days are actually chill AF. Throughout, we find out that Josh is not surprised by any of this, and we tackle the pressing moral issue of our time: is it OK to eat your thermostat?
The age-old debate of nature versus nurture is now being played out between artificial intelligence and neuroscience. The dominant approach in AI, machine learning, puts an emphasis on adapting processing to fit the data at hand. Animals, on the other hand, seem to have a lot of built-in structure and tendencies that mean they function well right out of the womb. So are most of our abilities the result of genetically-encoded instructions, honed over generations of evolution? Or are our interactions with the environment key? We discuss the research that has been done on human brain development to try to get at the answers to these questions. We talk about the compromise position that says animals may be "born to learn"---that is, innate tendencies help make sure the right training data is encountered and used efficiently during development. We also get into what all this means for AI and whether machine learning researchers should be learning less. Throughout, we ask if humans are special, argue that development can happen without learning, and discuss the special place of the octopus in the animal kingdom.
The brain is the result of evolution. A lot of evolution. Most neuroscientists don't really think about this fact. Should we? On this episode we talk about two papers---one focused on brains and the other on AI---that argue that following evolution is the path to success. As part of this argument, they make the point that, in evolution, each stage along the way needs to be fully functional, which impacts the shape and role of the brain. As a result, the system is best thought of as a whole---not chunked into perception, cognition and action, as many psychologists and neuroscientists are wont to do. In discussing these arguments, we talk about the role of representations in intelligence, go through a bit of the evolution of the nervous system, and remind ourselves that evolution does not necessarily optimize. Throughout, we ask how this take on neuroscience impacts our own work and try to avoid saying "represents".
The recent advances in deep learning have done more than just make money for startups and tech companies. They've also infiltrated neuroscience! Deep neural networks---models originally inspired by the basics of the nervous system---are finding ever more applications in the quest to understand the brain. We talk about many of those uses in the episode. After first describing more traditional approaches to modeling behavior, we talk about how neuroscientists compare deep net models to real brains using both performance and neural activity. We then get into the attempts by the field of machine learning to understand its own models and how ML and neuroscience can share methods (and maybe certain cultural tendencies). Finally we talk about the use of deep nets to generate stimuli specifically tailored to drive real neurons to their extremes. Throughout, we notice how deep learning is "complicating the narrative", ask "are deep nets normative models?", and struggle to talk about a topic we actually know about.
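If you're curious what comparing deep nets to brains looks like in practice, here's a minimal sketch of representational similarity analysis (RSA), one common approach. The response matrices below are random stand-ins we invented for illustration; nothing here is from the episode.

```python
import numpy as np
from scipy.stats import spearmanr

# Representational similarity analysis (RSA) in miniature: compare the
# stimulus-by-stimulus geometry of a model layer and a neural recording.
# Both response matrices here are random stand-ins for illustration.
rng = np.random.default_rng(0)
model_resp = rng.normal(size=(20, 100))    # 20 stimuli x 100 model units
neural_resp = rng.normal(size=(20, 50))    # 20 stimuli x 50 neurons

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation across stimuli."""
    return 1 - np.corrcoef(responses)

# Correlate the upper triangles of the two RDMs
iu = np.triu_indices(20, k=1)
rho, _ = spearmanr(rdm(model_resp)[iu], rdm(neural_resp)[iu])
print(rho)   # near zero here, since the toy data are unrelated
```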
From worms to flies, and mice to macaques, neuroscientists study a range (but not a very large range...) of animals when they study "the brain". On this episode we ask a lot of questions about these model organisms, such as: how are they chosen? should we use more diverse ones? and what is a model organism actually a model of? We also talk about how the development of genetic tools for certain animals, like mice, have made them the dominant lab animal and the difficulty of bringing a new model species onto the scene. We also get into the special role that simple organisms, like C. elegans, play and how we can extrapolate findings from these small animals to more complex ones. Throughout, special guest Adam Calhoun joins us in asking "What even is the purpose of neuroscience???" and discussing the extent to which mice do or do not see like humans.
Working memory is the ability to keep something in mind several seconds after it's gone. Neurons don't tend to keep firing when their input is removed, so how does the brain hold on to information when it's out of sight? Scientists have been probing this question for decades. On this episode, we talk about how working memory is studied and the traditional view of how it works, which includes elevated persistent firing rates in neurons in the prefrontal cortex. The traditional view, however, is being challenged in many ways at the moment. As evidence of that we read a "dueling" paper on the topic, which argues for a view that incorporates bursts of firing, oscillations, and synaptic changes. In addition to covering the experimental evidence for different views, we also talk about the many computational models of working memory that have been developed over the years. Throughout we talk about energy efficiency, the difference between maintenance and manipulation, and the effects of putting scientific disagreements in writing. We also admit to not reading *any* primary sources.
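As a taste of the modeling, here's the traditional persistent-activity view in its simplest possible form (our own toy sketch, not a model from the episode): one rate unit whose recurrent excitation is strong enough to keep it firing after a brief input ends.

```python
import numpy as np

# A one-unit model of persistent activity: strong recurrent excitation
# makes the unit bistable, so a brief input pulse leaves it firing.
def f(x):
    """Saturating firing-rate nonlinearity."""
    return np.tanh(np.maximum(x, 0))

r, w, dt = 0.0, 1.5, 0.1       # rate, recurrent weight, time step
trace = []
for t in np.arange(0, 30, dt):
    stim = 1.0 if 5 <= t < 10 else 0.0   # brief input pulse
    r += dt * (-r + f(w * r + stim))     # rate dynamics
    trace.append(r)

print(round(trace[-1], 3))   # the rate stays high long after the stimulus ends
```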
In 2002, cancer biologist Yuri Lazebnik raised and addressed the semi-facetious question "Can a biologist fix a radio?" in a short paper. The paper is a critique of current practices in the biological sciences, claiming they are inefficient at getting to truth. We discuss the stages of research progress in biological science Yuri describes, including the "paradoxical" stage where more facts lead to less understanding. We then dive into his view of how a biologist would approach a radio: describing what its parts look like, lesioning some of them, and making claims about what's necessary for the radio to work as a result. We reflect on how this framing of common biological research practices impacts our view of them and highlights how hard it is to understand complex systems. We talk about the (in)adequacy of Yuri's proposed solution to the problem (that biologists need to embrace formal, quantitative language) and the difference between engineering and science. Finally, we discuss a new take on this paper that goes through the effort of actually applying neuroscience methods to a microprocessor and the conclusions we took from that. Throughout we bring in specific examples from neuroscience we find relevant and Josh dismisses almost everything as "satirical".
Despite the fact that the brain is full of them, glial cells don't get much attention from neuroscientists. The traditional view of these non-neurons is that they are supportive cells---there to silently help neurons do what they need to do. On this episode we start by describing this traditional view, including types of glial cells and their roles. Then we get into the more interesting stuff. How do glia communicate with each other and with neurons? Turns out there are many chemical messages that get sent between these different cell types, including via the energy molecule ATP! We then talk about the ways in which these messages impact neurons and reasons why the role of glia may be hard for neuroscientists to see. In particular, glia seem to have a lot to say about the birth and control of synapses, making them important for scientists interested in learning. Finally we cover some of the diseases related to glia, such as multiple sclerosis and (surprisingly) depression. Throughout, we ask if glia are important for computation, and relatedly, how the hell do we define computation? Also Grace is weirded out that glia are everywhere but nobody is talking about (or drawing) them.
For decades, neuroscientists have explored the ways in which neurons update and control the strength of their connections. For slightly fewer decades, machine learning researchers have been developing ways to train the connections between artificial neurons in their networks. The former endeavour shows us what happens in the brain and the latter shows us what's actually needed to make a system that works. Unfortunately, these two research directions have not settled on the same rules of learning. In this episode we talk about the attempts to make artificial learning rules more biologically plausible in order to understand how the brain is capable of such powerful learning. In particular, we focus on different models of biologically-plausible backpropagation---the standard method of training artificial neural networks. We start by explaining both backpropagation and biological learning rules (such as spike-timing-dependent plasticity) and the ways in which the two differ. We then describe four different models that tackle how backpropagation could be done by the brain. Throughout, we talk dendrites and cell types and the role of other biological bits and bobs, and ask "should we actually expect to see backprop in the brain?". We end by discussing which of the four options we liked most and why!
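For reference, here's a minimal sketch of the standard backpropagation those models are trying to approximate. The two-layer network and data are invented for illustration; note the `W2.T` in the backward pass, the "weight transport" step that is hard to justify biologically.

```python
import numpy as np

# Minimal two-layer network trained with standard backpropagation.
# Sizes and data are invented for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                   # inputs
y = (X.sum(axis=1, keepdims=True) > 0) * 1.0    # toy binary target
W1 = rng.normal(size=(4, 8)) * 0.1
W2 = rng.normal(size=(8, 1)) * 0.1
lr = 0.5

for _ in range(500):
    # forward pass
    h = np.tanh(X @ W1)
    out = 1 / (1 + np.exp(-(h @ W2)))           # sigmoid output
    # backward pass: the error travels back through the same weights (W2.T),
    # the "weight transport" step that is biologically questionable
    d_out = out - y                             # gradient for sigmoid + cross-entropy
    d_h = (d_out @ W2.T) * (1 - h ** 2)         # tanh derivative
    W2 -= lr * h.T @ d_out / len(X)
    W1 -= lr * X.T @ d_h / len(X)

print(((out > 0.5) == y).mean())                # training accuracy
```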
This very special episode of Unsupervised Thinking takes place entirely at the IBRO-Simons Computational Neuroscience Imbizo in Cape Town, South Africa! Computational neuroscience is a very interdisciplinary field and people come to it in many different ways from many different backgrounds. In this episode, you'll hear from a variety of summer school students who are getting some of their first exposure to computational neuroscience as they explain their background and what they find interesting about the field. In the second segment of the episode, we go into a conversation with the teaching assistants about what could make training in computational neuroscience better in the future and what we wish we had learned when we entered the field. Finally, we throw it back to the students to summarize the impact this summer school had on them and their future career plans.
In the past few years, we've noticed researchers making more explicit efforts to engage with scientists in other countries, particularly those where science isn't well-represented. Inspired by these efforts, we took a historical dive into the international element of science with special guest Alex Antrobus. How have scientists viewed and communicated with their peers in other countries over time? To what extent do nationalist politics influence science and vice versa? How did the euro-centric view of science arise? In tackling these issues, we start in the 1700s and work our way up to the present, covering the "Republic of Letters," the Olympic model of scientific nationalism, communism, and decolonization. We end by discussing the ethical pros and cons of mentoring and building academic "outposts" in other countries. Throughout, we talk about the benefits of open science, the King of Spain's beard, and how Grace doesn't do sports.
Cerebellum literally means "little brain," and in a way, it has been treated as a second-class citizen in neuroscience for a while. In this episode we describe the traditional view of the cerebellum as a circuit for motor control and associative learning and how its more cognitive roles have been overlooked. First we talk about the beautiful architecture of the cerebellum and the functions of its different cell types, including the benefits of diversity. We then discuss the evidence for non-motor functions of the cerebellum and why this evidence was hard to find until recently. During this, we struggle to explain what cognitive issues someone with a cerebellar lesion may have and special guest/cerebellum expert Alex Cayco-Gajic tests our cerebellar function. Finally, we end by lamenting the fact that good science is impossible and Alex tells us how the future of neuroscience is subcortical!
Reinforcement learning is important for understanding behavior because it tells us how actions are guided by reward. But the topic also has a broader significance---as an example of the happy marriage that can come from blending computer science, psychology and neuroscience. In this way, RL is a poster child for what's known as Marr's levels of analysis, an approach to understanding computation that essentially asks why, how, and where. On this episode we first define some of the basic terms of reinforcement learning (action, state, environment, policy, value). Then we break it down according to Marr's three levels: what is the goal of RL? How can we (or an artificial intelligence) learn better behavior through rewards? and where in the brain is this carried out? Also we get into the relationship between reinforcement learning and evolution, discuss what counts as a reward, and try to improvise some relatable examples involving cake, cigarettes, chess, and tomatoes.
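If you'd like to see those basic terms in one place, here's a minimal sketch of tabular Q-learning. The toy environment (a five-state chain with a reward at one end) is our own invented example, not one from the episode, but it puts state, action, reward, policy, and value all on the screen at once.

```python
import random

# Toy Q-learning sketch: a 5-state chain with a reward for stepping
# "right" off the final state. Invented example for illustration.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0] * n_actions for _ in range(n_states)]   # value table

def step(state, action):
    """Environment: reward 1 for moving right from the last state, then reset."""
    if action == 1 and state == n_states - 1:
        return 0, 1.0
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    return next_state, 0.0

state = 0
for _ in range(10000):
    # epsilon-greedy policy: mostly exploit the value table, sometimes explore
    if random.random() < epsilon:
        action = random.randrange(n_actions)
    else:
        action = max(range(n_actions), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # value update: nudge Q toward reward plus discounted future value
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

print(Q)   # values grow toward the rewarded end of the chain
```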
In part two of our conversation on what counts as an explanation in science, we pick up with special guest David Barack giving his thoughts on the "model–mechanism–mapping" criteria for explanation. This leads us into a lengthy discussion on explanatory versus phenomenological (or "descriptive") models. We ask if there truly is a distinction between these model classes or if a sufficiently good description will end up being explanatory. We illustrate these points with examples such as the Nernst equation, the Hodgkin-Huxley model of the action potential, and multiple uses of Difference of Gaussians in neuroscience. Throughout, we ask such burning questions as: can a model be explanatory if the people who made it thought it wasn't? are diagrams explanations? and, is gravity descriptive or mechanistic?
As scientists, we throw around words like "explanation" a lot. We assume explaining stuff is part of what we're doing when we make and synthesize discoveries. But what does it actually take for something to be an explanation? Can a theory or model be successful without truly being one? How do these questions play out in computational neuroscience specifically? We bring in philosopher-neuroscientist David Barack to tackle this big topic. In part one of the conversation, David describes the historical trajectory of the concept of "explanation" in philosophy. We then take some time to try to define computational neuroscience, and discuss "computational chauvinism": the (extremist) view that the mind could be understood and explained independently of the brain. We end this first half of the conversation by defining the "3M" model of explanation and giving our initial reactions to it.
Machine learning has been making big strides in a lot of straightforward tasks, such as taking an image and labeling the objects in it. But what if you want an algorithm that can, for example, generate an image of an object? That's a much vaguer and more difficult request. And it's where generative models come in! We discuss the motivation for making generative models (in addition to making cool images) and how they help us understand the core components of our data. We also get into the specific types of generative models and how they can be trained to create images, text, sound and more. We then move onto the practical concerns that would arise in a world with good generative models: fake videos of politicians, AI assistants making our phone calls, and computer-generated novels. Finally, we connect these ideas to neuroscience, asking both how can neuroscientists make use of these and is the brain a generative model?
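As a tiny taste of what "generative" means here, consider the simplest possible case (our own toy example, not from the episode): a model you can draw new samples from.

```python
import numpy as np

# A generative model in miniature: a two-component Gaussian mixture.
# Once its parameters are set (or fit to data), it can generate
# brand-new samples from the distribution it describes.
rng = np.random.default_rng(0)
weights = [0.3, 0.7]                  # how often each component is used
means, stds = [-2.0, 3.0], [0.5, 1.0]

def generate(n):
    comps = rng.choice(2, size=n, p=weights)   # pick a component per sample
    return rng.normal(np.take(means, comps), np.take(stds, comps))

samples = generate(1000)
print(samples.mean())   # near 0.3 * (-2) + 0.7 * 3 = 1.5
```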
Because of the sheer number of neurons in the gut, the enteric nervous system is sometimes called the second brain. What're all those neurons doing down there? And what, or who, is controlling them? Science has recently revealed that the incredibly large population of microorganisms in the gut have a lot to say to the brain, by acting on these neurons and other mechanisms, and can impact everything from stress to obesity to autism. In this episode, we give the basic stats and facts about the enteric nervous system (and argue about whether it really is a "second brain") and cover how the gut can alter the brain via nerves, hormones, and the immune system. We then talk about what happens when mice are raised without gut microbes (weird) and whether yogurt has any chance of curing things like anxiety. Throughout, we marvel at how intuitive all this seems despite being incredibly difficult to actually study. All that plus: obscure literary references, Josh's hilariously extreme fear of snakes, multiple misuses of the word "species," and DIY feces transplants! (Apologies for minor audio issues on this one!)
You may have heard of predictive coding; it's a theory that gets around. In fact, it's been used to understand everything from the retina to consciousness. So, before we get into the details, we start this episode by describing our impressions of predictive coding. Where have we encountered it? Has it influenced our work? Why do philosophers like it? And, finally, what does it actually mean? Eventually we settle on a two-tiered definition: "hard" predictive coding refers to a very specific hypothesis about how the brain calculates errors, and "soft" predictive coding refers to the general idea that the brain predicts things. We then get into how predictive coding relates to other theories, like Bayesian modeling. But like Bayesian models, which we've covered on a previous episode, predictive coding is prone to "just-so" stories. So we discuss what concrete predictions predictive coding can make, and whether the data supports them. Finally, Grace tries to describe the free energy principle, which extends predictive coding into a grand unified theory of the brain and beyond.
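To make the "hard" version concrete, here's a minimal sketch under our own assumptions (a Rao-and-Ballard-flavored toy, not a model from the episode): a higher level carries a representation, predicts the input through a set of weights, and only the prediction error drives updates.

```python
import numpy as np

# "Hard" predictive coding in miniature: error units compute input minus
# prediction, fast inference adjusts the representation to shrink that
# error, and a slower Hebbian-like step adjusts the weights. Toy example.
rng = np.random.default_rng(1)
x = rng.normal(size=8)              # sensory input
W = rng.normal(size=(8, 3)) * 0.1   # generative weights: representation -> input
r = np.zeros(3)                     # higher-level representation

for _ in range(2000):
    error = x - W @ r               # error units: input minus prediction
    r += 0.1 * W.T @ error          # fast inference: reduce the error
    W += 0.01 * np.outer(error, r)  # slow learning of the generative weights

print(np.round(x - W @ r, 3))       # residual prediction error shrinks
```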
There is a tension when it comes to the study of behavior in neuroscience. On the one hand, we would love to understand animals as they behave in the wild---with the full complexity of the stimuli they take in and the actions they emit. On the other hand, such complexity is almost antithetical to the scientific endeavor, where control over inputs and precise measurement of outputs is required. Throw in the constraints that come when trying to record from and manipulate neurons and you've got a real mess. In this episode, we discuss these tensions and the modern attempts to resolve them. First, we take the example of decision-making in rodents to showcase what behavior looks like in neuroscience experiments (and how strangely we use the term "decision-making"). In these studies, using more natural stimuli can help with training and lead to better neural responses. But does going natural make the analysis of the data more difficult? We then talk about how machine learning can be used to automate the analysis of behavior, and potentially remove harmful human biases. Throughout, we provide multiple definitions of "behavior", Grace relates animal training to parenting, and our special guest Adam Calhoun uses his encyclopedic knowledge of this area to provide many insightful examples!
On this unique episode of Unsupervised Thinking, we talk not about a particular area of science, but about the process of doing science itself. In particular, we're discussing how scientists take in information from their niche research areas and beyond. The topic for this free-form conversation stemmed from interviews we collected at the latest Computational and Systems Neuroscience Conference (Cosyne), where we asked people to tell us about a research finding from outside their area that they thought was cool. You'll hear those interviews in this episode, along with our motivation for asking that question and our reaction to the responses. We then go on to speak broadly about our experiences at different conferences both big and small. In particular, we reveal how attending talks from far-reaching areas of science is a great way to build appreciation for your field and contextualize your research. Ultimately, influence from talks and colleagues is how scientists choose their projects, and so decisions of what to consume can have long-lasting effects. We give our personal examples of times when talks have unexpectedly impacted our research, and the concrete things we do to keep up with the literature. Throughout, there is also talk of the cultural divides that arise in research and Josh tells us how Game of Thrones special effects are done. Thanks to all the Cosyne attendees who agreed to be interviewed for this, and special thanks to Nancy Padilla, our special guest for this episode!
Sleep is such a ubiquitous part of our lives we may forget just how weird of a thing it is to spend a third of our days lying in darkness. In this episode on the science of sleep, we start by describing types of sleep (while appreciating its strangeness) and the negative cognitive effects of missing out on it. We also discuss the potential role of rapid eye movement (REM) sleep in training neural connections and how that idea has been ported to artificial intelligence. We then take a rare (for us) trip to the brain stem to understand the neural mechanisms of switching from wake to sleep and non-REM to REM sleep. While there we appreciate how difficult it is to tease apart the function of different neuron types in a heavily redundant system. Finally, we dip our toes into the world of dreams, discussing a theory of dreaming's role in developing consciousness and our own experiences with lucid dreaming. Throughout, we go down tangents to argue about the value of speculative science and how to quantify knowledge via Google search results.
In the late 19th century, many of the most basic foundations of neuroscience were laid, but not without a fight. In this episode, we cover the famous debate between Camillo Golgi and Santiago Ramon y Cajal regarding the very nature of neurons: are they separate cells or one conjoined mass? We also tell the story of some of the lesser known players, remark on the inability of those on the losing side to let go, and ask what a modern Neuron Doctrine debate would look like. After covering the history, we discuss two modern rethinkings of the Neuron Doctrine that focus the question not just on anatomy, but on whether neurons are the functional unit of the brain. What does "functional unit" mean? We don't know, but as we grapple with it we speak about the need to balance simplification with appreciation of details and (unfortunately) attempt to understand emergence again.
In this very special episode of Unsupervised Thinking, we bring together a group of neuroscientists and neuroscience enthusiasts to have a semi-structured discussion on the past, present, and future of the field of neuroscience. The group includes your three regular hosts plus Yann, Alex, and Ryan (whose voice you may recall from our Deep Learning episode) and we each give our thoughts on what got us into neuroscience, what we feel the field is lacking, and where the field will be in 20 years. This leads us on a path of discussing statistics, emergence, religion, depression, behavior, engineering, society, and more!
In the second part of our discussion on the concept of coding, we get into specific examples of what could count as the "neural code". In particular, we talk rate vs temporal coding along with examples of each in different organisms. We then get into the use of coding as a metaphor. How does it influence thought and experiments? Is an "action" or "process" metaphor a better alternative? We also discuss why language is the wrong metaphor for the neural code and the relationship between metaphors and formal mathematical systems. To illustrate the tension between a coding framework versus an action/process framework, we discuss a concrete example where the same finding is explained two different ways: do orientation-selective cells in primary visual cortex arise from sparse coding principles, or are they just what happens when you train a system to do vision?
We read:
Neural Representation and the Cortical Code
Is coding a relevant metaphor for the brain?
The concept of "coding," along with language referencing it, is abundant in neuroscience whether describing sensory systems, cognition, or motor control. The notion that neurons encode information is so core to neuroscience it is almost difficult to imagine the field without it. In the first part of this two-part discussion on coding, we talk about the origins of the coding concept. We start with some of the early experimental work that demonstrated the most basic response properties of neurons. We then delve into a conference report from the 60's that summarized the state of the field at that time and find it (depressingly?) relevant for today. In particular, the focus at that time on coding as only an imperfect metaphor is contrasted with its perhaps outsized role in modern work. We ask things like: When is the metaphor working, and when is it stretched beyond recognition? What qualities does a code need to have to be a candidate for the "neural code"? and If there is a neural code, who's reading it out? By the end, Grace freaks out about how abstract "information processing" is, and Josh and Conor claim the whole world is just one big information processing machine.
We read:
Neurosciences Research Program report on Neural Coding
And mentioned:
List of recommended readings on coding
Spikes
Our fMRI episode
Upcoming readings for Part 2:
Neural Representation and the Cortical Code
Is coding a relevant metaphor for the brain?
You move your eyes several times a second, making choices about what to attend to without even noticing. That is a lot of behavioral data that scientists could use to understand underlying computations, preferences, memories, and intentions. On this episode, we talk about just such endeavors to understand cognition by monitoring eye movements. Eye movements are a comparatively easy thing to measure (though as we discuss, some of the older methods seemed like torture devices), and can be used in a wide range of settings: in animals and babies, in the lab or in the wild. In this episode, Josh regales us with tales of using a modern eye-tracking device at a conference, and we talk about the basic findings you can discover from that and exactly how surprising or interesting they are. We then get into how tasks influence eye movements and the many forms of memory that eye movements can measure, including differences between novices and experts. We top it all off with a delightful study about looking at porn, and some speculations about how eye-tracking could be used in the future.
We read:
Eye movements in natural behavior
Worth a Glance: Using Eye Movements to Investigate the Cognitive Neuroscience of Memory
Sex differences in viewing sexual stimuli: An eye-tracking study in men and women
A Breadth-First Survey of Eye Tracking Applications
For this episode, special guest Nancy Padilla (E5: Neural Oscillations) returns to talk about a topic she now studies: social neuroscience. We get into the methods this rather new field uses to probe the neural processes behind social interaction, including the inherent difficulties in studying such a complex subject. We go from special pathways for bottom-up social processing such as smells and facial recognition to ideas about theory of mind and cooperation. In the process, we hit on the mirror neuron system, simultaneous dual-brain recordings, and the role of philosophy in the field. Two common questions throughout are: (1) are humans unique amongst animals? and (2) is social processing unique amongst neuroscience topics? Ultimately, we try to discover if social neuroscience is greater than the sum of its parts.
We read:
Conceptual Challenges and Directions for Social Neuroscience
Brain Basis of Human Social Interaction: From Concepts to Brain Imaging
Brain-to-Brain coupling: A mechanism for creating and sharing a social world
And mentioned:
Episode 10: Brain Size
For this episode, we try to de-thorn one of the thorniest topics in neuroscience: consciousness! Starting with the (not so) simple task of defining the c-word, we first lay out our own definitions but end up relying on the work of philosopher David Chalmers (easy and hard problems of consciousness, philosophical zombies). Then, after establishing the myriad of reasons why science can't actually study consciousness, we get into its attempts to do so. Specifically, we talk about studies on anaesthesia and the role of optical illusions in probing conscious perception. After that we go through a laundry list of all kinds of theories/models of consciousness put forth by neuroscientists, psychologists, and (ugh) physicists. Throughout, Josh complains about people saying they have "heightened consciousness", Conor advocates a lot of drug use (for science), and we discuss the politics of urination. If you're confused by any of this, no worries, Conor is too.
We read:
Scholarpedia: Neuronal Correlates of Consciousness
Scholarpedia: Models of Consciousness
Neural Correlates of Consciousness (sensory paper by Rees)
Neural correlates of consciousness during general anesthesia using functional magnetic resonance imaging (fMRI)
And mentioned:
20 Years of ASSC: are we ready for its coming of age? (thanks to @tweetsatpreet for pointing us to this journal)
Sloppiness, stiffness, and stomatogastric ganglion! This episode on underdeterminacy in neural circuits will introduce you to all these topics, as well as to special guest Alex Williams! To start, we take you way back to algebra class with a refresher on what makes a system "underdetermined" (essentially, more unknowns than constraints). There are two ways this can be a problem in neuroscience: (1) neural circuit modelers don't have enough data to constrain their models, and (2) biology itself is underconstrained, leading to differences across individuals within a species. We talk about both of these issues separately, the ways in which they interact, and the practical effects they have for the study of the nervous system. The first topic spurs a broad discussion on the philosophy of modelling and the potential pitfalls that careful scientists need to avoid. To explore this in more detail, we discuss an excellent modelling paper on the oculomotor system that demonstrates ways in which models should guide experiments. For the latter topic, we delve into Eve Marder's work on crustaceans, wherein she carefully documents the incredible variety across individuals. Having worked in Eve's lab himself, Alex provides expertise and anecdotes on this topic throughout!
We read:
Computational models in the age of large datasets
A modeling framework for deriving the structural and functional architecture of a short-term memory microcircuit
Robust circuit rhythms in small circuits arise from variable circuit components and mechanisms
And mentioned:
Our episode on "Does Neuroscience Need More Behavior?"
Why Are Computational Neuroscience and Systems Biology So Separate?
James Sethna's work on sloppiness
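For anyone who wants that algebra-class refresher in symbols, here it is (our own illustration):

```latex
% An underdetermined system: more unknowns than constraints.
% One equation, two unknowns --- infinitely many solutions:
\[
  x + y = 3
  \quad\Longrightarrow\quad
  (x, y) = (t,\; 3 - t), \quad t \in \mathbb{R}.
\]
% A circuit modeler faces the same situation when many different
% parameter sets reproduce the same measured activity.
```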
To much of the world, the face of neuroscience is an image of a brain with small colored blobs on it. Those images come from functional magnetic resonance imaging (fMRI), a technology that's made a big splash in its relatively short tenure. For this episode, we delve into fMRI and what scientists do with the data it produces. To start, we review the technology behind MRI and fMRI. We get into the thorny issue of relating the BOLD signal recorded from fMRI with actual neural activity, and what's been learned from animal studies that have looked at both simultaneously. After that we talk stats: particularly the trouble with traditional "voxel"-wise methods and putting all your eggs in one basket (or in separate, but similar, baskets?). Approaches to fMRI analysis are quickly evolving however, and so we discuss multi-voxel pattern analysis, comparing across individual brains, real-time analysis, mind reading, and lie detecting. Finally, we turn a little more philosophical and ask "What does it mean to measure information in the brain?". Is what an experimenter can see in these colored patterns even relevant to the brain itself??
We read:
Interpreting the BOLD Signal
Computational Approaches to fMRI Analysis
Is Neuroimaging Measuring Information in the Brain?
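To make the stats talk concrete, here's multi-voxel pattern analysis in miniature (our own toy, with random data standing in for real scans, so decoding should hover near chance):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# MVPA in miniature: instead of testing each voxel separately, decode the
# experimental condition from the pattern across voxels. Random stand-in
# data, so cross-validated accuracy should sit near chance (0.5).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))    # 80 trials x 200 voxels
y = np.repeat([0, 1], 40)         # two conditions, 40 trials each

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())
```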
Sometimes scientists decide to turn their tools of inquiry inward to understand their own fields and behaviors. For our 20th episode, we're diving into this meta-science by reading some papers written by scientists studying scientists. In particular, we start with a commentary discussing the increasing size of scientific teams, and what that means for credit assignment. Do we need to move to a more Hollywood approach by highlighting specific achievements in different roles? Also, when will we address the fact that most young researchers on these teams will not have a career in academic science? We then get into a modeling study that aims to show how incentivizing the publication of novel results can ultimately lead to a widespread decrease in scientific quality. This raises questions of whether individuals or the system is to blame for high rates of shoddy publications. We then touch on a small experiment that the conference NIPS (Neural Information Processing Systems) performed on their peer review system, showing that (spoiler alert! or probably not, if you've been subjected to peer review...) the process can appear somewhat random. Finally, we go over a report that tracked trends in neuroscience research over the past ten years. We find that a meta-study of a field can seem very different from the view inside of it. We close by mentioning how studies of science done by scientists differ from those done by the humanities, and how both may be of use.
We read:
Together We Stand
The Natural Selection of Bad Science
The NIPS Experiment
The Changing Landscape of Neuroscience Research, 2006–2015: A Bibliometric Study
Way back in Episode 16 we paired up with Always Already to talk about a book on gender and the interaction of science and society. Unsurprisingly, that conversation spanned far beyond the scientific study of gender and so we never really got into the biological weeds. Our intent with this episode was to go back to gender, with a focus on explaining the current state of the science. What we quickly learn, however, is that it's very difficult to talk about gender without talking about society. So we first work through this by airing our anxieties on the topic, and our personal motivations for finding this science interesting. Eventually though, we break into the biology of embryonic sexual differentiation and certain "natural experiments" that alter the course of this differentiation. People with abnormal differentiation offer a chance to see what happens when things like chromosomal sex (XX vs XY) and external genitalia are decoupled, which offers some insight into normal gender development. Next we cover some biological hypotheses on sex that didn't pan out (but are still being promoted...). Finally we turn to the better controlled world of animal experimentation and cover what factors impact gendered behavior in macaque monkeys. As it turns out, a lot of findings on gender don't replicate, but here's one that does: whenever the Unsupervised Thinking crew has a conversation on gender it takes more than an hour.
For most people, the desire to study neuroscience comes from a desire to understand how, in some form, the brain leads to behavior. Generally, neuroscientists focus on the brain side of that relationship, but what obligation do they have to study behavior? Is it even possible to do proper neuroscience without a clear documenting of the behavior we seek to understand? We use a recent opinion article ("Neuroscience Needs Behavior: Correcting a Reductionist Bias") as a jumping off point to discuss these issues. In the paper, the authors argue that behavior is being neglected amongst neuroscientists and it must return to its status as "epistemologically prior." In particular, there are arguments for studying more natural behavior and quantifying behavior more precisely. In this episode we explain our general sympathies with this argument, but question the extent to which change is required. Should all neuroscientists stop what they're doing and study behavior? Are modern technologies drawing scientists away from the "bigger questions"? No, probably not. But this article does bring up questions about how we, as individuals and as a field, choose what to study. Different implicit beliefs about what levels of explanation are satisfying lead to different research priorities. Progress in neuroscience would be best served by neuroscientists who better understand these implicit beliefs in themselves and others.
There's a lot to be said about the technical progress being made with artificial intelligence, but what about the impact these rapid advances have on the society in which they unfold? In this episode, we tackle a broad range of such issues, from the possibility of removing human bias from algorithms to how likely we are to fall in love with an AI (Conor might). We speculate on how difficult the transition from humans to self-driving cars will be and our wild uncertainty about the future of jobs/the value of human labor. Throughout you will see a poorly-veiled concern about the current political state of the world and how wealth and power will be distributed in the future. What we learn though, is that in addition to the economic and technological impacts, the use of AI is having at least one major side effect: it's forcing us to explicitly define our goals and values, such that we can impart them to our digital offspring. Now if we could just agree on what those goals and values are...
On this "very special" episode of Unsupervised Thinking, we partner with our fellow podcasters over at Always Already, a critical theory podcast, to burst out of our respective academic bubbles and tackle issues of science and society. The fodder for our conversation is Brain Storm, a book by Rebecca Jordan-Young, that lays out the evidence that prenatal hormone exposures influence gender differences in behavior later in life. In the book, she claims that the sum total of the studies she covers only offers weak support for the hypothesis, and that scientists need to appropriately incorporate other factors into their models such as socialization and environment. While we use this book as a common starting point, our conversation quickly moves beyond the particulars of these gender science studies. We start by questioning who is the intended audience of this book and what it's trying to say to different groups. This moves us into a discussion on critiques of science made by non-scientists and the role that those should/could play in shaping research agendas. We also spend some time dissecting the two-way street between science and society: particularly, how are common notions of gender shaped by scientific studies and how do society's stereotypes seep into the methods of science? An underlying disagreement about the nature of truth peppers the discussion, but we hold off on a full-blown debate on that. Ultimately it is clear that the extent and cause of gender differences in behavior is far from settled science, and that is something on which we all can agree.
In the mid-1700s, English minister Thomas Bayes discovered a simple mathematical rule for calculating probabilities based on different information sources. Since then Bayesian models for describing uncertain events have taken off in a wide variety of fields, not the least of which is psychology. This Bayesian framework has been used to understand far-reaching psychological processes, such as how humans combine noisy sensory information with their prior beliefs about the world in order to come to decisions on how to act. But not everyone is riding the Bayesian train. In this episode, we discuss a published back and forth between scientists arguing over the use and merits of Bayesian modeling in neuroscience and psychology. First, though, we set the stage by describing Bayesian math, how it is used in psychology, and the significance of certain terms such as "optimal" (it may not mean what you think it does) and "utility". We then get into the arguments for and against Bayesian modeling, including its falsifiability and the extent to which Bayesian findings are overstated or outright confused. Ultimately, it seems the expansive power of Bayesian modeling to describe almost anything may in fact be its downfall. Do Bayesian models give us insight on animal brains and behaviors, or just a bunch of "just-so" stories?
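For the curious, here is the math being talked around, in the standard textbook form (our illustration, not anything from the papers discussed):

```latex
% Bayes' rule: combine a prior belief with new evidence.
\[
  p(\theta \mid x) \;=\; \frac{p(x \mid \theta)\, p(\theta)}{p(x)}
\]
% The workhorse example in psychology: a Gaussian prior
% \theta \sim \mathcal{N}(\mu_0, \sigma_0^2) and Gaussian sensory
% evidence x \sim \mathcal{N}(\theta, \sigma^2) give a posterior whose
% mean is a precision-weighted average of prior and evidence:
\[
  \mu_{\mathrm{post}} = \frac{\sigma^2 \mu_0 + \sigma_0^2 x}{\sigma^2 + \sigma_0^2},
  \qquad
  \frac{1}{\sigma_{\mathrm{post}}^2} = \frac{1}{\sigma_0^2} + \frac{1}{\sigma^2}.
\]
% "Optimal" in these debates usually means exactly this weighting.
```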
Computational psychiatry is a field in its infancy, but it offers potential to change the way mental disorders are diagnosed, treated, and understood. In this episode, we cover the different components of computational psychiatry, compare and contrast it to computational neuroscience, and discuss which of its promises are actually likely to be fulfilled. In particular, we get into:
- online games that inform diagnoses
- models of learning mechanisms that can explain disorder
- machine learning techniques that advise treatment plans
- neural circuit mechanisms that can('t) explain disease
As computational neuroscientists, the members of this podcast take for granted the fact that math can be used to describe the brain. But is this ability of math trivial or deeply surprising? On this episode we tackle the large philosophical issues behind computational sciences, with the help of a set of articles entitled "The Unreasonable Effectiveness of Mathematics in the Natural Sciences" (1960) and "The Unreasonable Effectiveness of Mathematics" (1980). These articles, written by mathematicians, claim that the ability of mathematics to explain the world is unexpectedly impressive. This leads us to a set of questions: What is math? What is the world? And how do we know what to expect from either? Is this ability of math truly unreasonable or did it just appear so at a certain time? As we explore these ideas we discuss thought experiments involving artificial intelligence and real experiments involving the Cold War. We also hypothesize on the role of elegance in mathematics and what "understanding" really means to us as humans.
The body is generally kept at around 37 degrees C, and the brain maybe even a degree higher. So it might seem like a bad idea to cool it down. But against intuition (or with it?) freezing the brain can actually be helpful, both for science and for medicine. In this episode we explain how cooling regions of the brain is used to create temporary lesions that allow scientists to explore their function. We also get into the tools scientists use to achieve this. We then cover more modern experiments, wherein specific cooling has given insights on key aspects of the visual system, bird song production, and even human speech production. Finally, we get into the medical uses of cooling including how it can be used to fight long-term effects of traumatic brain injury. Throughout, we hit on the absurdities of brain region naming, how rats feel about research, and some wild speculation about ancient Egyptians.
The connectome---a silly word and a somewhat simple concept that has caused significant debate amongst neuroscientists worldwide. In this episode, we teach the controversy surrounding the big C. First, we trudge through a definition of the connectome (it means different things to different people, but generally centers on mapping out what's connected to what in the brain), and methods used to obtain it. We then talk about what we've learned from the only animal for which we have the full connectome: the worm C. elegans. Then, via a comparison to the Human Genome Project (inspiration for the word "connectome"), we ask whether neuroscientists believe it's worth the money and the collective effort of the field to pursue the connectome in other animals. This leads us onto a tangent regarding how the public views neuroscience and whether it's morally acceptable to oversell work in order to gain funding. Ultimately, we end in a state of uncertainty, much like the field of neuroscience itself.
It's the big 1-0 here at Unsupervised Thinking and for it we are tackling a big topic: brain size! [insert obligatory "does size matter" joke here]. Ever since humans started using their intelligence to compare their brains to those of other animals, we've been searching for what makes us so much more intelligent than those other animals. In this episode we ask: are we so much more intelligent? And if so, is it due to our brain mass? Volume? Number of neurons? After summarizing how these properties vary across species, we get into the evolutionary pressures that would lead to big brains and how scientists study brain size today. Finally, we look within humans to see how brain size varies across people and whether it's correlated with intelligence. Throughout we learn that Conor is freaked out by the fact that whales are mammals in water and Josh has some really weird ideas about fabric.
Do you know what happens when scientists patent their ideas? Do you know who gets the money from those patents? Did you know that scientists even patent things at all? We weren't so sure about these topics back in our Optogenetics episode, so we brought in Penn Law alum/fellow neuroscience grad student Jozsef Meszaros to tell us about this interesting and under-covered topic for scientists. We ask if academics are allowed to use patented objects without a licence, who owns the rights to a patent when research is publicly-funded, how do scientists feel about the patent system, and why does the patent system even exist? We also work through who benefits and who is getting shafted in this system between scientists, universities, tax-payers, the government, and corporations.
In some ways, neuroscience is like psychology's little brother. It came around later, started encroaching on psychology's turf and drawing attention to itself, and so they don't always get along. On this episode we explore the place where neuro and psych interact, and the separate roles for each. We embarked on this after reading an article written by psychologists, wherein they describe their perspective on how neuroscience is affecting the field of psychology. We cover our opinions on the differences between these two fields, the philosophical basis for these differences, and how that manifests in practical considerations. Specifically we talk hiring practices, disease treatment and "neuro-seduction". At the heart of all this is the question: on what level should we approach the study of the brain and mind? and who are we to decide?
Inside a skull it's generally pretty dark. But about 10 years ago, as part of a new method for toying with neurons, scientists started shining a light in there. That method is optogenetics, and it allows for highly specific control of neural activity with light, via genetic engineering. As part of its 10th anniversary, we cover the development and impact of this influential technology. With help from a review article, we talk about the scientific atmosphere when optogenetics was unveiled and what allowed it to take off so quickly. Then we get into the mechanisms of how neurons come to be controlled by light (and we stumble upon a fantastic government conspiracy theory along the way). Finally we discuss what has been learned by applying optogenetics for the past 10 years, what we could possibly expect to learn, and the possibility of therapeutics based in this technology.
tDCS---transcranial direct current stimulation---is old school brain manipulation involving electrodes and sponges. For our 6th episode we describe the basic premise of tDCS (really simple) and its effects on the brain (really not simple). We also talk about its use as a treatment of depression and other clinical applications, which leads us into a meaty digression on clinical science and whether the methods used for discovering medical treatments make any goddamn sense. Finally, we speak of tDCS's scientific uses for understanding the role of various brain areas, and if any of us would ever try it on ourselves.
For our 5th episode, we get into braaaiiiiinnnwaaaaaavessss. By which we mean neural oscillations. By which we really mean a lot of different things it turns out. For this, we bring in special guest Nancy Padilla, who actually puts electrodes into animals to study these things. We define the vocabulary of the field and then Nancy tells us how she uses these measurements for her own work. Then with the help of this paper, we get into what we think can reasonably be concluded from extracellular-oscillation style studies, and the seemingly seductive nature of oscillations to explain everything. All throughout you're gonna hear a lot about LFPs (local field potentials), including Conor's lament about their undefinable nature. And Josh is going to demand that Nancy explain how oscillations could be of use to us computational types. Finally we wrap up with a bit of redemption and common ground, surrounding this paper on "ephaptic coupling".
As you may have heard, there's been a lot of talk about something called deep learning lately. So for our fourth episode, we and our special guest, Ryan Tombacco, are taking a deep dive into deep learning! We give broad overviews of machine learning, artificial intelligence, and the historical foundations of deep learning and then we get into some nitty gritty examples of what deep learning can do today. All throughout Ryan uses his podcast-ready voice to ask some really insightful questions, and we end up musing on the relationships between deep learning and the brain along with some philosophy of learning (Josh even brings up Kant...)
For our third episode, we cover "neuromorphic computing". The attempt to build hardware that functions like neurons is a fairly new field of research. We discuss how building neurons on a chip is possible, how it compares to standard computing and standard neural modeling, and the principles of design that make something "neuromorphic". We also ask whether any of this is worth it, for engineering purposes or for neuroscience.
For our second episode, we break down brain-computer interfaces (BCI). We'll go over what signals in the brain are used to interface with computers and how. Included is some constrained optimism about how well these devices can perform and where the limitations come from. Our focus is mostly on motor interfaces (prosthetic arms, for example), though we touch on other applications.