POPULARITY
Kiko Loureiro is a Grammy Award-winning Brazilian heavy metal guitarist, best known as a member of the bands Angra and Megadeth. His new solo album "Theory of Mind" is a tour de force of strong melodies, incendiary guitar playing, and great arrangements! Alongside his band Angra, he has released eight studio albums, received two gold records, and is considered one of the most important Brazilian musicians of all time. His solo career started with the album "No Gravity". His second album, "Universo Inverso", was recorded in only three days and features some of the best jazz musicians in Latin America. Three years later, Kiko released "Fullblast", a heavy and progressive piece. In 2008 he also presented his fusion project, Neural Code, in which the band explores rock and jazz fueled with Brazilian rhythms. "Sounds of Innocence", his fourth solo album, was released in 2012, combining metal, jazz, and Brazilian rhythms. The year 2015 came with big news: Kiko was invited to join Megadeth and recorded the guitars for Megadeth's fifteenth studio album, "Dystopia". It was in early 2017 that he received the greatest recognition of his career, becoming the first Brazilian to win a Grammy Award playing in a rock band: Megadeth took the award for "Best Metal Performance" with the album "Dystopia". "Open Source", released in July 2020, presents new musical directions in Kiko's compositions. "Out of Nothing" is the new single from Kiko's sixth solo album. After a career of 25 years, Kiko Loureiro shows that he is a tireless and complete musician who is constantly looking to push his limits and discover something new! Website: www.KikoLoureiro.com Social Media: https://www.facebook.com/KIKOLOUREIROofficial https://open.spotify.com/artist/5RmLUrjJz5bpP8dF4lYdyK?si=sJW471T6QLqxVjAULSOdYA https://www.instagram.com/kikoloureiro Music Matters Podcast is hosted by pro musician, producer, and Sports Illustrated photographer Darrell Craig Harris, who is based in Las Vegas, Nevada.
The series has reached over 400,000 streams in 40 countries. MMP receives gear support from Shure Microphones, Focusrite, and others. Intro and outro voice-overs provided by Nigel John Farmer of www.VoiceWrapStudio.com in France. Our thanks to Rodney Hall of FAME Recording Studios in Muscle Shoals, Alabama, for our intro and outro backing music clip. Contact us at inquiry@darrellcraigharris.com Social: www.Instagram.com/musicmatterspodcastofficial www.Facebook.com/musicmatterspodcastofficial Website: www.MusicMattersPodcast.com
This episode of To the Edge & Beyond is Part 2 of the Edge Neural Technology series, where host Michelle Dawn Mooney is joined by Intel AI experts Zach Meicler-Garcia, Sanjana Kamath, and Sanjay Addicam to explore the groundbreaking advancements in Intel's Edge Neural Technology. This episode delves into the inception, functionality, and far-reaching impact of Neural Code technology, a revolutionary approach to AI model building and training that is reshaping industries like healthcare and education. Zach Meicler-Garcia begins by tracing the origins of Neural Code technology, which draws inspiration from Dr. Sheila Nirenberg's pioneering research at Weill Cornell Medicine. "The neural code mimics the human retina's behavior, extracting key features from a scene and converting them into a format that AI can process efficiently," Meicler-Garcia explains. This technology reduces reliance on large datasets by focusing on motion and essential features, making it an innovative solution for AI model creation with minimal data inputs. Sanjana Kamath discusses the practical applications and benefits of Neural Code technology, emphasizing its ability to enhance AI explainability. "The Neural Code enables the creation of shallower Convolutional Neural Networks (CNNs), which preserve privacy and remove bias, making them ideal for data-sensitive environments," she highlights. Kamath also underscores how Intel's no-code graphical interfaces and edge training capabilities make advanced AI accessible to users across various sectors, without the need for extensive coding expertise. Sanjay Addicam expands on the technology's potential, particularly in addressing challenges like hallucinations caused by generative AI video algorithms.
"Even with limited data, Neural Code ensures accurate AI outputs and supports rapid model building," Addicam explains, pointing to the future of qualitative benchmarking as a game-changer in the AI space. Intel's Edge Neural Technology stands as a major leap forward in AI, offering a blend of accuracy, privacy, and seamless deployment. This revolutionary approach is poised to redefine AI applications across industries, transforming how we interact with technology. Discover more about their cutting-edge technology: Zach Meicler-Garcia, Sanjana Kamath, and Sanjay Addicam. Subscribe to the "To the Edge & Beyond" podcast on Apple Podcasts and Spotify to engage with more thought leaders from the Intel and Edge Network group.
Today, we are honored to present a luminary in the realm of neuroscience, the visionary inventor of the groundbreaking Precognition Method, and an adventurer in the vast landscapes of science and human potential: the brilliant Dr. Kaushik Ram, PhD. Dr. Ram's career is nothing short of extraordinary, marked by a relentless pursuit of understanding the intricacies of the human brain and its profound connections with matters of the heart. What sets him apart is not just his extensive academic achievements but his exceptional talent for translating complex scientific concepts into relatable and digestible insights. In today's episode, we have the privilege of delving into the depths of Dr. Ram's wisdom. Join us as he unravels the necessity of that first heartbreak, the peril of excessive swiping on social media and dating apps, and the fascinating impact of hormonal changes on attraction. Dr. Ram takes us on a journey through the unprecedented exposure to countless connections in our modern world, urging us to contemplate the essence of genuine connections in an era dominated by fleeting encounters. Prepare to be enlightened and inspired as we explore the intersection of science, emotions, and the human experience with the brilliant mind of Dr. Kaushik Ram. Thank you for watching and, if you enjoyed this, please consider liking, commenting, and subscribing to the channel.
It is widely thought that spikes (action potentials) are the main carriers of information in the brain. But what is the neural code? That is, which aspects of the spike trains carry the information: the detailed temporal structure, or only the average firing rate? And is there information in the correlations between spike trains in populations of similar neurons? The guest has thought about these and other coding questions throughout his career.
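The rate-versus-timing question in this episode description can be made concrete with a toy example: two spike trains with identical average firing rates can carry very different temporal patterns, so a pure rate code cannot distinguish them while a temporal code can. A minimal sketch (the spike times and bin count are arbitrary illustrative choices):

```python
import numpy as np

def firing_rate(spike_times, duration):
    """Average firing rate in Hz: total spike count divided by duration."""
    return len(spike_times) / duration

def binned_pattern(spike_times, duration, n_bins):
    """Spike counts in fixed time bins: a crude readout of temporal structure."""
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, duration))
    return counts

# Two trains with identical average rate but very different timing.
regular = [0.1, 0.3, 0.5, 0.7, 0.9]       # evenly spaced spikes
bursty = [0.10, 0.12, 0.14, 0.16, 0.18]   # one early burst

# A rate code sees no difference; a temporal (binned) code does.
assert firing_rate(regular, 1.0) == firing_rate(bursty, 1.0)
print(binned_pattern(regular, 1.0, 5))  # [1 1 1 1 1]
print(binned_pattern(bursty, 1.0, 5))   # [5 0 0 0 0]
```

Population-level correlation codes, the third question above, would need multiple simultaneous trains and are beyond this sketch.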
Dr Amanda Foo-Ryland, life coach, neural coder, and international speaker, shares how her life's incredible experiences gave her the strength and depth to live life more fully and to help others rediscover power, passion, and mental resilience no matter what life throws at them, and how we always have choices. We discuss the neural necessity of context, wholeness, and relativity. Resources: Amanda's TEDx talk; the Neural Code report (find out why you might be stuck); Shira Cohen's book Reclaiming Our Wellbeing; Iain McGilchrist, The Matter With Things.
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.04.22.537782v1?rss=1 Authors: Akritas, M., Armstrong, A. G., Lebert, J. M., Meyer, A. F., Sahani, M., Linden, J. F. Abstract: The perceptual salience of a sound depends on the acoustic context in which it appears. Single-neuron correlates of this contextual sensitivity can be estimated from neuronal responses to complex sounds using the nonlinear-linear "context model". Context models provide estimates of both the principal (spectrotemporal) receptive field of a neuron and a "contextual gain field" describing its nonlinear sensitivity to combinations of sound input. Previous studies of contextual gain fields in auditory cortex of anesthetized mice have revealed strong neuron-specific patterns of nonlinear sensitivity to sound context. However, the stability of these patterns over time, especially in awake animals, is unknown. We recorded electrophysiological activity of neurons in the auditory cortex of awake mice over many days using chronically implanted tetrode arrays, while also obtaining continuous measures of the animal's behavioral state (locomotor activity and pupil diameter), during repeated presentations of prolonged complex sounds. Waveform matching identified neurons that were recorded over multiple days. We estimated principal receptive fields and contextual gain fields for each neuron in each recording session, and quantified the stability of these fields within and across days. We also examined the dependence of context model fits on measures of behavioral state. Contextual gain fields of auditory cortical neurons in awake mice were remarkably stable across many days of recording, and comparable in stability to principal receptive fields. 
Interestingly, while patterns of contextual sensitivity to sound combinations were qualitatively similar to those previously observed in anesthetized mice, there were small but significant effects of changes in locomotion or pupil size on the ability of the context model to fit temporal fluctuations in the neuronal response. We conclude that contextual sensitivity is an integral and stable feature of the neural code in the awake auditory cortex, which might be modulated by behavioral state. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
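The nonlinear-linear context model summarized above can be caricatured in a few lines: the predicted response is a principal-receptive-field drive whose gain is modulated by the surrounding stimulus. This is a deliberate simplification of the published model (in the real model each input has its own contextual gain term; here a single scalar gain serves the whole stimulus window), intended only to show the structure:

```python
import numpy as np

def context_model_response(stim, prf, cgf):
    """Toy nonlinear-linear context model (illustrative, not the published fit).

    stim: (n_freq, n_time) spectrogram-like input.
    prf:  (n_freq, n_lag) principal receptive field weights.
    cgf:  (n_freq, n_lag) contextual gain weights over the same local history;
          setting cgf to zero recovers a plain linear receptive-field model.
    """
    n_freq, n_time = stim.shape
    n_lag = prf.shape[1]
    resp = np.zeros(n_time)
    for t in range(n_lag, n_time):
        window = stim[:, t - n_lag:t]           # recent stimulus history
        gain = 1.0 + np.sum(cgf * window)       # context scales the drive
        resp[t] = gain * np.sum(prf * window)   # PRF drive times the gain
    return resp

stim = np.ones((2, 8))
prf = np.full((2, 3), 1 / 6)        # linear drive sums to 1.0 on this input
no_context = context_model_response(stim, prf, np.zeros((2, 3)))
with_context = context_model_response(stim, prf, np.full((2, 3), 1 / 6))
print(no_context[5], with_context[5])   # 1.0 vs 2.0: context doubles the drive
```

The paper's question of stability amounts to asking whether `prf` and `cgf` fitted on different days look the same.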
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.11.19.517091v1?rss=1 Authors: Akbiyik, S., Caramazza, A., Wurm, M. F. Abstract: Observing others' actions recruits frontoparietal and posterior temporal brain regions - also called the action observation network. It is typically assumed that these regions support recognizing actions of animate entities (e.g., a person jumping over a box). However, objects can also participate in events with rich meaning and structure (e.g., a ball bouncing over a box). So far, it has not been clarified which brain regions encode information specific to goal-directed actions or more general information that also defines object events. Here, we revealed a shared neural code for visually presented actions and object events throughout the action observation network. We argue that this neural representation captures the structure and physics of events regardless of animacy. We find that lateral occipitotemporal cortex encodes information about events that is also invariant to stimulus modality. Our results shed new light on the representational profiles of posterior temporal and frontoparietal cortices, and their roles in encoding event information. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.11.10.515823v1?rss=1 Authors: Huang, C. G., Sabesan, S., Lesica, N. A. Abstract: Natural sounds, such as speech, are complex time-varying waveforms containing information critical to how we communicate with each other and navigate the external world. Hearing loss results in a breakdown of this information and causes distortions in the neural code. As a result, perception of complex sounds such as speech is compromised. This problem is further complicated by the fact that sound intensity varies in natural settings, both in quiet and in noisy backgrounds. Somewhat paradoxically, despite increased audibility at high sound intensities, perception and discrimination of speech is actually diminished, especially in the presence of background noise. This phenomenon is known as rollover of speech and its neural basis is poorly understood in both normal-hearing and hearing-impaired listeners. Here we performed in vivo electrophysiology in awake and anaesthetized Mongolian gerbils (Meriones unguiculatus) to investigate how hearing loss affects the neural encoding of speech. We presented 22 vowel-consonant-vowel (VCV) syllables to the gerbils and recorded neural responses from the inferior colliculus (IC). We used a k-nearest neighbor neural classifier to investigate whether IC neurons could discriminate between different consonants in normal-hearing (NH) and noise-exposed hearing-loss (HL) animals. We found that neural correlates of perceptual rollover were present in the IC and that discrimination performance decreased when VCVs were presented in background noise compared to quiet. Furthermore, we found that forward masking played a prominent role in shaping neural responses and discrimination between various consonants in NH and HL animals. These results suggest there is a critical trade-off in listening between audibility and rollover mediated by temporal masking.
Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
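The k-nearest-neighbor neural classifier mentioned in the abstract reduces to a simple rule: represent each neural response as a vector (for example, binned spike counts) and assign a held-out response the consonant label most common among its k closest training responses. A sketch with synthetic stand-in data (the templates and noise level are invented for illustration, not the gerbil recordings):

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, x, k=3):
    """Label x by majority vote among its k nearest training responses."""
    dists = np.linalg.norm(train_X - x, axis=1)    # Euclidean distances
    nearest = np.argsort(dists)[:k]
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Synthetic "IC responses": noisy copies of a template per consonant.
rng = np.random.default_rng(1)
template = {"t": np.array([5.0, 1.0, 0.0]), "d": np.array([0.0, 1.0, 5.0])}
train_y = list("ttttdddd")
train_X = np.vstack([template[c] + rng.normal(0.0, 0.3, 3) for c in train_y])

print(knn_classify(train_X, train_y, template["d"]))  # the clean "d" template decodes as "d"
```

Adding background noise in this picture corresponds to pushing the response clouds closer together, which is how decoding performance can fall even when overall activity rises.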
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.24.513537v1?rss=1 Authors: Avery, J. A., Carrington, M., Martin, A. Abstract: Inferences about the taste of foods are a key aspect of our everyday experience of food choice. Despite this, gustatory mental imagery is a relatively under-studied aspect of our mental lives. In the present study, we examined subjects during high-field fMRI as they actively imagined basic tastes and subsequently viewed pictures of foods dominant in those specific taste qualities. Imagined tastes elicited activity in the bilateral dorsal mid-insula, one of the primary cortical regions responsive to the experience of taste. In addition, within this region we reliably decoded imagined tastes according to their dominant quality (sweet, sour, or salty), thus indicating that, like actual taste, imagined taste activates distinct quality-specific neural patterns. Using a cross-task decoding analysis, we found that the neural patterns for imagined tastes and food pictures in the mid-insula were reliably similar and quality-specific, suggesting a common code for representing taste quality regardless of whether explicitly imagined or automatically inferred when viewing food. These findings have important implications for our understanding of the mechanisms of mental imagery and the multimodal nature of presumably primary sensory brain regions like the dorsal mid-insula. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.15.512349v1?rss=1 Authors: Patel, T., Morales, M., Pickering, M. J., Hoffman, P. Abstract: How does the brain code the meanings conveyed by language? Neuroimaging studies have investigated this by linking neural activity patterns during discourse comprehension to semantic models of language content. Here, we applied this approach to the production of discourse for the first time. Participants underwent fMRI while producing and listening to discourse on a range of topics. We quantified the semantic similarity of different speech passages and identified where similarity in neural activity was predicted by semantic similarity. A widely distributed and bilateral network, similar to that found for comprehension, showed such effects when participants produced their own discourse. Critically, cross-task neural similarities between comprehension and production were also predicted by similarities in semantic content. These results indicate that discourse semantics engages a common neural code during both comprehension and production. Furthermore, common coding in right-hemisphere regions challenges the idea that language production processes are strongly left-lateralised. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.07.511138v1?rss=1 Authors: Naud, R., Wang, X., Friedenberger, Z., Shin, J. N., Beique, J.-C., Larkum, M., Doron, G. Abstract: Theories of attention and learning have hypothesized a central role for high-frequency bursting in cognitive functions, but experimental reports of burst-mediated representations in vivo have been limited. Here we used a novel demultiplexing approach to separate independent streams of information by considering neurons as having three possible states: silent, singlet-firing, and burst-firing. We studied this ternary neural code in vivo while animals learned to behaviorally report direct electrical stimulation of the somatosensory cortex and found two acquired yet independent representations. One code, the event rate, represented the stimulus in a small fraction of cells and showed a small modulation upon detection errors. The other code, the burst fraction, correlated more globally with stimulation and responded more promptly to detection errors. Bursting modulation was potent and its time course evolved, even in cells that were considered unresponsive based on the firing rate. During the later stages of training, this modulation in bursting happened earlier, gradually aligning temporally with the representation in event rate. The alignment of bursting and event rate modulation sharpened firing-rate-coded representations, and was strongly associated with behavioral accuracy. Thus, a fine-grained separation of spike timing patterns reveals two signals that accompany stimulus representations: an error signal that can be essential to guide learning and a sharpening signal that could enact top-down attention. Copyright belongs to the original authors. Visit the link for more info. Podcast created by PaperPlayer
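The silent/singlet/burst demultiplexing described above can be illustrated with a toy spike-train parser: spikes closer together than a threshold inter-spike interval are grouped into one event, and an event with more than one spike counts as a burst. The event count and the burst fraction then form two separable channels (the 10 ms threshold is an arbitrary illustrative value, not the paper's criterion):

```python
def demultiplex(spike_times, burst_isi=0.01):
    """Group spikes into events; return (n_events, burst_fraction).

    Consecutive spikes separated by less than burst_isi (seconds)
    join the same event; events with >1 spike are bursts.
    """
    if not spike_times:
        return 0, 0.0
    events = [[spike_times[0]]]
    for t_prev, t in zip(spike_times, spike_times[1:]):
        if t - t_prev < burst_isi:
            events[-1].append(t)      # continue the current event
        else:
            events.append([t])        # start a new event
    n_bursts = sum(1 for e in events if len(e) > 1)
    return len(events), n_bursts / len(events)

# Three events: a singlet, a 3-spike burst, a singlet.
spikes = [0.100, 0.200, 0.205, 0.209, 0.400]
print(demultiplex(spikes))   # 3 events, one of which is a burst
```

In the paper's framing, the first returned value plays the role of the event-rate channel and the second the burst-fraction channel.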
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.10.03.510639v1?rss=1 Authors: Lee Masson, H., Chen, J., Isik, L. Abstract: Recognizing and remembering social information is a crucial cognitive skill. Neural patterns in the superior temporal sulcus (STS) support our ability to perceive others' social interactions. However, despite the prominence of social interactions in memory, the neural basis of retrieving social interactions is still unknown. To fill this gap, we investigated the brain mechanisms underlying memory of others' social interactions during free spoken recall of a naturalistic movie. By applying machine learning-based fMRI encoding analyses to densely labeled movie and recall data, we found that STS activity patterns evoked by viewing social interactions predicted neural responses to social interaction memories. This finding suggests that the STS contains high-level conceptual representations of social interactions, and that its reactivation underlies our ability to remember others' interactions. Copyright belongs to the original authors. Visit the link for more info. Podcast created by PaperPlayer
Neural Implant podcast - the people behind Brain-Machine Interface revolutions
E.J. Chichilnisky is a Professor of Neurosurgery and Ophthalmology at Stanford University. His research has focused on understanding the spatiotemporal patterns of electrical activity in the retina that convey visual information to the brain, and their origins in retinal circuitry, using large-scale multi-electrode recordings. His ongoing work now focuses on using basic science knowledge along with electrical stimulation to develop a novel high-fidelity artificial retina for treating incurable blindness. ***This podcast is sponsored by Ripple Neuro; check out their Neuroscience Research Tools here*** Top 3 Takeaways: "The basic 3 steps of the device are to 1) read the spontaneous activity that's there and use that to identify the different cells and different cell types; 2) pass current through each of the many electrodes (a thousand electrodes in the first device) to figure out which electrodes activate which cells and how much; and 3) finally translate that according to the 'scores' we have for the different cell types into patterns of activity. This is way more complicated than any neural interface that's been designed." "Plasticity in the brain is not there just because the brain wants to be plastic. It's there because it's evolutionarily valuable to have a plastic circuit. If you think about it from an electronics point of view, a plastic circuit is costly." "Our understanding of the retina is 30 years ahead of our understanding of the brain. That's not because retina neuroscientists are smarter – it's because the retina is more accessible and easier to understand. So our understanding in the retina always precedes our understanding of the cortex." 0:45 "Do you wanna introduce yourself?" 1:30 "So how did you get into this?" 3:45 "What is your proposal versus how things were being done before?" 9:45 What are the 20 types of retinal cells? 14:30 "What is the specific design of the device? And what's your role in this as well?"
17:30 "Are you guys working with other companies in the field too?" 19:30 Sponsorship by Ripple Neuro 20:00 Are cortical implants more or less sensitive to your 'orchestra' approach? 26:15 "What are some of the biggest challenges in this? Or, if you had unlimited funding, what would you throw that funding at first?" 35:00 "How are you interfacing with these different cell types?" 38:30 "That all sounds very tedious. And especially if you're talking about thousands of electrodes, is this done by hand, or is this the poor graduate student that has to do this? Or is there a way to automate it?" 43:15 "What are some timelines that we can expect with this technology?" 44:30 "What's the difference between you and Dr. Palanker?" 46:15 "Is there anything that we didn't talk about that you wanted to mention?"
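The three calibration steps quoted in the takeaways lend themselves to a small linear-algebra sketch: once step 2 has measured how strongly each electrode activates each cell, step 3 amounts to choosing electrode currents whose combined effect approximates a desired pattern of per-cell activity. Everything below (the matrix values, the least-squares translation, the omission of step 1's spike sorting) is an invented simplification for illustration, not the device's actual algorithm:

```python
import numpy as np

# Step 2's measured outcome (values invented): activation[e, c] is how
# strongly current through electrode e activates cell c.
activation = np.array([[0.9, 0.1],
                       [0.2, 0.8],
                       [0.5, 0.5]])   # 3 electrodes x 2 cells

# Step 3: translate a desired per-cell activity pattern into electrode
# currents whose combined (assumed linear) effect best matches it.
desired = np.array([1.0, 0.3])        # target drive for cell 0 and cell 1
currents, *_ = np.linalg.lstsq(activation.T, desired, rcond=None)

predicted = activation.T @ currents
print(predicted)   # matches `desired` here: 2 constraints, 3 free currents
```

The real problem is far harder (thousands of electrodes, nonlinear and overlapping activation, per-cell-type "scores"), which is the point of the "more complicated than any neural interface" quote.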
This is Cognitive Revolution, my show about the personal side of the intellectual journey. Each week, I interview an eminent scientist, writer, or academic about the experiences that shaped their ideas. The show is available wherever you listen to podcasts. This is the second part of my conversation with Randy Gallistel. In the first part, we talk about his ideas around why the neuroscience of memory is completely inadequate to the task of understanding… memory. In this episode, we get into the backstory on those ideas, as well as Randy's major influences. We talk about the influence of foundational experiments by Tony Deutsch, why neuroscientists "squirm" when you bring up the problem of representation, which of Randy's research projects didn't work out the way he thought, how his academic mentor introduced him to his future wife, and what a good theory should not look like. Randy's Three Books: The Eighth Day of Creation, Horace Freeland Judson; The Logic of Life, Francois Jacob; Spikes: Exploring the Neural Code, Rieke et al. Honorary mention: Endless Forms Most Beautiful, Sean B. Carroll. Books we talked about: Randy's Memory and the Computational Brain; The Structural Basis of Behavior, J. Anthony Deutsch. Papers we talked about: Randy's recent paper on the Physical Basis of Memory. Like this episode? Here's another one to check out: I'd love to know what you thought of this episode! Just reply to this email or send a note directly to my inbox. Feel free to tweet me @CodyKommers. You can also leave a rating for the show on iTunes (or another platform). This is super helpful, as high ratings are one of the biggest factors platforms look at in their recommender system algorithms. The better the ratings, the more they present the show to new potential listeners. Also: If you'd like to unsubscribe from these weekly podcast emails, you can do so while still remaining on the email list that features my weekly writing. Thanks for following my work! This is a public episode.
If you’d like to discuss this with other subscribers or get access to bonus episodes, visit codykommers.substack.com/subscribe
This is Cognitive Revolution, my show about the personal side of the intellectual journey. Each week, I interview an eminent scientist, writer, or academic about the experiences that shaped their ideas. The show is available wherever you listen to podcasts. Randy Gallistel has made a career (at least partly) out of making the neuroscientific community angry. He's made broad, sweeping claims that cut to the core of the neuroscience of learning and memory. The problem with his claims is not that they're broad and sweeping. The problem is that they might be right. In particular, he's taking aim at neuroscience for not grappling with the symbolic nature of cognition. The neuroscience of memory, he argues, doesn't even acknowledge the basic existence of "facts." That is, there's no neural theory of what a fact is, or how the brain might store it. In this episode, we talk about Randy's claims in this space of topics. This was the second half of a much longer conversation. In a follow-up episode, I'll publish the first half, which goes into the backstory of how Randy developed these ideas and the major figures who influenced him. Randy's Three Books: The Eighth Day of Creation, Horace Freeland Judson; The Logic of Life, Francois Jacob; Spikes: Exploring the Neural Code, Rieke et al. Honorary mention: Endless Forms Most Beautiful, Sean B. Carroll. Books we talked about: Randy's Memory and the Computational Brain; The Structural Basis of Behavior, J. Anthony Deutsch. Papers we talked about: Randy's recent paper on the Physical Basis of Memory. Like this episode? Here's another one to check out: I'd love to know what you thought of this episode! Just reply to this email or send a note directly to my inbox. Feel free to tweet me @CodyKommers. You can also leave a rating for the show on iTunes (or another platform). This is super helpful, as high ratings are one of the biggest factors platforms look at in their recommender system algorithms.
The better the ratings, the more they present the show to new potential listeners. Also: If you'd like to unsubscribe from these weekly podcast emails, you can do so while still remaining on the email list that features my weekly writing. Thanks for following my work! This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit codykommers.substack.com/subscribe
Episode 4 – A Glimpse into the Neural Code: Theoretical Neuroscience with Dr. Larry Abbott Our guest is Dr. Larry Abbott, a pioneer in the field of theoretical neuroscience and also the co-director of the Center of Theoretical Neuroscience at Columbia University. We enjoyed learning about theoretical neuroscience as it relates to the broader neuroscience […]
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.11.05.369827v1?rss=1 Authors: Sokoloski, S., Aschner, A., Coen-Cagli, R. Abstract: The activity of a neural population encodes information about the stimulus that caused it, and decoding population activity reveals how neural circuits process that information. Correlations between neurons strongly impact both encoding and decoding, yet we still lack models that simultaneously capture stimulus encoding by large populations of correlated neurons and allow for accurate decoding of stimulus information, thus limiting our quantitative understanding of the neural code. To address this, we propose a class of models of large-scale population activity based on the theory of exponential family distributions. We apply our models to macaque primary visual cortex (V1) recordings, and show they capture a wide range of response statistics, facilitate accurate Bayesian decoding, and provide interpretable representations of fundamental properties of the neural code. Ultimately, our framework could allow researchers to quantitatively validate predictions of theories of neural coding against both large-scale response recordings and cognitive performance. Copyright belongs to the original authors. Visit the link for more info.
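A stripped-down cousin of the Bayesian decoding described above: if each neuron is modeled as an independent Poisson unit with stimulus-dependent mean counts, the maximum a posteriori stimulus under a flat prior is the one maximizing the Poisson log-likelihood. This ignores exactly the correlations the paper's exponential-family models are built to capture, so it is only a baseline sketch with invented tuning curves:

```python
import numpy as np

# Invented tuning curves: mean spike count of each neuron under each of
# two stimuli, treated as independent Poisson units.
tuning = np.array([[8.0, 2.0],   # neuron 0 prefers stimulus 0
                   [2.0, 8.0],   # neuron 1 prefers stimulus 1
                   [5.0, 5.0]])  # neuron 2 is uninformative

def decode(counts, tuning):
    """MAP stimulus under independent Poisson likelihoods and a flat prior.

    log P(counts | s) = sum_i counts_i * log(rate_{i,s}) - rate_{i,s} + const
    """
    log_like = counts @ np.log(tuning) - tuning.sum(axis=0)
    return int(np.argmax(log_like))

print(decode(np.array([9, 1, 4]), tuning))  # 0: counts look like stimulus 0
print(decode(np.array([1, 9, 6]), tuning))  # 1: counts look like stimulus 1
```

Replacing the independent-Poisson likelihood with a richer exponential-family model of the joint response is, roughly, the upgrade the paper proposes.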
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.11.01.363861v1?rss=1 Authors: Zhao, Z., Zung, J. L., Kriete, A. L., Iqbal, A., Younger, M. A., Matthews, B. J., Merhof, D., Thiberge, S., Strauch, M., McBride, C. S. Abstract: A globally invasive form of the mosquito Aedes aegypti specializes in biting humans, making it an efficient vector of dengue, yellow fever, Zika, and chikungunya viruses. Host-seeking females strongly prefer human odour over the odour of non-human animals, but exactly how they distinguish the two is not known. Vertebrate odours are complex blends of volatile chemicals with many shared components, making discrimination an interesting sensory coding challenge. Here we show that human and animal odour blends evoke activity in unique combinations of olfactory glomeruli within the Aedes aegypti antennal lobe. Human blends consistently activate a 'universal' glomerulus, which is equally responsive to diverse animal and nectar-related blends, and a more selective 'human-sensitive' glomerulus. This dual signal robustly distinguishes humans from animals across concentrations, individual humans, and diverse animal species. Remarkably, the human-sensitive glomerulus is narrowly tuned to the long-chain aldehydes decanal and undecanal, which we show are consistently enriched in (though not specific to) human odour and which likely originate from unique human skin lipids. We propose a model of host-odour coding wherein normalization of activity in the human-sensitive glomerulus by that in the broadly-tuned universal glomerulus generates a robust discriminatory signal of the relative concentration of long-chain aldehydes in a host odour blend. Our work demonstrates how animal brains may distil complex odour stimuli of innate biological relevance into simple neural codes and reveals novel targets for the design of next-generation mosquito-control strategies. Copyright belongs to the original authors. Visit the link for more info.
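The proposed normalization model can be written as a one-line readout: divide the activity of the human-sensitive glomerulus by that of the broadly tuned universal glomerulus, yielding a signal that tracks the relative (not absolute) concentration of long-chain aldehydes. A sketch with invented activity values:

```python
def host_signal(human_sensitive, universal, eps=1e-9):
    """Normalized readout: human-sensitive glomerulus activity divided by
    the broadly tuned 'universal' glomerulus activity (toy values only)."""
    return human_sensitive / (universal + eps)

# Doubling overall odour concentration scales both glomeruli together,
# so the normalized signal barely moves while raw activity doubles.
low = host_signal(human_sensitive=0.6, universal=1.2)
high = host_signal(human_sensitive=1.2, universal=2.4)
assert abs(low - high) < 1e-6

# An animal blend drives the universal glomerulus similarly but the
# human-sensitive one only weakly, so the ratio separates the two hosts.
animal = host_signal(human_sensitive=0.1, universal=1.2)
print(low, animal)   # the human blend yields the larger normalized signal
```

This concentration invariance is the point of the normalization step: the ratio stays informative even as overall odour intensity varies.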
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.06.09.142695v1?rss=1 Authors: Salvan, P., Lazari, A., Vidaurre, D., Mandino, F., Johansen-Berg, H., Grandjean, J. Abstract: An important question in neuroscience is how local activity can be flexibly and selectively routed across the brain network. A proposed mechanism to flexibly route information is frequency division multiplexing: selective readout can be achieved by segregating the signal into non-overlapping frequency bands. Here, in wild-type mice and in a transgenic model (3xTgAD) of Alzheimer's Disease (AD), we use optogenetic activation of the entorhinal cortex, concurrent whole-brain fMRI, and hidden Markov modeling. We demonstrate how inducing neuronal spiking with different theta frequencies causes spatially distinct states of brain network dynamics to emerge and to preferentially respond to one frequency, showing how selective information streams can arise from a single neuronal source of activity. This theta modulation mechanism, however, is impaired in the AD model. This work demonstrates that neuronal multiplexing is a sufficient mechanism to enable flexible brain network communication, and provides insight into the aberrant mechanisms underlying cognitive decline. Copyright belongs to original authors. Visit the link for more info
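Frequency division multiplexing itself is a standard signal-processing idea, and a minimal sketch makes the "selective readout" step concrete. Here two hypothetical theta-band streams (6 Hz and 10 Hz, chosen for illustration only) share one signal, and a downstream reader recovers its own stream by band-limiting its input:

```python
import numpy as np

fs = 1000                                  # sample rate (Hz)
t = np.arange(0, 2, 1 / fs)
# Two information streams carried on non-overlapping theta frequencies.
stream_a = np.sin(2 * np.pi * 6 * t)
stream_b = np.sin(2 * np.pi * 10 * t)
mixed = stream_a + stream_b                # one shared channel

def bandpass(x, lo, hi):
    """Crude FFT band mask: keep only components between lo and hi Hz."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n=x.size)

# A reader tuned to 4-8 Hz recovers stream A and ignores stream B.
recovered_a = bandpass(mixed, 4, 8)
err = np.max(np.abs(recovered_a - stream_a))
print(err)
```

The paper's point is that brain network states appear to implement something like this readout for entorhinal theta input, and that the mechanism degrades in the AD model.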
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.06.03.131870v1?rss=1 Authors: Dombeck, D. A., Climer, J. R. Abstract: Information theoretic metrics have proven highly useful to quantify the relationship between behaviorally relevant parameters and neuronal activity with relatively few assumptions. However, such metrics are typically applied to action potential recordings and were not designed for the slow timescales and variable amplitudes typical of functional fluorescence recordings (e.g. calcium imaging). Therefore, the power of information theoretic metrics has yet to be fully exploited by the neuroscience community due to a lack of understanding of how to apply and interpret the metrics with such fluorescence traces. Here, we used computational methods to create mock action potential traces with known amounts of information and from them generated fluorescence traces to examine the ability of different information metrics to recover the known information values. We provide guidelines for the use of information metrics when applied to functional fluorescence and demonstrate their appropriate application to GCaMP6f population recordings from hippocampal neurons imaged during virtual navigation. Copyright belongs to original authors. Visit the link for more info
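The forward model the abstract describes, turning a known spike train into a mock fluorescence trace, is commonly approximated by convolving spikes with a slow indicator kernel. A minimal sketch with assumed parameters (30 Hz frames, an exponential kernel with a roughly GCaMP6f-like 0.4 s decay, Gaussian imaging noise), not the authors' exact simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

dt = 0.033                                   # ~30 Hz imaging frame rate
t = np.arange(0, 10, dt)
spikes = rng.poisson(0.2, size=t.size)       # mock spike counts per frame

# Convolve with an exponential indicator kernel (decay tau is an assumption).
tau = 0.4
kernel = np.exp(-np.arange(0, 2, dt) / tau)
fluorescence = np.convolve(spikes, kernel)[: t.size]
fluorescence += rng.normal(0, 0.05, size=t.size)   # additive imaging noise

# The slow kernel smears fast spiking structure across many frames, which is
# why information metrics calibrated on spike trains can mislead on dF/F.
print(fluorescence.shape)
```

With ground-truth spikes in hand, one can then compare an information metric computed on `spikes` against the same metric computed on `fluorescence`, which is the comparison the paper systematizes.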
The Naked Scientists strip down the brain in Spain - attending the Federation of European Neuroscience Societies conference in Barcelona. We find out how your brain computes information, ask if watching worms can tell us about human social interaction, and we explore how we make up our minds when faced with life's choices. Like this podcast? Please help us by supporting the Naked Scientists
It will be a long time before machines can be "more human than human," as scientists are just starting to decode what happens inside our brains as we recognize a spoken word. Christie Nicholson reports.
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.04.12.038117v1?rss=1 Authors: Katti, H., Arun, S. Abstract: Reading distorted letters is easy for us but so challenging for machine vision that it is used on websites as CAPTCHAs (Completely Automated Public Turing test to tell Computers and Humans Apart). How does our brain solve this problem? One solution is to have neurons that are invariant to letter distortions but selective for letter combinations. Another is for neurons to separately encode letter distortions and combinations. Here, we provide evidence for the latter using neural recordings in the monkey inferior temporal cortex. Neurons encoded letter distortions as a product of letter and distortion tuning, and letter combinations as a sum of letters. These rules were sufficient for perfect CAPTCHA decoding, and were present in neural networks trained for letter recognition. Thus, a separable neural code enables efficient reading. Copyright belongs to original authors. Visit the link for more info
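The "product of letter and distortion tuning" rule makes decoding straightforward, because every (letter, distortion) pair predicts a distinct population template. A toy sketch with hypothetical random tuning curves (not the recorded IT data), showing that a noiseless separable code can be decoded exactly by template matching:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical separable code: each neuron's response to a distorted letter
# is the product of its letter tuning and its distortion tuning.
n_neurons, n_letters, n_distortions = 40, 5, 4
letter_tuning = rng.uniform(0.1, 1.0, (n_neurons, n_letters))
distortion_tuning = rng.uniform(0.1, 1.0, (n_neurons, n_distortions))

def population_response(letter, distortion):
    return letter_tuning[:, letter] * distortion_tuning[:, distortion]

def decode(resp):
    """Return the (letter, distortion) pair whose template best matches resp."""
    best, best_err = None, np.inf
    for l in range(n_letters):
        for d in range(n_distortions):
            err = np.sum((resp - population_response(l, d)) ** 2)
            if err < best_err:
                best, best_err = (l, d), err
    return best

print(decode(population_response(3, 1)))  # → (3, 1)
```

In the paper the analogous template matching on real IT responses was sufficient for perfect CAPTCHA decoding; this sketch only illustrates why separability makes that possible.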