Podcasts about Neural computation

  • 29 podcasts
  • 33 episodes
  • 1h 1m average duration
  • Infrequent episodes
  • Latest episode: Mar 5, 2025

POPULARITY (chart, 2017-2024)



Latest podcast episodes about Neural computation

Lepszy Klimat
#122 | What does pornography do to the brain? Is there better and worse porn, and how should we talk about it?

Lepszy Klimat

Play Episode Listen Later Mar 5, 2025 58:42


Today, an average ten-year-old can see more forms of sex and nudity in a single hour than all of his great-grandparents put together saw in their entire lives. This unprecedented availability of pornography creates ethical dilemmas, but they are not the only thing I discussed with my guest, Prof. Mateusz Gola. Prof. Mateusz Gola is a psychotherapist and neuroscientist at the Institute of Psychology of the Polish Academy of Sciences and the Institute for Neural Computation at the University of California, San Diego. He specializes in behavioral addictions and is the author of over 130 scientific publications and the popular-science book "Gdy porno przestaje być sexy" ("When Porn Stops Being Sexy"). In this episode you will learn, among other things: Why do we watch porn, and why are more and more women using it? How does it affect relationships? How do you talk about pornography with a partner or with children? What changes in the brain does problematic porn use cause? Is there better and worse porn? More information: details of Prof. Mateusz Gola's current studies, which are open to participants, can be found on his website, and you can read more about the book on the publisher's website. Do you like the topics I cover in the podcast? You can find more here: Instagram: http://bit.ly/3Vene60 YouTube: http://bit.ly/3iddUR7 TikTok: http://bit.ly/3gDdaob Production: Karolina Deling-Jóźwik (editing), Irena Suska (sound). P & C Paulina Górska | Varsovia Lab.

BJKS Podcast
104. James Shine: Integrating neuroscience with fMRI, collaboration, and the importance of dumb questions

BJKS Podcast

Play Episode Listen Later Oct 25, 2024 108:03 Transcription Available


James (Mac) Shine is a PI and fellow at the University of Sydney. We talk about his background in sports, using fMRI to integrate various parts of neuroscience, collaboration, and much more. BJKS Podcast is a podcast about neuroscience, psychology, and anything vaguely related, hosted by Benjamin James Kuper-Smith. Support the show: https://geni.us/bjks-patreon

Timestamps
0:00:00: Mac's sporting background
0:07:46: Overview of Mac's review in Nature (w/ Emily Finn and Russell Poldrack)
0:14:03: The role of great editors in improving scientists and their work
0:32:53: Connecting different levels of description
0:40:07: Integration and specialisation
0:48:49: You can scan any animal with fMRI - but they're usually anaesthetised
0:54:13: The transfer from human fMRI to animal electrophysiology
1:01:53: N=1 studies and layer-fMRI in clinical neuroscience
1:16:17: Collaboration and building a multidisciplinary lab
1:26:52: The magic formula in science: annoyance, excitement, and a constructive mindset
1:34:51: Writing grants as a test to oneself, and the art of reframing
1:41:52: A book or paper more people should read
1:43:37: Something Mac wishes he'd learnt sooner
1:45:43: Advice for PhD students/postdocs

Podcast links
Website: https://geni.us/bjks-pod
Twitter: https://geni.us/bjks-pod-twt

Mac's links
Website: https://geni.us/shine-web
Google Scholar: https://geni.us/shine-scholar
Twitter: https://geni.us/shine-twt

Ben's links
Website: https://geni.us/bjks-web
Google Scholar: https://geni.us/bjks-scholar
Twitter: https://geni.us/bjks-twt

References and links
OHBM interview: https://www.youtube.com/watch?v=ucDj_94ovaU
Boyden, ... & Deisseroth (2005). Millisecond-timescale, genetically targeted optical control of neural activity. Nature Neuroscience.
Finn, Poldrack & Shine (2023). Functional neuroimaging as a catalyst for integrated neuroscience. Nature.
Friston, ... (2017). Active inference: a process theory. Neural Computation.
Munn, ... Larkum & Shine (2023). A thalamocortical substrate for integrated information via critical synchronous bursting. PNAS.
Newbold, ... & Dosenbach (2020). Plasticity and spontaneous activity pulses in disused human brain circuits. Neuron.
Pezzulo & Cisek (2016). Navigating the affordance landscape: feedback control as a process model of behavior and cognition. TiCS.
Poldrack, ... (2015). Long-term neural and physiological phenotyping of a single human. Nature Communications.
Rao & Ballard (1999). Predictive coding in the visual cortex: a functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience.
Shine, ... (2011). Visual misperceptions and hallucinations in Parkinson's disease: dysfunction of attentional control networks? Movement Disorders.
Shine, ... & Poldrack (2016). The dynamics of functional brain networks: integrated network states during cognitive task performance. Neuron.
Shine, ... & Poldrack (2016). Temporal metastates are associated with differential patterns of time-resolved connectivity, network topology, and attention. PNAS.
Shine & Poldrack (2018). Principles of dynamic network reconfiguration across diverse brain states. NeuroImage.

Futureproof with Jonathan McCrea
What can we learn from whale ear wax?

Futureproof with Jonathan McCrea

Play Episode Listen Later Jul 14, 2024 33:07


Stephen J. Trumble - Associate Professor in the Department of Biology at Baylor University & Affiliate Professor in the Institute of Marine Science at the University of Alaska, Fairbanks @stephenjtrumble
Michael Brecht - Professor for 'Systems Neurobiology and Neural Computation' at the Bernstein Center for Computational Neuroscience at Humboldt-University, Berlin @bccn_berlin @HumboldtUni @NNCN_Germany

Machine Learning Street Talk
Autopoietic Enactivism and the Free Energy Principle - Prof. Friston, Prof. Buckley, Dr. Ramstead

Machine Learning Street Talk

Play Episode Listen Later Sep 5, 2023 94:46


We explore connections between the FEP and enactivism, including tensions raised in a paper critiquing the FEP from an enactivist perspective. Dr. Maxwell Ramstead provides background on enactivism emerging from autopoiesis, with a focus on embodied cognition and a rejection of information-processing/computational views of mind. Chris shares his journey from robotics into the FEP, starting as a skeptic but becoming convinced it's the right framework. He notes there are both "high road" and "low road" versions, ranging from embodied to more radically anti-representational stances. He doesn't see a definitive fork between dynamical systems and information theory as the source of conflict; rather, the notion of operational closure in enactivism seems to be the main sticking point. The group explores definitional issues around structure/organization, boundaries, and operational closure. Maxwell argues the generative model in the FEP captures organizational dependencies akin to operational closure, while the Markov blanket formalism models structural interfaces. We discuss the concept of goals in cognitive systems - Chris advocates an intentional-stance perspective, using notions of goals/intentions if they help explain system dynamics; goals emerge from beliefs about dynamical trajectories. Prof. Friston provides an elegant explanation of how goal-directed behavior naturally falls out of the FEP mathematics in a particular "goldilocks" regime of system scale/dynamics. The conversation explores the idea that many systems simply act "as if" they have goals or models, without necessarily possessing explicit representations, which helps resolve tensions between enactivist and computational perspectives. Throughout the dialogue, Maxwell presses philosophical points about the FEP abolishing what he perceives as false dichotomies in cognitive science, such as internalism/externalism, and he is critical of enactivists' commitment to bright-line divides between subject areas.

Prof. Karl Friston - Inventor of the free energy principle: https://scholar.google.com/citations?user=q_4u0aoAAAAJ
Prof. Chris Buckley - Professor of Neural Computation at Sussex University: https://scholar.google.co.uk/citations?user=nWuZ0XcAAAAJ&hl=en
Dr. Maxwell Ramstead - Director of Research at VERSES: https://scholar.google.ca/citations?user=ILpGOMkAAAAJ&hl=fr

We address the critique in this paper: Laying down a forking path: Tensions between enaction and the free energy principle (Ezequiel A. Di Paolo, Evan Thompson, Randall D. Beer): https://philosophymindscience.org/index.php/phimisci/article/download/9187/8975

Other refs: Multiscale integration: beyond internalism and externalism (Maxwell J. D. Ramstead): https://pubmed.ncbi.nlm.nih.gov/33627890/

MLST panel: Dr. Tim Scarfe and Dr. Keith Duggar

TOC (auto generated):
0:00 - Introduction
0:41 - Defining enactivism and its variants
6:58 - The source of the conflict between dynamical systems and information theory
8:56 - Operational closure in enactivism
10:03 - Goals and intentions
12:35 - The link between dynamical systems and information theory
15:02 - Path integrals and non-equilibrium dynamics
18:38 - Operational closure defined
21:52 - Structure vs. organization in enactivism
24:24 - Markov blankets as interfaces
28:48 - Operational closure in FEP
30:28 - Structure and organization again
31:08 - Dynamics vs. information theory
33:55 - Goals and intentions emerge in the FEP mathematics
36:58 - The Good Regulator Theorem
49:30 - Enactivism and its relation to ecological psychology
52:00 - Goals, intentions and beliefs
55:21 - Boundaries and meaning
58:55 - Enactivism's rejection of information theory
1:02:08 - Beliefs vs goals
1:05:06 - Ecological psychology and FEP
1:08:41 - The Good Regulator Theorem
1:18:38 - How goal-directed behavior emerges
1:23:13 - Ontological vs metaphysical boundaries
1:25:20 - Boundaries as maps
1:31:08 - Connections to the maximum entropy principle
1:33:45 - Relations to quantum and relational physics
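For readers who want a concrete handle on the free energy mathematics mentioned above, here is a minimal numerical sketch (my own toy example with invented numbers, not anything from the episode): for a two-state, two-observation generative model, the variational free energy F(q) = KL[q(s) || p(s)] - E_q[ln p(o|s)] is smallest when the approximate posterior q equals the exact Bayesian posterior, which is the sense in which perception-as-inference falls out of minimising F.

```python
import numpy as np

# Toy generative model: two hidden states, two possible observations.
A = np.array([[0.9, 0.2],     # p(o | s): likelihood, columns index hidden states
              [0.1, 0.8]])
prior = np.array([0.5, 0.5])  # p(s): prior beliefs over hidden states

def free_energy(q, o_idx):
    """Variational free energy F(q) = KL[q || p(s)] - E_q[ln p(o | s)]."""
    complexity = np.sum(q * np.log(q / prior))   # KL divergence from the prior
    accuracy = np.sum(q * np.log(A[o_idx]))      # expected log-likelihood under q
    return complexity - accuracy

obs = 0  # observed outcome index
# The exact posterior minimises F (and F then equals -ln p(o));
# a poorly calibrated belief gives a strictly higher F.
posterior = A[obs] * prior / np.sum(A[obs] * prior)
print(free_energy(posterior, obs))              # ~0.598 = -ln(0.55)
print(free_energy(np.array([0.5, 0.5]), obs))   # ~0.857, higher
```

Active inference extends the same quantity over expected future observations with prior preferences, which is roughly where the goal-directed behaviour discussed by Friston enters.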

PaperPlayer biorxiv neuroscience
Neural computation through sensorimotor dynamics for predictive sensorimotor control

PaperPlayer biorxiv neuroscience

Play Episode Listen Later Apr 6, 2023


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.04.06.535795v1?rss=1 Authors: Zhang, Y., Chen, Y., Wang, T., Cui, H. Abstract: Although recent studies have shown that activity of the motor cortex conveys substantial information regarding sensory inputs in addition to motor outputs, it is unclear how sensory inflows and motor outflows interplay to issue motor commands. Here, we recorded population neural activity in the motor cortex via microelectrode arrays while monkeys performed flexible manual interception of moving targets, in a task that required predictive sensorimotor control. For such a dynamic sensory-motor task, the activity of most neurons in the motor cortex not only encoded upcoming movements, but was also modulated by ongoing target motion. Dimensionality reduction at movement onset demonstrated a latent orbital neural geometry with neural states clustered by reach directions, and tilted in ellipses corresponding to target speed. Such neural geometry also emerged in a recurrent neural network (RNN) with task-specific input-output mapping. We propose that neural computation through sensorimotor dynamics is based on neuronal mixed selectivity, and acts like a state evolution. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
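As a rough illustration of the dimensionality-reduction step described in the abstract (a hypothetical toy with simulated cosine-tuned neurons, not the authors' code or data), projecting trial-wise population activity onto its first two principal components yields low-dimensional neural states that cluster by reach direction, the kind of latent geometry the paper reports:

```python
import numpy as np

# Simulate motor-cortex-like population activity at movement onset:
# each neuron is cosine-tuned to reach direction, plus trial noise.
rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 120
reach_direction = rng.uniform(0, 2 * np.pi, n_trials)       # one reach angle per trial
tuning_phase = rng.uniform(0, 2 * np.pi, n_neurons)          # each neuron's preferred direction
rates = np.cos(reach_direction[:, None] - tuning_phase[None, :])
rates += 0.3 * rng.standard_normal((n_trials, n_neurons))    # trial-to-trial noise

# PCA via SVD of the mean-centred trials-by-neurons matrix.
X = rates - rates.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
latent = X @ Vt[:2].T    # 2-D neural state per trial; states cluster by reach direction
print(latent.shape)      # (200, 2)
```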

Max Planck Florida’s Neurotransmissions Podcast
75. Neural Computation in Flies with Dr. Lukas Groschner

Max Planck Florida’s Neurotransmissions Podcast

Play Episode Listen Later Jan 17, 2023 30:40


In this episode, host Joe chats with Dr. Lukas Groschner, a Max Planck NeuroMEETS invited speaker, about his curiosity-driven research into the molecules and mechanisms that mediate information processing in flies. Don't miss Dr. Groschner's story of being inspired after listening to a past episode of the podcast. For more information about the NeuroMEETS seminar series, visit https://www.mpfi.org/science/max-planck-neuromeets Do you enjoy listening to the podcast? Feel free to like this episode and follow us to hear more. Website: neuropodcast.org

Mind & Matter
Neural Computation, Neuromodulators, Serotonin, Psychedelics, Subjective Experience | Zach Mainen | #98

Mind & Matter

Play Episode Play 60 sec Highlight Listen Later Nov 28, 2022 144:26 Transcription Available


Nick talks to Dr. Zach Mainen, a computational neuroscientist and director of the Champalimaud Neuroscience Programme at the Champalimaud Centre for the Unknown in Lisbon, Portugal. His lab studies the neuroscience of decision making, neuromodulation, and behavior using both experimental and computational approaches. They discuss: what kinds of things animal brains are actually computing, and to what extent we can think about brains like computers; the difference between neurotransmitters and neuromodulators in the brain, including what specific neuromodulators like dopamine and serotonin do; and the research his lab is starting to do on serotonergic psychedelics, including how he thinks about whether the subjective effects of psychedelics are likely to be important for their therapeutic and neuroplastic effects.

SUPPORT M&M: Sign up for the free weekly Mind & Matter newsletter: https://mindandmatter.substack.com/?sort=top
Learn how you can further support the podcast: https://mindandmatter.substack.com/p/how-to-support-mind-and-matter
Support the show

OHBM Neurosalience
S3E6: Changing your brain with real-time fMRI neurofeedback

OHBM Neurosalience

Play Episode Listen Later Nov 23, 2022 76:12


Real-time neurofeedback fMRI is a unique and powerful kind of fMRI involving real-time feedback of brain activity to the subject, with the goal of enhancing or suppressing activity or connectivity and ultimately changing behavior. Michal's work has taken real-time neurofeedback fMRI to the next level, embracing operant conditioning to alter measured fMRI network activity independent of the subject's awareness or conscious control. Here Peter and Michal discuss the various types of neurofeedback-based fMRI, focusing mostly on her implicit neurofeedback studies. They discuss the real-time fMRI feedback setup as well as the potential applications, both for understanding how the brain reprograms itself and for clinical use.

Today's Guest: Michal Ramot, Ph.D. is a senior scientist in the Department of Brain Sciences and the Roel C. Buck Career Development Chair at the Weizmann Institute of Science in Rehovot, Israel. She received her Bachelor of Science in Mathematics from the Hebrew University in Jerusalem in 2004 and went on to receive her PhD from the Interdisciplinary Centre for Neural Computation, working under the guidance of Rafi Malach and Leon Deouell. She carried out a postdoc at the Department of Neurobiology at the Weizmann Institute of Science, also under Dr. Malach, and then did a second postdoc under Dr. Alex Martin in the Laboratory of Brain and Cognition at the National Institute of Mental Health.

Episode producers: Anastasia Brovkin, Alfie Wearn

Brain Art
Artist: Joseph Salvo
Title: MRI Self Portrait
Author's description: "I've been inspired by the Woodland art style, that has been called 'x-ray art' for its depictions of subject's interiors. I sought to adapt this style for MRI images. The goal is to provide a glimpse of what lies beyond the surface, while maintaining respect for the subject."

Please send any feedback, guest suggestions, or ideas to ohbm.comcom@gmail.com
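To make the closed loop concrete, here is a toy sketch of the basic real-time structure (the ROI signals, window length, and reward mapping are all invented for illustration; this is not Dr. Ramot's pipeline): each new volume updates two region-of-interest time series, a sliding-window correlation is computed, and that value is mapped to a feedback signal that could be presented to the subject without revealing the strategy.

```python
import numpy as np

# Toy implicit connectivity-neurofeedback loop: reward increased correlation
# between two simulated ROI time series over a sliding window of recent volumes.
rng = np.random.default_rng(1)
window = 20          # number of most recent volumes (TRs) used for feedback
roi_a, roi_b = [], []

def feedback_score(a, b):
    """Map the windowed ROI correlation to a 0..1 reward signal."""
    r = np.corrcoef(a[-window:], b[-window:])[0, 1]
    return (r + 1) / 2   # rescale correlation from [-1, 1] to [0, 1]

reward = None
for tr in range(100):                        # each iteration = one new fMRI volume
    shared = rng.standard_normal()           # shared fluctuation drives correlation
    roi_a.append(shared + 0.5 * rng.standard_normal())
    roi_b.append(shared + 0.5 * rng.standard_normal())
    if len(roi_a) >= window:
        reward = feedback_score(np.array(roi_a), np.array(roi_b))
        # In a real experiment this value would be shown to the subject
        # (e.g., as the size of a visual cue) without explaining the strategy.
print(round(reward, 2))
```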

CaregiverDave.com
MIT Graduate, Alex Backer - 101 Clues to a Happy Life

CaregiverDave.com

Play Episode Listen Later Sep 13, 2022 55:00


A National Champion of Informatics, Alex holds a degree in Biology and Economics from MIT, and an M.S. and Ph.D. in Computation and Neural Systems and Biology from the California Institute of Technology, where he was awarded the Dean's Award for "great contributions and outstanding qualities of Leadership and Responsibility." He has held positions at McKinsey & Co.; the Center for Computation, Computers, Information and Mathematics of Sandia National Labs; and Caltech. Alex conceived QLess while standing in line. Prior to starting QLess, Alex was also the founder of a pioneer of search engine marketing technology that boosted the reach of search engine marketing campaigns by up to 580% while simultaneously reducing their cost per action by up to 78%; the invention factory, a semantic people search engine with machine vision which beat Google and every other search engine tested 3 to 1 or more in results relevance in a blind people-search comparison; and the first resume- or file-based search engine. Alex was appointed by the President to serve on the California Institute of Technology's Information Sciences and Technology Board of Advisors. In 2013, Alex was named the Gold Stevie Winner for IT Executive of the Year and the Silver Stevie Winner for Innovator of the Year by the International Business Awards. In 2011, Alex was honored as the keynote speaker at LA County's Tech Week, an honor previously accorded to a Nobel Prize winner, the CEO of Cisco, the CEO of Adobe, the Chairman of Deloitte, and an astronaut. In 2010, Alex was honored as one of "40 under 40" for the inaugural 40 Under 40 M&A Advisor Recognition Awards. Alex's research on neural coding and artificial intelligence has been published in the world's leading publications, such as Nature and Neural Computation.

Gutral Gada
Addictions: to substances and beyond. A conversation with dr hab. Mateusz Gola.

Gutral Gada

Play Episode Listen Later Jul 9, 2022 46:48


What can we become addicted to? We most commonly associate addiction with substances, such as alcohol or drugs. But what about behavioral addictions, that is, addictions involving particular behaviors and activities? Can sex, shopping, or work be addictive? In today's episode we talk about the processes behind addictions. The invitation to the podcast was accepted by an outstanding scientist, dr hab. Mateusz Gola, professor at the Institute of Psychology of the Polish Academy of Sciences and the Institute for Neural Computation at the University of California, San Diego, a psychologist, neurocognitive scientist, and cognitive-behavioral psychotherapist. This time you not only get to hear an episode packed with substance, you can also support scientific research yourself! You can be one of the first people to try the Nałogometr app for free and take part in a large project initiated by psychologists of the Polish Academy of Sciences to support people recovering from addictions. The nationwide addiction study in question (Ogólnopolskie Badanie Nałogów) is completely anonymous. The smart mobile app predicts the risk of addictive behaviors in real time and offers modern methods of psychological support. Seriously, I recommend taking part in the study, and feel free to send the link to anyone interested! xoxo, Gutral Gada
Editing: Eugeniusz Karlov
Instagram: @joanna_gutral
TikTok: @joanna_gutral
Patronite: https://patronite.pl/gutral
Cover photo: @opowiedziane_swiatlem
Link to take part in the study: https://badanienalogow.pl/?&utm_source=spotify&utm_medium=podcast&utm_campaign=jgutrobn

The Strategy Skills Podcast: Management Consulting | Strategy, Operations & Implementation | Critical Thinking

Welcome to Strategy Skills episode 241, an episode with scientist, inventor, writer, speaker, and entrepreneur Alex Bäcker. In this episode, Alex spoke about the main ingredients of a successful life and the key to happiness. He discussed why he wrote his book, 101 Clues to a Happy Life, and shared how having children exponentially changed his life and perspective. Alex also elaborated on the role of sunlight in the prevention and treatment of COVID-19 and the potential role of vitamin D. In this conversation, he shared valuable advice related to business, finding business partners and investors, and the practical steps that should be implemented to attain success and happiness. Alex Bäcker is the founder and CEO of QLess and co-founder at Drisit. He holds 11 patents and in 2021 was named among the top 100 MIT alumni in technology. His seminal papers on COVID and sunlight were picked up by the press around the world. Alex is a National Champion of Informatics and holds a degree in Biology and Economics from MIT, and an M.S. and Ph.D. in Computation and Neural Systems and Biology from the California Institute of Technology, where he was awarded the Dean's Award for "great contributions and outstanding qualities of Leadership and Responsibility." He has held positions at McKinsey & Co.; the Center for Computation, Computers, Information and Mathematics of Sandia National Labs; and Caltech. Alex devised the idea of QLess while standing in a line. Prior to starting QLess, Alex was also the founder of a pioneer of search engine marketing technology, the invention factory, which boosted the reach of search engine marketing campaigns by up to 580% while simultaneously reducing their cost per action by up to 78%; a semantic people search engine with machine vision, which beat Google and every other search engine tested 3 to 1 or more in results relevance in a blind people-search comparison; and the first resume- or file-based search engine. Alex was appointed by the President to serve on the California Institute of Technology's Information Sciences and Technology Board of Advisors. In 2013, Alex was named the Gold Stevie Winner for IT Executive of the Year and the Silver Stevie Winner for Innovator of the Year by the International Business Awards. In 2011, Alex was honored as the keynote speaker at LA County's Tech Week, an honor previously accorded to a Nobel Prize winner, the CEO of Cisco, the CEO of Adobe, the Chairman of Deloitte, and an astronaut. In 2010, Alex was honored as one of "40 under 40" for the inaugural 40 Under 40 M&A Advisor Recognition Awards. Alex's research on neural coding and artificial intelligence has been published in the world's leading publications, such as Nature and Neural Computation. Get Alex's book here: 101 Clues to a Happy Life. Enjoying this episode? Get access to sample advanced training episodes here: www.firmsconsulting.com/promo

Machine Learning Street Talk
#54 Gary Marcus and Luis Lamb - Neurosymbolic models

Machine Learning Street Talk

Play Episode Listen Later Jun 4, 2021 144:12


Professor Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. Gary said in his recent "next decade" paper that without us, or other creatures like us, the world would continue to exist, but it would not be described, distilled, or understood. Human lives are filled with abstraction and causal description. This is so powerful. François Chollet said the other week that intelligence is literally sensitivity to abstract analogies, and that is all there is to it. It's almost as if one of the most important features of intelligence is the ability to abstract knowledge; this drives the generalisation that allows you to mine previous experience to make sense of many future novel situations. Also joining us today is Professor Luis Lamb, Secretary of Innovation for Science and Technology of the State of Rio Grande do Sul, Brazil. His research interests are machine learning and reasoning, neuro-symbolic computing, logic in computation and artificial intelligence, cognitive and neural computation, and AI ethics and social computing. Luis released his new paper, Neurosymbolic AI: the third wave, at the end of last year. It beautifully articulated the key ingredients needed in the next generation of AI systems, integrating type 1 and type 2 approaches to AI, and it summarises the achievements of the last 20 years of research. We cover a lot of ground in today's show: the limitations of deep learning, Rich Sutton's "bitter lesson" and "reward is enough", and the semantic foundation required for us to build robust AI.

PaperPlayer biorxiv neuroscience
Synergistic neural computation is greater downstream of recurrent connectivity in organotypic cortical cultures

PaperPlayer biorxiv neuroscience

Play Episode Listen Later May 14, 2020


Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.05.12.091215v1?rss=1 Authors: Sherrill, S. P., Timme, N. M., Beggs, J. M., Newman, E. L. Abstract: Cortical information processing requires synergistic integration of input. Understanding the determinants of synergistic integration - a form of computation - in cortical circuits is therefore a critical step in understanding the functional principles underlying cortical information processing. We established previously that synergistic integration varies directly with the strength of feedforward connectivity. What relationship recurrent and feedback connectivity have with synergistic integration remains unknown. To address this, we analyzed the spiking activity of hundreds of well-isolated neurons in organotypic cultures of mouse somatosensory cortex, recorded using a high-density 512-channel microelectrode array. We asked how empirically observed synergistic integration, quantified through partial information decomposition, varied with local functional network structure. Toward that end, local functional network structure was categorized into motifs with varying recurrent and feedback connectivity. We found that synergistic integration was elevated in motifs with greater recurrent connectivity and was decreased in motifs with greater feedback connectivity. These results indicate that the directionality of local connectivity, beyond feedforward connections, has distinct influences on neural computation. Specifically, more upstream recurrence predicts greater downstream computation, but more feedback predicts lesser computation. Copyright belongs to the original authors. Visit the link for more info.
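For intuition about what "synergistic integration" means, consider a toy example (purely illustrative; the paper itself uses partial information decomposition on spiking data): if an output is the XOR of two independent binary inputs, each input alone carries zero information about the output, yet together they determine it completely, so all of the information is synergistic.

```python
import numpy as np

def mutual_information(joint):
    """I(A; B) in bits, computed from a joint probability table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])))

# Enumerate the four equally likely (x1, x2) pairs and y = x1 XOR x2.
p_x1_y = np.zeros((2, 2))   # joint of X1 and Y
p_x2_y = np.zeros((2, 2))   # joint of X2 and Y
p_x12_y = np.zeros((4, 2))  # joint of the pair (X1, X2) and Y
for x1 in (0, 1):
    for x2 in (0, 1):
        y = x1 ^ x2
        p_x1_y[x1, y] += 0.25
        p_x2_y[x2, y] += 0.25
        p_x12_y[2 * x1 + x2, y] += 0.25

print(mutual_information(p_x1_y))   # 0.0 bits: X1 alone says nothing about Y
print(mutual_information(p_x2_y))   # 0.0 bits: X2 alone says nothing about Y
print(mutual_information(p_x12_y))  # 1.0 bit: together they determine Y exactly
```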

Data Science at Home
Episode 57: Neural networks with infinite layers

Data Science at Home

Play Episode Listen Later Apr 23, 2019 16:19


How are differential equations related to neural networks? What are the benefits of re-thinking a neural network as a differential equation engine? In this episode we explain all this and we provide some material that is worth learning. Enjoy the show!

[Figure: Residual Block]

References
[1] K. He, et al., "Deep Residual Learning for Image Recognition", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[2] S. Hochreiter, et al., "Long short-term memory", Neural Computation 9(8), pages 1735-1780, 1997.
[3] Q. Liao, et al., "Bridging the gaps between residual learning, recurrent neural networks and visual cortex", arXiv preprint, arXiv:1604.03640, 2016.
[4] Y. Lu, et al., "Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations", Proceedings of the 35th International Conference on Machine Learning (ICML), Stockholm, Sweden, 2018.
[5] T. Q. Chen, et al., "Neural Ordinary Differential Equations", Advances in Neural Information Processing Systems 31, pages 6571-6583, 2018.
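The core idea of the episode can be written down in a few lines: a residual block computes h + f(h), which is one explicit Euler step of the ODE dh/dt = f(h) with unit step size, and letting the step shrink while the number of steps grows is the "infinite layers" view behind Neural ODEs [5]. The sketch below uses an assumed toy vector field f; it illustrates the correspondence and is not code from the episode.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4
W = 0.1 * rng.standard_normal((dim, dim))   # weights of a tiny one-layer residual branch

def f(h):
    return np.tanh(W @ h)                   # the residual branch, i.e. the ODE vector field

def residual_net(h, n_blocks):
    for _ in range(n_blocks):
        h = h + f(h)                         # one residual block = one Euler step, dt = 1
    return h

def euler_ode(h, t_end, dt):
    for _ in range(int(t_end / dt)):
        h = h + dt * f(h)                    # finer Euler integration of the same field
    return h

h0 = rng.standard_normal(dim)
print(residual_net(h0, n_blocks=5))          # 5 residual blocks
print(euler_ode(h0, t_end=5.0, dt=0.1))      # roughly the same trajectory, integrated finely
```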

Strefa Psyche Uniwersytetu SWPS
How not to get lost: how the brain creates maps of space - Dr. Rafał Czajkowski

Strefa Psyche Uniwersytetu SWPS

Play Episode Listen Later Nov 9, 2018 64:08


How does the brain create maps of space, and how do we keep from getting lost? About the speaker: Dr. Rafał Czajkowski works in the Laboratory of Animal Models at the Neurobiology Center of the Nencki Institute of Experimental Biology of the Polish Academy of Sciences in Warsaw. For three years he worked at the Centre for Neural Computation at the Norwegian University of Science and Technology in Trondheim, founded by May-Britt Moser and Edvard I. Moser, laureates of the 2014 Nobel Prize in Physiology or Medicine. His work focuses mainly on the mechanisms of spatial memory, in particular on how the hippocampus and cortical areas cooperate in its formation.

SSCS Chip Chat
Interview with Dr Gert Cauwenberghs

SSCS Chip Chat

Play Episode Listen Later May 10, 2018 40:47


Dr. Gert Cauwenberghs is Professor of Bioengineering and Co-Director of the Institute for Neural Computation at UC San Diego, La Jolla, CA. He received his Ph.D. in Electrical Engineering from Caltech in 1994 and his undergraduate degree in his native Brussels in 1988. He was previously Professor of Electrical and Computer Engineering at Johns Hopkins University and Visiting Professor of Brain and Cognitive Science at MIT. He co-founded Cognionics Inc. and chairs its Scientific Advisory Board. His research focuses on micropower biomedical instrumentation, neuron-silicon and brain-machine interfaces, neuromorphic engineering, and adaptive intelligent systems. He received the NSF CAREER Award in 1997, the ONR Young Investigator Award in 1999, and the Presidential Early Career Award for Scientists and Engineers in 2000. He was a Francqui Fellow of the Belgian American Educational Foundation, and is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) and the American Institute for Medical and Biological Engineering. He has served the IEEE in a variety of roles, including as Distinguished Lecturer of the IEEE Circuits and Systems Society, General Chair of the IEEE Biomedical Circuits and Systems Conference (BioCAS 2011, San Diego), Program Chair of the IEEE Engineering in Medicine and Biology Conference (EMBC 2012, San Diego), and Editor-in-Chief of the IEEE Transactions on Biomedical Circuits and Systems.
UCSD webpage: http://isn.ucsd.edu/
Cognionics: https://www.cognionics.net/
SSCS: https://sscs.ieee.org

Convergent Science Network Podcast
Interview Edvard Moser

Convergent Science Network Podcast

Play Episode Listen Later Apr 5, 2018 60:10


Edvard Moser (Kavli Institute and Centre for Neural Computation, Norway; Nobel Prize 2014) discusses the work of his group, his wife, and himself on the brain's computation of spatial memory.

Max Planck Florida’s Neurotransmissions Podcast
#21: The Neural Computation of Space, with Dr. Edvard Moser

Max Planck Florida’s Neurotransmissions Podcast

Play Episode Listen Later Nov 29, 2017 42:19


"Where are we and where are we going?" Our sense of where we are in the world is one of the most fundamental cognitive elements of our day-to-day lives, and the discovery of our brain's internal system for mapping our physical location is one of the landmark achievements of modern neuroscience. In this episode we sit down Dr. Edvard Moser, one of the three recipients of the 2014 Nobel Prize in Physiology or Medicine for his work discovering the neural computation of our brain's internal "GPS" system. We discuss the organization of this system, as well as many of the mysteries that remain unsolved in understanding the functional circuits involved in forming a neural representation of space.

Team Human
Ep. 52 William Softky and Criscillia Benford "Recalibrating For Trust"

Team Human

Play Episode Listen Later Aug 30, 2017 50:57


Consider a text message in which you are deciding where to meet for dinner with a friend. As you go back and forth in that stream of abbreviated communication, you suggest, "Let's eat Italian," to which your friend responds "Fine." The one-word text sends you into a state of alarm. Did they mean "fine" as in "great," or "fine" as in "Fine, you win, we can eat where you'd like"? Somewhere in this chain of communication, trust has been lost. Playing for Team Human today are William Softky and Criscillia Benford, a husband-and-wife team who are looking deep into the biophysical and mathematical underpinnings of establishing trust in communications. They've just published a paper titled "Sensory Metrics of Neuromechanical Trust" in the journal Neural Computation. In this conversation, Softky and Benford join Douglas Rushkoff to dive deep into experiences of digital disorientation and even screen addiction. What's really going on in our brains as we consume more and more digital stimuli? Have our nervous systems been recalibrated to these flattened, two-dimensional interactions? Are we "out of tune" when it comes to trust? Opening today's show, Rushkoff looks at the recent Mayweather vs. McGregor boxing match. Did this spectacle represent a surrogate Trump vs. Obama battle? And while that fight is over, the victor declared, what does it mean when governance itself becomes a prolonged spectacle? See acast.com/privacy for privacy and opt-out information.

Software engineering digest
No.70 - An introduction to AI deep learning technology, with examples

Software engineering digest

Play Episode Listen Later Jul 10, 2017


Hello. Today, continuing from the previous episode, I will give a brief introduction to deep learning in artificial intelligence, which has become a major topic over the last few years. This material is part of an article presented to the BIM society as 'Trends and Implementation Examples of AI Deep Learning Technology'. It introduces the main types of deep learning networks, such as CNNs, RNNs, LSTMs, and GANs, along with examples of applying CNNs to 3D scan data. No.70 podcast episode - An introduction to AI deep learning technology, with examples. More details on the topics covered can be found in the following references:

1. Daddy maker, Installing the latest version of TensorFlow and basic concepts
2. Daddy maker, TensorFlow MNIST deep learning and TensorBoard graph visualization
3. BIM principle, 2017, Facebook's open-source CNN-based deep learning translation technology CSSL
4. Deep learning Hello World - MNIST and CIFAR-10 database structure, and ImageNet
5. Konstantin Lackner, 2016, Composing a melody with long-short term memory (LSTM) Recurrent Neural Networks, Institute for Data Processing, Technische Universität München
6. Patrick Hebron, 2016, Unsupervised Learning, patrickhebron.com
7. Alec Radford, Luke Metz, Soumith Chintala, 2016, Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
8. Adam Santoro et al., 2017, A simple neural network module for relational reasoning, DeepMind
9. Microsoft, Going Deep: Convolutional Neural Networks
10. The Asimov Institute, 2016, The Neural Network Zoo
11. Adit Deshpande, 2016, The 9 Deep Learning Papers You Need To Know About CNN
12. Adrian Rosebrock, 2016, My Top 9 Favorite Python Deep Learning Libraries
13. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin, May 2017, A novel approach to neural machine translation, FAIR (Facebook AI Research)
14. Adit Deshpande, A Beginner's Guide To Understanding Convolutional Neural Networks, UCLA
15. Rob Verger, May 2017, Facebook created a faster, more accurate translation system using artificial intelligence, Popular Science
16. James Vincent, May 9, 2017, Facebook says its prototype translation technique is nine times faster than rivals, The Verge
17. Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin, May 2017, Convolutional Sequence to Sequence Learning, FAIR (Facebook AI Research)
18. Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas, 2016, PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
19. Sepp Hochreiter, Jürgen Schmidhuber, 1997, Long short-term memory, Neural Computation 9(8), pp. 1735-1780
20. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio, 2014, Generative Adversarial Networks
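As a concrete companion to the network types mentioned above, here is a minimal single-step LSTM cell in plain NumPy (an illustrative toy with random weights, not code from the episode or from reference 19), showing the gated update that lets the cell state carry information across long sequences.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell.
    Gates are stacked in the order [input, forget, output, candidate]."""
    z = W @ x + U @ h_prev + b           # pre-activations for all gates, shape (4*hidden,)
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    c = f * c_prev + i * np.tanh(g)      # cell state: gated mix of memory and new input
    h = o * np.tanh(c)                   # hidden state / output
    return h, c

hidden, inputs = 8, 3
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((4 * hidden, inputs))
U = 0.1 * rng.standard_normal((4 * hidden, hidden))
b = np.zeros(4 * hidden)

h = c = np.zeros(hidden)
for x in rng.standard_normal((5, inputs)):   # run a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (8,)
```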

Stanford Neurosciences Institute
Automatically Inferring Meso-scale Models of Neural Computation

Stanford Neurosciences Institute

Play Episode Listen Later Nov 16, 2016 48:41


Dr. Tom Dean, a research scientist at Google in Mountain View, discusses the idea of an intermediate or meso-scale computational theory that connects a molecular (micro-scale) account of neural function to a behavioral (macro-scale) account of animal cognition and environmental complexity.

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
Geoff Hinton, the Father of Deep Learning

Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)

Play Episode Listen Later Jun 20, 2016 36:16


In this interview, Geoff Hinton, Distinguished Emeritus Professor at the University of Toronto, discusses World Class IT Principle One, People, and Principle Two, Infrastructure. Some of the topics discussed include:
  • His day-to-day work as a professor at the University of Toronto and as a distinguished researcher at Google
  • His thoughts on the current progress of the development of true artificial intelligence
  • What inspired him to pursue his PhD in artificial intelligence when the topic was less prominent than it is today
  • The founding of the Neural Computation and Adaptive Perception (NCAP) program and how he brought these world-class thinkers together
  • The outcomes of his work at NCAP
  • His perspectives on non-profit work being a major player within the artificial intelligence space
  • How much crossover there is between his work at the University of Toronto and his work at Google DeepMind, the importance of it, and how contests like the recent AlphaGo match are important in the field of artificial intelligence
  • His perspectives on risk vs. opportunity within AI
  • Whether the government has a role to play in pending job losses due to artificial intelligence
  • His thoughts on the recent trend of skipping university to move straight into entrepreneurship

The Strong Women’s Club Women's Success Stories in Business and in Life
048: IWF: Connecting Women Around the World to Build Better Global Leadership

The Strong Women’s Club Women's Success Stories in Business and in Life

Play Episode Listen Later May 26, 2016 65:01


The International Women's Forum Cornerstone Conference was in Tel Aviv this week, May 2016. As a member of the media, I was able to listen to lectures, interview prominent women, soak up the positive energy that was buzzing throughout, and talk to the organizers and one of the sponsors. IWF understands the power of the group, the mentoring and teaching. One of its main missions is to cultivate young women leaders, as well as to exchange ideas, learn, inspire, and build leadership. In this episode you will hear: how women can be confident without apology; how we can be leaders without belittling; how we can brag about our accomplishments without ego; and how we can push limits and erase boundaries. Today's episode includes short interviews with:
  • Crosby Cromwell: Director of External Affairs and Partnerships at IWF
  • Dr. Mina Teicher: Professor of Mathematics and Neural Computation, ex-Chief Scientist of Israel
  • Dr. Cady Coleman: NASA Astronaut
  • Claudia Patton: Chief Talent Officer at Edelman
  • Asma Chaabi: Moroccan Parliamentarian, first Moroccan female mayor, artist
  • Mich Atagana: Head of Communications and Public Affairs for Google South Africa
  • Ziva Patir: Previously the Director General of the Standards Institution of Israel, currently co-CEO at Patir Consultants
The podcast is very different this week to its usual format. I look forward to hearing your feedback. What do you think? Let me know on Facebook or at edie@thestrongwomensclub.com. Have a great day and stay strong! Edie

Education Issues (Video)
STEaM Connect Ascend Conference 2014: Research Updates

Education Issues (Video)

Play Episode Listen Later Mar 28, 2014 37:52


From the inaugural STEaM Connect Ascend Conference 2014, M.A. Greenstein of the George Greenstein Institute moderates a panel featuring John Iversen of UC San Diego's Institute for Neural Computation presenting various research perspectives in the field of STEaM education. Series: "STEaM Connect Ascend Conference 2014" [Education] [Show ID: 28110]

Education Issues (Audio)
STEaM Connect Ascend Conference 2014: Research Updates

Education Issues (Audio)

Play Episode Listen Later Mar 28, 2014 37:52


From the inaugural STEaM Connect Ascend Conference 2014, M.A. Greenstein of the George Greenstein Institute moderates a panel featuring John Iversen of UC San Diego's Institute for Neural Computation presenting various research perspectives in the field of STEaM education. Series: "STEaM Connect Ascend Conference 2014" [Education] [Show ID: 28110]

STEAM Channel (Video)
STEaM Connect Ascend Conference 2014: Research Updates

STEAM Channel (Video)

Play Episode Listen Later Mar 28, 2014 37:52


From the inaugural STEaM Connect Ascend Conference 2014, M.A. Greenstein of the George Greenstein Institute moderates a panel featuring John Iversen of UC San Diego's Institute for Neural Computation presenting various research perspectives in the field of STEaM education. Series: "STEaM Connect Ascend Conference 2014" [Education] [Show ID: 28110]

STEAM Channel (Audio)
STEaM Connect Ascend Conference 2014: Research Updates

STEAM Channel (Audio)

Play Episode Listen Later Mar 28, 2014 37:52


From the inaugural STEaM Connect Ascend Conference 2014, M.A. Greenstein of the George Greenstein Institute moderates a panel featuring John Iversen of UC San Diego's Institute for Neural Computation presenting various research perspectives in the field of STEaM education. Series: "STEaM Connect Ascend Conference 2014" [Education] [Show ID: 28110]

TEDxCaltech 2013
Neural Computation (Jan. 18, 2013)

TEDxCaltech 2013

Play Episode Listen Later Apr 15, 2013


TEDxCaltech 2013 - HD
Neural Computation (Jan. 18, 2013)

TEDxCaltech 2013 - HD

Play Episode Listen Later Apr 15, 2013 16:31


TEDxCaltech 2013 - SD
Neural Computation (Jan. 18, 2013)

TEDxCaltech 2013 - SD

Play Episode Listen Later Apr 15, 2013


DRI Environment
Symposium M. Braungart (part 5)

DRI Environment

Play Episode Listen Later Apr 15, 2011 25:18