POPULARITY
Start | Artist | Song | Time | Album | Year
n/a | Pattern-Seeking Animals | Another Holy Grail | 12:17 | Friend Of All Creatures | 2025
0:13:50 | Mark Trueack | What We've Done | 8:54 | Save Us | 2025
0:23:57 | Random Option | Redemption | 13:44 | Random Option | 2025
12:39:45 | Fluctus Quadratum | Direlight | 12:45 | Laplacian | 2025
0:52:30 | Danefae | P.S. Far er død | 12:34 | Trøst | 2025
1:06:13 | Yesterdays | Seven | 11:35 | […]
Start | Artist | Song | Time | Album | Year
0:00:47 | Corde Oblique | Gnossienne no. 1 | 3:26 | Cries and Whispers | 2025
0:04:14 | Fluctus Quadratum | Bridge to Suffering | 0:49 | Laplacian | 2025
0:05:03 | Ologram | 22.43 | 0:56 | La mia scia | 2025
0:05:59 | Jack O' The Clock | No. 4 Mountain | 2:05 | Portraits | 2025
0:08:04 | Pymlico | Don't Do That | 6:31 | Core | 2025
0:14:32 | Earthside | […]
Riddhiman Das is the CEO of TripleBlind, a privacy platform for AI. The company has raised $32M in funding, with its most recent round led by General Catalyst. He was previously the Head of International Technology Investments at Ant Financial, Alibaba's financial services arm. He was also the Product Architect at Zoloz, Chairman at Laplacian, Chief Data Officer at mySideWalk, and CTO of Galleon Labs. He received a 2013 White House Champions of Change award from President Barack Obama. In this episode, we cover a range of topics including: - Attack surface of an AI application - The ways in which privacy can be compromised during training and deployment of AI models - Role-based access control for Generative AI applications - Data leakage in Generative AI applications - Characteristics of a good privacy product - How TripleBlind is used in the healthcare and financial sectors Riddhiman's favorite book: Twenty Thousand Leagues Under the Sea (Author: Jules Verne) -------- Where to find Prateek Joshi: Newsletter: https://prateekjoshi.substack.com Website: https://prateekj.com LinkedIn: https://www.linkedin.com/in/prateek-joshi-91047b19 Twitter: https://twitter.com/prateekvjoshi
Timestamps: (00:00) - Intro (02:08) - Tony's background, Costa Rican singing mouse (06:59) - Traditional & embodied Turing Test, large language models (15:16) - Mouse intelligence, evolution, modularity, dish-washing dogs? (26:16) - Platform for training non-human animal-like virtual agents (36:14) - Exploration in children vs animals, innate vs learning, cognitive maps, complementary learning systems theory (46:53) - Genomic bottleneck, transfer learning, artificial Laplacian evolution (01:02:06) - Why does AI need connectomics? (01:06:55) - Brainbow, molecular connectomics: MAPseq & BRICseq (01:14:52) - Comparative (corvid) connectomics (01:18:04) - "Human uniqueness" - why do/don't people believe in evolutionary continuity (01:25:29) - Career questions & virtual mouse passing the Embodied Turing Test in 5 years? Tony's lab website https://zadorlab.labsites.cshl.edu/ Tony's Twitter https://twitter.com/TonyZador Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution - Embodied Turing Test paper (2022) https://arxiv.org/ftp/arxiv/papers/2210/2210.08340.pdf A critique of pure learning and what artificial neural networks can learn from animal brains paper (2019) http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2019/08/A-critique-of-pure-learning-and-what-artificial-neuralnetworks-can-learn-from-animal-brains.pdf Genomic bottleneck paper (2021) http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2021/03/Encoding-innate-ability-through-a-genomic-bottleneck.pdf MAPseq paper (2016) http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2018/04/Zador-etal_2016_neuron_High-throughput-mapping.pdf BRICseq paper (2020) http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2020/07/BRICseq-Bridges-Brain-wide-Interregional.pdf Squirrel ninja warrior course video https://www.youtube.com/watch?v=hFZFjoX2cGg Marbled Lungfish wiki https://en.wikipedia.org/wiki/Marbled_lungfish Papers about corvids https://www.science.org/doi/10.1126/science.1098410 https://link-springer-com.ezproxy1.bath.ac.uk/article/10.3758/s13420-020-00434-5 Twitter https://twitter.com/Embodied_AI
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2022.09.29.510097v1?rss=1 Authors: Behjat, H., Tarun, A., Abramian, D., Larsson, M., Van De Ville, D. Abstract: Structural brain graphs are conventionally limited to defining nodes as gray matter regions from an atlas, with edges reflecting the density of axonal projections between pairs of nodes. Here we explicitly model the entire set of voxels within a brain mask as nodes of high-resolution, subject-specific graphs. We define the strength of local voxel-to-voxel connections using diffusion tensors and orientation distribution functions derived from diffusion-weighted MRI data. We study the graphs' Laplacian spectral properties on data from the Human Connectome Project. We then assess the extent of inter-subject variability of the Laplacian eigenmodes via a Procrustes validation scheme. Finally, we demonstrate the extent to which functional MRI data are shaped by the underlying anatomical structure via graph signal processing. The graph Laplacian eigenmodes manifest highly resolved spatial profiles, reflecting distributed patterns that correspond to major white matter pathways. We show that the intrinsic dimensionality of the eigenspace of such high-resolution graphs is only a mere fraction of the graph dimensions. By projecting task and resting-state data on low-frequency graph Laplacian eigenmodes, we show, firstly, that brain activity can be well approximated by a small subset of low-frequency components, and secondly, that spectral graph energy profiles differ under different functional loads. The proposed graphs open new avenues in studying the brain, be it by exploring their organisational properties via graph or spectral graph theory, or by treating them as the scaffold on which brain function is observed at the individual level. Copy rights belong to original authors. Visit the link for more info Podcast created by PaperPlayer
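The core operation described in this abstract, projecting a signal onto low-frequency graph Laplacian eigenmodes, can be sketched in a few lines. The example below is a minimal stand-in (random toy graph and signal, NumPy only), not the paper's voxel-level pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a high-resolution voxel-to-voxel graph (symmetric, non-negative weights).
n = 300
W = (rng.random((n, n)) < 0.05) * rng.random((n, n))
W = np.triu(W, 1)
W = W + W.T

# Graph Laplacian L = D - W and its eigenmodes (columns of U, ordered by graph frequency).
L = np.diag(W.sum(axis=1)) - W
eigvals, U = np.linalg.eigh(L)

# Toy "functional" signal on the nodes, built to be spatially smooth plus a little noise.
x = U[:, :10] @ rng.standard_normal(10) + 0.05 * rng.standard_normal(n)

# Graph Fourier transform: coefficients of x in the Laplacian eigenbasis.
x_hat = U.T @ x

# Low-frequency approximation using only the first k eigenmodes.
k = 30
x_lowpass = U[:, :k] @ x_hat[:k]

# Fraction of signal energy captured by the low-frequency subspace.
energy_fraction = np.sum(x_hat[:k] ** 2) / np.sum(x_hat ** 2)
print(f"energy captured by the first {k} of {n} eigenmodes: {energy_fraction:.3f}")
```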
Pioneering neuroscientist Karl Friston joins Tim to talk about a concept he's developed called the free-energy principle, which may hold the key to advancing the understanding of human intelligence as we know it. Karl is a theoretical neuroscientist. He's an authority on brain imaging. His work has advanced mankind's understanding of schizophrenia, among other things. At the moment, he's becoming better known as the originator of the free-energy principle for human action and perception. In this episode, we'll talk with Karl about that free-energy principle: what it is, what it means, and what it can mean for the future. https://traffic.libsyn.com/secure/shapingopinion/Karl_Friston_Final_auphonic.mp3 I hope you have your coffee and are sitting in a comfortable place, because this conversation is going to introduce you to some entirely new thinking from one of the world's most unique scientific thinkers, Karl Friston. Before we get started, you need to know a little about Karl, and you will need an explanation of some of the words we will use here. Karl Friston is a theoretical neuroscientist. As mentioned, he is an authority on brain imaging. In 1990, he invented statistical parametric mapping (SPM), a computational technique that helps create brain images in a consistent shape so researchers can make consistent comparisons. He then invented voxel-based morphometry (VBM). An example of its use is his study of London taxi drivers, measuring the rear part of the brain's hippocampus to watch it grow as their knowledge of the streets grew. After that, he invented dynamic causal modeling (DCM) for brain imaging, used to determine whether people who have severe brain damage are minimally conscious or vegetative. He is one of the most frequently cited neuroscientists in the world. Each of these inventions was motivated by schizophrenia research and theoretical studies of value-learning, formulated as the dysconnection hypothesis of schizophrenia. To simplify, it's the hypothesis that the so-called wiring in your brain isn't all connecting properly. Karl currently works on models of functional integration in the human brain and the principles that underlie neuronal interactions. His main contribution to theoretical neurobiology is a free-energy principle for action and perception (active inference). That's what we cover in this episode. Karl received the first Young Investigators Award in Human Brain Mapping in 1996. He was elected a Fellow of the Academy of Medical Sciences in 1999. Since then, he has received numerous other honors and recognition for his work. Links The Genius Neuroscientist Who Might Hold the Key to True AI, Wired Karl Friston, The Helix Center Karl Friston and the Free Energy Principle, ExploringYourMind.com About this Episode's Guest Karl Friston Karl Friston is a theoretical neuroscientist and authority on brain imaging. He invented statistical parametric mapping (SPM), voxel-based morphometry (VBM) and dynamic causal modelling (DCM). These contributions were motivated by schizophrenia research and theoretical studies of value-learning, formulated as the dysconnection hypothesis of schizophrenia. Mathematical contributions include variational Laplacian procedures and generalized filtering for hierarchical Bayesian model inversion. Friston currently works on models of functional integration in the human brain and the principles that underlie neuronal interactions.
His main contribution to theoretical neurobiology is a free-energy principle for action and perception (active inference). Friston received the first Young Investigators Award in Human Brain Mapping (1996) and was elected a Fellow of the Academy of Medical Sciences (1999). In 2000 he was President of the international Organization of Human Brain Mapping. In 2003 he was awarded the Minerva Golden Brain Award and was elected a Fellow of th...
John Urschel received his bachelor's and master's degrees in mathematics from Penn State and then went on to become a professional football player for the Baltimore Ravens in 2014. During his second season, Urschel began his graduate studies in mathematics at MIT alongside his professional football career. Urschel eventually decided to retire from pro football to pursue his real passion, the study of mathematics, and he completed his doctorate in 2021. Urschel is currently a scholar at the Institute for Advanced Study where he is actively engaged in research on graph theory, numerical analysis, and machine learning. In addition, Urschel is the author of Mind and Matter, a New York Times bestseller about his life as an athlete and mathematician, and has been named to the Forbes 30 Under 30 list as an outstanding young scientist. In this episode, John and I discuss a hodgepodge of topics in spectral graph theory. We start off light and discuss the famous Braess's Paradox in which traffic congestion can be increased by opening a road. We then discuss the graph Laplacian to enable us to present Cheeger's Theorem, a beautiful result relating graph bottlenecks to graph eigenvalues (a small numerical sketch of these quantities follows these show notes). We then discuss various graph embedding and clustering results, and end with a discussion of the PageRank algorithm that powers Google search. Originally published on June 9, 2022 on YouTube: https://youtu.be/O6k0JRpA2mg Corrections: 01:14:24 : The inequalities are reversed here. It is corrected at 01:16:16. Timestamps: 00:00:00 : Introduction 00:04:30 : Being a professional mathematician and academia vs industry 00:09:41 : John's taste in mathematics 00:13:00 : Outline 00:17:23 : Braess's Paradox: "Opening a highway can increase traffic congestion." 00:25:34 : Prisoner's Dilemma. We need social forcing mechanisms to avoid undesirable outcomes (traffic jams). 00:31:20 : What is a graph 00:36:33 : Graph bottlenecks. Practical situations: Task assignment, the economy, organizational management. 00:42:44 : Quantifying bottlenecks: Cheeger's constant 00:46:43 : Cheeger's constant sample computations 00:52:07 : NP Hardness 00:55:48 : Graph Laplacian 00:58:30 : Graph Laplacian: Relation to Laplacian from calculus 01:00:27 : Graph Laplacian: 1-dimensional example 01:01:22 : Graph Laplacian: Analyst's Laplacian vs Geometer's Laplacian (they differ by a minus sign) 01:04:44 : Graph Laplacian: Some history 01:07:35 : Cheeger's Inequality: Statement 01:09:37 : ***Cheeger's Inequality: A great example of beautiful mathematics*** 01:10:46 : Cheeger's Inequality: Computationally tractable approximation of Cheeger's constant 01:14:57 : Rayleigh quotient, discussion of proof of Cheeger's inequality 01:19:16 : Harmonic oscillators: Springs heuristic for lambda_2 and Cheeger's inequality 01:22:20 : Interlude: Tutte's Spring Embedding Theorem (planar embeddings in terms of springs) 01:26:23 : Harmonic oscillators: Resume lambda_2 discussion and spring tension 01:29:45 : Interlude: Graph drawing using eigenfunctions 01:33:17 : Harmonic oscillators: Resume lambda_2 discussion: complete graph example and bottleneck is a perturbation of two disconnected components 01:38:26 : Summary thus far and graph bisection 01:42:44 : Graph bisection: Large eigenvalues for PCA vs low eigenvalues for spectral bisection 01:43:40 : Graph bisection: 1-dimensional intuition 01:44:40 : Graph bisection: Nodal domains 01:46:29 : Graph bisection: Aha, the 1-d example now makes sense. Splitting by level set of second eigenfunction is a good way to partition domain.
01:47:43 : Spectral graph clustering (complementary to graph bisection) 01:51:50 : Ng-Jordan-Weiss paper 01:52:10 : PageRank: Google's algorithm for ranking search results 01:53:44 : PageRank: Markov chain (Markov matrix) 01:57:32 : PageRank: Stationary distribution 02:00:20 : Perron-Frobenius Theorem 02:06:10 : Spectral gap: Analogy between bottlenecks for graphs and bottlenecks for Markov chain mixing 02:07:56 : Conclusion: State of the field, Urschel's recent results 02:10:28 : Joke: Two kinds of mathematicians Further Reading: A. Ng, M. Jordan, Y. Weiss. "On Spectral Clustering: Analysis and an algorithm" D. Spielman. "Spectral and Algebraic Graph Theory"
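As a rough companion to the Cheeger discussion in this episode, here is a minimal numerical sketch (not code from the episode) that builds a small two-cluster graph, computes the second eigenvalue of its normalized Laplacian, brute-forces the Cheeger constant, and checks the two sides of Cheeger's inequality:

```python
import itertools
import numpy as np

# Two 4-node cliques joined by a single "bottleneck" edge (a barbell-like graph).
n = 8
A = np.zeros((n, n))
for i, j in itertools.combinations(range(4), 2):
    A[i, j] = A[j, i] = 1.0        # clique on nodes 0-3
for i, j in itertools.combinations(range(4, 8), 2):
    A[i, j] = A[j, i] = 1.0        # clique on nodes 4-7
A[3, 4] = A[4, 3] = 1.0            # the bottleneck edge

deg = A.sum(axis=1)
L = np.diag(deg) - A                                      # combinatorial graph Laplacian
L_norm = np.diag(deg ** -0.5) @ L @ np.diag(deg ** -0.5)  # normalized Laplacian

lam2 = np.sort(np.linalg.eigvalsh(L_norm))[1]             # second-smallest eigenvalue

def conductance(S):
    """Cut weight of (S, complement) divided by the smaller of the two volumes."""
    S = set(S)
    cut = sum(A[i, j] for i in S for j in range(n) if j not in S)
    vol_S = deg[list(S)].sum()
    return cut / min(vol_S, deg.sum() - vol_S)

# Brute-force Cheeger constant -- only feasible for tiny graphs like this one.
h = min(conductance(S)
        for r in range(1, n)
        for S in itertools.combinations(range(n), r))

# Cheeger's inequality for the normalized Laplacian: lam2/2 <= h <= sqrt(2*lam2).
print(f"lambda_2 = {lam2:.4f}, Cheeger constant h = {h:.4f}")
print(f"check: {lam2 / 2:.4f} <= {h:.4f} <= {np.sqrt(2 * lam2):.4f}")
```

For this barbell-like graph the single bridge edge is the bottleneck, so the Cheeger constant is small and lambda_2 is correspondingly close to zero.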
Tonight is a special show discussing the Lakeland Weather Club. Joining us as the first of several Guest WeatherBrains is author Dorna Schroeter. Welcome! Our next Guest WeatherBrain might have more impact on the field of meteorology than almost any other person: he has had a positive impact on the thousands of students he has taught. He was nominated by his former students and recognized with an AMS special award for his innovative leadership in teaching meteorology at the high school level while mentoring and inspiring his students. Jim Witt, welcome! Also joining us is the Director of the Hurricane Research Division at NOAA's Atlantic Oceanographic and Meteorological Laboratory. Dr. Frank Marks, thanks for joining. Karl Silverman retired as a Lead Forecaster in the Spaceflight Meteorology Group at Johnson Space Center. Thanks for joining. Greg Tripoli became the Chair of the Atmospheric and Oceanic Sciences Department at the University of Wisconsin-Madison. Welcome!
Gal.30 Eroge News: New Titles in Pairs & Newcomers Entering the Hobby. GG Channel: we don't just talk about anime! An anime podcast discussing ACG and related topics. Hosts: 黩武大祭司, 德国咕科. 黩武大祭司's Bilibili account: https://space.bilibili.com/23128836?from=search&seid=12776679223031499197&spm_id_from=333.337.0.0 You're welcome to join our QQ group for episode updates: 553943741. You can support us on Afdian: https://afdian.net/@GGChannel?tab=home Episode overview: 00:57 Frontwing announces a PC version of 『グリザイア クロノスリベリオン』 09:16 Sonora releases the PV and voice cast for 『ウチはもう、延期できない。』 13:24 The entire Laplacian staff comes down with COVID-19 14:36 Pre-registration opens for Key's new mobile game 《Heaven Burns Red》 by Jun Maeda 16:49 Key visual revealed for the second Key Kinetic title, 『LUNARiA -Virtualized Moonchild-』 18:11 Key visual revealed for the third Key Kinetic title, 『終のステラ』 22:34 ensemble releases the OP and trial version of 『星の乙女と六華の姉妹』 26:38 Key visual revealed for CRYSTALiA's 5th project new release, RE:D Cherish! 28:43 The 少女爱上姐姐 3 OVA goes on sale December 24 31:03 The OP for 創作彼女の恋愛公式 is updated 37:31 Localization news: Chinese patches for 近月 2.1 and 2.2 released on October 26; an overseas edition of Monkey!! announced 39:35 Topic of this episode: new games, and how newcomers can get into the hobby under the entertainment restrictions. This episode's OP, ED, and BGM mainly come from works by 夜羊社. This episode's cover art is from 《我的妹妹哪有那么可爱》 (Oreimo).
In this interview, Mitch Belkin and Daniel Belkin speak with Dr. Karl Friston about his proposed free energy principle and how it applies to various psychiatric and neurological disorders including schizophrenia, depression, autism, and Parkinson's. They also touch on the disconnection hypothesis of schizophrenia, how theories of schizophrenia have evolved over the last two centuries, and the relationship between schizophrenia and autism. Who is Karl Friston? Dr. Karl Friston is a professor of neuroscience at University College London and an authority on brain imaging. He is the 20th most-cited living scientist, with over 260,000 citations for his work. After studying natural sciences at Cambridge, he completed his medical studies at King's College Hospital in London and worked for two years in an inpatient psychiatric facility on the outskirts of Oxford, where he treated patients suffering from schizophrenia. Dr. Friston has developed a number of statistical tools for analyzing data from the brain, including statistical parametric mapping (SPM), voxel-based morphometry (VBM), and dynamic causal modeling (DCM). His mathematical contributions include variational Laplacian procedures and generalized filtering for hierarchical Bayesian model inversion. ______________________ Follow us @ExMedPod, and sign up for our newsletter at www.externalmedicinepodcast.com/subscribe Daniel Belkin and Mitch Belkin are brothers and 4th-year medical students. The External Medicine Podcast is a podcast exploring nontraditional medical ideas and innovation.
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
Traditional physics works within the “Laplacian paradigm”: you give me the state of the universe (or some closed system), some equations of motion, then I use those equations to evolve the system through time. Constructor theory proposes an alternative paradigm: to think of physical systems in terms of counterfactuals — the set of rules governing what can and cannot happen. Originally proposed by David Deutsch, constructor theory has been developed by today's guest, Chiara Marletto, and others. It might shed new light on quantum gravity and fundamental physics, as well as having applications to higher-level processes of thermodynamics and biology. Support Mindscape on Patreon. Chiara Marletto received her DPhil in physics from the University of Oxford. She is currently a research fellow at Wolfson College, University of Oxford. Her new book is The Science of Can and Can't: A Physicist's Journey Through the Land of Counterfactuals. Links: Web site, Oxford web page, Google Scholar publications, Wikipedia, “How to Rewrite the Laws of Physics in the Language of Impossibility” (Quanta). See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
HTT.26 Eroge News: ASMR & All-Ages Titles. GG Channel: we don't just talk about anime! An anime podcast discussing ACG and related topics. Hosts: 祭司, 德国咕科. You're welcome to join our QQ group for episode updates: 553943741. You can support us on Afdian: https://afdian.net/@GGChannel?tab=home Episode overview: a discussion of August's galgame-related news. News covered: 1:26 Crowdfunding for the Chinese edition of Laplacian's 《白昼梦的构想图》 reaches 590,000 RMB 13:01 Yuzusoft (柚子社) livestream 30:38 Palette (ぱれっと) livestream 38:22 Localization and other news: the teaser site for 『サクラノ刻』 is updated, with a trial version announced for September 10, 2021; Sprite's 苍之彼方的四重奏 EXTRA2 (Aokana EXTRA2) enters the voice-recording stage; Sekai Project announces new English (and official Chinese) localization plans for 龙姬2, QUALIA, and 巧克甜恋2; Circus's 【D.C.4 Plus Harmony】 has gone gold; SukeraSparo launches its new brand SukeraSono, with the all-ages yuri audio work 「メイドインドリーム」 up for pre-order; the Chinese translation of 富婆妹 enters the testing phase 50:05 Wrap-up discussion. This episode's OP, ED, and BGM mainly come from the OST and OP of 《樱之诗》.
Christian Szegedy is a Research Scientist at Google. His research includes machine learning methods such as the Inception architecture, batch normalization, and adversarial examples, and he currently investigates machine learning for mathematical reasoning. Christian’s PhD thesis is titled "Some Applications of the Weighted Combinatorial Laplacian", which he completed in 2005 at the University of Bonn. We discuss Christian’s background in mathematics, his PhD work on areas of both pure and applied mathematics, and his path into machine learning research. Finally, we discuss his recent work on using deep learning for mathematical reasoning and automatically formalizing mathematics. Episode notes: https://cs.nyu.edu/~welleck/episode15.html Follow the Thesis Review (@thesisreview) and Sean Welleck (@wellecks) on Twitter, and find out more info about the show at https://cs.nyu.edu/~welleck/podcast.html Support The Thesis Review at www.buymeacoffee.com/thesisreview
The 365 Days of Astronomy, the daily podcast of the International Year of Astronomy 2009
We're not sure about this one… Being a principle, there are a number of different ways in which Heisenberg’s uncertainty principle can come into play. The best known example is that the more you try to home in on a particle’s position x, the less you are able to determine its momentum p. And if you instead devote your efforts towards determining its momentum, you will find your ability to determine its position beginning to slip away. Essentially, you can never know either of these aspects of the particle’s behavior exactly, and the more precisely you know one, the less precisely you can know the other. Also: The Wave Function. Since this is Fantastic Physics Formulas, we’ll first tell you what the formula is and then we’ll spend the rest of the episode trying to explain what the heck it means. So, firstly, the formula says that the second derivative of u with respect to time is equal to c squared times the Laplacian of u (written out just below). We've added a new way to donate to 365 Days of Astronomy to support editing, hosting, and production costs. Just visit: https://www.patreon.com/365DaysOfAstronomy and donate as much as you can! Share the podcast with your friends and send the Patreon link to them too! Every bit helps! Thank you! ------------------------------------ Do go visit http://astrogear.spreadshirt.com/ for cool Astronomy Cast and CosmoQuest t-shirts, coffee mugs and other awesomeness! http://cosmoquest.org/Donate This show is made possible through your donations. Thank you! (Haven't donated? It's not too late! Just click!) The 365 Days of Astronomy Podcast is produced by Astrosphere New Media. http://www.astrosphere.org/ Visit us on the web at 365DaysOfAstronomy.org or email us at info@365DaysOfAstronomy.org.
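For reference, the two formulas paraphrased above can be written out in their standard textbook forms (not taken verbatim from the episode):

```latex
% Position-momentum uncertainty relation:
\[
  \Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\]
% The wave equation: the second time derivative of u equals c^2 times the Laplacian of u:
\[
  \frac{\partial^2 u}{\partial t^2} \;=\; c^2 \,\nabla^2 u,
  \qquad
  \nabla^2 u \;=\; \frac{\partial^2 u}{\partial x^2}
              + \frac{\partial^2 u}{\partial y^2}
              + \frac{\partial^2 u}{\partial z^2}.
\]
```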
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.11.14.382655v1?rss=1 Authors: Coskun, M., Koyuturk, M. Abstract: Motivation: Link prediction is an important and well-studied problem in computational biology, with a broad range of applications including disease gene prioritization, drug-disease associations, and drug response in cancer. The general principle in link prediction is to use the topological characteristics and the attributes -- if available -- of the nodes in the network to predict new links that are likely to emerge/disappear. Recently, graph representation learning methods, which aim to learn a low-dimensional representation of the topological characteristics and the attributes of the nodes, have drawn increasing attention for solving the link prediction problem via learnt low-dimensional features. Most prominently, Graph Convolution Network (GCN)-based network embedding methods have demonstrated great promise in link prediction due to their ability to capture non-linear information of the network. To date, GCN-based network embedding algorithms utilize a Laplacian matrix in their convolution layers as the convolution matrix, and the effect of the convolution matrix on algorithm performance has not been comprehensively characterized in the context of link prediction in biomedical networks. On the other hand, for a variety of biomedical link prediction tasks, traditional node similarity measures such as Common Neighbors, Adamic-Adar, and others have shown promising results, and hence there is a need to systematically evaluate node similarity measures as convolution matrices in terms of their usability and potential to further the state of the art. Results: We select 8 representative node similarity measures as convolution matrices within the single-layered GCN graph embedding method and conduct a systematic comparison on 3 important biomedical link prediction tasks: drug-disease association (DDA) prediction, drug-drug interaction (DDI) prediction, and protein-protein interaction (PPI) prediction. Our experimental results demonstrate that node similarity-based convolution matrices significantly improve GCN-based embedding algorithms and deserve more attention in future biomedical link prediction work. Availability: Our method is implemented as a python library and is available at githublink Copy rights belong to original authors. Visit the link for more info
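To make the comparison concrete, here is a heavily simplified, hypothetical sketch (random data, untrained weights; not the authors' library) of a single-layer GCN-style embedding in which the convolution matrix can be swapped between the standard normalized adjacency and a node-similarity matrix such as common neighbors:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy biomedical-style network: random symmetric adjacency and node features.
n, f, d = 100, 16, 8
A = (rng.random((n, n)) < 0.05).astype(float)
A = np.triu(A, 1); A = A + A.T
X = rng.standard_normal((n, f))          # node attributes
W = rng.standard_normal((f, d))          # (untrained) projection weights

def normalized_adjacency(A):
    """Standard GCN convolution matrix: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

def common_neighbors(A):
    """Node-similarity alternative: entry (i, j) counts shared neighbors."""
    return A @ A

def embed(C, X, W):
    """Single-layer graph convolution: Z = relu(C X W)."""
    return np.maximum(C @ X @ W, 0.0)

def link_score(Z, i, j):
    """Dot-product decoder for link prediction, squashed to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-(Z[i] @ Z[j])))

for name, C in [("normalized adjacency", normalized_adjacency(A)),
                ("common neighbors", common_neighbors(A))]:
    Z = embed(C, X, W)
    print(name, "score(0, 1) =", round(link_score(Z, 0, 1), 3))
```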
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.10.28.359125v1?rss=1 Authors: Behjat, H., Aganj, I., Abramian, D., Eklund, A., Westin, C.-F. Abstract: In this work, we leverage the Laplacian eigenbasis of voxel-wise white matter (WM) graphs derived from diffusion-weighted MRI data, dubbed WM harmonics, to characterize the spatial structure of WM fMRI data. By quantifying the energy content of WM fMRI data associated with subsets of WM harmonics across multiple spectral bands, we show that the data exhibits notable subtle spatial modulations under functional load that are not manifested during rest. WM harmonics provide a novel means to study the spatial dynamics of WM fMRI data, in such a way that the analysis is informed by the underlying anatomical structure. Copy rights belong to original authors. Visit the link for more info
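The band-energy idea can be illustrated with a toy computation (random stand-in graph and signal, not WM-graph data): decompose a graph signal in the Laplacian eigenbasis and sum the squared coefficients over spectral bands.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a white-matter voxel graph and one functional frame sampled on its nodes.
n = 200
W = (rng.random((n, n)) < 0.04) * rng.random((n, n))
W = np.triu(W, 1)
W = W + W.T

L = np.diag(W.sum(axis=1)) - W
eigvals, U = np.linalg.eigh(L)           # "harmonics": eigenmodes ordered by graph frequency

signal = rng.standard_normal(n)          # stand-in for one fMRI frame
coeffs = U.T @ signal                    # spectral (harmonic) coefficients

# Energy content per spectral band: split the ordered eigenmodes into equal-size bands
# and sum the squared coefficients within each band.
n_bands = 4
band_energy = np.array([np.sum(c ** 2) for c in np.array_split(coeffs, n_bands)])
band_energy /= np.sum(coeffs ** 2)
print("fraction of energy per band (low to high graph frequency):", band_energy.round(3))
```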
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.16.300384v1?rss=1 Authors: Xie, X., Cai, C., Damasceno, P. F., Nagarajan, S., Raj, A. Abstract: How do functional brain networks emerge from the underlying wiring of the brain? We examine how resting-state functional activation patterns emerge from the underlying connectivity and length of white matter fibers that constitute its "structural connectome". By introducing realistic signal transmission delays along fiber projections, we obtain a complex-valued graph Laplacian matrix that depends on two parameters: coupling strength and oscillation frequency. This complex Laplacian admits a complex-valued eigen-basis in the frequency domain that is highly tunable and capable of reproducing the spatial patterns of canonical functional networks without requiring any detailed neural activity modeling. Specific canonical functional networks can be predicted using linear superposition of small subsets of complex eigenmodes. Using a novel parameter inference procedure we show that the complex Laplacian outperforms the real-valued Laplacian in predicting functional networks. The complex Laplacian eigenmodes therefore constitute a tunable yet parsimonious substrate on which a rich repertoire of realistic functional patterns can emerge. Although brain activity is governed by highly complex nonlinear processes and dense connections, our work suggests that simple extensions of linear models to the complex domain effectively approximate rich macroscopic spatial patterns observable on BOLD fMRI. Copy rights belong to original authors. Visit the link for more info
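A rough, hypothetical sketch of how such a complex-valued Laplacian could be assembled from a weight matrix and fiber lengths is shown below; the parameter names, values, and normalization are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-ins for a structural connectome: connection weights C and fiber lengths D (mm).
n = 68
C = rng.random((n, n)); C = (C + C.T) / 2; np.fill_diagonal(C, 0)
D = 10 + 140 * rng.random((n, n)); D = (D + D.T) / 2

def complex_laplacian(C, D, freq_hz, coupling, velocity_mm_per_s=10_000.0):
    """Complex-valued, degree-normalized graph Laplacian with transmission delays.

    Each connection gets a phase factor exp(-i * omega * delay), with
    delay = fiber length / conduction velocity, so the operator depends on both
    the oscillation frequency and the coupling strength. Illustrative construction
    only; the paper's exact parameterization may differ.
    """
    omega = 2 * np.pi * freq_hz
    delays = D / velocity_mm_per_s
    C_complex = coupling * C * np.exp(-1j * omega * delays)
    deg = np.abs(C_complex).sum(axis=1)
    return np.eye(len(C)) - np.diag(1.0 / deg) @ C_complex

Lc = complex_laplacian(C, D, freq_hz=10.0, coupling=0.5)
eigvals, eigvecs = np.linalg.eig(Lc)          # complex eigenmodes (Lc is not Hermitian)
order = np.argsort(np.abs(eigvals))
low_modes = eigvecs[:, order[:10]]            # candidate substrate for functional patterns
print("smallest |eigenvalue|:", round(float(np.abs(eigvals[order[0]])), 4))
print("low-frequency eigenmode basis shape:", low_modes.shape)
```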
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.11.292789v1?rss=1 Authors: Karabanov, A. N., Madsen, K. H., Krohne, L., Siebner, H. R. Abstract: Background: Electroencephalography (EEG) and single-pulse transcranial magnetic stimulation (spTMS) of the primary motor hand area (M1-HAND) have been combined to explore whether the instantaneous expression of pericentral mu-rhythm drives fluctuations in corticomotor excitability, but this line of research has yielded diverging results. Objectives: To re-assess the relationship between the mu-rhythm power expressed in left pericentral cortex and the amplitude of motor potentials (MEP) evoked with spTMS in left M1-HAND. Methods: 15 non-preselected healthy young participants received spTMS to the motor hot spot of left M1-HAND. Regional expression of mu-rhythm was estimated online based on a radial source at the motor hotspot and informed the timing of spTMS, which was applied during epochs belonging to either the highest or the lowest quartile of regionally expressed mu-power. Using MEP amplitude as the dependent variable, we computed a linear mixed-effects model, which included mu-power and mu-phase at the time of stimulation and the inter-stimulus interval (ISI) as fixed effects and subject as a random effect. Mu-phase was estimated by post-hoc sorting of trials into four discrete phase bins. We performed a follow-up analysis on the same EEG-triggered MEP data set in which we isolated mu-power at the sensor level using a Laplacian montage centered on the electrode above the M1-HAND. Results: Pericentral mu-power traced as a radial source at the motor hot spot did not significantly modulate the MEP, but mu-power determined by the surface Laplacian did, showing a positive relation between mu-power and MEP amplitude. In neither case was there an effect of mu-phase on MEP amplitude. Copy rights belong to original authors. Visit the link for more info
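The surface-Laplacian (Hjorth-style) derivation used in the follow-up analysis amounts to subtracting the mean of neighboring channels from the channel of interest. Below is a minimal sketch with assumed channel names and random data (the study's actual montage and preprocessing will differ):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy EEG epoch: channels x samples, with a hypothetical layout around C3.
channels = ["C3", "FC3", "CP3", "C1", "C5"]   # center + 4 nearest neighbors (assumed layout)
eeg = rng.standard_normal((len(channels), 1000))

def hjorth_laplacian(eeg, channels, center, neighbors):
    """Surface-Laplacian (Hjorth) derivation: center channel minus mean of its neighbors.

    This spatially high-pass filters the signal and emphasizes activity generated
    under the center electrode (here, roughly over left M1-HAND).
    """
    c = channels.index(center)
    nb = [channels.index(ch) for ch in neighbors]
    return eeg[c] - eeg[nb].mean(axis=0)

mu_signal = hjorth_laplacian(eeg, channels, "C3", ["FC3", "CP3", "C1", "C5"])

# Mu-power estimate (8-13 Hz) via FFT, assuming a 1000 Hz sampling rate.
fs = 1000.0
freqs = np.fft.rfftfreq(mu_signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(mu_signal)) ** 2
mu_power = power[(freqs >= 8) & (freqs <= 13)].mean()
print("mu-band power (arbitrary units):", round(float(mu_power), 2))
```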
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.09.08.287110v1?rss=1 Authors: Aqil, M., Atasoy, S., Kringelbach, M. L., Hindriks, R. Abstract: Tools from the field of graph signal processing, in particular the graph Laplacian operator, have recently been successfully applied to the investigation of structure-function relationships in the human brain. The eigenvectors of the human connectome graph Laplacian, dubbed "connectome harmonics", have been shown to relate to the functionally relevant resting-state networks. Whole-brain modelling of brain activity combines structural connectivity with local dynamical models to provide insight into the large-scale functional organization of the human brain. In this study, we employ the graph Laplacian and its properties to define and implement a large class of neural activity models directly on the human connectome. These models, consisting of systems of stochastic integrodifferential equations on graphs, are dubbed graph neural fields, in analogy with the well-established continuous neural fields. We obtain analytic predictions for harmonic and temporal power spectra, as well as functional connectivity and coherence matrices, of graph neural fields, with a technique dubbed CHAOSS (shorthand for Connectome-Harmonic Analysis Of Spatiotemporal Spectra). Combining graph neural fields with appropriate observation models allows for estimating model parameters from experimental data as obtained from electroencephalography (EEG), magnetoencephalography (MEG), or functional magnetic resonance imaging (fMRI); as an example application, we study a stochastic Wilson-Cowan graph neural field model on a high-resolution connectome, and show that the model equilibrium fluctuations can reproduce the empirically observed harmonic power spectrum of BOLD fMRI data. Graph neural fields natively allow the inclusion of important features of cortical anatomy and fast computations of observable quantities for comparison with multimodal empirical data. They thus appear particularly suitable for modelling whole-brain activity at mesoscopic scales, and open new potential avenues for connectome-graph-based investigations of structure-function relationships. Copy rights belong to original authors. Visit the link for more info
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2020.06.20.162131v1?rss=1 Authors: O'Reilly, C., Larson, E., Richards, J. E., Elsabbagh, M. Abstract: Electroencephalographic (EEG) source reconstruction is a powerful approach that helps to unmix scalp signals, mitigates volume conduction issues, and allows anatomical localization of brain activity. Algorithms used to estimate cortical sources require an anatomical model of the head and the brain, generally reconstructed using magnetic resonance imaging (MRI). When such scans are unavailable, a population average can be used for adults, but no average surface template is available for cortical source imaging in infants. To address this issue, this paper introduces a new series of 12 anatomical models for subjects between zero and 24 months of age. These templates are built from MRI averages and volumetric boundary element method segmentation of head tissues available as part of the Neurodevelopmental MRI Database. Surfaces separating the pia mater, the gray matter, and the white matter were estimated using the Infant FreeSurfer pipeline. The surface of the skin as well as the outer and inner skull surfaces were extracted using a marching cubes algorithm followed by Laplacian smoothing and mesh decimation. We post-processed these meshes to correct topological errors and ensure watertight meshes. The use of these templates for source reconstruction is demonstrated and validated using 100 high-density EEG recordings in 7-month-old infants. Hopefully, these templates will support future studies based on EEG source reconstruction and functional connectivity in healthy infants as well as in clinical pediatric populations. Particularly, they should make EEG-based neuroimaging more feasible in longitudinal neurodevelopmental studies where it may not be possible to scan infants at multiple time points. Copy rights belong to original authors. Visit the link for more info
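Laplacian smoothing of a surface mesh, one of the post-processing steps mentioned above, simply relaxes each vertex toward the average of its neighbors. Here is a minimal sketch (umbrella-operator smoothing on a toy mesh; not the pipeline used to build the templates):

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Simple (umbrella-operator) Laplacian smoothing of a triangle mesh.

    Each vertex is repeatedly moved a fraction `lam` of the way toward the
    average of its neighbors. Illustrative only -- real pipelines typically also
    decimate the mesh and enforce watertightness afterwards.
    """
    n = len(vertices)
    # Build vertex adjacency from the triangle faces.
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))

    v = vertices.astype(float).copy()
    for _ in range(iterations):
        centroids = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                              for i, nb in enumerate(neighbors)])
        v += lam * (centroids - v)
    return v

# Tiny example: a noisy square patch made of two triangles.
verts = np.array([[0, 0, 0], [1, 0, 0.2], [1, 1, -0.1], [0, 1, 0.05]])
faces = [(0, 1, 2), (0, 2, 3)]
print(laplacian_smooth(verts, faces, iterations=5).round(3))
```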
Adam speaks with former Baltimore Ravens Centre John Urschel who is even more at home creating “eigensolvers for minimal Laplacian eigenvectors” than he is crashing into players in the NFL! Follow Adam Spencer on Twitter Find LiSTNR on Facebook: https://www.facebook.com/LiSTNRau/ Follow LiSTNR on Instagram: https://www.instagram.com/listnrau/ Follow LiSTNR Australia on Twitter: https://twitter.com/listnrau Download the LiSTNR app from the Apple and Google Play app stores. Or go to listnr.com See omnystudio.com/listener for privacy information.
Join former NFL lineman/current mathematician John Urschel to learn about how to take a dense graph and find a sparse graph whose Laplacian is very close to that of the original graph. Applied math at its finest.
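Spectral sparsification is commonly done by sampling edges with probability proportional to weight times effective resistance and reweighting the kept edges; the sparse graph's Laplacian then approximates the original in the quadratic-form sense. Here is a rough sketch of that idea (small dense graph, NumPy only; not the specific construction discussed in the episode):

```python
import numpy as np

rng = np.random.default_rng(5)

# Dense weighted graph on n nodes (complete graph with random positive weights).
n = 40
W = rng.random((n, n)); W = np.triu(W, 1); W = W + W.T

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

L = laplacian(W)
L_pinv = np.linalg.pinv(L)

# Effective-resistance sampling: keep each edge with probability proportional to
# (weight x effective resistance) and reweight kept edges by 1/p, so that the
# sparsified Laplacian stays close to the original in the quadratic-form sense.
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if W[i, j] > 0]
W_sparse = np.zeros_like(W)
oversample = 4.0 * np.log(n)
for i, j in edges:
    chi = np.zeros(n); chi[i], chi[j] = 1.0, -1.0
    r_eff = chi @ L_pinv @ chi                    # effective resistance of edge (i, j)
    p = min(1.0, oversample * W[i, j] * r_eff)
    if rng.random() < p:
        W_sparse[i, j] = W_sparse[j, i] = W[i, j] / p

L_sparse = laplacian(W_sparse)

# Spectral closeness check: quadratic forms should be close on random test vectors.
x = rng.standard_normal(n)
print("kept edges:", int((W_sparse > 0).sum() // 2), "of", len(edges))
print("x^T L x  =", round(float(x @ L @ x), 2))
print("x^T L~ x =", round(float(x @ L_sparse @ x), 2))
```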
Andrii Khrabustovskyi works at our faculty in the group Nonlinear Partial Differential Equations and is a member of the CRC Wave phenomena: analysis and numerics. He was born in Kharkiv, Ukraine, and finished his studies as well as his PhD at the Kharkiv National University and the Institute for Low Temperature Physics and Engineering of the National Academy of Sciences of Ukraine. He joined our faculty in 2012 as a postdoc in the former Research Training Group 1294 Analysis, Simulation and Design of Nanotechnological Processes, which was active until 2014. Gudrun Thäter talked with him about one of his research interests, asymptotic analysis and homogenization of PDEs. Photonic crystals are periodic dielectric media in which electromagnetic waves from certain frequency ranges cannot propagate. Mathematically speaking, this is due to gaps in the spectrum of the related differential operators. An interesting question is therefore whether there are gaps between bands of the spectrum of operators related to wave propagation, especially on periodic geometries and with periodic coefficients in the operator. It is known that the spectrum of periodic self-adjoint operators has band structure. This means the spectrum is a locally finite union of compact intervals called bands (written out in the formula block at the end of these notes). In general, the bands may overlap and the existence of gaps is therefore not guaranteed. A simple example is the spectrum of the Laplacian on the whole space, which is the half axis [0,∞). The classic approach to such problems in the whole space case is the Floquet–Bloch theory. Homogenization is a collection of mathematical tools which are applied to media with strongly inhomogeneous parameters or highly oscillating geometry. Roughly speaking, the aim is to replace the complicated inhomogeneous medium by a simpler homogeneous medium with similar properties and characteristics. In our case we deal with PDEs with periodic coefficients in a periodic geometry which is considered to be infinite. In the limit of a characteristic small parameter going to zero it behaves like a corresponding homogeneous medium. To make this a bit more mathematically rigorous, one can consider a sequence of operators with a small parameter (e.g. concerning cell size or material properties) and prove some properties in the limit as the parameter goes to zero. The optimal result is that it converges to some operator which is the right homogeneous one. If this limit operator has gaps in its spectrum then the gaps are present in the spectra of pre-limit operators (for small enough parameter). The advantages of the homogenization approach compared to the classical one with Floquet–Bloch theory are: The knowledge of the limit operator is helpful and only available through homogenization. For finite domains Floquet–Bloch does not work well. Though we always have a discrete spectrum, we might want to have the gaps in a fixed position independent of the size of our domain. Here the homogenization theory works in principle also for the bounded case (it is just a bit technical). An interesting geometry in this context is a domain with periodically distributed holes. The question arises: what happens if the sizes of the holes and the period simultaneously go to zero? The easiest operator which we can study is the Laplace operator subject to Dirichlet boundary conditions.
There are three possible regimes: For holes of the same order as the period (even slightly smaller), the Dirichlet conditions on the boundary of the holes dominate -- the solution of the corresponding Poisson equation tends to zero. For significantly smaller holes, the influence of the holes is so small that the problem "forgets" about them as the parameter goes to zero. There is a borderline case which lies between cases 1 and 2. It exhibits some interesting effects and can explain the occurrence of so-called strange terms. A traditional ansatz in homogenization works with the concept of so-called slow and fast variables. The name comes from the following observation. If we consider an infinite layer in cylindrical coordinates, then the variable r measures the distance from the origin when going "along the layer", φ is the angle in that plane, and z is the variable which goes in the finite direction perpendicular to that plane. When we have functions of the form r^k, the derivative with respect to r changes the power to k-1, while the other derivatives leave that power unchanged. In the interesting case k is negative and the r-derivative makes the function decrease even faster. This leads to the name fast variable. The properties in this simple example translate as follows. For any function u we will think of it as depending on a set of slow and fast variables (characteristic to the problem) and a small parameter eps, and try to find u as a formal expansion u = u_0 + eps u_1 + eps^2 u_2 + ..., where each term depends on both sets of variables and where in our applications the fast variable is typically x/eps. One can formally sort through the eps-levels using the properties of the differential operator. The really hard part then is to prove that this formal result is indeed true by finding error estimates in the right (complicated) spaces. There are many more tools available, like the technique of Tartar/Murat, who use a weak formulation with special test functions depending on the small parameter. The weak point of that theory is that we first have to know the result as the parameter goes to zero before we can construct the test function. Also the concept of Gamma convergence and the unfolding trick of Cioranescu are helpful. An interesting and new application of these mathematical results is the construction of waveguides. The corresponding domain in which we place a waveguide is bounded in two directions and unbounded in one (e.g. an unbounded cylinder). Serguei Nazarov proposed making holes in order to open gaps in the spectrum of a specified waveguide. Andrii Khrabustovskyi suggests distributing finitely many traps, which do not influence the essential spectrum but add eigenvalues. One interesting effect is that in this way one can find terms which are nonlocal in time or space and thus stand for memory effects of the material. References P. Exner and A. Khrabustovskyi: On the spectrum of narrow Neumann waveguide with periodically distributed δ′ traps, Journal of Physics A: Mathematical and Theoretical, 48 (31) (2015), 315301. A. Khrabustovskyi: Opening up and control of spectral gaps of the Laplacian in periodic domains, Journal of Mathematical Physics, 55 (12) (2014), 121502. A. Khrabustovskyi: Periodic elliptic operators with asymptotically preassigned spectrum, Asymptotic Analysis, 82 (1-2) (2013), 1-37. S.A. Nazarov, G. Thäter: Asymptotics at infinity of solutions to the Neumann problem in a sieve-type layer, Comptes Rendus Mecanique 331(1) (2003) 85-90. S.A. Nazarov: Asymptotic Theory of Thin Plates and Rods: Vol.1. Dimension Reduction and Integral Estimates. Nauchnaya Kniga: Novosibirsk, 2002.
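For reference, the band/gap terminology used in these notes can be written out as follows (standard facts about periodic self-adjoint operators, not results specific to this episode):

```latex
% Band structure: the spectrum of a periodic self-adjoint operator is a locally finite
% union of compact intervals ("bands"), which may overlap:
\[
  \sigma(A_{\mathrm{per}}) \;=\; \bigcup_{k \in \mathbb{N}} [a_k, b_k].
\]
% A spectral gap is an open interval (alpha, beta) lying between two consecutive bands with
\[
  (\alpha, \beta) \cap \sigma(A_{\mathrm{per}}) = \emptyset ,
\]
% which can only exist if those bands do not overlap. By contrast, the free Laplacian
% on the whole space has gapless spectrum:
\[
  \sigma\!\left(-\Delta,\; L^{2}(\mathbb{R}^{n})\right) \;=\; [0, \infty).
\]
```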
Turner, A (Lancaster University) Tuesday 17th March 2015 - 14:00 to 15:00
Boutillier, C (Université Pierre & Marie Curie-Paris VI) Friday 30 January 2015, 11:30-12:30
Wu, H (Stanford University) Wednesday 12 February 2014, 13:30-14:15
Chipot, M (Universität Zürich) Wednesday 22 January 2014, 15:15-16:15
We derive a general formula for the Laplacian acting on a function f(r), then demonstrate that the Laplacian is zero in the case that f(r) = 1/r (for r ≠ 0), thereby showing that 1/r is harmonic away from the origin.
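The derivation summarized above works out as follows (standard result, written here for the three-dimensional case):

```latex
% Laplacian of a radially symmetric function f(r) in three dimensions (spherical coordinates):
\[
  \nabla^{2} f(r) \;=\; \frac{1}{r^{2}}\frac{d}{dr}\!\left(r^{2}\,\frac{df}{dr}\right)
                 \;=\; f''(r) + \frac{2}{r}\,f'(r).
\]
% For f(r) = 1/r (and r != 0): f'(r) = -1/r^2 and f''(r) = 2/r^3, so
\[
  \nabla^{2}\!\left(\frac{1}{r}\right)
    \;=\; \frac{2}{r^{3}} + \frac{2}{r}\left(-\frac{1}{r^{2}}\right) \;=\; 0,
\]
% i.e. 1/r is harmonic away from the origin.
```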
Introduces the del operator and gives a brief discussion of how it is used to form the gradient, divergence and curl, and then the Laplacian.
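In symbols (standard definitions, added here for reference):

```latex
% The del (nabla) operator in Cartesian coordinates and the three ways it is used,
% plus the Laplacian formed from it:
\[
  \nabla = \left(\frac{\partial}{\partial x},\,
                 \frac{\partial}{\partial y},\,
                 \frac{\partial}{\partial z}\right),
\qquad
  \operatorname{grad} f = \nabla f,
\qquad
  \operatorname{div} \mathbf{F} = \nabla \cdot \mathbf{F},
\qquad
  \operatorname{curl} \mathbf{F} = \nabla \times \mathbf{F},
\]
\[
  \nabla^{2} f \;=\; \nabla \cdot (\nabla f)
               \;=\; \frac{\partial^{2} f}{\partial x^{2}}
                   + \frac{\partial^{2} f}{\partial y^{2}}
                   + \frac{\partial^{2} f}{\partial z^{2}}.
\]
```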
If you experience any technical difficulties with this video or would like to make an accessibility-related request, please send a message to digicomm@uchicago.edu. Partha Niyogi Memorial Conference: "Laplacian Eigenfunctions Learn Population Structure". This conference is in honor of Partha Niyogi, the Louis Block Professor in Computer Science and Statistics at the University of Chicago. Partha lost his battle with cancer in October of 2010, at the age of 43. Partha made fundamental contributions to a variety of fields including language evolution, statistical inference, and speech recognition. The underlying themes of learning from observations and a rigorous basis for algorithms and models permeated his work.
If you experience any technical difficulties with this video or would like to make an accessibility-related request, please send a message to digicomm@uchicago.edu. Partha Niyogi Memorial Conference: "Toward Understanding Complex Data: Graph Laplacian on Singular Manifolds". This conference is in honor of Partha Niyogi, the Louis Block Professor in Computer Science and Statistics at the University of Chicago. Partha lost his battle with cancer in October of 2010, at the age of 43. Partha made fundamental contributions to a variety of fields including language evolution, statistical inference, and speech recognition. The underlying themes of learning from observations and a rigorous basis for algorithms and models permeated his work.
If you experience any technical difficulties with this video or would like to make an accessibility-related request, please send a message to digicomm@uchicago.edu. Partha Niyogi Memorial Conference: "Vector Diffusion Maps and the Connection Laplacian". This conference is in honor of Partha Niyogi, the Louis Block Professor in Computer Science and Statistics at the University of Chicago. Partha lost his battle with cancer in October of 2010, at the age of 43. Partha made fundamental contributions to a variety of fields including language evolution, statistical inference, and speech recognition. The underlying themes of learning from observations and a rigorous basis for algorithms and models permeated his work.
Salo, M (University of Helsinki) Tuesday 02 August 2011, 14:45-15:30
Vasy, A (Stanford University) Wednesday 03 August 2011, 12:00-12:45
Teplyaev, A (Connecticut) Tuesday 27 July 2010, 14:00-14:45
Levitin, M (Heriot-Watt) Thursday 12 April 2007, 11:30-12:30 Graph Models of Mesoscopic Systems, Wave-Guides and Nano-Structures
Grieser, D (Carl von Ossietzky, Oldenburg) Thursday 12 April 2007, 14:00-15:00 Graph Models of Mesoscopic Systems, Wave-Guides and Nano-Structures
Human observers were trained to criterion in classifying compound Gabor signals with symmetry relationships, and were then tested with each of 18 blob-only versions of the learning set. Generalization to dark-only and light-only blob versions of the learning signals, as well as to dark-and-light blob versions was found to be excellent, thus implying virtually perfect generalization of the ability to classify mirror-image signals. The hypothesis that the learning signals are internally represented in terms of a 'blob code' with explicit labelling of contrast polarities was tested by predicting observed generalization behaviour in terms of various types of signal representations (pixelwise, Laplacian pyramid, curvature pyramid, ON/OFF, local maxima of Laplacian and curvature operators) and a minimum-distance rule. Most representations could explain generalization for dark-only and light-only blob patterns but not for the high-thresholded versions thereof. This led to the proposal of a structure-oriented blob-code. Whether such a code could be used in conjunction with simple classifiers or should be transformed into a propositional scheme of representation operated upon by a rule-based classification process remains an open question.
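As a rough illustration of the modeling approach described here (a signal representation plus a minimum-distance rule), the sketch below builds a simple Laplacian pyramid and classifies a test pattern by its nearest neighbor in that representation; the stimuli are random stand-ins, not the Gabor/blob patterns from the study:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(image, levels=3, sigma=1.0):
    """Build a simple Laplacian pyramid: each level is the difference between the
    current image and a blurred, downsampled-then-upsampled copy of it."""
    pyramid, current = [], image.astype(float)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma)
        down = zoom(blurred, 0.5, order=1)
        up = zoom(down, np.array(current.shape) / np.array(down.shape), order=1)
        pyramid.append(current - up)    # band-pass ("blob") detail at this scale
        current = down
    pyramid.append(current)             # low-pass residual
    return pyramid

def features(image):
    return np.concatenate([level.ravel() for level in laplacian_pyramid(image)])

def minimum_distance_classify(test_image, learned_images, labels):
    """Minimum-distance rule: assign the label of the nearest learned signal
    in the chosen representation space."""
    f = features(test_image)
    dists = [np.linalg.norm(f - features(img)) for img in learned_images]
    return labels[int(np.argmin(dists))]

# Toy usage with random stand-in patterns.
rng = np.random.default_rng(6)
learning_set = [rng.standard_normal((32, 32)) for _ in range(4)]
labels = ["class_A", "class_A", "class_B", "class_B"]
test = learning_set[2] + 0.1 * rng.standard_normal((32, 32))
print(minimum_distance_classify(test, learning_set, labels))
```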