POPULARITY
- China, Advanced GPUs, Advanced AI - High Tech companies pursue government contracts - Neuromorphic chips, artificial fast neurons - Farewell 2024, thank you @HPCpodcast listeners. From HPC News Bytes – 20241230, first published on OrionX.net.
Neuromorphic computing is a powerful tool for identifying time-varying patterns, but it is often less effective than some AI-based techniques on more complex tasks. Researchers at the iCAS Lab, directed by Ramtin Zand at the University of South Carolina, are working on an NSF CAREER project to show how the capabilities of neuromorphic systems can be improved by blending them with specialized machine learning systems, without sacrificing their impressive energy efficiency. Using this approach, the team aims to show how gestures in American Sign Language could be instantly translated into written and spoken language.
Dr. Katie Schuman of the University of Tennessee explains the advantages of evolutionary approaches in neural processing to Dr. Sunny Bains of University College London. Discussion follows with Dr. Giulia D'Angelo from the Czech Technical University in Prague and Professor Ralph Etienne-Cummings of Johns Hopkins University.
Professor Atwood listens attentively and with a bit of disbelief as he learns about how the nefarious Professor P. met his ex-wife Julia.
UCL's Dr. Sunny Bains talks parallelism, neural-net efficiency and risk-taking with Caltech's Prof. Carver Mead. Now an emeritus professor, Mead has been instrumental in the development of chip design, worked closely with Noyce and Moore in the early days of what became Intel, and is one of the founders of the field of neuromorphic engineering. Discussion follows with Dr. Giulia D'Angelo from the Czech Technical University in Prague and Prof. Ralph Etienne-Cummings of Johns Hopkins University.
Today, you'll learn about the amazing transformative power of an out-of-body experience, how scientists want to use brain cells to do their computing, and a study that suggests eating cheese might make you live longer.
Out-Of-Body Experience
“Exploring the transformative potential of out-of-body experiences: A pathway to enhanced empathy.” by Marina Weiler, et al. 2024.
“Out of body experiences and their neural basis.” by Olaf Blanke. 2004.
Brain Cell Computing
“Open and remotely accessible Neuroplatform for research in wetware computing.” by Fred D. Jordan, et al. 2024.
“Neuromorphic wetware for artificial neural networks that overcome the limits of traditional computer hardware.” Innovation Toronto. 2023.
“How Many Joules Does My Surge Protector Need?” by Karenann Brow. 2024.
Cheese & Aging
“Eating cheese plays a role in healthy, happy aging - who are we to argue?” by Bronwyn Thompson. 2024.
“Mendelian randomization evidence for the causal effect of mental well-being on healthy aging.” by Chao-Jie Ye, et al. 2024.
Follow Curiosity Daily on your favorite podcast app to get smarter with Calli and Nate — for free! Still curious? Get exclusive science shows, nature documentaries, and more real-life entertainment on discovery+! Go to https://discoveryplus.com/curiosity to start your 7-day free trial. discovery+ is currently only available for US subscribers.
Clifford Mapp is the global head of ecosystem development and information security at Dynex, the world's only accessible neuromorphic quantum computing cloud for solving real-world problems at scale. Dynex is already supporting thousands of projects in health/pharma, research, AI/ML, architecture, aerospace, EVs, and fintech.
In this episode, we're exploring the backbone of AI – network fabrics. The network fabric is the backbone of the data centre, keeping everything together between the storage, compute, and users. It's much more than patch cables: it's a finely balanced, interconnected ecosystem. With the advent of AI, the demands on those network fabrics are changing, putting pressure on our compute resources as well as on our energy usage. So what can be done, and can AI help optimize itself? To find out more, we're joined by Puneet Sharma, director of Hewlett Packard Labs' Networking and Distributed Systems Lab.
This is Technology Now, a weekly show from Hewlett Packard Enterprise. Every week we look at a story that's been making headlines, take a look at the technology behind it, and explain why it matters to organizations and what we can learn from it.
Do you have a question for the expert? Ask it here using this Google form: https://forms.gle/8vzFNnPa94awARHMA
About this week's guest: https://www.linkedin.com/in/puneetsharma
Sources and statistics cited in this episode:
A16z report on data centre expenditure: https://a16z.com/navigating-the-high-cost-of-ai-compute/
Research and Markets report into data centre accelerators: https://www.researchandmarkets.com/reports/4804594/data-center-accelerators-global-strategic
Bio-engineering life for Mars: https://www.asimov.press/p/grow-mars
Data Bytes listeners get an exclusive discount to join Women in Data. View discount here.
(Intro 00:00:00)
Deep learning inspired by the human brain. (00:00:29)
Besa's leadership and analytics background. (00:01:06)
Excitement for Besa's interdisciplinary background. (00:01:28)
Besa's journey combining technology and human condition. (00:03:49)
AI's impact on healthcare discussed. (00:05:41)
Neuromorphic hardware and future technology. (00:07:22)
Modeling common sense and emotion in AI. (00:08:26)
Human uniqueness and AI creativity. (00:10:41)
AI in healthcare and elder care. (00:14:08)
Emotional AI and human attachments. (00:15:24)
AI in psychoanalysis and psychotherapy. (00:17:14)
Privacy risks with AI. (00:19:35)
AI's potential mental health pitfalls. (00:22:00)
Data protection laws for AI. (00:23:12)
Need for AI regulations. (00:25:30)
Educating regulators on AI. (00:26:50)
Importance of asking questions. (00:30:10)
Research interests in AI ethics. (00:32:31)
Advice: stay curious, ask questions. (00:33:22)
Encouragement to continue learning.
--- Support this podcast: https://podcasters.spotify.com/pod/show/women-in-data/support
In this special episode of the Brains and Machines podcast, Dr. Sunny Bains and Dr. Giulia D'Angelo talk to four early career researchers: Dr. Kenneth Stewart, a computer scientist at the U.S. Naval Research Laboratory in Washington DC; Dr. Laura Kriener, a postdoctoral researcher at The University of Bern in Switzerland; Jens Pedersen, a Ph.D. student at The Royal Institute of Technology (KTH) in Stockholm, Sweden; and Dr. Fabrizio Ottati, an AI/ML computer architect at NXP Semiconductors in Hamburg, Germany. They discuss learning rules for spiking neural networks, primitives for computations on neuromorphic hardware, and the benefits and drawbacks of neuromorphic engineering.
In this episode of From the Crow's Nest, host Ken Miller talks to author Dr. Joseph Guerci about the evolution and outlook for cognitive electronic warfare systems. Dr. Guerci is an internationally recognized leader in research and development of next-generation sensor and cognitive systems and has over 30 years of experience in government, industry and academia, including several years with DARPA. Ken asks Dr. Guerci about the relationship between artificial intelligence, deep learning and cognitive systems, as well as what “neuromorphic” chips are. The two explore the role and limitations of “digital engineering” in electronic warfare training. To learn more about today's topics or to stay updated on EMSO and EW developments, visit our homepage.
Mar 13, 2024 – Woody Preucil at 13D Research and Strategy delves into a pressing issue surrounding the explosive growth of artificial intelligence: its substantial power consumption. In today's FS Insider interview, Woody shares up-to-date...
In this episode of InTechnology, Camille gets into emerging technologies and telecommunications with Mischa Dohler, VP of Emerging Technologies at Ericsson. They talk about his research with 5G for the arts and healthcare, use cases for AI in telecommunications, how Mischa keeps up with so many emerging technologies, neuromorphic computing, quantum computing, and more. The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation.
In this episode of Brains and Machines, you'll hear Dr. Chiara Bartolozzi talk about how neuromorphic technology can be used to implement attention mechanisms, the importance of embodiment, and why we need a solid theory of how neural systems can work together to create intelligence.
A transformative shift in the way devices are perceived and interacted with is being brought about by AI neural network accelerators. BrainChip Inc. is a leading developer of such an accelerator, a technology that enables AI to be embedded into various devices, thereby facilitating intelligent functions.
AI neural network accelerators: A technological revolution
BrainChip's neural network accelerator is often compared to other AI technologies in the market, such as those developed by NVIDIA. The distinguishing factor for BrainChip is its architecture, which is based on neuromorphic principles, akin to the human brain. This means the accelerator can multitask and process multiple events simultaneously, much like the human brain. The accelerator is also noted for its minimal energy consumption, contributing to its high efficiency.
The end product for consumers is a device that requires less frequent charging, thanks to its neuromorphic architecture. This technology enables devices to perform multiple functions concurrently. Take, for example, a smart cabin in a vehicle, where the accelerator can recognize the driver, gestures, and voice. This level of recognition and interaction is made possible by BrainChip's technology.
AI still has room to grow
However, adoption of this technology is still underway. The development and refinement of AI technologies take time, as they need to achieve a level of accuracy that consumers can rely on daily. It is the combination of inputting correct data and understanding it that enables the development of AI-based applications.
Excitement about the future of AI, and consumer demand for more intelligent devices, is a constant. At CES 2024, AI was a prevalent theme, with numerous companies and individuals focusing on delivering intelligent solutions. Whether through software, hardware, or embedded intelligence, AI is transforming industries and products.
Rob Telson, Vice President of Ecosystem and Partnerships at BrainChip Inc., acknowledges that AI has only been around for a relatively short period of time, with significant advancements occurring in the past three years. He specifically mentions GPT, or Generative Pre-trained Transformer, a language model that has gained attention for its ability to generate human-like text. Despite these advancements, Telson emphasizes that truly intelligent devices that can think at the level of a human are still three to seven years away.
Of course, the big question in AI is whether AI will ever surpass the human brain in intelligence. Telson firmly states that AI will not achieve the same level of thinking as the human brain. He acknowledges that AI may be able to think ahead of humans in certain areas, but it still lacks the efficiency, rationality, and logic that the human brain possesses.
However, Telson also believes that AI has the potential to bring significant benefits if used correctly. He stresses the importance of playing the game right and ensuring that AI is used in a way that aligns with human thinking. While AI may currently make decisions that are not in line with human thinking, Telson suggests that with further refinement, AI may eventually reach a point where it can match human thinking.
Conclusion: BrainChip is on the bleeding edge of AI processing
AI neural network accelerators, such as the one developed by BrainChip Inc., are revolutionizing the capabilities of devices. By embedding AI into devices, they can perform multiple functions simultaneously, akin to the human brain. This technology holds the potential to enhance various industries and create more intelligent and interactive devices for consumers.
Interview by Don Baine, The Gadget Professor.
In this episode, we're taking a look at how the explosion in our demand for data storage has led to needing more capacity than ever before, and whether long-vanished ideas from our computing past could influence technological innovation in the future.
In 2022 the world generated 97 zettabytes of data. It has been predicted that, by 2025, that number will almost have doubled to 181 zettabytes. At the rate generative AI and machine learning are expanding, the figure could be even higher.
As the head of the Hewlett Packard Enterprise storage division, Senior Vice President Patrick Osborne has storage at the forefront of everything he does. He sees just how much his customers' needs are growing every year and is always actively looking for new methods and fabrics to meet those needs.
Alongside those requirements for greater data storage also sits the need for faster data processing, and there are a number of technologies nearing maturity which could revolutionise the space. Aidong Xu is Head of Semiconductor Capability at Cambridge Consultants, and is keeping a close eye on these technologies, especially in the memory space. For him, the big challenge is combining performance with efficiency.
However, whilst we're looking at the future of data storage, it's hard not to draw parallels with the past. Colin Eby from the National Museum of Computing knows a thing or two about that, guiding us through the history of the storage technologies which marked our pathway to today, some of which, in the decades since they fell out of favour, may have come round once more.
But what if the future of data storage isn't digital at all, but something more organic? Mark Bathe is a professor of biological engineering at MIT, specialising in DNA storage and what it can mean for the future of our digital archiving needs.
Sources and statistics cited in this episode:
Zettabytes usage - https://www.statista.com/statistics/871513/worldwide-data-created/
Sales of storage units - https://www.statista.com/forecasts/1251240/worldwide-storage-unit-sales-volume
Hard drive shipment figures - https://www.statista.com/statistics/398951/global-shipment-figures-for-hard-disk-drives/
Random access DNA memory using Boolean search in an archival file storage system - https://www.nature.com/articles/s41563-021-01021-3
Save Souls with an OfGod Tshirt: https://sjwellfire.com/shop/ Join our newsletter: https://sjwellfire.com/ Gab: https://gab.com/sjwellfire Support us to save souls via the news: https://sjwellfire.com/support/ or scott@sjwellfire.com paypal Prepare: https://sjwellfire.com/partners/
This vidcast is about the MOTB infrastructure. How does X tie into this mind control, optogenetics, soul-trap system? What about AI, the fake little god that is running on real brain tissue? Artificial intelligence is starting to become a little less artificial, by using actual chunks of human brain tissue to improve what it's capable of. Researchers at Indiana University Bloomington in the US have built brain-infused computers named 'Brainoware' following a first-of-its-kind experiment. And it's thought the new creation may even one day be capable of operating without any human supervision at all. Will the Anti-Christ have a fake resurrection via the AI quantum computer that ties to other dimensions?
You Don't Need a Chip for the MOTB, Implied by Elon and Others: “Like, essentially, through the jugular vein, going through your neck. So it doesn't involve, like, drilling holes in your skull. It's, like, basically an angiogram (blood x-ray). Because the thing about putting things in your veins and arteries is that's how you are on a regular basis. You know, when you go to the hospital and they put a stent in your heart, how do they do that? They don't, like, crack open your chest and, like, get a hammer and chisel and, like, put in a stent. They go through your veins and your arteries, because that's a natural highway to all parts of your body. So if you can do that for the heart, and, you know, they're starting to do it for some other things, like, why not the brain?“ Elon Musk
Beast Tech is Here Defiling the Temple of God
Neuromorphic computing is an approach to computing that is inspired by the structure and function of the human brain. A neuromorphic computer/chip is any device that uses physical artificial neurons to do computations. Neuromorphic computing could completely transform everything about the technology industry, from programming languages to hardware. Neuralink aims to develop brain-computer interfaces (BCIs) that can connect humans and machines. Neuralink's devices consist of implantable electrodes that can record and stimulate brain activity, and a wireless module that can transmit data to external devices. Neuralink's goal is to enable humans to interact with artificial intelligence and enhance their cognitive abilities. Think X and the payment system / hive-mind apparatus. Nanotechnology is the manipulation of matter at the atomic or molecular scale. Nanotechnology can be used to create new materials, devices, and systems with novel properties and functions, with applications in fields such as medicine, electronics, energy, and biotechnology. Seeds of Men will not Cleave. A wireless body area network (WBAN) is a type of wireless network that uses wearable or implantable devices to monitor and communicate a person's physiological data. WBAN could be used for the beast system. Last: was Solomon's temple, which he built for God, really a human form from a bird's-eye view?
In this episode of the Brains and Machines podcast, Dr. Giulia D'Angelo from the Italian Institute of Technology interviews her IIT colleague, Dr. Simeon Bamford, who is currently working on tactile neuromorphic sensors. They talk about creating circuits to perform functions lost to brain damage, Bamford's involvement with the commercialization of dynamic vision sensors, and his latest research on robotic touch. Discussion follows with Dr. Sunny Bains of University College London, and Prof. Ralph Etienne-Cummings of Johns Hopkins University.
#AIEvolution #DrRogeneWest #NeuromorphicTech #ChatGPT #BrainInspiredAI #EmpathyChatbots #NeuronNetworks #EEGAnalysis #SuisunCityTalks #AIandEmotion Join Dr. Rogene Eighler West as she sheds light on the significant advancements in artificial intelligence since our last encounter at Suisun City, California. In this comprehensive discussion, she focuses on: Neuromorphic Innovations - Journey through the cutting-edge world of brain-inspired hardware. Discover the latest neuromorphic chips that emulate cortex structures. Empathy-Driven Chatbots - A deep dive into AI's rapidly developing ability to discern emotional subtleties, with a special nod to the groundbreaking research from the University of Washington. Brain Analysis via Artificial Neuron Networks - The emerging trend of employing neuron networks to analyze EEG data, emphasizing the critical interplay between tech specialists and clinical domain experts. For an in-depth look at this intriguing topic, don't miss our highlighted video: https://youtu.be/RnLU0PyHUSU. Stay informed about the fascinating intersection of technology, neuroscience, and human emotion in the modern AI domain. --- Send in a voice message: https://podcasters.spotify.com/pod/show/neuronoodle/message Support this podcast: https://podcasters.spotify.com/pod/show/neuronoodle/support
In this episode of Brains and Machines, EE Times regular Sunny Bains talks to Elisabetta Chicca, head of the bio-inspired Circuits and Systems research group at the University of Groningen, about building neural chips with memristors, adding electronic brains to neural robots, some of the current difficulties with learning algorithms for spiking systems and more. Discussion follows with Giulia D'Angelo from the Italian Institute of Technology and Ralph Etienne-Cummings from Johns Hopkins University.
In this first episode of the new Brains and Machines podcast, EE Times regular Sunny Bains interviews André van Schaik from Western Sydney University about how neuromorphic engineering has changed since the early '90s, a new project to help simulate neural and neuromorphic models, and more. Discussion follows with Giulia D'Angelo from the Italian Institute of Technology and Ralph Etienne-Cummings from Johns Hopkins University.
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.07.25.550525v1?rss=1 Authors: Stuck, M., Naud, R. Abstract: The need for energy-efficient solutions in Deep Neural Network (DNN) applications has led to a growing interest in Spiking Neural Networks (SNNs) implemented in neuromorphic hardware. The Burstprop algorithm enables online and local learning in hierarchical networks, and therefore can potentially be implemented in neuromorphic hardware. This work presents an adaptation of the algorithm for training hierarchical SNNs on MNIST. Our implementation requires an order of magnitude fewer neurons than previous ones. While Burstprop outperforms spike-timing-dependent plasticity (STDP), it falls short of training with backpropagation through time (BPTT). This work establishes a foundation for further improvements in the Burstprop algorithm; developing such algorithms is essential for achieving energy-efficient machine learning in neuromorphic hardware. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
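For readers unfamiliar with the building blocks the abstract refers to, here is a minimal sketch of a leaky integrate-and-fire neuron, the kind of unit hierarchical spiking networks are typically built from. This is an illustrative toy, not the paper's Burstprop implementation; all constants and names are assumptions.

```python
import numpy as np

def lif_neuron(input_current, v_thresh=1.0, v_rest=0.0, tau=20.0, dt=1.0):
    """Toy leaky integrate-and-fire neuron; returns a binary spike train."""
    v = v_rest
    spikes = np.zeros(len(input_current))
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest while integrating input.
        v += (dt / tau) * (v_rest - v) + i_in * dt
        if v >= v_thresh:      # threshold crossing -> emit a spike
            spikes[t] = 1.0
            v = v_rest         # hard reset after spiking
    return spikes

rng = np.random.default_rng(0)
print(int(lif_neuron(rng.uniform(0, 0.15, 200)).sum()), "spikes in 200 steps")
```

Networks of such units communicate only through spike events, which is what makes local learning rules like the ones discussed in the abstract attractive for neuromorphic hardware.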
Today's guest is Sarah Hamburg – a cognitive neuroscientist and researcher currently working as a post-doctoral Neuromorphic AI Engineer in Developmental Robotics. Sarah is also a DeSci (decentralised science) advocate and organiser, heavily involved in the decentralised science space since early 2021. After working as a core member of an Open Science DAO, she published a letter in Nature to increase awareness of DeSci and was then commissioned to write a DeSci "explainer" article for a16z's Future magazine. In 2022 she cofounded a "web3" consultancy which worked with UK Aid on blockchain for international development.
In this conversation, Sarah and I discuss her work and research, what neuroscience and consciousness are, and why neuromorphic computing is such an exciting field to be working in. We also bond over a shared chronic pain condition called fibromyalgia, and I open up about the time as a kid when I nearly drowned and what I saw during that near-death experience. This interview is a little different from most of the others I do, but it was thoroughly enjoyable and I am so grateful to Sarah for exploring these truly fascinating topics together. I am sure you will enjoy it too!
Sarah on Twitter / Research
Danielle on Twitter @daniellenewnham and Instagram @daniellenewnham / Newsletter
Mentioned in this episode:
The Case Against Reality: How Evolution Hid The Truth From Our Eyes by Donald D. Hoffman
Reframing Fibromyalgia by Sarah Hamburg
Brain Activity Detected in Dying
A Guide to DeSci, The Latest Web3 Movement in a16z Future magazine
Call to Join the DeSci Movement in Nature
Link to bioRxiv paper: http://biorxiv.org/cgi/content/short/2023.07.06.548044v1?rss=1 Authors: Zhang, Z., Xu, Z., McGuire, H., Essam, C., Nicholson, A., Hamilton, T. J., Li, J., Eshraghian, J. K., Yong, K.-T., Vigolo, D., Kavehei, O. Abstract: Flow cytometry is a widespread and high-throughput technology that can measure the features of cells and can be combined with fluorescence analysis for additional phenotypical characterisation, but it provides only low-dimensional output and limited spatial resolution. Imaging flow cytometry offers rich spatial information, allowing deeper insight into single-cell analysis. However, offering such high-resolution, full-frame feedback can compromise speed, and this trade-off has become a significant challenge to tackle during development. In addition, the dynamic range offered by conventional photosensors can capture only limited fluorescence signals, exacerbating the difficulty of increasing processing speed. Neuromorphic photo-sensing architectures focus on events of interest via individually firing pixels to reduce data redundancy and provide low latency in data processing. With their inherently high dynamic range, such architectures have the potential to drastically elevate throughput by incorporating motion-activated spatial resolution. Herein, we present an early demonstration of neuromorphic cytometry, implementing object counting and size estimation to measure 8 μm and 15 μm polystyrene-based microparticles and a human monocytic cell line (THP-1). Our platform achieved highly consistent outputs with a widely adopted flow cytometer (CytoFLEX) in detecting the total number and size of the microparticles. Although the current platform cannot deliver multiparametric measurements on cells, future endeavours will add further functionality and increase the measurement parameters (granularity, cell condition, fluorescence analysis) to enrich cell interpretation. Copyright belongs to the original authors. Visit the link for more info. Podcast created by Paper Player, LLC
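To illustrate the "individually firing pixels" idea behind the neuromorphic photosensor described above, here is a hedged sketch of generic event-based sensing, where a pixel reports only when its log-intensity changes by more than a contrast threshold. It is a toy model, not the authors' platform; the threshold and frame data are made up.

```python
import numpy as np

def events_from_frames(frames, threshold=0.2):
    """Emit (time, row, col, polarity) events at pixels whose log-intensity
    changes by more than the contrast threshold -- no full frames are sent."""
    ref = np.log1p(frames[0].astype(float))
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        cur = np.log1p(frame.astype(float))
        diff = cur - ref
        for y, x in zip(*np.nonzero(np.abs(diff) >= threshold)):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
            ref[y, x] = cur[y, x]  # reset reference only where events fired
    return events

frames = np.zeros((3, 4, 4), dtype=np.uint8)
frames[1, 1, 1] = 200                # a bright particle crosses one pixel
print(events_from_frames(frames))    # ON event at t=1, OFF event at t=2
```

Because only changed pixels produce output, a fast-moving particle generates a sparse event stream rather than full frames, which is the data-redundancy reduction the abstract exploits.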
On this episode of the AI For All Podcast, Nandan Nayampally, CMO at BrainChip, joins Ryan Chacon and Neil Sahota to discuss edge AI, neuromorphic computing, and AI hardware. They talk about the benefits and trade-offs of edge AI, neuromorphic chips and computing, the economics of hardware, top AI applications, AI in health, and future opportunities in AI. Nandan Nayampally is an entrepreneurial executive with more than 25 years of success in building or growing technology businesses with industry-wide impact. He was at Arm for more than 15 years in a variety of product marketing and product management leadership roles, eventually becoming vice president and general manager of Arm's signature CPU group and the Client Line of Business, where he identified key technology investments and developed a strategy and roadmap of products to deliver compelling, market-leading solutions for billions of SoCs, while establishing strategic partnerships and alliances. Nayampally comes to BrainChip from Amazon, where he helped accelerate the adoption of Alexa Voice and other multimodal services into third-party devices. BrainChip is the worldwide leader in edge AI on-chip processing and learning. The company's first-to-market neuromorphic processor, Akida™, mimics the human brain to analyze only essential sensor inputs at the point of acquisition, processing data with unparalleled efficiency, precision, and economy of energy. Keeping machine learning local to the chip, independent of the cloud, also dramatically reduces latency while improving privacy and data security. In enabling effective edge compute to be universally deployable across real-world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI, close to the sensor, is the future, for its customers' products as well as the planet. More about BrainChip: https://brainchip.com Connect with Nandan: https://www.linkedin.com/in/nandannayampally/
Key Questions and Topics from This Episode:
(00:00) Intro to the AI For All Podcast
(01:13) Intro to Nandan Nayampally and BrainChip
(01:25) What is edge AI?
(04:09) Benefits of edge AI
(06:25) Edge AI trade-offs
(08:23) Is hardware the next frontier?
(10:38) Investment in neuromorphic chips
(13:31) What is neuromorphic computing?
(15:32) Impact and economics of hardware
(18:48) Long-term benefits for business
(21:12) Implications of edge AI and decentralization
(24:58) Top AI applications
(26:16) AI in health
(30:28) Data sources for enterprise AI
(34:16) Future opportunities in AI
(36:28) Learn more about BrainChip
Subscribe on YouTube: https://bit.ly/43dYQV9
Join Our Newsletter: https://ai-forall.com
Follow Us on Twitter: https://twitter.com/_aiforall
In this episode of Embracing Digital Transformation, Dr. Pamela Follett, a neurologist and co-founder of Lewis Rhodes Labs, shares her background and expertise in the field of neurology, specifically with regard to research on the developing brain in early childhood. Video: TBD Blog: https://www.embracingdigital.org/episode-EDT141
Axel Hoffmann is a Professor of Material Science and Engineering at the University of Illinois at Urbana-Champaign. In this conversation, Dr. Hoffmann shares his upbringing in South Germany and his fascination with flying, providing a glimpse into his formative years. He reflects on his undergraduate life and his journey to the United States, emphasizing the role of luck in shaping his academic path. Dr. Hoffmann's expertise in magnetism-related subjects becomes evident as he discusses antiferromagnets, memory devices, and the exciting advancements in the field. Along the way, he provides invaluable advice for young students embarking on their academic journeys. Prior to joining UIUC, Hoffmann spent an impressive 18 years at Argonne National Laboratory, where he made significant contributions as a Material Scientist. In 2019, he made the transition to the University of Illinois as a Founder Professor, further enriching the academic community with his wealth of knowledge and experience. This is The UIUC Talkshow.
EPISODE LINKS:
Axel Hoffmann's UIUC Website: https://matse.illinois.edu/people/profile/axelh
Axel Hoffmann's Research Group: https://hoffmann.matse.illinois.edu/
OUTLINE:
0:00 - Introduction
0:34 - Growing up in South Germany
3:52 - Flying
6:32 - Life as an Undergraduate
9:24 - United States
13:12 - Luck
16:14 - Academia
19:51 - Europe
21:51 - Tennis
24:43 - Magnetism and Data Storage
30:50 - Computation and Energy Consumption
33:30 - Neuromorphic Computing
40:23 - Birds
45:45 - Magnetic Fields
48:55 - Gravity & Magnetism
51:53 - Fundamental Theory of Electricity and Magnetism (E&M)
56:02 - Transportation
1:04:01 - Electrons
1:07:56 - Antiferromagnets
1:14:52 - Memory Devices
1:16:45 - Advice for Young Students
In this podcast episode, Sally speaks to SynSense's Dylan Muir about the company's design win for its neuromorphic processor in a toy robot that recognises gestures, the company's two processors and the differences between them, as well as the synergies between dynamic vision sensor cameras and neuromorphic processors.
YouTube Link: https://www.youtube.com/watch?v=27zHyw_oHSI Ben Goertzel is a computer scientist, mathematician, and entrepreneur. His work focuses on AGI, which aims to create truly intelligent machines that can learn, reason, and think like humans. This episode has been released early in an ad-free audio version for TOE members at http://theoriesofeverything.org. Sponsors: - Brilliant: https://brilliant.org/TOE for 20% off - *New* TOE Website (early access to episodes): https://theoriesofeverything.org/ - Patreon: https://patreon.com/curtjaimungal - Crypto: https://tinyurl.com/cryptoTOE - PayPal: https://tinyurl.com/paypalTOE - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast... - Pandora: https://pdora.co/33b9lfP - Spotify: https://open.spotify.com/show/4gL14b9... - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeveryt... LINKS MENTIONED: Center for future mind (FAU): https://www.fau.edu/future-mind/ Wolfram talk from Mindfest https://youtu.be/xHPQ_oSsJgg Singularity Net https://singularitynet.io/ TIMESTAMPS: 00:00:00 Introduction 00:02:37 How to make machines that think like people 00:10:03 GPT will make 95% of jobs obsolete 00:18:59 The 5-year Turing test 00:21:37 Definition of "intelligence" doesn't matter 00:26:15 Mathematical definition of self-transcendence 00:30:10 The 3 routes to AGI 00:44:19 Unfolding AI with Galois connections 00:49:32 Neuromorphic chips, hybrid architectures, and future hardware 00:54:05 Super AGI will overshadow humanity 00:56:33 Infinity groupoid 01:01:52 There are no limitations to AI development 01:05:00 Social intelligence is independent in OpenCog Hyperon systems 01:07:33 Embodied collaboration is fundamental to human intelligence 01:08:49 Algorithmic information theory and the Robot College Test
Biological brains can accomplish more than modern computing systems while using much less power. However, computers are much better at raw computation, while brains are (unsurprisingly) much better at interacting with ever-changing environments. What if we could design a computing system with interconnected neurons and synapses to get the best of both worlds? In today's episode, we welcome Dr. Jean Anne Incorvia. She is an Assistant Professor at the University of Texas at Austin and a leading expert in research on neuromorphic computing systems. With her, we discuss:
Jeffrey Krichmar On How Neuromorphic Chips Could Be A Game-Changer For Saving Robots' Energy Consumption by Marwa ElDiwiny
Our guest in this episode is Rodolphe Sepulchre, Professor of Engineering at KU Leuven in the Department of Electrical Engineering (STADIUS) and at the University of Cambridge in the Department of Engineering (Control Group). We dive into Rodolphe's scientific journey across nonlinear control, neuroscience and optimization on manifolds through the unifying lens of control theory.
Outline
- 00:00 - Intro
- 03:54 - Why control?
- 11:08 - Spiking control systems
- 20:47 - The mixed feedback principle
- 23:52 - On thermodynamics
- 25:17 - Event-based systems
- 29:33 - On dissipativity theory
- 48:00 - Stability, positivity and monotonicity
- 55:00 - Control, cybernetics and neuroscience
- 59:10 - Neuromorphic control principles
- 01:00:01 - Optimization on manifolds
- 01:05:01 - Influential figures
- 01:08:52 - On the future of control
- 01:12:35 - Advice to future students
- 01:15:01 - About creativity
- 01:20:35 - Outro
Episode links
- Rodolphe's lab: https://tinyurl.com/yc4bubyy
- IEEE CSM editorials: https://tinyurl.com/2bhch6w3
- Spiking control systems: https://tinyurl.com/3x6pwm9m
- O. Pamuk: https://tinyurl.com/4akzyk37
- Event based control: https://tinyurl.com/5apuh5kw
- A simple neuron servo: https://tinyurl.com/4pjnkx5u
- C. Mead: https://tinyurl.com/mr29xta9
- L. Chua: https://tinyurl.com/5n935ssp
- Inventing the negative feedback amplifier: https://tinyurl.com/4573rv2d
- Hodgkin-Huxley model: https://tinyurl.com/mr46cv79
- R. Ashby: https://tinyurl.com/45jrp6hw
- G. J. Minty: https://tinyurl.com/4u4v22ue
- J. C. Willems: https://tinyurl.com/3zthcxc2
- P. Kokotovic: https://tinyurl.com/mrymffch
- Wholeness and the Implicate Order: https://tinyurl.com/yckpnybp
Podcast info
Podcast website: https://www.incontrolpodcast.com/
Apple Podcasts: https://tinyurl.com/5n84j85j
Spotify: https://tinyurl.com/4rwztj3c
RSS: https://tinyurl.com/yc2fcv4y
Youtube: https://tinyurl.com/bdbvhsj6
Facebook: https://tinyurl.com/3z24yr43
Twitter: https://twitter.com/IncontrolP
Instagram: https://tinyurl.com/35cu4kr4
Acknowledgments and sponsors
This episode was supported by the National Centre of Competence in Research on «Dependable, ubiquitous automation» and the IFAC Activity fund. The podcast benefits from the help of an incredibly talented and passionate team. Special thanks to B. Seward, E. Cahard, F. Banis, F. Dörfler, J. Lygeros, as well as the ETH and mirrorlake studios. Music was composed by A New Element. Support the show
It's a new year of listening and learning. We're launching 2023 with the fascinating, emerging field of neuromorphic engineering, the development of biologically inspired technology that emulates the human brain. Neuromorphic systems use spiking neural networks to retain “memories,” like the human brain, making computer processing faster, more accurate and more efficient. Potential applications for neuromorphic technologies are limitless and could cover a range of industries, including aerospace, space science, automotive, smart devices and more. Listen now as SwRI Engineer and Neuroscientist Dr. Steven Harbour, neuromorphic engineering expert, explains how the technology works, why the brain is a superior computing model and what the future holds for neuromorphic developments.
Mark Hersam "Hybrid Hard And Soft Nanoscale Materials, Neuromorphic Computing" by Marwa ElDiwiny
Clip: Mark Hersam "Hybrid Hard And Soft Nanoscale Materials, Neuromorphic Computing" by Marwa ElDiwiny
#toctw #artificialintelligence #deeplearning #neuromorphic #neuroscience #podcast #innatera Dr. Sumeet Kumar is a seasoned technologist with a background in leading strategic R&D efforts to identify and bridge technology gaps in industry roadmaps. He has held product engineering positions at Indrion Technologies, where he was responsible for developing ultra-low power processors for IoT nodes, and at #intel where he worked on building design tools used to create powerful media processors in Intel Atom and Core-series SoCs. Sumeet also held leadership positions at the Delft University of Technology, where he conceived, launched, and led multiple EU-funded R&D programs worth over €100M in close collaboration with European industry (BMW, Maserati, Scania, Infineon, NXP, Bosch…), to develop critical perception technologies for highly automated vehicles. Sumeet holds a BE in Electronics and Communications Engineering from the CMR Institute of Technology, Bangalore, India, an MSc. in Microelectronics, and a Ph.D. in Microelectronics and Computer Engineering from the Delft University of Technology, The Netherlands. Sumeet is a co-founder of Innatera nanosystems and serves as its Chief Executive Officer. https://www.innatera.com https://nl.linkedin.com/in/sumeetskumar http://www.sumeetkumar.net/
Hey everybody, this is Chris Brandt. Last week, Sandesh Patel and I headed out to New York to attend the HPC + AI Wall Street high performance computing conference, and I am here to break it down for you.
So, like I mentioned, Sandesh and I went to the HPC + AI Wall Street conference, where friend of the show Ryan Quick was a featured speaker. There were a bunch of sessions focusing on everything from quantum computing and neuromorphic computing to artificial intelligence and big data. The show was run by Tabor Communications, the publisher of HPCwire, Datanami, and EnterpriseAI, to name a few.
This is a show with a limited audience, so it is interesting to see how small a community it is right now. Most of the people in this space know each other, and there is a lot of back and forth. Much of the discussion focuses on fairly cutting-edge and exotic tech, so I thought it was a ton of fun. There was even a great session on the ethics surrounding AI.
A lot of the vendors in the space sponsored the event and were there to talk about what they have for HPC. I was able to chat with a few of them and I wanted to share those interviews with you.
Let's start with Run:AI. They have a scheduler that is getting a lot of attention, but let me have them tell it in their own words.
Next up, we got a chance to catch up with MemVerge. You might remember MemVerge from episode #44, where we talked with MemVerge founder and CEO Charles Fan.
Last, here is friend of the show Ryan Quick, who not only hosted a couple of sessions but was there to help promote Data Vortex, a new networking stack designed around extremely high performance.
Overall, this was a small but great show that will only be growing over the years. If you are looking into high performance computing, I suggest you check the next one out.
As always, thanks for watching. I would love to hear from you in the comments, please click the subscribe button, and I will see you in the next one.
FUTR.tv focuses on startups, innovation, culture and the business of emerging tech with weekly podcasts featuring Chris Brandt and Sandesh Patel talking with industry leaders and deep thinkers.
Occasionally we share links to products we use. As an Amazon Associate we earn from qualifying purchases on Amazon.
In this episode, co-hosts Calum Chace and David Wood continue their review of progress in AI, taking up the story at the 2012 "Big Bang".
00.05: Introduction: exponential impact, big bangs, jolts, and jerks
00.45: What enabled the Big Bang
01.25: Moore's Law
02.05: Moore's Law has always evolved since its inception in 1965
03.08: Intel's tick-tock becomes tic-tac-toe
03.49: GPUs - Graphics Processing Units
04.29: TPUs - Tensor Processing Units
04.46: Moore's Law is not dead or dying
05.10: 3D chips
05.32: Memristors
05.54: Neuromorphic chips
06.48: Quantum computing
08.18: The astonishing effect of exponential growth
09.08: We have seen this effect in computing already. The cost of an iPhone in the 1950s.
09.42: Exponential growth can't continue forever, but Moore's Law hasn't reached any theoretical limits
10.33: Reasons why Moore's Law might end: too small, too expensive, not worthwhile
11.20: Counter-arguments
12.01: "Plenty more room at the bottom"
12.56: Software and algorithms can help keep Moore's Law going
14.15: Using AI to improve chip design
14.40: Data is critical
15.00: ImageNet, Fei-Fei Li, Amazon Mechanical Turk
16.10: AIs labelling data
16.35: The Big Bang
17.00: Jürgen Schmidhuber challenges the narrative
17.41: The Big Bang enabled AI to make money
18.24: 2015 and the Great Robot Freak-Out
18.43: Progress in many domains, especially natural language processing
19.44: Machine Learning and Deep Learning
20.25: Boiling the ocean vs the scientific method's hypothesis-driven approach
21.15: Deep Learning: levels
21.57: How Deep Learning systems recognise faces
22.48: Supervised, Unsupervised, and Reinforcement Learning
24.00: Variants, including Deep Reinforcement Learning and Self-Supervised Learning
24.30: Yann LeCun's camera metaphor for Deep Learning
26.05: Lack of transparency is a concern
27.45: Explainable AI. Is it achievable?
29.00: Other AI problems
29.17: Has another Big Bang taken place? Large Language Models like GPT-3
30.08: Few-shot learning and transfer learning
30.40: Escaping Uncanny Valley
31.50: Gato and partially general AI
Music: Spike Protein, by Koi Discovery, available under CC0 1.0 Public Domain Declaration
For more about the podcast hosts, see https://calumchace.com/ and https://dw2blog.com/
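As a back-of-the-envelope illustration of the exponential effect the episode discusses, the sketch below compounds the classic Moore's Law doubling. The 18-month doubling period and the year counts are the usual textbook figures, not numbers taken from the episode.

```python
# Illustrative arithmetic: transistor counts doubling every ~18 months.
def doublings(years, period_months=18):
    return years * 12 / period_months

for years in (10, 30, 57):  # 57 years spans 1965 (Moore's paper) to 2022
    n = doublings(years)
    print(f"{years} years -> {n:.0f} doublings -> x{2 ** round(n):,}")
```

Over 57 years that compounds to a factor of roughly 2.7 × 10¹¹, which is why a 1950s-era machine with a modern phone's capability would have been absurdly large and expensive.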
Check out my short video series about what's missing in AI and neuroscience. Support the show to get full episodes and join the Discord community. Rodolphe Sepulchre is a control engineer and theorist at Cambridge University. He focuses on applying feedback control engineering principles to build circuits that model neurons and neuronal circuits. We discuss his work on mixed feedback control - positive and negative - as an underlying principle of the brain's mixed digital and analog signals, the role of neuromodulation as a controller, applying these principles to Eve Marder's lobster/crab neural circuits, building mixed-feedback neuromorphics, some feedback control history, and how "If you wish to contribute original work, be prepared to face loneliness," among other topics. Rodolphe's website.
Related papers:
Spiking Control Systems.
Control Across Scales by Positive and Negative Feedback.
Neuromorphic control. (arXiv version)
Related episodes:
BI 130 Eve Marder: Modulation of Networks
BI 119 Henry Yin: The Crisis in Neuroscience
0:00 - Intro
4:38 - Control engineer
9:52 - Control vs. dynamical systems
13:34 - Building vs. understanding
17:38 - Mixed feedback signals
26:00 - Robustness
28:28 - Eve Marder
32:00 - Loneliness
37:35 - Across levels
44:04 - Neuromorphics and neuromodulation
52:15 - Barrier to adopting neuromorphics
54:40 - Deep learning influence
58:04 - Beyond energy efficiency
1:02:02 - Deep learning for neuro
1:14:15 - Role of philosophy
1:16:43 - Doing it right
Is neuromorphic computing the only way we can actually achieve general artificial intelligence? Very likely yes, according to Gordon Wilson, CEO of Rain Neuromorphics, who is trying to recreate the human brain in hardware and "give machines all of the capabilities that we recognize in ourselves." Rain Neuromorphics has built a neuromorphic chip that is analog: rather than simulating neural networks, it is a neural network in analog hardware, a physical collection of neurons and synapses rather than an abstraction of them. That means no ones and zeroes of traditional computing, but voltages and currents that represent the mathematical operations you want to perform. Right now it's 1000X more energy efficient than existing neural networks, Wilson says, because it doesn't have to spend all those computing cycles simulating the brain. The circuit is the neural network, which leads to extraordinary gains in both speed and power, according to Wilson.
Links:
Rain Neuromorphics: https://rain.ai
Episode sponsor: SMRT1 https://smrt1.ca/
Support TechFirst with $SMRT coins: https://rally.io/creator/SMRT/
Buy $SMRT to join a community focused on tech for good: the emerging world of smart matter. Access my private Slack, get your name in my book, suggest speakers for TechFirst ... and support my work.
TechFirst transcripts: https://johnkoetsier.com/category/tech-first/
Forbes columns: https://www.forbes.com/sites/johnkoetsier/
Full videos: https://www.youtube.com/c/johnkoetsier?sub_confirmation=1
Keep in touch: https://twitter.com/johnkoetsier
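To make "the circuit is the neural network" concrete, here is a hedged numerical sketch of the analog multiply-accumulate that resistive hardware performs implicitly: weights stored as conductances, inputs applied as voltages, outputs read as currents. The sizes and values are illustrative assumptions, not Rain Neuromorphics' actual design.

```python
import numpy as np

# In a resistive crossbar, Ohm's law does each multiply (I = G * V) and
# Kirchhoff's current law sums the currents down each column -- a whole
# matrix-vector product happens in one physical step, with no clock
# cycles spent "simulating" the network.
rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances: 4 inputs x 3 outputs
V = np.array([0.2, 0.0, 0.5, 0.1])        # input voltages

I = V @ G       # output currents, one per column
print(I)
```

The energy argument follows from the same picture: the physics computes the sum as a side effect of current flow, instead of fetching weights and accumulating partial products digitally.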
Technovation with Peter High (CIO, CTO, CDO, CXO Interviews)
620: Mike Davies discusses the relevance and impact of neuromorphic computing on Intel as a company and the world more generally. Mike gives a bird's-eye view of what neuromorphic computing aims to achieve and the relationship it has to traditional forms of artificial intelligence and deep learning. He breaks down the near-, medium-, and long-term implications the technology could have on how we think about computing capabilities and the technology and cost challenges that his team is working to overcome. A key component of commercializing this technology is the collaboration with partners, and Mike spends a moment discussing how he looks to curate an ecosystem of academics and corporations to help deliver this technology as well as how he collaborates internally with other groups within Intel Labs.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Jeff Hawkins on neuromorphic AGI within 20 years, published by Steven Byrnes on LessWrong.
I just listened to the AI podcast: Jeff Hawkins on the Thousand Brain Theory of Intelligence, and read some of the related papers. Jeff Hawkins is a theoretical neuroscientist; you may have heard of his 2004 book On Intelligence. Earlier, he had an illustrious career in EECS, including inventing the Palm Pilot. He now runs the company Numenta, which is dedicated to understanding how the human brain works (especially the neocortex), and using that knowledge to develop bio-inspired AI algorithms. In no particular order, here are some highlights and commentary from the podcast and associated papers.
Every part of the neocortex is running the same algorithm
The neocortex is the outermost and most evolutionarily-recent layer of the mammalian brain. In humans, it is about the size and shape of a dinner napkin (maybe 1500cm²×3mm), and constitutes 75% of the entire brain. Jeff wants us to think of it as 150,000 side-by-side "cortical columns", each of which is a little 1mm²×3mm tube, although I don't think we're supposed to take the "column" thing too literally (there's no sharp demarcation between neighboring columns). When you look at a diagram of the brain, the neocortex has loads of different parts that do different things—motor, sensory, visual, language, cognition, planning, and more. But Jeff says that all 150,000 of these cortical columns are virtually identical! Not only do they each have the same types of neurons, but they're laid out in the same configuration and wiring and larger-scale structures. In other words, there seems to be "general-purpose neocortical tissue", and if you dump visual information into it, it does visual processing, and if you connect it to motor control pathways, it does motor control, etc. He said that this theory originated with Vernon Mountcastle in the 1970s, and is now widely (but not universally) accepted in neuroscience. The theory is supported both by examining different parts of the brain under the microscope, and also by experiments, e.g. the fact that congenitally blind people can use their visual cortex for non-visual things, and conversely he mentioned in passing some old experiment where a scientist attached the optic nerve of a lemur to a different part of the cortex and it was able to see (or something like that). Anyway, if you accept that premise, then there is one type of computation that the neocortex does, and if we can figure it out, we'll understand everything from how the brain does visual processing to how Einstein's brain invented General Relativity. To me, cortical uniformity seems slightly at odds with the wide variety of instincts we have, like intuitive physics, intuitive biology, language, and so on. Are those not implemented in the neocortex? Are they implemented as connections between (rather than within) cortical columns? Or what? This didn't come up in the podcast. (ETA: I tried to answer this question in my later post, Human instincts, symbol grounding, and the blank-slate neocortex.)
(See also previous LW discussion at: The brain as a universal learning machine, 2015)
Grid cells and displacement cells
Background: Grid cells for maps in the hippocampus
Grid cells, discovered in 2005, help animals build mental maps of physical spaces.
(Grid cells are just one piece of complicated machinery, along with "place cells" and other things, more on which shortly.) Grid cells are not traditionally associated with the neocortex, but rather the entorhinal cortex and hippocampus. But Jeff says that there's some experimental evidence that they're also in the neocortex, and proposes that this is very important. What are grid cells? Numenta has an educational video here. Here's my oversimplified 1D toy example (the modules can als...
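The excerpt cuts off above, but to give a flavor of the standard 1D grid-cell picture it is setting up, here is a hedged toy sketch of our own (not from the original post): each module fires periodically in space at its own scale, and the vector of phases across modules disambiguates position over a much longer range than any single module could.

```python
# Toy 1D grid-cell code: each "module" tracks position modulo its own period.
periods = [3.0, 4.0, 5.0]   # illustrative module scales, arbitrary units

def grid_code(x):
    """Phase of position x within each module -- the population's readout."""
    return [round(x % p, 2) for p in periods]

# Two distant positions can share one module's phase, but the combined
# phase vector differs, so the population pins down location uniquely
# over a range equal to the periods' least common multiple (60 here).
print(grid_code(7.5))    # [1.5, 3.5, 2.5]
print(grid_code(37.5))   # [1.5, 1.5, 2.5] -- same first phase, different code
```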
When it comes to brain computing, timing is everything. It's how neurons wire up into circuits. It's how these circuits process highly complex data, leading to actions that can mean life or death. It's how our brains can make split-second decisions, even when faced with entirely new circumstances. And we do so without frying the brain from excessive energy consumption. To rephrase, the brain makes an excellent example of an extremely powerful computer to mimic—and computer scientists and engineers have taken the first steps towards doing so. The field of neuromorphic computing looks to recreate the brain's architecture and data processing abilities with novel hardware chips and software algorithms. It may be a pathway towards true artificial intelligence. But one crucial element is lacking. Most algorithms that power neuromorphic chips only care about the contribution of each artificial neuron—that is, how strongly they connect to one another, dubbed "synaptic weight." What's missing—yet tantamount to our brain's inner workings—is timing. This month, a team affiliated with the Human Brain Project, the European Union's flagship big data neuroscience endeavor, added the element of time to a neuromorphic algorithm. The results were then implemented on physical hardware—the BrainScaleS-2 neuromorphic platform—and pitted against state-of-the-art GPUs and conventional neuromorphic solutions. "Compared to the abstract neural networks used in deep learning, the more biological archetypes … still lag behind in terms of performance and scalability" due to their inherent complexity, the authors said. In several tests, the algorithm compared "favorably, in terms of accuracy, latency, and energy efficiency" on a standard benchmark test, said Dr. Charlotte Frenkel of the University of Zurich and ETH Zurich in Switzerland, who was not involved in the study. By adding a temporal component to neuromorphic computing, we could usher in a new era of highly efficient AI that moves from static data tasks—say, image recognition—to ones that better encapsulate time. Think videos, biosignals, or brain-to-computer speech. To lead author Dr. Mihai Petrovici, the potential goes both ways. "Our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand … to transfer so-called deep learning approaches to neuroscience and thereby further unveil the secrets of the human brain," he said.
Let's Talk Spikes
At the root of the new algorithm is a fundamental principle in brain computing: spikes. Let's take a look at a highly abstracted neuron. It's like a tootsie roll, with a bulbous middle section flanked by two outward-reaching wrappers. One side is the input—an intricate tree that receives signals from a previous neuron. The other is the output, blasting signals to other neurons using bubble-like vessels filled with chemicals, which in turn trigger an electrical response on the receiving end. Here's the crux: for this entire sequence to occur, the neuron has to "spike." If, and only if, the neuron receives a high enough level of input—a nicely built-in noise reduction mechanism—the bulbous part will generate a spike that travels down the output channels to alert the next neuron. But neurons don't just use one spike to convey information. Rather, they spike in a time sequence. Think of it like Morse code: the timing of when an electrical burst occurs carries a wealth of data.
It's the basis for neurons wiring up into circuits and hierarchies, allowing highly energy-efficient processing. So why not adopt the same strategy for neuromorphic computers?
A Spartan Brain-Like Chip
Instead of mapping out every one of an artificial neuron's spikes—a Herculean task—the team homed in on a single metric: how long it takes for a neuron to fire. The idea behind "time-to-first-spike" code is simple: the longer it takes a neuron to spike, the lower its activity levels. Compared to counting spikes, it's an extremely sp...
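To make the time-to-first-spike idea concrete, here is a hedged sketch of the standard intensity-to-latency encoding it describes (stronger input fires earlier). The window length and the linear mapping are illustrative assumptions, not the BrainScaleS-2 implementation.

```python
import numpy as np

def time_to_first_spike(intensity, t_max=100.0):
    """Encode values in [0, 1] as spike latencies: stronger input -> earlier
    spike. One spike time carries the value; low activity = a long wait."""
    intensity = np.clip(np.asarray(intensity, dtype=float), 0.0, 1.0)
    return t_max * (1.0 - intensity)

pixels = [1.0, 0.5, 0.1]             # e.g. normalized pixel brightness
print(time_to_first_spike(pixels))   # [ 0. 50. 90.] -- brightest fires first
```

Because each neuron needs to emit at most one spike per input, this code is far sparser than rate-based schemes that count many spikes per value, which is where the latency and energy savings come from.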
In this episode, Rob Telson speaks with industry expert Michael Azoff about neuromorphic computing in the AI space, including edge AI, neural network architectures, sensory-based applications, and more.
In this episode, the guys discuss ancient human footprints, neuromorphic computing, & tail-wagging dinosaurs. Become a Patreon for your chance to come on the show www.patreon.com/scigasmpodcast
Hosted by BrainChip CEO Louis DiNardo, the third episode in the series highlights Peter van der Made's success in being at the forefront of computer innovation and invention for 45 years. He designed the first generations of digital neuromorphic devices on which BrainChip's Akida™ neuromorphic processor is based, and holds its patent. His book, Higher Intelligence: How to Create a Functional Artificial Brain, published in 2013, describes the architecture of the brain from a computer science perspective. He remains actively involved in the design of the next generation of Akida chips and in research on advanced neuromorphic architectures.