In this episode of Brains and Machines, Dr. Terry Sejnowski talks to Dr. Sunny Bains of University College London about how information flows both ways between neuroscience and engineered intelligence, proposes a new way of looking at memory, and considers the Hopfield-Hinton Nobel Prize.
There are more potential moves on a Go board than there are atoms in the universe; the game is widely considered one of the most complex played by humans. And yet, an AI program can defeat the world's best human players. What does that mean for humanity? Terry Sejnowski is the Francis Crick Chair at the Salk Institute for Biological Studies, a Distinguished Professor at the University of California, San Diego, and author of the book “ChatGPT and The Future of AI.” Ricky Mulvey caught up with Sejnowski for a conversation about: - How chatbots work. - Mapping large neural models. - What a self-aware parrot can teach us about human consciousness. Premium Motley Fool members can catch replays from this week's AI Summit here: https://www.fool.com/premium/4056/coverage/2025/01/15/ai-summit-replay To become a premium Motley Fool member, go to www.fool.com/signup Host: Ricky Mulvey Guest: Terrence Sejnowski Producer: Mary Long Engineer: Rick Engdahl Learn more about your ad choices. Visit megaphone.fm/adchoices
In this episode of Remarkable People, Guy Kawasaki engages in a fascinating dialogue with Terry Sejnowski, the Francis Crick Chair at the Salk Institute and Distinguished Professor at UC San Diego. Together, they unpack the mysteries of artificial intelligence, exploring how AI mirrors human learning in unexpected ways. Sejnowski shatters common misconceptions about large language models while sharing compelling insights about their potential to augment human capabilities. Discover why being polite to AI might yield better results and why the future of AI is less about academic debates and more about practical applications that can transform our world. --- Guy Kawasaki is on a mission to make you remarkable. His Remarkable People podcast features interviews with remarkable people such as Jane Goodall, Marc Benioff, Woz, Kristi Yamaguchi, and Bob Cialdini. Every episode will make you more remarkable. With his decades of experience in Silicon Valley as a Venture Capitalist and advisor to the top entrepreneurs in the world, Guy's questions come from a place of curiosity and passion for technology, start-ups, entrepreneurship, and marketing. If you love society and culture, documentaries, and business podcasts, take a second to follow Remarkable People. Listeners of the Remarkable People podcast will learn from some of the most successful people in the world with practical tips and inspiring stories that will help you be more remarkable. Episodes of Remarkable People organized by topic: https://bit.ly/rptopology Listen to Remarkable People here: https://podcasts.apple.com/us/podcast/guy-kawasakis-remarkable-people/id1483081827 Like this show? Please leave us a review -- even one sentence helps! Consider including your Twitter handle so we can thank you personally! Thank you for your support; it helps the show! See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
This episode of Eye on AI is sponsored by Citrusx. Unlock reliable AI with Citrusx! Our platform simplifies validation and risk management, empowering you to make smarter decisions and stay compliant. Detect and mitigate AI vulnerabilities, biases, and errors with ease. Visit http://email.citrusx.ai/eyeonai to download our free fairness use case and see the solution in action. In this episode of the Eye on AI podcast, Terry Sejnowski, a pioneer in neural networks and computational neuroscience, joins Craig Smith to discuss the future of AI, the evolution of ChatGPT, and the challenges of understanding intelligence. Terry, a key figure in the deep learning revolution, shares insights into how neural networks laid the foundation for modern AI, including ChatGPT's groundbreaking generative capabilities. From its ability to mimic human-like creativity to its limitations in true understanding, we explore what makes ChatGPT remarkable and what it still lacks compared to human cognition. We also dive into fascinating topics like the debate over AI sentience, the concept of "hallucinations" in AI models, and how language models like ChatGPT act as mirrors reflecting user input rather than possessing intrinsic intelligence. Terry explains how understanding language and meaning in AI remains one of the field's greatest challenges. Additionally, Terry shares his perspective on nature-inspired AI and what it will take to develop systems that go beyond prediction to exhibit true autonomy and decision-making. Learn why AI models like ChatGPT are revolutionary yet incomplete, how generative AI might redefine creativity, and what the future holds for AI as we continue to push its boundaries. Don't miss this deep dive into the fascinating world of AI with Terry Sejnowski. Like, subscribe, and hit the notification bell for more cutting-edge AI insights! Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. 
Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Terry Sejnowski and His Work (03:02) The Origins of Modern AI and Neural Networks (05:29) The Deep Learning Revolution and ImageNet (07:11) Understanding ChatGPT and Generative AI (12:34) Exploring AI Creativity (16:03) Lessons from Gaming AI: AlphaGo and Backgammon (18:37) Early Insights into AI's Affinity for Language (24:48) Syntax vs. Semantics: The Purpose of Language (30:00) How Written Language Transformed AI Training (35:10) Can AI Become Sentient? (41:37) AI Agents and the Next Frontier in Automation (45:43) Nature-Inspired AI: Lessons from Biology (50:02) Digital vs. Biological Computation: Key Differences (54:29) Will AI Replace Jobs? (57:07) The Future of AI
In this episode, my guest is Dr. Terry Sejnowski, Ph.D., professor of computational neurobiology at the Salk Institute for Biological Studies. He is world-renowned for exploring how our brain processes and stores information and, with that understanding, for developing tools that enable us to markedly improve our ability to learn all types of information and skills. We discuss how to learn most effectively in order to truly master a subject or skill. Dr. Sejnowski explains how to use AI tools to forage for new information, generate ideas, predict the future, and assist in analyzing health data and making health-related decisions. We also explore non-AI strategies to enhance learning and creativity, including how specific types of exercise can improve mitochondrial function and cognitive performance. Listeners will gain insights into how computational methods and AI are transforming our understanding of brain function, learning, and memory, as well as the emerging roles of these tools in addressing personal health and treating brain diseases such as Alzheimer's and Parkinson's. Access the full show notes for this episode at hubermanlab.com. Pre-order Andrew's new book, Protocols: protocolsbook.com Thank you to our sponsors AG1: https://drinkag1.com/huberman BetterHelp: https://betterhelp.com/huberman Helix Sleep: https://helixsleep.com/huberman David Protein: https://davidprotein.com/huberman LMNT: https://drinklmnt.com/huberman Joovv: https://joovv.com/huberman Timestamps 00:00:00 Dr. Terry Sejnowski 00:02:32 Sponsors: BetterHelp & Helix Sleep 00:05:19 Brain Structure & Function, Algorithmic Level 00:11:49 Basal Ganglia; Learning & Value Function 00:15:23 Value Function, Reward & Punishment 00:19:14 Cognitive vs. 
Procedural Learning, Active Learning, AI 00:25:56 Learning & Brain Storage 00:30:08 Traveling Waves, Sleep Spindles, Memory 00:32:08 Sponsors: AG1 & David 00:34:57 Tool: Increase Sleep Spindles; Memory, Ambien; Prescription Drugs 00:42:02 Psilocybin, Brain Connectivity 00:45:58 Tool: ‘Learning How to Learn' Course 00:49:36 Learning, Generational Differences, Technology, Social Media 00:58:37 Sponsors: LMNT & Joovv 01:01:06 Draining Experiences, AI & Social Media 01:06:52 Vigor & Aging, Continued Learning, Tool: Exercise & Mitochondrial Function 01:12:17 Tool: Cognitive Velocity; Quick Stressors, Mitochondria 01:16:58 AI, Imagined Futures, Possibilities 01:27:14 AI & Mapping Potential Options, Schizophrenia 01:30:56 Schizophrenia, Ketamine, Depression 01:36:15 AI, “Idea Pump,” Analyzing Research 01:42:11 AI, Medicine & Diagnostic Tool; Predicting Outcomes 01:50:04 Parkinson's Disease; Cognitive Velocity & Variables; Amphetamines 01:59:49 Free Will; Large Language Model (LLM), Personalities & Learning 02:12:40 Tool: Idea Generation, Mind Wandering, Learning 02:18:18 Dreams, Unconscious, Types of Dreams 02:22:56 Future Projects, Brain & Self-Attention 02:31:39 Zero-Cost Support, YouTube, Spotify & Apple Follow & Reviews, Sponsors, YouTube Feedback, Protocols Book, Social Media, Neural Network Newsletter Disclaimer & Disclosures
As the longtime collaborator of the 2024 Nobel laureates John Hopfield and Geoffrey Hinton, Terrence Sejnowski is one of America's most distinguished AI scientists. In his new book, ChatGPT and the Future of AI: The Deep Language Revolution, Sejnowski addresses some of the central technical and philosophical issues of today's large language model AI revolution. And in this wide-ranging conversation, we talked about everything from the origins of human language to the existential question of whether our brains and smart machines are converging. Unlike other AI researchers, Terry Sejnowski is able to make the deep language revolution accessible to a mainstream audience. Strongly recommended. Terrence J. Sejnowski is Francis Crick Chair at The Salk Institute for Biological Studies and Distinguished Professor at the University of California at San Diego. He has published over 500 scientific papers and 12 books, including The Computational Brain with Patricia Churchland. He was instrumental in shaping the BRAIN Initiative that was announced by the White House in 2013, and he received the prestigious Gruber Prize in Neuroscience in 2022. Named as one of the "100 most connected men" by GQ magazine, Andrew Keen is amongst the world's best known broadcasters and commentators. In addition to presenting KEEN ON, he is the host of the long-running How To Fix Democracy show. He is also the author of four prescient books about digital technology: CULT OF THE AMATEUR, DIGITAL VERTIGO, THE INTERNET IS NOT THE ANSWER and HOW TO FIX THE FUTURE. Andrew lives in San Francisco, is married to Cassandra Knight, Google's VP of Litigation & Discovery, and has two grown children. Keen On is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber. This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit keenon.substack.com/subscribe
How should we go about understanding LLMs? Do these language models truly understand what they are saying? Or is it possible that what appears to be intelligence in LLMs may be a mirror that merely reflects the intelligence of the human observer? In this episode, Terry Sejnowski explores the history and future of AI and its profound implications for understanding the human mind. He explains AI's evolution from early logic-based systems to cutting-edge advancements like deep learning and large language models, including ChatGPT, emphasizing the need for clear operational definitions and a strong mathematical foundation to advance AI research. Sejnowski also highlights the parallels between scientific discovery and engineering, discussing practical applications of AI in areas such as language translation and weather prediction. Terry Sejnowski is the Francis Crick Chair at The Salk Institute for Biological Studies and a Distinguished Professor at the University of California, San Diego. He has published over 500 scientific papers and 12 books, including ChatGPT and The Future of AI: The Deep Language Revolution.
This episode is sponsored by NetSuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. Download NetSuite's popular KPI Checklist, designed to give you consistently excellent performance - absolutely free at https://netsuite.com/EYEONAI Join host Craig Smith in episode #178 of Eye on AI for an enlightening conversation with Terry Sejnowski, President of the NeurIPS Foundation and Francis Crick Professor at the Salk Institute for Biological Studies, where he directs the Computational Neurobiology Laboratory. In this episode, Terry shares his unique insights into the evolution of neural networks, the monumental impact of models like GPT-4, and the intricate relationship between AI technologies and societal norms. Learn about the future trajectory of AI development, as Terry highlights the importance of integrating human brain development aspects into AI to foster models that better understand and integrate into human culture. Discover Terry's perspective on the role of reinforcement learning in AI, the challenges of aligning AI with ethical considerations, and the potential of neuromorphic engineering to revolutionize energy-efficient computing. This discussion also touches on the implications of AI in reshaping industries, science, and the global workforce. An essential listen for those fascinated by the ethical dimensions of AI, the potential of neuroscience to inform AI development, and the broader societal impacts of technological advancement. If you're drawn to the complexities of AI and its potential to redefine our future, don't forget to rate us on Apple Podcast and Spotify. Stay Updated: Craig Smith Twitter: https://twitter.com/craigss Eye on A.I. 
Twitter: https://twitter.com/EyeOn_AI (00:00) Introduction to Terry Sejnowski (01:54) Bridging Computational Neuroscience with AI Development (05:24) Biological Principles in AI: Learning and Adaptation (07:07) The Crucial Role of Reinforcement Learning in AI (10:12) Breakthroughs in AI: Scaling Models and Language Processing (13:38) The Rapid Pace of AI Evolution: Implications and "AI Time" (16:58) Ethical Considerations and AI Alignment Challenges (20:23) Neuromorphic Engineering and Energy Efficiency (25:42) Real-world Applications: AI's Expanding Role in Society (29:04) The Developmental Approach: Training AI with Human Nuances (33:32) The Impact of AI on Science and Discovery (40:19) AI Education and Lifelong Learning: Overcoming Barriers (43:20) MOOCs and the Democratization of Learning in the AI Era (47:08) Balancing Innovation and Safety in the Accelerating AI Landscape
Stephen Wolfram answers questions from his viewers about the history of science and technology as part of an unscripted livestream series, also available on YouTube here: https://wolfr.am/youtube-sw-qa In this excerpt from the livestream episode History of Science and Technology Q&A (March 22, 2023), Stephen Wolfram answers: What is the history of AI? What is the first recorded example of artificial intelligence? Stephen's conversation with Terry Sejnowski on the history of neural nets is available here: https://youtu.be/XKC-4Tosdd8
Part 2 (of 2)—Stephen Wolfram plays the role of Salonnière in an ongoing series of intellectual explorations with special guests. In this episode, Terry Sejnowski joins Stephen to discuss the long story of how neural nets got to where they are. Watch all of the conversations here: https://wolfr.am/youtube-sw-conversations
Part 1 (of 2)—Stephen Wolfram plays the role of Salonnière in an ongoing series of intellectual explorations with special guests. In this episode, Terry Sejnowski joins Stephen to discuss the long story of how neural nets got to where they are. Watch all of the conversations here: https://wolfr.am/youtube-sw-conversations
Anthropogeny has provided many new discoveries over the past decade, ranging from new fossil finds to ancient DNA data, including from extinct hominins. This CARTA symposium highlights where future efforts should be focused and what types of novel collaborations are most promising for improving our understanding of the human phenomenon. Terry Sejnowski discusses how large-scale neural network models have inspired major advances in artificial intelligence. Sarah Tishkoff talks about how Africa is thought to be the ancestral homeland of all modern human populations. Series: "CARTA - Center for Academic Research and Training in Anthropogeny" [Humanities] [Science] [Education] [Show ID: 38631]
Terry Sejnowski, an AI pioneer, chairman of the NeurIPS Foundation, and co-creator of Boltzmann Machines - whose sleep-wake cycle has been repurposed in Geoff Hinton's new Forward-Forward algorithm, talks in this episode about the NeurIPS conference, and how advances in deep learning may help us understand our own brains.
Neuroscience has made great strides in the last decade following the Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative, a science and engineering grand challenge that has greatly accelerated research on large-scale recordings from neurons and reconstructions of neural circuits. Large-scale neural network models have in turn inspired major advances in artificial intelligence. These network models have been trained on large-scale data sets to recognize objects in images, caption photographs, and translate text between languages. The most recent advance has been the emergence of pre-trained foundational language models that are self-supervised and can be adapted with fine-tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. Series: "CARTA - Center for Academic Research and Training in Anthropogeny" [Humanities] [Science] [Education] [Show ID: 38302]
Stephen Wolfram plays the role of Salonnière in this new, ongoing series of intellectual explorations with special guests, this time specifically with Terry Sejnowski. Part 1 (of 2 parts). Watch all of the conversations here: https://wolfr.am/youtube-sw-conversations
Stephen Wolfram plays the role of Salonnière in this new, ongoing series of intellectual explorations with special guests, this time specifically with Terry Sejnowski. Part 2 (of 2 parts). Watch all of the conversations here: https://wolfr.am/youtube-sw-conversations
Andy and Dave discuss the latest in AI news and research, including: 0:46: The GAO releases a more extensive report on US Federal agency use of facial recognition technology, including for what purposes. 3:24: The US Department of Homeland Security Science and Technology Directorate publishes its AI and ML Strategic Plan, with an implementation plan to follow. 5:39: The Ada Lovelace Institute, AI Now Institute, and Open Government Partnership publish a global study on Algorithmic Accountability for the Public Sector, which focuses on accountability mechanisms stemming from laws and policy. 9:04: Research from North Carolina State University shows that the benefits of autonomous vehicles will outweigh the risks, with proper regulation. 13:18: Research Section Introduction. 14:24: Researchers at the Allen Institute for AI and the University of Washington demonstrate that artificial agents can learn generalizable visual representations during interactive gameplay, embodied within an environment (AI2-THOR); agents demonstrated knowledge of the principles of containment, object permanence, and concepts of free space. 19:37: Researchers at Stanford University introduce BEHAVIOR (Benchmark for Everyday Household Activities in Virtual, Interactive, and ecOlogical enviRonments), which establishes benchmarks for simulation of 100 activities that humans often perform at home. 24:02: A survey examines the dynamics of research communities and AI benchmarks, suggesting that hybrid, multi-institution, and persevering communities are the ones more likely to improve state-of-the-art performance, among other things. 28:54: Springer-Verlag makes Representation Learning for Natural Language Processing available online. 32:09: Terry Sejnowski and Stephen Wolfram publish a three-hour discussion on AI and other topics. Follow the link below to visit our website and explore the links mentioned in the episode. https://www.cna.org/CAAI/audio-video
In this episode, I am going to talk about some learning techniques from "Learning How to Learn: Powerful Mental Tools to Help You Master Tough Subjects" by Dr. Barbara Oakley and Dr. Terry Sejnowski, provided by McMaster University and the University of California San Diego. These techniques are not exclusive to learning Cantonese, but they may be very helpful in your journey of language learning. I hope you will find some golden nuggets in this episode. Written material: http://www.theartofhongkongesebyadummy.com/skill-based-harnessing-your-brain-power-to-learn-cantonese Music credit: www.bensound.com
Artificial intelligence (AI) is a branch of engineering that has traditionally ignored brains, but recent advances in biologically inspired deep learning have dramatically changed AI and made it possible to solve difficult problems in vision, planning, and natural language. If you talk to Alexa or use Google Translate, you have experienced deep learning in action. This new technology opens a Pandora's box of problems that we must confront regarding privacy, bias, and jobs. Terry Sejnowski, PhD, explains how his research strives to understand the computational resources of brains and to build linking principles from brain to behavior using computational models. Series: "Exploring Ethics" [Show ID: 35462]
Terry Sejnowski, author of the book The Deep Learning Revolution, who together with Geoff Hinton created the Boltzmann machine, a deep learning network that has remarkable similarities to learning in the brain, talks about whether machines dream, the algorithms of the brain, whether Marvin Minsky was the devil, and how deep learning is shaping the future of education.
CARTA - Center for Academic Research and Training in Anthropogeny (Audio)
CARTA celebrates its 10th anniversary with a whirlwind tour of anthropogeny, the study of the origin of humans, by addressing these questions across multiple disciplines: What do we know for certain? What do we think we know? What do we need to know? How do we proceed? Series: "CARTA - Center for Academic Research and Training in Anthropogeny" [Science] [Show ID: 34710]
CARTA - Center for Academic Research and Training in Anthropogeny (Video)
CARTA celebrates its 10th anniversary with a whirlwind tour of anthropogeny, the study of the origin of humans, by addressing these questions across multiple disciplines: What do we know for certain? What do we think we know? What do we need to know? How do we proceed? Series: "CARTA - Center for Academic Research and Training in Anthropogeny" [Science] [Show ID: 34710]
CARTA - Center for Academic Research and Training in Anthropogeny (Video)
CARTA celebrates its 10th anniversary with a whirlwind tour of anthropogeny, the study of the origin of humans, by addressing these questions across multiple disciplines: What do we know for certain? What do we think we know? What do we need to know? How do we proceed? Series: "CARTA - Center for Academic Research and Training in Anthropogeny" [Science] [Show ID: 34696]
CARTA - Center for Academic Research and Training in Anthropogeny (Audio)
CARTA celebrates its 10th anniversary with a whirlwind tour of anthropogeny, the study of the origin of humans, by addressing these questions across multiple disciplines: What do we know for certain? What do we think we know? What do we need to know? How do we proceed? Series: "CARTA - Center for Academic Research and Training in Anthropogeny" [Science] [Show ID: 34696]
Arnaud Delorme currently holds a faculty position at the University of Toulouse, France, an adjunct faculty position at the University of California San Diego, and a consulting research scientist position at the Institute of Noetic Sciences. In 2000, Dr. Delorme completed his PhD thesis on visual categorization in humans and machines. Based on the results of his modeling work, he co-created the SpikeNet Technology company. Dr. Delorme then moved to the Salk Institute for a post-doc in Terry Sejnowski's laboratory, where he focused on statistical analysis of electroencephalographic (EEG) signals recorded during various visual tasks. He co-developed and distributed the free EEGLAB software for advanced analysis of EEG signals (now the most popular software for EEG analysis). In 2003, he lectured in India for three months and taught the neural mechanisms underlying human consciousness. Dr. Delorme has published more than 60 journal articles, and his work has been cited more than 12,000 times. In 2002, he was awarded a young investigator award by the Bettencourt Schueller Foundation, and in 2006 he was one of three researchers to receive the 10-year anniversary ANT young investigator award for his contributions to the field of EEG research. Dr. Delorme has a keen interest in the scientific study of consciousness. He is studying brain dynamics underlying thought events during meditation or so-called "rest" periods using high-density EEG recordings and advanced statistical and signal processing techniques. He is also developing and testing double-blind neurofeedback protocols in an attempt to train naïve subjects to reproduce these mental states. Finally, he is studying frontier topics such as non-local properties of consciousness. http://arnauddelorme.com
The Salk Institute's Terry Sejnowski provides a lively exploration of the challenges inherent in understanding the complexity of the human brain, from the molecular level to the entire nervous system, and how new technologies and methods are allowing us to know more while revealing there is much left to learn. Series: "Influence of Early Experience on Adult Brain Organization and Function - Kavli Institute for Brain and Mind Symposium" [Science] [Show ID: 31767]
Three fascinating presentations reveal how exploring changes during critical periods of brain development may lead to interventions, therapies and perhaps cures to conditions from learning disabilities to Alzheimer's Disease. Series: "Influence of Early Experience on Adult Brain Organization and Function - Kavli Institute for Brain and Mind Symposium" [Science] [Show ID: 31758]
CARTA - Center for Academic Research and Training in Anthropogeny (Audio)
CARTA: Mind Reading: Human Origins and Theory of Mind: Wrap-Up: Terry Sejnowski Series: "CARTA - Center for Academic Research and Training in Anthropogeny" [Science] [Show ID: 26084]
CARTA - Center for Academic Research and Training in Anthropogeny (Video)
CARTA: Mind Reading: Human Origins and Theory of Mind: Wrap-Up: Terry Sejnowski Series: "CARTA - Center for Academic Research and Training in Anthropogeny" [Science] [Show ID: 26084]