We learn about the world through interaction - through a body. The Embodied AI Podcast believes artificial intelligence should do the same. I interview experts in philosophy, neuroscience, artificial intelligence, robotics, linguistics and more. Join me on a journey from symbolic AI to deep learning, from information processing to distributed cognition, from Wittgenstein to Natural Language Processing, from phenomenology to robots, from x to y, you decide! Twitter: https://twitter.com/Embodied_AI Email: lai24@bath.ac.uk My LinkedIn: https://www.linkedin.com/in/akseli-ilmanen-842098181/
Hi! I started a second podcast (the Brain Space Time Podcast), available here! Listen to this episode to find out about it! Timestamps: (00:00) - What is the Brain Space Time Podcast about? (01:39) - The podcast logo explained. (05:28) - Getting in touch. Links: Henri Bergson's 1896 Matter and Memory PDF (Cone figure on p. 61) Uri Hasson on temporal receptive windows paper Follow me: For updates on new episode releases, follow me on Twitter. I welcome your comments, questions, and suggestions. Feel free to email me at akseli.ilmanen@gmail.com. If you are interested in my other work, click here to look at my blog, website, or (ongoing) Bachelor's dissertation on time perception semantic networks.
Timestamps: (00:00) - Intro (02:08) - Tony's background, Costa Rican singing mouse (06:59) - Traditional & embodied Turing Test, large language models (15:16) - Mouse intelligence, evolution, modularity, dish-washing dogs? (26:16) - Platform for training non-human animal-like virtual agents (36:14) - Exploration in children vs animals, innate vs learning, cognitive maps, complementary learning systems theory (46:53) - Genomic bottleneck, transfer learning, artificial Lamarckian evolution (01:02:06) - Why does AI need connectomics? (01:06:55) - Brainbow, molecular connectomics: MAPseq & BRICseq (01:14:52) - Comparative (corvid) connectomics (01:18:04) - "Human uniqueness" - why do or don't people believe in evolutionary continuity (01:25:29) - Career questions & virtual mouse passing the Embodied Turing Test in 5 years? Tony's lab website https://zadorlab.labsites.cshl.edu/ Tony's Twitter https://twitter.com/TonyZador Toward Next-Generation Artificial Intelligence: Catalyzing the NeuroAI Revolution - Embodied Turing Test paper (2022) https://arxiv.org/ftp/arxiv/papers/2210/2210.08340.pdf A critique of pure learning and what artificial neural networks can learn from animal brains paper (2019) http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2019/08/A-critique-of-pure-learning-and-what-artificial-neuralnetworks-can-learn-from-animal-brains.pdf Genomic bottleneck paper (2021) http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2021/03/Encoding-innate-ability-through-a-genomic-bottleneck.pdf MAPseq paper (2016) http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2018/04/Zador-etal_2016_neuron_High-throughput-mapping.pdf BRICseq paper (2020) http://zadorlab.labsites.cshl.edu/wp-content/uploads/sites/59/2020/07/BRICseq-Bridges-Brain-wide-Interregional.pdf Squirrel ninja warrior course video https://www.youtube.com/watch?v=hFZFjoX2cGg Marbled Lungfish wiki https://en.wikipedia.org/wiki/Marbled_lungfish Papers about corvids https://www.science.org/doi/10.1126/science.1098410 https://link.springer.com/article/10.3758/s13420-020-00434-5 Twitter https://twitter.com/Embodied_AI
Alex is a professor and the director of the Institute for Language, Cognition and Computation at Edinburgh. She is interested in discourse coherence, gestures, complex games and interactive task learning. After we find out about Alex's background and geek out over Ludwig Wittgenstein, she tells us about Dynamic Semantics and Segmented Discourse Representation Theory (SDRT). SDRT treats contributions to a discourse as actions that change the state space of the world and requires agents to infer coherence relations in the discourse. Then, I initiate a discussion between Felix Hill and Alex by asking her about her opinion on compositionality and playing a clip where Felix gives his "spicy take" on theoretical linguistics. Next, we talk about gestures and how they could be analysed using logic or a deep learning classifier. Then, we talk about non-linguistic events and the conceptualization problem. Later, we discuss Alex's work on Settlers of Catan, and how this links to deep reinforcement learning, Monte Carlo tree search, and neurosymbolic AI. Next, we briefly bring up game theory and then talk about interactive task learning, which is about agents learning and adapting in unknown domains. Finally, there are some career questions on whether to do a PhD and what makes a good supervisee & supervisor. Timestamps: (00:00) - Intro (02:00) - Alex's background & Wittgenstein geekiness (05:15) - Discourse Coherence & Segmented Discourse Representation Theory (SDRT) (12:56) - Compositionality, Responding to Felix Hill's "spicy take" (23:50) - Analysing gestures with logic and deep learning (38:54) - Pointing and evolution (42:28) - Non-linguistic events in Settlers of Catan, conceptualization problem (54:15) - 3D simulations and supermarket stocktaking (59:19) - Settlers of Catan, Monte Carlo tree search, neurosymbolic AI (01:11:08) - Persuasion & Game Theory (01:17:23) - Interactive Task Learning, symbol grounding, unknown domain (01:25:28) - Career advice Alex's webpage (all articles are open access) https://homepages.inf.ed.ac.uk/alex/index.html Talk on Discourse Coherence and Segmented Discourse Representation Theory https://www.youtube.com/watch?v=3HfKq9E3syM A Formal Semantic Analysis of Gesture paper with Matthew Stone https://homepages.inf.ed.ac.uk/alex/papers/gesture_jos.pdf A formal semantics for situated conversation paper with Julie Hunter & Nicholas Asher https://semprag.org/index.php/sp/article/view/sp.11.10 Game strategies for The Settlers of Catan paper with Markus Guhe https://homepages.inf.ed.ac.uk/alex/papers/cig2014_gs.pdf Evaluating Persuasion Strategies and Deep Reinforcement Learning methods for Negotiation Dialogue agents paper with Simon Keizer, Markus Guhe, & Oliver Lemon https://homepages.inf.ed.ac.uk/alex/papers/eacl_2017.pdf Learning Language Games through Interaction paper with Sida Wang, Percy Liang, Christopher Manning https://arxiv.org/abs/1606.02447 Interactive Task Learning paper with Mattias Appelgren https://homepages.inf.ed.ac.uk/alex/papers/aamas_grounding.pdf My Twitter https://twitter.com/Embodied_AI
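To make the SDRT idea a bit more concrete in code, here is a minimal Python sketch (my own toy illustration, not Alex's formalism; the class, the relation label, and the coherence check are all simplifying assumptions): a discourse is a set of elementary units linked by rhetorical relations, and it counts as coherent only if every non-initial unit attaches to some earlier unit.

```python
# Toy illustration of an SDRT-style discourse structure (a simplification, not
# the actual formalism): elementary discourse units (EDUs) linked by rhetorical
# relations, with a check that every non-initial unit attaches to earlier material.

from dataclasses import dataclass, field

@dataclass
class Discourse:
    edus: list = field(default_factory=list)       # elementary discourse units (strings)
    relations: list = field(default_factory=list)  # (source_edu, relation_label, target_edu)

    def add_edu(self, text):
        self.edus.append(text)
        return len(self.edus) - 1

    def attach(self, source, relation, target):
        # New units must attach to earlier discourse material.
        assert target < source, "units attach to earlier units"
        self.relations.append((source, relation, target))

    def is_coherent(self):
        attached = {src for src, _, _ in self.relations}
        return all(i in attached for i in range(1, len(self.edus)))

d = Discourse()
a = d.add_edu("John fell.")
b = d.add_edu("Mary pushed him.")
d.attach(b, "Explanation", a)   # inferring *why* he fell is what makes the pair coherent
print(d.is_coherent())          # True
```

In full SDRT the attachment sites are further constrained (for example by the right frontier) and the relations carry logical content, which this sketch ignores.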
Felix is a research scientist at DeepMind. He is interested in grounded language understanding and natural language processing (NLP). After finding out about Felix's background, we bring up compositionality and explore why natural language is non-compositional (also the name of Felix's blog). Then, Felix tells us a bit about his work in Cambridge on abstract vs concrete concepts and gives us a quick crash course on the role of recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and transformers in language models. Next, we talk about Jeff Elman's landmark paper 'Finding Structure in Time' and how neural networks can learn to understand analogies. After, we discuss the core of Felix's work: training language agents in 3D simulations, where we raise some questions about language learning as an embodied agent in space and time, and about implementing Allan Paivio's dual coding theory in the memory of a language model. Next, we stick with the theme of memory retrieval and discuss Felix and Andrew Lampinen's work on 'mental time travel' in language models. Finally, I ask Felix about good strategies for getting into DeepMind and the best way to learn NLP. Timestamps: (00:00) - Intro (07:57) - Compositionality in natural language (16:42) - Abstract vs concrete concepts (24:03) - RNNs, LSTMs, Transformers (34:12) - Prediction, time and Jeff Elman (48:04) - Neural networks & analogies (56:32) - Grounded language, 3D simulations, babies (01:05:20) - Keeping vision and language data separate (01:13:51) - NeuroAI and mental time travel (01:21:47) - Getting into DeepMind and learning NLP Felix's website (good overview of his papers) https://fh295.github.io/ Abstract vs concrete concepts paper https://onlinelibrary.wiley.com/doi/full/10.1111/cogs.12076 Jeff Elman (1990): Finding structure in time https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog1402_1 Analogies paper https://openreview.net/pdf?id=SylLYsCcFm Dual coding theory paper https://arxiv.org/abs/2009.01719 Mental Time Travel paper https://arxiv.org/abs/2105.14039 My Twitter https://twitter.com/Embodied_AI My LinkedIn https://www.linkedin.com/in/akseli-ilmanen-842098181/
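For anyone who wants to see the Elman idea in code, here is a minimal numpy sketch (my own toy with random, untrained weights; it is not Felix's setup or Elman's original simulations): an Elman-style simple recurrent network whose hidden state is fed back at each step, so the prediction over the next word depends on the whole sequence so far.

```python
# Toy Elman-style simple recurrent network (SRN) forward pass, illustrating how a
# recurrent hidden state lets next-word prediction depend on sequence history.
# Weights are random; this sketches the architecture, not a trained model.

import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "dog", "chases", "cat", "."]
V, H = len(vocab), 8                       # vocabulary size, hidden units

W_xh = rng.normal(scale=0.1, size=(H, V))  # input -> hidden
W_hh = rng.normal(scale=0.1, size=(H, H))  # hidden -> hidden (the "context" loop)
W_hy = rng.normal(scale=0.1, size=(V, H))  # hidden -> output

def one_hot(i):
    x = np.zeros(V)
    x[i] = 1.0
    return x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

h = np.zeros(H)                            # Elman's "context units" start empty
for word in ["the", "dog", "chases"]:
    x = one_hot(vocab.index(word))
    h = np.tanh(W_xh @ x + W_hh @ h)       # new hidden state mixes input with history
    p = softmax(W_hy @ h)                  # distribution over the next word
    print(word, "->", vocab[int(p.argmax())])
```

Training the weights with backpropagation through time on a corpus is what lets the hidden state come to encode useful temporal structure, which is the phenomenon Elman's paper explores.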
Beren is a postdoc at Oxford with a background in machine learning and computational neuroscience. He is interested in Active Inference (related to the Free Energy Principle) and how the cortex can perform long-term credit assignment as deep artificial neural networks do. We start off with some shorter questions on the Free Energy Principle and its background concepts. Next, we get onto the exploration vs exploitation dilemma in reinforcement learning and Beren's strategy for maximizing expected reward from restaurant visits - it's a long episode :). We also discuss multimodal representations, shallow minima, autism and enactivism. Then, we explore predictive coding, going all the way from the phenomenon of visual fading to 20-eyed reinforcement learning agents and the 'Anti-Grandmother Cell'. Finally, we discuss some open questions about backpropagation and the role of time in the brain, and finish the episode with some career advice about writing, publishing and Beren's future projects! Timestamps: (00:00) - Intro (02:11) - The Free Energy Principle, Active Inference, and Reinforcement Learning (13:40) - Exploration vs Exploitation (26:47) - Multimodal representation, shallow minima, autism (36:11) - Biased generative models, enactivism, and representation in the brain? (45:21) - Fixational eye movements, predictive coding, and 20-eyed RL (52:57) - Precision, attention, and dopamine (01:01:51) - Sparsity, negative prediction errors, and the 'Anti-Grandmother Cell' (01:11:23) - Backpropagation in the brain? (01:19:25) - Time in machine learning and the brain? (01:25:32) - Career Questions Beren's Twitter: https://twitter.com/BerenMillidge Paper: Deep active inference as variational policy gradients: https://www.sciencedirect.com/science/article/abs/pii/S0022249620300298 Paper: Predictive Coding Approximates Backprop Along Arbitrary Computation Graphs: https://direct.mit.edu/neco/article-abstract/34/6/1329/110646/Predictive-Coding-Approximates-Backprop-Along Paper: Predictive Coding: a Theoretical and Experimental Review: https://arxiv.org/abs/2107.12979 Beren's other work: https://scholar.google.gr/citations?user=3GGkFTkAAAAJ&hl=en My Twitter https://twitter.com/Embodied_AI My LinkedIn https://www.linkedin.com/in/akseli-ilmanen-842098181/
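As a rough illustration of the exploration vs exploitation dilemma discussed in the episode, here is a minimal epsilon-greedy bandit sketch over made-up restaurants (my own toy example; the restaurant names, hidden qualities, and the epsilon-greedy rule are assumptions for illustration, not the strategy Beren describes).

```python
# Toy epsilon-greedy bandit over "restaurants": with probability epsilon try a
# random restaurant (explore), otherwise revisit the best one so far (exploit).
# Reward probabilities are invented; this just illustrates the trade-off.

import random

random.seed(42)
true_quality = {"noodle bar": 0.6, "pizzeria": 0.8, "bistro": 0.4}  # hidden from the agent
estimates = {name: 0.0 for name in true_quality}
visits = {name: 0 for name in true_quality}
epsilon, total_reward = 0.1, 0.0

for night in range(1000):
    if night < len(true_quality) or random.random() < epsilon:
        choice = random.choice(list(true_quality))     # explore
    else:
        choice = max(estimates, key=estimates.get)     # exploit the current best guess
    reward = 1.0 if random.random() < true_quality[choice] else 0.0
    visits[choice] += 1
    # incremental average keeps a running estimate of each restaurant's quality
    estimates[choice] += (reward - estimates[choice]) / visits[choice]
    total_reward += reward

print(estimates)      # should roughly recover the hidden qualities
print(total_reward)   # most reward earned at the best restaurant
```

With epsilon = 0.1 the agent keeps occasionally sampling the other restaurants, so it can recover if its early estimates were unlucky; setting epsilon to 0 would lock in whichever restaurant happened to look best at the start.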
Mark is a philosopher of computation and cognitive science at Edinburgh. We start off the conversation exploring why we shouldn't attribute computation to stones and talk about instances of distributed cognition in classical antiquity. Then, we discuss the relationship between functionalism and extended cognition with the paradigmatic example of Otto's notebook and some implications for deep learning researchers. Next off is the famous Chinese Room Argument and how the 'Robot Reply' illustrates the need for embodiment when going from 'cat' syntax to cat semantics. After a quick rendezvous with the frame problem (see also Ep1), Hubert Dreyfus and Heideggerian AI, we move onto predictive coding, David Marr's three levels of analysis and the idea of representation in the brain. We finish off the conversation with some very good reading strategies and why we should all move to Edinburgh. Timestamps: (00:00) - Intro (03:04) - Does a stone do computation? (08:16) - Distributed cognition in classical antiquity (20:27) - Functionalism and extended cognition (33:00) - Chinese Room Argument & Robot Reply (45:51) - Frame Problem, Hubert Dreyfus (56:47) - David Marr's Three Levels, Predictive Coding and Representation in the Brain (01:16:14) - Career advice & Why Edinburgh is the best Mark's Website (All of Mark's publications are freely available there - Yay Open Access) https://marksprevak.com/ Clark and Chalmers 1998 paper - The Extended Mind https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-8284.00096 Ballard et al. 1997 paper - Off-loading information onto the environment https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/deictic-codes-for-the-embodiment-of-cognition/C8A398E8E3785B5B921DD40AA6EA5CEB Rao and Ballard 1999 paper - Predictive Coding https://www.cs.utexas.edu/users/dana/nn.pdf Spratling 2008 paper - Predictive Coding https://www.frontiersin.org/articles/10.3389/neuro.10.004.2008/full My Twitter https://twitter.com/Embodied_AI My LinkedIn https://www.linkedin.com/in/akseli-ilmanen-842098181/
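Since Rao and Ballard's predictive coding model comes up in the conversation (and in the links above), here is a heavily simplified numpy sketch of the core computational idea (my illustration under strong assumptions - a single linear layer, known generative weights, no precision weighting - not the paper's actual model): a latent estimate is updated by gradient steps so that its top-down prediction cancels the bottom-up prediction error.

```python
# Minimal predictive-coding-style inference: a one-layer linear generative model
# predicts the input from a latent cause; the latent estimate is nudged to reduce
# the prediction error. A drastic simplification of Rao & Ballard (1999).

import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(10, 3))         # generative weights: latent causes -> predicted input
true_cause = np.array([1.0, -0.5, 2.0])
x = W @ true_cause                   # the "sensory" input generated by true_cause

r = np.zeros(3)                      # current estimate of the latent cause
lr = 0.01
for step in range(200):
    prediction = W @ r               # top-down prediction of the input
    error = x - prediction           # bottom-up prediction error
    r += lr * W.T @ error            # update the estimate to explain away the error

print(np.round(r, 2))                # close to true_cause once the error is small
```

In the full Rao and Ballard model the weights are also learned and the scheme is hierarchical, with each level predicting the activity of the level below.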
Barbara is a professor of Biorobotics at Edinburgh. We start with a quick philosophical exploration of robots using chairs, James Gibson's concept 'affordances', and whether insects have meaning. Next, we talk about how robots can be used to test hypotheses in biology. For most of the episode, we discuss the incredible things crickets, ants and the mushroom body can do. We explore some interesting questions such as: How does embodiment in crickets replace the need for neural processing? How do ants integrate different sensory modalities? Do insects have consciousness? And can we find associative general-purpose brain regions in the insect brain? We also discuss how her robotics work can inform predictive coding and reinforcement learning. As usual, we finish off with a career question and her future projects. Timestamps: (00:00) - Intro (02:43) - Philosophy questions (08:11) - Robot models to test biological hypotheses (17:45) - Cricket bodies, cricket robots, and cricket music (26:58) - Ants, spatial navigation, visual memory (30:14) - Insect learning, sparse coding, insect consciousness (39:25) - Lessons for embodied AI, predictive coding (43:57) - Reinforcement learning paper (50:58) - Career advice (56:57) - Upcoming project GRASP, postdoc positions Barbara's website https://homepages.inf.ed.ac.uk/bwebb/ Talks by Barbara: On ants, other insects: https://www.youtube.com/watch?v=5qwc9vAkiMQ On crickets: https://vdocument.in/using-robots-to-model-animals-a-cricket-test-barbara-webb-presenter-gholamreza.html 7 dimensions for robot models paper https://homepages.inf.ed.ac.uk/bwebb/publications/Webb,_B_(2001)_Can_robots_make_good_models_of_biological_behaviour.html Reinforcement Learning paper (freely available here:) https://www.research.ed.ac.uk/en/publications/a-bio-inspired-reinforcement-learning-rule-to-optimise-dynamical- Multimodal sensory integration in insects paper: https://iopscience.iop.org/article/10.1088/1748-3182/1/3/001 Twitter: twitter.com/Embodied_AI My LinkedIn: www.linkedin.com/in/akseli-ilmanen-842098181/ Postdoc in "Insect AI" at Edinburgh https://rb.gy/annvkd Postdoc in “InsectNeuroNano” at Edinburgh https://rb.gy/kzv4wa
In the first episode of the Embodied AI Podcast, Ron tells us about his journey from Stanford to machine learning and the people that inspired him to dig deeper into philosophical questions around embodiment. Ron lays out 4 dimensions to think about Embodied AI and situates its role in the history of AI - mainly the move away from Symbolic AI. We take a close look at his 2003 paper on Embodied Artificial Intelligence, making connections to the relevance problem, Lewis Carroll's What the Tortoise Said to Achilles, Ludwig Wittgenstein, and current machine learning. Ron also discusses his work on non-conceptual content and synthetic phenomenology, showing us how we can use embodied technologies to study non-systematic aspects of our experience or that of a robot. We finish off with his recent ideas about AI creativity and the future of Embodied AI, with some career advice for younger listeners. Timestamps: (00:00) - Intro (01:48) - Ron's background (06:08) - What is Embodied AI? (25:21) - Symbolic AI (35:10) - 2003 Paper: Relevance/Frame Problem (49:36) - Embodied AI and current machine learning (52:49) - Non-Conceptual Content (1:01:30) - Synthetic Phenomenology (1:05:48) - AI creativity (1:19:42) - Career Advice (1:24:23) - Future of Embodied AI Ron's Sussex webpage: https://profiles.sussex.ac.uk/p476-ron-chrisley 2003 Embodied Artificial Intelligence Paper: https://www.sciencedirect.com/science/article/pii/S0004370203000559 1996 DPhil Thesis on Non-Conceptual Content: https://www.academia.edu/3453644/Non_conceptual_psychological_explanation_Content_and_computation Ron's Blog page - see for synthetic phenomenology artwork: https://paics.wordpress.com/ Douglas Hofstadter: Waking Up from the Boolean Dream, or, Subcognition as Computation: https://archive.org/details/MetamagicalThemas Paper by Tom Froese and Tom Ziemke: https://www.sciencedirect.com/science/article/pii/S0004370208002105 Twitter: twitter.com/Embodied_AI My LinkedIn: www.linkedin.com/in/akseli-ilmanen-842098181/ Edits: * What the Tortoise Said to Achilles was published in 1895 * I am 21 years old not 22 haha
A short episode, where I discuss what the podcast is about. Twitter - twitter.com/Embodied_AI Email - lai24@bath.ac.uk Related reading Ludwig Wittgenstein: Intro - https://plato.stanford.edu/entries/wittgenstein/ Tractatus Logico-Philosophicus - https://people.umass.edu/klement/tlp/tlp.pdf Philosophical Investigations - http://michaeljohnsonphilosophy.com/wp-content/uploads/2019/01/Ludwig.Wittgenstein.-.Philosophical.Investigations.pdf Hubert Dreyfus: What Computers Still Can't Do: A Critique of Artificial Reason https://mitpress.mit.edu/books/what-computers-still-cant-do Music Credit https://uppbeat.io/t/torus/progression License code: NT6SRVLHJCBXH1YE