Prof. Kevin Ellis and Dr. Zenna Tavares talk about making AI smarter, like humans. They want AI to learn from just a little bit of information by actively trying things out, not just by looking at tons of data.

They discuss two main ways AI can "think": one way is like following specific rules or steps (like a computer program), and the other is more intuitive, like guessing based on patterns (like modern AI often does). They found combining both methods works well for solving complex puzzles like ARC.

A key idea is "compositionality" - building big ideas from small ones, like LEGOs. This is powerful but can also be overwhelming. Another important idea is "abstraction" - understanding things simply, without getting lost in details, and knowing there are different levels of understanding.

Ultimately, they believe the best AI will need to explore, experiment, and build models of the world, much like humans do when learning something new.

SPONSOR MESSAGES:
***
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Go to https://tufalabs.ai/
***

TRANSCRIPT: https://www.dropbox.com/scl/fi/3ngggvhb3tnemw879er5y/BASIS.pdf?rlkey=lr2zbj3317mex1q5l0c2rsk0h&dl=0

Zenna Tavares: http://www.zenna.org/
Kevin Ellis: https://www.cs.cornell.edu/~ellisk/

TOC:
1. Compositionality and Learning Foundations
[00:00:00] 1.1 Compositional Search and Learning Challenges
[00:03:55] 1.2 Bayesian Learning and World Models
[00:12:05] 1.3 Programming Languages and Compositionality Trade-offs
[00:15:35] 1.4 Inductive vs Transductive Approaches in AI Systems

2. Neural-Symbolic Program Synthesis
[00:27:20] 2.1 Integration of LLMs with Traditional Programming and Meta-Programming
[00:30:43] 2.2 Wake-Sleep Learning and DreamCoder Architecture
[00:38:26] 2.3 Program Synthesis from Interactions and Hidden State Inference
[00:41:36] 2.4 Abstraction Mechanisms and Resource Rationality
[00:48:38] 2.5 Inductive Biases and Causal Abstraction in AI Systems

3. Abstract Reasoning Systems
[00:52:10] 3.1 Abstract Concepts and Grid-Based Transformations in ARC
[00:56:08] 3.2 Induction vs Transduction Approaches in Abstract Reasoning
[00:59:12] 3.3 ARC Limitations and Interactive Learning Extensions
[01:06:30] 3.4 Wake-Sleep Program Learning and Hybrid Approaches
[01:11:37] 3.5 Project MARA and Future Research Directions

REFS:
[00:00:25] DreamCoder, Kevin Ellis et al. https://arxiv.org/abs/2006.08381
[00:01:10] Mind Your Step, Ryan Liu et al. https://arxiv.org/abs/2410.21333
[00:06:05] Bayesian inference, Griffiths, T. L., Kemp, C., & Tenenbaum, J. B. https://psycnet.apa.org/record/2008-06911-003
[00:13:00] Induction and Transduction, Wen-Ding Li, Zenna Tavares, Yewen Pu, Kevin Ellis https://arxiv.org/abs/2411.02272
[00:23:15] Neurosymbolic AI, Garcez, Artur d'Avila et al. https://arxiv.org/abs/2012.05876
[00:33:50] Induction and Transduction (II), Wen-Ding Li, Kevin Ellis et al. https://arxiv.org/abs/2411.02272
[00:38:35] ARC, François Chollet https://arxiv.org/abs/1911.01547
[00:39:20] Causal Reactive Programs, Ria Das, Joshua B. Tenenbaum, Armando Solar-Lezama, Zenna Tavares http://www.zenna.org/publications/autumn2022.pdf
[00:42:50] MuZero, Julian Schrittwieser et al. https://arxiv.org/pdf/1911.08265
[00:43:20] VisualPredicator, Yichao Liang https://arxiv.org/abs/2410.23156
[00:48:55] Bayesian models of cognition, Joshua B. Tenenbaum https://mitpress.mit.edu/9780262049412/bayesian-models-of-cognition/
[00:49:30] The Bitter Lesson, Rich Sutton http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[01:06:35] Program induction, Kevin Ellis, Wen-Ding Li https://arxiv.org/pdf/2411.02272
[01:06:50] DreamCoder (II), Kevin Ellis et al. https://arxiv.org/abs/2006.08381
[01:11:55] Project MARA, Zenna Tavares, Kevin Ellis https://www.basis.ai/blog/mara/
Episode 123

I spoke with Suhail Doshi about:
* Why benchmarks aren't prepared for tomorrow's AI models
* How he thinks about artists in a world with advanced AI tools
* Building a unified computer vision model that can generate, edit, and understand pixels

Suhail is a software engineer and entrepreneur known for founding Mixpanel, Mighty Computing, and Playground AI (they're hiring!).

Reach me at editor@thegradient.pub for feedback, ideas, guest suggestions.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter

Outline:
* (00:00) Intro
* (00:54) Ad read — MLOps conference
* (01:30) Suhail is *not* in pivot hell but he *is* all-in on 50% AI-generated music
* (03:45) AI and music, similarities to Playground
* (07:50) Skill vs. creative capacity in art
* (12:43) What we look for in music and art
* (15:30) Enabling creative expression
* (18:22) Building a unified computer vision model, underinvestment in computer vision
* (23:14) Enhancing the aesthetic quality of images: color and contrast, benchmarks vs user desires
* (29:05) "Benchmarks are not prepared for how powerful these models will become"
* (31:56) Personalized models and personalized benchmarks
* (36:39) Engaging users and benchmark development
* (39:27) What a foundation model for graphics requires
* (45:33) Text-to-image is insufficient
* (46:38) DALL-E 2 and Imagen comparisons, FID
* (49:40) Compositionality
* (50:37) Why Playground focuses on images vs. 3d, video, etc.
* (54:11) Open source and Playground's strategy
* (57:18) When to stop open-sourcing?
* (1:03:38) Suhail's thoughts on AGI discourse
* (1:07:56) Outro

Links:
* Playground homepage
* Suhail on Twitter

Get full access to The Gradient at thegradientpub.substack.com/subscribe
Language is the ultimate Lego. With it, we can take simple elements and construct them into an edifice of meaning. Its power lies not only in mapping signs to concepts but in the fact that individual words can be composed into larger structures. How did this systematicity arise in language?

Simon Kirby is the head of Linguistics and English Language at The University of Edinburgh and one of the founders of the Centre for Language Evolution and Change. Over several decades he and his collaborators have run many elegant experiments showing that this property of language emerges inexorably as a system of communication is passed from generation to generation. Experiments with computer simulations, humans, and even baboons demonstrate that as a language is learned, mistakes are made - much like mutations in genes. Crucially, the mistakes that better match the language to the structure of the world (as conceived by the learner) are the ones most likely to be passed on.

Links
Simon's website with art, music, and talks on language evolution
Simon's academic homepage
Simon on X
Multiverses Podcast home

Outline
(00:00) Introduction
(2:45) What makes language special?
(5:30) Language extends our biological bounds
(7:55) Language makes culture, culture makes language
(9:30) John Searle: world to word and word to world
(13:30) Compositionality: the expressivity of language is based on its Lego-like combinations
(16:30) Could unique genes explain the fact of language compositionality?
(17:20) … Not fully, though they might make our brains able to support compositional language
(18:20) Using simulations to model language learning and search for the emergence of structure
(19:35) Compositionality emerges from the transmission of representations across generations
(20:18) The learners need to make mistakes, but not random mistakes
(21:35) Just like biological evolution, we need variation
(27:00) When, by chance, linguistic features echo the structure of the world these are more memorable
(33:45) Language experiments with humans (Hannah Cornish)
(36:32) Sign language experiments in the lab (Yasamin Motamedi)
(38:45) Spontaneous emergence of sign language in populations
(41:18) Communication is key to making language efficient, while transmission gives structure
(47:10) Without intentional design these processes produce optimized systems
(50:39) We need to perceive similarity in states of the world for linguistic structure to emerge
(57:05) Why isn't language ubiquitous in nature …
(58:00) … why do only humans have cultural transmissions
(59:56) Over-imitation: Victoria Horner & Andrew Whiten, humans love to copy each other
(1:06:00) Is language a spandrel?
(1:07:10) How much of language is about information transfer? Partner-swapping conversations (Gareth Roberts)
(1:08:49) Language learning = play?
(1:12:25) Iterated learning experiments with baboons (& Tetris!)
(1:17:50) Endogenous rewards for copying
(1:20:30) Art as another angle on the same problems
“Ultimately, if you want more human-like systems that exhibit more human-like intelligence, you would want them to actually learn like humans do by interacting with the world and so interactive learning, not just passive learning. You want something that's more active where the model is going to actually test out some hypothesis, and learn from the feedback it's getting from the world about these hypotheses in the way children do, it should learn all the time. If you observe young babies and toddlers, they are constantly experimenting. They're like little scientists, you see babies grabbing their feet, and testing whether that's part of my body or not, and learning gradually and very quickly learning all these things. Language models don't do that. They don't explore in this way. They don't have the capacity for interaction in this way.”

— Raphaël Millière

How do large language models work? What are the dangers of overclaiming and underclaiming the capabilities of large language models? What are some of the most important cognitive capacities to understand for large language models? Are large language models showing sparks of artificial general intelligence? Do language models really understand language?

Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society and a Lecturer in the Philosophy Department at Columbia University. He completed his DPhil (PhD) in philosophy at the University of Oxford, where he focused on self-consciousness. His interests lie primarily in the philosophy of artificial intelligence and cognitive science. He is particularly interested in assessing the capacities and limitations of deep artificial neural networks and establishing fair and meaningful comparisons with human cognition in various domains, including language understanding, reasoning, and planning.

Topics discussed in the episode:
Introduction (0:00)
How Raphaël came to work on AI (1:25)
How do large language models work? (5:50)
Deflationary and inflationary claims about large language models (19:25)
The dangers of overclaiming and underclaiming (25:20)
Summary of cognitive capacities large language models might have (33:20)
Intelligence (38:10)
Artificial general intelligence (53:30)
Consciousness and sentience (1:06:10)
Theory of mind (1:18:09)
Compositionality (1:24:15)
Language understanding and referential grounding (1:30:45)
Which cognitive capacities are most useful to understand for various purposes? (1:41:10)
Conclusion (1:47:23)

Resources discussed in the episode are available at https://www.sentienceinstitute.org/podcast

Support the show
In this wide-ranging conversation, Tim Scarfe interviews Neel Nanda, a researcher at DeepMind working on mechanistic interpretability, which aims to understand the algorithms and representations learned by machine learning models. Neel discusses how models can represent their thoughts using motifs, circuits, and linear directional features, which are often communicated via a "residual stream", an information highway models use to pass information between layers.

Neel argues that "superposition", the ability of models to represent more features than they have neurons, is one of the biggest open problems in interpretability, because it thwarts our ability to understand models by decomposing them into individual units of analysis. Despite this, Neel remains optimistic that ambitious interpretability is possible, citing examples like his work reverse engineering how models do modular addition. However, Neel notes we must start small, build rigorous foundations, and not assume our theoretical frameworks perfectly match reality.

The conversation turns to whether models can have goals or agency, with Neel arguing they likely can, based on heuristics like models executing long-term plans towards some objective. However, we currently lack techniques to build models with specific goals, meaning any goals would likely be learned or emergent. Neel highlights how induction heads, circuits models use to track long-range dependencies, seem crucial for phenomena like in-context learning to emerge.

On the existential risks from AI, Neel believes we should avoid overly confident claims that models will or will not be dangerous, as we do not understand them well enough to make confident theoretical assertions. However, models could pose risks through being misused, having undesirable emergent properties, or being imperfectly aligned. Neel argues we must pursue rigorous empirical work to better understand and ensure model safety, avoid "philosophizing" about definitions of intelligence, and focus on ensuring researchers have standards for what it means to decide a system is "safe" before deploying it. Overall, a thoughtful conversation on one of the most important issues of our time.

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5
Twitter: https://twitter.com/MLStreetTalk
Neel Nanda: https://www.neelnanda.io/

TOC:
[00:00:00] Introduction and Neel Nanda's Interests (walk and talk)
[00:03:15] Mechanistic Interpretability: Reverse Engineering Neural Networks
[00:13:23] Discord questions
[00:21:16] Main interview kick-off in studio
[00:49:26] Grokking and Sudden Generalization
[00:53:18] The Debate on Systematicity and Compositionality
[01:19:16] How do ML models represent their thoughts
[01:25:51] Do Large Language Models Learn World Models?
[01:53:36] Superposition and Interference in Language Models
[02:43:15] Transformers discussion
[02:49:49] Emergence and In-Context Learning
[03:20:02] Superintelligence/XRisk discussion

Transcript: https://docs.google.com/document/d/1FK1OepdJMrqpFK-_1Q3LQN6QLyLBvBwWW_5z8WrS1RI/edit?usp=sharing
Refs: https://docs.google.com/document/d/115dAroX0PzSduKr5F1V4CWggYcqIoSXYBhcxYktCnqY/edit?usp=sharing
Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas
Welcome to another episode of Sean Carroll's Mindscape. Today, we're joined by Raphaël Millière, a philosopher and cognitive scientist at Columbia University. We'll be exploring the fascinating topic of how artificial intelligence thinks and processes information. As AI becomes increasingly prevalent in our daily lives, it's important to understand the mechanisms behind its decision-making processes. What are the algorithms and models that underpin AI, and how do they differ from human thought processes? How do machines learn from data, and what are the limitations of this learning? These are just some of the questions we'll be exploring in this episode. Raphaël will be sharing insights from his work in cognitive science, and discussing the latest developments in this rapidly evolving field. So join us as we dive into the mind of artificial intelligence and explore how it thinks.

[The above introduction was artificially generated by ChatGPT.]

Support Mindscape on Patreon.

Raphaël Millière received a DPhil in philosophy from the University of Oxford. He is currently a Presidential Scholar in Society and Neuroscience at the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. He also writes and organizes events aimed at a broader audience, including a recent workshop on The Challenge of Compositionality for Artificial Intelligence.

Web site
Columbia web page
PhilPeople profile
Google Scholar publications
Twitter
MLST Discord! https://discord.gg/aNPkGUQtc5
Patreon: https://www.patreon.com/mlst
YT: https://youtu.be/snUf_LIfQII

We had a discussion with Dr. Walid Saba about whether or not MLP neural networks can extrapolate outside of the training support, and what it means to extrapolate in a vector space. Then we discussed the concept of vagueness in cognitive science - for example, what does it mean to be "rich", or what is a "pile of sand"? Finally, we discussed behaviourism and the "reward is enough" hypothesis.

References:
A Spline Theory of Deep Networks [Balestriero] https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf

The animation we showed of the spline theory was created by Ahmed Imtiaz Humayun (https://twitter.com/imtiazprio) and we will be showing an interview with Imtiaz and Randall very soon!

[00:00:00] Intro
[00:00:58] Interpolation vs Extrapolation
[00:24:38] Type 1 / Type 2 generalisation and compositionality / Fodor / Systematicity
[00:32:18] Keith's brain teaser
[00:36:53] Neural Turing machines / discrete vs continuous / learnability
Pedro Domingos, Professor Emeritus of Computer Science and Engineering at the University of Washington, is renowned for his research in machine learning, particularly for his work on Markov logic networks that allow for uncertain inference. He is also the author of the acclaimed book "The Master Algorithm".

Panel: Dr. Tim Scarfe

TOC:
[00:00:00] Introduction
[00:01:34] Galactica / misinformation / gatekeeping
[00:12:31] Is there a master algorithm?
[00:16:29] Limits of our understanding
[00:21:57] Intentionality, Agency, Creativity
[00:27:56] Compositionality
[00:29:30] Digital Physics / It from bit / Wolfram
[00:35:17] Alignment / Utility functions
[00:43:36] Meritocracy
[00:45:53] Game theory
[01:00:00] EA / consequentialism / Utility
[01:11:09] Emergence / relationalism
[01:19:26] Markov logic
[01:25:38] Moving away from anthropocentrism
[01:28:57] Neurosymbolic / infinity / tensor algebra
[01:53:45] Abstraction
[01:57:26] Symmetries / Geometric DL
[02:02:46] Bias-variance trade-off
[02:05:49] What we saw at NeurIPS
[02:12:58] Chalmers talk on LLMs
[02:28:32] Definition of intelligence
[02:32:40] LLMs
[02:35:14] On experts in different fields
[02:40:15] Back to intelligence
[02:41:37] Spline theory / extrapolation

YT version: https://www.youtube.com/watch?v=C9BH3F2c0vQ

References:
The Master Algorithm [Domingos] https://www.amazon.co.uk/s?k=master+algorithm&i=stripbooks&crid=3CJ67DCY96DE8&sprefix=master+algorith%2Cstripbooks%2C82&ref=nb_sb_noss_2
Information, Physics, Quantum: The Search for Links [John Wheeler / It from Bit] https://philpapers.org/archive/WHEIPQ.pdf
A New Kind of Science [Wolfram] https://www.amazon.co.uk/New-Kind-Science-Stephen-Wolfram/dp/1579550088
The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future [Tom Chivers] https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795
The Status Game: On Social Position and How We Use It [Will Storr] https://www.goodreads.com/book/show/60598238-the-status-game
Newcomb's paradox https://en.wikipedia.org/wiki/Newcomb%27s_paradox
The Case for Strong Emergence [Sabine Hossenfelder] https://philpapers.org/rec/HOSTCF-3
Markov Logic: An Interface Layer for Artificial Intelligence [Domingos] https://www.morganclaypool.com/doi/abs/10.2200/S00206ED1V01Y200907AIM007
Note: Pedro discussed "Tensor Logic" - I was not able to find a reference.
Neural Networks and the Chomsky Hierarchy [Grégoire Delétang / DeepMind] https://arxiv.org/abs/2207.02098
Connectionism and Cognitive Architecture: A Critical Analysis [Jerry A. Fodor and Zenon W. Pylyshyn] https://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/proseminars/Proseminar13/ConnectionistArchitecture.pdf
Every Model Learned by Gradient Descent Is Approximately a Kernel Machine [Pedro Domingos] https://arxiv.org/abs/2012.00152
A Path Towards Autonomous Machine Intelligence, Version 0.9.2, 2022-06-27 [LeCun] https://openreview.net/pdf?id=BZ5a1r-kVsf
Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges [Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković] https://arxiv.org/abs/2104.13478
The Algebraic Mind: Integrating Connectionism and Cognitive Science [Gary Marcus] https://www.amazon.co.uk/Algebraic-Mind-Integrating-Connectionism-D
Alex is a professor and the director of the Institute for Language, Cognition and Computation at Edinburgh. She is interested in discourse coherence, gestures, complex games and interactive task learning. After we find out about Alex's background and geek out over Ludwig Wittgenstein, she tells us about Dynamic Semantics and Segmented Discourse Representation Theory (SDRT). SDRT treats discourse as actions that change the state space of the world and requires agents to infer coherence in the discourse. Then, I initiate a discussion between Felix Hill and Alex by asking her about her opinion on compositionality and playing a clip where Felix gives his "spicy take" on theoretical linguistics. Next, we talk about gestures and how they could be analysed using logic or a deep learning classifier. Then, we talk about non-linguistic events and the conceptualization problem. Later, we discuss Alex's work on Settlers of Catan, and how this links to deep reinforcement learning, Monte Carlo tree search, and neurosymbolic AI. Next, we briefly bring up game theory and then talk about interactive task learning, which is about agents learning and adapting in unknown domains. Finally, there are some career questions on whether to do a PhD and what makes a good supervisee & supervisor.

Timestamps:
(00:00) - Intro
(02:00) - Alex's background & Wittgenstein geekiness
(05:15) - Discourse Coherence & Segmented Discourse Representation Theory (SDRT)
(12:56) - Compositionality, responding to Felix Hill's "spicy take"
(23:50) - Analysing gestures with logic and deep learning
(38:54) - Pointing and evolution
(42:28) - Non-linguistic events in Settlers of Catan, conceptualization problem
(54:15) - 3D simulations and supermarket stocktaking
(59:19) - Settlers of Catan, Monte Carlo tree search, neurosymbolic AI
(01:11:08) - Persuasion & Game Theory
(01:17:23) - Interactive Task Learning, symbol grounding, unknown domain
(01:25:28) - Career advice

Alex's webpage (all articles are open access): https://homepages.inf.ed.ac.uk/alex/index.html
Talk on Discourse Coherence and Segmented Discourse Representation Theory: https://www.youtube.com/watch?v=3HfKq9E3syM
A Formal Semantic Analysis of Gesture, paper with Matthew Stone: https://homepages.inf.ed.ac.uk/alex/papers/gesture_jos.pdf
A formal semantics for situated conversation, paper with Julie Hunter & Nicholas Asher: https://semprag.org/index.php/sp/article/view/sp.11.10
Game strategies for The Settlers of Catan, paper with Markus Guhe: https://homepages.inf.ed.ac.uk/alex/papers/cig2014_gs.pdf
Evaluating Persuasion Strategies and Deep Reinforcement Learning methods for Negotiation Dialogue agents, paper with Simon Keizer, Markus Guhe & Oliver Lemon: https://homepages.inf.ed.ac.uk/alex/papers/eacl_2017.pdf
Learning Language Games through Interaction, paper with Sida Wang, Percy Liang & Christopher Manning: https://arxiv.org/abs/1606.02447
Interactive Task Learning, paper with Mattias Appelgren: https://homepages.inf.ed.ac.uk/alex/papers/aamas_grounding.pdf
My Twitter: https://twitter.com/Embodied_AI
Felix is a research scientist at DeepMind. He is interested in grounded language understanding and natural language processing (NLP). After finding out about Felix's background, we bring up compositionality and explore why natural language is non-compositional (also the name of Felix's blog, NonCompositional). Then, Felix tells us a bit about his work in Cambridge on abstract vs concrete concepts and gives us a quick crash course on the role of recurrent neural networks (RNNs), long short-term memory networks (LSTMs), and transformers in language models. Next, we talk about Jeff Elman's landmark paper 'Finding Structure in Time' and how neural networks can learn to understand analogies. After that, we discuss the core of Felix's work: training language agents in 3D simulations, where we raise some questions about language learning as an embodied agent in space and time, and Allan Paivio's dual coding theory implemented in the memory of a language model. Next, we stick with the theme of memory retrieval and discuss Felix and Andrew Lampinen's work on 'mental time travel' in language models. Finally, I ask Felix for some good strategies for getting into DeepMind and the best way to learn NLP.

Timestamps:
(00:00) - Intro
(07:57) - Compositionality in natural language
(16:42) - Abstract vs concrete concepts
(24:03) - RNNs, LSTMs, Transformers
(34:12) - Prediction, time and Jeff Elman
(48:04) - Neural networks & analogies
(56:32) - Grounded language, 3D simulations, babies
(01:05:20) - Keeping vision and language data separate
(01:13:51) - NeuroAI and mental time travel
(01:21:47) - Getting into DeepMind and learning NLP

Felix's website (good overview of his papers): https://fh295.github.io/
Abstract vs concrete concepts paper: https://onlinelibrary.wiley.com/doi/full/10.1111/cogs.12076
Jeff Elman (1990), Finding Structure in Time: https://onlinelibrary.wiley.com/doi/pdf/10.1207/s15516709cog1402_1
Analogies paper: https://openreview.net/pdf?id=SylLYsCcFm
Dual coding theory paper: https://arxiv.org/abs/2009.01719
Mental Time Travel paper: https://arxiv.org/abs/2105.14039
My Twitter: https://twitter.com/Embodied_AI
My LinkedIn: https://www.linkedin.com/in/akseli-ilmanen-842098181/
Category theory is well-known for abstraction—concepts and tools from diverse fields being recognized as specific cases of more foundational structures—though the field has always been driven and shaped by the needs of applications. Moreover, category theory is rarely introduced even to undergraduate math majors, despite its unifying role in theory and its flexibility in application. Postdoctoral Associate Brendan Fong and Research Scientist David I. Spivak, both at MIT, have written a marvelous and timely new textbook that, as its title suggests, invites readers of all backgrounds to explore what it means to take a compositional approach and how it might serve their needs. An Invitation to Applied Category Theory: Seven Sketches in Compositionality (Cambridge University Press, 2019) has few mathematical prerequisites and is designed in part as a gateway to a wide range of more specialized fields. It also centers its treatment on applications, motivating several key developments in terms of real-world use cases. In this interview we discussed their views on the promise of category theory inside and outside mathematics, their motivations for writing this book, several of the accessible examples and remarkable payoffs included in its chapters, and their aspirations for the future of the field.

Suggested companion works:
-- Tai-Danae Bradley, Math3ma
-- Eugenia Cheng, The Catsters
-- Saunders Mac Lane, Mathematics Form and Function
-- F. William Lawvere & Stephen H. Schanuel, Conceptual Mathematics: A First Introduction to Categories
-- Eugenia Cheng, x + y: A Mathematician's Manifesto for Rethinking Gender

Cory Brunson (he/him) is a Research Assistant Professor in the Laboratory for Systems Medicine at the University of Florida.
Discussion of a compositional method of termination checking using so-called sized types. Datatypes are indexed by sizes, and recursive calls can only be made on data of strictly smaller size than the input to the recursion. Since the method is type-based, it is compositional: we can break out helper functions from a recursive function without upsetting the termination checker. A readable and interesting tutorial on the subject is here.
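A minimal Haskell sketch of the size-indexing idea (GHC does not itself check termination, and real sized types live in systems like Agda or MiniAgda, so this only conveys the shape; all names here are ours): the datatype carries its size in its type, and the helper's type records that it only applies the recursive call to data of strictly smaller size.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level sizes.
data Nat = Z | S Nat

-- A list indexed by its size: the type records how many elements it holds.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Helper broken out of the recursion. Its type says the continuation is
-- only ever applied to a Vec of size n, the tail of an ('S n)-sized input,
-- so the "recursive calls shrink the size" argument stays visible in the
-- types even across the function boundary.
step :: (Vec n a -> Int) -> a -> Vec n a -> Int
step recurse _ tailV = 1 + recurse tailV

vlength :: Vec n a -> Int
vlength VNil         = 0
vlength (VCons x xs) = step vlength x xs
```

In a language that actually checks sizes, vlength would be accepted because every call path visibly decreases the size index, and that acceptance is unaffected by the refactoring into step.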
Review of need for termination analysis for recursive functions on inductive datatypes. Discussion of a serious problem with syntactic termination checks, namely noncompositionality. A function may pass the syntactic termination check, but abstracting part of it out into a helper function may result in code which no longer passes the check. So we need a compositional termination check, which will be discussed in subsequent episodes.
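To make the problem concrete, here is a hypothetical Haskell rendering (imagine a syntactic checker in the style of Agda's or Coq's reading this code; GHC itself does not check termination): the direct definition recurses visibly on a sub-part of its input, but once part of the body is abstracted into a helper, the recursive call hides behind an opaque function argument.

```haskell
-- Direct structural recursion: the recursive call is on xs, an immediate
-- sub-part of the input, so a syntactic termination check accepts it.
sumList :: [Int] -> Int
sumList []     = 0
sumList (x:xs) = x + sumList xs

-- The same computation with part of the body broken out into a helper.
-- The recursion now flows through the opaque argument `recurse`, so a
-- purely syntactic check can no longer see that the list shrinks, and
-- the refactored definition is rejected even though nothing changed.
addThenRecurse :: ([Int] -> Int) -> Int -> [Int] -> Int
addThenRecurse recurse x xs = x + recurse xs

sumList' :: [Int] -> Int
sumList' []     = 0
sumList' (x:xs) = addThenRecurse sumList' x xs
```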
The rapid diffusion of social media like Facebook and Twitter, and the massive use of forums like Reddit, Quora, etc., produce an impressive amount of text data every day. One specific activity that many business owners have been contemplating over the last five years is identifying the social sentiment of their brand by analysing the conversations of their users. In this episode I explain how one can get the best shot at classifying sentences with deep learning and word embeddings.

Additional material:
Schematic representation of how to learn a word embedding matrix E by training a neural network that, given the previous M words, predicts the next word in a sentence.
Word2Vec example source code: https://gist.github.com/rlangone/ded90673f65e932fd14ae53a26e89eee#file-word2vec_example-py

References:
[1] Mikolov, T. et al., "Distributed Representations of Words and Phrases and their Compositionality", Advances in Neural Information Processing Systems 26, pages 3111-3119, 2013.
[2] The Best Embedding Method for Sentiment Classification: https://medium.com/@bramblexu/blog-md-34c5d082a8c5
[3] The state of sentiment analysis: word, sub-word and character embedding: https://amethix.com/state-of-sentiment-analysis-embedding/
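As a concrete sketch of what sits downstream of the learned embedding matrix E (the vectors below are hand-written toys; in practice each row would come from training word2vec [1] on a corpus), one common baseline represents a sentence as the average of its word vectors and hands that vector to a sentiment classifier:

```haskell
import qualified Data.Map.Strict as Map

type Vector = [Double]

-- Toy stand-ins for rows of a learned embedding matrix E. In a real
-- pipeline these are loaded from word2vec/GloVe output, not written by hand.
embeddings :: Map.Map String Vector
embeddings = Map.fromList
  [ ("great",    [ 0.9, 0.2])
  , ("terrible", [-0.8, 0.1])
  , ("movie",    [ 0.1, 0.7]) ]

-- Average the vectors of the words we recognise: a simple baseline
-- sentence representation to feed into a downstream sentiment classifier.
sentenceVector :: [String] -> Maybe Vector
sentenceVector ws =
  case [v | w <- ws, Just v <- [Map.lookup w embeddings]] of
    [] -> Nothing
    vs -> Just (map (/ fromIntegral (length vs)) (foldr1 (zipWith (+)) vs))

main :: IO ()
main = print (sentenceVector ["great", "movie"])  -- approximately Just [0.5, 0.45]
```

Swapping the toy table for trained vectors changes nothing downstream, which is precisely what makes embeddings such a convenient interface for sentence classification.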
A research fellow at the Centre National de la Recherche Scientifique (CNRS) in Paris since 1979, François Recanati has taught in several major universities around the world, including Berkeley, Harvard, Geneva, and St Andrews. In addition to his CNRS job, he is a ‘directeur d’études’ at EHESS and the Director of Institut Jean-Nicod, a research lab in philosophy, linguistics and cognitive science hosted by the Ecole Normale Supérieure. His publications in the philosophy of language and mind include more than one hundred articles, many edited books, and a dozen monographs, the most recent of which are Mental Files (Oxford University Press, 2012) and Mental Files in Flux (Oxford University Press, 2016). He was the first President of the European Society for Analytic Philosophy (1990-93), and the Principal Investigator of a research project on Context, Content and Compositionality funded by the European Research Council (Advanced Grant, 2009-2013). He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 2012, and was awarded the CNRS Silver Medal in 2014 and an Honorary Doctorate from Stockholm University (also in 2014). He is the general editor of the Jean-Nicod book series published by MIT Press and of the Context and Content series published by Oxford University Press. This podcast is an audio recording of Professor Recanati's talk - 'Fictional, Metafictional, Parafictional' - at the Aristotelian Society on 16 October 2017. The recording was produced by the Backdoor Broadcasting Company.
Thomas Ede Zimmermann (Frankfurt) gives a talk at the MCMP Colloquium (25 June, 2015) titled "Fregean Compositionality". Abstract: The distinction between transparent and opaque contexts has always played a major rôle in theories of linguistic semantics, though it has undergone a number of reformulations and precisifications since its origins in Frege’s classical substitution arguments. Most dramatically, the unfathomable distinction between Sinn and Bedeutung has been recast in more perspicuous set-theoretic terms, trading Frege’s senses for Carnap’s intensions and identifying functions with their courses of values. Still, at least part of the Fregean architecture has survived all these transformations. In particular, (i) the strategy of treating extensionality as the default case of semantic composition and invoking intensions only when need be, has become part of most common approaches to the syntax-semantics interface. On the other hand, (ii) Frege’s apparent commitment to a hierarchy of senses in the analysis of iterated opacity has been discarded for its alleged lack of cogency and coherence. In the talk I will take a closer look at both aspects of the Fregean architecture within the standard possible worlds framework of Montague’s Universal Grammar. Concerning (i), it will be argued that the Fregean strategy results in an interpretation of intensional constructions (i.e., opaque contexts) that goes beyond mere intensional compositionality in that it imposes a certain kind of uniformity on the pertinent semantic combinations. As to (ii), it will be shown how a hierarchy of intensions may help restore compositionality when extensional and intensional scope effects appear to be out of tune. The historical background notwithstanding, the talk will take a systematic perspective, aiming at a better understanding and possible improvement of compositionality in possible worlds semantics.
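For orientation, a standard textbook rendering of the two modes of composition at issue (our formulation, not taken from the talk): the Fregean default composes extensions, and opaque contexts are handled by feeding the embedded expression's intension instead.

\[ \llbracket \alpha\ \beta \rrbracket^{w} = \llbracket \alpha \rrbracket^{w}\big(\llbracket \beta \rrbracket^{w}\big) \qquad \text{(extensional default)} \]
\[ \llbracket \alpha\ \beta \rrbracket^{w} = \llbracket \alpha \rrbracket^{w}\big(\lambda w'.\ \llbracket \beta \rrbracket^{w'}\big) \qquad \text{(intensional composition, for opaque contexts)} \]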
Getting the year started just right with hot jazz from all over the Sunshine State, including a cut from the Jazz Conceptions Orchestra. Plenty of guitar this episode, including Rich Walker (Orlando), Dan Heck (Naples), LaRue Nickelson (Tampa) featured on a Herb Silverstein cut, and George Grosman (Orlando), as well as killer saxophone performances from Mark Johnson (Daytona), Jeff Rupert (Orlando) and Rex Wertz. All that and a jazz n bass treat from Erik Jackson!

Host: Kenny MacKenzie
Our Facebook page!
Kenny also hosts:
"Jazz Greats" on WFCF St. Augustine every Tuesday 3-7pm EST. Listen on iHeart! (USA only)
"Kendo's Jazz Sampler" on WNHU New Haven, Mondays at 6am EST. Listen via the WNHU website!
Have a peek at Kendo's Top 55 Jazz Tracks for 2014.
Kenny's Twitter

1. Introduction - Kenny (download our theme song "In Control" on iTunes!)
2. "Somerset's Mom" - Rich Walker (Orlando, FL) website
Rich Walker - guitar, Rex Wertz - tenor sax, Tom Parmeter - trumpet, Richard Drexler - piano, Carlos Fernandez - percussion, Walt Hubbard - drums, Mark Neuenschwander - bass
from the album "Lazybird Revisited". Purchase at CD Baby or Amazon!
3. "A Morning Walk" - The Jazz Conceptions Orchestra (Jacksonville, FL) website; Alex LoRe's website
Alex Nguyen - trumpet, leader; Brandon Lee - trumpet, Alex LoRe - alto, Jeremy Fratti - tenor, Matt Zettlemoyer - bari, Robert Edwards - trombone, Joshua Bowlus - piano, Paul Sikivie - bass, Ben Adkins - drums
from the album "The Jazz Conceptions Orchestra". Purchase at CD Baby, Amazon or iTunes!
4. Announcements - Kenny (background music from the album "Gettin' In the Groove" by Ron Pirtle)
5. "The Whispering Eye" - Mark Johnson (Daytona Beach, FL) website
Mark Johnson - saxes, Norbert Marius - bass, Yaaki Levi - drums
unreleased track. Purchase more great music by Mark at Bandcamp!
6. "Esmaralda" - George Grosman & Bohemian Swing (Orlando, FL) website
George Grosman - guitar, Brandon Walker - sax, piano; Ian MacGillivray - trumpet, Rachel Melas - bass, Rafael Keren - accordion, David MacDougall - drums
from the album "Sydney, Mon Ami". Purchase album at Amazon, CD Baby or iTunes!
7. Announcements - Kenny
8. "Through the Storm" - Erik Jackson (Orlando, FL) website
Erik Jackson - production, Rhodes, bass, synths, Akai MPC; KJ Sawka - drums, Joey Crown - trumpet
from the album "Rainy Days". Purchase the album at Amazon, iTunes or Bandcamp!
9. "Blade's Groove" - Dan Heck (Naples, FL) website
Dan Heck - guitar, Thomas Marriott - trumpet, Stuart Shelton - piano, Rick Doll - bass, Jose Martinez - drums
from the album "Compositionality" courtesy of Origin Records. Purchase at Amazon or iTunes!
10. Announcements - Kenny (background music from the album "Gettin' In the Groove" by Ron Pirtle)
11. "Monday Morning" - Herb Silverstein (Sarasota, FL) website
Herb Silverstein - piano, Jeff Rupert - tenor sax, LaRue Nickelson - guitar, Richard Drexler - bass, Marty Morrell - drums
from the album "Monday Morning". Purchase at Amazon, CD Baby or iTunes!
12. Closing Announcements - Kenny

Palm Coast Jazz closing theme by Seven Octaves. Produced by Kenny MacKenzie.

If you are a jazz musician residing in Florida with quality recordings of your original music (new or old) and would like to submit for future podcasts, please contact us at jazzploration@gmail.com

All recordings and compositions are the property of their respective performers and composers, all rights reserved. This podcast copyright 2014 Kenny MacKenzie. All rights reserved.
Nicholas Asher, Richard Holton, Kasia Jaszczolt, Stephen Clark, Ann Copestake, Aurelie Herbelot, William Marslen-Wilson
We're starting Spring off the jazziest way we know how - with Latin jazz, contemporary guitar trio, New Orleans second-line, head-boppin' funk and much more. The show features 3 very different guitarists from Florida as well as sassy trombone and heart-melting soprano sax! Don't forget to share shows you like with your friends and spread the word about all the excellent jazz coming from FL!

Hosts: Allison Paris & Kenny MacKenzie
Pictured: Dan Heck
See what we're up to on Facebook!
Kenny hosts "Jazz Greats" on WFCF Saint Augustine - every Tuesday from 3-5pm EST. Listen online here!
Kenny's Twitter - @DJKendo1

1. Introduction - Allison & Kenny
2. "Boneyard" - Tom Brantley (Tampa, FL) website
Tom Brantley - trombone, Jack Wilkins - tenor sax, Rex Richardson - trumpet, LaRue Nickelson - guitar, Chris Rottmayer - Hammond organ, Steve Davis - drums
from the album "Boneyard". Purchase album at Amazon or iTunes.
3. "The 'E' Music" - Ermesc Gonzalez (Tampa, FL) website
Ermesc Gonzalez - guitar, Aldemar Valentin - bass, Freddie Burgos - drums
from the album "Introspection". Purchase at Amazon, CD Baby or iTunes.
4. Announcements - Kenny (background music from the album "Sid's Blast From the Past" by Sid Blair)
5. "Peacock Cha-Cha" - Eastside Combo (Orlando, FL) myspace
Tim Sheletter - tenor sax, Derrick Harvin - piano, Kevin Stever - bass, Willie Rast - drums
From the album "Bamboo". Purchase album at Amazon, CD Baby or iTunes.
6. "Naples" - Dan Heck (Naples, FL) website
Dan Heck - guitar, Stuart Shelton - piano, Rick Doll - bass, Jose Martinez - drums
From the album "Compositionality" on Origin Records. Purchase album at Amazon or iTunes.
7. Announcements - Allison (background music from the album "KMT I" by Kenny MacKenzie Trio)
8. "The Voo" - The Alan Craig Project (Largo, FL) website
Alan Craig - guitar, George Harris - guitar, Ross Rice - organ, Ronnie Dee - sax, Rob McDowell - bass, Dave Reinhardt - drums
from the album "Not Black & White". Purchase cd at Amazon, CD Baby or iTunes.
9. "21 Août" - Alain Bradette (Orlando, FL) website
Alain Bradette - soprano sax, Alex Clements - piano, Bobby Koelble - guitar, Chris Queenan - bass, Gerald Myles - drums
from the album "State of Mind". Purchase cd at Amazon, CD Baby or iTunes.
10. Announcements - Kenny (background music from the album "Live in the City" by Jack Pierson)
11. Closing Announcements - Allison

Palm Coast Jazz closing theme by Seven Octaves. Produced by Kenny MacKenzie.

If you are a jazz musician residing in Florida with quality recordings of your original music (new or old) and would like to submit for future podcasts, please contact us at palmcoastjazz@gmail.com

All recordings and compositions are the property of their respective performers and composers, all rights reserved. This podcast copyright 2013 Kenny MacKenzie. All rights reserved.
We're totally excited to be heading into a new year of jazz from fabulous Florida artists - from the obscure to the superstars and everyone in between. Our first show for 2013 covers the state from Central Florida to the Southwest corner, with guitar hardbop, a carefree samba for flute, contemporary tenor sax, jazzy trip hop for trumpet, funky swamp and a delightfully frenetic duet from jazz legends Larry Coryell & Kenny Drew Jr.!

Pictured: Larry Coryell; photo by Nelson G. Onofre, Electric Eyes Photography
Visit our Facebook page
Hosts: Allison Paris & Kenny MacKenzie
Kenny hosts "Jazz Greats" on WFCF Saint Augustine - every Tuesday from 3-5pm EST. Listen online here!
Kenny's Twitter - @DJKendo1

1. Introduction - Allison & Kenny
2. "Sunlit Samba" - Wilkins and Allen Quartet (Alford, FL) Skip Wilkins' website; Jill Wofsey's website
Jill Wofsey - flute, Skip Wilkins - piano, Tony Marino - bass, Tom Whaley - drums
from the album "Petty Theft". Purchase album at CD Baby.
3. "Blue Stone" - Dan Heck (Naples, FL) website
Dan Heck - guitar, Thomas Marriott - trumpet, Stuart Shelton - piano, Rick Doll - bass, Jose Martinez - drums
from the album "Compositionality" on Origin Records. Purchase at Amazon or iTunes.
4. Announcements - Kenny (background music from the album "Second Chances" by Allison Paris)
5. "Szabodar" - Larry Coryell & Kenny Drew Jr. (St. Petersburg, FL) Larry's website; Kenny's website
Larry Coryell - guitar, Kenny Drew Jr. - piano
From the album "Duality" courtesy of Random Act Records (website). Purchase album at Amazon, CD Baby or the Random Act Records store, where 10% of proceeds go to charity.
6. "Yin Warriors" - Ray Guiser (Daytona Beach, FL) website
Ray Guiser - tenor saxophone, Akai EWI, percussion, drum and keyboard programming; Lawrence App - fretless bass
From the album "Macroism". Purchase album at Ray's website or the Palm Coast Jazz store.
7. Announcements - Allison (background music from the album "Live in the City" by Jack Pierson)
8. "Art Deco" - Michael Hawley (Winter Park, FL) website
Michael Hawley - guitar, Peter Miles - drums, Sean Tarelton - bass, Eric Brigmond - keys, Scott Rademacher - tenor saxophone
from the album "Tele Talk". Purchase cd at Amazon, CD Baby or Michael's website.
9. "Starlight" - Erik Jackson (DeBary, FL) Facebook
Erik Jackson - trumpet, keyboard & drum programming
from the album "Rainy Days". Purchase downloads at Amazon, iTunes or Bandcamp.
10. Announcements - Kenny (background music from the album "Gettin' In the Groove" by Ron Pirtle)
11. Closing Announcements - Allison

Palm Coast Jazz closing theme by Seven Octaves. Produced by Kenny MacKenzie.

If you are a jazz musician residing in Florida with quality recordings of your original music (new or old) and would like to submit for future podcasts, please contact us at palmcoastjazz@gmail.com

All recordings and compositions are the property of their respective performers and composers, all rights reserved. This podcast copyright 2012 Kenny MacKenzie. All rights reserved.