Podcasts about Abstraction

Conceptual process where general rules and concepts are derived from the usage and classification of specific examples

  • 866 PODCASTS
  • 1,396 EPISODES
  • 45m AVG DURATION
  • 5 WEEKLY NEW EPISODES
  • Apr 14, 2025 LATEST
[Popularity chart: Abstraction, 2017-2024]

Best podcasts about Abstraction


Latest podcast episodes about Abstraction

Burned By Books
Betsy Lerner, "Shred Sisters" (Grove Press, 2024)

Apr 14, 2025 · 41:24


It is said that when one person in a family is unstable, the whole family is destabilized. Meet the Shreds. Olivia is the sister in the spotlight until her stunning confidence becomes erratic and unpredictable, a hurricane leaving people wrecked in her wake. Younger sister Amy, cautious and studious to the core, believes in facts, proof, and the empirical world. None of that explains what's happening to Ollie, whose physical beauty and charisma mask the mental illness that will shatter Amy's carefully constructed life. As Amy comes of age and seeks to find her place—first in academics, then New York publishing, and through a series of troubled relationships—every step brings collisions with Ollie, who slips in and out of the Shred family without warning. Yet for all that threatens their sibling bond, Amy and Ollie cannot escape or deny the inextricable sister knot that binds them. Spanning two decades, Shred Sisters (Grove Press, 2024) is an intimate and bittersweet story exploring the fierce complexities of sisterhood, mental health, loss and love. If anything is true it's what Amy learns on her road to self-acceptance: No one will love you more or hurt you more than a sister. Betsy Lerner is the author of The Bridge Ladies, The Forest for the Trees, and Food and Loathing. With Temple Grandin, she is the co-author of the New York Times bestseller Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns and Abstractions. She received an MFA from Columbia University in Poetry where she was selected as one of PEN's Emerging Writers. She also received the Tony Godwin Publishing Prize for Editors. After working as an editor for 15 years, she became an agent and is currently a partner with Dunow, Carlson and Lerner Literary Agency. Recommended Books: Suzy Boyt, Loved and Missed Rufi Thorpe, Margo's Got Money Troubles Morning News Tournament of Books (March Madness for Books!) Chris Holmes is Chair of Literatures in English and Associate Professor at Ithaca College. He writes criticism on contemporary global literatures. His book, Kazuo Ishiguro Against World Literature, is published with Bloomsbury Publishing. He is the co-director of The New Voices Festival, a celebration of work in poetry, prose, and playwriting by up-and-coming young writers. Learn more about your ad choices. Visit megaphone.fm/adchoices

New Books Network
Betsy Lerner, "Shred Sisters" (Grove Press, 2024)

Apr 14, 2025 · 41:24


It is said that when one person in a family is unstable, the whole family is destabilized. Meet the Shreds. Olivia is the sister in the spotlight until her stunning confidence becomes erratic and unpredictable, a hurricane leaving people wrecked in her wake. Younger sister Amy, cautious and studious to the core, believes in facts, proof, and the empirical world. None of that explains what's happening to Ollie, whose physical beauty and charisma mask the mental illness that will shatter Amy's carefully constructed life. As Amy comes of age and seeks to find her place—first in academics, then New York publishing, and through a series of troubled relationships—every step brings collisions with Ollie, who slips in and out of the Shred family without warning. Yet for all that threatens their sibling bond, Amy and Ollie cannot escape or deny the inextricable sister knot that binds them. Spanning two decades, Shred Sisters (Grove Press, 2024) is an intimate and bittersweet story exploring the fierce complexities of sisterhood, mental health, loss and love. If anything is true it's what Amy learns on her road to self-acceptance: No one will love you more or hurt you more than a sister. Betsy Lerner is the author of The Bridge Ladies, The Forest for the Trees, and Food and Loathing. With Temple Grandin, she is the co-author of the New York Times bestseller Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns and Abstractions. She received an MFA from Columbia University in Poetry where she was selected as one of PEN's Emerging Writers. She also received the Tony Godwin Publishing Prize for Editors. After working as an editor for 15 years, she became an agent and is currently a partner with Dunow, Carlson and Lerner Literary Agency. Recommended Books: Suzy Boyt, Loved and Missed Rufi Thorpe, Margo's Got Money Troubles Morning News Tournament of Books (March Madness for Books!) Chris Holmes is Chair of Literatures in English and Associate Professor at Ithaca College. He writes criticism on contemporary global literatures. His book, Kazuo Ishiguro Against World Literature, is published with Bloomsbury Publishing. He is the co-director of The New Voices Festival, a celebration of work in poetry, prose, and playwriting by up-and-coming young writers. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/new-books-network

New Books in Literature
Betsy Lerner, "Shred Sisters" (Grove Press, 2024)

Apr 14, 2025 · 41:24


It is said that when one person in a family is unstable, the whole family is destabilized. Meet the Shreds. Olivia is the sister in the spotlight until her stunning confidence becomes erratic and unpredictable, a hurricane leaving people wrecked in her wake. Younger sister Amy, cautious and studious to the core, believes in facts, proof, and the empirical world. None of that explains what's happening to Ollie, whose physical beauty and charisma mask the mental illness that will shatter Amy's carefully constructed life. As Amy comes of age and seeks to find her place—first in academics, then New York publishing, and through a series of troubled relationships—every step brings collisions with Ollie, who slips in and out of the Shred family without warning. Yet for all that threatens their sibling bond, Amy and Ollie cannot escape or deny the inextricable sister knot that binds them. Spanning two decades, Shred Sisters (Grove Press, 2024) is an intimate and bittersweet story exploring the fierce complexities of sisterhood, mental health, loss and love. If anything is true it's what Amy learns on her road to self-acceptance: No one will love you more or hurt you more than a sister. Betsy Lerner is the author of The Bridge Ladies, The Forest for the Trees, and Food and Loathing. With Temple Grandin, she is the co-author of the New York Times bestseller Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns and Abstractions. She received an MFA from Columbia University in Poetry where she was selected as one of PEN's Emerging Writers. She also received the Tony Godwin Publishing Prize for Editors. After working as an editor for 15 years, she became an agent and is currently a partner with Dunow, Carlson and Lerner Literary Agency. Recommended Books: Suzy Boyt, Loved and Missed Rufi Thorpe, Margo's Got Money Troubles Morning News Tournament of Books (March Madness for Books!) Chris Holmes is Chair of Literatures in English and Associate Professor at Ithaca College. He writes criticism on contemporary global literatures. His book, Kazuo Ishiguro Against World Literature, is published with Bloomsbury Publishing. He is the co-director of The New Voices Festival, a celebration of work in poetry, prose, and playwriting by up-and-coming young writers. Learn more about your ad choices. Visit megaphone.fm/adchoices Support our show by becoming a premium member! https://newbooksnetwork.supportingcast.fm/literature

Groks Science Radio Show and Podcast
Fatal Abstraction — Groks Science Show 2025-04-09

Apr 9, 2025 · 28:30


How do financial incentives in the technology industry lead to disastrous products that can rapidly impact billions of lives? On this episode, Darryl Campbell discussed his book, Fatal Abstraction.

Parallax Views w/ J.G. Michael
Rafah Reduced to Rubble as Turmoil in Israel Ratchets Up: Reflections from an Israeli w/ Ori Goldberg

Apr 8, 2025 · 90:56


Recorded 4/7/2025. On this edition of Parallax Views, Israeli commentator Ori Goldberg returns to the show to discuss the latest developments in Gaza and Israel. This conversation came about due to the horrific stories coming out of the southern Gaza city of Rafah, and it touches upon that as well as the political turmoil currently rising to a fever pitch in Israel. J.G. specifically reached out to interview Ori in the hopes of trying to make sense of what is happening on the ground. Abstractions are often attendant to discussions of Israel/Palestine, but the human cost cannot be forgotten. That is what led to this discussion, and it proved difficult on some level due to the intense nature of the horrors we've seen in the past year and a half, whether it be the events of October 7th or the scenes coming out of Rafah. Ori's approach is highly reflective and as such has a unique quality. Whether you agree or disagree with Ori's thinking, this will hopefully be a powerful discussion.

The Human Design Hive Podcast
Understanding the Fluidity of Your Defined Gates

Apr 8, 2025 · 69:28


In this episode of Human Design Hive, Dana and Hali dive into a refreshing perspective on defined gates in your Human Design chart. If you've ever felt boxed in by gate descriptions or wondered why your experience doesn't match what you've read about your gates, you're in the right place!

What You'll Discover:
- Why your defined gates represent consistent potential rather than fixed traits
- How conditioning, environment, and experiences affect how your gates express
- The difference between genotype (potential) and phenotype (expression) in both physical DNA and energetic DNA
- Real-life examples of how the same gate can express differently between people

Defined gates are like radio stations that are always broadcasting, but how you tune into that frequency can vary. Just like physical DNA doesn't always express the same way in everyone who carries it, your energetic DNA (Human Design) has fluidity in how it manifests.

The conversation includes examples of how gates like Gate 5 (Rhythm), Gate 51 (Shock), and Gate 64 (Abstraction) can express differently based on your unique circumstances and awareness.

Key Takeaways:
1. Your defined gates represent consistent access to certain energies, not a fixed expression
2. How these energies manifest depends on your type, strategy, authority, and lived experiences
3. There's no "right way" to express a gate—your expression is uniquely yours

Your Next Steps: If you'd like to explore your own gates more deeply, join the upcoming community call on Wednesday, April 23rd or book a one-on-one session with Dana to uncover what might be getting in your way of fully embodying your design.

Incarnation Cross of the Week: The Right Angle Cross of Service 3 (Gates 18, 17, 52, 58)

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit danaphillips.substack.com/subscribe

Startup Project
Warp Dev: The AI Terminal Changing Software Development | Zach Lloyd, CEO & Co-Founder of Warp Dev | Startup Project #98

Apr 6, 2025 · 50:51


Join host Nataraj as he sits down with Zach Lloyd, the founder and CEO of Warp, a company developing an intelligent terminal aimed at modernizing the command line experience for developers. Zach, a former principal engineer at Google, having worked on Google Sheets and Docs, and co-founder and CTO of Selfmade, shares his insights on the future of software development, the evolution of the terminal, and AI's role in building new products.

In this episode, they discuss:
- How Warp leverages AI to improve developer productivity
- The challenges of building an AI-powered developer tool
- The future of coding and the evolution of the terminal
- Bridging the gap between traditional terminals and modern IDEs
- The current AI hype cycle and its impact on the developer community

Guest: Zach Lloyd
Founder and CEO of Warp
Former Principal Engineer at Google
Co-founder & CTO of Selfmade
Website: https://www.warp.dev/

Host: Nataraj
Host of the Startup Project podcast
Senior PM at Azure & Investor
LinkedIn: https://www.linkedin.com/in/natarajsindam/
Twitter: https://x.com/natarajsindam
Email Updates: https://startupproject.substack.com/
Website: https://thestartupproject.io

Timestamps:
00:01 - Introduction and Guest Introduction
00:55 - What is Warp?
02:59 - How Developers Use Warp
04:56 - Warp's Compatibility with Existing Developer Tools
05:21 - Warp's Intelligence and Features
06:31 - Integrating Existing Developer Nuances into Warp
07:24 - Warp's AI-Powered Enhancements
10:06 - The Future of IDEs and Terminals
13:50 - The Evolution of Abstraction in Software Development
16:37 - The AI Hype Cycle and Developer Productivity
18:07 - Developer Feedback and Adoption of Warp
20:30 - Go-to-Market Strategy and Customer Acquisition
21:33 - Leveraging LLMs in Warp
23:28 - The Role of AI Agents in Software Development
25:49 - Cost and Sustainability of AI-Powered Tools
27:17 - Warp's Pricing Model and Margins
30:04 - Open-Source Models and Profitability
32:43 - Key Metrics for Warp's Success
34:45 - Go-to-Market Motion and Acquiring Customers
37:40 - Using AI in Building Warp
39:15 - The Impact of AI on Developer Demand
41:00 - The Current State of AI and Developer Productivity
43:31 - The Importance of Context and Knowledge in AI
44:31 - What Zach is Consuming
45:40 - Zach's Mentors
46:25 - Lessons Learned as a Founder

Subscribe: Subscribe to Startup Project for more engaging conversations with leading entrepreneurs!
https://startupproject.substack.com/

Tags: #StartupProject #Warp #AI #ArtificialIntelligence #Terminal #DeveloperTools #Coding #Productivity #SoftwareDevelopment #DevOps #VentureCapital #Entrepreneurship #Podcast #YouTube #Tech #Innovation

Sequences Magazine
Sequences Podcast No 267

Mar 30, 2025 · 179:34


Our opening edition reflects a sombre tone within the lighter side of electronic music, venturing into ambient symphonic, down-tempo, and deeper, darker shades of ambient sounds. In light of his passing on February 15, 2025, we honour David Parsons with two tracks from “Yatra” and his final album, “Portal,” which provided a heartfelt sonic embodiment of the land and its people in the unique blend of Western technology and Eastern music. Download Bios: https://we.tl/t-JQCXCyck5f

Playlist no 267
01.38 Boreal Talga ‘Melt Water' (album Selections From The Boreal Talga Collection) https://wayfarermusicgroup.bandcamp.com
06.41 Boreal Talga 'Tundra'
12.21 David Helpling & Eric “the” Taylor ‘The Precious Dark' (single) www.spottedpeccary.com
21.50 Sine 'Neuanfang' (single) https://www.sine-music.com
26.20 Gallery Six ‘Geshi' (album Kisetsu) https://le-mont-analogue.bandcamp.com/album/kisetsu
30.43 Gallery Six ‘Daikan'
35.16 Pietro Zollo ‘Going Beyond' (album Abstraction) https://projektrecords.bandcamp.com/album/abstraction
40.25 Pietro Zollo ‘Higher Existance'
45.06 David Parsons ‘Haha Puja' (album Yatra) ***https://www.groove.nl/shop/david-parsons-yatra/?v=7885444af42e
55.37 David Parsons ‘Be Still' (Album Portal) ***https://gterma.bandcamp.com/album/portal
01.04.12 Lawrence English ‘ETHKIBIII' (album Even The Horizon Knows Its Bounds) ***https://lawrenceenglish.bandcamp.com
01.09.43 Lawrence English ‘ETHKIBVII'
01.09.42 Anja ‘Void Of Abeyance' (album Algol) https://winter-light.bandcamp.com/album/algol
01.21.30 Anja ‘Creatures Without Eyes'
01.26.56 Alessandro Barbanera ‘When Darkness Drops Again (The Ancient Stars Are Holes In The Sky Tonight)' (album In The Darkness Let Me Dwell) https://owltotem.bandcamp.com/album/in-darkness-let-me-dwell
01.36.26 Interstellar Data Unit ‘Starship Antares' (album The Heliopause) https://interstellardataunit.bandcamp.com/album/crossing-the-heliopause
01.48.44 Bill Laswell & Peter Namlock ‘Angel Tech/Black Dawn' (The Psychonavigation) *** https://infinitefog.bandcamp.com/album/psychonavigation
02.03.53 Ai Yamamoto ‘Apple In The Sky' (album Less Hype More Hyphae) https://le-mont-analogue.bandcamp.com/album/less-hype-more-hyphae
02.08.56 Ai Yamamoto ‘Gigi's Lazy Day'
02.13.28 Ai Yamamoto + Dan West '10,000 Steps' (album Microdoses) https://aiyamamoto.bandcamp.com/album/microdoses
02.18.22 Diogene ‘Frozen Lake' (album Memories From A Past Life) https://marenostrumlabel.bandcamp.com/album/memories-from-a-past-life
02.21.21 The Choir 'Slippery Moss' (album Translucent) https://thechoir1.bandcamp.com/album/translucent
02.25.50 The Choir ‘Plastic Swords'
02.30.03 Carl Lord ‘Ice Glow Transformation' (EP The Transformation) https://heartdancerecords.bandcamp.com
02.34.37 Ambiente Soltice & Billy Denk ‘Air & Water' (album Eclipse) *** https://wayfarermusicgroup.bandcamp.com
02.44.57 Ian Crawford ‘Lost In The Leaves' (album Slow Pull On A Long Thread) https://wayfarermusicgroup.bandcamp.com
02.51.58 Parallel Relax ‘Libra' (album Twilight) https://heartdancerecords.bandcamp.com
02.55.47 Parallel Relax ‘Andromeda' Edit ***

Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

If you're in SF: Join us for the Claude Plays Pokemon hackathon this Sunday! If you're not: Fill out the 2025 State of AI Eng survey for $250 in Amazon cards!

We are SO excited to share our conversation with Dharmesh Shah, co-founder of HubSpot and creator of Agent.ai.

A particularly compelling concept we discussed is the idea of "hybrid teams" - the next evolution in workplace organization where human workers collaborate with AI agents as team members. Just as we previously saw hybrid teams emerge in terms of full-time vs. contract workers, or in-office vs. remote workers, Dharmesh predicts that the next frontier will be teams composed of both human and AI members. This raises interesting questions about team dynamics, trust, and how to effectively delegate tasks between human and AI team members.

The discussion of business models in AI reveals an important distinction between Work as a Service (WaaS) and Results as a Service (RaaS), something Dharmesh has written extensively about. While RaaS has gained popularity, particularly in customer support applications where outcomes are easily measurable, Dharmesh argues that this model may be over-indexed. Not all AI applications have clearly definable outcomes or consistent economic value per transaction, making WaaS more appropriate in many cases. This insight is particularly relevant for businesses considering how to monetize AI capabilities.

The technical challenges of implementing effective agent systems are also explored, particularly around memory and authentication. Shah emphasizes the importance of cross-agent memory sharing and the need for more granular control over data access. He envisions a future where users can selectively share parts of their data with different agents, similar to how OAuth works but with much finer control.
This points to significant opportunities in developing infrastructure for secure and efficient agent-to-agent communication and data sharing.

Other highlights from our conversation:
* The Evolution of AI-Powered Agents – Exploring how AI agents have evolved from simple chatbots to sophisticated multi-agent systems, and the role of MCPs in enabling that.
* Hybrid Digital Teams and the Future of Work – How AI agents are becoming teammates rather than just tools, and what this means for business operations and knowledge work.
* Memory in AI Agents – The importance of persistent memory in AI systems and how shared memory across agents could enhance collaboration and efficiency.
* Business Models for AI Agents – Exploring the shift from software as a service (SaaS) to work as a service (WaaS) and results as a service (RaaS), and what this means for monetization.
* The Role of Standards Like MCP – Why MCP has been widely adopted and how it enables agent collaboration, tool use, and discovery.
* The Future of AI Code Generation and Software Engineering – How AI-assisted coding is changing the role of software engineers and what skills will matter most in the future.
* Domain Investing and Efficient Markets – Dharmesh's approach to domain investing and how inefficiencies in digital asset markets create business opportunities.
* The Philosophy of Saying No – Lessons from "Sorry, You Must Pass" and how prioritization leads to greater productivity and focus.

Timestamps
* 00:00 Introduction and Guest Welcome
* 02:29 Dharmesh Shah's Journey into AI
* 05:22 Defining AI Agents
* 06:45 The Evolution and Future of AI Agents
* 13:53 Graph Theory and Knowledge Representation
* 20:02 Engineering Practices and Overengineering
* 25:57 The Role of Junior Engineers in the AI Era
* 28:20 Multi-Agent Systems and MCP Standards
* 35:55 LinkedIn's Legal Battles and Data Scraping
* 37:32 The Future of AI and Hybrid Teams
* 39:19 Building Agent AI: A Professional Network for Agents
* 40:43 Challenges and Innovations in Agent AI
* 45:02 The Evolution of UI in AI Systems
* 01:00:25 Business Models: Work as a Service vs. Results as a Service
* 01:09:17 The Future Value of Engineers
* 01:09:51 Exploring the Role of Agents
* 01:10:28 The Importance of Memory in AI
* 01:11:02 Challenges and Opportunities in AI Memory
* 01:12:41 Selective Memory and Privacy Concerns
* 01:13:27 The Evolution of AI Tools and Platforms
* 01:18:23 Domain Names and AI Projects
* 01:32:08 Balancing Work and Personal Life
* 01:35:52 Final Thoughts and Reflections

Transcript

Alessio [00:00:04]: Hey everyone, welcome back to the Latent Space podcast. This is Alessio, partner and CTO at Decibel Partners, and I'm joined by my co-host Swyx, founder of Small AI.swyx [00:00:12]: Hello, and today we're super excited to have Dharmesh Shah to join us. I guess your relevant title here is founder of Agent AI.Dharmesh [00:00:20]: Yeah, that's true for this. Yeah, creator of Agent.ai and co-founder of HubSpot.swyx [00:00:25]: Co-founder of HubSpot, which I followed for many years, I think 18 years now, gonna be 19 soon. And you caught, you know, people can catch up on your HubSpot story elsewhere. I should also thank Sean Puri, who I've chatted with back and forth, who's been, I guess, getting me in touch with your people. But also, I think like, just giving us a lot of context, because obviously, My First Million joined you guys, and they've been chatting with you guys a lot. So for the business side, we can talk about that, but I kind of wanted to engage your CTO, agent, engineer side of things.
So how did you get agent religion?Dharmesh [00:01:00]: Let's see. So I've been working, I'll take like a half step back, a decade or so ago, even though actually more than that. So even before HubSpot, the company I was contemplating that I had named for was called Ingenisoft. And the idea behind Ingenisoft was a natural language interface to business software. Now realize this is 20 years ago, so that was a hard thing to do. But the actual use case that I had in mind was, you know, we had data sitting in business systems like a CRM or something like that. And my kind of what I thought clever at the time. Oh, what if we used email as the kind of interface to get to business software? And the motivation for using email is that it automatically works when you're offline. So imagine I'm getting on a plane or I'm on a plane. There was no internet on planes back then. It's like, oh, I'm going through business cards from an event I went to. I can just type things into an email just to have them all in the backlog. When it reconnects, it sends those emails to a processor that basically kind of parses effectively the commands and updates the software, sends you the file, whatever it is. And there was a handful of commands. I was a little bit ahead of the times in terms of what was actually possible. And I reattempted this natural language thing with a product called ChatSpot that I did back 20...swyx [00:02:12]: Yeah, this is your first post-ChatGPT project.Dharmesh [00:02:14]: I saw it come out. Yeah. And so I've always been kind of fascinated by this natural language interface to software. Because, you know, as software developers, myself included, we've always said, oh, we build intuitive, easy-to-use applications. And it's not intuitive at all, right? Because what we're doing is... We're taking the mental model that's in our head of what we're trying to accomplish with said piece of software and translating that into a series of touches and swipes and clicks and things like that. And there's nothing natural or intuitive about it. And so natural language interfaces, for the first time, you know, whatever the thought is you have in your head and expressed in whatever language that you normally use to talk to yourself in your head, you can just sort of emit that and have software do something. And I thought that was kind of a breakthrough, which it has been. And it's gone. So that's where I first started getting into the journey. I started because now it actually works, right? So once we got ChatGPT and you can take, even with a few-shot example, convert something into structured, even back in the ChatGP 3.5 days, it did a decent job in a few-shot example, convert something to structured text if you knew what kinds of intents you were going to have. And so that happened. And that ultimately became a HubSpot project. But then agents intrigued me because I'm like, okay, well, that's the next step here. So chat's great. Love Chat UX. But if we want to do something even more meaningful, it felt like the next kind of advancement is not this kind of, I'm chatting with some software in a kind of a synchronous back and forth model, is that software is going to do things for me in kind of a multi-step way to try and accomplish some goals. So, yeah, that's when I first got started. It's like, okay, what would that look like? Yeah. And I've been obsessed ever since, by the way.Alessio [00:03:55]: Which goes back to your first experience with it, which is like you're offline. Yeah. And you want to do a task. 
You don't need to do it right now. You just want to queue it up for somebody to do it for you. Yes. As you think about agents, like, let's start at the easy question, which is like, how do you define an agent? Maybe. You mean the hardest question in the universe? Is that what you mean?Dharmesh [00:04:12]: You said you have an irritating take. I do have an irritating take. I think, well, some number of people have been irritated, including within my own team. So I have a very broad definition for agents, which is it's AI-powered software that accomplishes a goal. Period. That's it. And what irritates people about it is like, well, that's so broad as to be completely non-useful. And I understand that. I understand the criticism. But in my mind, if you kind of fast forward months, I guess, in AI years, the implementation of it, and we're already starting to see this, and we'll talk about this, different kinds of agents, right? So I think in addition to having a usable definition, and I like yours, by the way, and we should talk more about that, that you just came out with, the classification of agents actually is also useful, which is, is it autonomous or non-autonomous? Does it have a deterministic workflow? Does it have a non-deterministic workflow? Is it working synchronously? Is it working asynchronously? Then you have the different kind of interaction modes. Is it a chat agent, kind of like a customer support agent would be? You're having this kind of back and forth. Is it a workflow agent that just does a discrete number of steps? So there's all these different flavors of agents. So if I were to draw it in a Venn diagram, I would draw a big circle that says, this is agents, and then I have a bunch of circles, some overlapping, because they're not mutually exclusive. And so I think that's what's interesting, and we're seeing development along a bunch of different paths, right? So if you look at the first implementation of agent frameworks, you look at Baby AGI and AutoGBT, I think it was, not Autogen, that's the Microsoft one. They were way ahead of their time because they assumed this level of reasoning and execution and planning capability that just did not exist, right? So it was an interesting thought experiment, which is what it was. Even the guy that, I'm an investor in Yohei's fund that did Baby AGI. It wasn't ready, but it was a sign of what was to come. And so the question then is, when is it ready? And so lots of people talk about the state of the art when it comes to agents. I'm a pragmatist, so I think of the state of the practical. It's like, okay, well, what can I actually build that has commercial value or solves actually some discrete problem with some baseline of repeatability or verifiability?swyx [00:06:22]: There was a lot, and very, very interesting. I'm not irritated by it at all. Okay. As you know, I take a... There's a lot of anthropological view or linguistics view. And in linguistics, you don't want to be prescriptive. You want to be descriptive. Yeah. So you're a goals guy. That's the key word in your thing. And other people have other definitions that might involve like delegated trust or non-deterministic work, LLM in the loop, all that stuff. The other thing I was thinking about, just the comment on Baby AGI, LGBT. Yeah. In that piece that you just read, I was able to go through our backlog and just kind of track the winter of agents and then the summer now. Yeah. And it's... We can tell the whole story as an oral history, just following that thread. 
And it's really just like, I think, I tried to explain the why now, right? Like I had, there's better models, of course. There's better tool use with like, they're just more reliable. Yep. Better tools with MCP and all that stuff. And I'm sure you have opinions on that too. Business model shift, which you like a lot. I just heard you talk about RAS with MFM guys. Yep. Cost is dropping a lot. Yep. Inference is getting faster. There's more model diversity. Yep. Yep. I think it's a subtle point. It means that like, you have different models with different perspectives. You don't get stuck in the basin of performance of a single model. Sure. You can just get out of it by just switching models. Yep. Multi-agent research and RL fine tuning. So I just wanted to let you respond to like any of that.Dharmesh [00:07:44]: Yeah. A couple of things. Connecting the dots on the kind of the definition side of it. So we'll get the irritation out of the way completely. I have one more, even more irritating leap on the agent definition thing. So here's the way I think about it. By the way, the kind of word agent, I looked it up, like the English dictionary definition. The old school agent, yeah. Is when you have someone or something that does something on your behalf, like a travel agent or a real estate agent acts on your behalf. It's like proxy, which is a nice kind of general definition. So the other direction I'm sort of headed, and it's going to tie back to tool calling and MCP and things like that, is if you, and I'm not a biologist by any stretch of the imagination, but we have these single-celled organisms, right? Like the simplest possible form of what one would call life. But it's still life. It just happens to be single-celled. And then you can combine cells and then cells become specialized over time. And you have much more sophisticated organisms, you know, kind of further down the spectrum. In my mind, at the most fundamental level, you can almost think of having atomic agents. What is the simplest possible thing that's an agent that can still be called an agent? What is the equivalent of a kind of single-celled organism? And the reason I think that's useful is right now we're headed down the road, which I think is very exciting around tool use, right? That says, okay, the LLMs now can be provided a set of tools that it calls to accomplish whatever it needs to accomplish in the kind of furtherance of whatever goal it's trying to get done. And I'm not overly bothered by it, but if you think about it, if you just squint a little bit and say, well, what if everything was an agent? And what if tools were actually just atomic agents? Because then it's turtles all the way down, right? Then it's like, oh, well, all that's really happening with tool use is that we have a network of agents that know about each other through something like an MMCP and can kind of decompose a particular problem and say, oh, I'm going to delegate this to this set of agents. And why do we need to draw this distinction between tools, which are functions most of the time? And an actual agent. And so I'm going to write this irritating LinkedIn post, you know, proposing this. It's like, okay. And I'm not suggesting we should call even functions, you know, call them agents. But there is a certain amount of elegance that happens when you say, oh, we can just reduce it down to one primitive, which is an agent that you can combine in complicated ways to kind of raise the level of abstraction and accomplish higher order goals. 
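To make Dharmesh's "one primitive" framing concrete, here is a minimal Python sketch in which a plain tool is simply the smallest possible agent, and a composite workflow is itself an agent built from other agents. The Agent protocol, FunctionTool, and Pipeline names are invented for illustration only and do not come from agent.ai, MCP, or any framework mentioned in the episode.

```python
from dataclasses import dataclass
from typing import Callable, List, Protocol


class Agent(Protocol):
    """The single primitive: anything that takes a goal/input and returns a result."""
    name: str

    def run(self, goal: str) -> str:
        ...


@dataclass
class FunctionTool:
    """An 'atomic agent': the simplest thing that still qualifies, wrapping one function."""
    name: str
    fn: Callable[[str], str]

    def run(self, goal: str) -> str:
        return self.fn(goal)


@dataclass
class Pipeline:
    """A composite agent: delegates to sub-agents in sequence, raising the level of abstraction."""
    name: str
    steps: List[Agent]

    def run(self, goal: str) -> str:
        result = goal
        for step in self.steps:
            result = step.run(result)
        return result


if __name__ == "__main__":
    trim = FunctionTool("trim", lambda s: s.strip())
    shout = FunctionTool("shout", lambda s: s.upper())
    # The pipeline satisfies the same interface, so it can be nested inside larger agents.
    cleanup = Pipeline("cleanup", [trim, shout])
    print(cleanup.run("  summarize this thread  "))  # -> "SUMMARIZE THIS THREAD"
```

Because the composite satisfies the same interface as the atom, "tools vs. agents" stops being a type distinction and becomes just a question of how much is composed inside.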
Anyway, that's my answer. I'd say that's a success. Thank you for coming to my TED Talk on agent definitions.Alessio [00:09:54]: How do you define the minimum viable agent? Do you already have a definition for, like, where you draw the line between a cell and an atom? Yeah.Dharmesh [00:10:02]: So in my mind, it has to, at some level, use AI in order for it to—otherwise, it's just software. It's like, you know, we don't need another word for that. And so that's probably where I draw the line. So then the question, you know, the counterargument would be, well, if that's true, then lots of tools themselves are actually not agents because they're just doing a database call or a REST API call or whatever it is they're doing. And that does not necessarily qualify them, which is a fair counterargument. And I accept that. It's like a good argument. I still like to think about—because we'll talk about multi-agent systems, because I think—so we've accepted, which I think is true, lots of people have said it, and you've hopefully combined some of those clips of really smart people saying this is the year of agents, and I completely agree, it is the year of agents. But then shortly after that, it's going to be the year of multi-agent systems or multi-agent networks. I think that's where it's going to be headed next year. Yeah.swyx [00:10:54]: Opening eyes already on that. Yeah. My quick philosophical engagement with you on this. I often think about kind of the other spectrum, the other end of the cell spectrum. So single cell is life, multi-cell is life, and you clump a bunch of cells together in a more complex organism, they become organs, like an eye and a liver or whatever. And then obviously we consider ourselves one life form. There's not like a lot of lives within me. I'm just one life. And now, obviously, I don't think people don't really like to anthropomorphize agents and AI. Yeah. But we are extending our consciousness and our brain and our functionality out into machines. I just saw you were a Bee. Yeah. Which is, you know, it's nice. I have a limitless pendant in my pocket.Dharmesh [00:11:37]: I got one of these boys. Yeah.swyx [00:11:39]: I'm testing it all out. You know, got to be early adopters. But like, we want to extend our personal memory into these things so that we can be good at the things that we're good at. And, you know, machines are good at it. Machines are there. So like, my definition of life is kind of like going outside of my own body now. I don't know if you've ever had like reflections on that. Like how yours. How our self is like actually being distributed outside of you. Yeah.Dharmesh [00:12:01]: I don't fancy myself a philosopher. But you went there. So yeah, I did go there. I'm fascinated by kind of graphs and graph theory and networks and have been for a long, long time. And to me, we're sort of all nodes in this kind of larger thing. It just so happens that we're looking at individual kind of life forms as they exist right now. But so the idea is when you put a podcast out there, there's these little kind of nodes you're putting out there of like, you know, conceptual ideas. Once again, you have varying kind of forms of those little nodes that are up there and are connected in varying and sundry ways. And so I just think of myself as being a node in a massive, massive network. And I'm producing more nodes as I put content or ideas. 
And, you know, you spend some portion of your life collecting dots, experiences, people, and some portion of your life then connecting dots from the ones that you've collected over time. And I found that really interesting things happen and you really can't know in advance how those dots are necessarily going to connect in the future. And that's, yeah. So that's my philosophical take. That's the, yes, exactly. Coming back.Alessio [00:13:04]: Yep. Do you like graph as an agent? Abstraction? That's been one of the hot topics with LandGraph and Pydantic and all that.Dharmesh [00:13:11]: I do. The thing I'm more interested in terms of use of graphs, and there's lots of work happening on that now, is graph data stores as an alternative in terms of knowledge stores and knowledge graphs. Yeah. Because, you know, so I've been in software now 30 plus years, right? So it's not 10,000 hours. It's like 100,000 hours that I've spent doing this stuff. And so I've grew up with, so back in the day, you know, I started on mainframes. There was a product called IMS from IBM, which is basically an index database, what we'd call like a key value store today. Then we've had relational databases, right? We have tables and columns and foreign key relationships. We all know that. We have document databases like MongoDB, which is sort of a nested structure keyed by a specific index. We have vector stores, vector embedding database. And graphs are interesting for a couple of reasons. One is, so it's not classically structured in a relational way. When you say structured database, to most people, they're thinking tables and columns and in relational database and set theory and all that. Graphs still have structure, but it's not the tables and columns structure. And you could wonder, and people have made this case, that they are a better representation of knowledge for LLMs and for AI generally than other things. So that's kind of thing number one conceptually, and that might be true, I think is possibly true. And the other thing that I really like about that in the context of, you know, I've been in the context of data stores for RAG is, you know, RAG, you say, oh, I have a million documents, I'm going to build the vector embeddings, I'm going to come back with the top X based on the semantic match, and that's fine. All that's very, very useful. But the reality is something gets lost in the chunking process and the, okay, well, those tend, you know, like, you don't really get the whole picture, so to speak, and maybe not even the right set of dimensions on the kind of broader picture. And it makes intuitive sense to me that if we did capture it properly in a graph form, that maybe that feeding into a RAG pipeline will actually yield better results for some use cases, I don't know, but yeah.Alessio [00:15:03]: And do you feel like at the core of it, there's this difference between imperative and declarative programs? Because if you think about HubSpot, it's like, you know, people and graph kind of goes hand in hand, you know, but I think maybe the software before was more like primary foreign key based relationship, versus now the models can traverse through the graph more easily.Dharmesh [00:15:22]: Yes. So I like that representation. There's something. It's just conceptually elegant about graphs and just from the representation of it, they're much more discoverable, you can kind of see it, there's observability to it, versus kind of embeddings, which you can't really do much with as a human. 
You know, once they're in there, you can't pull stuff back out. But yeah, I like that kind of idea of it. And the other thing that's kind of, because I love graphs, I've been long obsessed with PageRank from back in the early days. And, you know, one of the kind of simplest algorithms in terms of coming up, you know, with a phone, everyone's been exposed to PageRank. And the idea is that, and so I had this other idea for a project, not a company, and I have hundreds of these, called NodeRank, is to be able to take the idea of PageRank and apply it to an arbitrary graph that says, okay, I'm going to define what authority looks like and say, okay, well, that's interesting to me, because then if you say, I'm going to take my knowledge store, and maybe this person that contributed some number of chunks to the graph data store has more authority on this particular use case or prompt that's being submitted than this other one that may, or maybe this one was more. popular, or maybe this one has, whatever it is, there should be a way for us to kind of rank nodes in a graph and sort them in some, some useful way. Yeah.swyx [00:16:34]: So I think that's generally useful for, for anything. I think the, the problem, like, so even though at my conferences, GraphRag is super popular and people are getting knowledge, graph religion, and I will say like, it's getting space, getting traction in two areas, conversation memory, and then also just rag in general, like the, the, the document data. Yeah. It's like a source. Most ML practitioners would say that knowledge graph is kind of like a dirty word. The graph database, people get graph religion, everything's a graph, and then they, they go really hard into it and then they get a, they get a graph that is too complex to navigate. Yes. And so like the, the, the simple way to put it is like you at running HubSpot, you know, the power of graphs, the way that Google has pitched them for many years, but I don't suspect that HubSpot itself uses a knowledge graph. No. Yeah.Dharmesh [00:17:26]: So when is it over engineering? Basically? It's a great question. I don't know. So the question now, like in AI land, right, is the, do we necessarily need to understand? So right now, LLMs for, for the most part are somewhat black boxes, right? We sort of understand how the, you know, the algorithm itself works, but we really don't know what's going on in there and, and how things come out. So if a graph data store is able to produce the outcomes we want, it's like, here's a set of queries I want to be able to submit and then it comes out with useful content. Maybe the underlying data store is as opaque as a vector embeddings or something like that, but maybe it's fine. Maybe we don't necessarily need to understand it to get utility out of it. And so maybe if it's messy, that's okay. Um, that's, it's just another form of lossy compression. Uh, it's just lossy in a way that we just don't completely understand in terms of, because it's going to grow organically. Uh, and it's not structured. It's like, ah, we're just gonna throw a bunch of stuff in there. Let the, the equivalent of the embedding algorithm, whatever they called in graph land. Um, so the one with the best results wins. I think so. Yeah.swyx [00:18:26]: Or is this the practical side of me is like, yeah, it's, if it's useful, we don't necessarilyDharmesh [00:18:30]: need to understand it.swyx [00:18:30]: I have, I mean, I'm happy to push back as long as you want. 
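The "NodeRank" idea described here is essentially PageRank run over an arbitrary graph, with authority defined however you choose. Below is a minimal, dependency-free sketch of how such a scorer might look; the function name, the toy chunk graph, and the authority weights are illustrative assumptions, not code from agent.ai or any real project.

```python
def node_rank(graph, authority=None, damping=0.85, iterations=50):
    """PageRank-style scoring over an arbitrary directed graph.

    graph: dict mapping node -> list of nodes it links to.
    authority: optional dict of intrinsic weights (e.g. how much you trust the
               contributor of a chunk); used as the teleport distribution.
    """
    nodes = set(graph) | {n for targets in graph.values() for n in targets}
    n = len(nodes) or 1

    # Normalize the authority prior; default to uniform teleportation.
    if authority is None:
        prior = {node: 1.0 / n for node in nodes}
    else:
        total = sum(authority.get(node, 0.0) for node in nodes) or 1.0
        prior = {node: authority.get(node, 0.0) / total for node in nodes}

    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) * prior[node] for node in nodes}
        for node, targets in graph.items():
            if not targets:
                continue  # dangling node: its outgoing mass is simply dropped in this sketch
            share = damping * rank[node] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank


# Toy knowledge graph: chunks citing or linking to other chunks.
graph = {"chunk_a": ["chunk_b", "chunk_c"], "chunk_b": ["chunk_c"], "chunk_c": []}
scores = node_rank(graph, authority={"chunk_a": 2.0, "chunk_b": 1.0, "chunk_c": 1.0})
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

In a retrieval setting, scores like these could be blended with semantic-similarity scores when deciding which chunks from a knowledge store to hand to the model.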
Uh, it's not practical to evaluate like the 10 different options out there because it takes time. It takes people, it takes, you know, resources, right? Set. That's the first thing. Second thing is your evals are typically on small things and some things only work at scale. Yup. Like graphs. Yup.Dharmesh [00:18:46]: Yup. That's, yeah, no, that's fair. And I think this is one of the challenges in terms of implementation of graph databases is that the most common approach that I've seen developers do, I've done it myself, is that, oh, I've got a Postgres database or a MySQL or whatever. I can represent a graph with a very set of tables with a parent child thing or whatever. And that sort of gives me the ability, uh, why would I need anything more than that? And the answer is, well, if you don't need anything more than that, you don't need anything more than that. But there's a high chance that you're sort of missing out on the actual value that, uh, the graph representation gives you. Which is the ability to traverse the graph, uh, efficiently in ways that kind of going through the, uh, traversal in a relational database form, even though structurally you have the data, practically you're not gonna be able to pull it out in, in useful ways. Uh, so you wouldn't like represent a social graph, uh, in, in using that kind of relational table model. It just wouldn't scale. It wouldn't work.swyx [00:19:36]: Uh, yeah. Uh, I think we want to move on to MCP. Yeah. But I just want to, like, just engineering advice. Yeah. Uh, obviously you've, you've, you've run, uh, you've, you've had to do a lot of projects and run a lot of teams. Do you have a general rule for over-engineering or, you know, engineering ahead of time? You know, like, because people, we know premature engineering is the root of all evil. Yep. But also sometimes you just have to. Yep. When do you do it? Yes.Dharmesh [00:19:59]: It's a great question. This is, uh, a question as old as time almost, which is what's the right and wrong levels of abstraction. That's effectively what, uh, we're answering when we're trying to do engineering. I tend to be a pragmatist, right? So here's the thing. Um, lots of times doing something the right way. Yeah. It's like a marginal increased cost in those cases. Just do it the right way. And this is what makes a, uh, a great engineer or a good engineer better than, uh, a not so great one. It's like, okay, all things being equal. If it's going to take you, you know, roughly close to constant time anyway, might as well do it the right way. Like, so do things well, then the question is, okay, well, am I building a framework as the reusable library? To what degree, uh, what am I anticipating in terms of what's going to need to change in this thing? Uh, you know, along what dimension? And then I think like a business person in some ways, like what's the return on calories, right? So, uh, and you look at, um, energy, the expected value of it's like, okay, here are the five possible things that could happen, uh, try to assign probabilities like, okay, well, if there's a 50% chance that we're going to go down this particular path at some day, like, or one of these five things is going to happen and it costs you 10% more to engineer for that. It's basically, it's something that yields a kind of interest compounding value. Um, as you get closer to the time of, of needing that versus having to take on debt, which is when you under engineer it, you're taking on debt. 
You're going to have to pay off when you do get to that eventuality where something happens. One thing as a pragmatist, uh, so I would rather under engineer something than over engineer it. If I were going to err on the side of something, and here's the reason is that when you under engineer it, uh, yes, you take on tech debt, uh, but the interest rate is relatively known and payoff is very, very possible, right? Which is, oh, I took a shortcut here as a result of which now this thing that should have taken me a week is now going to take me four weeks. Fine. But if that particular thing that you thought might happen, never actually, you never have that use case transpire or just doesn't, it's like, well, you just save yourself time, right? And that has value because you were able to do other things instead of, uh, kind of slightly over-engineering it away, over-engineering it. But there's no perfect answers in art form in terms of, uh, and yeah, we'll, we'll bring kind of this layers of abstraction back on the code generation conversation, which we'll, uh, I think I have later on, butAlessio [00:22:05]: I was going to ask, we can just jump ahead quickly. Yeah. Like, as you think about vibe coding and all that, how does the. Yeah. Percentage of potential usefulness change when I feel like we over-engineering a lot of times it's like the investment in syntax, it's less about the investment in like arc exacting. Yep. Yeah. How does that change your calculus?Dharmesh [00:22:22]: A couple of things, right? One is, um, so, you know, going back to that kind of ROI or a return on calories, kind of calculus or heuristic you think through, it's like, okay, well, what is it going to cost me to put this layer of abstraction above the code that I'm writing now, uh, in anticipating kind of future needs. If the cost of fixing, uh, or doing under engineering right now. Uh, we'll trend towards zero that says, okay, well, I don't have to get it right right now because even if I get it wrong, I'll run the thing for six hours instead of 60 minutes or whatever. It doesn't really matter, right? Like, because that's going to trend towards zero to be able, the ability to refactor a code. Um, and because we're going to not that long from now, we're going to have, you know, large code bases be able to exist, uh, you know, as, as context, uh, for a code generation or a code refactoring, uh, model. So I think it's going to make it, uh, make the case for under engineering, uh, even stronger. Which is why I take on that cost. You just pay the interest when you get there, it's not, um, just go on with your life vibe coded and, uh, come back when you need to. Yeah.Alessio [00:23:18]: Sometimes I feel like there's no decision-making in some things like, uh, today I built a autosave for like our internal notes platform and I literally just ask them cursor. Can you add autosave? Yeah. I don't know if it's over under engineer. Yep. I just vibe coded it. Yep. And I feel like at some point we're going to get to the point where the models kindDharmesh [00:23:36]: of decide where the right line is, but this is where the, like the, in my mind, the danger is, right? So there's two sides to this. One is the cost of kind of development and coding and things like that stuff that, you know, we talk about. 
But then like in your example, you know, one of the risks that we have is that because adding a feature, uh, like a save or whatever the feature might be to a product as that price tends towards zero, are we going to be less discriminant about what features we add as a result of making more product products more complicated, which has a negative impact on the user and navigate negative impact on the business. Um, and so that's the thing I worry about if it starts to become too easy, are we going to be. Too promiscuous in our, uh, kind of extension, adding product extensions and things like that. It's like, ah, why not add X, Y, Z or whatever back then it was like, oh, we only have so many engineering hours or story points or however you measure things. Uh, that least kept us in check a little bit. Yeah.Alessio [00:24:22]: And then over engineering, you're like, yeah, it's kind of like you're putting that on yourself. Yeah. Like now it's like the models don't understand that if they add too much complexity, it's going to come back to bite them later. Yep. So they just do whatever they want to do. Yeah. And I'm curious where in the workflow that's going to be, where it's like, Hey, this is like the amount of complexity and over-engineering you can do before you got to ask me if we should actually do it versus like do something else.Dharmesh [00:24:45]: So you know, we've already, let's like, we're leaving this, uh, in the code generation world, this kind of compressed, um, cycle time. Right. It's like, okay, we went from auto-complete, uh, in the GitHub co-pilot to like, oh, finish this particular thing and hit tab to a, oh, I sort of know your file or whatever. I can write out a full function to you to now I can like hold a bunch of the context in my head. Uh, so we can do app generation, which we have now with lovable and bolt and repletage. Yeah. Association and other things. So then the question is, okay, well, where does it naturally go from here? So we're going to generate products. Make sense. We might be able to generate platforms as though I want a platform for ERP that does this, whatever. And that includes the API's includes the product and the UI, and all the things that make for a platform. There's no nothing that says we would stop like, okay, can you generate an entire software company someday? Right. Uh, with the platform and the monetization and the go-to-market and the whatever. And you know, that that's interesting to me in terms of, uh, you know, what, when you take it to almost ludicrous levels. of abstract.swyx [00:25:39]: It's like, okay, turn it to 11. You mentioned vibe coding, so I have to, this is a blog post I haven't written, but I'm kind of exploring it. Is the junior engineer dead?Dharmesh [00:25:49]: I don't think so. I think what will happen is that the junior engineer will be able to, if all they're bringing to the table is the fact that they are a junior engineer, then yes, they're likely dead. But hopefully if they can communicate with carbon-based life forms, they can interact with product, if they're willing to talk to customers, they can take their kind of basic understanding of engineering and how kind of software works. I think that has value. So I have a 14-year-old right now who's taking Python programming class, and some people ask me, it's like, why is he learning coding? And my answer is, is because it's not about the syntax, it's not about the coding. What he's learning is like the fundamental thing of like how things work. 
And there's value in that. I think there's going to be timeless value in systems thinking and abstractions and what that means. And whether functions manifested as math, which he's going to get exposed to regardless, or there are some core primitives to the universe, I think, that the more you understand them, those are what I would kind of think of as like really large dots in your life that will have a higher gravitational pull and value to them that you'll then be able to. So I want him to collect those dots, and he's not resisting. So it's like, okay, while he's still listening to me, I'm going to have him do things that I think will be useful.swyx [00:26:59]: You know, part of one of the pitches that I evaluated for AI engineer is a term. And the term is that maybe the traditional interview path or career path of software engineer goes away, which is because what's the point of lead code? Yeah. And, you know, it actually matters more that you know how to work with AI and to implement the things that you want. Yep.Dharmesh [00:27:16]: That's one of the like interesting things that's happened with generative AI. You know, you go from machine learning and the models and just that underlying form, which is like true engineering, right? Like the actual, what I call real engineering. I don't think of myself as a real engineer, actually. I'm a developer. But now with generative AI. We call it AI and it's obviously got its roots in machine learning, but it just feels like fundamentally different to me. Like you have the vibe. It's like, okay, well, this is just a whole different approach to software development to so many different things. And so I'm wondering now, it's like an AI engineer is like, if you were like to draw the Venn diagram, it's interesting because the cross between like AI things, generative AI and what the tools are capable of, what the models do, and this whole new kind of body of knowledge that we're still building out, it's still very young, intersected with kind of classic engineering, software engineering. Yeah.swyx [00:28:04]: I just described the overlap as it separates out eventually until it's its own thing, but it's starting out as a software. Yeah.Alessio [00:28:11]: That makes sense. So to close the vibe coding loop, the other big hype now is MCPs. Obviously, I would say Cloud Desktop and Cursor are like the two main drivers of MCP usage. I would say my favorite is the Sentry MCP. I can pull in errors and then you can just put the context in Cursor. How do you think about that abstraction layer? Does it feel... Does it feel almost too magical in a way? Do you think it's like you get enough? Because you don't really see how the server itself is then kind of like repackaging theDharmesh [00:28:41]: information for you? I think MCP as a standard is one of the better things that's happened in the world of AI because a standard needed to exist and absent a standard, there was a set of things that just weren't possible. Now, we can argue whether it's the best possible manifestation of a standard or not. Does it do too much? Does it do too little? I get that, but it's just simple enough to both be useful and unobtrusive. It's understandable and adoptable by mere mortals, right? It's not overly complicated. You know, a reasonable engineer can put a stand up an MCP server relatively easily. The thing that has me excited about it is like, so I'm a big believer in multi-agent systems. And so that's going back to our kind of this idea of an atomic agent. 
So imagine the MCP server. Obviously it calls tools, but the way I think about it... so my current passion project is agent.ai. And we'll talk more about that in a little bit, not to promote the project at all, but because there are some interesting ideas in there. One of which is that if agents are going to collaborate and be able to delegate, there's going to need to be some form of discovery, and we're going to need some standard way of doing it. It's like, okay, I just need to know what this thing over here is capable of. We're going to need a registry, which Anthropic's working on; I'm sure others will too, and people have been building directories, and there's going to be a standard around that as well. How do you build out a directory of MCP servers? I think that's going to unlock so many things, and we're already starting to see it. So I think MCP or something like it is going to be the next major unlock, because it allows systems that don't know about each other, and don't need to, to work together. It's that kind of decoupling of, say, Sentry and whatever tools someone else was building. And it's not just about Claude Desktop or things like that. Even on the client side, I think we're going to see very interesting consumers of MCP, MCP clients beyond just the chatbot-style things like Claude Desktop and Cursor. But yeah, I'm very excited about MCP in that general direction. swyx [00:30:39]: I think the typical cynical developer take is, we have OpenAPI. Yeah. What's the new thing? I don't know if you have a quick MCP-versus-everything-else take? Yeah. Dharmesh [00:30:49]: So I like OpenAPI, right? It's just a descriptive thing. It's OpenAPI. OpenAPI. Yes, that's what I meant. So it's basically a self-documenting thing. We can machine-generate lots of things from that output. It's a structured definition of an API. I get that, love it. But MCP is sort of use-case specific. It's perfect for exactly what we're trying to use it for around LLMs, in terms of discovery. It's like, okay, I don't necessarily need to know all this detail. And so right now we have... we'll talk more about MCP server implementations. But we will? I think... I don't know, maybe we won't. At least it's in my head, like a background process. But I do think MCP adds value above OpenAPI, just because it solves this particular thing. And if we had come to the world, which we have, saying, hey, we already have OpenAPI: if that were good enough for the universe, the universe would have adopted it already. There's a reason why MCP is taking off: it marginally adds something that was missing before and doesn't go too far. And that's why the rate of adoption has been what it is. You folks have written about this and talked about it. Yeah, why MCP won. Yeah. And it won because the universe decided that this was useful, and maybe it gets supplanted by something else. Yeah. And maybe we discover, oh, maybe OpenAPI was good enough the whole time. I doubt that. swyx [00:32:09]: The meta lesson, this is... I mean, he's an investor in DevTools companies. I work in developer experience and DevRel in DevTools companies. Yep. Everyone wants to own the standard. Yeah. I'm sure you guys have tried to launch your own standards. Actually, is HubSpot known for a standard? You know, obviously inbound marketing.
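Stepping back to the discovery and registry idea above: at its simplest, a directory of MCP servers or agents is a catalogue of capability manifests you can filter against what a caller needs. A rough sketch in plain Python; the manifest fields and the example entries are invented for illustration and do not reflect any real registry format.

```python
# Hypothetical sketch of agent/MCP-server discovery -- the manifest format and
# registry contents are invented for illustration, not a real standard.
from dataclasses import dataclass, field

@dataclass
class ServerManifest:
    name: str
    description: str
    capabilities: set[str] = field(default_factory=set)

REGISTRY = [
    ServerManifest("sentry", "Error tracking and triage", {"errors", "stack-traces"}),
    ServerManifest("domain-tools", "Domain valuation", {"domains", "pricing"}),
    ServerManifest("social", "Social media posting", {"social", "publishing"}),
]

def discover(required: set[str]) -> list[ServerManifest]:
    """Return servers whose advertised capabilities cover what the caller needs."""
    return [m for m in REGISTRY if required <= m.capabilities]

if __name__ == "__main__":
    for match in discover({"domains"}):
        print(match.name, "-", match.description)
```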
But is there a standard or protocol that you ever tried to push? No. Dharmesh [00:32:30]: And there's a reason for this. Yeah. Is that? And I don't mean to speak for the people of HubSpot, but I personally... You kind of do. I'm not smart enough. That's not... I think I have a... You're smart. Not enough for that. I'm much better off understanding the standards that are out there. I'm more on the composability side: let's take the pieces of technology that exist out there and combine them in creative, unique ways. And I like to consume standards. It's not that I don't like to create them; I just don't think I have either the raw wattage or the credibility. It's like, okay, well, who the heck is Dharmesh, and why should we adopt a standard he created? swyx [00:33:07]: Yeah, I mean, there are people who don't monetize standards. Like, OpenTelemetry is a big standard, and LightStep never capitalized on that. Dharmesh [00:33:15]: So, okay, if I were to do a standard, there are two things that have been in my head in the past. One was a very, very basic one around... I don't even have the domain, and I have a domain for everything... for open marketing. Because HubSpot grew up in the marketing space, and there was no standard around data formats and things like that. It doesn't go anywhere. But the other one, and I did not mean to go here, but I'm going to go here: it's called OpenGraph. I know the term was already taken, but it hasn't been used for like 15 years now for its original purpose. What I think should exist in the world is this: right now, all of us are nodes in the social graph at Meta or the professional graph at LinkedIn, both of which are actually relatively closed in very annoying ways. Very, very closed, right? Especially LinkedIn. Especially LinkedIn. I personally believe that if it's my data, and if I would get utility out of it being open, I should be able to make my data open or publish it in whatever forms I choose, as long as I have control over it, as opt-in. So the idea around OpenGraph is: here's a standard, here's a way to publish it. I should be able to go to OpenGraph.org slash Dharmesh dot JSON and get it back. And it's like, here's your stuff, right? And I can choose along the way, and people can write to it, and I can approve. And there can be an entire system. And if I were to do that, I would do it as a public benefit, non-profit-y kind of thing, as a contribution to society. I wouldn't try to commercialize it. Have you looked at ATProto? What's that? ATProto. swyx [00:34:43]: It's the protocol behind Bluesky. Okay. My good friend Dan Abramov, who was the face of React for many, many years, now works there. And he actually did a talk that I can send you, which basically tries to articulate what you just said. He loves doing these really great analogies, which I think you'll like. Like, a lot of our data is behind a handle, behind a domain. Yep. So he's like, all right, what if we flip that? What if it was our handle and then the domain? Yep. And that's really saying your data should belong to you. Yep. And I should not have to wait 30 days for my Twitter data to export. Yep. Dharmesh [00:35:19]: You should at least be able to automate it, or, yes, I should be able to plug it into an agentic thing. Yeah. Yes. I think we're... Because so much of our data is...
Locked up. I think the trick here isn't the standard. It is getting the normies to care. swyx [00:35:37]: Yeah. Because normies don't care. Dharmesh [00:35:38]: That's true. But building on that: normies don't care, and privacy is a really hot topic and an easy word to use, but it's not a binary thing. There are use cases, and we make these choices all the time, where I will trade, not all privacy, but some privacy for some productivity gain or some benefit to me. It's, oh, I don't care about that particular data being online if it gives me this in return, or I don't mind sharing this information with this company. Alessio [00:36:02]: If I'm getting this in return. But that should be my option. I think now with computer use, you can actually automate some of the exports. Yes. Something we've been doing internally is everybody exports their LinkedIn connections. Yep. And then internally we merge them together to see how we can connect our companies to customers, things like that. Dharmesh [00:36:21]: And not to pick on LinkedIn, but since we're talking about it: they feel strongly enough about "do not take LinkedIn data" that they will block even browser-use kinds of things. They go to great, great lengths, even looking at patterns of usage, and it says, oh, there's no way you could have gotten that particular thing without... so, there's... swyx [00:36:42]: Wasn't there a Supreme Court case that they lost? Yeah. Dharmesh [00:36:45]: So the one they lost was around someone that was scraping public data that was on the public internet. And that particular company had not signed any terms of service or whatever. It's like, oh, I'm just taking data that's out there, and so that's why that company won. But now the question is, can LinkedIn... I think they can. When you, as a user, use LinkedIn, you are signing up for their terms of service. And if they say, well, this kind of use of your LinkedIn account violates our terms of service, they can shut your account down, right? They can. And, yeah, we don't need to make this a whole discussion. By the way, I love the company, don't get me wrong. I'm an avid user of the product. You know, I've got... Yeah, I mean, you've got over a million followers on LinkedIn, I think. Yeah, I do. And I've known people there for a long, long time, right? And I have lots of respect. And I understand even where the mindset originally came from, this kind of members-first, privacy-first approach. I sort of get that. But sometimes you have to wonder: okay, well, that was 15, 20 years ago. There are likely some controlled ways to expose some data on a member's behalf and not just be completely binary about it. It's like, no, thou shalt not have the data. swyx [00:37:54]: Well, just pay for Sales Navigator. Alessio [00:37:57]: Before we move to the next layer of abstraction, anything else on MCP? You mentioned... Let's move back and then I'll tie it back to MCPs. Dharmesh [00:38:05]: So I think the... okay, so I'll start with agents. Here's my kind of running thesis: as AI and agents evolve, which they're doing very, very quickly, we're going to look at them more and more, and I don't like to anthropomorphize, and we'll talk about why this is not that, less as just raw tools and more like teammates. They'll still be software.
They should self-disclose as being software. I'm totally cool with that. But I think what's going to happen is that in the same way you might collaborate with a team member on Slack or Teams or whatever you use, you can imagine a series of agents that do specific things just like a team member might do, that you can delegate things to. You can collaborate. You can say, hey, can you take a look at this? Can you proofread that? Can you try this? You can... Whatever it happens to be. So I think it is... I will go so far as to say it's inevitable that we're going to have hybrid teams someday. And what I mean by hybrid teams... So back in the day, hybrid teams were, oh, well, you have some full-time employees and some contractors. Then it was like hybrid teams are some people that are in the office and some that are remote. That's the kind of form of hybrid. The next form of hybrid is like the carbon-based life forms and agents and AI and some form of software. So let's say we temporarily stipulate that I'm right about that over some time horizon that eventually we're going to have these kind of digitally hybrid teams. So if that's true, then the question you sort of ask yourself is that then what needs to exist in order for us to get the full value of that new model? It's like, okay, well... You sort of need to... It's like, okay, well, how do I... If I'm building a digital team, like, how do I... Just in the same way, if I'm interviewing for an engineer or a designer or a PM, whatever, it's like, well, that's why we have professional networks, right? It's like, oh, they have a presence on likely LinkedIn. I can go through that semi-structured, structured form, and I can see the experience of whatever, you know, self-disclosed. But, okay, well, agents are going to need that someday. And so I'm like, okay, well, this seems like a thread that's worth pulling on. That says, okay. So I... So agent.ai is out there. And it's LinkedIn for agents. It's LinkedIn for agents. It's a professional network for agents. And the more I pull on that thread, it's like, okay, well, if that's true, like, what happens, right? It's like, oh, well, they have a profile just like anyone else, just like a human would. It's going to be a graph underneath, just like a professional network would be. It's just that... And you can have its, you know, connections and follows, and agents should be able to post. That's maybe how they do release notes. Like, oh, I have this new version. Whatever they decide to post, it should just be able to... Behave as a node on the network of a professional network. As it turns out, the more I think about that and pull on that thread, the more and more things, like, start to make sense to me. So it may be more than just a pure professional network. So my original thought was, okay, well, it's a professional network and agents as they exist out there, which I think there's going to be more and more of, will kind of exist on this network and have the profile. But then, and this is always dangerous, I'm like, okay, I want to see a world where thousands of agents are out there in order for the... Because those digital employees, the digital workers don't exist yet in any meaningful way. And so then I'm like, oh, can I make that easier for, like... And so I have, as one does, it's like, oh, I'll build a low-code platform for building agents. How hard could that be, right? Like, very hard, as it turns out. But it's been fun. So now, agent.ai has 1.3 million users. 
3,000 people have actually built some variation of an agent, sometimes just for their own personal productivity, about 1,000 of which have been published. And the reason this comes back to MCP for me: imagine that network, and others like it, since agent.ai is the one I know. So right now, we have an MCP server for agent.ai that exposes all the internally built agents that we have that do super useful things. Like, I have access to a Twitter API where I can subsidize the cost. And I can say, if you're looking to build something for social media, these kinds of things, with a single API key, and it's all completely free right now, I'm funding it. That's a useful way for it to work. And then a developer can say, oh, I have this idea. I don't have to worry about OpenAI, I don't have to worry about whether this particular model is better now; it has access to all the models with one key, and we proxy it behind the scenes and then expose it. So then we get this kind of community effect, right? Someone else may have built an agent to do X. I have an agent right now that I built for myself to do domain valuation for website domains, because I'm obsessed with domains, right? And there's no efficient market for domains. There's no Zillow for domains right now that tells you, oh, here's what houses in your neighborhood sold for. It's like, well, why doesn't that exist? We should be able to solve that problem. And, yes, you're still guessing. Fine. There should be some simple heuristic. So I built that. You type in agent.ai, agent.com, whatever domain. What's it actually worth? I'm looking at buying it. Which is what it does: it goes and looks at whether there are any published domain transactions recently that are similar, either using the same word, the same top-level domain, whatever it is. And it comes back with an approximate value, and it comes back with its rationale for why it picked that value and comparable transactions: oh, by the way, this domain sold, and the sale was published. Okay. So that agent now, let's say, exists on the web, on agent.ai. Then imagine someone else says, oh, I want to build a brand-building agent for startups and entrepreneurs to come up with names for their startup. A common problem; every startup is like, ah, I don't know what to call it. And so they type in five random words that kind of define whatever their startup is. And you can do all manner of things, one of which is, oh, well, I need to find the domain for it. What are possible choices? Now it would be nice to know if there's an aftermarket price for it, if it's listed for sale. Awesome. Then imagine calling this valuation agent: okay, I want to find where the arbitrage is, where the valuation agent says this thing is worth $25,000 and it's listed on GoDaddy for $5,000. Close enough. Let's go do that. Right? And that's the kind of composition use case in my future state: thousands of agents on the network, all discoverable through something like MCP. And then you, as a developer of agents, have access to all these kinds of Lego building blocks based on what you're trying to solve. Then you blend in orchestration, which is getting better and better with the reasoning models now. Just describe the problem that you have.
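The domain-arbitrage composition described above (a naming agent calling a valuation agent and comparing the estimate against an asking price) is easy to sketch. Everything below is hypothetical: the three functions are local stubs standing in for networked agents that would be discovered and invoked through something like MCP.

```python
# Hypothetical composition of two "agents" -- stubs standing in for networked
# agents that would be discovered and called via something like MCP.

def suggest_names(keywords: list[str]) -> list[str]:
    """Stub brand-naming agent: mash keywords into candidate .com domains."""
    return [f"{a}{b}.com" for a in keywords for b in keywords if a != b][:5]

def estimate_value(domain: str) -> int:
    """Stub valuation agent: shorter names are 'worth' more (placeholder heuristic)."""
    return max(25_000 - len(domain) * 1_000, 500)

def asking_price(domain: str) -> int:
    """Stub marketplace lookup: pretend every domain is listed at a flat price."""
    return 5_000

def find_arbitrage(keywords: list[str], margin: int = 5_000) -> list[tuple[str, int, int]]:
    """Compose the agents: keep domains whose estimated value beats the ask by `margin`."""
    deals = []
    for domain in suggest_names(keywords):
        value, ask = estimate_value(domain), asking_price(domain)
        if value - ask >= margin:
            deals.append((domain, value, ask))
    return deals

if __name__ == "__main__":
    for domain, value, ask in find_arbitrage(["agent", "graph", "launch"]):
        print(f"{domain}: estimated ${value:,}, listed at ${ask:,}")
```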
Now, the next layer that we're all contending with is how many tools you can actually give an LLM before the LLM breaks. That number used to be like 15 or 20 before results started to vary dramatically. And so that's the thing I'm thinking about now. If I want to expose 1,000 of these agents to a given LLM, obviously I can't give it all 1,000. Is there some intermediate layer that says, based on your prompt, I'm going to make a best guess at which agents might be helpful for this particular thing? Yeah. Alessio [00:44:37]: Yeah, like RAG for tools. Yep. I did build the Latent Space Researcher on agent.ai. Okay. Nice. Yeah, and it seems like then there's going to be a Latent Space Scheduler, and then once I schedule research... you build all of these things. By the way, my apologies for the user experience. You realize I'm an engineer. It's pretty good. swyx [00:44:56]: I think it's a normie-friendly thing. Yeah. That's your magic. HubSpot does the same thing. Alessio [00:45:01]: Yeah, just to quickly run through it: you can basically create all these different steps, and these steps are static versus variable-driven things. How did you decide between this kind of low-code approach versus low-code with a code backend versus not exposing that at all? Any fun design decisions? Yeah. And this is, I think... Dharmesh [00:45:22]: I think lots of people are likely sitting in exactly my position right now, working through the choice between deterministic and non-deterministic. If you're in a business or building some sort of agentic thing, do you decide to do a deterministic thing? Or do you go non-deterministic and just let the LLM handle it, right, with the reasoning models? The original idea, and the reason I took the low-code, stepwise, very deterministic approach: A, the reasoning models did not exist at that time. That's thing number one. Thing number two is, if you know in your head what the actual steps are to accomplish whatever goal, why would you leave that to chance? There's no upside. There's literally no upside. Just tell me what steps you need executed. So right now what I'm playing with... So one thing we haven't talked about yet is UI and agents. People don't talk about it, or they don't talk enough about it; I know some people have. Right now, the primary interaction model is the chatbot back and forth. Fine. I get that. But I think we're going to move to a blend: some of those things are going to be synchronous as they are now, but some are going to be async. It's just going to put it in a queue, just like... And this goes back to my... Man, I talk fast. But I have this... I only have one other speed. It's even faster. So imagine if you're working... back to my point that we're going to have these hybrid digital teams. You would not go to a co-worker and say, I'm going to ask you to do this thing, and then sit there and wait for them to go do it. That's not how the world works. So it's nice to be able to just hand something off to someone, and maybe I expect a response in an hour or a day or something like that. Dharmesh [00:46:52]: In terms of when things need to happen. So, the UI around agents.
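Before the conversation turns to UI, the "RAG for tools" idea just mentioned (narrow a catalogue of, say, 1,000 agents down to the handful that are relevant before the LLM ever sees a tool list) is worth pinning down: it is essentially embedding-based retrieval over tool descriptions. In the sketch below, embed() is a toy bag-of-words stand-in for a real embedding model, included only so the example runs without external services.

```python
# Sketch of "RAG for tools": retrieve the top-k most relevant agent/tool
# descriptions for a prompt before handing them to an LLM. The embed() below is
# a toy bag-of-words stand-in for a real embedding model.
import math
from collections import Counter

TOOLS = {
    "domain_valuation": "Estimate the market value of a website domain name.",
    "brand_namer": "Suggest startup names and available domains from keywords.",
    "error_triage": "Summarize recent application errors and suggest fixes.",
    "scheduler": "Schedule recurring research or reporting jobs.",
}

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_tools(prompt: str, k: int = 2) -> list[str]:
    """Return the k tool names whose descriptions best match the prompt."""
    query = embed(prompt)
    ranked = sorted(TOOLS, key=lambda name: cosine(query, embed(TOOLS[name])), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    print(select_tools("what is this domain worth on the aftermarket?"))
```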
So if you look at the output of agent.ai agents right now, they are the simplest possible manifestation of a UI, right? We have inputs of four different types: a dropdown, a multi-select, all those things. It's like the original HTML 1.0 days, right? The smallest possible set of primitives for a UI. And it just says, okay, we need to collect some information from the user, and then we go do steps and do things and generate some output, with HTML or Markdown as the two primary formats. So the thing I've been asking myself is what happens if I keep going down that path. People ask me, I get requests all the time, it's like, oh, can you make the UI sort of boring? I need to be able to do this, right? And if I keep pulling on that, now I've built an entire UI builder thing. Where does this end? And so I think the right answer, and this is what I'm going to be vibe coding once I get done here, is around injecting code generation, UI generation, into the agent.ai flow, right? As a builder, you describe the thing that you want, much like you would do in a vibe coding world, but instead of generating the entire app, it generates the UI that exists at some point in that deterministic flow. It says, oh, here's the thing I'm trying to do. Go generate the UI for me. And I can go through some iterations. The way I think of it: I'm going to generate the code, tweak it, go through this kind of prompt-style loop like we do with vibe coding now, and at some point I'm going to be happy with it and hit save. And that becomes the action in that particular step. It's like a caching of the generated code, so I don't keep incurring inference-time costs. It's just the actual code at that point. Alessio [00:48:29]: Yeah, I invested in a company called E2B, which does code sandboxes. And they powered the LMArena web arena. So just like you do LLMs text to text, they do the same for UI generation. So if you're asking a model how to do it... But yeah, I think that's kind of where... Dharmesh [00:48:45]: That's the thing I'm really fascinated by. The early LLMs, you know, were understandably, but laughably, bad at simple arithmetic. That's the thing normies, like my wife, would ask us: you call this AI? My son would be like, it's just stupid, it can't even do simple arithmetic. And then we've discovered over time, and there's a reason for this, that the word language is in there for a reason in terms of what it's been trained on. It's not meant to do math. But now the fact that it has access to a Python interpreter that it can actually call at runtime solves an entire body of problems that it wasn't trained to do. And it's basically a form of delegation. And so the thought that's rattling around in my head is: that's great. It took the arithmetic problem first, and now anything that's solvable through a relatively concrete Python program is something it can do that it couldn't do before. Can we get to the same place with UI? I don't know what the future of UI looks like in an agentic AI world, but maybe let the LLM handle it, but not in the classic sense.
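The generate-then-cache idea for UI described above (iterate with a model until the generated component looks right, then save the code so the step never pays inference cost again) amounts to a small content-addressed cache. In the sketch below, generate_ui() is a stub standing in for a real model call, and the cache layout is invented for illustration.

```python
# Sketch of "generate the UI once, then cache it as the step's action".
# generate_ui() is a stub standing in for a real LLM call; the cache layout
# (a directory of files keyed by a hash of the UI spec) is invented here.
import hashlib
from pathlib import Path

CACHE_DIR = Path("ui_cache")

def generate_ui(spec: str) -> str:
    """Stub for an LLM call that turns a natural-language spec into markup."""
    return f"<form><!-- generated for: {spec} --><input name='value'/></form>"

def ui_for_step(spec: str) -> str:
    """Return cached UI code for this spec, generating (and saving) it only once."""
    CACHE_DIR.mkdir(exist_ok=True)
    key = hashlib.sha256(spec.encode()).hexdigest()[:16]
    cached = CACHE_DIR / f"{key}.html"
    if cached.exists():                      # cache hit: no inference cost
        return cached.read_text()
    code = generate_ui(spec)                 # cache miss: pay for generation once
    cached.write_text(code)
    return code

if __name__ == "__main__":
    print(ui_for_step("collect a domain name and a target budget"))
    print(ui_for_step("collect a domain name and a target budget"))  # second call is a cache hit
```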
Maybe it generates it on the fly, or maybe we go through some iterations and hit cache or something like that. So it's a little bit more predictable. Uh, I don't know, but yeah.Alessio [00:49:48]: And especially when is the human supposed to intervene? So, especially if you're composing them, most of them should not have a UI because then they're just web hooking to somewhere else. I just want to touch back. I don't know if you have more comments on this.swyx [00:50:01]: I was just going to ask when you, you said you got, you're going to go back to code. What

The Data Stack Show
234: The Cynical Data Guy on AI, Data Tools, and the Future of Coding

The Data Stack Show

Play Episode Listen Later Mar 26, 2025 35:42


Highlights from this week's conversation include:
AI in Transcription Services (1:11)
The Future of AI Companies (5:09)
Potential Risks of AI Tools (8:57)
Learning vs. Dependency in Programming (10:17)
The Journey of a Data Analyst (12:07)
AI and Coding Skills (14:06)
Abstraction in Data Tools (16:59)
Data Design and AI (19:07)
User Experience vs. AI Automation (22:10)
AGI and Data Mesh (24:36)
Blank Screen Interaction Challenges (27:10)
Understanding User Value in Data Platforms (32:22)
AI's Role in Simplifying Data Interaction (34:04)
Final Thought and Takeaways (35:05)

The Data Stack Show is a weekly podcast powered by RudderStack, the CDP for developers. Each week we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data. RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.

Machine Learning Street Talk
Test-Time Adaptation: the key to reasoning with DL (Mohamed Osman)

Machine Learning Street Talk

Play Episode Listen Later Mar 22, 2025 63:36


Mohamed Osman joins to discuss MindsAI's highest scoring entry to the ARC challenge 2024 and the paradigm of test-time fine-tuning. They explore how the team, now part of Tufa Labs in Zurich, achieved state-of-the-art results using a combination of pre-training techniques, a unique meta-learning strategy, and an ensemble voting mechanism. Mohamed emphasizes the importance of raw data input and flexibility of the network.

SPONSOR MESSAGES:
Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers. Events in Zurich. Goto https://tufalabs.ai/

TRANSCRIPT + REFS:
https://www.dropbox.com/scl/fi/jeavyqidsjzjgjgd7ns7h/MoFInal.pdf?rlkey=cjjmo7rgtenxrr3b46nk6yq2e&dl=0

Mohamed Osman (Tufa Labs): https://x.com/MohamedOsmanML
Jack Cole (Tufa Labs): https://x.com/MindsAI_Jack
How and why deep learning for ARC paper: https://github.com/MohamedOsman1998/deep-learning-for-arc/blob/main/deep_learning_for_arc.pdf

TOC:
1. Abstract Reasoning Foundations
[00:00:00] 1.1 Test-Time Fine-Tuning and ARC Challenge Overview
[00:10:20] 1.2 Neural Networks vs Programmatic Approaches to Reasoning
[00:13:23] 1.3 Code-Based Learning and Meta-Model Architecture
[00:20:26] 1.4 Technical Implementation with Long T5 Model
2. ARC Solution Architectures
[00:24:10] 2.1 Test-Time Tuning and Voting Methods for ARC Solutions
[00:27:54] 2.2 Model Generalization and Function Generation Challenges
[00:32:53] 2.3 Input Representation and VLM Limitations
[00:36:21] 2.4 Architecture Innovation and Cross-Modal Integration
[00:40:05] 2.5 Future of ARC Challenge and Program Synthesis Approaches
3. Advanced Systems Integration
[00:43:00] 3.1 DreamCoder Evolution and LLM Integration
[00:50:07] 3.2 MindsAI Team Progress and Acquisition by Tufa Labs
[00:54:15] 3.3 ARC v2 Development and Performance Scaling
[00:58:22] 3.4 Intelligence Benchmarks and Transformer Limitations
[01:01:50] 3.5 Neural Architecture Optimization and Processing Distribution

REFS:
[00:01:32] Original ARC challenge paper, François Chollet: https://arxiv.org/abs/1911.01547
[00:06:55] DreamCoder, Kevin Ellis et al.: https://arxiv.org/abs/2006.08381
[00:12:50] Deep Learning with Python, François Chollet: https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438
[00:13:35] Influence of pretraining data for reasoning, Laura Ruis: https://arxiv.org/abs/2411.12580
[00:17:50] Latent Program Networks, Clement Bonnet: https://arxiv.org/html/2411.08706v1
[00:20:50] T5, Colin Raffel et al.: https://arxiv.org/abs/1910.10683
[00:30:30] Combining Induction and Transduction for Abstract Reasoning, Wen-Ding Li, Kevin Ellis et al.: https://arxiv.org/abs/2411.02272
[00:34:15] Six finger problem, Chen et al.: https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_SpatialVLM_Endowing_Vision-Language_Models_with_Spatial_Reasoning_Capabilities_CVPR_2024_paper.pdf
[00:38:15] DeepSeek-R1-Distill-Llama, DeepSeek AI: https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B
[00:40:10] ARC Prize 2024 Technical Report, François Chollet et al.: https://arxiv.org/html/2412.04604v2
[00:45:20] LLM-Guided Compositional Program Synthesis, Wen-Ding Li and Kevin Ellis: https://arxiv.org/html/2503.15540
[00:54:25] Abstraction and Reasoning Corpus, François Chollet: https://github.com/fchollet/ARC-AGI
[00:57:10] O3 breakthrough on ARC-AGI, OpenAI: https://arcprize.org/
[00:59:35] ConceptARC Benchmark, Arseny Moskvichev, Melanie Mitchell: https://arxiv.org/abs/2305.07141
[01:02:05] Mixtape: Breaking the Softmax Bottleneck Efficiently, Yang, Zhilin and Dai, Zihang and Salakhutdinov, Ruslan and Cohen, William W.: http://papers.neurips.cc/paper/9723-mixtape-breaking-the-softmax-bottleneck-efficiently.pdf

Campaign: Skyjacks
Skyjacks: Episode 261

Campaign: Skyjacks

Play Episode Listen Later Mar 19, 2025 58:28


Kief finishes Annabelle Sacha's harness and just needs the featherweave to get them to safety, but first he must contend with Jonah the Damned alongside Travis and Sinbad. CONTENT NOTE Main Show: Science, Biting sensitive bits, Undead saltiness Dear Uhuru: Estranged family reunions, Abstractions of horrifying violence MAGIC OF SPEIR ZINE Follow the project here! OH CAPTAIN, MY CAPTAIN Order now! Leave a review! THE ULTIMATE RPG PODCAST Listen Here! SKYJOUST FIGHT WITH SPIRIT EXPANSION Get it now! ULTIMATE RPG GAMEMASTER'S GUIDE Pre-order now! SKYJACKS: COURIER'S CALL IS BACK! Listen on Spotify (or any other podcatcher app)! STARWHAL PUBLIC FEED: Listen on Spotify (or any other podcatcher app)! JOIN OUR MAILING LIST Right Here! Learn more about your ad choices. Visit megaphone.fm/adchoices

Didde Center Homily Podcasts
GLOW AND MELT - Homily for the 2nd Sunday of Lent

Didde Center Homily Podcasts

Play Episode Listen Later Mar 16, 2025 18:14


"In all the teaching of Saint John of the Cross, despite the forbidding features of its radical demands, hides a poet of sanctity who has fallen in love with God, even helplessly so. The Church may call him the Mystical Doctor in recognition of the superlative teaching in his four major treatises; yet the weight of that title is not entirely helpful. He is not proposing a speculative doctrine of mystical ascent to be mastered by careful study and strict application. Abstraction has little place or purpose in his writing, even as he makes every effort to clarify in precise language what may often me impossible lessons to convey to a reader lacking experience of what he is elucidating. Simply reading once through his work will never disclose his teaching adequately. At some point, he has to become a very loved mentor to whom one turns with increasing need over the course of years, or else he slips away quietly and will be forgotten, as he was apparently forgotten by many in his own lifetime. But if he is embraced as a trusted guide, and his direction is accepted, he can become a companion who pushes and prods us to a mysterious, unsettling desire for God, which is only a start toward greater effects over time. If he remains a friend for many years, a hunger and fire in our soul for God far beyond any initial expectation of spiritual pursuit is bound to ignite within us." --Father Donald Haggerty

The Stacks
Ep. 362 Colonialism Is Not an Abstraction with Omar El Akkad

The Stacks

Play Episode Listen Later Mar 12, 2025 60:02


This week, we're joined by author and journalist Omar El Akkad to discuss his new book, One Day, Everyone Will Have Always Been Against This, which serves as a powerful reckoning with what it means to live in a West that betrays its fundamental values. Omar shares how writing nonfiction compares to his novels, how he anticipates and thinks about potential criticism, and what it means to resist despair in the face of empire.The Stacks Book Club pick for March is They Were Her Property by Stephanie E. Jones-Rogers. We will discuss the book on March 26th with Tembe Denton-Hurst returning as our guest.You can find everything we discuss on today's show on The Stacks' website:https://thestackspodcast.com/2025/3/12/ep-362-omar-el-akkadConnect with Omar: Instagram | TwitterConnect with The Stacks: Instagram | Twitter | Shop | Patreon | Goodreads | Substack | SubscribeSUPPORT THE STACKSJoin The Stacks Pack on PatreonTo support The Stacks and find out more from this week's sponsors, click here.Purchasing books through Bookshop.org or Amazon earns The Stacks a small commission.See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

Ordinary Mind Zendo
Which is the abstraction? The whole or the parts?

Ordinary Mind Zendo

Play Episode Listen Later Mar 12, 2025


Run it Red with Ben Sims
Ben Sims 'Run It Red' 119

Run it Red with Ben Sims

Play Episode Listen Later Mar 5, 2025 119:31


Run it Red 119 is here. This month's got killer sounds from the likes of D'Julz, Tal Fussmann, Scan 7, As One, Kr!z, Seddig and loads more. Full tracklist, as always, is below so check the labels/artists where you can

THE SOULFAM PODCAST with Diana and Lexi
AUTISM, NEURODIVERGENCE and Visual Thinking: John Barnhardt, director/writer/producer OPEN DOOR; THE TEMPLE GRANDIN STORY

THE SOULFAM PODCAST with Diana and Lexi

Play Episode Listen Later Mar 2, 2025 62:54


John Barnhardt, writer/director/producer of OPEN DOOR, the documentary film about Temple Grandin, shares his insights, revelations and experiences as both filmmaker and teacher in the making of the film. A film festival hit, AN OPEN DOOR is an iconic, inside peek at the mind of Colorado State University professor Temple Grandin, well-known for her exemplary work in understanding how cattle and horses see, hear and feel and influencing livestock industry practices. Her ground-breaking work has heavily influenced the humane treatment of cattle and horses and is considered a hallmark in animal welfare care. Temple, who is autistic, has made extensive efforts, both through her voice and her work, to push the envelope of understanding and acceptance for neuro-divergent people everywhere. John, with the devoted support of executive producer John Festervand, created a film that not only captures the essence of one of the most powerful voices of our time for animal welfare and autism, but is an emotionally inspiring call to do our very best in all walks of life. The film is touching, real and a beautiful example of one person's willingness to be vulnerable and authentic in the face of a world which once institutionalized the autistic. John, a teacher at CSU, a working cinematographer and founder of Barnfly Productions (https://www.barnflyproductions.com), shares the credit for the film's success with his CSU student crew. Their professionalism matches that of other professionals with multiple years of experience, says John. In this insightful interview about a heavily-touted documentary, John shares his own learning experiences with Temple during production. Her 17th book, Visual Thinking - the Hidden Gifts of People Who Think In Pictures, Patterns and Abstractions, is featured throughout the film. This book and Temple's nudges encouraged John to look at his own perceptions about math and education differently. Temple's life was portrayed in the 2010 movie, in which she was played by Claire Danes. John re-introduces Temple to the world, and her vast wealth of knowledge at age 78, in An Open Door. A clip is featured in this interview. For further information and to request a screening and appearances, please go to https://templegrandindocumentary.com. For further information about John, his ongoing production slate and upcoming projects, contact him via Barnfly Productions. Oweli Supplements (www.Oweli.com) and www.CBDpure.com, sponsors of the podcast, have graciously offered a coupon for free shipping and 15 percent off with the coupon code SOULFAM. Lexi and Diana both take these supplements, whose products support everything from your eye health to your immune system to your protein intake to your brain's neurological health. CBD Pure is one of the very best CBDs on the market with high grade ingredients. Order now with SOULFAM in the coupon code. Support the show: @dianamarcketta @lexisaldin

The Writer Files: Writing, Productivity, Creativity, and Neuroscience
How Bestselling Author & Literary Agent Betsy Lerner Writes

The Writer Files: Writing, Productivity, Creativity, and Neuroscience

Play Episode Listen Later Feb 28, 2025 38:22


Bestselling author and literary agent Betsy Lerner spoke with me about being a “late bloomer,” what 35 years in publishing has taught her, and portraying mental illness in her debut novel SHRED SISTERS. Betsy Lerner is the author of the popular advice book to writers, The Forest for the Trees, and the memoirs Food and Loathing and The Bridge Ladies. With Temple Grandin, she is also the co-author of the New York Times bestseller Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns and Abstractions. Her debut novel, Shred Sisters, is described as “... an intimate and bittersweet story exploring the fierce complexities of sisterhood, mental health, loss and love.” The book was longlisted for The Center for Fiction First Novel Prize, a New York Times Notable Book of 2024, and a New York Times Book Review Editors' Choice and Best Book of the Year So Far, among many other accolades. Betsy received an MFA from Columbia University in Poetry and was selected as one of PEN's Emerging Writers. She also received the Tony Godwin Publishing Prize for Editors. After working as an editor for 15 years, she became an agent and is currently a partner with Dunow, Carlson and Lerner Literary Agency. [Discover The Writer Files Extra: Get 'The Writer Files' Podcast Delivered Straight to Your Inbox at writerfiles.fm] [If you're a fan of The Writer Files, please click FOLLOW to automatically see new interviews. And drop us a rating or a review wherever you listen] In this file Betsy Lerner and I discussed:
Getting kicked out of film school
How "No Bad Dogs" inspired her to write The Forest for the Trees about writer personalities
Working with punk rock icon Patti Smith
The secrets behind her writing process
Why she wants to have dinner with filmmaker Greta Gerwig
And a lot more!
Show Notes:
betsylerner.com
Dunow, Carlson & Lerner Literary Agency
Shred Sisters by Betsy Lerner (Amazon)
The Forest for the Trees: An Editor's Advice to Writers by Betsy Lerner (Amazon)
Betsy Lerner Amazon Author Page
Kelton Reid on Twitter
Learn more about your ad choices. Visit megaphone.fm/adchoices

Demystifying Science
Did Women and Snakes Bring us Consciousness? - Dr. Andrew Cutler, #324

Demystifying Science

Play Episode Listen Later Feb 28, 2025 152:48


MAKE HISTORY WITH US THIS SUMMER:https://demystifysci.com/demysticon-2025PATREON https://www.patreon.com/c/demystifysciPARADIGM DRIFThttps://demystifysci.com/paradigm-drift-showPATREON: get episodes early + join our weekly Patron Chat https://bit.ly/3lcAasBMERCH: Rock some DemystifySci gear : https://demystifysci.myspreadshop.com/allAMAZON: Do your shopping through this link: https://amzn.to/3YyoT98SUBSTACK: https://substack.com/@UCqV4_7i9h1_V7hY48eZZSLw@demystifysciAndrew Cutler is the author of the Vectors of Mind Substack, where he explores the question of how humans became… human. His research starts from a simple premise - if our self-awareness, the ability to look at ourselves in the mirror and declare that there is an “I” staring back, is truly unique in the animal kingdom, then it likely related to that moment of coming. But no one really knows what happened in the fog of pre-history to ratchet us from the gauzy time before we were fully human to… whatever all of this that we're living in right now could be called. In fact, this is often referred to as the sapient paradox. Why, oh why, did we become genetically modern nearly 300,000 years ago (maybe more) but take until about 50,000 years ago to start doing human things like making art, ritually burying our dead, and tracking the stars? Many have suggested it was psychedelic mushrooms that pushed us over the edge. This is the stoned ape hypothesis, which says that a sufficiently large psychedelic experience pushed us out of the womb of the earth. However, Andrew thinks it might have been something else. He figures it was snakes. And women. Together, they produced the Snake Cult of Consciousness that dragged us, kicking and screaming, into the world.(00:00) Go! (00:06:56) The Sapient Paradox Explored(00:13:09) Recursion and Human Cognition(00:19:22) Abstraction and Innovation(00:25:23) Self-awareness Evolution(00:27:14) Recursion and Strategy(00:30:00) Cultural Shifts and Domination(00:33:39) Origins of Recursion(00:38:22) Subject-Object Separation(00:47:34) Linguistic Evolution(00:48:56) Emotional Intelligence in Animals(00:50:33) Creation Myths and Self-Awareness(00:52:10) Awareness of Death in Animals(00:56:06) Evolution of Symbolic Thought(01:00:58) Göbekli Tepe and Diffusion Hypotheses(01:06:05) Matriarchy and Rituals in Early Cultures(01:08:44) Human Migration and Cultural Development(01:17:11) Origins of Human Consciousness and Language(01:25:09) Snakes, Myths, and Early Civilization(01:33:40) Women, Mythology, and Historical Narratives(01:36:30) The Subtle Female Power Dynamics in Patriarchal Societies(01:40:25) Evolution of Societal Structures(01:46:00) Neolithic Genetic Bottleneck and Patriarchal Theories(01:49:23) Women's Role in Human Cognitive Evolution(01:56:11) Symbolism of Snakes and Ancient Knowledge(02:02:10) Snake Venom Usage(02:07:12) Historical Cults and Rituals(02:11:07) Greek Tragedy and Mystery Cults(02:14:08) Matriarchy and Cultural Myths(02:17:10) Diffusion of Culture and Legends(02:22:36) Comparative Mythology and the Seven Sisters Myth(02:27:01) Scientific and Metaphysical Connections in Human Origin Stories(02:28:55) The Origins and Significance of Gospel Stories(02:30:03) Shamanistic Cults and Cultural Symbols in Ancient Sites #HumanOrigins, #AncientHistory, #Mythology, #Evolution, #Consciousness, #AncientMysteries, #Symbolism, #SelfAwareness, #HumanEvolution, #AncientCultures, #CognitiveScience, #SpiritualEvolution, #Anthropology, #Philosophy, #AncientWisdom, #Archaeology, #philosophypodcast, 
#sciencepodcast, #longformpodcast

CryptoNews Podcast
#417: Austin King, Co-Founder of Omni Network , on Memecoin Mania, $XRP, and Creating the Abstraction Layer for the Ethereum Ecosystem

CryptoNews Podcast

Play Episode Listen Later Feb 27, 2025 38:37


Austin King is the Co-Founder of Omni Network and CEO of Labs, the abstraction layer for the Ethereum ecosystem. With a background in computer science from Harvard, Austin began his entrepreneurial journey by creating a payment network that scaled to over 10 billion transactions before being acquired by Ripple. Today, Austin, alongside Co-Founder Tyler Tarsi, is leading Omni's mission to empower application developers to access users and liquidity across the entire Ethereum ecosystem without smart contract upgrades.In this conversation, we discuss:- Memecoin Mania- Portnoy running with GREED and GREED 2- Hayden is going to jail!- Onboarding people to memecoins kills our chance for mass adoption- Ripple and XRP - current sentiment and why normies love it- Omni is an abstraction layer for the Ethereum ecosystem- Allowing users to trade crypto like Robinhood- Removing bridges and wrapped assets from the front-end- Gaining access to all the users in the on-chain economy- The secret sauce to mass adoption- SolverNet SDK- Tokenization is crypto's killer use caseOmni NetworkWebsite: omni.networkX: @OmniFDNTelegram: t.me/OmniFDNAustin KingX: @0xASKLinkedIn: Austin King  ---------------------------------------------------------------------------------  This episode is brought to you by PrimeXBT.  PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers.   PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions.  Code: CRYPTONEWS50  This promotion is available for a month after activation. Click the link below:  PrimeXBT x CRYPTONEWS50

CryptoNews Podcast
#416: Prabal Banerjee, Co-founder of Avail, on The Future of Blockchain Interoperability, Chain Abstraction, and Scalability in Web3

CryptoNews Podcast

Play Episode Listen Later Feb 24, 2025 33:17


Prabal Banerjee, co-founder of Avail, is a researcher and technical leader. He spearheaded the implementation of the Data Availability layer in 2020 and led a team of researchers to explore new frontiers in cryptography and blockchain technology. Prabal's interest in cryptography began in 2013, which led him to explore blockchain technology in 2016. He accumulated a wide range of internship experience with companies including IBM and Oracle. With his extensive expertise and passion for blockchains, Prabal is poised to continue leading the way in exploring the possibilities of this exciting and fast-evolving field for years to come. In this conversation, we discuss:- Breaking down the $LIBRA token launch- The future of blockchain interoperability- Web2 allows us to scale, Web3 has too many chains and we need more interoperability- When will the interoperability problem be fixed- The nuances of blockchains- How cross-chain communication is evolving and why interoperability is critical for the next phase of Web3 growth- The challenges of fragmented ecosystems and how unification can drive adoption- Scalability in Web3: are we ready for mass adoption?- Avail DA, Avail Fusion, Avail NexusAvailWebsite: www.availproject.orgX: @AvailProjectLinkedIn: AvailPrabal BanerjeeX: @prabalbanerjeeLinkedIn: Prabal Banerjee ---------------------------------------------------------------------------------  This episode is brought to you by PrimeXBT.  PrimeXBT offers a robust trading system for both beginners and professional traders that demand highly reliable market data and performance. Traders of all experience levels can easily design and customize layouts and widgets to best fit their trading style. PrimeXBT is always offering innovative products and professional trading conditions to all customers.   PrimeXBT is running an exclusive promotion for listeners of the podcast. After making your first deposit, 50% of that first deposit will be credited to your account as a bonus that can be used as additional collateral to open positions.  Code: CRYPTONEWS50  This promotion is available for a month after activation. Click the link below:  PrimeXBT x CRYPTONEWS50

Increments
#81 - What Does Critical Rationalism Get Wrong? (w/ Kasra)

Increments

Play Episode Listen Later Feb 14, 2025 99:05


As whores for criticism, we wanted to have Kasra on to discuss his essay The Deutschian Deadend (https://www.bitsofwonder.co/p/the-deutschian-deadend). Kasra claims that Popper and Deutsch are fundamentally wrong in some important ways, and that many of their ideas will forever remain in the "footnotes of the history of philosophy". Does he change our mind or do we change his? Follow Kasra on twitter (https://x.com/kasratweets) and subscribe to his blog, Bits of Wonder (https://www.bitsofwonder.co/p/the-deutschian-deadend). We discuss Has Popper had of a cultural impact? The differences between Popper, Deutsch, and Deutsch's bulldogs. Is observation really theory laden? The hierarchy of reliability: do different disciplines have different methods of criticism? The ladder of abstractions The difference between Popper and Deutsch on truth and abstraction The Deutschian community's reaction to the essay References Bruce Neilson's podcast on verification and falsification: https://podcasts.apple.com/ca/podcast/episode-61-a-critical-rationalist-defense/id1503194218?i=1000621362624 Popper on certainty: Chapter 22. Analytical Remarks on Certainty in Objective Knowledge Quotes By the nature of Deutsch and Popper's ideas being abstract, this essay will also necessarily be abstract. To combat this, let me ground the whole essay in a concrete empirical bet: Popper's ideas about epistemology, and David Deutsch's extensions of them, will forever remain in the footnotes of the history of philosophy. Popper's falsificationism, which was the main idea that he's widely known for today, will continue to remain the only thing that he's widely known for. The frustrating fact that Wittgenstein is widely regarded as a more influential philosopher than Popper will continue to remain true. Critical rationalism will never be widely recognized as the “one correct epistemology,” as the actual explanation (or even the precursor to an explanation) of knowledge, progress, and creativity. Instead it will be viewed, like many philosophical schools before it, as a useful and ambitious project that ultimately failed. In other words, critical rationalism is a kind of philosophical deadend: the Deutschian deadend. - Kasra in the Deutschian Deadend There are many things you can directly observe, and which are “manifestly true” to you: what you're wearing at the moment, which room of your house you're in, whether the sun has set yet, whether you are running out of breath, whether your parents are alive, whether you feel a piercing pain in your back, whether you feel warmth in your palms—and so on and so forth. These are not perfectly certain absolute truths about reality, and there's always more to know about them—but it is silly to claim that we have absolutely no claim on their truth either. I also think there are even such “obvious truths” in the realm of science—like the claim that the earth is not flat, that your body is made of cells, and that everyday objects follow predictable laws of motion. - Kasra in the Deutschian Deadend Deutsch writes: Some philosophical arguments, including the argument against solipsism, are far more compelling than any scientific argument. Indeed, every scientific argument assumes the falsity not only of solipsism, but also of other philosophical theories including any number of variants of solipsism that might contradict specific parts of the scientific argument. There are two different mistakes happening here. 
First, what Deutsch is doing is assuming a strict logical dependency between any one piece of our knowledge and every other piece of it. He says that our knowledge of science (say, of astrophysics) implicitly relies on other philosophical arguments about solipsism, epistemology, and metaphysics. But anyone who has thought about the difference between philosophy and science recognizes that in practice they can be studied and argued about independently. We can make progress on our understanding of celestial mechanics without making any crucial assumption about metaphysics. We can make progress studying neurons without solving the hard problem of consciousness or the question of free will. - Kasra in the Deutschian Deadend, quoting Deutsch on Solipsism At that time I learnt from Popper that it was not scientifically disgraceful to have one's hypothesis falsified. That was the best news I had had for a long time. I was persuaded by Popper, in fact, to formulate my electrical hypotheses of excitatory and inhibitory synaptic transmission so precisely and rigorously that they invited falsification - and, in fact, that is what happened to them a few years later, very largely by my colleagues and myself, when in 1951 we started to do intra- cellular recording from motoneurones. Thanks to my tutelage by Popper, I was able to accept joyfully this death of the brain-child which I had nurtured for nearly two decades and was immediately able to contribute as much as I could to the chemical transmission story which was the Dale and Loewi brain-child. - John C. Eccles on Popper, All Life is Problem Solving, p.12 In order to state the problem more clearly, I should like to reformulate it as follows. We may distinguish here between three types of theory. First, logical and mathematical theories. Second, empirical and scientific theories. Third, philosophical or metaphysical theories. -Popper on the "hierarchy of reliability", C&R p.266 Socials Follow us on Twitter at @IncrementsPod, @BennyChugg, @VadenMasrani Come join our discord server! DM us on twitter or send us an email to get a supersecret link Become a patreon subscriber here (https://www.patreon.com/Increments). Or give us one-time cash donations to help cover our lack of cash donations here (https://ko-fi.com/increments). Click dem like buttons on youtube (https://www.youtube.com/channel/UC_4wZzQyoW4s4ZuE4FY9DQQ) Are you a solipsist? If so, send yourself an email over to incrementspodcast@gmail.com. Special Guest: Kasra.

The Modern Art Notes Podcast
Kota Ezawa, Amy Pleasant

The Modern Art Notes Podcast

Play Episode Listen Later Jan 31, 2025 71:42


Episode No. 691 features artists Kota Ezawa and Amy Pleasant. The Fort Mason Center for Arts & Culture is presenting "Kota Ezawa: Here and There - Now and Then," an investigation into the creation of memory in the Bay Area and nationally, through March 9. The exhibition, organized in collaboration with the San Francisco Museum of Modern Art, features Ezawa and Julian Brave NoiseCat's Alcatraz Is an Idea (2024), and Merzbau 1, 2, 3 (2021), and Ursonate (2022), which were among 11 Ezawas recently acquired by SFMOMA. "Ezawa" was curated by Frank Smigiel. Fort Mason will publish a catalogue on the closing weekend. SFMOMA is showing Ezawa's National Anthem (2018) in "Count Me In" through April 27. Ezawa's work has been featured in solo exhibitions at many museums, including the Baltimore Museum of Art, the Chrysler Museum of Art, Norfolk, VA; the Buffalo AKG Art Museum; the Vancouver Art Gallery, Canada; and the Saint Louis Art Museum. His work is in the collections of most major US art museums, and in museums in seven other countries. Pleasant is included in "Synchronicities: Intersecting Figuration with Abstraction" at the Bemis Center for Contemporary Arts, Omaha. The exhibition examines some of the ways in which nine artists have recently navigated the space between abstraction and figuration. "Synchronicities" was curated by Rachel Adams, and is on view through May 4. Pleasant's work is also on view at The Carnegie, Covington, KY in "Southern Democratic" through February 15, and in "Vivid: A Fresh Take" at the Hunter Museum of American Art, Chattanooga, TN through June 1. Pleasant has been included in exhibitions at the Knoxville Museum of Art, the Montgomery (Ala.) Museum of Fine Arts, the Weatherspoon Museum of Art, University of North Carolina, Greensboro, and more. Instagram: Amy Pleasant, Tyler Green.

Urgency of Change - The Krishnamurti Podcast
Krishnamurti on Abstraction

Urgency of Change - The Krishnamurti Podcast

Play Episode Listen Later Jan 29, 2025 63:39


‘There is no abstraction, there is only 'what is', there is only the seeing. And when you see, you act.' This episode on Abstraction has three sections. The first extract (2:35) is from Krishnamurti's first talk in Bombay 1974, and is titled: Abstractions, Conclusions and Ideas. The second extract (29:02) is from the second talk at Brockwood Park in 1973, and is titled: Why Does the Mind Draw Abstractions? The final extract in this episode (57:02) is from Krishnamurti's fourth talk in New Delhi 1964, and is titled: Fear Is Not an Abstraction. Each episode of the Krishnamurti podcast is based on a significant theme of his talks. Extracts have been carefully selected to represent Krishnamurti's different approaches to these universal and timeless topics. This episode's theme is Knowing. Upcoming themes are Grief and Loss, Mechanical Living and Talent. This is a podcast from Krishnamurti Foundation Trust, based at Brockwood Park in Hampshire, UK. Brockwood is also home to Brockwood Park School, a unique international boarding school offering a personalised holistic education. It is deeply inspired by Krishnamurti's teaching, which encourages academic excellence, self-understanding, creativity and integrity. Please visit brockwood.org.uk for more information. You can also find our regular Krishnamurti quotes and videos on Instagram, TikTok and Facebook at Krishnamurti Foundation Trust. If you enjoy the podcast, please leave a review or rating on your podcast app.

Modern Web
Backend Abstractions, Serverless Patterns, and Why It's Okay to Start Learning with Frameworks

Modern Web

Play Episode Listen Later Jan 29, 2025 33:14


In this episode of the Modern Web Podcast, Rob Ocel, Danny Thompson, Adam Rackis, and Brandon Mathis discuss the role of abstractions in software development. They explore frontend tools like React and SolidJS, backend abstractions like serverless platforms, and the importance of understanding patterns and learning through mistakes. The group also highlights emerging trends for 2025, including opportunities in platform plugins and developer marketplaces. Key Points for the Episode: The Role of Abstractions in Development: The panel discusses the benefits and challenges of abstractions in software development, emphasizing the importance of understanding underlying systems to avoid over-reliance on tools like React hooks and serverless platforms. Learning Through Experimentation: Personal experiences with tools like Advent of Code, exploring new languages like Swift and Rust, and experimenting with new frameworks like SolidJS highlight the importance of hands-on learning and stepping outside comfort zones. Platform Opportunities: A growing interest in building apps and plugins on established platforms like Stripe, Zoom, and Chrome Extensions showcases untapped opportunities for developers to create impactful solutions and monetize their skills. Chapters 0:00 - The Potential of Plugins and Platforms 0:42 - Welcome to the Modern Web Podcast 0:47 - Introducing the Hosts and Guests 1:19 - Holiday Projects and Side Gigs 1:31 - Danny's Speedrun of a New Platform 2:07 - Adam's Holiday Reading List 3:38 - Brandon's Advent of Code Challenge in Swift and Rust 5:01 - Learning New Programming Languages Through Challenges 6:52 - Discussion on Abstractions in Software Development 7:10 - The Balance Between Abstractions and Understanding the Basics 8:56 - Learning Through Experience: The Importance of Stepping on Rakes 9:46 - React's Role in Frontend Development and Its Critics 10:39 - The Evolution of Frontend and Backend Abstractions 12:09 - The Impact of Serverless and Cloud Platforms 13:31 - Misuse of Abstractions and Overcomplicated Code 14:27 - The Common Pitfalls of React Hooks Misuse 15:29 - Overuse of `useEffect` and Its Performance Implications 16:41 - Learning from Industry Experts: Insights from Ben Lesh 17:53 - The Evolution of the Web from Static Documents to Interactive Applications 19:04 - The Role of Abstractions in Backend Development and Serverless Adoption 21:06 - Advice for Developers on Understanding Patterns and Abstractions 22:21 - Sponsor Message: This Dot Labs 22:27 - Looking Ahead to 2025: Technologies and Trends 22:43 - Excitement Around SolidJS and Signals-Based Frameworks 23:29 - The Growing Ecosystem Around SolidJS and TanStack Router 24:48 - Insights from a Conversation with Ryan Carniato 25:19 - Interest in TanStack Start and React 19 Features 26:09 - Danny Learning Spanish and Coding Challenges 27:16 - Exploring New Platforms for Side Projects and Monetization 27:55 - The Untapped Potential in Plugin and App Store Ecosystems 29:01 - Case Study: Monetization through Small Chrome and Office Extensions 30:09 - Growth of Developer Marketplaces (Stripe, Slack, Shopify, Zoom) 31:06 - The Challenge of Getting Projects in Front of Users 32:03 - Opportunities in Game Modding and Twitch Extensions 32:32 - Closing Thoughts and Future Podcast Episodes 32:45 - Sponsor Message and Where to Find the Podcast Online Follow the crew on Twitter and Linkedin: Rob Twitter: https://x.com/robocell Rob Linkedin:   / robocel   Danny Twitter: https://x.com/DThompsonDev Danny Linkedin:   / 
dthompsondev   Adam Twitter: https://x.com/AdamRackis Adam Linkedin:   / adam-rackis-5b655a8   Brandon Twitter: https://x.com/BrandonMathis Brandon Linkedin:   / mathisbrandon   Sponsored by This Dot: thisdot.co
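The hooks chapters above (misuse of React hooks, overuse of useEffect) refer to a pattern many React teams run into: storing derived data in state and syncing it with an effect. The sketch below is illustrative only; the component and prop names are invented rather than taken from the episode, and it simply contrasts that antipattern with deriving the value during render.

```tsx
import React, { useEffect, useMemo, useState } from "react";

type Item = { id: number; name: string };

// Antipattern: derived state kept in useState and synced via useEffect.
// Every keystroke renders once, runs the effect, then renders a second time.
function FilteredListWithEffect({ items }: { items: Item[] }) {
  const [query, setQuery] = useState("");
  const [filtered, setFiltered] = useState<Item[]>(items);

  useEffect(() => {
    setFiltered(items.filter((item) => item.name.includes(query)));
  }, [items, query]);

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ul>{filtered.map((item) => <li key={item.id}>{item.name}</li>)}</ul>
    </>
  );
}

// Simpler: derive the list while rendering; memoize only if profiling shows a need.
function FilteredList({ items }: { items: Item[] }) {
  const [query, setQuery] = useState("");
  const filtered = useMemo(
    () => items.filter((item) => item.name.includes(query)),
    [items, query]
  );

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ul>{filtered.map((item) => <li key={item.id}>{item.name}</li>)}</ul>
    </>
  );
}
```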

Crazy Wisdom
Episode #430: From Sci-Fi to Reality: The Human Side of AI and Its Global Impact

Crazy Wisdom

Play Episode Listen Later Jan 27, 2025 64:39


In this episode of Crazy Wisdom, Stewart Alsop sits down with Diego Basch, a consultant in artificial intelligence with roots in San Francisco and Buenos Aires. Together, they explore the transformative potential of AI, its unpredictable trajectory, and its impact on everyday life, work, and creativity. Diego shares insights on AI's role in reshaping tasks, human interaction, and global economies while touching on his experiences in tech hubs like San Francisco and Buenos Aires. For more about Diego's work and thoughts, you can find him on LinkedIn or follow him on Twitter @dbasch where he shares reflections on technology and its fascinating intersections with society. Check out this GPT we trained on the conversation!
Timestamps
00:00 Introduction to the Crazy Wisdom Podcast
00:20 Excitement and Uncertainty in AI
01:07 Technology's Impact on Daily Life
02:23 The Evolution of Social Networking
02:43 AI and Human Interaction
03:53 The Future of Writing in the Age of AI
05:27 Argentina's Unique Linguistic Creativity
06:15 AI's Role in Argentina's Future
11:45 Cybersecurity and AI Threats
20:57 The Evolution of Coding and Abstractions
31:59 Troubleshooting Semantic Search Issues
32:30 The Role of Working Memory in Coding
34:46 Human Communication vs. AI Translation
35:46 AI's Impact on Education and Job Redundancy
37:37 Rebuilding Civilization and Knowledge Retention
39:54 The Resilience of Global Systems
41:32 The Singularity Debate
45:01 AI Integration in Argentina's Economy
51:54 The Evolution of San Francisco's Tech Scene
58:48 The Future of AI Agents and Security
01:03:09 Conclusion and Contact Information
Key Insights
AI's Transformative Potential: Diego Basch emphasizes that artificial intelligence feels like a sci-fi concept materialized, offering tools that could augment human life by automating repetitive tasks and improving productivity. The unpredictability of AI's trajectory is part of what makes it so exciting.
Human Adaptation to Technology: The conversation highlights how the layering of technological abstractions over time has allowed more people to interact with complex systems without needing deep technical knowledge. This trend is accelerating with AI, making once-daunting tasks more accessible even to non-technical individuals.
The Role of Creativity in the AI Era: Diego discusses how creativity, unpredictability, and humor remain uniquely human strengths that current AI struggles to replicate. These qualities could play a significant role in maintaining human relevance in an AI-enabled world.
The Evolving Nature of Coding: AI is changing how developers work, reducing the need for intricate coding knowledge while enabling a focus on solving more human-centric problems. While some coding skills may atrophy, understanding fundamental principles remains essential for adapting to new tools.
Argentina's Unique Position: The discussion explores Argentina's potential to emerge as a significant player in AI due to its history of technological creativity, economic unpredictability, and resourcefulness. The parallels with its early adoption of crypto demonstrate a readiness to engage with transformative technologies.
AI and Human Relationships: An AI-enabled economy might allow humans to focus more on meaningful, human-centric work and relationships as machines take over repetitive and mechanical tasks. This could redefine the value humans derive from work and their interactions with technology.
Risks and Opportunities with AI Agents: The development of autonomous AI agents raises significant security and ethical concerns, such as ensuring they act responsibly and are not exploited by malicious actors. At the same time, these agents promise unprecedented levels of efficiency and autonomy in managing real-world tasks.

PodRocket - A web development podcast from LogRocket
Universal React with Mo Khazali

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Jan 23, 2025 36:09


Mo Khazali, head of mobile and tech lead at Theodo UK, talks about the novel concept of Universal React. He discusses cross-platform development, overcoming performance challenges, and its impact on empowering small development teams to compete with big tech. Links https://x.com/mo__javad https://github.com/mojavad https://www.linkedin.com/in/mohammadkhazali We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Mo Khazali.

Cleopatra's Bling Podcast
Crafting an Aesthetic Life: Art in Practice

Cleopatra's Bling Podcast

Play Episode Listen Later Jan 23, 2025 35:52


Artist Miranda Skoczek transcends the traditional role of painter, creating wall works, co-designing with fashion houses like Gorman and Silk Laundry, and crafting figurines inspired by ancient Pagan traditions of her Slavic ancestry. We share a love of symbols and amulets, and we discussed how to bring their imagery and power into a contemporary context while honouring these traditions and building a successful artistic practice. Miranda is represented by Edwina Corlette gallery in Australia. You can find out more about her artwork, awash with colour and symbolism, here. This episode was live-recorded on Wurundjeri country. Cleopatra's Bling Podcast was produced by Zoltan Fecso and the CB team. Original music by Cameron Alva.

This is My Silver Lining
Time to Think: The Art of Wise Thinking with Dan Kowalski

This is My Silver Lining

Play Episode Listen Later Jan 22, 2025 49:49


In this thought-provoking episode of This is My Silver Lining, I sit down with Dan Kowalski, founder of Plan A Thinking and author of W.I.S.E. Choices at Work: Go From Doubting to DECISIVE When the Clock is Ticking. Dan's mission is to empower individuals and teams to make better decisions through deliberate and disciplined thinking. From his humble beginnings to navigating career transitions and founding his own consulting firm, Dan shares how curiosity, resilience, and a framework for decision-making have shaped his journey. We explore the importance of creating space for reflection, asking the right questions, and understanding the downside of every choice. Dan also shares his insights on the evolving role of technology—especially AI—in decision-making and how to strike a balance between speed and thoughtful consideration in a fast-paced world. Whether you're a leader, a professional facing tough decisions, or simply someone striving to think more effectively, Dan's wise and practical approach will leave you inspired.
Episode Links and Resources:
Plan A Thinking
W.I.S.E. Choices at Work: Go From Doubting to DECISIVE When the Clock is Ticking
Dialogue Mapping: Building Shared Understanding of Wicked Problems, Jeff Conklin
Visual Thinking: The Hidden Gifts of People Who Think in Pictures, Patterns, and Abstractions, Temple Grandin
Think Again, Adam Grant
I Never Thought of It That Way: How to Have Fearlessly Curious Conversations in Dangerously Divided Times, Mónica Guzmán
Support this podcast by subscribing and reviewing. Music is considered “royalty-free” and discovered on Audio Blocks. Technical Podcast Support by: Jon Keur at Wayfare Recording Co. © 2025 Silver Linings Media LLC. All Rights Reserved.

Urgency of Change - The Krishnamurti Podcast
Krishnamurti on Trust and Faith

Urgency of Change - The Krishnamurti Podcast

Play Episode Listen Later Jan 15, 2025 63:26


‘We are incapable, so we look, we search, we find somebody to tell us what to do, and we put our faith in those people. But faith and trust have no value.' This episode on Trust and Faith has five sections. The first extract (2:24) is from the second question and answer meeting in Saanen 1980, and is titled Trust and Certainty. The second extract (13:42) is from Krishnamurti's fifth talk in Madras 1964, and is titled: Trust and Faith Have No Value. The third extract (22:58) is from the second talk in Bombay 1962, and is titled: There Is Nothing You Can Trust. The fourth extract (42:18) is from the first question and answer meeting in Bombay 1984, and is titled: What Is Faith? The final extract in this episode (48:03) is from Krishnamurti's third talk in Colombo 1980, and is titled: Faith and Suffering. Each episode of the Krishnamurti podcast is based on a significant theme of his talks. Extracts have been carefully selected to represent Krishnamurti's different approaches to these universal and timeless topics. Upcoming themes are Abstraction, Mechanical Living, and Grief and Loss. This is a podcast from Krishnamurti Foundation Trust. Please visit the official YouTube channel for hundreds of full-length video and audio recordings of Krishnamurti's talks and discussions. In addition, the Foundation's own channel features a large collection of carefully selected clips. You can also find our regular Krishnamurti quotes and videos on Instagram, TikTok and Facebook at Krishnamurti Foundation Trust. If you enjoy the podcast, please leave a review or rating on your podcast app.

Platemark
s3e72 on color and abstraction with Jonathan Higgins, owner of Manneken Press

Platemark

Play Episode Listen Later Jan 14, 2025 71:01


In this episode of Platemark, Jonathan Higgins discusses his journey as the owner and master printer of Manneken Press, established in 2000 in Bloomington, Illinois. We talk about his early life in Berkeley, California, his initial interest in art and ceramics, and his transition to printmaking. After exploring lithography and working for various artists and print workshops in New York, including at Galamander Press with Randy Hemminghaus, he eventually founded Manneken Press. Jonathan shares insights into the operational strategies, collaborative projects with artists, challenges with photogravure, and his approach to publishing and curating prints. He also touches on the impact of COVID-19 on his work processes and future projects, while emphasizing the importance of selecting artists whose work resonates with him. The interview concludes with a reflection on the evolution of Manneken Press and Jonathan's current focus and achievements.   Episode photo by: Matt Shrier https://mannekenpress.com/ Blog: https://mannekenpress.com/news-the-manneken-press-blog/ Artsy: https://www.artsy.net/partner/manneken-press Printed Editions: https://www.printed-editions.com/gallery/manneken-press/ IG: @mannekenpress IG: jonathiggins   Platemark website Sign-up for Platemark emails Leave a 5-star review Support the show Get your Platemark merch Check out Platemark on Instagram Join our Platemark group on Facebook   Philip Van Keuren (American, born 1948). Snowstorm, 2016. Photogravure. 14 x 18 in. Published by Manneken Press. Courtesy of Manneken Press. Philip Van Keuren (American, born 1948). Tulips, 2019. Photogravure. 18 x 14 in. Published by Manneken Press. Courtesy of Manneken Press. Manneken Pis, Brussels, Belgium. Manneken Pis, Brussels, Belgium. Shrine to Manneken Pis at Manneken Press. Courtesy of Manneken Press. Rupert Deese (American, born 1952). Array 1000/Dark Blue, 2011. Woodcut. 45 x 45. Published by Manneken Press. Courtesy of Manneken Press. Ted Kincaid (American, born 1966). Nest 920, 2008. Etching. Plate: 20 x 16 in.; sheet: 25 x 21 in. Published by Manneken Press. Courtesy of Manneken Press. Matt Magee (American, born France, 1961). (L–R) Bugs, Drugs, Plugs, 2021. Set of three aquatints. Each: 21 1/2 x 17 in. Published by Manneken Press. Courtesy of Manneken Press. Matt Magee (American, born France, 1961). Lunar Lantern, 2024. Aquatint. 23 1/2 x 17 in. Published by Manneken Press. Courtesy of Manneken Press. Matt Magee (American, born France, 1961). Mind Gap, 2024. Aquatint. 23 1/2 x 17 in. Published by Manneken Press. Courtesy of Manneken Press. Matt Magee (American, born France, 1961). Winter Pool, 2024. Aquatint. 23 1/2 x 17 in. Published by Manneken Press. Courtesy of Manneken Press. Preparing to print the plate for the watercolor monotype Foursquare Foresworn by Judy Ledgerwood (American, born 1959) at Manneken Press. Courtesy of Manneken Press. Judy Ledgerwood (American, born 1959). Foursquare Foresworn, 2020. Watercolor monotype. 22 x 30 in. Published by Manneken Press. Courtesy of Manneken Press. Judy Ledgerwood (American, born 1959). Detail of watercolor monotype Old Glory, right after printing. Published by Manneken Press. Courtesy of Manneken Press. Judy Ledgerwood (American, born 1959). Old Glory, 2020. Watercolor monotype. 22 x 30 in. Published by Manneken Press. Courtesy of Manneken Press. Judy Ledgerwood (American, born 1959). Inner Vision, 2020. Suite of 9 watercolor monotypes. Published by Manneken Press. Courtesy of Manneken Press. 
Jill Moser (American, born 1956) working on the plates for Chroma Six in her Long Island City studio. Courtesy of Manneken Press. Jill Moser (American, born 1956). Chroma Six, 2019. Suite of six color aquatints. Each: 23 1/2 x 20 in. Published by Manneken Press. Courtesy of Manneken Press. Jason Karolak (American, born 1974) stopping out a copper plate with asphaltum at Manneken Press. Courtesy of Manneken Press. Jason Karolak (American, born 1974). Detail of a plate with soap ground applied prior to etching at Manneken Press. Courtesy of Manneken Press. Jason Karolak (American, born 1974) plates with soap ground applied prior to etching at Manneken Press. Courtesy of Manneken Press. Jason Karolak (American, born 1974). The first plate of Prospect inked and ready to print at Manneken Press. Courtesy of Manneken Press. Jason Karolak (American, born 1974). The second plate of Prospect inked and ready to print at Manneken Press. Courtesy of Manneken Press. Jonathan Higgins pulling a color proof of Prospect, an etching by Jason Karolak (American, born 1974) at Manneken Press. Courtesy of Manneken Press. Jonathan Higgins pulling a color proof of Prospect with all 10 colors, an etching by Jason Karolak (American, born 1974) at Manneken Press. Courtesy of Manneken Press. Jason Karolak (American, born 1974). Working proof of Prospect, 2024. 2-plate aquatint. Plate:  21 x 18 in.; sheet: 26 1/2 x 23 in. Published by Manneken Press. Courtesy of Manneken Press.

Bethel Baptist Church in Wilmington, DE
Don't Settle for Abstractions (Psalm 23)

Bethel Baptist Church in Wilmington, DE

Play Episode Listen Later Jan 11, 2025 42:26


Riverside Chats
219. Artist Carmen Winant on "The last safe abortion"

Riverside Chats

Play Episode Listen Later Jan 11, 2025 51:00


Carmen Winant is an artist, photographer, writer, and art professor at The Ohio State University. Her work uses installation and collage to examine survival and revolt through a feminist lens. Her traveling exhibition “The last safe abortion” opens Jan. 18 at Bemis Center for Contemporary Arts. “The last safe abortion” is an exploration of women's health clinics and abortion providers, with a particular focus on the Midwest. The installation is composed of photos of behind-the-scenes work related to reproductive healthcare, such as answering phones, sterilizing equipment, conducting training sessions and scheduling appointments. Bemis' Rachel Adams curated the exhibition, which was organized by the Minneapolis Institute of Art. “The last safe abortion” will be displayed alongside “Synchronicities: Intersecting Figuration with Abstraction.” The installations will run concurrently through May 4. In this episode, Winant is in conversation with Maria Corpuz about the origins of “The last safe abortion,” the logistics of how she put it together, and how Winant's art has been affected by the overturning of Roe v. Wade in 2022.

Engines of Our Ingenuity
The Engines of Our Ingenuity 1310: Math and Abstraction

Engines of Our Ingenuity

Play Episode Listen Later Jan 10, 2025 3:40


Episode: 1310 Redeeming math and abstraction in our schools.  Today, we ask why American math and science scores are slipping.

Machine Learning Street Talk
Francois Chollet - ARC reflections - NeurIPS 2024

Machine Learning Street Talk

Play Episode Listen Later Jan 9, 2025 86:46


François Chollet discusses the outcomes of the ARC-AGI (Abstraction and Reasoning Corpus) Prize competition in 2024, where accuracy rose from 33% to 55.5% on a private evaluation set. SPONSOR MESSAGES: *** CentML offers competitive pricing for GenAI model deployment, with flexible options to suit a wide range of models, from small to large-scale deployments. https://centml.ai/pricing/ Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. Are you interested in working on reasoning, or getting involved in their events? They are hosting an event in Zurich on January 9th with the ARChitects, join if you can. Goto https://tufalabs.ai/ *** Read about the recent result on o3 with ARC here (Chollet knew about it at the time of the interview but wasn't allowed to say): https://arcprize.org/blog/oai-o3-pub-breakthrough TOC: 1. Introduction and Opening [00:00:00] 1.1 Deep Learning vs. Symbolic Reasoning: François's Long-Standing Hybrid View [00:00:48] 1.2 “Why Do They Call You a Symbolist?” – Addressing Misconceptions [00:01:31] 1.3 Defining Reasoning 3. ARC Competition 2024 Results and Evolution [00:07:26] 3.1 ARC Prize 2024: Reflecting on the Narrative Shift Toward System 2 [00:10:29] 3.2 Comparing Private Leaderboard vs. Public Leaderboard Solutions [00:13:17] 3.3 Two Winning Approaches: Deep Learning–Guided Program Synthesis and Test-Time Training 4. Transduction vs. Induction in ARC [00:16:04] 4.1 Test-Time Training, Overfitting Concerns, and Developer-Aware Generalization [00:19:35] 4.2 Gradient Descent Adaptation vs. Discrete Program Search 5. ARC-2 Development and Future Directions [00:23:51] 5.1 Ensemble Methods, Benchmark Flaws, and the Need for ARC-2 [00:25:35] 5.2 Human-Level Performance Metrics and Private Test Sets [00:29:44] 5.3 Task Diversity, Redundancy Issues, and Expanded Evaluation Methodology 6. Program Synthesis Approaches [00:30:18] 6.1 Induction vs. Transduction [00:32:11] 6.2 Challenges of Writing Algorithms for Perceptual vs. Algorithmic Tasks [00:34:23] 6.3 Combining Induction and Transduction [00:37:05] 6.4 Multi-View Insight and Overfitting Regulation 7. Latent Space and Graph-Based Synthesis [00:38:17] 7.1 Clément Bonnet's Latent Program Search Approach [00:40:10] 7.2 Decoding to Symbolic Form and Local Discrete Search [00:41:15] 7.3 Graph of Operators vs. Token-by-Token Code Generation [00:45:50] 7.4 Iterative Program Graph Modifications and Reusable Functions 8. Compute Efficiency and Lifelong Learning [00:48:05] 8.1 Symbolic Process for Architecture Generation [00:50:33] 8.2 Logarithmic Relationship of Compute and Accuracy [00:52:20] 8.3 Learning New Building Blocks for Future Tasks 9. AI Reasoning and Future Development [00:53:15] 9.1 Consciousness as a Self-Consistency Mechanism in Iterative Reasoning [00:56:30] 9.2 Reconciling Symbolic and Connectionist Views [01:00:13] 9.3 System 2 Reasoning - Awareness and Consistency [01:03:05] 9.4 Novel Problem Solving, Abstraction, and Reusability 10. Program Synthesis and Research Lab [01:05:53] 10.1 François Leaving Google to Focus on Program Synthesis [01:09:55] 10.2 Democratizing Programming and Natural Language Instruction 11. Frontier Models and O1 Architecture [01:14:38] 11.1 Search-Based Chain of Thought vs. Standard Forward Pass [01:16:55] 11.2 o1's Natural Language Program Generation and Test-Time Compute Scaling [01:19:35] 11.3 Logarithmic Gains with Deeper Search 12. 
ARC Evaluation and Human Intelligence [01:22:55] 12.1 LLMs as Guessing Machines and Agent Reliability Issues [01:25:02] 12.2 ARC-2 Human Testing and Correlation with g-Factor [01:26:16] 12.3 Closing Remarks and Future Directions SHOWNOTES PDF: https://www.dropbox.com/scl/fi/ujaai0ewpdnsosc5mc30k/CholletNeurips.pdf?rlkey=s68dp432vefpj2z0dp5wmzqz6&st=hazphyx5&dl=0
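For listeners unsure what "program synthesis" means in the ARC discussion above, the toy sketch below may help. It is not Chollet's method or any competition entry: the grid operations, the task, and the brute-force search are all invented for illustration. The induction idea is simply to search a small DSL for a short program that maps every training input to its output.

```ts
// Toy induction-style search over a tiny, made-up DSL of grid operations.
type Grid = number[][];

const primitives: Record<string, (g: Grid) => Grid> = {
  flipH: (g) => g.map((row) => [...row].reverse()),            // mirror left-right
  flipV: (g) => [...g].reverse(),                              // mirror top-bottom
  transpose: (g) => g[0].map((_, c) => g.map((row) => row[c])),
  incColors: (g) => g.map((row) => row.map((v) => (v + 1) % 10)),
};

const equal = (a: Grid, b: Grid) => JSON.stringify(a) === JSON.stringify(b);
const apply = (program: string[], g: Grid) =>
  program.reduce((acc, op) => primitives[op](acc), g);

// Breadth-first search for a program (op sequence) consistent with all training pairs.
function synthesize(
  train: { input: Grid; output: Grid }[],
  maxLen = 3
): string[] | null {
  let frontier: string[][] = [[]];
  for (let len = 0; len <= maxLen; len++) {
    for (const program of frontier) {
      if (train.every((ex) => equal(apply(program, ex.input), ex.output))) {
        return program;
      }
    }
    frontier = frontier.flatMap((p) =>
      Object.keys(primitives).map((op) => [...p, op])
    );
  }
  return null;
}

// Hypothetical task: every output is the input flipped horizontally.
const task = [
  { input: [[1, 0], [2, 3]], output: [[0, 1], [3, 2]] },
  { input: [[5, 6, 7]], output: [[7, 6, 5]] },
];
console.log(synthesize(task)); // expected: [ "flipH" ]
```

The approaches discussed in the episode (deep-learning-guided synthesis, latent program search, test-time training) are far more sophisticated; this only shows the basic search-for-a-program framing.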

Un Jour dans l'Histoire
Lismonde , peintre et dessinateur belge : un célèbre oublié

Un Jour dans l'Histoire

Play Episode Listen Later Jan 9, 2025 38:43


It is September 1934. In issue 15 of the literary review he had founded the previous year, the revue Tribune, the poet and novelist Jean Groffier, born in Liège, writes: "As a painter, let us say it straight away, Lismonde is not a colorist. Beneath the painter's brush hides, above all, the draughtsman. His colour is sad and anxious, his effects are grey, but of a delicious melancholy. In fact, the landscape artist Lismonde is one of the most interesting personalities of the Belgian pictorial world, and his name will soon reach beyond the limits of our borders." Jules Lismonde, who did not like to be reminded of his first name and therefore signed his works with his surname alone, did indeed achieve fame both inside and outside Belgium's borders. Moving from figuration to abstraction, the artist spans a good part of twentieth-century art history and took part, notably, in the energetic movement of protest and modernisation that would come to be called "La Jeune Peinture Belge". Yet he confessed to "never having been a partisan", feeling "always a little isolated within the discussions and activities that took place". What remains today of Lismonde, somewhat forgotten despite his fame? What is his legacy, his posterity? What does he tell us about our Belgian artistic sensibility? Let us rediscover Lismonde... With us: Anne Hustache, art historian. Topics covered: Jules Lismonde, draughtsman, painter, abstraction. Thank you for listening. Un Jour dans l'Histoire is also broadcast live every weekday from 1:15 pm to 2:30 pm on www.rtbf.be/lapremiere. Find all episodes of Un Jour dans l'Histoire on our Auvio.be platform: https://auvio.rtbf.be/emission/5936. Interested in history? You might also enjoy our other podcasts:
L'Histoire Continue: https://audmns.com/kSbpELw
L'heure H: https://audmns.com/YagLLiK
And its version to listen to as a family, La Mini Heure H: https://audmns.com/YagLLiK
As well as our historical series:
Chili, le Pays de mes Histoires: https://audmns.com/XHbnevh
D-Day: https://audmns.com/JWRdPYI
Joséphine Baker: https://audmns.com/wCfhoEw
La folle histoire de l'aviation: https://audmns.com/xAWjyWC
Les Jeux Olympiques, l'étonnant miroir de notre Histoire: https://audmns.com/ZEIihzZ
Marguerite, la Voix d'une Résistante: https://audmns.com/zFDehnE
Napoléon, le crépuscule de l'Aigle: https://audmns.com/DcdnIUn
Un Jour dans le Sport: https://audmns.com/xXlkHMH
Sous le sable des Pyramides: https://audmns.com/rXfVppv
Don't forget to subscribe to them so you don't miss anything. And if you enjoyed this podcast, please give us stars or leave a comment; it helps us make the show known more widely.

Contain Podcast
Episode 200 - The Abstraction Episode - Pt A *Preview*

Contain Podcast

Play Episode Listen Later Jan 8, 2025 36:16


200th episode special on the history and future of abstraction in social life, art, and more. For the full 4hr 20min episode: Part A Part B Patreon.com/Contain

Urgency of Change - The Krishnamurti Podcast

‘Leisure is extraordinarily important – not to have a mind that is constantly occupied, constantly chattering. It is only in that unoccupied mind a new seed of learning can take place.' This episode on Leisure has five sections. The first extract (2:35) is from Krishnamurti's second talk in Ojai 1977, and is titled: The Importance of Leisure. The second extract (21:06) is from the sixth talk at Rajghat in 1962, and is titled: We Have Very Little Leisure. The third extract (30:48) is from Krishnamurti's seventh talk in Bombay 1964, and is titled: What Will We Do With Our Leisure? The fourth extract (42:18) is from the third talk in Bombay 1966, and is titled: Great Leisure Is Coming. The final extract in this episode (55:28) is from the fifth talk in Bombay 1962, and is titled: Leisure and Laziness. Each episode of the Krishnamurti podcast is based on a significant theme of his talks. Extracts from the archives have been selected to represent Krishnamurti's different approaches to these universal and timelessly relevant topics. Upcoming themes are Trust, Abstraction, Mechanical Living. This is a podcast from Krishnamurti Foundation Trust, based at Brockwood Park in the UK, which is also home to The Krishnamurti Centre. The Centre offers a variety of group retreats, including for young adults. There is also a volunteer programme. The atmosphere at the Centre is one of openness and friendliness, with a sense of freedom to inquire with others and alone. Please visit krishnamurticentre.org.uk for more information. You can also find our regular Krishnamurti quotes and videos on Instagram, TikTok and Facebook at Krishnamurti Foundation Trust. If you enjoy the podcast, please leave a review or rating on your podcast app.

Convergence
Best of 2024: Top Insights on Developer Tools, APIs, SDKs, and Creating Exceptional DevX

Convergence

Play Episode Listen Later Dec 31, 2024 45:38


We compiled our favorite clips on developer tools and developer experience (DevX). We discuss why DevX has become essential for developer-focused companies and how it drives adoption to grow your product. Learn what makes developers a unique and discerning customer base, and hear practical strategies for designing exceptional tools and platforms. Our guests also share lessons learned from their own experiences—whether in creating frictionless integrations, maintaining a strong feedback culture, or enabling internal platform adoption. Through compelling stories and actionable advice, this episode is packed with lessons on how to build products that developers love. Playlist of Full Episodes from This Compilation: https://www.youtube.com/playlist?list=PL31JETR9AR0FV-46VR4G_n6xi4WdXEx-2 Inside the episode... The importance of developer experience and why it's a priority for developer-facing companies. Key differences between building developer tools and end-user applications. How DevX differs from DevRel and the synergy between the two. Metrics for measuring the success of developer tools: adoption, satisfaction, and revenue. Insights into abstraction ladders and balancing complexity and power. Customer research strategies for validating assumptions and prioritizing features. Stripe's culture of craftsmanship and creating “surprisingly great” experiences. The importance of dogfooding and feedback loops in building trusted platforms. Balancing enablement and avoiding gatekeeping in internal platform adoption. Maintaining consistency and quality across APIs, CLIs, and other resources. Mentioned in this episode Stripe Doppler Heroku Abstraction ladders Developer feedback loops Unlock the full potential of your product team with Integral's player coaches, experts in lean, human-centered design. Visit integral.io/convergence for a free Product Success Lab workshop to gain clarity and confidence in tackling any product design or engineering challenge. Subscribe to the Convergence podcast wherever you get podcasts including video episodes to get updated on the other crucial conversations that we'll post on YouTube at youtube.com/@convergencefmpodcast Learn something? Give us a 5 star review and like the podcast on YouTube. It's how we grow.   Follow the Pod Linkedin: https://www.linkedin.com/company/convergence-podcast/ X: https://twitter.com/podconvergence Instagram: @podconvergence
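The "abstraction ladders" idea mentioned above is easiest to see in code: offer a high-level helper for the common case while leaving the lower-level primitive it is built on available as an escape hatch. The sketch below is hypothetical; the SDK, endpoint, and method names are invented and not taken from Stripe, Doppler, or any guest's product.

```ts
// A two-rung abstraction ladder for an imaginary payments SDK (assumes a fetch-capable runtime).
interface HttpClient {
  request(method: string, path: string, body?: unknown): Promise<unknown>;
}

class PaymentsClient implements HttpClient {
  constructor(private apiKey: string, private baseUrl = "https://api.example.test") {}

  // Bottom rung: full control over method, path, and payload.
  async request(method: string, path: string, body?: unknown): Promise<unknown> {
    const res = await fetch(`${this.baseUrl}${path}`, {
      method,
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        "Content-Type": "application/json",
      },
      body: body === undefined ? undefined : JSON.stringify(body),
    });
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return res.json();
  }

  // Top rung: one call for the common case, built on the primitive below it.
  charge(amountCents: number, currency: string, customerId: string): Promise<unknown> {
    return this.request("POST", "/v1/charges", { amountCents, currency, customerId });
  }
}

// Most integrations stay on the top rung...
const client = new PaymentsClient("sk_test_123");
void client.charge(1999, "usd", "cus_42");
// ...but nothing stops a power user from dropping down a rung when the helper doesn't fit.
void client.request("POST", "/v1/charges", {
  amountCents: 1999,
  currency: "usd",
  metadata: { promo: "spring" },
});
```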

The Unfinished Print
Jacek Machowski : Printmaker - On The Edge Of Abstraction

The Unfinished Print

Play Episode Listen Later Dec 31, 2024 58:07


When it comes to immersing oneself in the understanding of mokuhanga, Jacek Machowski is dedicated to constantly deepening his knowledge and practice of the art form. His exploration of mokuhanga is both inspiring and dynamic, as he continually shares, creates, and evolves his expertise and approach to this wonderful art form. I speak with mokuhanga printmaker, educator, and mokuhanga explorer Jacek Machowski. Jacek's work is a blend of experimentation, tradition, and excitement. We discuss his journey into mokuhanga, his deep dive into its history, techniques, and philosophies, and how he has dedicated himself to uncovering the intricacies of the art form. Jacek also shares insights into his process of making and testing his own tools, continually pushing their boundaries. Additionally, we explore his own mokuhanga prints, the choices behind his artistic methods, and the workshop he leads. I would like to thank Jacek's translator, Małgorzata Ptasiński, for her invaluable help with translation. In this episode, you'll notice that Jacek's voice is intermittent, with Gosia speaking for most of the discussion. This approach was chosen to ensure smoother listening and a better flow. Please follow The Unfinished Print and my own mokuhanga work on Instagram @andrezadoroznyprints or email me at andrezadorozny@gmail.com. Notes: may contain a hyperlink. Simply click on the highlighted word or phrase. Artists' works follow after the note if available. Pieces are mokuhanga unless otherwise noted. Dimensions are given if known. Print publishers are given if known. Jacek Machowski - website, Etsy, Instagram, YouTube Sakeda - Senjafuda (2023) 1.97" x 5.90" senjafuda - are votive slips attributed to Buddhism in Japan. These slips of paper were pasted on temples in Japan. The worshipper's name was written on the senjafuda in order for people to see that they had visited said shrine. The paper senjafuda were popular in the Edo Period (1603-1868). Tokyo and Kyoto senjafuda had various differences. ex-libris - a decorative label or stamp placed inside a book to indicate its ownership. It usually features the owner's name, initials, or a personalized design, often with artistic or symbolic elements that reflect the owner's personality, interests, or profession. MI Lab - is a mokuhanga artists' residency located in Kawaguchi-ko, near Mount Fuji. More info can be found here. nori - is a type of paste made from starch. It is used when making mokuhanga. You can make nori from any type of material made from starch. For instance, paste can be made with tapioca, rice, corn, even potato. You can purchase nori pretty much anywhere but making it is more environmentally friendly. Laura Boswell has a great recipe here. binder - refers to the substance which holds pigment particles together and adheres them to a surface, such as paper, wood, or canvas, for prints or paints. The binder transforms dry pigments into a usable medium and also makes them more durable. embossing - refers to a technique where the paper is pressed into the carved woodblocks, creating a raised or textured effect on the printed surface. This technique adds a three-dimensional quality to the print by making certain areas of the paper slightly elevated. linocut - A linocut is a relief or block print type, similar to woodblock printing. The artist carves an image into a linoleum block, printing what's left.
intaglio printing - is a printing method, also called etching, using metal plates such as zinc, and copper, creating “recessed” areas which are printed with ink on the surface of these "recesses.” More info, here. The MET has info, here.   mica - in mokuhanga, mica (kirazuri) is used to add a shimmering, reflective effect to prints. Mica powder is typically mixed with glue and applied to the surface of the print in areas where a subtle sparkle or luminous texture is desired, often to highlight details such as clothing, water, or the sky. This technique gives the print a luxurious quality and enhances the visual depth. Historically, mica was used in ukiyo-e prints to elevate the status of the work, and it continues to be used by contemporary printmakers for its unique aesthetic appeal. kirazuri -  is a technique in woodblock printing using mica to add a sheen to the print. Mokuhanga artist Marcia Guetschow has written about kirazuri on her website, here.  David Bull - is a Canadian woodblock printmaker, and educator who lives and works in Japan. His love of mokuhanga has almost singlehandedly promoted the art form around the world. His company, Mokuhankan, has a brick and mortar store in Asakusa, Tōkyō, and online, here.  Forest In Spring (2008)  Wojciech Tylbor-Kubrakiewicz - a part of the Faculty of Graphics of the Academy of Fine Arts in Warsaw. His work focuses on everyday life, travel and memory. His works are in intaglio, relief such as mokuhanga, and serigraphy.  Augenblick 70 x 100 cm (2023) linocut Tomasz Kawełczyk - is a mokuhanga artist and deputy dean at the Faculty of Fine Arts of the Academy of Fine Arts in Łódź, Poland. He is also a lecturer and organizer of workshops. Tomasz has also worked in mokuhanga as well as holding workshops accompanying the "Road to Edo," held at the National Museum in Warsaw from February 25 - May 7, 2017. His work in mokuhanga has been focused on creating prints by using local tools and materials found in Poland.  Dariusz Kaca - is a relief printmaker and professor at the Academy of Fine arts  in Łódź, Poland. He works in linocut and mokuhanga.  Nocturn I - 40cm x 40cm, linocut Marta Bożyk - lecturer and researcher at the Academy of Fine Arts in Krakow, Poland.   © Popular Wheat Productions opening and closing credit -Ruby My Dear as performed by Roy Hargrove, originally by Thelonius Monk. (1990) RCA  logo designed and produced by Douglas Batchelor and André Zadorozny  Disclaimer: Please do not reproduce or use anything from this podcast without shooting me an email and getting my express written or verbal consent. I'm friendly :)  Слава Українi If you find any issue with something in the show notes please let me know. ***The opinions expressed by guests in The Unfinished Print podcast are not necessarily those of André Zadorozny and of Popular Wheat Productions.***        

this IS research
Awards under the Christmas Tree

this IS research

Play Episode Listen Later Dec 25, 2024 32:31


Look at what Santa dropped when he came down the chimney last night. A bunch of valuable ThisISResearch Best Paper Awards! As we do at the end of every year, we look back at the finest information systems scholarship our field has produced this year, and we pick some of our favorite papers that we want to give an award to. Like in previous years, we recognize three different kinds of best papers – a paper that is innovative in its use of research methods, a paper that is a fine example of elegant scholarship, and a paper that is trailblazing in the sense that it starts new conversations in our field.
References
Pujol Priego, L., & Wareham, J. (2023). From Bits to Atoms: White Rabbit at CERN. MIS Quarterly, 47(2), 639-668.
Recker, J., Zeiss, R., & Mueller, M. (2024). iRepair or I Repair? A Dialectical Process Analysis of Control Enactment on the iPhone Repair Aftermarket. MIS Quarterly, 48(1), 321-346.
Seidel, S., Frick, C. J., & vom Brocke, J. (2025). Regulating Emerging Technologies: Prospective Sensemaking through Abstraction and Elaboration. MIS Quarterly, 49.
Abbasi, A., Somanchi, S., & Kelley, K. (2025). The Critical Challenge of using Large-scale Digital Experiment Platforms for Scientific Discovery. MIS Quarterly, 49.
Lindberg, A., Schecter, A., Berente, N., Hennel, P., & Lyytinen, K. (2024). The Entrainment of Task Allocation and Release Cycles in Open Source Software Development. MIS Quarterly, 48(1), 67-94.
Kitchens, B., Claggett, J. L., & Abbasi, A. (2024). Timely, Granular, and Actionable: Designing a Social Listening Platform for Public Health 3.0. MIS Quarterly, 48(3), 899-930.
Chen, Z., & Chan, J. (2024). Large Language Model in Creative Work: The Role of Collaboration Modality and User Expertise. Management Science, 70(12), 9101-9117.
Matherly, T., & Greenwood, B. N. (2024). No News is Bad News: The Internet, Corruption, and the Decline of the Fourth Estate. MIS Quarterly, 48(2), 699-714.
Morse, L., Teodorescu, M., Awwad, Y., & Kane, G. C. (2022). Do the Ends Justify the Means? Variation in the Distributive and Procedural Fairness of Machine Learning Algorithms. Journal of Business Ethics, 181(4), 1083-1095.
Hansen, S., Berente, N., & Lyytinen, K. (2009). Wikipedia, Critical Social Theory, and the Possibility of Rational Discourse. The Information Society, 25(1), 38-59.
Habermas, J. (1984). Theory of Communicative Action, Volume 1: Reason and the Rationalization of Society. Heinemann.

Un Jour dans l'Histoire
Hilma Af Klint : pionnière de l'abstraction, entre féminisme et spiritisme

Un Jour dans l'Histoire

Play Episode Listen Later Dec 10, 2024 37:59


We are in Stockholm, in 1937. It is during a lecture at the Anthroposophical Society, that is, a society of spiritual philosophy, that the painter Hilma af Klint encourages its members to make use of her works. She declares to her audience: "Each time I managed to execute one of my drawings, my understanding of man, of animals, of plants, of minerals (...) of creation in general became clearer. I understood that I was liberated, and that I was rising beyond my own, far more limited consciousness. Painting is capable of transmitting this vision. In a certain way, a painter or a musician makes it easier for us to be in contact with other souls." Hilma af Klint occupies a singular place in the history of art: her involvement in spiritualist circles would lead her down the path of abstraction and make her a pioneer. The occult sciences enjoyed great success at the end of the nineteenth century, notably among women, for whom they represented a space of freedom and a means of emancipation. Thus, during a séance, a spirit is said to have asked the artist to create a pictorial cycle. The result would be "The Paintings for the Temple", a set of 193 works that echo the great theosophical principles of harmony and the oneness of the world, beyond the masculine and the feminine, the visible and the invisible. Let us venture, without fear, into a strange world, at once of the past and very much of the present: that of the Swedish artist Hilma af Klint... With us: Eliane Van den Ende, historian. Exhibition at the Guggenheim Museum, Bilbao, until February 2, 2025. Topics covered: Hilma af Klint, abstraction, feminism, spiritualism, Stockholm, painter. Thank you for listening. Un Jour dans l'Histoire is also broadcast live every weekday from 1:15 pm to 2:30 pm on www.rtbf.be/lapremiere. Find all episodes of Un Jour dans l'Histoire on our Auvio.be platform: https://auvio.rtbf.be/emission/5936. Interested in history? You might also enjoy our other podcasts:
L'Histoire Continue: https://audmns.com/kSbpELw
L'heure H: https://audmns.com/YagLLiK
And its version to listen to as a family, La Mini Heure H: https://audmns.com/YagLLiK
As well as our historical series:
Chili, le Pays de mes Histoires: https://audmns.com/XHbnevh
D-Day: https://audmns.com/JWRdPYI
Joséphine Baker: https://audmns.com/wCfhoEw
La folle histoire de l'aviation: https://audmns.com/xAWjyWC
Les Jeux Olympiques, l'étonnant miroir de notre Histoire: https://audmns.com/ZEIihzZ
Marguerite, la Voix d'une Résistante: https://audmns.com/zFDehnE
Napoléon, le crépuscule de l'Aigle: https://audmns.com/DcdnIUn
Un Jour dans le Sport: https://audmns.com/xXlkHMH
Sous le sable des Pyramides: https://audmns.com/rXfVppv
Don't forget to subscribe to them so you don't miss anything. And if you enjoyed this podcast, please give us stars or leave a comment; it helps us make the show known more widely.

MLOps.community
PyTorch's Combined Effort in Large Model Optimization // Michael Gschwind // #274

MLOps.community

Play Episode Listen Later Nov 26, 2024 57:44


Dr. Michael Gschwind is a Director / Principal Engineer for PyTorch at Meta Platforms. At Meta, he led the rollout of GPU Inference for production services. // MLOps Podcast #274 with Michael Gschwind, Software Engineer, Software Executive at Meta Platforms. // Abstract Explore the role in boosting model performance, on-device AI processing, and collaborations with tech giants like ARM and Apple. Michael shares his journey from gaming console accelerators to AI, emphasizing the power of community and innovation in driving advancements. // Bio Dr. Michael Gschwind is a Director / Principal Engineer for PyTorch at Meta Platforms. At Meta, he led the rollout of GPU Inference for production services. He led the development of MultiRay and Textray, the first deployment of LLMs at a scale exceeding a trillion queries per day shortly after its rollout. He created the strategy and led the implementation of PyTorch donation optimization with Better Transformers and Accelerated Transformers, bringing Flash Attention, PT2 compilation, and ExecuTorch into the mainstream for LLMs and GenAI models. Most recently, he led the enablement of large language models on-device AI with mobile and edge devices. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://en.m.wikipedia.org/wiki/Michael_Gschwind --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Michael on LinkedIn: https://www.linkedin.com/in/michael-gschwind-3704222/?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app Timestamps: [00:00] Michael's preferred coffee [00:21] Takeaways [01:59] Please like, share, leave a review, and subscribe to our MLOps channels! [02:10] Gaming to AI Accelerators [11:34] Torch Chat goals [18:53] Pytorch benchmarking and competitiveness [21:28] Optimizing MLOps models [24:52] GPU optimization tips [29:36] Cloud vs On-device AI [38:22] Abstraction across devices [42:29] PyTorch developer experience [45:33] AI and MLOps-related antipatterns [48:33] When to optimize [53:26] Efficient edge AI models [56:57] Wrap up

Theories of Everything with Curt Jaimungal
The Unification Theory of Cognition and Biology | Manolis Kellis

Theories of Everything with Curt Jaimungal

Play Episode Listen Later Nov 8, 2024 123:50


In today's episode, MIT computational biologist Manolis Kellis dives into the hidden patterns linking DNA, evolution, and cognition, exploring a potential unifying theory that bridges biology, AI, and the essence of life. New Substack! Follow my personal writings and EARLY ACCESS episodes here: https://curtjaimungal.substack.com SPONSOR (THE ECONOMIST): As a listener of TOE you can get a special 20% off discount to The Economist and all it has to offer! Visit https://www.economist.com/toe LINKS MENTIONED: - Manolis Kellis's Lab (website): https://compbio.mit.edu/ - Manolis Kellis's profile: https://web.mit.edu/manoli/ - Curt's article on language: https://curtjaimungal.substack.com/p/language-isnt-just-low-resolution - Chiara Marletto on TOE: https://www.youtube.com/watch?v=Uey_mUy1vN0 - Roger Penrose on TOE: https://www.youtube.com/watch?v=sGm505TFMbU TIMESTAMPS: 00:00 - Introduction 02:05 - The Scope of Biological Unification 06:02 - Biology vs. Physics 09:31 - DNA as Life's Language 13:45 - The Universal Compatibility of DNA 16:55 - Evolutionary Trade-Offs and Isolation 20:17 - Layers of Abstraction in Biology 24:51 - Beyond DNA: The Role of Histones 30:30 - Protein Folding and Function 35:26 - How Cells Interpret DNA Signals 40:24 - The Creativity of Language and Miscommunication 44:55 - Teaching and Simplification 51:09 - Evolution of Cognition and Centralized Decision-Making 57:35 - Vertical vs. Horizontal Evolution 1:04:20 - Specialization and Society's Role in Evolution 1:08:50 - The Future of Biological Understanding TOE'S TOP LINKS: - Support TOE on Patreon: https://patreon.com/curtjaimungal (early access to ad-free audio episodes!) - Listen to TOE on Spotify: https://open.spotify.com/show/4gL14b92xAErofYQA7bU4e - Become a YouTube Member Here: https://www.youtube.com/channel/UCdWIQh9DGG6uhJk8eyIFl1w/join - Join TOE's Newsletter 'TOEmail' at https://www.curtjaimungal.org Other Links: - Twitter: https://twitter.com/TOEwithCurt - Discord Invite: https://discord.com/invite/kBcnfNVwqs - iTunes: https://podcasts.apple.com/ca/podcast/better-left-unsaid-with-curt-jaimungal/id1521758802 - Subreddit r/TheoriesOfEverything: https://reddit.com/r/theoriesofeverything #science #sciencepodcast #physics #biology #consciousness Learn more about your ad choices. Visit megaphone.fm/adchoices

LensWork - Photography and the Creative Process

HT2068 - On Abstractions
Abstracts are one of the most puzzling types of photography that fascinates me. Emotionally connecting with an abstract is rare, but so powerful, so unpredictable, so fickle.

MUBI Podcast
SKINAMARINK — The Internet's own haunted house

MUBI Podcast

Play Episode Listen Later Oct 31, 2024 40:50


Director Kyle Edward Ball had a nightmare as a child: "I was in my parents' house, my parents were missing, and there was a monster." Turns out, this is a nightmare a lot of people have had. After honing this craft on his YouTube channel, he finally made his film… and then it leaked online. Joined by Ball himself and Dread Central's editor-in-chief MaryBeth McAndrews, Anna explores how SKINAMARINK became the perfect haunted house movie for the internet age. Season 6, titled Haunted Homes, explores how haunted house movies have mirrored our relationship with our homes. Each episode visits a horror movie that changed the way we imagine a haunted house, from crumbling Gothic mansions to white picket fences, and what it says about the people who live in the houses and what scares them the most. Guest written and hosted by Anna Bogutskaya. Find her book on horror films and feelings, FEEDING THE MONSTER, online and in all good bookshops. You can also listen to her horror film history podcast The Final Girls and subscribe to her movie newsletter Admit One. THE SUBSTANCE is now showing in theaters across the US, UK, Latin America, Germany, Canada and the Netherlands, and streaming exclusively on MUBI. SKINAMARINK is now streaming on MUBI in Latin America. To watch some of the films we've covered on the podcast, check out the collection Featured on the MUBI Podcast. Availability of films varies depending on your country. After listening, check out our piece that explores the visual aesthetics of Skinamarink (2022), "Digital Impressionism: Cinema between Figuration and Abstraction". Read the article here. MUBI is a global streaming service, production company and film distributor dedicated to elevating great cinema. MUBI makes, acquires, curates, and champions extraordinary films, connecting them to audiences all over the world. A place to discover ambitious new films and singular voices, from iconic directors to emerging auteurs. Each carefully chosen by MUBI's curators.

Do Explain
[Half Episode] #57 - Truth and Abstractions, with David Deutsch and Jake Orthwein

Do Explain

Play Episode Listen Later Oct 28, 2024 42:16


Christofer speaks with physicist David Deutsch and Jake Orthwein about the logical concept of truth. They discuss the reality of abstractions, how representations get their meaning, the difference between biological evolution and the evolution of ideas, how emotions aren't theories, and more. Note: This is only the first half of the conversation; the full episode can be found on Patreon (patreon.com/doexplain). David Deutsch is a Visiting Professor of Physics at the Centre for Quantum Computation at Oxford University and the author of two books: 'The Fabric of Reality' and 'The Beginning of Infinity'. He works on fundamental issues in physics, particularly the quantum theory of computation and information, and constructor theory.
Website: www.daviddeutsch.org.uk
Twitter: @DavidDeutschOxf
Support the podcast at:
https://www.patreon.com/doexplain (monthly)
https://ko-fi.com/doexplain (one-time)
Find Christofer on Twitter: https://twitter.com/ReachChristofer

PodRocket - A web development podcast from LogRocket
Component composition with Dominik Dorfmeister

PodRocket - A web development podcast from LogRocket

Play Episode Listen Later Oct 24, 2024 19:03


In this episode, Dominik Dorfmeister, TanStack maintainer, joins us to discuss component composition in React. He discusses breaking components apart, managing conditional rendering, and the benefits of early returns in improving code readability and maintainability. Links https://tkdodo.eu/blog/component-composition-is-great-btw https://tkdodo.eu/blog https://github.com/TkDodo https://www.dorfmeister.cc https://x.com/TkDodo https://www.linkedin.com/in/dominik-dorfmeister-8a71051b9 We want to hear from you! How did you find us? Did you see us on Twitter? In a newsletter? Or maybe we were recommended by a friend? Let us know by sending an email to our producer, Emily, at emily.kochanekketner@logrocket.com (mailto:emily.kochanekketner@logrocket.com), or tweet at us at PodRocketPod (https://twitter.com/PodRocketpod). Follow us. Get free stickers. Follow us on Apple Podcasts, fill out this form (https://podrocket.logrocket.com/get-podrocket-stickers), and we'll send you free PodRocket stickers! What does LogRocket do? LogRocket provides AI-first session replay and analytics that surfaces the UX and technical issues impacting user experiences. Start understanding where your users are struggling by trying it for free at LogRocket.com. Try LogRocket for free today. (https://logrocket.com/signup/?pdr) Special Guest: Dominik Dorfmeister.
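As a rough illustration of the early-return point above (component and prop names are invented, not taken from the episode or the linked blog post), compare a nested-ternary component with the same component written with early returns:

```tsx
import React from "react";

type Profile = { name: string; bio: string };
type Props = { loading: boolean; error?: Error; profile?: Profile };

// Nested ternaries force the reader to hold every branch in their head at once.
function ProfileCardNested({ loading, error, profile }: Props) {
  return loading ? (
    <p>Loading…</p>
  ) : error ? (
    <p>Something went wrong</p>
  ) : profile ? (
    <article>
      <h2>{profile.name}</h2>
      <p>{profile.bio}</p>
    </article>
  ) : null;
}

// Early returns handle each state in isolation, and the happy path reads top to bottom.
function ProfileCard({ loading, error, profile }: Props) {
  if (loading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong</p>;
  if (!profile) return null;

  return (
    <article>
      <h2>{profile.name}</h2>
      <p>{profile.bio}</p>
    </article>
  );
}
```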

Paul VanderKlay's Podcast
Jordan Peterson goes from Preaching from Scientific Abstractions to the Bible from 2018 to Today

Paul VanderKlay's Podcast

Play Episode Listen Later Oct 16, 2024 72:13


@HelloFutureMe Why We Love To Watch A Hero Fall | On Writing https://youtu.be/m81PiidzzJg?si=GM21Y6e8KB_eIwX1 @lexfridman Jordan Peterson: Nietzsche, Hitler, God, Psychopathy, Suffering & Meaning | Lex Fridman Podcast #448 https://youtu.be/q8VePUwjB9Y?si=t72iabHRF_e36Oe8 @JordanBPeterson AA Harris/Weinstein/Peterson Discussion: Vancouver https://youtu.be/d-Z9EZE8kpo?si=CBXo92xj6sA-dcCN @christianbaxter_yt ep.064 Finding Meaning Through Faith and Flavor: Stephen Osborne's Journey | Yours Truly Podcast https://youtu.be/DMLFusg0oWI?si=A1uKI6aJzP2i74ZZ Paul Vander Klay clips channel https://www.youtube.com/channel/UCX0jIcadtoxELSwehCh5QTg Bridges of Meaning Discord https://discord.gg/jwwz5BDH https://www.meetup.com/sacramento-estuary/ My Substack https://paulvanderklay.substack.com/ Estuary Hub Link https://www.estuaryhub.com/ If you want to schedule a one-on-one conversation check here. https://calendly.com/paulvanderklay/one2one There is a video version of this podcast on YouTube at http://www.youtube.com/paulvanderklay To listen to this on iTunes https://itunes.apple.com/us/podcast/paul-vanderklays-podcast/id1394314333 If you need the RSS feed for your podcast player https://paulvanderklay.podbean.com/feed/ All Amazon links here are part of the Amazon Affiliate Program. Amazon pays me a small commission at no additional cost to you if you buy through one of the product links here. This is one (free to you) way to support my videos. https://paypal.me/paulvanderklay Blockchain backup on Lbry https://odysee.com/@paulvanderklay https://www.patreon.com/paulvanderklay Paul's Church Content at Living Stones Channel https://www.youtube.com/channel/UCh7bdktIALZ9Nq41oVCvW-A To support Paul's work by supporting his church give here. https://tithe.ly/give?c=2160640 https://www.livingstonescrc.com/give