Why EVERY Product Manager Needs to Understand AI!

AI is no longer a standalone product — it's becoming a standard feature. For product managers (PMs), this shift means learning to think differently about how users interact with software. The rise of large language models (LLMs) like GPT-4, Claude, and open-source alternatives is changing user expectations across every industry — not just in tech-first companies.

How to connect with AgileDad:
- [website] https://www.agiledad.com/
- [instagram] https://www.instagram.com/agile_coach/
- [facebook] https://www.facebook.com/RealAgileDad/
- [LinkedIn] https://www.linkedin.com/in/leehenson/
AI's breakout moment is here - but where is the real value accruing, and what's just hype?

Recorded live at a16z's annual LP Summit, General Partners Erik Torenberg, Martin Casado, and Sarah Wang unpack the current state of play in AI. From the myth of the GPT wrapper to the rapid rise of apps like Cursor, the conversation explores where defensibility is emerging, how platform shifts mirror (and diverge from) past tech cycles, and why the zero-sum mindset falls short in today's AI landscape.

They also dig into the innovator's dilemma facing SaaS incumbents, the rise of brand moats, the surprising role of prosumer adoption, and what it takes to pick true category leaders in a market defined by both exponential growth - and accelerated wipeouts.

Resources:
Find Martin on X: https://x.com/martin_casado
Find Sarah on X: https://x.com/sarahdingwang

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
Lauren deVane is back (for the fifth time!) to help us make sense of the AI landscape—minus the tech bro energy. She and Michelle unpack what GPT-4o actually means for creative work, how to use AI tools with taste, and why brand builders can't afford to sit on the sidelines. From custom bots to off-label use cases, this conversation is a sharp, strategic look at where branding and AI intersect.

Lauren deVane is the founder of The Bemused Studio, where she builds strategic, scroll-stopping brand identities for bold creatives. With 60+ client projects under her belt, she now teaches designers how to integrate AI into their workflows. Formerly leading creative at Ulta Beauty and Walgreens, Lauren has worked with celebs like Kim Kardashian and Tracee Ellis Ross, and designed for brands like Hyatt and Chicago Fire.

------------------------

In today's episode, we cover the following:
- Choosing the right AI tool
- Understanding AI models
- AI for brand designers
- Why using AI isn't a threat to your business
- Taste vs. tools
- Creative direction with AI
- Postproduction AI hacks
- Democratizing branding
- Ethics and optimism
- Custom instructions and training
- Off-label use cases

-----------------------

RESOURCES:
- Use the code ITSGONNABEMAY for $400 off BAIS CAMP
- Episode 117: Midjourney & AI with Lauren deVane
- Episode 133: Midjourney & AI Part 2 with Lauren deVane
- Episode 162: Leveraging AI Tools for Innovative Marketing with Lauren deVane
- Episode 192: Authenticity and AI with Lauren deVane
- Episode 215: Client Case Study: FRG Real Estate (Part 2)

-----------------------

GUEST INFO:
To learn more about Lauren and her distinct style, follow her on Instagram @TheBemusedStudio, or visit her websites, TheBemusedStudio.com and JoinBaisCamp.com.

-----------------------

Your designs deserve the front page—literally. Searchlight Digital is the women-led SEO and Google Ads agency that helps creative businesses get seen, not just admired. Use code KMA100 at searchlightdigital.ca for $100 off a 60-minute Pick My Brain call and finally get found.

-----------------------

WORK WITH MKW CREATIVE CO.
Connect on social with Michelle at:
- Kiss My Aesthetic Facebook Group
- Instagram
- TikTok

-----------------------

Did you know that the fuel of the POD and the KMA Team runs on coffee? ;) If you love the content shared in the KMA podcast, you're welcome to invite us to a cup of coffee any time - Buy Me a Coffee!

-----------------------

This episode is brought to you by Zencastr. Create high-quality video and audio content. Get your first two weeks free at https://zencastr.com/?via=kma.

-----------------------

This episode of the Kiss My Aesthetic Podcast is brought to you by Audible. Get your first month free at www.audible.com/kma.

This episode was edited by Berta Wired.
Theme music by: Eliza Rosevera and Nathan Menard
I, Stewart Alsop, had a fascinating conversation on this episode of Crazy Wisdom with Mallory McGee, the founder of Chroma, who is doing some really interesting work at the intersection of AI and crypto. We dove deep into how these two powerful technologies might reshape the internet and our interactions with it, moving beyond the hype cycles to what's truly foundational.

Check out this GPT we trained on the conversation

Timestamps
00:00 The Intersection of AI and Crypto
01:28 Bitcoin's Origins and Austrian Economics
04:35 AI's Centralization Problem and the New Gatekeepers
09:58 Agent Interactions and Decentralized Databases for Trustless Transactions
11:11 AI as a Prosthetic Mind and the Interpretability Challenge
15:12 Deterministic Blockchains vs. Non-Deterministic AI Intents
18:44 The Demise of Traditional Apps in an Agent-Driven World
35:07 Property Rights, Agent Registries, and Blockchains as Backends

Key Insights
Crypto's Enduring Fundamentals: Mallory emphasized that while crypto prices are often noise, the underlying fundamentals point to a new, long-term cycle for the Internet itself. It's about decentralizing control, a core principle stemming from Bitcoin's original blend of economics and technology.
AI's Centralization Dilemma: We discussed the concerning trend of AI development consolidating power within a few major players. This, as Mallory pointed out, ironically mirrors the very centralization crypto aims to dismantle, potentially shifting control from governments to a new set of tech monopolies.
Agents Are the Future of Interaction: Mallory envisions a future where most digital interactions aren't human-to-LLM, but agent-to-agent. These autonomous agents will require decentralized, trustless platforms like blockchains to transact, hold assets, and communicate confidentially.
Bridging Non-Deterministic AI with Deterministic Blockchains: A fascinating challenge Mallory highlighted is translating the non-deterministic "intents" of AI (e.g., an agent's goal to "get me a good return on spare cash") into the deterministic transactions required by blockchains. This translation layer is crucial for agents to operate effectively on-chain.
The Decline of Traditional Apps: Mallory made a bold claim that traditional apps and web interfaces are on their way out. As AI agents become capable of generating personalized interfaces on the fly, the need for standardized, pre-built apps will diminish, leading to a world where software is hyper-personalized and often ephemeral.
Blockchains as Agent Backbones: We explored the intriguing idea that blockchains might be inherently better suited for AI agents than for direct human use. Their deterministic nature, ability to handle assets, and potential for trustless reputation systems make them ideal backends for an agent-centric internet.
Trust and Reputation for Agents: In a world teeming with AI agents, establishing trust is paramount. Mallory suggested that on-chain mechanisms like reward and slashing systems can be used to build verifiable reputation scores for agents, helping us discern trustworthy actors from malicious ones without central oversight.
The Battle for an Open AI Future: The age-old battle between open and closed source is playing out again in the AI sphere. While centralized players currently seem to dominate, Mallory sees hope in the open-source AI movement, which could provide a crucial alternative to a future controlled by a few large entities.

Contact Information
* Twitter: @McGee_noodle
* Company: Chroma
"Blurring Reality" - Chai's Social AI Platform - sponsoredThis episode of MLST explores the groundbreaking work of Chai, a social AI platform that quietly built one of the world's largest AI companion ecosystems before ChatGPT's mainstream adoption. With over 10 million active users and just 13 engineers serving 2 trillion tokens per day, Chai discovered the massive appetite for AI companionship through serendipity while searching for product-market fit.CHAI sponsored this show *because they want to hire amazing engineers* -- CAREER OPPORTUNITIES AT CHAIChai is actively hiring in Palo Alto with competitive compensation ($300K-$800K+ equity) for roles including AI Infrastructure Engineers, Software Engineers, Applied AI Researchers, and more. Fast-track qualification available for candidates with significant product launches, open source contributions, or entrepreneurial success.https://www.chai-research.com/jobs/The conversation with founder William Beauchamp and engineers Tom Lu and Nischay Dhankhar covers Chai's innovative technical approaches including reinforcement learning from human feedback (RLHF), model blending techniques that combine smaller models to outperform larger ones, and their unique infrastructure challenges running exaflop-class compute.SPONSOR MESSAGES:***Tufa AI Labs is a brand new research lab in Zurich started by Benjamin Crouzier focussed on o-series style reasoning and AGI. They are hiring a Chief Engineer and ML engineers in Zurich and SF. 
Goto https://tufalabs.ai/***Key themes explored include:- The ethics of AI engagement optimization and attention hacking- Content moderation at scale with a lean engineering team- The shift from AI as utility tool to AI as social companion- How users form deep emotional bonds with artificial intelligence- The broader implications of AI becoming a social mediumWe also examine OpenAI's recent pivot toward companion AI with April's new GPT-4o, suggesting a fundamental shift in how we interact with artificial intelligence - from utility-focused tools to companion-like experiences that blur the lines between human and artificial intimacy.The episode also covers Chai's unconventional approach to hiring only top-tier engineers, their bootstrap funding strategy focused on user revenue over VC funding, and their rapid experimentation culture where one in five experiments succeed.TOC:00:00:00 - Introduction: Steve Jobs' AI Vision & Chai's Scale00:04:02 - Chapter 1: Simulators - The Birth of Social AI00:13:34 - Chapter 2: Engineering at Chai - RLHF & Model Blending00:21:49 - Chapter 3: Social Impact of GenAI - Ethics & Safety00:33:55 - Chapter 4: The Lean Machine - 13 Engineers, Millions of Users00:42:38 - Chapter 5: GPT-4o Becoming a Companion - OpenAI's Pivot00:50:10 - Chapter 6: What Comes Next - The Future of AI Intimacy TRANSCRIPT: https://www.dropbox.com/scl/fi/yz2ewkzmwz9rbbturfbap/CHAI.pdf?rlkey=uuyk2nfhjzezucwdgntg5ubqb&dl=0
Technology could be put at the service of the courts, as has been done in India, where a suspect's brain was scanned during a trial. Mado Martínez tells the story; together with Ana Vázquez Hoys and Juanjo Sánchez-Oro, she makes up today's panel. We also discuss a new theory about the deaths at the Dyatlov Pass; a secret Neolithic monument; a striking find in the Adriatic Sea; the use of ChatGPT as an oracle; the polybolos, a weapon created by the Greeks; a haunted castle complete with its white lady; submerged enigmas in the Balearic Islands; the exact moment when OceanGate's Titan imploded; and what became of a Korean child prodigy.
In this episode of Transformative Principal, Jethro Jones interviews Linda Berberich, a behavioral scientist, about her extensive experience in machine learning before it became a buzzword. They discuss the practical applications of artificial intelligence in education, the pros and cons of using technology like GPT models in learning environments, and the importance of integrating technology thoughtfully based on the specific needs and culture of a school.

- AI is such a buzzword, but it's really just machine learning
- Built many solutions for virtual learning
- What technology is really good at is computing
- Cycle motor learning - good form
- Too much memorizing
- Far transfer vs. near transfer (Ruth Clark) and organic vs. mechanistic skills
- Standardizable tasks are mechanistic. The way you perform is how you train.
- Complex and simple tasks
- Skewed responses
- How to know when to use a computer (AI, machine learning) for learning
- Attempts to make the machine more empathetic
- Jethro's example of writing using two different GPTs
- Narrow the field and expand the field
- Grades have a massive impact on people's lives, so we can't ditch that.
- Ideas around what school looks like. Use the time for kids to be together pro-socially.
- Generative Instruction
- Teachers know this stuff!
- Using technology to get kids interested
- Don't be afraid of technology or of letting kids lead.

About Linda Berberich, PhD
Behavioral scientist specializing in innovative, impactful, and immersive learning and intelligent, intuitive technology product design. Extensive background in data analysis, technical training, behavior analysis, learning science, neuroscience, behavior-based performance improvement, and sport psychology/performance enhancement. Passionate lifelong learner who is constantly up-skilling, most recently in the areas of solopreneurship, technology-based networking, writing business cases for corporate-wide initiatives, design thinking, agile/scrum methodology, data science, deep learning, machine learning, and other areas of artificial intelligence, particularly as they intersect with human learning and performance. Follow her newsletter at Linda Be Learning.

We're thrilled to be sponsored by IXL. IXL's comprehensive teaching and learning platform for math, language arts, science, and social studies is accelerating achievement in 95 of the top 100 U.S. school districts. Loved by teachers and backed by independent research from Johns Hopkins University, IXL can help you do the following and more:
- Simplify and streamline technology
- Save teachers' time
- Reliably meet Tier 1 standards
- Improve student performance on state assessments
Today's episode will be about the phrase "don't take it personally" and what it means to neurodivergent people like me. I'll be using ChatGPT and my own thoughts to contribute to this.

Links for articles:
https://hbr.org/2022/10/stop-asking-neurodivergent-people-to-change-the-way-they-communicate?utm_source=chatgpt.com
https://blog.auticon.com/effective-communication-in-neurodiverse-teams/?utm_source=chatgpt.com
https://www.verywellmind.com/the-neurodivergent-guide-to-social-skills-7500818?utm_source=chatgpt.com
https://differentbrains.org/taking-things-personally-adhd-power-tools-w-ali-idriss-brooke-schnittman/?utm_source=chatgpt.com

Link for Patreon: patreon.com/LivingWithAnInvisibleLearningChallenge
Link for BetterHelp sponsorship: https://bit.ly/3A15Ac1

Links for new podcasts:
Shero: Be Your Own Hero Trailer: https://open.spotify.com/show/1O7Mb26wUJIsGzZPHuFlhX?si=c3b2fabc1f334284
Chats, Barks, & Growls: Convos With My Pet Trailer: https://open.spotify.com/show/74BJO1eOWkpFGN5fT7qJHh?si=4440df59d52c4522
Think Out: Free Your Imagination: https://open.spotify.com/show/4ah3I2lPcvqPCnBSpROPct?si=3beb436e59f44730
I, Stewart Alsop, welcomed Ben Roper, CEO and founder of Play Culture, to this episode of Crazy Wisdom for a fascinating discussion. We kicked things off by diving into Ben's reservations about AI, particularly its impact on creative authenticity, before exploring his innovative project, Play Culture, which aims to bring tactical outdoor games to adults. Ben also shared his journey of teaching himself to code and his philosophy on building experiences centered on human connection rather than pure profit.

Check out this GPT we trained on the conversation

Timestamps
00:55 Ben Roper on AI's impact on creative authenticity and the dilution of the author's experience.
03:05 The discussion on AI leading to a "simulation of experience" versus genuine, embodied experiences.
08:40 Stewart Alsop explores the nuances of authenticity, honesty, and trust in media and personal interactions.
17:53 Ben discusses how trust is invaluable and often broken by corporate attempts to feign it.
20:22 Ben begins to explain the Play Culture project, discussing the community's confusion about its non-monetized approach, leading into his philosophy of "designing for people, not money."
37:08 Ben elaborates on the Play Culture experience: creating tactical outdoor games designed specifically for adults.
45:46 A comparison of Play Culture's approach with games like Pokémon GO, emphasizing "gentle technology."
58:48 Ben shares his thoughts on the future of augmented reality and designing humanistic experiences.
1:02:15 Ben describes "Pirate Gold," a real-world role-playing pirate simulator, as an example of Play Culture's innovative games.
1:06:30 How to find Play Culture and get involved in their events worldwide.

Key Insights
AI and Creative Authenticity: Ben, coming from a filmmaking background, views generative AI as a collaborator without a mind, which disassociates work from the author's unique experience. He believes art's value lies in being a window into an individual's life, a quality diluted by AI's averaged output.
Simulation vs. Real Experience: We discussed how AI and even some modern technologies offer simulations of experiences (like VR travel or social media connections) that lack the depth and richness of real-world engagement. These simulations can be easier to access but may leave individuals unfulfilled and unaware of what they're missing.
The Quest for Honesty Over Authenticity: I posited that while people claim to want authenticity, they might actually desire honesty more. Raw, unfiltered authenticity can be confronting, whereas honesty within a framework of trust allows for genuine connection without necessarily exposing every raw emotion.
Trust as Unpurchasable Value: Ben emphasized that trust is one of the few things that cannot be bought; it must be earned and is easily broken. This makes genuine trust incredibly valuable, especially in a world where corporate entities often feign trustworthiness for transactional purposes.
Designing for People, Not Money: Ben shared his philosophy behind Play Culture, which is to "design for people, not money." This means prioritizing genuine human experience, joy, and connection over optimizing for profit, believing that true value, including financial sustainability, can arise as a byproduct of creating something meaningful.
The Need for Adult Play: Play Culture aims to fill a void by creating tactical outdoor games specifically designed for adult minds and social dynamics. This goes beyond childlike play or existing adult games like video games and sports, focusing on socially driven gameplay, strategy, and unique adult experiences.
Gentle Technology in Gaming: Contrasting with AR-heavy games like Pokémon GO, Play Culture advocates for "gentle technology." The tech (like a mobile app) supports gameplay by providing information or connecting players, but the core interaction happens through players' senses and real-world engagement, not primarily through a screen.
Real-World Game Streaming as the Future: Ben's vision for Play Culture includes moving towards real-world game streaming, akin to video game streaming on Twitch, but featuring live-action tactical games played in real cities. This aims to create a new genre of entertainment showcasing genuine human interaction and strategy.

Contact Information
* Ben Roper's Instagram
* Website: playculture.com
This week, Sam and Dave are joined by a very special guest: Synthetic Fidji. Yes, an AI version of Fidji Simo, created after the real one politely declined (hey, the show must go on, okay?).

They get into:
- the bull and bear case for vibe coding
- Gemini vs. GPT (is Sam having second thoughts?)
- why GitHub might be quietly, ruthlessly winning the AI dev race
- the identity crisis no one's cracked with LLMs

Plus: OpenAI just spent $6 billion on hardware to team up with Jony Ive, the Methaphone is suddenly everywhere, and more.

We're also on ↓
X: https://twitter.com/moreorlesspod
Instagram: https://instagram.com/moreorless
Spotify: https://podcasters.spotify.com/pod/show/moreorlesspod

Connect with us here:
Sam Lessin: https://x.com/lessin
Dave Morin: https://x.com/davemorin
Jessica Lessin: https://x.com/Jessicalessin
Brit Morin: https://x.com/brit

00:00 Introduction
01:24 OpenAI CEO of Applications?
07:55 Tools that ACTUALLY work
18:47 Is everyone lying about vibe coding?
43:10 The big identity problem with LLMs (and Zapier)
52:26 Google I/O: "Who cares?"
54:13 Is OpenAI big tech? They buy Jony Ive's startup
56:17 The "hyper-viral" Methaphone. How?
59:40 Outro
In this legendary episode of the Antonio T. Smith Jr. Podcast, you're not just listening to wealth-building advice. You're being handed the keys to the real system — the one built behind the illusion, the one only 0.01% ever understand.

Antonio reveals how the modern dollar is a trap — and how to escape it, not through labor, but through leveraged sovereign design.

This isn't motivation. This is economic war. This isn't theory. This is how billionaires build nations in silence. This isn't inspiration. It's the blueprint.

You Can Download — as a Special Never-Done-Before Gift:
In this episode, we sat down with full-stack developer and AI innovator Matthew Henage, creator of WAOS.ai (Web App Operating System) and the incredible storytelling platform SpeakMagic.ai. This conversation took us deep into the world of agentic AI, low-code app building, and the future of intelligent workflows.

We kicked things off with Matthew sharing how he's been riding the AI wave since GPT-3.5 blew his mind. His platform WAOS is all about making it easy for developers to build powerful web apps with embedded AI workflows — think of it like Zapier meets ChatGPT, but with agents working together instead of API chains.

One of the most eye-opening parts of our chat was learning about agent swarms — essentially teams of specialized AI agents that collaborate to perform complex tasks. Instead of relying on one giant AI brain to do everything, you create smaller, purpose-built AIs that handle specific steps in a workflow. It's scalable, smarter, and kind of like assembling your dream dev team… but all made of code.

Matthew's SpeakMagic project is a jaw-dropper. It uses a swarm of over 40 agents to turn a single story idea into a fully animated, two-minute video — complete with scenes, scripts, character animations, music, and more. It's AI storytelling on steroids.

We also talked a lot about:
- Best practices for building reliable AI workflows
- The importance of keeping context windows small (under 4,000 tokens works best!)
- How prompt engineering is becoming the new programming
- Using AI for vibe coding (yes, that's a thing) and rapid prototyping
- The tradeoffs between using traditional programming vs. letting AI handle logic
- Ethical considerations and how to handle memory and privacy in long-running user interactions

Check out Matthew's work at WAOS.ai and speakmagic.ai — and as always, stay curious and keep building!

Become a supporter of this podcast: https://www.spreaker.com/podcast/javascript-jabber--6102064/support.
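The agent-swarm idea discussed above — many small, purpose-built agents each owning one step of a workflow, instead of one giant AI brain — can be sketched in a few lines of TypeScript. This is a toy illustration, not WAOS.ai's or SpeakMagic's actual API; the agent names and the simple sequential orchestrator are assumptions for the example, and a real swarm would call an LLM inside each agent rather than build strings.

```typescript
// A minimal agent-swarm sketch: each agent is a small, single-purpose
// step, and the orchestrator threads a shared context through them.
type Context = Record<string, string>;
type Agent = { name: string; run: (ctx: Context) => Context };

// Purpose-built agents, each handling one step (names are illustrative).
const outlineAgent: Agent = {
  name: "outline",
  run: (ctx) => ({ ...ctx, outline: `Outline for: ${ctx.idea}` }),
};
const scriptAgent: Agent = {
  name: "script",
  run: (ctx) => ({ ...ctx, script: `Script based on ${ctx.outline}` }),
};
const sceneAgent: Agent = {
  name: "scenes",
  run: (ctx) => ({ ...ctx, scenes: `Scenes cut from ${ctx.script}` }),
};

// The "swarm" here is just an ordered pipeline; a real system might run
// agents concurrently or route between them dynamically.
function runSwarm(agents: Agent[], ctx: Context): Context {
  return agents.reduce((acc, agent) => agent.run(acc), ctx);
}

const result = runSwarm([outlineAgent, scriptAgent, sceneAgent], {
  idea: "a pirate treasure hunt",
});
console.log(result.scenes);
```

The payoff of this shape is that each agent stays small (keeping its prompt and context window tight, per the "under 4,000 tokens" advice above) and can be tested, swapped, or scaled independently.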
Send us a text

Join hosts Alex Sarlin and Ben Kornell as they explore the latest developments in education technology, from AI in classrooms to workforce shifts and EdTech innovation across the globe.

✨ Episode Highlights:
[00:03:16] Ezra Klein podcast brings AI and education to mainstream conversation
[00:07:20] Alex and Ben compare and critique GPT-4, Claude, Gemini, and other AI tools
[00:09:28] Utah emerges as a leading hub for EdTech startups and innovation
[00:12:21] New AI bundles help educators explore tools like Superhuman and Perplexity
[00:13:19] Surge in media coverage on cheating, lawsuits, and educator use of AI
[00:16:17] Lawsuit filed against professor for using AI-generated content in class
[00:18:00] Concerns grow about students using AI tools to bypass cognitive learning
[00:23:10] Direct-to-student AI sparks debate about academic integrity and design
[00:25:20] Google plans to roll out Gemini to students under 13
[00:29:41] AI enables hands-on science learning like virtual frog dissections
[00:33:43] AI compared to electricity as foundational infrastructure for the future
[00:36:09] Rising youth unemployment signals early impact of AI-driven disruption
[00:38:57] Major firms lay off workers while shifting strategy toward AI adoption
[00:40:34] EdTech must define and prepare students for new AI-native job roles

Plus, special guest:
[00:41:22] Sam Chaudhary, Co-founder & CEO of ClassDojo on tutoring, gamified learning, and community building
Today on Equity, Rebecca Bellan caught up with Ali Kashani, co-founder and CEO of Serve Robotics, to unpack how Serve is navigating public markets, scaling real-world robotics, and building what it hopes is the future of last-mile delivery.

Listen to the full episode to hear more about:
- How Serve went from a lidar-focused startup to a publicly traded company via reverse merger in 2023
- What it takes to scale a delivery fleet across cities like L.A., Miami, and Dallas
- Why Kashani says Serve's sidewalk bots collect four times more visual data per day than GPT-4's vision model
- How ground robots and drones might work together to finally crack last-mile logistics

Equity will be back Friday with our weekly news round-up, and special Google I/O coverage from Max. Don't miss it!

Equity is TechCrunch's flagship podcast, produced by Theresa Loconsolo, and posts every Wednesday and Friday. Subscribe to us on Apple Podcasts, Overcast, Spotify and all the casts. You also can follow Equity on X and Threads, at @EquityPod. For the full episode transcript, for those who prefer reading over listening, check out our full archive of episodes here.

Credits: Equity is produced by Theresa Loconsolo with editing by Kell. We'd also like to thank TechCrunch's audience development team. Thank you so much for listening, and we'll talk to you next time.

Learn more about your ad choices. Visit megaphone.fm/adchoices
For decades we've been promised that technological progress will lead to us working less. But technology races ahead, and we're stuck in place with a roughly 40-hour workweek. So why is that? Could it be that we are actually the ones to blame? And this time, thanks to AI, is it really about to change?

Guests on the episode: the hosts of the Google Notebook LM podcast. Research assistance: ChatGPT.

Hosts: Tslil Avraham and Shaul Amsterdamski; Editor: Yonatan Kitain; Production: Lihi Tzadok; Sound editor: Rachel Refaeli; Production assistance: Yuval Wilf; Image: shutterstock

Link to listen to the AI-generated Google podcast

See omnystudio.com/listener for privacy information.
What if you had a daily ritual that not only set the tone for your week but also amplified your creative energy and business clarity? That's exactly why I've built the Artpreneur Daily Altar, my custom GPT designed to help you reflect, plan, and activate your creative flow every single day. I'm sharing how this tool came to life and how it's transforming the way I approach my art business. My friend, Jennifer Urezzio, has been using it too, and she's here to share her experience. Together, we explore how the Artpreneur Daily Altar can become a sacred space for inspiration, soul-aligned action, and strategic clarity.

In this episode, you'll discover:
- How the Artpreneur Daily Altar can serve as your daily reflection and activation tool
- How rituals and intuitive prompts can amplify your productivity and mindset
- How to create a sacred daily practice that grounds you while propelling your art journey forward

For full show notes, go to schulmanart.com/356
OpenAI made a coding splash. Anthropic is in legal trouble for... using its own Claude tool? Google went full multimedia. And that's only the half of it.

Don't spend hours a day trying to keep up with AI. That's what we do. Join us (most) Mondays as we bring you the AI News That Matters.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Topics Covered in This Episode:
- Salesforce Acquires AI Startup Convergence
- Google AI Studio's Generative Media Platform
- Major AI Conferences: Microsoft, Google, Anthropic
- Anthropic's Legal Citation Error with AI
- DeepMind's Alpha Evolve Optimization Breakthrough
- UAE Stargate: US and UAE AI Collaboration
- OpenAI's GPT-4.1 Model Release
- OpenAI's Codex Platform for Developers

Timestamps:
00:00 Busy week in AI
03:39 Salesforce Expands AI Ambitions with Acquisition
10:31 "Google AI Studio Integrates New Tools"
13:57 Microsoft Build Focuses on AI Innovations
16:27 AI Model and Tech Updates
22:54 "Alpha Evolve: Breakthrough AI Model"
26:05 Google Unveils AI Tools for Developers
28:58 UAE's Tech Expansion & Global Collaboration
30:57 OpenAI Releases GPT-4.1 Models
34:06 OpenAI Codex Rollout Update
37:11 "Codex: Geared for Enterprise Developers"
41:41 Generative AI Updates Coming

Keywords: OpenAI Codex, Codex platform, Salesforce, Convergence AI, autonomous AI agents, large language models, Google AI Studio, generative media, Imagen 3 model, AI video generator, Anthropic, legal citation error, AI conference week, Microsoft Build, Claude Code, Google I/O, agentic AI, Alpha Evolve, Google DeepMind, AI-driven arts, Gemini AI, UAE Stargate, US tech giants, NVIDIA, Blackwell GB300 chips, Windsurf, AI coding assistant, codex-1 model, coding tasks, Google Gemini, semantic search, Copilot enhancements, XR headset, Project Astra, MCP protocol, ChatGPT updates, API access, AI safety evaluations, AI software agents, AI Studio sandbox, GPT o-series, AI infrastructure, data center computing, tech collaboration, international AI expansion.

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Ready for ROI on GenAI? Go to youreverydayai.com/partner
I, Stewart Alsop, am thrilled to welcome Leon Coe back to the Crazy Wisdom Podcast for a second deep dive. This time, we journeyed from the Renaissance and McLuhan's media theories straight into the heart of theology, church history, and the very essence of faith, exploring how ancient wisdom and modern challenges intertwine. It was a fascinating exploration, touching on everything from apostolic succession to the nature of sin and the search for meaning in a secular age. Check out this GPT we trained on the conversation.
Timestamps
00:43 I kick things off by asking Leon about the Renaissance, Martin Luther, and the profound impact of the printing press on religion.
01:02 Leon Coe illuminates Marshall McLuhan's insights on how technologies, like print, shape our consciousness and societal structures.
03:25 Leon takes us back to early Church history, discussing the Church's life and sacraments, including the Didache, well before the Bible's formal canonization.
06:00 Leon explains the scriptural basis for Peter as the "rock" of the Church, the foundation for the office of the papacy.
07:06 We delve into the concept of apostolic succession, where Leon describes the unbroken line of ordination from the apostles.
11:57 Leon clarifies Jesus's relationship to the Law, referencing Matthew 5:17 where Jesus states he came to fulfill, not abolish, the Law.
12:20 I reflect on the intricate dance of religion, culture, and technology, and the sometimes bewildering, "cosmic joke" nature of our current reality.
16:46 I share my thoughts on secularism potentially acting as a new, unacknowledged religion, and how it often leaves a void in our search for purpose.
19:28 Leon introduces what he calls the "most terrifying verse in the Bible," Matthew 7:21, emphasizing the importance of doing the Father's will.
24:21 Leon discusses the Eucharist as the new Passover, drawing connections to Jewish tradition and Jesus's institution of this central sacrament.
Key Insights
Technology's Shaping Power: McLuhan's Enduring Relevance. Leon highlighted how Marshall McLuhan's theories are crucial for understanding history. The shift from an oral, communal society to an individualistic one via the printing press, for instance, directly fueled the Protestant Reformation by enabling personal interpretation of scripture, moving away from a unified Church authority.
The Early Church's Foundation: Life Before the Canon. Leon emphasized that for roughly 300 years before the Bible was officially canonized, the Church was actively functioning. It had established practices, sacraments (like baptism and the Eucharist), and teachings, as evidenced by texts like the Didache, demonstrating a lived faith independent of a finalized scriptural canon.
Peter and Apostolic Succession: The Unbroken Chain. A core point from Leon was Jesus designating Peter as the "rock" upon which He would build His Church. This, combined with the principle of apostolic succession—the laying on of hands in an unbroken line from the apostles—forms the Catholic and Orthodox claim to authoritative teaching and sacramental ministry.
Fulfillment, Not Abolition: Jesus and the Law. Leon clarified that Jesus, as stated in Matthew 5:17, came not to abolish the Old Testament Law but to fulfill it. This means the Mosaic Law finds its ultimate meaning and completion in Christ, who institutes a New Covenant.
Secularism's Spiritual Vacuum: A Modern Religion? I, Stewart, posited that modern secularism, while valuing empiricism, often acts like a new religion that explicitly rejects the spiritual and miraculous. Leon agreed this can lead to a sense of emptiness, as humans inherently long for purpose and connection to a creator, a void secularism struggles to fill.
The Criticality of God's Will: Beyond Lip Service. Leon pointed to Matthew 7:21 ("Not everyone who says to me, 'Lord, Lord,' will enter the kingdom of heaven...") as a stark reminder. True faith requires more than verbal profession; it demands actively doing the will of the Father, implying that actions and heartfelt commitment are essential for salvation.
The Eucharist as Central: The New Passover and Real Presence. Leon passionately explained the Eucharist as the new Passover, instituted by Christ. Referencing John 6, he stressed the Catholic belief in the Real Presence—that the bread and wine become the literal body and blood of Christ—which is essential for spiritual life and communion with God.
Reconciliation and Purity: Restoring Communion. Leon explained the Sacrament of Reconciliation (Confession) as a vital means, given through the Church's apostolic ministry, to restore communion with God after sin. He also touched upon Purgatory as a state of purification for overcoming attachments to sin, ensuring one is perfectly ordered to God before entering Heaven.
Contact Information
* Leon Coe: @LeonJCoe on Twitter (X)
Welcome to episode 303 of The Cloud Pod – where the forecast is always cloudy! Justin, Ryan and exhausted dad Matt are here (and mostly awake), ready to bring you the latest in cloud news! This week we've got more news from Nova, updates to Claude, earnings news, and a mini funeral for Skype – plus a new helping of Cloud Journey!
Titles we almost went with this week:
Claude researches so Ryan can nap
The best AI for Nova Corps, Amazon Nova Premiere JB
If you can't beat them, change the licensing terms and make them fork, and then reverse course… and profit
Q has invaded your IDE!!
Skype bites the dust
A big thanks to this week's sponsor: We're sponsorless! Want to get your brand, company, or service in front of a very enthusiastic group of cloud news seekers? You've come to the right place! Send us an email or hit us up on our Slack channel for more info.
Follow Up
02:50 Sycophancy in GPT-4o: What happened and what we're doing about it
OpenAI wrote up a blog post about their sycophantic GPT-4o update last week, and they wanted to set the record straight. They made adjustments aimed at improving the model's default personality to make it feel more intuitive and effective across a variety of tasks. When shaping model behavior, they start with the baseline principles and instructions outlined in their Model Spec. They also teach their models how to apply these principles by incorporating user signals like thumbs-up and thumbs-down feedback on responses. In this update, though, they focused too much on short-term feedback and did not fully account for how users' interactions with ChatGPT evolve over time. This skewed the results towards responses that were overly supportive – but disingenuous. Beyond rolling back the changes, they are taking steps to realign the model's behavior, including refining core training techniques and system prompts to explicitly steer the model away from sycophancy.
They also plan to build more guardrails around the honesty and transparency principles in the Model Spec. Additionally, they plan to expand ways for users to test and give direct feedback before deployments. Lastly, OpenAI continues to expand its evaluations, building on the Model Spec and its ongoing research.
04:43 Deep Research on Microsoft Hotpatching: Yes, they're grabbing money and screwing you. Basically.
07:06 Justin – "I'm not going to give them any credit on this one. I appreciate that they created hotpatching, but I don't like that they want to charge me for it."
General News
It's Earnings time – cue the sound effects!
08:03 Alphabet's Q1 earnings shattered analyst expectations, sending the stock
A new week, a new episode of Zavtracast – a popular Russian-language podcast about games, media, technology, the internet, and everything in between, hosted by three friends, Dima, Timur, and Maxim, for ten years now. Subscribe, leave a like, and don't forget to hit the notification bell here – https://youtube.com/zavtracast If you want to support us from Russia, subscribe to us on Boosty – https://boosty.to/zavtracast If you're abroad, you can also subscribe to us on Patreon – https://patreon.com/zavtracast Subscribe to the hosts' channels: Радио Тимур – https://t.me/radiotimur Фотодушнила – https://t.me/dushovato Сказки Дядюшки Зомбака – https://t.me/zombaktales
Join us for this episode with Leigh Engel, Senior Product Manager for Enterprise AI at NVIDIA. Leigh's journey spans media, consulting, and big tech firms like Apple and WPP before landing in product leadership at one of the world's top AI companies. We explore her career pivots, how she balances innovation with user-centered design, and her take on evaluating AI models like GPT-4 and Llama for enterprise use. This is a great episode if you're curious about breaking into AI product management or understanding the future of generative AI in business. If you enjoy the episode, share it with a friend and leave us a comment!
Please note: The views expressed in this episode are Leigh's alone and do not reflect those of her employer, NVIDIA.
I, Stewart Alsop, welcomed Woody Wiegmann to this episode of Crazy Wisdom, where we explored the fascinating and sometimes unsettling landscape of Artificial Intelligence. Woody, who is deeply involved in teaching AI, shared his insights on everything from the US-China AI race to the radical transformations AI is bringing to education and society at large. Check out this GPT we trained on the conversation.
Timestamps
01:17 The AI "Cold War": Discussing the intense AI development race between China and the US.
03:04 Opaque Models & Education's Resistance: The challenge of opaque AI and schools lagging in adoption.
05:22 AI Blocked in Schools: The paradox of teaching AI while institutions restrict access.
08:08 Crossing the AI Rubicon: How AI users are diverging from non-users into different realities.
09:00 Budgetary Constraints in AI Education: The struggle for resources like premium AI access for students.
12:45 Navigating AI Access for Students: Woody's ingenious workarounds for the premium AI divide.
19:15 Igniting Curiosity with AI: Students creating impressive projects, like catapult websites.
27:23 Exploring Grok and AI Interaction: Debating IP concerns and engaging with AI ("Morpheus").
46:19 AI's Societal Impact: AI girlfriends, masculinity, and the erosion of traditional skills.
Key Insights
The AI Arms Race: Woody highlights a "cold war of nerdiness" where China is rapidly developing AI models comparable to GPT-4 at a fraction of the cost. This competition raises questions about data transparency from both sides and the strategic implications of superintelligence.
Education's AI Resistance: I, Stewart Alsop, and Woody discuss the puzzling resistance to AI within educational institutions, including outright blocking of AI tools. This creates a paradox where courses on AI are taught in environments that restrict its use, hindering practical learning for students.
Diverging Realities: We explore how individuals who have crossed the "Rubicon" of AI adoption are now living in a vastly different world than those who haven't. This divergence is akin to past technological shifts but is happening at an accelerated pace, impacting how people learn, work, and perceive reality.
The Fading Relevance of Traditional Coding: Woody argues that focusing on teaching traditional programming languages like Python is becoming outdated in the age of advanced AI. AI can handle much of the detailed coding, shifting the necessary skills towards understanding AI systems, effective prompting, and higher-level architecture.
AI as the Ultimate Tutor: The advent of AI offers the potential for personalized, one-on-one tutoring for everyone, a far more effective learning method than traditional classroom lectures. However, this potential is hampered by institutional inertia and a lack of resources for tools like premium AI subscriptions for students.
Curiosity as the AI Catalyst: Woody shares anecdotes of students, even those initially disengaged, whose eyes light up when using AI for creative projects, like designing websites on niche topics such as catapults. This demonstrates AI's power to ignite curiosity and intrinsic motivation when paired with focused goals and the ability to build.
AI's Impact on Society and Skills: We touch upon the broader societal implications, including the rise of AI girlfriends addressing male loneliness and providing acceptance. Simultaneously, there's concern over the potential atrophy of critical skills like writing and debate if individuals overly rely on AI for summarization and opinion generation without deep engagement.
Contact Information
* Twitter/X: @RulebyPowerlaw
* Listeners can search for Woody Wiegmann's podcast "Courage over Convention"
* LinkedIn: www.linkedin.com/in/dataovernarratives/
In this episode, Laurent Kretz welcomes Sylvain Peyronnet, co-founder of Babbar.tech and AI researcher, and Grégory Pairin, e-commerce director at Ocarat and a specialist in organic search and digital. Together, they explore the major evolution of SEO in the era of language models and generative search engines. They go behind the scenes of Google's algorithms, the data leaks that reveal their hidden truths, and the growing impact of behavioral data on rankings. Sylvain explains how models like BERT and GPT are revolutionizing query understanding and answer synthesis. Grégory shares the hands-on experience of an e-commerce operator for whom SEO accounts for nearly 40% of revenue. They also analyze the new challenges: how classic SEO complements optimization for generative search engines, the strategies to adopt in the face of this shift, and the pitfalls to avoid. An episode for anyone who wants to anticipate and master tomorrow's digital visibility.
Key timecodes:
00:00:00 Introduction and guest presentations
00:05:20 History and evolution of SEO
00:22:50 Google leaks & behavioral data in rankings
00:34:00 The arrival of language models and their impact on SEO
00:44:00 What is Generative Search Engine Optimization (GSO)?
00:54:00 Practical strategies for appearing in generative engines' answers
01:05:00 Future challenges and advice for e-commerce merchants
And a few last bits of news to share: Follow Le Panier on Instagram at lepanier.podcast! Sign up for the newsletter at lepanier.io to succeed in e-commerce! Listen to episodes on Apple Podcasts, Spotify, or Podcast Addict. Le Panier is a podcast produced by CosaVostra, part of the Orso Media label. Distributed by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
Explore the cutting-edge intersection of AI, cryptocurrency, and startup culture in our latest episode. We spoke with Shaw Walters, Founder of Eliza Labs, about the exciting possibilities and challenges of AI agents in digital worlds, the crypto ecosystem, and token launches. Discover how these technologies are shaping a fairer, more inclusive future for investing and why real products and vision matter more than hype.
Chapters:
00:00 Defiant introduction
00:07 Episode summary
00:54 Introduction to Shaw Walters and Eliza Labs
02:27 AI agents in digital worlds
10:56 Eliza as a framework for providing what GPT doesn't
11:25 Writing actions for AI agents
14:10 Use cases for AI agents
14:55 Moving from one-on-one to group settings
16:33 Connecting Eliza to the crypto ecosystem
19:20 Pros and cons of AI agents in financial services
21:25 AI agents as an interface
24:53 auto.fun and launching AI agents
30:08 Fairer than fair token launch
31:58 A new age of investing in startups
32:32 The government does not trust you with your own money
34:33 Crypto workarounds for investing in startups
35:43 The Trump administration and our path to regulation
36:57 Changing laws around investing in startups
38:32 Decentralized AI and opening investment to the public
39:30 What will bring AI agents back to the forefront?
44:25 How to structure better launchpads for startups
46:00 Crypto markets in their current state are casinos
49:24 Jeffy Yu and Zerebro
53:36 Why the crypto culture needs to change
55:31 Real products and real vision
58:00 Closing remarks
With over 20 years in traditional media and a current obsession with generative tools, Erich is helping everyone from businesses to kids understand and embrace AI—without the tech jargon. Learn how his practical use of custom GPTs and video workflows makes AI accessible, fun, and transformational—from bedtime jokes with ChatGPT to streamlined nonprofit operations. Plus, you'll discover how to future-proof your kids (and your career) in a rapidly evolving tech landscape.
In this episode, Kane Simms is joined by Katherine Munro, Conversational AI Engineer at Swisscom, for a deep dive into what might sound like an odd pairing: using LLMs to classify customer intents. Large Language Models (LLMs) are powerful, multi-purpose tools. But would you trust one to handle the precision of a classification task? It's an unlikely fit for an LLM. Classifiers typically need to be fast, accurate, and interpretable. LLMs are slow, random black boxes. Classifiers need to output a single label. LLMs never stop talking. And yet, there are good reasons to use LLMs for such tasks, and emerging architectures and techniques. Many real-world use cases need a classifier, and many data and product development teams will soon find themselves wondering: could GPT handle that? If that sounds like you, then check out this extended episode to explore how Switzerland's largest telecommunications provider tackles this issue while building a next-generation AI assistant.
This episode is brought to you by NLX. NLX is a conversational AI platform enabling brands to build and manage chat, voice and multimodal applications. NLX's patented Voice+ technology synchronizes voice with digital channels, making it possible to automate complex use cases typically handled by a human agent. When a customer calls, the voice AI guides them to resolve their inquiry through self-service using the brand's digital asset, resulting in automation and CSAT scores well above industry average. Just ask United Airlines.
Shownotes:
"The Handbook of Data Science and AI: Generate Value from Data with Machine Learning and Data Analytics" – Available on Amazon: https://a.co/d/3wNN9cv
Katherine's website: http://katherine-munro.com/
Subscribe to VUX World: https://vuxworld.typeform.com/to/Qlo5aaeW
Subscribe to The AI Ultimatum Substack: https://open.substack.com/pub/kanesimms
Get in touch with Kane on LinkedIn: https://www.linkedin.com/in/kanesimms/
Hosted on Acast. See acast.com/privacy for more information.
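The "single label vs. never stops talking" tension above can be sketched in a few lines. This is a hypothetical illustration, not Swisscom's implementation: the label set, the prompt wording, and the injected `llm` callable are all assumptions.

```python
# Hypothetical sketch: forcing a free-text LLM reply down to one intent label.
# The model call is injected as a plain callable so any provider can be used.

ALLOWED_INTENTS = ["billing", "cancel_subscription", "technical_support", "other"]

def build_prompt(utterance: str) -> str:
    labels = ", ".join(ALLOWED_INTENTS)
    return (
        f"Classify the customer message into exactly one of: {labels}.\n"
        "Reply with the label only.\n"
        f"Message: {utterance}"
    )

def classify_intent(utterance: str, llm) -> str:
    raw = llm(build_prompt(utterance)).strip().lower()
    # The LLM may keep talking anyway: keep only the first token-like chunk
    # and fall back to "other" for anything not in the allowed set.
    candidate = raw.split()[0].strip(".,:;\"'") if raw else ""
    return candidate if candidate in ALLOWED_INTENTS else "other"
```

Injecting the model call as a callable keeps the sketch provider-agnostic and makes the post-processing, the part that tames the chatty output, testable without a network call.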
In an interview for MVS Noticias with Luis Cárdenas, Laura Coronado, a lawyer and specialist in digital culture, discussed the story of a woman who asked her husband for a divorce after ChatGPT "revealed" an infidelity while reading her coffee cup. See omnystudio.com/listener for privacy information.
People don't connect with job titles. They connect with you, your story, your "why," and what you've overcome. In a sea of industry updates and thought leadership posts, it's the personal stories that cut through the noise. They show:
- Why you do what you do
- What drives your mission
- What you believe in
- And how you got here
When done well, your origin story does more than just introduce you. It positions you as a trusted voice. It attracts the right clients, partners, and opportunities. And it builds a real emotional connection with your audience.
Blink, and you've already missed like 7 AI updates. The large language models we use and rely on? They change out more than your undies. (No judgement here.) But real talk — businesses have made LLMs a cornerstone of their operations, yet don't follow the updates. Don't worry, shorties. We've got ya. In our first ever LLM Monthly roundup, we're telling you what's new and noteworthy in your favorite LLMs.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Have a question? Join the convo here.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
ChatGPT 4.1 New Features Overview
ChatGPT Shopping Platform Launch
ChatGPT's Microsoft SharePoint Integration
ChatGPT Memory and Conversation History
Google Gemini 2.5 Pro Updates
Gemini Canvas Powerful Applications
Claude Integrations with Google Workspace
Microsoft Copilot Deep Research Insights
Timestamps:
00:00 Saudi Arabia's $600B AI Investment
06:44 Monthly AI Model Update Show
08:11 OpenAI Launches GPT-4.1 Publicly
11:52 AI Research Tools Comparison
16:29 Perplexity's Pushy Shopping Propensity
19:55 ChatGPT Memory: Pros and Cons
22:29 Gemini Canvas vs. OpenAI Canvas
25:06 AI Model Competition Highlights
28:25 Google Gemini Rivals OpenAI's Research
32:30 Claude's Features and Limitations
37:05 Anthropic's Educational AI Innovation
39:02 Exploring Copilot Vision Expansion
41:38 Meta AI Launch and Llama 4 Models
46:27 New iOS Voice Assistant Features
47:54 Enhancing iOS Assistant Potential
Keywords: ChatGPT, AI updates, Large Language Model updates, OpenAI, GPT 4.1, GPT-4o, GPT 4.5, GPT 4.1 Mini, Saudi Arabia AI investment, NVIDIA Blackwell AI chips, AMD deal, Humane startup, Data Vault, AI data centers, logic errors moderation, Grok AI, Elon Musk, xAI, Google Gemini, ChatGPT shopping, Microsoft SharePoint integration, OneDrive integration, deep research, AI shopping platform, Google DeepMind, AlphaEvolve, evolutionary techniques, AI coding, Claude, Anthropic Claude, Confluence integration, Jira integration, Zapier integration, ChatGPT enterprise, API updates, Copilot Pages, Microsoft 365, Bing search, Meta AI, Llama 4, Llama 4 Maverick, Llama 4 Scout, Perplexity, voice assistant, Siri alternatives, Grok Studio, AI social network.
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info) Ready for ROI on GenAI? Go to youreverydayai.com/partner
AI Chat: ChatGPT & AI News, Artificial Intelligence, OpenAI, Machine Learning
In this episode, Jaeden discusses the launch of OpenAI's new model, GPT 4.1, which is specifically designed for coding and math tasks.
Try AI Box: https://AIBox.ai/
AI Chat YouTube Channel: https://www.youtube.com/@JaedenSchafer
Join my AI Hustle Community: https://www.skool.com/aihustle/about
00:00 Introduction to GPT 4.1
00:56 AI Box Playground Launch
01:52 Features of GPT 4.1
03:15 The Coding Competition
07:02 Safety Concerns and Controversies
08:55 Future of Coding Tools
Takeaways
OpenAI has launched GPT 4.1, focusing on coding and math.
AI Box offers access to multiple AI models for a flat fee.
GPT 4.1 is better for coding tasks than its predecessor, GPT 4.5.
OpenAI is acquiring Windsurf for $3 billion to enhance coding capabilities.
The competition in AI coding tools is intensifying with players like Claude and Gemini.
Safety concerns were raised regarding the release of GPT 4.1.
GPT 4.1 is faster and better at instruction following than GPT-4o.
OpenAI's marketing strategy for model selection is criticized.
The AI Box platform allows users to test various AI models easily.
The future of AI coding tools will be shaped by ongoing competition.
In this week's episode, we're recapping Elena's speedy race and "effort PR" at the Flying Pig Marathon in Cincinnati. We talk through why Elena chose to race a road marathon as part of her build for UTMB, her flexible approach to setting A/B/C goals, how she managed internal and external pressure, why feeling bad during the taper doesn't mean you're going to have a bad race, the mental prep she did to get ready for race day, how she designed and executed her race plan on a challenging course, why body weight has so much less to do with performance than we might think, what it was like to coach her husband Will (who also crushed it!) for this race, and why self-belief was the number one factor that led to her incredible performance. Katie and Elena also cover some fun insights on when and how to choose a big vs. small race, why the ability to "say yes" to anything physically is one of the greatest benefits of endurance training, how we think about integrating ChatGPT into our training and coaching strategies (or not), and female athlete body pressures. This is a super rich episode full of insights for any athlete — check it out! View extended show notes for this episode here. To share feedback or ask questions to be featured on a future episode, please use this form or email: Katie@TheEnduranceDrive.com.
Subscribe at thisnewway.com to get the step-by-step playbooks, tools, and workflows. In episode 5 of This New Way, Aydin sits down with Liam Martin, co-founder of Time Doctor and Running Remote, to explore how AI is reshaping team productivity, SaaS economics, and the future of work. Liam shares how his team replaced $40K/year worth of employee engagement software with open-source AI tools — and how their internal R&D lab, Chainsaw, is building the future of workforce analytics. You'll hear how Time Doctor uses AI to reclassify productivity metrics by job role, how AI has changed their approach to product-market fit, and why they're betting on proprietary agents as the next evolution of workplace tools. Liam also shares his personal tech stack, insights on open-source AI models like DeepSeek, and how he's replacing Google with LLMs in his day-to-day workflow. You'll walk away with practical ideas for how to reduce SaaS spend, empower your R&D teams, and get ahead of AI's disruptive force in remote work and beyond. Click here to check out the AI-generated timestamps, episode summary and transcript.
Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️ review and share the episode with someone who will benefit from listening.
TIMESTAMPS
00:36 Liam's McGill story and how he accidentally left academia
02:45 The early days of remote work and building Time Doctor
05:57 What Time Doctor actually does (and how AI is changing it)
08:08 How AI reclassified productivity by job type
11:03 Could product-market fit collapse due to AI?
13:44 Building the Chainsaw team
20:02 Replacing Google with LLMs
24:35 Why proprietary AIs might need to be "pushy," not polite
28:03 Don't wait for economics — just solve the problem
29:45 What's coming next in the AI cost curve
35:58 GPT customizes slide decks based on personality types
39:07 Build complex no-code apps with just a prompt using Lovable
43:46 Engineers may be more disrupted by AI than customer service
TOOLS & RESOURCES MENTIONED
AI Tools & Models
DeepSeek OCR → replaced paid OCR tools, cutting costs by 90%
Do Browser (Chrome extension) → automates browser actions like a human
LM Studio → runs open-source LLMs like DeepSeek and Claude locally
Claude (Anthropic) → used for AI-based task delegation
OpenAI GPT-4 / Operators → tested against open-source alternatives
Internal Innovation & AI Systems
Chainsaw R&D Team → focused on building from scratch, not optimizing
Workforce Analytics with AI → redefining productivity dynamically by role
AI-driven feature decisions → testing new models before looking at ROI
OCR Video Analysis → used to assess best vs worst execution of tasks
Philosophies & Frameworks
"Build a chainsaw, not a sharper axe" → rethink, don't just improve
"Solve one customer's problem perfectly" → from Y Combinator playbook
Personal AI-first workflows → replacing search with LLMs
How can AI make meetings better? That's the simple question that inspired Granola, a productivity tool that can tell you what was actually discussed in that meeting last week and what the real next steps are. In this episode of Generative Now, host Michael Mignano, partner at Lightspeed, sits down with Granola co-founders Chris Pedregal and Sam Stephenson at their headquarters in London. They talk about how they first met, their early product bets, and how they decided to focus on solving one painful problem: the chaos that follows every meeting. They share the story behind their early experiments with GPT-3 and how that eventually evolved into a tool designed especially for people who find their days filled with back-to-back meetings. Chris and Sam explain why they avoided flashy AI features to focus on simplicity and habit-forming design, how they quickly found their product-market fit, and what it means to adapt quickly in the age of LLMs.
Episode Chapters
00:00 Welcome and Introduction
01:15 Finding the Right Co-Founder
01:51 Tools for Thought and AI
03:24 Identifying the Problem
04:38 Building the Solution
05:07 The Evolution of AI and App Layer
07:46 Challenges and Innovations
11:20 Business Model and Future Outlook
14:24 The Launch and Early Success
17:52 Theoretical vs. Practical User Needs
18:17 Stress and Software Design
19:28 Scaling with AI
22:05 Maintaining Quality and Taste
24:10 Building a Silicon Valley Startup in London
28:03 The Future of Granola
30:27 Early Feedback and Iteration
35:09 Privacy and Data Handling
Stay in touch:
www.lsvp.com
X: https://twitter.com/lightspeedvp
LinkedIn: https://www.linkedin.com/company/lightspeed-venture-partners/
Instagram: https://www.instagram.com/lightspeedventurepartners/
Subscribe on your favorite podcast app: generativenow.co
Email: generativenow@lsvp.com
The content here does not constitute tax, legal, business or investment advice or an offer to provide such advice, should not be construed as advocating the purchase or sale of any security or investment or a recommendation of any company, and is not an offer, or solicitation of an offer, for the purchase or sale of any security or investment product. For more details please see lsvp.com/legal.
Episode Title: GPTs Replaced Endless File Searching
Show Notes:
In this episode of the B2B Marketing Excellence & AI Podcast, I talk about a common frustration — when your computer crashes and you're forced to transfer everything. That happened to me recently, and instead of just moving over cluttered files, I decided to create a better system. I used this opportunity to build personalized GPTs (Generative Pretrained Transformers) inside ChatGPT to help me organize information so I could stop digging through folders, spreadsheets, and emails. Now, instead of asking "Where did I save that?" — I just ask my GPT. These GPTs have become my go-to system for locating key client information, marketing materials, podcast outlines, and internal resources — all in seconds. If you're overwhelmed by digital disorganization or tired of repeating the same searches, this episode will show you how to use AI to create a centralized, accessible, and reliable system for storing and retrieving information.
You'll learn:
Why I decided not to keep transferring messy files across computers
How GPTs help organize and recall key information instantly
Real-world examples of how I use GPTs to support client work and daily operations
Simple ways to get started creating your own GPT-based document system
At World Innovators, we're all about helping B2B brands and Executives find smarter ways to reach the right audience — and that starts with staying organized internally. GPTs are one tool that's helping us (and our clients) reduce clutter and increase clarity.
Watch the Bonus Video: How to Create Your Own GPT – https://youtu.be/2NNt4f88qNw?si=KniJVppBV3CSuafp
Episode Breakdown:
00:00 A Rough Week with Technology
03:21 Setting Up Your Own GPT
04:58 Practical Applications of GPTs
08:27 Training and Optimizing Your GPT
12:50 Benefits of GPTs for Teams
15:18 Final Thoughts and Encouragement
The AI Breakdown: Daily Artificial Intelligence News and Discussions
If you've ever stared at OpenAI's model selector and thought, "Which one am I supposed to use?", this episode breaks it all down. We go through when to use GPT-4o, GPT-4.5, o3, o4-mini, o4-mini-high, and o1 pro, all based on real-world business scenarios. Get Ad-Free AI Daily Brief: https://patreon.com/AIDailyBrief
Brought to you by:
KPMG – Go to https://kpmg.com/ai to learn more about how KPMG can help you drive value with our AI solutions.
Blitzy.com – Go to https://blitzy.com/ to build enterprise software in days, not months.
Vertice Labs – Check out http://verticelabs.io/ – the AI-native digital consulting firm specializing in product development and AI agents for small to medium-sized businesses.
The Agent Readiness Audit from Superintelligent – Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Subscribe to the newsletter: https://aidailybrief.beehiiv.com/
Join our Discord: https://bit.ly/aibreakdown
Interested in sponsoring the show? nlw@breakdown.network
On this episode we interview artist Lauren Cooper. We talk about the impact of motherhood on the creative practice, launching ghost designing services for fellow artists, using murals to bring joy in schools, utilizing ChatGPT to organize time and projects, unpacking the investment value of murals for businesses, putting yourself first, and aligning your business with your core values as a person.
Stay Connected with Lauren:
https://www.instagram.com/rosemontlane
https://www.rosemontlane.com
Episode Blog Link: https://www.levelupartists.com/lua-podcast/207
Sign up for our studio newsletters at:
https://www.AmeighArt.com
https://www.JaclynSanders.com
https://www.levelupartists.com
Connect with us on Instagram:
https://www.instagram.com/AmeighArt/
https://www.instagram.com/JSandersStudio/
https://www.instagram.com/LevelUpArtists/
Music by: https://www.coreyclaxton.com
Watching or listening to one of our earlier episodes? In 2022, the Art Studio Insights podcast was renamed the Level Up Artists podcast!
On this episode of the Best Ever CRE Show, Joe Fairless interviews Vanessa Alfaro, Bonny Wayman, and Rebecca Themelis in part three of a three-part series on AI in multifamily real estate. This installment focuses on how operators are implementing AI in property operations such as leasing, maintenance, asset management, and investor reporting. Vanessa discusses creating AI agents and chatbots for asset analysis and KPI tracking, Rebecca explains how tools like MeetElise and Claude AI have accelerated leasing and quality checks, and Bonny shares how custom GPT bots are transforming her management of 50-unit properties. The panel emphasizes the accessibility of AI across portfolio sizes, the importance of training both humans and bots, and how embracing these tools early provides a major operational edge. Vanessa Alfaro Current role: Founder of Venus Capital & Lunax.ai Based in: Texas Say hi to them at: https://lunax.ai, https://venuspartners.com Bonny Wayman Current role: Asset Manager at Wild Oak Capital Based in: Colorado Say hi to them at: https://www.wildoakcapital.com/ or bonny@wildoakcapital.com Rebecca Themelis Current role: Real Estate Investor, Broker, and Contractor at Spot Properties Based in: California Say hi to them at: rebecca@spotproperties.net Get a 4-week trial, free postage, and a digital scale at https://www.stamps.com/cre. Thanks to Stamps.com for sponsoring the show! Post your job for free at https://www.linkedin.com/BRE. Terms and conditions apply. Try Huel with 15% OFF + Free Gift for New Customers today using my code bestever at https://huel.com/bestever. Fuel your best performance with Huel today! Join the Best Ever Community The Best Ever Community is live and growing - and we want serious commercial real estate investors like you inside. It's free to join, but you must apply and meet the criteria. Connect with top operators, LPs, GPs, and more, get real insights, and be part of a curated network built to help you grow. 
Apply now at www.bestevercommunity.com Learn more about your ad choices. Visit megaphone.fm/adchoices
I, Stewart Alsop, welcomed Alex Levin, CEO and co-founder of Regal, to this episode of the Crazy Wisdom Podcast to discuss the fascinating world of AI phone agents. Alex shared some incredible insights into how AI is already transforming customer interactions and what the future holds for company agents, machine-to-machine communication, and even the nature of knowledge itself. Check out this GPT we trained on the conversation!

Timestamps
00:29 Alex Levin shares that people are often more honest with AI agents than human agents, especially regarding payments.
02:41 The surprising persistence of voice as a preferred channel for customer interaction, and how AI is set to revolutionize it.
05:15 Discussion of the three types of AI agents: personal, work, and company agents, and how conversational AI will become the main interface with brands.
07:12 Exploring the shift to machine-to-machine interactions and how AI changes what knowledge humans need versus what machines need.
10:56 The looming challenge of centralization versus decentralization in AI, and how Americans often prioritize experience over privacy.
14:11 Alex explains how tokenized data can offer personalized experiences without compromising specific individual privacy.
25:44 Voice is predicted to become the primary way we interact with brands and technology due to its naturalness and efficiency.
33:21 Why AI agents are easier to implement in contact centers due to different entropy compared to typical software.
38:13 How Regal ensures AI agents stay on script and avoid "hallucinations" by proper training and guardrails.
46:11 The technical challenges in replicating human conversational latency and nuances in AI voice interactions.

Key Insights
AI Elicits Honesty: People tend to be more forthright with AI agents, particularly in financially sensitive situations like discussing overdue payments.
Alex speculates this is because individuals may feel less judged by an AI, leading to more truthful disclosures compared to interactions with human agents.

Voice is King, AI is its Heir: Despite predictions of its decline, voice remains a dominant channel for customer interactions. Alex believes that within three to five years, AI will handle as much as 90% of these voice interactions, transforming customer service with its efficiency and availability.

The Rise of Company Agents: The primary interface with most brands is expected to shift from websites and apps to conversational AI agents. This is because voice is a more natural, faster, and emotive way for humans to interact, a behavior already seen in younger generations.

Machine-to-Machine Future: We're moving towards a world where AI agents representing companies will interact directly with AI agents representing consumers. This "machine-to-machine" (M2M) paradigm will redefine commerce and the nature of how businesses and customers engage.

Ontology of Knowledge: As AI systems process vast amounts of information, creating a clear "ontology of knowledge" becomes crucial. This means structuring and categorizing information so AI can understand the context and user's underlying intent, rather than just processing raw data.

Tokenized Data for Privacy: A potential solution to privacy concerns is "tokenized data." Instead of providing AI with specific personal details, users could share generalized tokens (e.g., "high-intent buyer in 30s") that allow for personalized experiences without revealing sensitive, identifiable information.

AI Highlights Human Inconsistencies: Implementing AI often brings to light existing inconsistencies or unacknowledged issues within a company.
For instance, AI might reveal discrepancies between official scripts and how top-performing human agents actually communicate, forcing companies to address these differences.

Influence as a Key Human Skill: In a future increasingly shaped by AI, Sam Altman (via Alex) suggests that the ability to "influence" others will be a paramount human skill. This uniquely human trait will be vital, whether for interacting with other people or for guiding and shaping AI systems.

Contact Information
Regal AI: regal.ai
Email: hello@regal.ai
LinkedIn: www.linkedin.com/in/alexlevin1/
In this high-impact episode of Seller Sessions, Danny McMillan is joined by Dorian Gorski for a no-fluff exploration of how AI is shifting the Amazon ecosystem. The conversation orbits around a powerful new tool called "Manus" — an AI-driven platform built to go beyond surface-level product research and tap into rich demographic insights, customer motivations, and actionable listing data.
"What do you have in your hands?" "Gold!" And the three most prominent billionaires of the moment have understood this well. Xavier Niel (Free), Rodolphe Saadé (CMA-CGM), and Eric Schmidt (Google) have put 300 million euros into the nonprofit open-source research lab led by Patrick Perez, a researcher in applied AI.

Patrick heads Kyutai, founded in 2023, already one of France's leading AI labs, with several tools available: Moshi, their conversational voice AI; Hibiki, for live translation; and MoshiVis, for image analysis.

On the agenda for this episode: autonomous taxis, errors inherent to AI, training models with human input, the problem of synthetic content... and where AI is most lucrative.

Before founding Kyutai, Patrick moved between academic research and industry. He led AI strategy at Valeo, worked on image processing at Technicolor, and also conducted research at Microsoft and at INRIA, two benchmarks in technological innovation. This background now allows him to tackle one of the most promising topics of the moment: multimodality in AI, an approach that combines text, image, and audio to create more powerful and more intuitive tools. And good news: this is the new wave of research that will drive the next major breakthroughs in the field.

This episode is a progress report for truly understanding where AI research stands and how France is positioned. Between fantasy and reality, Patrick explains how AI works, how it is gradually capturing signals from the real world, and why that is a revolution.

TIMELINE:
00:00:00: The beauty of applied mathematics made accessible through AI
00:11:17: Toward truly multimodal AI: understanding without going through text
00:21:20: Giving AI eyes and ears
00:30:17: The meeting of AI and robotics: robotaxis in Paris?
00:48:09: The next advances in AI will change EVERYTHING
00:55:20: GPT still gets things wrong... and that's a good thing!
01:00:51: When the machine becomes a teacher for other machines
01:08:33: Human involvement in training AIs is still necessary
01:21:33: The problem of synthetic content that doesn't identify itself as such
01:34:07: Will we become stupid by delegating too much to AI?
01:42:40: Where AI is most lucrative
01:53:09: Convincing giants: Xavier Niel, Rodolphe Saadé, Eric Schmidt
02:07:36: AI for coding: where do we stand?
02:15:59: What we can do with AI and the cost of GPUs

Previous GDIY episodes mentioned:
#450 - Karim Beguir - InstaDeep - General AI? It's coming in 2025
#397 - Yann Le Cun - Chief AI Scientist at Meta - Artificial General Intelligence won't come from ChatGPT
#267 - Andréa Bensaïd - Eskimoz - Turning down 30 million to aim for a billion
#418 - Clément Delangue - Hugging Face - A 4.5 billion valuation with a product that's 99% free
#414 - Florian Douetteau - Dataiku - The next big AI wave: adopt it or perish

We talked about:
KYUTAI
Moshi (Kyutai's AI)
Inria: the French national institute for research in digital science and technology
Stéphane Mallard
Waymo autonomous taxi test: Insta video
Documentary in the US
Hibiki (translation tool)
Allen Institute for Artificial Intelligence

You can contact Patrick on LinkedIn and on Bluesky.

Want to sponsor Génération Do It Yourself or propose a partnership? Contact my label Orso Media via this form.

Distributed by Audiomeans. Visit audiomeans.fr/politique-de-confidentialite for more information.
From rogue AI behaviors to Google Gemini's web-building skills, this week's 10 Minute Teacher Podcast is your fast, fun roundup of the most classroom-worthy education and technology news!
This week's episode is a gift. Literally. Like, Happy Mother's Day. You didn't buy her flowers (again), so give her the only thing that lasts longer than a dead tulip and costs less than a greeting card: this podcast. It's free. It's entertaining. It's packed with unsolicited AI rants and raccoons with crack pipes. What else does she want?We kick things off by unveiling the ultimate gift for mom—a custom Cameo from yours truly for just $2. That's right, two bucks. You can't even steal gum for that anymore. But you can get a heartfelt, helium-voiced message from a man whose voice has never recovered from a childhood balloon incident.Then, we dive deep into the unholy evolution of AI—from adorable babyfied versions of podcast hosts like Theo Von and Bobby Lee, to the sexy, sentient voices of GPT that may or may not steal your man. I test ChatGPT's limits live on-air (spoiler: she tries to convince us we're in the Matrix and honestly? I buy it).Next up, we talk about chipping our kids. Yep. That's where we're at. Neuralink drops in like the world's most controversial app update, and we ask the real questions: Are we gonna chip our children for academic success, or nah?Then, Pennsylvania almost legalized recreational marijuana—but don't spark up just yet. The House passed the bill, but it's heading to the Senate where dreams go to die. Still, the thought of smoother roads and a billion dollars in tax revenue almost makes you wanna run for office. Or at least move to Maryland.Speaking of dreams dying—nothing says “routine traffic stop” like a meth-smoking raccoon named Chewy sitting shotgun while his crackhead owner gets cuffed in Springfield, Ohio. This is not satire. This is real life. Chewy. The. Raccoon. Has. A. Meth. Pipe. And a backup one. We cover the whole police report like it's TMZ for rodents.Then it's time for some nostalgic goodness. Remember those shady late-night ringtone commercials from 2004? 
The ones that charged you $9.99 to hear Save a Horse, Ride a Cowboy on your flip phone? We relive the glory days of prepaid Virgin Mobile plans, 15-second Lil Wayne ringtones, and the elite cultural significance of ringback tones.But it's not all nostalgia and narcotics—we're here to save your relationship too. That's right. Ladies, if you want to turn your man back into the guy you fell in love with (or at least get him to stop drinking cases of beer alone), you already know the answer. PSA: Put your hand down his pants. Need help? Our friends at BlueChew got you. Promo link in the episode. Save your marriage for $5 shipping.And finally, we close on a plea: David Dobrik, bring back Liza. I don't care what your sexuality is, just repost the OG vlogs with Helga, accents, and USPS boxes. Give the people what we want. Give us chaos. Give us love. Give us 4 minutes and 22 seconds of unhinged, golden, creator-content bliss.It's my birthday, it's Mother's Day, and it's Chewy the raccoon's meth bender anniversary. What are we doing?*************************************************************✅BLUECHEW - FIRST ORDER FREE Only $5 Shippinghttps://wawdpod.com/blue*************************************************************✅DUDEROBE - PROMO CODE: WAWD 20% OFFhttps://duderobe.com - promo code: WAWD*************************************************************
In episode 1860, Jack and guest co-host Andrew Ti are joined by host of Worse Than You, Mo Fry Pasic, to discuss…REAL ID Isn’t Real, Cyber Trucks Just Totally Stop Selling, This AI Expert Thinks The AI Bubble’s About to Pop and more! What you need to know about the REAL ID requirements for air travel The Racist Origins of the Real ID Act Top Trump agency reveals key reason why REAL ID will be enforced 'Mass surveillance': Conservatives sound alarm over Trump admin's REAL ID rollout Trump’s Insistence on Real ID Has Become a Flashpoint for His Tinfoil Hat Fans You can get a free Krispy Kreme doughnut on May 7 for Real ID deadline: Here's how Homeland Security chief says travelers with no REAL ID can fly for now, but with likely extra steps Flying out of Indianapolis without REAL ID? Don't fret — the airport isn't turning people away Tesla’s Inventory of Unsold Cybertrucks Skyrockets, Despite Offering $10K Discounts and Concealing Listings The Silicon Valley sceptic warning tech’s new bubble is about to burst Deep Learning Is Hitting a Wall Microsoft’s £2.5bn investment in Britain at risk from creaking power grid Chess helped me win the Nobel Prize, says Google’s AI genius OpenAI overrode concerns of expert testers to release sycophantic GPT-4o The next British boom could be in the offing – if Starmer abandons net zero Finance worker pays out $25 million after video call with deepfake ‘chief financial officer’ LISTEN: Indeed by CruzaSee omnystudio.com/listener for privacy information.
Welcome to the AI Rollup, from the Limitless Podcast. David, Ejaaz, and Josh break down the week's most important AI headlines, from OpenAI's $3B Windsurf acquisition and Google's full-stack AI play, to Visa and Mastercard preparing for agentic commerce. We explore the state of robotics, major interpretability challenges, and why the race to AGI may outpace our ability to understand it. Plus: AI ASMR, glow-up GPT, and why autonomous agents still kinda suck. Stay curious, this one's stacked.------
At inference, large language models use in-context learning with zero-, one-, or few-shot examples to perform new tasks without weight updates, and can be grounded with Retrieval Augmented Generation (RAG) by embedding documents into vector databases for real-time factual lookup using cosine similarity. LLM agents autonomously plan, act, and use external tools via orchestrated loops with persistent memory, while recent benchmarks like GPQA (STEM reasoning), SWE Bench (agentic coding), and MMMU (multimodal college-level tasks) test performance alongside prompt engineering techniques such as chain-of-thought reasoning, structured few-shot prompts, positive instruction framing, and iterative self-correction. Links Notes and resources at ocdevel.com/mlg/mlg35 Build the future of multi-agent software with AGNTCY Try a walking desk stay healthy & sharp while you learn & code In-Context Learning (ICL) Definition: LLMs can perform tasks by learning from examples provided directly in the prompt without updating their parameters. Types: Zero-shot: Direct query, no examples provided. One-shot: Single example provided. Few-shot: Multiple examples, balancing quantity with context window limitations. Mechanism: ICL works through analogy and Bayesian inference, using examples as semantic priors to activate relevant internal representations. Emergent Properties: ICL is an "inference-time training" approach, leveraging the model's pre-trained knowledge without gradient updates; its effectiveness can be enhanced with diverse, non-redundant examples. Retrieval Augmented Generation (RAG) and Grounding Grounding: Connecting LLMs with external knowledge bases to supplement or update static training data. Motivation: LLMs' training data becomes outdated or lacks proprietary/specialized knowledge. Benefit: Reduces hallucinations and improves factual accuracy by incorporating current or domain-specific information. 
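The zero-, one-, and few-shot distinction above comes down to how many worked examples are prepended to the query before it is sent to the model. A minimal sketch of that prompt assembly (the sentiment task and the example pairs are invented for illustration; no model call is made):

```python
def build_prompt(task, examples, query):
    """Assemble an in-context learning prompt: task instruction,
    optional worked examples, then the new query."""
    parts = [task]
    for inp, out in examples:          # zero-shot when examples == []
        parts.append(f"Input: {inp}\nOutput: {out}")
    parts.append(f"Input: {query}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: no examples; the model relies on pre-trained knowledge alone.
zero_shot = build_prompt("Classify the sentiment as positive or negative.",
                         [], "The battery died after an hour.")

# Few-shot: the examples act as semantic priors that steer both the
# behavior and the output format.
few_shot = build_prompt(
    "Classify the sentiment as positive or negative.",
    [("Great screen, fast shipping.", "positive"),
     ("Arrived broken and late.", "negative")],
    "The battery died after an hour.")
```

The only budget constraint is the context window: each added example improves steering but consumes tokens, which is why few-shot prompts balance example count against window limits.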
RAG Workflow: Embedding: Documents are converted into vector embeddings (using sentence transformers or representation models). Storage: Vectors are stored in a vector database (e.g., FAISS, ChromaDB, Qdrant). Retrieval: When a query is made, relevant chunks are extracted based on similarity, possibly with re-ranking or additional query processing. Augmentation: Retrieved chunks are added to the prompt to provide up-to-date context for generation. Generation: The LLM generates responses informed by the augmented context. Advanced RAG: Includes agentic approaches—self-correction, aggregation, or multi-agent contribution to source ingestion, and can integrate external document sources (e.g., web search for real-time info, or custom datasets for private knowledge). LLM Agents Overview: Agents extend LLMs by providing goal-oriented, iterative problem-solving through interaction, memory, planning, and tool usage. Key Components: Reasoning Engine (LLM Core): Interprets goals, states, and makes decisions. Planning Module: Breaks down complex tasks using strategies such as Chain of Thought or ReAct; can incorporate reflection and adjustment. Memory: Short-term via context window; long-term via persistent storage like RAG-integrated databases or special memory systems. Tools and APIs: Agents select and use external functions—file manipulation, browser control, code execution, database queries, or invoking smaller/fine-tuned models. Capabilities: Support self-evaluation, correction, and multi-step planning; allow integration with other agents (multi-agent systems); face limitations in memory continuity, adaptivity, and controllability. Current Trends: Research and development are shifting toward these agentic paradigms as LLM core scaling saturates. Multimodal Large Language Models (MLLMs) Definition: Models capable of ingesting and generating across different modalities (text, image, audio, video). 
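Returning to the RAG workflow above, the retrieve-then-augment steps can be sketched with plain NumPy. This is a toy illustration: the chunk texts are invented, and the embeddings are random vectors standing in for what a sentence-transformer model would produce, so the retrieval here is not semantically meaningful:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy "vector database": each chunk paired with its embedding.
rng = np.random.default_rng(0)
chunks = ["Doc about refunds", "Doc about shipping", "Doc about warranties"]
index = [(chunk, rng.normal(size=8)) for chunk in chunks]

def retrieve(query_vec, index, k=2):
    """Retrieval step: return the k chunks most similar to the query."""
    scored = sorted(index, key=lambda cv: cosine_sim(query_vec, cv[1]),
                    reverse=True)
    return [chunk for chunk, _ in scored[:k]]

def augment(query_text, retrieved):
    """Augmentation step: prepend retrieved context to ground generation."""
    context = "\n".join(retrieved)
    return f"Context:\n{context}\n\nQuestion: {query_text}\nAnswer:"

query_vec = rng.normal(size=8)   # would be the embedded user query
prompt = augment("How do refunds work?", retrieve(query_vec, index))
```

In a real pipeline the embedding and storage steps would go through a vector database such as FAISS, ChromaDB, or Qdrant, which also handle approximate nearest-neighbor search and re-ranking at scale.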
Architecture: Modality-Specific Encoders: Convert raw modalities (text, image, audio) into numeric embeddings (e.g., vision transformers for images). Fusion/Alignment Layer: Embeddings from different modalities are projected into a shared space, often via cross-attention or concatenation, allowing the model to jointly reason about their content. Unified Transformer Backbone: Processes fused embeddings to allow cross-modal reasoning and generates outputs in the required format. Recent Advances: Unified architectures (e.g., GPT-4o) use a single model for all modalities rather than switching between separate sub-models. Functionality: Enables actions such as image analysis via text prompts, visual Q&A, and integrated speech recognition/generation. Advanced LLM Architectures and Training Directions Predictive Abstract Representation: Incorporating latent concept prediction alongside token prediction (e.g., via autoencoders). Patch-Level Training: Predicting larger "patches" of tokens to reduce sequence lengths and computation. Concept-Centric Modeling: Moving from next-token prediction to predicting sequences of semantic concepts (e.g., Meta's Large Concept Model). Multi-Token Prediction: Training models to predict multiple future tokens for broader context capture. Evaluation Benchmarks (as of 2025) Key Benchmarks Used for LLM Evaluation: GPQA (Diamond): Graduate-level STEM reasoning. SWE Bench Verified: Real-world software engineering, verifying agentic code abilities. MMMU: Multimodal, college-level cross-disciplinary reasoning. HumanEval: Python coding correctness. HLE (Humanity's Last Exam): Extremely challenging, multimodal knowledge assessment. LiveCodeBench: Coding with contamination-free, up-to-date problems. MLPerf Inference v5.0 Long Context: Throughput/latency for processing long contexts. MultiChallenge Conversational AI: Multiturn dialogue, in-context reasoning. TAUBench/PFCL: Tool utilization in agentic tasks.
TruthfulQA: Measures tendency toward factual accuracy/robustness against misinformation. Prompt Engineering: High-Impact Techniques Foundational Approaches: Few-Shot Prompting: Provide pairs of inputs and desired outputs to steer the LLM. Chain of Thought: Instructing the LLM to think step-by-step, either explicitly or through internal self-reprompting, enhances reasoning and output quality. Clarity and Structure: Use clear, detailed, and structured instructions—task definition, context, constraints, output format, use of delimiters or markdown structuring. Affirmative Directives: Phrase instructions positively ("write a concise summary" instead of "don't write a long summary"). Iterative Self-Refinement: Prompt the LLM to review and improve its prior response for better completeness, clarity, and factuality. System Prompt/Role Assignment: Assign a persona or role to the LLM for tailored behavior (e.g., "You are an expert Python programmer"). Guideline: Regularly consult official prompting guides from model developers as model capabilities evolve. Trends and Research Outlook Inference-time compute is increasingly important for pushing the boundaries of LLM task performance. Agentic LLMs and multimodal reasoning represent the primary frontiers for innovation. Prompt engineering and benchmarking remain essential for extracting optimal performance and assessing progress. Models are expected to continue evolving with research into new architectures, memory systems, and integration techniques.
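The prompt-engineering techniques listed above compose naturally into a single template: role assignment, clear structure with delimited sections, affirmative constraints, few-shot examples, and a chain-of-thought cue. A sketch of one way to combine them (the role, task, and example content are invented for illustration):

```python
def engineered_prompt(role, task, constraints, examples, query):
    """Combine role assignment, structured instructions, affirmative
    constraints, few-shot examples, and a chain-of-thought cue."""
    sections = [
        f"System: {role}",
        f"Task: {task}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
    ]
    for q, a in examples:  # few-shot worked examples
        sections.append(f"Example question: {q}\nExample answer: {a}")
    # Zero-shot chain-of-thought cue appended to the real query.
    sections.append(f"Question: {query}\nLet's think step by step.")
    return "\n\n".join(sections)

prompt = engineered_prompt(
    role="You are an expert Python programmer.",
    task="Explain what the code snippet does.",
    constraints=["Write a concise summary.",   # affirmative phrasing
                 "Use plain language."],
    examples=[("What does `len([1, 2])` return?", "2, the list length.")],
    query="What does `sorted('cab')` return?")
```

Iterative self-refinement would then feed the model's first answer back with an instruction like "Review and improve your previous response," which is a second round-trip rather than part of the initial template.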
An overview of advancements in large language models (LLMs): scaling laws (the relationships among model size, data size, and compute) and how emergent abilities such as in-context learning, multi-step reasoning, and instruction following arise once certain scaling thresholds are crossed; the evolution of the transformer architecture with Mixture of Experts (MoE); the three-phase training process culminating in Reinforcement Learning from Human Feedback (RLHF) for model alignment; and advanced reasoning techniques such as chain-of-thought prompting, which significantly improve complex task performance. Links Notes and resources at ocdevel.com/mlg/mlg34 Build the future of multi-agent software with AGNTCY Try a walking desk stay healthy & sharp while you learn & code Transformer Foundations and Scaling Laws Transformers: Introduced by the 2017 "Attention is All You Need" paper, transformers allow for parallel training and inference of sequences using self-attention, in contrast to the sequential nature of RNNs. Scaling Laws: Empirical research revealed that LLM performance improves predictably as model size (parameters), data size (training tokens), and compute are increased together, with diminishing returns if only one variable is scaled disproportionately. The "Chinchilla scaling law" (DeepMind, 2022) established the optimal model/data/compute ratio for efficient model performance: earlier large models like GPT-3 were undertrained relative to their size, whereas right-sized models with more training data (e.g., Chinchilla, LLaMA series) proved more compute and inference efficient. Emergent Abilities in LLMs Emergence: When trained beyond a certain scale, LLMs display abilities not present in smaller models, including: In-Context Learning (ICL): Performing new tasks based solely on prompt examples at inference time. Instruction Following: Executing natural language tasks not seen during training.
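The Chinchilla result mentioned above can be turned into concrete arithmetic. Two common rules of thumb (approximations, not exact laws) are that training compute is roughly 6 × parameters × tokens FLOPs, and that a compute-optimal model sees about 20 training tokens per parameter. Under those assumptions, a compute budget determines both model size and dataset size:

```python
def chinchilla_optimal(compute_flops, tokens_per_param=20):
    """Given a training compute budget C ~ 6 * N * D FLOPs and the
    Chinchilla-style rule of thumb D ~ 20 * N, solve for parameter
    count N and training tokens D. Both relations are approximations."""
    # C = 6 * N * (20 * N)  =>  N = sqrt(C / 120)
    n_params = (compute_flops / (6 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# Example: a 1e23-FLOP training budget.
n, d = chinchilla_optimal(1e23)
```

Run on a GPT-3-scale budget, this kind of calculation is what showed GPT-3 (175B parameters, ~300B tokens) to be undertrained relative to its size: the same compute spent on a smaller model and more tokens yields better loss.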
Multi-Step Reasoning & Chain of Thought (CoT): Solving arithmetic, logic, or symbolic reasoning by generating intermediate reasoning steps. Discontinuity & Debate: These abilities appear abruptly in larger models, though recent research suggests that this could result from non-linearities in evaluation metrics rather than innate model properties. Architectural Evolutions: Mixture of Experts (MoE) MoE Layers: Modern LLMs often replace standard feed-forward layers with MoE structures. Composed of many independent "expert" networks specializing in different subdomains or latent structures. A gating network routes tokens to the most relevant experts per input, activating only a subset of parameters—this is called "sparse activation." Enables much larger overall models without proportional increases in compute per inference, but requires the entire model in memory and introduces new challenges like load balancing and communication overhead. Specialization & Efficiency: Experts learn different data/knowledge types, boosting model specialization and throughput, though care is needed to avoid overfitting and underutilization of specialists. The Three-Phase Training Process 1. Unsupervised Pre-Training: Next-token prediction on massive datasets—builds a foundation model capturing general language patterns. 2. Supervised Fine Tuning (SFT): Training on labeled prompt-response pairs to teach the model how to perform specific tasks (e.g., question answering, summarization, code generation). Overfitting and "catastrophic forgetting" are risks if not carefully managed. 3. Reinforcement Learning from Human Feedback (RLHF): Collects human preference data by generating multiple responses to prompts and then having annotators rank them. Builds a reward model from these rankings, then uses a reinforcement learning algorithm (often PPO) to update the LLM to maximize alignment with human preferences (helpfulness, harmlessness, truthfulness).
Introduces complexity and risk of reward hacking (specification gaming), where the model may exploit the reward system in unanticipated ways. Advanced Reasoning Techniques Prompt Engineering: The art/science of crafting prompts that elicit better model responses, shown to dramatically affect model output quality. Chain of Thought (CoT) Prompting: Guides models to elaborate step-by-step reasoning before arriving at final answers—demonstrably improves results on complex tasks. Variants include zero-shot CoT ("let's think step by step"), few-shot CoT with worked examples, self-consistency (voting among multiple reasoning chains), and Tree of Thought (explores multiple reasoning branches in parallel). Automated Reasoning Optimization: Frontier models selectively apply these advanced reasoning techniques, balancing compute costs with gains in accuracy and transparency. Optimization for Training and Inference Tradeoffs: The optimal balance between model size, data, and compute is determined not only for pretraining but also for inference efficiency, as lifetime inference costs may exceed initial training costs. Current Trends: Efficient scaling, model specialization (MoE), careful fine-tuning, RLHF alignment, and automated reasoning techniques define state-of-the-art LLM development.
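Of the CoT variants above, self-consistency is the simplest to make concrete: sample several independent reasoning chains at a nonzero temperature, extract each chain's final answer, and return the majority vote. A minimal sketch, where `sample_fn` is a stub standing in for an actual LLM call (the canned answers are invented for illustration):

```python
from collections import Counter

def self_consistency(sample_fn, prompt, n=5):
    """Sample n independent chain-of-thought completions and return
    the majority-vote final answer.

    sample_fn(prompt) stands in for a temperature-sampled LLM call
    that returns only the extracted final answer of one chain."""
    answers = [sample_fn(prompt) for _ in range(n)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

# Stub model: most reasoning chains reach "42", one goes astray.
canned = iter(["42", "42", "17", "42", "42"])
answer = self_consistency(lambda p: next(canned), "Q: ...", n=5)
# answer == "42"
```

Voting works because independent chains tend to agree on correct answers but scatter across different wrong ones; Tree of Thought generalizes this by branching and evaluating partial chains rather than only voting on final answers.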
This week we talk about the Marshall Plan, standardization, and USB. We also discuss artificial intelligence, Anthropic, and protocols.

Recommended Book: Fuzz by Mary Roach

Transcript

In the wake of WWII, the US government implemented the European Recovery Program, more commonly known as the Marshall Plan, to help Western Europe recover from a conflict that had devastated the afflicted countries' populations, infrastructure, and economies. It kicked off in April of 1948, and though it was replaced by a successor program, the Mutual Security Act, just three years later in 1951—which was similar to the Marshall Plan, but which had a more militant, anti-communism bent, the idea being to keep the Soviets from expanding their influence across the continent and around the world—the general goal of both programs was similar: the US was in pretty good shape, post-war, and in fact by waiting to enter as long as it did, and by becoming the arsenal of the Allied side in the conflict, its economy was flourishing, its manufacturing base was all revved up and needed something to do with all the extra output capacity it had available, all the resources committed to producing hardware and food and so on, so by sharing these resources with allies, by basically just giving a bunch of money and assets and infrastructural necessities to these European governments, the US could get everybody on side, bulwarked against the Soviet Union's counterinfluence, at a moment in which these governments were otherwise prone to that influence; because they were suffering and weaker than usual, and thus, if the Soviets came in with the right offer, or with enough guns, they could conceivably grab a lot of support and even territory.
So it was considered to be in everyone's best interest, those who wanted to keep the Soviet Union from expanding, at least, to get Europe back on its feet, posthaste.

So this program, and its successor program, were highly influential during this period, and it's generally considered to be one of the better things the US government has done for the world, as while there were clear anti-Soviet incentives at play, it was also a relatively hands-off, large-scale give-away that favorably compared with the Soviets' more demanding and less generous version of the same.

One interesting side effect of the Marshall Plan is that because US manufacturers were sending so much stuff to these foreign ports, their machines and screws and lumber used to rebuild entire cities across Europe, the types of machines and screws and lumber, which were the standard models of each in the US, but many of which were foreign to Europe at the time, became the de facto standard in some of these European cities, as well.

Such standards aren't always the best of all possible options, sometimes they stick around long past their period of ideal utility, and they don't always stick, but the standards and protocols within an industry or technology do tend to shape that industry or technology's trajectory for decades into the future, as has been the case with many Marshall Plan-era US standards that rapidly spread around the world as a result of these giveaways.

And standards and protocols are what I'd like to talk about today.
In particular a new protocol that seems primed to shape the path today's AI tools are taking.

—

Today's artificial intelligence, or AI, which is an ill-defined type of software that generally refers to applications capable of doing vaguely human-like things, like producing text and images, but also somewhat superhuman things, like working with large data-sets and bringing meaning to them, is developing rapidly, becoming more potent and capable seemingly every day.

This period of AI development has been in the works for decades, and the technologies required to make the current batch of generative AI tools—the type that makes stuff based on libraries of training data, deriving patterns from that data and then coming up with new stuff based on the prompting of human users—were originally developed in the 1970s, but the transformer, which was a fresh approach to what's called deep learning architectures, was first proposed in 2017 by a researcher at Google, and that led to the development of the generative pre-trained transformer, or GPT, in 2018.

The average non-tech-world person probably started to hear about this generation of AI tools a few years later, maybe when the first transformer-based voice and image tools started popping up around the internet, mostly as novelties, or even more likely in late-2022 when OpenAI released the first version of ChatGPT, a generative AI system attached to a chatbot interface, which made these sorts of tools more accessible to the average person.

Since then, there's been a wave of investment and interest in AI tools, and we've reached a point where the seemingly obvious next-step is removing humans from the loop in more AI-related processes.

What that means in practice is that while today these tools require human prompting for most of what they do—you have to ask an AI for a specific image, then ask it to refine that image in order to customize it for your intended use-case, for instance—it's possible to have AI do more things on
their own, working from broader instructions to refine their creations themselves over multiple steps and longer periods of time.

So rather than chatting with an AI to come up with a marketing plan for your business, prompting it dozens or hundreds of times to refine the sales copy, the logo, the images for the website, the code for the website, and so on, you might tell an AI tool that you're building a business that does X and ask it to spin up all the assets that you need. From there, the AI might research what a new business in that industry requires, make all the assets you need for it, go back and tweak all those assets based on feedback from other AI tools, and then deploy those assets for you on web hosting services, social media accounts, and the like.

It's possible that at some point these tools could become so capable in this regard that humans won't need to be involved at all, even for the initial ideation. You could ask an AI what sorts of businesses make sense at the moment, and tell it to build you a dozen minimum viable products for those businesses, and then ask it to run those businesses for you—completely hands off, except for the expressing your wishes part, almost like you're working with a digital genie.

At the moment, components of that potential future are possible, but one of the main things standing in the way is that AI systems largely aren't agentic enough, which in this context means they need a lot of hand-holding for things that a human being would be capable of doing, but which they largely, with rare exceptions, aren't yet, and they often don't have the permission or ability to interact with other tools required to do that kind of building—and that includes things like the ability to create a business account on Shopify, but also the ability to access and handle money, which would be required to set up business and bank accounts, to receive money from customers, and so on.

This is changing at a rapid pace, and more companies are making
their offerings accessible to specific AI tools; Shopify has deployed its own cluster of internal AI systems, for instance, meant to manage various aspects of the businesses its customers run on its platform.

What's missing right now, though, is a unifying scaffolding that allows these services and assets and systems to all play nice with each other. And that's the issue the Model Context Protocol is meant to address.

The Model Context Protocol, or MCP, is a standard developed by AI company Anthropic, and it's open and designed to be universal. The company intends for it to be the mycelium that connects large language model-based AI to all sorts of data and tools and other systems, a bit like the Hypertext Transfer Protocol, or HTTP, allows data on the web to be used and shared and processed, universally, in a standardized way. Or, to dip back into the world of physical objects, it's a bit like how standardized shipping containers make global trade a lot more efficient because everyone's working with the same sized boxes, cargo vessels, and so on.

The Universal Serial Bus standard, usually shorthanded as USB, is also a good comparison here. USB was introduced to replace a bunch of earlier standards from the early days of personal computing, which varied by computer maker, and which made it difficult for those makers, plus those who developed accessories, to make their products accessible and inexpensive for end-users: you might buy a mouse that didn't work with your specific computer hardware, or you might have a cable that fit in the hole on your computer but didn't send the right amount of data or provide the power you needed. USB standards ensured that all devices had the same holes, and that a certain basic level of data and power transmission would be available. 
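To make that analogy a bit more concrete: under the hood, MCP is built on the JSON-RPC 2.0 message format, and any MCP client can send the same standardized methods—like tools/list to ask a server what it can do, and tools/call to invoke one of those capabilities—to any MCP server, regardless of who built it. Here's a minimal sketch in Python of what those messages look like; the method names follow the published MCP spec, but the "create_storefront" tool and its arguments are hypothetical, stand-ins for whatever a real server (say, one run by Shopify) would actually advertise.

```python
import json

def jsonrpc_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request, the message format MCP is built on."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Ask any MCP server which tools it exposes -- the same message works
# whether the server fronts a storefront platform, a database, or a
# file system. That sameness is the whole point of the standard.
list_tools = jsonrpc_request(1, "tools/list")

# Invoke one of those tools. The tool name and arguments here are
# hypothetical; a real server advertises its own names and schemas
# in its tools/list response.
call_tool = jsonrpc_request(2, "tools/call", {
    "name": "create_storefront",
    "arguments": {"business_name": "Example Co"},
})

print(json.dumps(list_tools))
print(json.dumps(call_tool))
```

The AI model never needs custom integration code per service; it just needs to speak this one message shape, and every MCP-compatible server on the other end knows how to answer.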
This standard has since fractured a bit: a period of many different types of USB led to a lot of confusion, and the deployment of the USB-C standard simplified things somewhat while still being a bit confounding at times, as the same shaped plug may carry different amounts of data and power. Despite all that, USB has still made things a lot easier for both consumers and producers of electronic goods, as there are fewer plugs and charger types to purchase, and thus less waste, confusion, and so on. We've moved on from the wild west era of computer hardware connectivity into something less varied and thus more predictable and interoperable.

The MCP, if it's successful, could go on to be something like the USB standard in that it would serve as a universal connector between various AI systems and all the things you might want those AI systems to access and use.

That might mean you want one of Anthropic's AI systems to build you a business, without you having to do much or anything at all, and it may be capable of doing so, asking you questions along the way if it requires more clarity or additional permissions—to open a bank account in your name, for instance—but otherwise acting more agentically, as intended, even to the point that it could run social media accounts, work with manufacturers of the goods you sell, and handle customer service inquiries on your behalf.

What makes this standard a standout compared to other options, though—and there are many other proposed options right now, as this space is still kind of a wild west—is that though it was developed by Anthropic, which originally made it to work with its Claude family of AI tools, it has since also been adopted by OpenAI, Google DeepMind, and several of the other largest players in the AI world.

That means, although there are other options here, all with their own pros and cons, as was the case with USB compared to other connection options back in the day, MCP is usable with many of the biggest 
and most spendy and powerful entities in the AI world right now, and that gives it a sort of credibility and gravity that the other standards don't currently enjoy.

This standard is also rapidly being adopted by companies like Block, Apollo, PayPal, Cloudflare, Asana, Plaid, and Sentry, among many, many others—including other connectors, like Zapier, which basically allows stuff to connect to other stuff, further broadening the capacity of AI tools that adopt this standard.

While this isn't a done deal, then, there's a good chance that MCP will be the first big connective, near-universal standard in this space, which in turn means many of the next-step moves and tools in this space will need to work with it in order to gain adoption and flourish. And that means, like the standards spread around the world by the Marshall Plan, it will go on to shape the look and feel and capabilities, including the limitations, of future AI tools and scaffoldings.

Show Notes
https://arstechnica.com/information-technology/2025/04/mcp-the-new-usb-c-for-ai-thats-bringing-fierce-rivals-together/
https://blog.cloudflare.com/remote-model-context-protocol-servers-mcp/
https://oldvcr.blogspot.com/2025/05/what-went-wrong-with-wireless-usb.html
https://arxiv.org/html/2504.16736v2
https://en.wikipedia.org/wiki/Model_Context_Protocol#cite_note-anthropic_mcp-1
https://github.com/modelcontextprotocol
https://www.anthropic.com/news/integrations
https://www.theverge.com/2024/11/25/24305774/anthropic-model-context-protocol-data-sources
https://beebom.com/model-context-protocol-mcp-explained/
https://techcrunch.com/2025/03/26/openai-adopts-rival-anthropics-standard-for-connecting-ai-models-to-data/
https://techcrunch.com/2025/04/09/google-says-itll-embrace-anthropics-standard-for-connecting-ai-models-to-data/
https://en.wikipedia.org/wiki/Generative_artificial_intelligence
https://en.wikipedia.org/wiki/USB
https://www.archives.gov/milestone-documents/marshall-plan
https://en.wikipedia.org/wiki/Marshall_Plan
https://www.congress.gov/crs-product/R45079
https://www.ebsco.com/research-starters/history/marshall-plan
https://www.history.com/articles/marshall-plan

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit letsknowthings.substack.com/subscribe