Podcasts about Python

  • 4,371 PODCASTS
  • 16,056 EPISODES
  • 45m AVG DURATION
  • 3 DAILY NEW EPISODES
  • Mar 18, 2026 LATEST



    Latest podcast episodes about Python

    Packet Pushers - Full Podcast Feed
    NAN116: From NSoT to Operational Automation: Fast Time-to-Value with Nautobot Cloud (Sponsored)

    Packet Pushers - Full Podcast Feed

    Mar 18, 2026 · 56:49


    Building a Network Source of Truth (NSoT) is only step one in an automation effort; turning it into operational automation is where outcomes happen. In this episode, sponsored by Network to Code, Eric Fetty, a self-taught network engineer who automated his way through his CCIE lab, shares how he's doing exactly that at...


    Value Driven Data Science
    Episode 98: Building Trust in AI Through Model Interpretability

    Value Driven Data Science

    Mar 18, 2026 · 24:54


    When your machine learning model makes a decision that affects someone's medical treatment, financial security, or legal rights, "the algorithm said so" isn't good enough. Stakeholders need to understand why models make the decisions they do, and in high-stakes environments, model interpretability becomes the difference between AI adoption and AI rejection.

    In this episode, Serg Masis joins Dr. Genevieve Hayes to share practical strategies for building interpretable machine learning models that earn stakeholder trust and accelerate AI adoption within your organisation.

    You'll learn:
    - The crucial distinction between interpretable and explainable models [07:06]
    - Why feature engineering matters more than algorithm choice [14:56]
    - How to use models to improve your data quality [17:59]
    - The underrated technique that builds stakeholder trust [21:20]

    Guest Bio: Serg Masis is the Principal AI Scientist at Syngenta, a leading agricultural company with a mission to improve global food security. He is also the author of Interpretable Machine Learning with Python and co-author of the upcoming DIY AI and Building Responsible AI with Python.

    Links: Serg's Website | Connect with Serg on LinkedIn | Connect with Genevieve on LinkedIn | Be among the first to hear about the release of each new podcast episode by signing up HERE
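One interpretability technique in the spirit this episode discusses can be sketched in plain Python: permutation importance, which scores a feature by how much model error rises when that feature's column is shuffled. This is a generic illustration, not a method from the episode; the model and data below are made up.

```python
import random

# Toy linear model: prediction depends heavily on feature 0, barely on feature 1.
def model(row):
    return 5.0 * row[0] + 0.1 * row[1]

def permutation_importance(model, X, y, col, seed=0):
    """Rise in mean squared error when one feature column is shuffled."""
    def mse(rows):
        return sum((model(r) - t) ** 2 for r, t in zip(rows, y)) / len(rows)

    baseline = mse(X)
    shuffled = [r[col] for r in X]
    random.Random(seed).shuffle(shuffled)
    X_perm = [r[:col] + [v] + r[col + 1:] for r, v in zip(X, shuffled)]
    return mse(X_perm) - baseline

X = [[float(i), float(i % 3)] for i in range(30)]
y = [model(r) for r in X]  # labels generated by the model itself

imp0 = permutation_importance(model, X, y, col=0)
imp1 = permutation_importance(model, X, y, col=1)
print(imp0 > imp1)  # True: shuffling the important feature hurts far more
```

The appeal for stakeholder conversations is that the score has a plain-language reading: "how much worse do predictions get if we scramble this input?"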

    MLOps.community
    Durable Execution and Modern Distributed Systems

    MLOps.community

    Mar 17, 2026 · 60:36


    Johann Schleier-Smith is the Technical Lead for AI at Temporal Technologies, working on reliable infrastructure for production AI systems and long-running agent workflows. Durable Execution and Modern Distributed Systems, Johann Schleier-Smith // MLOps Podcast #364

    Join the Community: https://go.mlops.community/YTJoinIn
    Get the newsletter: https://go.mlops.community/YTNewsletter
    MLOps Merch: https://shop.mlops.community/
    Big shoutout to @Temporalio for the support, and to @trychroma for hosting us in their recording studio.

    // Abstract
    A new paradigm is emerging for building applications that process large volumes of data, run for long periods of time, and interact with their environment. It's called Durable Execution and is replacing traditional data pipelines with a more flexible approach. Durable Execution makes regular code reliable and scalable. In the past, reliability and scalability have come from restricted programming models, like SQL or MapReduce, but with Durable Execution, this is no longer the case. We can now see data pipelines that include document processing workflows, deep research with LLMs, and other complex, LLM-driven agentic patterns expressed at scale with regular Python programs. In this session, we describe Durable Execution and explain how it fits in with agents and LLMs to enable a new class of machine learning applications.

    // Related Links
    https://t.mp/hello?utm_source=podcast&utm_medium=sponsorship&utm_campaign=podcast-2026-03-13-mlops&utm_content=mlops-johann
    https://t.mp/vibe?utm_source=podcast&utm_medium=sponsorship&utm_campaign=podcast-2026-03-13-mlops&utm_content=mlops-johann
    https://t.mp/career?utm_source=podcast&utm_medium=sponsorship&utm_campaign=podcast-2026-03-13-mlops&utm_content=mlops-johann

    Connect With Us:
    Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
    Join our Slack community: https://go.mlops.community/slack
    Follow us on X/Twitter: https://x.com/mlopscommunity or LinkedIn: https://go.mlops.community/linkedin
    Sign up for the next meetup: https://go.mlops.community/register
    MLOps Swag/Merch: https://shop.mlops.community/
    Connect with Demetrios on LinkedIn: /dpbrinkm
    Connect with Johann on LinkedIn: /jssmith/
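The core idea of Durable Execution can be sketched in a few lines of plain Python: journal each completed step's result so that re-running the same workflow after a crash resumes where it left off instead of redoing work. This is a toy illustration of the concept only; it is not Temporal's API or architecture, and names like DurableRun are invented.

```python
import json
import os
import tempfile

class DurableRun:
    """Toy durable-execution journal: completed step results are persisted,
    so re-running the workflow after a crash skips finished steps."""

    def __init__(self, journal_path):
        self.path = journal_path
        self.journal = {}
        if os.path.exists(journal_path):
            with open(journal_path) as f:
                self.journal = json.load(f)

    def step(self, name, fn):
        if name in self.journal:          # already done in a previous run
            return self.journal[name]
        result = fn()                     # do the work
        self.journal[name] = result
        with open(self.path, "w") as f:   # persist before moving on
            json.dump(self.journal, f)
        return result

calls = []
def workflow(run):
    a = run.step("fetch", lambda: (calls.append("fetch"), 21)[1])
    b = run.step("double", lambda: (calls.append("double"), a * 2)[1])
    return b

path = os.path.join(tempfile.mkdtemp(), "journal.json")
first = workflow(DurableRun(path))    # executes both steps
second = workflow(DurableRun(path))   # replays from the journal, no re-execution
print(first, second, calls)           # 42 42 ['fetch', 'double']
```

Real durable-execution engines add much more (queues, retries, timers, distributed workers), but the replay-from-a-journal mechanic is the heart of why "regular code" becomes restartable.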

    Book Club for Masochists: a Readers’ Advisory Podcast
    Episode 228 - Computers / Computer Science

    Book Club for Masochists: a Readers’ Advisory Podcast

    Mar 17, 2026 · 65:49


    It's episode 228 and time for us to talk about Computers and Computer Science books! We discuss technology, digital humanities, coding, and more! You can download the podcast directly, find it on Libsyn, or get it through Apple Podcasts or your favourite podcast delivery system. In this episode Anna Ferri | Meghan Whyte | Matthew Murray

    Python Bytes
    #473 A clean room rewrite?

    Python Bytes

    Mar 16, 2026 · 46:10 · Transcription Available


    Topics covered in this episode: chardet, AI, and licensing; refined-github; pgdog: PostgreSQL connection pooler, load balancer and database sharder; Agentic Engineering Patterns; Extras; Joke. Watch on YouTube.

    About the show
    Sponsored by us! Support our work through:
    - Our courses at Talk Python Training
    - The Complete pytest Course
    - Patreon Supporters

    Connect with the hosts
    - Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
    - Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
    - Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

    Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

    Michael #1: chardet, AI, and licensing
    - Thanks Ian Lessing. Wow, where to start? A bit of legal precedent research.
    - "Chardet dispute shows how AI will kill software licensing," argues Bruce Perens on The Register. Also see this GitHub issue.
    - Dan Blanchard, maintainer of a Python character encoding detection library called chardet, released a new version of the library under a new software license (LGPL → MIT).
    - Dan is allowed to make this change because v7 is a complete "clean room" rewrite using AI.
    - BTW, v7 is WAY better: the result is a 48x increase in detection speed for a project that lives in the hot loops of many projects. That will lead to noticeable performance increases for literally millions of users (the package gets ~130M downloads per month). It paves a path towards inclusion in the standard library (assuming they don't institute policies against using AI tools).
    - Thread-safe detect() and detect_all() with no measurable overhead; scales on free-threaded Python 3.13t+.
    - An individual claiming to be Mark Pilgrim, the original creator of the library, opened an issue in the project's GitHub repo arguing that Blanchard had no right to change the software license, citing the LGPL requirement that the license remain unchanged. A "complete rewrite" is irrelevant, since they had ample exposure to the originally licensed code (i.e. this is not a "clean room" implementation).
    - Blanchard disagreed, citing how versions 7.0.0 and 6.0.0 compare when subjected to JPlag, a library for detecting plagiarism.
    - Blanchard told The Register he had wanted to get chardet added to the Python standard library for more than a decade, since it's a core dependency of most Python projects.

    Brian #2: refined-github
    - Suggested by Matthias Schöttle.
    - A browser plugin that improves the GitHub experience. A sampling:
      - Adds a build/CI status icon next to the repo's name.
      - Adds a link back to the PR that ran the workflow.
      - Enables tab and shift-tab for indentation in comment fields.
      - Auto-resizes comment fields to fit their content, so they no longer show scroll bars.
      - Highlights the most useful comment in issues.
      - Changes the default sort order of issues/PRs to Recently updated.
    - But really, it's a huge list of improvements.

    Michael #3: pgdog: PostgreSQL connection pooler, load balancer and database sharder
    - PgDog is a proxy for scaling PostgreSQL. It supports connection pooling, load-balancing queries, and sharding entire databases. Written in Rust, PgDog is fast, secure, and can manage thousands of connections on commodity hardware.
    - Features:
      - PgDog is an application-layer load balancer for PostgreSQL.
      - Health Checks: PgDog maintains a real-time list of healthy hosts. When a database fails a health check, it's removed from the active rotation and queries are re-routed to other replicas.
      - Single Endpoint: PgDog can detect writes (e.g. INSERT, UPDATE, CREATE TABLE, etc.) and send them to the primary, leaving the replicas to serve reads.
      - Failover: PgDog monitors Postgres replication state and can automatically redirect writes to a different database if a replica is promoted.
      - Sharding: PgDog is able to manage databases with multiple shards.

    Brian #4: Agentic Engineering Patterns
    - By Simon Willison. So much great stuff here, especially Anti-patterns: things to avoid, plus three sections on testing: Red/green TDD, First run the test, and Agentic manual testing.

    Extras

    Brian:
    - uv python upgrade will upgrade all versions of Python installed with uv to the latest patch release (suggested by John Hagen).
    - Coding After Coders: The End of Computer Programming as We Know It, an NY Times article suggested by Christopher. Best quote: "Pushing code that fails pytest is unacceptable and embarrassing."

    Michael:
    - Talk Python Training users get a better account dashboard.
    - Package Managers Need to Cool Down
    - Will AI Kill Open Source, article + video
    - My "Always activate the venv" is now a zsh-plugin, sorta.

    Joke: Ergonomic keyboard. Also pretty good and related: Claude Code Mandated.

    Links: legal precedent research | Chardet dispute shows how AI will kill software licensing, argues Bruce Perens | this GitHub issue | citing JPlag | refined-github | Agentic Engineering Patterns | Anti-patterns: things to avoid | Red/green TDD | First run the test | Agentic manual testing | uv python upgrade | Coding After Coders: The End of Computer Programming as We Know It | Suggested by Christopher | a better account dashboard | Package Managers Need to Cool Down | Will AI Kill Open Source | Always activate the venv | now a zsh-plugin | Ergonomic keyboard | Claude Code Mandated | claude-mandated.png | blobs.pythonbytes.fm/keyboard-joke.jpeg?cache_id=a6026b
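As background, chardet's job (guessing the encoding of raw bytes) can be illustrated with a deliberately naive pure-Python sketch. The real library uses statistical models of byte frequencies; toy_detect below is an invented stand-in that only tries candidate decodings in order, not chardet's API or algorithm.

```python
# Toy character-encoding detector, loosely in the spirit of what chardet
# does: inspect raw bytes and report a likely encoding with a confidence.
def toy_detect(data: bytes) -> dict:
    # latin-1 decodes any byte string, so it is the low-confidence fallback.
    for encoding, confidence in (
        ("ascii", 1.0),
        ("utf-8", 0.99),
        ("utf-16", 0.7),
        ("latin-1", 0.3),
    ):
        try:
            data.decode(encoding)
            return {"encoding": encoding, "confidence": confidence}
        except UnicodeDecodeError:
            continue
    return {"encoding": None, "confidence": 0.0}

print(toy_detect("héllo".encode("utf-8")))  # {'encoding': 'utf-8', 'confidence': 0.99}
print(toy_detect(b"plain old text"))        # {'encoding': 'ascii', 'confidence': 1.0}
```

The thread-safety point in the show notes matters because a shared detector like this sits in hot loops of HTTP clients and parsers, where many threads may call it concurrently.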


    BarCode
    Robert Covington

    BarCode

    Mar 14, 2026 · 28:39


    A kid builds a website for Game Boy Advance tips. Then another one. Then a racing game with a contact form he didn't think twice about. Until someone hit it with a SQL injection. That moment cracked open a door he never planned to walk through. Years later, he's still walking: past classical computing, past the ones and zeros we all know, and into a space where a bit doesn't have to choose, one where particles hold their breath until someone measures them. This is the story of someone who cut their teeth building websites about gaming tips and a comedy sketch audio site that hit number one on G4TV. Now he's volunteering at DEF CON's Quantum Village, building browser-based quantum simulations, and trying to make the most complex frontier in computing feel a little less sci-fi.

    TIMESTAMPS
    00:00 Introduction to Robert Covington and His Journey
    00:51 From Web Projects to Security Awareness
    03:51 Diving into Quantum Computing
    06:22 Understanding Quantum Concepts
    08:31 Making Quantum Accessible with Qubitide.dev
    11:13 Quantum in Enterprise: Use Cases and Costs
    13:14 Involvement with Quantum Village and Community Initiatives
    15:17 Emerging Job Opportunities in Quantum Computing
    17:27 Learning Resources for Quantum Computing
    19:31 Understanding Q Day and Its Implications
    23:16 The Role of Quantum Random Number Generators
    25:38 Unique Bar Experiences and Quantum Themes

    SYMLINKS
    [Robert Covington – LinkedIn] - https://www.linkedin.com/in/robert-covington-2693a914b [A LinkedIn profile where Robert Covington shares posts about quantum computing, security conferences, and experiments with quantum simulations and QPU workflows.]
    [QubitIDE] - https://qubitide.dev [A quantum computing learning and experimentation platform created by Robert Covington. It aims to make quantum computing more accessible by allowing developers to explore simulations in the browser and eventually integrate quantum processing workflows.]
    [Amazon Braket] - https://aws.amazon.com/braket/ [A cloud-based quantum computing service from Amazon Web Services that allows developers and researchers to run quantum algorithms on simulators and real quantum hardware without needing to own physical quantum machines.]
    [PennyLane] - https://pennylane.ai/ [An open-source Python library developed by Xanadu for quantum computing and quantum machine learning. It enables users to build and run quantum programs on simulators or real quantum hardware.]
    [Qiskit] - https://qiskit.org/ [An open-source quantum computing software development kit created by IBM. It provides tools for building quantum circuits, running simulations, and executing programs on IBM quantum computers.]
    [D-Wave Systems] - https://www.dwavesys.com/ [A quantum computing company specializing in quantum annealing hardware and optimization systems. Their machines are used by research institutions and organizations exploring practical quantum applications.]
    [IBM Quantum Learning] - https://quantum.ibm.com/learn [IBM's official learning platform that provides tutorials, documentation, and educational resources for beginners and developers who want to learn quantum computing and use IBM quantum tools.]
    [Quantum Economic Development Consortium (QED-C)] - https://quantumconsortium.org/ [An industry consortium focused on strengthening the quantum technology ecosystem through collaboration, workforce development, and industry initiatives.]
    [Barcode Security Podcast] - https://barcodesecurity.com/ [The official website of the Barcode podcast hosted by Chris Glanden, featuring discussions on cybersecurity, emerging technologies, and interviews with experts in the field.]
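The "a bit doesn't have to choose" idea from the episode can be sketched as a two-amplitude state vector in plain Python. This is an illustrative toy with a hand-rolled Hadamard gate and measurement, unrelated to QubitIDE, Qiskit, or PennyLane.

```python
import math
import random

# A qubit as a pair of amplitudes: alpha|0> + beta|1>.
def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure(state, rng):
    # Born rule: outcome 0 with probability |alpha|^2.
    p0 = abs(state[0]) ** 2
    return 0 if rng.random() < p0 else 1

rng = random.Random(42)
plus = hadamard((1.0, 0.0))  # equal superposition of |0> and |1>
counts = [0, 0]
for _ in range(10_000):
    counts[measure(plus, rng)] += 1
print(counts)  # roughly [5000, 5000]: the bit is undecided until measured
```

Repeated measurement of the same prepared state gives a 50/50 split, which is exactly the "particle holding its breath until someone measures it" picture in classical code.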

    The Book Review
    Louise Erdrich on Her New Story Collection and the Mystery of Writing

    The Book Review

    Mar 13, 2026 · 34:17


    Since the publication of her first novel, “Love Medicine,” in 1984, Louise Erdrich has written fiction, nonfiction, poetry and children's books. Her work has earned multiple awards, including the National Book Award (“The Round House”) and the Pulitzer Prize (“The Night Watchman”). On this week's episode, Erdrich talks with Gilbert Cruz, the editor of The New York Times Book Review, about her new short story collection, “Python's Kiss.” She reflects on some of the formative experiences that shaped her as a writer, including watching “Planet of the Apes” and growing up in North Dakota, a state that housed hundreds of intercontinental ballistic missiles. She says that writing has been her “only real way of processing” her experiences and that her creative process is full of mystery. “There's really no way to control everything that happens in a piece of art. Some of these stories — I wasn't sure that I had written it,” she said, adding: “And yet, obviously, it was in my handwriting.” Plus, Erdrich recommends the one book that always puts her to sleep.

    Books discussed on this episode: “Animal Farm,” by George Orwell; “Brawler,” by Lauren Groff; “Winter in the Blood,” by James Welch; “The Pillow Book,” by Sei Shōnagon; “The Death of the Heart,” by Elizabeth Bowen; “Save Me, Stranger,” by Erika Krouse; “The Bluest Eye,” by Toni Morrison; “Austerlitz,” by W.G. Sebald; “The Rings of Saturn,” by W.G. Sebald; “Whistler,” by Ann Patchett; “Make the Golf Course a Public Sex Forest,” published by Maitland Systems Engineering

    Subscribe today at nytimes.com/podcasts or on Apple Podcasts and Spotify. You can also subscribe via your favorite podcast app here: https://www.nytimes.com/activate-access/audio?source=podcatcher. For more podcasts and narrated articles, download The New York Times app at nytimes.com/app. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.

    Talk Python To Me - Python conversations for passionate developers
    #540: Modern Python monorepo with uv and prek

    Talk Python To Me - Python conversations for passionate developers

    Mar 13, 2026 · 62:13 · Transcription Available


    Monorepos -- you've heard the talks, you've read the blog posts, maybe you've seen a few tantalizing glimpses into how Google or Meta organize their massive codebases. But it's often in the abstract and behind closed doors. What if you could crack open a real, production monorepo, one with over a million lines of Python and over 100 sub-packages, and actually see how it's built, step by step, using modern tools and standards? That's exactly what Apache Airflow gives us. On this episode, I sit down with Jarek Potiuk and Amogh Desai, two of Airflow's top contributors, to go inside one of the largest open-source Python monorepos in the world and learn how they manage it with uv, pyproject.toml, and the latest packaging standards, so you can apply those same patterns to your own projects.

    Episode sponsors: Agentic AI Course | Python in Production | Talk Python Courses

    Links from the show:
    - Amogh Desai: github.com
    - Jarek's GitHub: github.com
    - Definition of a monorepo: monorepo.tools
    - airflow: airflow.apache.org
    - Activity: github.com
    - OpenAI: airflowsummit.org
    - Part 1. Pains of big modular Python projects: medium.com
    - Part 2. Modern Python packaging standards and tools for monorepos: medium.com
    - Part 3. Monorepo on steroids - modular prek hooks: medium.com
    - Part 4. Shared “static” libraries in Airflow monorepo: medium.com
    - PEP 440, 517, 518, 566, 561, 660, 621, 685, 723, 735: peps.python.org
    - uv: docs.astral.sh
    - uv workspaces: blobs.talkpython.fm
    - prek: prek.j178.dev
    - Your presentation at FOSDEM26: fosdem.org
    - Tallyman: github.com
    - Watch this episode on YouTube: youtube.com
    - Episode #540 deep-dive: talkpython.fm/540
    - Episode transcripts: talkpython.fm
    - Theme Song: Developer Rap
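A uv-managed monorepo of the kind discussed here hangs off a root pyproject.toml with a workspace table. The sketch below follows uv's documented [tool.uv.workspace] and [tool.uv.sources] tables, but the project and package names are hypothetical, not Airflow's actual layout:

```toml
# Root pyproject.toml of a hypothetical monorepo
[project]
name = "acme-monorepo"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = ["acme-core"]

[tool.uv.workspace]
# Every matching directory with its own pyproject.toml becomes a member.
members = ["packages/*"]

[tool.uv.sources]
# Resolve the sibling package from the workspace, not PyPI.
acme-core = { workspace = true }
```

With that in place, a single `uv lock` produces one lockfile for every member, which is the property that makes hundred-package repos like Airflow's tractable.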

    HomeTech.fm Podcast
    Episode 566 - Peculiar Projects

    HomeTech.fm Podcast

    Mar 13, 2026


    On this week's show: iRobot comes back from bankruptcy with a tiny Roomba Mini that won't launch in the US, Homey adds Python support to open the app floodgates, Sonos launches a new portable Play speaker and a mic-free Era 100 SL while also admitting its Apple TV rival is dead, IKEA pushes a Dirigera update to fix Matter pairing, and Zooz gets Z-Wave Long Range sensors certified for Alarm.com. All of this, a pick of the week, project updates, and so much more.

    Sustain
    Episode 286: Jack Skinner of PyCon AU and Regional Confs

    Sustain

    Mar 13, 2026 · 40:05


    Guest: Jack Skinner
    Panelist: Richard Littauer

    Show Notes
    In this episode of Sustain, host Richard Littauer talks with Jack Skinner, PyCon AU organizer and freelance consultant/fractional CTO, to explore why regional conferences matter so much to the long-term health of open source communities. Their conversation looks at how events like PyCon AU do far more than host talks: they create local connections, nurture future leaders, support first-time speakers, and help sustain the broader Python ecosystem in ways that global conferences alone cannot. Drawing on Jack's experience as a conference organizer and community builder, the episode offers a behind-the-scenes look at the challenges of running volunteer-led events, from sponsorships and logistics to burnout, accessibility, and building a stronger pipeline of future organizers. Press download now to hear more!

    [00:01:49] Jack shares his background and how he got involved in Python and event organizing.
    [00:02:48] We hear about Jack's first PyCon AU experience.
    [00:04:14] Jack describes PyCon AU, who it serves, and how it's changed after COVID.
    [00:07:01] Why do regional conferences exist alongside PyCon US?
    [00:09:24] Jack talks about what makes Australia and New Zealand different as conference communities.
    [00:10:55] PyCon AU's attendance goals are discussed; Jack mentions his big goal is to bring attendance back to roughly 500-600 people, restoring pre-pandemic strength.
    [00:12:04] The discussion turns to conference structure: tracks, workshops, and sponsor interest, with Jack emphasizing sponsorship is not just about money.
    [00:14:54] Richard asks how organizers know whether conferences help people learn, connect, or build community. Jack explains how they're measuring community impact beyond “good vibes” and rebuilding local Python communities.
    [00:17:34] Jack explains PyCon AU is trying to build a future organizer pipeline by letting people observe how conference planning works, and introduces his proposed program, the “shadow team.”
    [00:19:09] Another project Jack is working on is documenting the behind-the-scenes work of organizing the conference through long-form writing.
    [00:20:38] Jack admits he feels imposter syndrome because he's not paid to write Python; his contribution is centered on the sociotechnical side.
    [00:23:20] PyCon AU's independence from government and institutions is discussed, and how the conference community is globally aware, even if locally focused.
    [00:27:05] Call for proposals details (deadline is March 29) and the in-person focus for this year's event are mentioned. Richard discusses the return of the academic track, and Jack details more info on poster sessions and workshop submissions.
    [00:32:08] Volunteering and buying tickets are explained, and why you should buy tickets early if you can.

    Quotes
    [00:32:20] “Volunteering is an awesome way to be involved in PyCon.”

    Spotlight
    [00:35:16] Richard's spotlight is two of his lecturers at the University of Edinburgh, Simon Kirby and Andrew Smith, who introduced him to Python.
    [00:35:55] Jack's spotlight is two companion projects: pretalx and pretix.

    Links
    SustainOSS | podcast@sustainoss.org | richard@sustainoss.org | SustainOSS Discourse | SustainOSS Mastodon | SustainOSS Bluesky | SustainOSS LinkedIn | Open Collective-SustainOSS (Contribute) | Richard Littauer Socials | Jack Skinner LinkedIn | Jack Skinner Website | PyCon AU, August 26-30, 2026, Brisbane | PyCon AU News & Updates | Sustain Podcast-Episode 75: Deb Nicholson on the OSI, the future of open source, and SeaGL | Sustain Podcast-Episode 137: A How-to Guide for Contributing to Open Source as an Employee, for Corporations (featuring Deb Nicholson as Host) | Guido van Rossum | Whale song shows language-like statistical structure | Simon Kirby (co-lead author) | pretalx (GitHub) | pretix (GitHub)

    Sponsor: CURIOSS

    Credits
    Produced by Richard Littauer. Edited by Paul M. Bahr at Peachtree Sound. Show notes by DeAnn Bahr, Peachtree Sound. Special Guest: Jack Skinner.

    Hacker Public Radio
    HPR4594: Hackerpublic Radio New Years Eve Show 2026 Episode 2

    Hacker Public Radio

    Mar 12, 2026


    This show has been flagged as Explicit by the host.

    ### Eps 02 Start ###
    Amazon Alexa: https://en.wikipedia.org/wiki/Amazon_Alexa | https://developer.amazon.com/en-US/alexa
    Home Assistant: https://en.wikipedia.org/wiki/Home_Assistant | https://www.home-assistant.io/
    Steelseries: Arctis 9X: https://steelseries.com/gaming-headsets/arctis-9x | https://headphonereview.com/over-ear/steelseries-arctis-9x-gaming-headset-review/
    Razer: Nari series: https://www.razer.com/pc/gaming-headsets-and-audio/nari-family | https://mysupport.razer.com/app/answers/detail/a_id/3636/~/razer-nari-ultimate-%7C-rz04-02670-support-%26-faqs
    Skullcandy: Crusher: https://www.skullcandy.com/collections/skullcandy-crusher-bass
    Audio-Technica ATH-M50x: https://www.audio-technica.com/en-us/ath-m50x
    HyperX: Cloud: https://hyperx.com/collections/gaming-headsets
    Plantronics Headset: https://plantronicsstore.com/
    Skullcandy: Hesh 3® Wireless: https://support.skullcandy.com/hc/en-us/articles/360008277374-Hesh-3-Wireless
    Centauri Carbon: https://www.elegoo.com/pages/elegoo-centauri-carbon | https://us.elegoo.com/products/centauri-carbon?srsltid=AfmBOooFOZ2ms1EDtl2TiIAajyqMjkLFTkPb0hMFzis2PZs8sbdgpfRn
    Ender-3: https://www.creality.com/products/ender-3-3d-printer | https://www.creality3dofficial.com/products/official-creality-ender-3-3d-printer
    Monoprice Maker Select V2: https://monopricesupport.kayako.com/article/278-maker-select-v2-manual-quick-start-guide-part-13860 | https://www.treatstock.com/machines/item/237-maker-select
    baha GmbH: https://www.baha.com/?culture=en-US&ts=1768855891246
    HP Elite Mini 600: https://www.hp.com/us-en/shop/mdp/desktops-and-workstations/hp-elite-mini-600-3074457345617692179--1
    HP 9000: https://en.wikipedia.org/wiki/HP_9000
    Full Circle Magazine: https://fullcirclemagazine.org/
    Mintcast: https://mintcast.org/
    Podcatcher: https://en.wikipedia.org/wiki/List_of_podcast_clients
    Podcast Addict: https://podcastaddict.com/
    AntennaPod: https://antennapod.org/
    Robinhood: Trading & Investing: https://robinhood.com/us/en/
    E-Trade (an investment brokerage and electronic trading platform): https://us.etrade.com/home
    Distrohoppers' Digest Podcast: https://distrohoppersdigest.org/
    Spotify: https://open.spotify.com/
    Software-defined radio: https://en.wikipedia.org/wiki/Software-defined_radio
    Filk music: https://en.wikipedia.org/wiki/Filk_music
    OggCamp 2026: https://www.oggcamp.org/
    Moss music: https://mordewis.bandcamp.com/
    Discord: https://discord.com/ | https://support.discord.com/hc/en-us/articles/360030853132-Server-Folders-101
    The Hitchhiker's Guide to the Galaxy: https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy | https://hitchhikers.fandom.com/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy
    Baofeng BF-50: https://www.baofengradio.com/products/5r-mini | https://www.youtube.com/watch?v=DWtbDtMyqMA
    Baofeng UV-5R Mini Dual-band Radio: https://www.radioddity.com/products/baofeng-uv-5r-mini
    Pi Day: https://en.wikipedia.org/wiki/Pi_Day
    GNU World Order: https://gnuworldorder.info/
    SDF Public Access UNIX System: https://sdf.org/
    NetBSD: https://www.netbsd.org/ | https://en.wikipedia.org/wiki/NetBSD
    Raspberry Pi 1 Model B+: https://www.raspberrypi.com/products/raspberry-pi-1-model-b-plus/
    OpenBSD: https://www.openbsd.org/ | https://en.wikipedia.org/wiki/OpenBSD
    FreeBSD: https://www.freebsd.org/ | https://en.wikipedia.org/wiki/FreeBSD
    Something about "ports"?: https://www.iana.org/assignments/service-names-port-numbers/service-names-port-numbers.xhtml | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-ports
    Chapter 4. Installing Applications: Packages and Ports: https://docs.freebsd.org/en/books/handbook/ports/ | https://freebsdfoundation.org/resource/installing-a-port-on-freebsd/
    OpenBSD Ports - Working with Ports [Handbook Index]: https://www.openbsd.org/faq/ports/ports.html
    SerenityOS: https://serenityos.org/ | https://en.wikipedia.org/wiki/SerenityOS
    Ladybird (a brand-new browser & web engine): https://ladybird.org/
    Unix: https://en.wikipedia.org/wiki/Unix | https://en.wikipedia.org/wiki/List_of_Unix_systems
    UNIX System V: https://en.wikipedia.org/wiki/UNIX_System_V
    UNIX V4 tape successfully recovered: https://www.tomshardware.com/software/linux/recovered-unix-v4-tape-quickly-yields-a-usable-operating-system-nostalgia-addicts-can-now-boot-up-unix-v4-in-a-browser-window | https://www.theregister.com/2025/12/23/unix_v4_tape_successfully_recovered/
    Newsboat (an RSS/Atom feed reader for the text console): https://newsboat.org/index.html
    Podboat: https://man.archlinux.org/man/extra/newsboat/podboat.1.en
    EPR (terminal/CLI EPUB reader written in Python 3.6): https://github.com/wustho/epr
    Ruby Programming Language: https://www.ruby-lang.org/en/ | https://en.wikipedia.org/wiki/Ruby_(programming_language) | https://rubyonrails.org/
    Crystal (a general-purpose, object-oriented programming language): https://crystal-lang.org/ | https://en.wikipedia.org/wiki/Crystal_(programming_language)
    Plasma desktop: https://kde.org/plasma-desktop/
    Vim (a highly configurable text editor): https://www.vim.org/ | https://en.wikipedia.org/wiki/Vim_(text_editor)
    Sublime Text: https://www.sublimetext.com/
    sed, a stream editor: https://www.gnu.org/software/sed/manual/sed.html | https://en.wikipedia.org/wiki/Sed
    English punctuation: https://en.wikipedia.org/wiki/English_punctuation | https://en.wikipedia.org/wiki/Punctuation
    List of typographical symbols and punctuation marks: https://en.wikipedia.org/wiki/List_of_typographical_symbols_and_punctuation_marks
    Pluma (text editor): https://en.wikipedia.org/wiki/Pluma_(text_editor) | https://github.com/mate-desktop/pluma
    Kate (text editor): https://kate-editor.org/ | https://en.wikipedia.org/wiki/Kate_(text_editor)
    Vimium: https://addons.mozilla.org/en-US/firefox/addon/vimium-ff/ | https://vimium.github.io/ | https://github.com/philc/vimium
    Zen Browser: https://zen-browser.app/ | https://en.wikipedia.org/wiki/Zen_Browser
    Vivaldi: https://vivaldi.com/download/
    Thunderbird: https://www.thunderbird.net/en-US/ | https://en.wikipedia.org/wiki/Thunderbird
    Uniden: https://uniden.com/
    Arduino: https://www.arduino.cc/
    Raspberry Pi: https://www.raspberrypi.com/
    Plex: https://www.plex.tv/
    Qualcomm to Acquire Arduino: https://www.qualcomm.com/news/releases/2025/10/qualcomm-to-acquire-arduino-accelerating-developers--access-to-i | https://www.arduino.cc/qualcomm | https://www.jeffgeerling.com/blog/2025/qualcomms-buying-arduino--what-it-means-makers/
    Perfboard Hackduino: https://www.instructables.com/Perfboard-Hackduino-Arduino-compatible-circuit/
    DIY Arduino: https://www.instructables.com/DIY-Arduino-UNO-How-to-Make-Your-Own-Arduino-Uno-B/ | https://docs.arduino.cc/hardware/make-your-uno-kit/ | https://www.electronicshub.org/make-your-own-arduino-board/
    Notacon: https://en.wikipedia.org/wiki/Notacon
    hak5 / bashbunny-payloads: https://github.com/hak5/bashbunny-payloads

    Provide feedback on this episode.

    Packet Pushers - Full Podcast Feed
    NAN115: Simplifying Network Automation with Wingpy

    Packet Pushers - Full Podcast Feed

    Play Episode Listen Later Mar 11, 2026 55:49


    Wingpy is an open-source tool that aims to make it easier to automate network tasks that use Cisco APIs. Today Eric is joined by returning guest Andreas Baekdahl, the creator of Wingpy. They discuss why Andreas started Wingpy, how it can help streamline your workflows, and how you can start using it right away. They... Read more »

    Packet Pushers - Fat Pipe
    NAN115: Simplifying Network Automation with Wingpy

    Packet Pushers - Fat Pipe

    Play Episode Listen Later Mar 11, 2026 55:49


    Wingpy is an open-source tool that aims to make it easier to automate network tasks that use Cisco APIs. Today Eric is joined by returning guest Andreas Baekdahl, the creator of Wingpy. They discuss why Andreas started Wingpy, how it can help streamline your workflows, and how you can start using it right away. They... Read more »

    Value Driven Data Science
    Episode 97: [Value Boost] Mathematical Modelling as a Gateway to ML Success

    Value Driven Data Science

    Play Episode Listen Later Mar 11, 2026 10:59


    Data scientists often jump straight to machine learning when tackling a new problem. But there's a foundational step that can dramatically increase your chances of project success and create more reliable business value. Mathematical modelling from first principles provides a low-cost scaffolding that can make your machine learning work more robust.In this Value Boost episode, Dr. Tim Varelmann joins Dr. Genevieve Hayes to explain how building models from physics principles, like mass and energy conservation, creates a modular foundation that reduces computational costs and makes your work easier to understand.In this episode, we explore:1. What mathematical modelling from first principles actually means [01:20]2. How to build modular models with different resolution levels [04:39]3. When to add machine learning to first principles models [08:18]4. The practical first step to incorporate this approach into your work [09:23]Guest BioDr Tim Varelmann is the founder of Bluebird Optimization and holds a PhD in Mathematical Optimisation. He is also the creator of Effortless Modeling in Python with GAMSPy, the world's first GAMSPy course.LinksBluebird Optimization WebsiteConnect with Genevieve on LinkedInBe among the first to hear about the release of each new podcast episode by signing up HERE
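    The "first principles" scaffolding described above can be sketched in a few lines of Python. The tank, flow rates, and numbers below are illustrative assumptions rather than anything from the episode; the point is that a conservation law (mass in minus mass out) gives a cheap, modular model that a machine-learned residual can later be bolted onto.

```python
# A toy first-principles model: mass conservation in a single tank,
# integrated with explicit Euler steps. dm/dt = q_in - q_out.
# All names and numbers are illustrative, not from the episode.

def simulate_tank(m0, q_in, q_out, dt=1.0, steps=10):
    """Return the mass trajectory under constant in/out flows."""
    m = m0
    history = [m]
    for _ in range(steps):
        m += (q_in - q_out) * dt  # conservation of mass
        history.append(m)
    return history

# With a net inflow of 0.5 kg/s for 10 s, the tank gains 5 kg.
masses = simulate_tank(m0=50.0, q_in=2.0, q_out=1.5, dt=1.0, steps=10)
```

Because each balance equation is its own small function, resolution can be raised module by module (for example, swapping the constant outflow for a level-dependent one), which is the modularity benefit discussed in the episode.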

    Austin Next
    The Western Canon in the Age of Vibe Coding | Carlos Carvalho, President, University of Austin

    Austin Next

    Play Episode Listen Later Mar 11, 2026 72:36


    American universities stopped optimizing for students a long time ago. The University of Austin was built as a direct counter to that failure. Carlos Carvalho, its president, brings a statistician's precision to the diagnosis, tracing the causal chain from dropped standards to credential collapse while building an institution with no tuition and no government money, staking its survival entirely on student outcomes 20 years out. The conversation moves from the financial architecture of a university, through a curriculum that starts with Plato before it touches Python, to the deeper question of what a university owes a civilization in the age of AI and whether Austin is the right place to answer it.Agenda0:00 Intro + Three Years In 9:42 The $300M Bet 15:42 The Conglomerate Problem 21:42 Western Canon First 28:42 What AI Changes About Teaching 34:42 The Bastrop Lab 41:42 UATX in the Austin Ecosystem 48:42 Atoms vs Bits in Texas 53:42 American Exceptionalism as Mission 59:42 The Hit Pieces 1:06:42 The UCSD Math Collapse 1:11:42 Grade Inflation as Decay 1:14:42 AI and the Soul ProblemGuest BioCarlos Carvalho is the President of the University of Austin. Prior to taking on this role, he spent 15 years as a professor at the University of Texas at Austin's McCombs School of Business, where he held the La Quinta Centennial Professorship and founded the Salem Center for Policy. A native of Brazil, Dr. Carvalho earned his doctorate in statistics from Duke University and has also taught at the University of Chicago Booth School of Business. His research focuses on Bayesian statistics in complex, high-dimensional problems with applications ranging from economics to genetics to public policy. 
At UATX, he is leading a bold effort to build a new university that stands for American principles and academic excellence.Guest LinksUniversity of Austin: Website, Substack, Instagram, X, LinkedIn -------------------Austin Next Links: Website, X/Twitter, YouTube, LinkedInEcosystem Metacognition Substack

    PR 360
    Exciting Developments in Space Exploration with Rodrigo Schmitt

    PR 360

    Play Episode Listen Later Mar 11, 2026 26:28


    Rodrigo Schmitt holds a PhD in Space Systems Engineering from Purdue and leads Stellerian's commercialization strategy as Chief Commercial Officer. He specializes in translating complex mission needs into clear offerings, building partnerships across defense and commercial space, and driving customer discovery, pricing, and go-to-market strategy for on-orbit operations. He's also the co-founder of RocketPy, a widely used Python-based rocketry simulation library. In this episode, Rodrigo shares the latest on space exploration, rocket simulation, and his unique experiences from backpacking the Grand Canyon to analog Mars training missions.
Key Takeaways:
- Developments in space exploration
- How RocketPy helps engineers simulate launches
- How Rodrigo balances entrepreneurship and science
Episode Timeline:
00:00 Introduction to Rodrigo Schmitt
1:50 Tod is learning about astrophysics
2:45 Backpacking the Grand Canyon
5:55 Exciting developments in space exploration
7:00 Legal and ethical aspects of space resources
8:00 AI's role in space
11:20 Star trackers and space navigation using stars
13:07 RocketPy: Rocket simulation and safety in space competitions
15:35 RocketPy's future
18:40 Mars Mission desert research stations
21:40 How Rodrigo finds balance between entrepreneurship and science
This episode's guest:
• Follow Rodrigo Schmitt on LinkedIn
• RocketPy's website and Instagram
• Stellerian on LinkedIn
• Purdue's Mars analog research facility
Subscribe and leave a 5-star review: https://pod.link/1496390646
Contact Us!
• Join the conversation by leaving a comment!
• Follow us on Facebook, Twitter, Instagram, and LinkedIn!
Thanks for listening! Hosted on Acast. See acast.com/privacy for more information.
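    For listeners curious what "simulating a launch" involves, here is a deliberately simplified vertical-flight sketch in plain Python. This is not RocketPy's API, and every parameter is made up for illustration (constant mass, quadratic drag, no wind); real tools like RocketPy model motors, atmosphere, and full trajectories in far more detail.

```python
# Toy 1-D rocket ascent: Euler integration of thrust, gravity, and
# quadratic drag. NOT RocketPy's API; all values are illustrative.

def simulate_ascent(mass=10.0, thrust=400.0, burn_time=3.0,
                    drag_coeff=0.02, dt=0.01, g=9.81):
    """Return the apogee (metres) of a simple vertical flight."""
    v = h = t = apogee = 0.0
    while h >= 0.0:
        f_thrust = thrust if t < burn_time else 0.0
        f_drag = -drag_coeff * v * abs(v)  # always opposes motion
        a = (f_thrust + f_drag) / mass - g
        v += a * dt
        h += v * dt
        t += dt
        apogee = max(apogee, h)
        if t > 120.0:  # safety cut-off
            break
    return apogee

apogee = simulate_ascent()  # a few hundred metres with these numbers
```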

    In-Ear Insights from Trust Insights
    In-Ear Insights: Measuring and Improving AI Proficiency

    In-Ear Insights from Trust Insights

    Play Episode Listen Later Mar 11, 2026


    In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss how to measure AI proficiency impact beyond speed. You’ll discover why quality matters more than volume when AI accelerates work. You’ll learn a six‑level framework that lets you map your AI skill growth. You’ll see practical steps to protect your role in fast‑moving companies. 00:00 – Introduction 02:45 – The speed‑only trap 05:30 – Introducing the six‑level AI proficiency model 09:10 – Quality vs quantity in AI output 12:40 – Managing AI access and fairness 16:20 – Actionable steps for managers and individuals 20:00 – Call to action Watch the full episode to level up your AI leadership. Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-ai-proficiency-measuring-ai-performance.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn: In this week’s In Ear Insights, let’s talk about AI and the way the things that we are measuring in business to measure AIs, the productivity, the benefits that you’re getting out of it. One of my favorite apps, Katie, is called Blind. This is an anonymous confessions app for the business world where people who work at companies—mostly in big business and big tech—share anonymous confessions. They have to say what company they’re with, but that’s it. There were three posts that really caught my eye over the weekend.
The first was from a person who works at Capital One bank who said, “Hi, I’m a junior software engineer.” Three years into my career, my co‑workers are pumping out so many pull requests with Claude Code and blitzing through jobs that used to take three to five days in less than an hour. I feel like every day at the office is a race to see who can generate more pull requests and complete them than anyone else. The second one was from JP Morgan Chase saying, “I just downloaded Claude Code and wtf. I don’t know what to think. Either we are cooked or saved.” The third was from an engineer at Tesla who said, “I joined recently as a contractor and don’t have access to Claude. I’m slower than the others on my team and it stresses me out.” So my question to you is this, Katie: Obviously people are using generative AI to move very fast. However, I don’t know if fast is the metric that we should be looking at here, particularly since a lot of people who manage coders don’t necessarily manage them well. They don’t. For example, very famously, Elon Musk, when he took over Twitter, fired people who didn’t write enough code. He measured people’s productivity solely on lines of code written. Anyone who’s actually written code for a living knows you want less code written rather than more because there’s a certain amount of elegance to writing less code. So my question to you is, as we talk about AI proficiency—sort of AI proficiency week here at Trust Insights—what would you tell people who are managing people using AI about measuring their proficiency and measuring the results that they’re getting? Katie Robbert: So first, let me answer your question. No, I do not frequent—was it Blind? Yeah. Anyone who knows me knows that I am honest and direct to a fault. So no, that would annoy me more than anything—just say it to my face. But that aside, I understand why apps like that exist. Not every company builds a culture where an open‑door policy is actually true.
The policy is: the door is open only if you have positive things to share; the door is closed if you have complaints. I sympathize with people who feel the need to turn to those kinds of apps to express concern, frustration, fear. It seems, Chris, that a lot of the fear over the past couple of years is: “Will AI take my job?” In those environments, leadership decisions about process and output are really pushing for AI to take the job. What I’m not seeing is what the success metrics are. If the metric is faster and more, then you’re missing the third most important one—quality. We don’t know what kind of quality is being produced. Given those short snippets of context, we can assume it’s probably mediocre. It’s probably slightly above the bar, but nothing outstanding—enough to get by, enough to keep the lights on. For some larger companies, that’s fine because you can bury mediocre work in the politics and red tape of an enterprise‑sized organization. No one really expects much more, which is a little sad. So what I would say to managers is, number one, if you’re not clear on what you’re being measured on, or if your success metric is faster and more, head for the hills—run. That is not good. I mean it in all sincerity; that is not going to serve you in the long run because those metrics are not sustainable. Christopher S. Penn: And yet that’s what—particularly at a bigger company—where I can definitely, obviously at a company like Trust Insights, we’re four people. Outcomes are something we all measure because we have a direct line to outcomes. If we sell more courses, book more keynote speeches, get more retainer clients, we all have a hand in that and can see very clearly the business outcome. At a company like JP Morgan Chase, Bank of America, or Capital One, there are hundreds of thousands of employees. Your line of sight to any kind of business outcome is probably five layers of management removed. The front line is way over there—tellers, for example. 
You write the software that writes the software that manages the system the tellers use. So you don’t have clear outcomes from a business‑level perspective. Because I used to work at places like AT&T where you are just a cog in the machine, your outcomes very often are either faster or more because no one knows what else to measure. Katie Robbert: In companies like that, those outcomes are—quote, unquote—good enough because of the nature of what you produce. Consumers have become so dependent on your company that we often talk about the really crappy customer service at cable and Internet providers. There are only so many of them, and they’re all the same. We have become reliant on that technology and have no choice but to put up with crappy service from the big providers. The same goes for the financial industry. We don’t have a choice other than to rely on these crappy companies because we aren’t equipped to stand up our own financial institutions and change the rules. It’s a big, old industry, and that’s why they operate the way they do. It’s disheartening. When it comes down to humans, you have to make your own personal choices. Are you okay contributing to the mediocrity of the company and never really advancing? Chris, what you’ve been saying—what is the art of the possible? They don’t know, but they also don’t care. They’re not looking to disrupt the industry. No other companies are starting up to disrupt them because they’re so massive; they’re okay with the status quo, changing at a glacial pace, if at all. It’s not a great story to tell. You might have a consistent paycheck, but you might not have a lot of passion for the work you do. It might just be clock in at nine, clock out at five, with two 15‑minute breaks and a 30‑minute lunch—and that’s fine for a lot of people. That works for survival. Outside of that work environment is where you find joy, passion, and the things you’re really interested in. 
All to say, the advice I would give to managers is: how much are you willing to put up with? Those industries aren’t going to change. Christopher S. Penn: So in the context of AI proficiency, what do you advise them to focus on? Knowing that, to your point, these places are so calcified, faster is one of the only benchmarks that matter, alongside constantly shrinking budgets. Cheaper is built in because you have to do 5 % less every year. How do you suggest a manager or employee who feels the fastest typist wins the day and gets the promotion—even if the quality is zero—handle this? The Tesla engineer example is interesting: they don’t have access to generative AI, co‑workers do, they’re much faster, and the contractor fears being fired. How do we resolve this for team members, knowing that these companies are so calcified that even if a department takes a stand on quality, the other twenty departments competing for budget will say, “Great, you focus on quality; we’ll take your budget because we’ll produce ten times more next year.” Even quality sucks. Katie Robbert: The Tesla example is an outlier. We don’t have context for why that person doesn’t have access to generative AI—maybe they’re brand new. Contractors don’t get access to paid tools, so that explains it. When we talk about levels of AI proficiency, generic training doesn’t work; it doesn’t stick. Companies and individuals need to assess their AI proficiency. We typically do this on a six‑point scale, from Basic to Advanced. Within each level are skill sets: Level 1—editing, correcting grammar, asking it to write code. Level 2—writing code and reading code. Level 3—building QA plans. Level 4—providing business or product requirements, agile cues, or building a project plan. It’s like a career path: today I’m a junior analyst, tomorrow I want to be a senior analyst. The same applies to AI proficiency. 
My recommendation for managers and individuals stuck in those situations—or anyone looking to level up their AI proficiency—is to look at what’s next, what you don’t know. In the case of Tesla or JP Morgan, they will only produce a limited variety of things. In banking, look at the use cases and how you’re using AI. If you’re building code, how do you automate while keeping a human in the loop? Human‑in‑the‑loop means literal human intervention; you’re not just setting it and forgetting it like a rotisserie chicken. You must ensure a human is paying attention. Perhaps your KPIs aren’t quality of output, but if you start delivering incorrect work, customers complain, and the company loses money, the quality of your output will suddenly matter. It doesn’t matter how fast you’re creating it. For the Tesla contractor who lacks internal AI tools, they can get access to their own tools and build their skill set: acknowledge they’re not as fast as full‑time employees, determine what they need to do to match or outpace them, and work on it in their own time if they care. In that instance, the person is worried about job security, so it’s probably in their best interest to act. Christopher S. Penn: I like how you analogize the six levels to basically the three levels of management. The first two levels are individual contributors; the next two are middle management; the final two are leadership—going from typing the thing to delegating it entirely to someone else. That’s a great analogy. I think after this episode I’m going to revise that chart to help people wrap their brains around it. What does the level of AI performance efficiency mean? It means you go from individual contributor to leader, eventually leading machines—not necessarily humans. The Tesla example worries me because the company is essentially asking contractors to bring their own AI tools—a data‑privacy and security nightmare. 
Still, when I think about our clients who engage us for AI readiness assessments, we see a hierarchy of people with different proficiency levels outpacing each other. Is it fair to say that people with more proficiency—or who invest more in themselves—will blow past peers who are not? Do those peers need to worry about career viability when a peer becomes a mythical 10× engineer or marketer? Katie Robbert: The short answer is yes, but that’s true in any career path. Unless you’re in a company that promotes someone based on appearance rather than ability, which is another conversation, it’s absolutely true. Levels of AI proficiency run in parallel with organizational maturity. AI proficiency can’t stand alone without a certain amount of maturity within the organization. We often talk about foundations—the five Ps: documented processes, platforms, good governance, and privacy. Those have to exist for someone to be set up for success and move through AI proficiency levels. Otherwise, they’re becoming proficient against creative garbage. That won’t translate to better career opportunities because, boiled down, it’s garbage in, garbage out—you become proficient at moving garbage around, and nobody wants to hire that. Christopher S. Penn: An essay from last year discussed the AI reckoning in larger companies. It said AI is doing what decades of management consulting couldn’t—showcasing as you apply AI to processes. Entire levels of management are unnecessary, doing nothing but holding meetings and sending emails. The essay posited that mid‑level managers may realize they only push paper from point A to point B. In those cases, what should people in those positions think about for their own AI proficiency, knowing that improving it will reveal that they add little value? Katie Robbert: As someone who’s spent most of her career managing, I’ve often had to defend my role. 
Once, an agency considered dissolving my position because they thought I didn’t bring anything to the table—obviously not true. The team that grew from three people to a $3 million profit center also knows that. Managers need to think about delegation: not just handing off tasks, but ensuring the right people are in the right seats. Coaching is a big part of the job—bringing people up through their proficiency levels. If I’m a middle manager using the individual‑contributor, manager, leadership matrix, how do I get out of that vulnerable middle spot? Maybe I need to create more workflows, find efficiencies, save the budget, identify level‑one champions, and build them up. Those are the things someone in that middle vulnerable section should consider, because they are vulnerable. Many companies have managers who don’t do squat. I’ve worked alongside those managers; it’s maddening. One thing that will evolve with the manager role is that you can no longer be just a manager. You can’t just manage things; you have to bring some level of individual contribution and thought leadership to the role. It’s no longer enough to just manage—if that makes sense. Christopher S. Penn: It makes sense. Over the weekend I was working on something for myself: as technology evolves and I delegate more to it, the guardrails for quality have to get stricter. I revised the rules I use with my Python coding agents—new, enhanced, advanced rules with more guidelines and descriptions about what the agent is and is not allowed to do. This morning my kickoff process broke, so I told the agent to fix it according to the new rules. I realized the previous application sucked, and I fixed it. Now it’s much happier. I think building quality guardrails will differentiate managers who take on AI management—not just people management. Yes, AI can be faster, but there’s no guarantee it’s better. If I’m a manager who gets faster and better results than peers who just hope it works, I keep my job. 
What do you think about that angle? Katie Robbert: It makes sense. Take the middle‑manager example: the VP says, “Client needs these five things.” The hierarchy follows—manager, then individual contributors. The middle person can step up, create a process, develop a proof‑of‑concept example based on the VP’s input, delegate with quality assurance, and cut down iterations. That saves time, saves budget, gets results faster, and reduces frustration because expectations are clear. Christopher S. Penn: The axiom we talk about when discussing AI optimization is bigger, better, faster, cheaper. Faster obviously saves time and money. We don’t often talk about bigger and better—doing things that add value that wasn’t there before. The value you create should be higher quality. To wrap up AI proficiency, we have three divisions, six levels, and a focus: if you’re worried about someone else being faster, be as fast and be better quality. Cutting corners for speed will catch up to you. If you have thoughts about how people are using—or misusing—AI in terms of proficiency, pop by our free Slack group at trustinsights.ai/analysts‑for‑marketers, where over 4,500 marketers ask and answer each other’s questions daily. You can also watch or listen to the show on any podcast platform or the Trust Insights AI TI Podcast. Thanks for tuning in. We’ll talk to you on the next one. Katie Robbert: Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data‑driven approach. Trust Insights specializes in helping businesses leverage data, AI, and machine learning to drive measurable marketing ROI.
Services span from comprehensive data strategies and deep‑dive marketing analysis to building predictive models with tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology, MarTech selection and implementation, and high‑level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL‑E, Midjourney, Stable Diffusion, and Meta Llama. The firm provides fractional team members such as a CMO or data scientists to augment existing teams. Beyond client work, Trust Insights contributes to the marketing community through the Trust Insights blog, the In Ear Insights podcast, the Inbox Insights newsletter, livestream webinars, and keynote speaking. What distinguishes Trust Insights is a focus on delivering actionable insights—not just raw data. The firm leverages cutting‑edge generative AI techniques like large language models and diffusion models while explaining complex concepts clearly through compelling narratives and visualizations. This commitment to clarity and accessibility extends to educational resources that empower marketers to become more data‑driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a midsize business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever‑evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI.
They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.

    Software Engineering Daily
    Reinventing the Python Notebook with Akshay Agrawal

    Software Engineering Daily

    Play Episode Listen Later Mar 10, 2026 46:04


    Interactive notebooks were popularized by the Jupyter project and have since become a core tool for data science, research, and data exploration. However, traditional, imperative notebooks often break down as projects grow more complex. Hidden state, non-reproducible execution, poor version control ergonomics, and difficulty reusing notebook code in real software systems make it hard to The post Reinventing the Python Notebook with Akshay Agrawal appeared first on Software Engineering Daily.
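    The hidden-state problem, and the reactive fix that notebook reimaginings such as marimo adopt, can be illustrated with a toy dependency graph. This is a sketch of the idea only, not marimo's actual implementation: when a value changes, every cell that reads it is re-run automatically, so out-of-order execution cannot leave stale results behind.

```python
# Toy reactive-execution model: cells declare their dependencies, and
# changing a value re-runs every dependent cell, transitively.
# Illustrative only; not marimo's real internals.

class ReactiveGraph:
    def __init__(self):
        self.values = {}
        self.cells = {}  # name -> (function, dependency names)

    def define(self, name, fn, deps=()):
        """Register a 'cell' and compute it immediately."""
        self.cells[name] = (fn, tuple(deps))
        self._rerun(name)

    def set_value(self, name, value):
        """Change an input value and propagate to dependents."""
        self.values[name] = value
        self._rerun_dependents(name)

    def _rerun(self, name):
        fn, deps = self.cells[name]
        self.values[name] = fn(*(self.values[d] for d in deps))
        self._rerun_dependents(name)

    def _rerun_dependents(self, name):
        for cell, (_, deps) in self.cells.items():
            if name in deps:
                self._rerun(cell)

g = ReactiveGraph()
g.set_value("x", 2)
g.define("doubled", lambda x: x * 2, deps=("x",))
g.define("plus_one", lambda d: d + 1, deps=("doubled",))
g.set_value("x", 10)  # both dependent cells update automatically
```

In an imperative notebook, re-running only the `x` cell would leave `doubled` and `plus_one` stale; the reactive model makes that state impossible.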

    Herpetological Highlights
    243 Pythons are Seed Pipelines

    Herpetological Highlights

    Play Episode Listen Later Mar 10, 2026 25:10


    Invasive species are well known to damage ecosystems by directly eating other animals and disrupting the food chain. But their impacts can go much deeper, as a new study about seed dispersal by pythons and tegus in the Everglades has shown - they may be contributing to the destruction of rare and unusual habitats. Become a Patreon: https://www.patreon.com/herphighlights Merch: https://www.redbubble.com/people/herphighlights/shop Full reference list available here: http://www.herphighlights.podbean.com Main Paper References: Figueroa A, Davis KR, Harman MEA, Bartoszek IA, Easterling IC, Yackel Adams AA, Romagosa CM. 2025. Double agents: invasive Burmese pythons (Python bivittatus) and Argentine black and white tegus (Salvator merianae) as potential seed dispersers in South Florida. Journal of Zoology:jzo.70082. DOI: 10.1111/jzo.70082. Other Mentioned Papers/Studies: Harman MEA, Fuller NR, Baiser B, Blackburn JK, Li X, Currylow AF, Yackel Adams AA, Falk BG, Romagosa CM. 2025. Dietary breadth and ecological plasticity facilitate invasion potential in a large omnivorous lizard. Frontiers in Amphibian and Reptile Science 3:1635085. DOI: 10.3389/famrs.2025.1635085. Sapkota, A., Karki, A., Sapkota, K. R., & Baral, R. (2025). First record of death-feigning behavior in common wolf snake Lycodon aulicus (Linnaeus, 1758) from Nepal. Nepalese Journal of Zoology, 9(2), 85-88. Other Links/Mentions: AmphibiaWeb 2008 Acris gryllus: Southern Cricket Frog University of California, Berkeley, CA, USA. Accessed Feb 24, 2026. Acris gryllus from James W. Beck: https://amphibiaweb.org/cgi/amphib_query?special=call&genus=Acris&species=gryllus  Editing and Music: Intro/outro – Treehouse by Ed Nelson Species Bi-week theme – Michael Timothy Other Music – The Passion HiFi, https://www.thepassionhifi.com

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0
    NVIDIA's AI Engineers: Agent Inference at Planetary Scale and "Speed of Light" — Nader Khalil (Brev), Kyle Kranen (Dynamo)

    Latent Space: The AI Engineer Podcast — CodeGen, Agents, Computer Vision, Data Science, AI UX and all things Software 3.0

    Play Episode Listen Later Mar 10, 2026 83:37


    Join Kyle, Nader, Vibhu, and swyx live at NVIDIA GTC next week! Now that AIE Europe tix are ~sold out, our attention turns to Miami and World's Fair! The definitive AI accelerator chip company has more than 10xed this AI Summer, and is now a $4.4 trillion megacorp… that is somehow still moving like a startup. We are blessed to have a unique relationship with our first ever NVIDIA guests: Kyle Kranen, who gave a great inference keynote at the first World's Fair and is one of the leading architects of NVIDIA Dynamo (a datacenter-scale inference framework supporting SGLang, TRT-LLM, and vLLM), and Nader Khalil, a friend of swyx from our days in Celo in The Arena, who has been drawing developers at GTC since before they were even a glimmer in the eye of NVIDIA. Nader discusses how NVIDIA Brev has drastically reduced the barriers to entry for developers to get a top-of-the-line GPU up and running, and Kyle explains NVIDIA Dynamo as a data-center-scale inference engine that optimizes serving by scaling out, leveraging techniques like prefill/decode disaggregation, scheduling, and Kubernetes-based orchestration, framed around cost, latency, and quality tradeoffs.
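The cost/latency framing behind prefill/decode disaggregation can be sketched numerically. The following toy latency model is an illustration only (the function names and all throughput figures are assumptions for this sketch, not Dynamo's API): time-to-first-token is set by prefill throughput over the whole prompt, while every subsequent token pays the per-token decode rate, which is why the two phases can be scaled on separate pools.

```python
def ttft_ms(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    # Time-to-first-token: the full prompt must be prefilled before any
    # output token can be emitted, so TTFT scales with prompt length.
    return 1000.0 * prompt_tokens / prefill_tok_per_s

def tpot_ms(decode_tok_per_s: float) -> float:
    # Time-per-output-token during decode, which is typically
    # memory-bandwidth-bound rather than compute-bound.
    return 1000.0 / decode_tok_per_s

def request_latency_ms(prompt_tokens: int, new_tokens: int,
                       prefill_tok_per_s: float,
                       decode_tok_per_s: float) -> float:
    # End-to-end latency for one request. A disaggregated server can grow
    # the prefill pool (to cut TTFT) or the decode pool (to cut per-token
    # latency) independently, trading cost against each latency target.
    return (ttft_ms(prompt_tokens, prefill_tok_per_s)
            + new_tokens * tpot_ms(decode_tok_per_s))

# Hypothetical numbers: 4,096-token prompt, 256 new tokens,
# 16,384 tok/s prefill, 64 tok/s decode.
print(request_latency_ms(4096, 256, 16384, 64))  # → 4250.0 ms
```

Under these made-up rates, TTFT is 250 ms and decode dominates at 4,000 ms, which is the kind of imbalance that motivates scaling the two phases separately.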
    We also dive into Jensen's “SOL” (Speed of Light) first-principles urgency concept, long-context limits and model/hardware co-design, internal model APIs (https://build.nvidia.com), and upcoming Dynamo and agent sessions at GTC. Full video pod on YouTube.

    Timestamps:
    00:00 Agent Security Basics
    00:39 Podcast Welcome and Guests
    07:19 Acquisition and DevEx Shift
    13:48 SOL Culture and Dynamo Setup
    27:38 Why Scale Out Wins
    29:02 Scale Up Limits Explained
    30:24 From Laptop to Multi Node
    33:07 Cost Quality Latency Tradeoffs
    38:42 Disaggregation Prefill vs Decode
    41:05 Kubernetes Scaling with Grove
    43:20 Context Length and Co Design
    57:34 Security Meets Agents
    58:01 Agent Permissions Model
    59:10 Build Nvidia Inference Gateway
    01:01:52 Hackathons And Autonomy Dreams
    01:10:26 Local GPUs And Scaling Inference
    01:15:31 Long Running Agents And SF Reflections

    Transcript

    Agent Security Basics
    Nader: Agents can do three things. They can access your files, they can access the internet, and now they can write custom code and execute it. You literally only let an agent do two of those three things. If you can access your files and you can write custom code, you don't want internet access, because that's when you see full vulnerability, right? If you have access to the internet and your file system, you should know the full scope of what that agent's capable of doing. Otherwise, now we can get injected or something can happen. And so that's a lot of what we've been thinking about: you know, how do we both enable this, because it's clearly the future, but then also, what are these enforcement points that we can start to protect? swyx: All right.

    Podcast Welcome and Guests
    swyx: Welcome to the Latent Space podcast in the Chroma studio. Welcome to all the guests here. Uh, we are back with our guest host Vibhu. Welcome. Good to have you back. And our friends, uh, Nader and Kyle from Nvidia. Welcome. Kyle: Yeah, thanks for having us. swyx: Yeah, thank you.
    Actually, I don't even know your titles. Uh, I know you're like architect something of Dynamo. Kyle: Yeah, I, I'm one of the engineering leaders [00:01:00] and an architect of Dynamo. swyx: And you're director of something and developers, developer tech. Nader: Yeah. swyx: You're the developers, developers, developers guy at Nvidia. Nader: Open source, agent marketing, Brev, swyx: and like Nader: devrel tools and stuff. swyx: Yeah. Been Nader: the focus. swyx: And we're, we're kind of recording this ahead of Nvidia GTC, which is coming to town, uh, again, uh, or taking over town, uh, which, uh, which we'll all be at. Um, and we'll talk a little bit about your sessions and stuff. Yeah. Nader: We're super excited for it.
    GTC Booth Stunt Stories
    swyx: One of my favorite memories for Nader, like you always do like marketing stunts and like while you were at Brev, you like had this surfboard that you like, went down to GTC with and like, Nvidia apparently, like did so much that they bought you. Like what, what was that like? What was that? Nader: Yeah. Yeah, we, we, um. Our logo was a shaka. We, we, uh, we were always just kind of like trying to keep true to who we were. I think, you know, some startups, you're like trying to pretend that you're a bigger, more mature company than you are. And it was actually Evan Conrad from SF Compute who was just like, you guys are like previous swyx: guest. Yeah. Nader: Amazing. Oh, really? Amazing. Yeah. He was just like, guys, you're two dudes in the room. Why are you [00:02:00] pretending that you're not? Uh, and so then we were like, okay, let's make the logo a shaka. We brought surfboards to our booth at GTC and the energy was great. Yeah. Some palm trees too. They, Kyle: they actually poked out over like the, the walls so you could, you could see the Brev booth. Oh, that's so funny. And Nader: no one else, Kyle: just from very far away.
    Nader: Oh, so you remember it back Kyle: then? Yeah, I remember it pre-acquisition. I was like, oh, those guys look cool. Nader: Dude. That makes sense. ‘cause uh, we, so we signed up really last minute, and so we had the last booth. It was all the way in the corner. And so I was, I was worried that no one was gonna come. So that's why we had like the palm trees. We really came in with the surfboards. We even had one of our investors bring her dog and then she was just like walking the dog around to try to like, bring energy towards our booth. Yeah. swyx: Steph. Kyle: Yeah. Yeah, she's the best. swyx: You know, as a conference organizer, I love that. Right? Like, it's like everyone who sponsors a conference comes, does their booth. They're like, we are changing the future of AI or something, some generic b******t and like, no, like actually try to stand out, make it fun, right? And people still remember it after three years. Nader: Yeah. Yeah. You know what's so funny? I'll, I'll send, I'll give you this clip if you wanna, if you wanna add it [00:03:00] in, but, uh, my wife, at the time fiancée, she was in medical school and she came to help us. ‘cause it was like a big moment for us. And so we, we bought this Cricut, it's like a vinyl, like a vinyl, uh, printer. ‘cause like, how else are we gonna label the surfboard? So, we got a surfboard, luckily was able to purchase that on the company card. We got a Cricut and it was just like fine tuning for enterprises or something like that, that we put on the. On the surfboard and it's 1:00 AM the day before we go to GTC. She's helping me put these like vinyl stickers on. And she goes, you son of, she's like, if you pull this off, you son of a b***h. And so, uh, right. Pretty much after the acquisition, I stitched that with the mag music acquisition. I sent it to our family group chat. Oh swyx: Yeah. No, well, she, she made a good choice there. Was that like basically the origin story for Launchable is that we, it was, and maybe we should explain what Brev is and Nader: Yeah. Yeah.
    Uh, I mean, Brev is just, it's a developer tool that makes it really easy to get a GPU. So we connect a bunch of different GPU sources. So the basics of it is like, how quickly can we SSH you into a G, into a GPU, and whenever we would talk to users, they wanted a GPU. They wanted an A100. And if you go to like any cloud [00:04:00] provisioning page, usually it's like three pages of forms, or in the forms somewhere there's a dropdown. And in the dropdown there's some weird code that you know to translate to an A100. And I remember just thinking like, every time someone says they want an A100, like the piece of text that they're telling me that they want is like, stuffed away in the corner. Yeah. And so we were like, what if the biggest piece of text was what the user's asking for? And so when you go to Brev, it's just big GPU chips with the type that you want with swyx: beautiful animations that you worked on pre, like pre you can, like, now you can just prompt it. But back in the day. Yeah. Yeah. Those were handcraft, handcrafted artisanal code. Nader: Yeah. I was actually really proud of that because, uh, it was an, I, I made it in Figma. Yeah. And then I found, I was like really struggling to figure out how to turn it from like Figma to React. So what it actually is, is just an SVG and I, I have all the styles, and so when you change the chip, whether it's like active or not, it changes the SVG code and that somehow like renders, like, looks like it's animating, but we just had the transition slow; it's just like a JavaScript function to change the like underlying SVG. Yeah. And that was how I ended up like figuring out how to move it from Figma. But yeah, that's artisanal. [00:05:00] Kyle: Speaking of marketing stunts though, he actually used those SVGs. Or kind of used those SVGs to make these cards. Nader: Oh yeah. Like Kyle: a GPU gift card. Yes. That he handed out everywhere.
    That was actually my first impression of that Nader: one. Yeah, swyx: yeah, yeah. Nader: Yeah. swyx: I think I still have one of them. Nader: They look great. Kyle: Yeah. Nader: I have a ton of them still actually in our garage, which just, they don't have labels. We should honestly like bring, bring them back. But, um, I found this old printing press here, actually just around the corner on Van Ness. And it's a third-generation San Francisco shop. And so I come in an excited startup founder trying to like, and they just have this crazy old machinery and I'm in awe. ‘cause the, the whole building is so physical. Like you're seeing these machines, they have like pedals to like move these saws and whatever. I don't know what this machinery is, but I saw all three generations. Like there's like the grandpa, the father and the son, and the son was like, around my age. Well, swyx: it's like a holy, holy trinity. Nader: It's funny because we, so I just took the same SVG and we just like printed it and it's foil printing, so they make a, a mold that's like an inverse of like the A100, and then they put the foil on it [00:06:00] and then they press it into the paper. And I remember once we got them, he was like, Hey, don't forget about us. You know, I guess like early Apple and Cisco's first business cards were all made there. And so he was like, yeah, we, we get like the startup businesses, but then as they mature, they kind of go somewhere else. And so I actually, I think we were talking with marketing about like using them for some, we should go back and make some cards.
Like, you know, I think like as a, you know, typical like cloud hard hardware person, you go into an AWS you pick like T five X xl, whatever, and it's just like from a list and you look at the specs like, why animate this GP?And, and I, I do think like it just shows the level of care that goes throughout birth and Yeah. And now, and also the, and,Nader: and Nvidia. I think that's what the, the thing that struck me most when we first came in was like the amount of passion that everyone has. Like, I think, um, you know, you talk to, you talk to Kyle, you talk to, like, every VP that I've met at Nvidia goes so close to the metal.Like, I remember it was almost a year ago, and like my VP asked me, he's like, Hey, [00:07:00] what's cursor? And like, are you using it? And if so, why? Surprised at this, and he downloaded Cursor and he was asking me to help him like, use it. And I thought that was, uh, or like, just show him what he, you know, why we were using it.And so, the amount of care that I think everyone has and the passion, appreciate, passion and appreciation for the moment. Right. This is a very unique time. So it's really cool to see everyone really like, uh, appreciate that.swyx: Yeah.Acquisition and DevEx Shiftswyx: One thing I wanted to do before we move over to sort of like research topics and, uh, the, the stuff that Kyle's working on is just tell the story of the acquisition, right?Like, not many people have been, been through an acquisition with Nvidia. What's it like? Uh, what, yeah, just anything you'd like to say.Nader: It's a crazy experience. I think, uh, you know, we were the thing that was the most exciting for us was. Our goal was just to make it easier for developers.We wanted to find access to GPUs, make it easier to do that. And then all, oh, actually your question about launchable. So launchable was just make one click exper, like one click deploys for any software on top of the GPU. Mm-hmm. 
    And so what we really liked about Nvidia was that it felt like we just got a lot more resources to do all of that. I think, uh, you [00:08:00] know, NVIDIA's goal is to make things as easy for developers as possible. So there was a really nice like synergy there. I think that, you know, when it comes to like an acquisition, I think the amount that the soul of the products align is going to speak to the success of the acquisition. Yeah. And so it in many ways feels like we're home. This is a really great outcome for us. Like we, you know, I love brev.nvidia.com. Like you should, you should use it, it's the Kyle: front page for GPUs. Nader: Yeah. Yeah. If you want GPUs, Kyle: you go there, get swyx: it there, and it's like internally it's growing very quickly. I, I don't remember, you said some stats there. Nader: Yeah, yeah, yeah. It's, uh, I, I wish I had the exact numbers, but like internally, externally, it's been growing really quickly. We've been working with a bunch of partners, with a bunch of different customers and ISVs. If you have a solution that runs on the GPU and you want people to use it quickly, we can bundle it up, uh, in a launchable and make it a one-click run. If you're doing things and you want just like a sandbox or something to run on, right. Like open claw. Huge moment. Super exciting. Our, uh, and we'll talk into it more, but. You know, internally, people wanna run this, and you, we know we have to be really careful from the security implications. Do we let this run on the corporate network? Security's guidance was, Hey, [00:09:00] run this on Brev; it's in, you know, it's, it's, it's a VM, it's sitting in the cloud, it's off the corporate network. It's isolated. And so that's been our stance internally and externally about how to even run something like open claw while we figure out how to run these things securely. But yeah,
And so that's been our stance internally and externally about how to even run something like open call while we figure out how to run these things securely.But yeah,swyx: I think there's also like, you almost like we're the right team at the right time when Nvidia is starting to invest a lot more in developer experience or whatever you call it. Yeah. Uh, UX or I don't know what you call it, like software. Like obviously NVIDIA is always invested in software, but like, there's like, this is like a different audience.Yeah. It's aNader: widerKyle: developer base.swyx: Yeah. Right.Nader: Yeah. Yeah. You know, it's funny, it's like, it's not, uh,swyx: so like, what, what is it called internally? What, what is this that people should be aware that is going on there?Nader: Uh, what, like developer experienceswyx: or, yeah, yeah. Is it's called just developer experience or is there like a broader strategy hereNader: in Nvidia?Um, Nvidia always wants to make a good developer experience. The thing is and a lot of the technology is just really complicated. Like, it's not, it's uh, you know, I think, um. The thing that's been really growing or the AI's growing is having a huge moment, not [00:10:00] because like, let's say data scientists in 2018, were quiet then and are much louder now.The pie is com, right? There's a whole bunch of new audiences. My mom's wondering what she's doing. My sister's learned, like taught herself how to code. Like the, um, you know, I, I actually think just generally AI's a big equalizer and you're seeing a more like technologically literate society, I guess.Like everyone's, everyone's learning how to code. Uh, there isn't really an excuse for that. And so building a good UX means that you really understand who your end user is. And when your end user becomes such a wide, uh, variety of people, then you have to almost like reinvent the practice, right? Yeah. 
    You have Kyle: to, and actually build more developer UX, right? Because the, there are tiers of developer base that were added. You know, the, the hackers that are building on top of open claw, right? For example, have never used a GPU. They don't know what CUDA is. They, they, they just want to run something. Nader: Yeah. Kyle: You need new UX that is not just, Hey, you know, how do you program something in CUDA and run it? And then, and then we built, you know, like when deep learning was getting big, we built, we built Torch and, and, but so recently the amount of like [00:11:00] layers that are added to that developer stack has just exploded because AI has become ubiquitous. Everyone's using it in different ways. Yeah. It's Nader: moving fast in every direction. Vertical, horizontal. Vibhu: Yeah. You guys, you even take it down to hardware, like the DGX Spark, you know, it's, it's basically the same system as just throwing it up on a big GPU cluster. Nader: Yeah, yeah, yeah. It's amazing. Blackwell. swyx: Yeah. Uh, we saw the preview at last year's GTC and that was one of the better performing, uh, videos so far, and video coverage so far. Awesome. This will beat it. Um, Nader: that was swyx: actually, we have fingers Nader: crossed. Yeah.
    DGX Spark and Remote Access
    Nader: Even when Grace Blackwell or when, um, uh, DGX Spark was first coming out, getting to be involved in that from the beginning of the developer experience. And it just comes back to what you swyx: were involved. Nader: Yeah. St. St. swyx: Mars. Nader: Yeah. Yeah. I mean from, it was just like, I, I got an email, we just got thrown into the loop and suddenly yeah, I, it was actually really funny ‘cause I'm still pretty fresh from the acquisition and I'm, I'm getting an email from a bunch of the engineering VPs about like, the new hardware, GPU chip, like we're, or not chip, but just GPU system that we're putting out. And I'm like, okay, cool, Nader's now involved with this for the UX. I'm like,
    What am I gonna do [00:12:00] here? So, I remember the first meeting, I was just like kind of quiet as I was hearing engineering VPs talk about what this box could be, what it could do, how we should use it. And I remember, uh, one of the first ideas that people were, idea was like, oh, the first thing that it was like, I think a quote was like, the first thing someone's gonna wanna do with this is get two of them and run a Kubernetes cluster on top of them. And I was like, oh, I think I know why I'm here. I was like, the first thing we're doing is easy SSH into the machine. And then, and you know, just kind of like scoping it down of like, once you can do that, every, you, like the person who wants to run a Kubernetes cluster onto Sparks has a higher propensity for pain than, than, you know, someone who buys it and wants to run open Claw right now, right? If you can make sure that that's as effortless as possible, then the rest becomes easy. So there's a tool called Nvidia Sync. It just makes the SSH connection really simple. So, you know, if you think about it like, if you have a Mac, uh, or a PC or whatever, if you have a laptop and you buy this GPU and you want to use it, you should be able to use it like it's a, a GPU in the cloud, right? Um, but there's all this friction of like, how do you actually get into that? That's part of [00:13:00] Brev's value proposition is just, you know, there's a CLI that wraps SSH and makes it simple. And so our goal is just get you into that machine really easily. And one thing we just launched at CES, it's in, it's still in like early access. We're ironing out some kinks, but it should be ready by GTC. You can register your Spark on Brev. And so now if you swyx: like remote managed yeah, local hardware. Single pane of glass. Yeah. Yeah. Because Brev can already manage other clouds anyway, right? Vibhu: Yeah, yeah. And you use the Spark on Brev as well, right? Nader: Yeah. But yeah, exactly.
    So, so you, you, so you, you set it up at home, you can run the command on it, and then it gets, it's essentially, it'll appear in your Brev account, and then you can take your laptop to a Starbucks or to a cafe, and you'll continue to use your, you can continue to use your Spark just like any other cloud node on Brev. Yeah. Yeah. And it's just like a pre-provisioned center swyx: in your Nader: home. Yeah, exactly. swyx: Yeah. Yeah. Vibhu: Tiny little data center. Nader: Tiny little, the size of Vibhu: your phone.
    SOL Culture and Dynamo Setup
    swyx: One more thing before we move on to Kyle. Just have so many Jensen stories and I just love, love mining Jensen stories. Uh, my favorite so far is SOL. Uh, what is, yeah, what is SOL? Nader: SOL is actually, I, I think [00:14:00] of all the lessons I've learned, that one's definitely my favorite. Kyle: It'll always stick with you. Nader: Yeah. Yeah. I, you know, in your startup, everything's existential, right? Like we've, we've run out of money. We were like, on the risk of, of losing payroll, we've had to contract our team because we ran outta money. And so like, um, because of that you're really always forcing yourself to like understand the root cause of everything. If you get a date, if you get a timeline, you know exactly why that date or timeline is there. You're, you're pushing every boundary and like, you're not just say, you're not just accepting like a, a no. Just because. And so as you start to introduce more layers, as you start to become a much larger organization, SOL is essentially like, what is the physics, right? The speed of light moves at a certain speed. So if light's moving slower, then you know something's in the way. So before trying to like layer reality back in of like, why can't this be delivered at some date? Let's just understand the physics. What is the theoretical limit to like, uh, how fast this can go? And then start to tell me why.
    ‘cause otherwise people will start telling you why something can't be done. But actually I think any great leader's goal is just to create urgency. Yeah. [00:15:00] There's an infinite Kyle: create compelling events, right? Nader: Yeah. Kyle: Yeah. SOL is a term Nvidia uses to instigate a compelling event. You say this is done. How do we get there? What is the minimum, as-much-as-necessary, as-little-as-possible thing that it takes for us to get exactly here? And it helps you just break through a bunch of noise. swyx: Yeah. Kyle: Instantly. swyx: One thing I'm unclear about is, can only Jensen use the SOL card? Like, oh, no, no, no. Not everyone get the b******t out because obviously it's Jensen, but like, can someone else be like, no, like Kyle: frontline engineers use it. Nader: Yeah. Every, I think it's not so much about like, get the b******t out. It's like, it's like, give me the root understanding, right? Like, if you tell me something takes three weeks, it's like, well, what's the first principles? Yeah, the first principles. It's like, what's the, what? Like why is it three weeks? What is the actual, yeah, what's the actual limit of why this is gonna take three weeks? If you're gonna, if you, if let's say you wanted to buy a new computer and someone told you it's gonna be here in five days, what's the SOL? Well, like the SOL is like, I could walk into a Best Buy and pick it up for you. Right? So then anything that's like beyond that is, and is that practical? Is that how we're gonna, you know, let's say give everyone in the [00:16:00] company a laptop, like obviously not. So then like that's the SOL and then it's like, okay, well if we have to get more than 10, suddenly there might be some, right? And so now we can kind of piece the reality back.
Yeah.Kyle: It's actually really interesting because there's a, there's a second hardware angle to SOL that like doesn't come up for all the org sol is used like culturally at aswyx: media for everything.I'm also mining for like, I think that can be annoying sometimes. And like someone keeps going IOO you and you're like, guys, like we have to be stable. We have to, we to f*****g plan. Yeah.Kyle: It's an interesting balance.Nader: Yeah. I encounter that with like, actually just with, with Alec, right? ‘cause we, we have a new conference so we need to launch, we have, we have goals of what we wanna launch by, uh, by the conference and like, yeah.At the end of the day, where isswyx: this GTC?Nader: Um, well this is like, so we, I mean we did it for CES, we did for GT CDC before that we're doing it for GTC San Jose. So I mean, like every, you know, we have a new moment. Um, and we want to launch something. Yeah. And we want to do so at SOL and that does mean that some, there's some level of prioritization that needs [00:17:00] to happen.And so it, it is difficult, right? I think, um, you have to be careful with what you're pushing. You know, stability is important and that should be factored into S-O-L-S-O-L isn't just like, build everything and let it break, you know, that, that's part of the conversation. So as you're laying, layering in all the details, one of them might be, Hey, we could build this, but then it's not gonna be stable for X, y, z reasons.And so that was like, one of our conversations for CES was, you know, hey, like we, we can get this into early access registering your spark with brev. But there are a lot of things that we need to do in order to feel really comfortable from a security perspective, right? There's a lot of networking involved before we deliver that to users.So it's like, okay. Let's get this to a point where we can at least let people experiment with it. 
    We had it in a booth, we had it in Jensen's keynote, and then let's go iron out all the networking kinks. And that's not easy. And so, uh, that can come later. And so that was the way that we layered that back in. Yeah. But Kyle: It's not really about saying like, you don't have to do the, the maintenance or operational work. It's more about saying, you know, it's kind of like [00:18:00] highlights how progress is incremental, right? Like, what is the minimum thing that we can get to. And then there's SOL for like every component after that. But there's the SOL to get you, get you to the, the starting line. And that, that's usually how it's asked. Yeah. On the other side, you know, like SOL came out of like hardware at Nvidia. Right. So SOL is like literally if we ran the accelerator or the GPU at basically full speed with like no other constraints, like how fast we'd be able to make a program go. swyx: Yeah. Yeah. Right. Kyle: So swyx: in, in training that like, you know, then you work back to like some percentage of like MFU for example. Kyle: Yeah, that's a, that's a great example. So like, there's an, there's an SOL MFU, and then there's like, you know, what's practically achievable. swyx: Cool. Should we move on to sort of, uh, Kyle's side? Uh, Kyle, you're coming more from the data science world. And, uh, I, I mean I always, whenever, whenever I meet someone who's done working in tabular stuff, graph neural networks, time series, these are basically, when I go to NeurIPS, I go to ICML, I walk the back halls, there's always like a small group of graph people. Yes. Absolute small group of tabular people. [00:19:00] And like, there's no one there. And like, it's very like, you know what I mean? Like, yeah, no, like it's, it's important, interesting work if you care about solving the problems that they solve.
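The SOL and MFU back-of-envelope math in the exchange above can be written down. A minimal sketch, with assumptions stated plainly: the ~6 FLOPs-per-parameter-per-token estimate for dense transformer training is a common rule of thumb, and every hardware number below is illustrative rather than a spec for any particular chip.

```python
def training_mfu(tok_per_s_per_gpu: float, params_b: float,
                 peak_tflops: float) -> float:
    # MFU = achieved FLOPs / peak FLOPs, using the common estimate of
    # ~6 FLOPs per parameter per token for dense transformer training.
    achieved_flops = 6.0 * params_b * 1e9 * tok_per_s_per_gpu
    return achieved_flops / (peak_tflops * 1e12)

def decode_sol_tok_per_s(params_b: float, bytes_per_param: float,
                         hbm_gb_per_s: float) -> float:
    # Speed-of-light ceiling for batch-1 decode: every generated token
    # must stream all model weights from HBM at least once, so
    # bandwidth / model-bytes bounds tokens/sec regardless of compute.
    return (hbm_gb_per_s * 1e9) / (params_b * 1e9 * bytes_per_param)

# Illustrative: a 70B-parameter model at 500 tok/s/GPU against a
# ~989 TFLOPs part, and the same model in fp16 on ~3,350 GB/s of HBM.
print(training_mfu(500, 70, 989))          # fraction of peak achieved
print(decode_sol_tok_per_s(70, 2, 3350))   # batch-1 decode ceiling
```

Anything measured above the SOL number means the model is wrong; anything far below it means, in the framing above, "something's in the way."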
    I mean it's like, it's like the black hole, right? Has the event horizon reached this yet at NeurIPS? Um, swyx: but like, you know, those are, those are transformers too. Yeah. And, and those are also like interesting things. Anyway, uh, I just wanted to spend a little bit of time on, on those, that background before we go into Dynamo, uh, proper. Kyle: Yeah, sure. I took a different path to Nvidia than that, or I joined six years ago, seven, if you count when I was an intern. So I joined Nvidia, like right outta college. And the first thing I jumped into was not what I'd done during internship, which was like, you know, like some stuff for autonomous vehicles, like heavyweight object detection. I jumped into like, you know, something, I'm like, recommenders, this is popular. And swyx: yeah, he did RecSys Kyle: as well.
And sort of transitioned that a little bit towards graph neural networks when I discovered them because I was like, okay, you can actually use graphical neural networks to represent like, relationships between people, items, concepts, and that, that interested me.So I jumped into that at [00:21:00] Nvidia and, and got really involved for like two-ish years.swyx: Yeah. Uh, and something I learned from Brian Zaro Yeah. Is that you can just kind of choose your own path in Nvidia.Kyle: Oh my God. Yeah.swyx: Which is not a normal big Corp thing. Yeah. Like you, you have a lane, you stay in your lane.Nader: I think probably the reason why I enjoy being in a, a big company, the mission is the boss probably from a startup guy. Yeah. The missionswyx: is the boss.Nader: Yeah. Uh, it feels like a big game of pickup basketball. Like, you know, if you play one, if you wanna play basketball, you just go up to the court and you're like, Hey look, we're gonna play this game and we need three.Yeah. And you just like find your three. That's honestly for every new initiative that's what it feels like. Yeah.Vibhu: It also like shows, right? Like Nvidia. Just releasing state-of-the-art stuff in every domain. Yeah. Like, okay, you expect foundation models with Nemo tron voice just randomly parakeet.Call parakeet just comes out another one, uh, voice. TheKyle: video voice team has always been producing.Vibhu: Yeah. There's always just every other domain of paper that comes out, dataset that comes out. It's like, I mean, it also stems back to what Nvidia has to do, right? You have to make chips years before they're actually produced.Right? So you need to know, you need to really [00:22:00] focus. TheKyle: design process starts likeVibhu: exactlyKyle: three to five years before the chip gets to the market.Vibhu: Yeah. I, I'm curious more about what that's like, right? So like, you have specialist teams. 
Is it just like, people find an interest, you go in, you go deep on whatever, and that kind of feeds back into, okay, we expect predictions? The internals at Nvidia must be crazy, right? Even without selling to people, you have your own predictions of where things are going, and they're very grounded, right?

Kyle: Yeah, it's really interesting. So there are two things that I think Nvidia does which are quite interesting. One is, we really index on passion. There's a big organizational top-down push to ensure that people are working on the things that they're passionate about. So if someone proposes something that's interesting, many times they can just email someone way up the chain who would find it relevant and say, hey, can I go work on this?

Nader: I worked at a big company for a couple of years before starting on my startup journey, and it felt very weird if you were to email out of chain, if that makes [00:23:00] sense. The emails at Nvidia are like mosh pits —

swyx: Shoot.

Nader: — and it's just like 60 people, just whatever.

swyx: They get messy, like, reply-all.

Nader: Oh, it's insane. It's insane. They just —

Kyle: — help, you know, maximize

Nader: the context. But that's actually — so this is a weird thing where I used to be like, why would we send emails? We have Slack. Now I'm the exact opposite. I feel so bad for anyone who's messaging me on Slack 'cause I'm so unresponsive.

swyx: You're email-maxxing.

Nader: I'm email-maxxing now. Email is perfect because important threads get bumped back up, right? And Slack doesn't do that.
So I just have this casino going off on the right or on the left, and I don't know which thread was from where or what — but the threads get bumped. And then there's also the subject, so you can have working threads. I think what's difficult is, when you're small — if you're just not 40,000 people — I think Slack will work fine. But I don't know what the inflection point is. There is gonna be a point where that becomes really messy and you'll actually prefer having email, 'cause you can have working threads. You can cc more than nine people in a thread.

Kyle: You can fork stuff.

Nader: You can [00:24:00] fork stuff, which is super nice. And so that is part of how you can propose a plan. You can also just start — honestly, momentum's the only authority, right? So if you can just start, make a little bit of progress, and show someone something, then they can try it. That's, I think, the most effective way to push anything forward — both at Nvidia and just generally.

Kyle: Yeah, there's the other concept that's explored a lot at Nvidia, which is this idea of a zero-billion-dollar business. Market creation is a big thing at Nvidia.

swyx: Oh, you want to go and start a zero-billion-dollar business?

Kyle: Jensen says: we are completely happy investing in zero-billion-dollar markets. We don't care if this creates revenue. It's important for us to know about this market. We think it will be important in the future. It can be zero billion dollars for a while. I'm probably mangling his words here, but, you know, I'll give an example. Nvidia's been working on autonomous driving for a long time.

swyx: Like an Nvidia car.

Kyle: No, they've —

Vibhu: They used the Mercedes, right? They're around the HQ, and I think it finally just got licensed out.
Now they're starting to be used quite a [00:25:00] bit. For 10 years you've been seeing Mercedes with Nvidia logos driving around.

Kyle: If you're in, like, south Santa Clara — yeah. So, zero-billion-dollar markets are a thing. Like, you know, Jensen —

swyx: I mean, okay, look, cars are not a zero-billion-dollar market. That's a bad example.

Nader: I think he's messaging zero today. Or even internally, right? Like, an org doesn't have to ruthlessly find revenue very quickly to justify its existence. A lot of the important research, a lot of the important technology being developed — that's kind of where —

Kyle: Research is very ideologically free at Nvidia. They can pursue things.

swyx: Were you in research, officially?

Kyle: I was never in research officially. I was always in engineering. I'm in an org called Deep Learning Algorithms, which is basically just: how do we make things that are relevant to deep learning go fast?

swyx: That sounds freaking cool.

Vibhu: And I think a lot of that is underappreciated, right? Like time series — this week Google put out the TimesFM paper, a new time series paper. And Semantic IDs started applying transformers and LLMs to [00:26:00] rec systems. And when you think of the scale of companies deploying these — Amazon recommendations, Google web search — it's huge scale, and —

Kyle: Yeah.

Vibhu: You want fast.

Kyle: Yeah. Actually, there's a fun moment that brought me full circle. Amazon Ads recently gave a talk where they talked about using Dynamo for generative recommendation, which was weirdly cathartic for me. I'm like, oh my God, I've supplanted what I was working on. You're using LLMs now to do what I was doing five years ago.

swyx: Yeah. Amazing. And let's go right into Dynamo.
Uh, maybe introduce it top-down.

Kyle: Yeah, sure. I think at this point a lot of people are familiar with the term inference. Funnily enough, I went from inference being a really niche topic to something that's discussed on normal people's Twitter feeds.

Nader: It's on billboards here now.

Kyle: Yeah. Very strange — driving and seeing an inference ad on the 101. Inference at scale is becoming a lot more important. We have these moments, like OpenClaw, where you have these [00:27:00] agents that take lots and lots of tokens but produce incredible results. There are many different aspects of test-time scaling, so that you can use more inference to generate a better result than if you were to use a short amount of inference. There's reasoning, there's querying, there's adding agency to the model, allowing it to call tools and use skills. Dynamo sort of came about at Nvidia because myself and a couple of others were talking about these concepts: you have inference engines like vLLM, SGLang, TensorRT-LLM, and they sort of think about things as one single copy — one replica, right?

Why Scale Out Wins

Kyle: Like, one version of the model. But when you're actually serving things at scale, you can't just scale up that replica, because you end up with performance problems. There's a scaling limit to scaling up replicas. So you actually have to scale out, to use some Kubernetes-type terminology. We kind of realized that there was a lot of potential optimization that we could do in scaling out and building systems for data [00:28:00] center scale inference.
So Dynamo is this data-center-scale inference engine that sits on top of the frameworks — vLLM, SGLang, and TensorRT-LLM — and just makes things go faster, because you can leverage the economy of scale. The fact that you have KV cache — which we can define a little bit later — on all these machines is unique, and you wanna figure out the ways to maximize your cache hits. Or you want to employ new techniques in inference like disaggregation, which Dynamo introduced to the world in March — not introduced, it came from academic work beforehand, but we were one of the first frameworks to start supporting it. And we wanna combine all these techniques into a modular framework that allows you to accelerate your inference at scale.

Nader: By the way, Kyle and I became friends on my first day at Nvidia, and I always loved it, 'cause he always teaches me new things.

swyx: Yeah. By the way, this is why I wanted to put the two of you together. I was like, yeah, this is gonna be good.

Kyle: It's very different, you know. We've talked to each other a bunch. [00:29:00] Actually, you asked: why can't we scale up?

Nader: Yeah.

Scale Up Limits Explained

Nader: Model — you said model replicas.

Kyle: Yeah. So scale up means assigning more —

swyx: Heavier?

Kyle: Yeah, heavier. Making things heavier. Adding more GPUs, adding more CPUs. Scale out is having a barrier saying: I'm gonna duplicate my representation of the model, or a representation of this microservice or something, and I'm gonna replicate it many times to handle load. And the reason that you can't scale up past some point is that there are hardware bounds and algorithmic bounds on that type of scaling. So I'll give you a good example that's very trivial. Let's say you're on an H100.
The maximum NVLink domain for H100 — for most DGX H100s — is eight GPUs, right? So if you scaled up past that, you're gonna have to figure out ways to handle the fact that now, for the GPUs to communicate, you have to do it over InfiniBand, which is still very fast but is not as fast as NVLink.

swyx: Is it like one order of magnitude? Like, hundreds, or —

Kyle: It's about an order of magnitude, yeah.

swyx: Okay. So not terrible.

Kyle: [00:30:00] Yeah, I need to remember the data sheet here. I think it's about 500 gigabytes a second unidirectional for NVLink, and about 50 gigabytes a second unidirectional for InfiniBand. It depends on the generation.

swyx: I just wanna set this up for people who are not familiar with these kinds of layers and the transfer speeds.

Vibhu: Of course.

From Laptop to Multi Node

Vibhu: Also, maybe even just going a few steps back before that. Most people are very familiar with what you can use on your laptop — these local LLMs, you can just run inference there.

Kyle: You can run it on that laptop.

Vibhu: You can run it on a laptop. Then you get to: okay, models got pretty big, right? GLM-5 — they doubled the size. So what do you do when you have to go from, okay, I can get 128 gigs of memory, I can run it on a Spark — then you have to go multi-GPU. Okay, multi-GPU, there's some support there. Now, if I'm a company and I'm not hiring the best researchers for this, but I need to go [00:31:00] multi-node — I have a lot of servers. Okay, now there are efficiency problems, right? You can have multiple 8×H100 nodes, but how do you do that efficiently?

Kyle: Yeah. How do you represent them? How do you choose how to represent the model? Exactly right. That's a hard question.
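(A back-of-envelope sketch of the sizing question the guests are circling — how many GPUs a model needs just to hold its weights, and why big models push you past a single NVLink domain. All the numbers here — 80 GB HBM, the 60% weight budget — are illustrative assumptions for the sketch, not Nvidia sizing guidance:)

```python
import math

# Rough GPU-count estimate for serving a model. Assumptions (not vendor guidance):
# 80 GB HBM per GPU (H100-class), and only ~60% of HBM budgeted for weights,
# leaving the rest for KV cache, activations, and framework overhead.

def gpus_needed(params_b: float, bytes_per_param: float = 2.0,
                gpu_mem_gb: float = 80.0, weight_fraction: float = 0.6) -> int:
    """Estimate how many GPUs are needed so the weights fit.

    params_b:        model size in billions of parameters
    bytes_per_param: 2 for FP16/BF16, 1 for FP8, 0.5 for 4-bit
    """
    weight_gb = params_b * bytes_per_param         # total weight memory in GB
    usable_per_gpu = gpu_mem_gb * weight_fraction  # GB per GPU budgeted for weights
    return max(1, math.ceil(weight_gb / usable_per_gpu))

# A 70B model in BF16 is ~140 GB of weights alone:
print(gpus_needed(70))                          # -> 3
# A ~1T-parameter model even in FP8 blows well past one 8-GPU NVLink domain,
# which is where scale-out (and slower inter-node InfiniBand) enters the picture:
print(gpus_needed(1000, bytes_per_param=1.0))   # -> 21
```

This is only the weights side of the "how do you size?" question; the real answer also depends on KV cache per concurrent request, sequence lengths, and the parallelism scheme chosen.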
Everyone asks: how do you size? Oh, I wanna run GLM-5, which just came out — new model. There have been like four of them in the past week, by the way. A bunch of new models.

swyx: You know why, right? DeepSeek.

Kyle: No comment. Yeah, but GLM-5, right? We have this new model, it's a large size, and you have to figure out how to both scale up and scale out, because you have to find the right representation that you care about. Everyone does this differently — let's be very clear. Everyone figures this out on their own path.

Nader: I feel like a lot of AI, or ML even, is like this. There was some tweet a few months ago that was like, why hasn't fine-tuning as a service taken off? That might be me, it might have been you. But people want it to be such an easy recipe to follow. But even if you look at an ML model —

Kyle: It's specific to you. Yeah.

Nader: Yeah.

Kyle: And the [00:32:00] model.

Nader: And the situation. And there's just so much tinkering, right? Like when you see a model that has however many experts in the MoE model, it's like, why that many experts? They tried a bunch of things and that one seemed to do better. I think when it comes to how you're serving inference, you have a bunch of decisions to make, and you can always argue that you can take something and make it more optimal. But I think it's this internal calibration and appetite for continued calibration.

Vibhu: Yeah. And that doesn't mean people aren't taking a shot at this — like Tinker from Thinking Machines, you know? RL as a service. It also gets even harder when you try to do big model training, right? We're not the best at training MoEs when they're pre-trained. Like we saw this with Llama 3, right?
They're trained in such a sparse way that Meta knows there's gonna be a bunch of inference done on these, right? They'll open-source it, but it's very much trained for what Meta's infrastructure wants — they wanna inference it a lot. Now the question to basically think about is, okay, say you wanna serve a chat application, a coding copilot, right? You're doing a layer of RL, you're serving a model for X amount of people. Is it a chat model, a coding model? Dynamo — back to that.

Kyle: [00:33:00] Yeah, sorry — we sort of jumped off that topic. Everyone has their own journey.

Cost Quality Latency Tradeoffs

Kyle: And I like to think of it as defined by: what is the model you need, what is the accuracy you need? Actually, I talked to Nader about this earlier — there are three axes you care about. What is the quality you're able to produce? So, are you accurate enough, can you complete the task with high enough performance? There's cost: can you serve the model — or serve your workflow, because it's not just the model anymore, it's the workflow, it's the multi-turn with an agent — cheaply enough? And then, can you serve it fast enough? And we're seeing all three of these play out. We saw new models from OpenAI that are faster; you have these new fast versions of models. You can change the amount of thinking to change the quality — produce more tokens, but at a higher cost and a higher latency. And really, when you start this journey of trying to figure out how you wanna host a model, you think about three things. What is the model I need to serve? How many times do I need to call it — what is the input sequence length, what does the workflow look like on top of it? And what is the SLA — the latency SLA that I need to achieve?
Because this is usually a constant: you know the SLA that you need to hit, and then you try and find the lowest-cost version that hits all of these constraints. Usually you start with those things and you do a bit of experimentation across some common configurations. You change the tensor-parallel size, which is a form of parallelism —

Vibhu: It goes even deeper: first you gotta think what model.

Kyle: Yes, of course. It's like a multi-step design process, because as you said, you can choose a smaller model and then do more test-time scaling, and it'll equal the quality of a larger model, because you're doing the test-time scaling or you're adding a harness or something. So yes, it goes way deeper than that. But from the performance perspective, once you get to the model you need to host, you look at that and you say: hey, I have this model, I need to serve it at this speed — what is the right configuration for that?

Nader: You guys see the recent — there was a paper I just saw a few days ago that said if you run [00:35:00] the same prompt twice, you're getting like double —

swyx: Just try it again.

Nader: Yeah, exactly.

Vibhu: And you get a lot. But the key thing there is you give it the context of the failed try, right? So it takes a shot. And this has been basic guidance for quite a while: just try again. Did you try again? All advice —

Nader: — in life.

Vibhu: It's a paper from Google, if I'm not mistaken, right? I think it's a seven-page little short paper. The title's very cute. And it's just like, yeah, just try again, give it the past context.

Kyle: Multi-shot. You just say, hey, take a little bit more information, try and fail.
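(The sizing loop Kyle describes — fix the latency SLA, sweep a few candidate parallelism configurations, keep the cheapest one that meets it — can be sketched as below. The candidate configs and their latency/cost numbers are invented for illustration, not real benchmarks:)

```python
# Sketch of SLA-constrained config selection: among benchmarked configurations,
# pick the lowest-cost one whose measured latency meets the SLA.

def pick_config(candidates, latency_sla_ms):
    """Return the cheapest config meeting the latency SLA, or None if none does."""
    feasible = [c for c in candidates if c["p99_latency_ms"] <= latency_sla_ms]
    if not feasible:
        return None  # no tested config meets the SLA: rethink model or hardware
    return min(feasible, key=lambda c: c["cost_per_1m_tokens"])

# Pretend benchmark results for one model at different tensor-parallel sizes:
candidates = [
    {"tp": 1, "p99_latency_ms": 180, "cost_per_1m_tokens": 0.40},
    {"tp": 2, "p99_latency_ms": 95,  "cost_per_1m_tokens": 0.55},
    {"tp": 4, "p99_latency_ms": 60,  "cost_per_1m_tokens": 0.90},
]

best = pick_config(candidates, latency_sla_ms=100)
print(best["tp"])  # -> 2: TP=2 is the cheapest config under a 100 ms SLA
```

In practice the sweep also covers model choice, quantization, and batch size, but the shape of the search — latency as a hard constraint, cost as the objective — stays the same.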
Vibhu: And that basic concept has gone pretty deep. There's self-distillation RL, where you do self-distillation, you do RL, and you have the past failures, and that gives some signal. So people try it again — "not strong enough."

swyx: For listeners who've made it to here: Vibhu and I run a second YouTube channel for our paper club, where —

Kyle: Oh, that's awesome.

swyx: Vibhu just covered this.

Nader: I'll have to check it out.

swyx: Yeah. It's just a good practice — everyone needs a paper club, where you just read papers together and the social pressure kind of forces you to —

Nader: There's a big inference reading group at Nvidia. I feel so bad every time he shares it.

swyx: One of your guys is big in that — Ishan, yeah.

Kyle: Ishan's on my team, actually. Funny — there's an employee transfer between us. Ishan worked for Nader at Brev, and now he's on my team.

Nader: He was our head of AI. And then, yeah, once we got in —

swyx: Because I'm always looking for, like, okay, can I start another podcast that only does that thing? And I was trying to nudge Ishan into, like, is there something here? I mean, there are new inference techniques every day.

Kyle: You would actually be surprised — the amount of blog posts you see.

swyx: There was a period where it was like Medusa, Hydra, Eagle, you know.

Kyle: Now we have new forms of speculative decoding, or new —

swyx: What are you

Vibhu: excited about?
And it's exciting when you guys put out something like Nemotron, 'cause I remember the paper on Nemotron 3 — [00:37:00] the amount of post-training tokens that the GPU-rich can just train on. And it was a hybrid state-space model, right?

Kyle: Yeah, it's co-designed for the hardware.

Vibhu: Yeah, co-designed for the hardware. And one of the things was always, you know, state-space models don't scale as well when you do a conversion, or whatever — the performance. And you guys are like, no, just keep training. And Nemotron shows a lot of that.

Nader: Also, something cool about Nemotron: it was released in layers, if you will, very similar to Dynamo. The pre-training and post-training datasets are released. The recipes on how to do it are released. The model itself is released — the full model. You just benefit from us turning on the GPUs. But there are companies — ServiceNow took the dataset and trained their own model, and we were super excited and celebrated that work.

Vibhu: Zoom is different — Zoom is CGI, I think. You know, also just to add: a lot of models don't put out base models, and if there's that — why has fine-tuning not taken off? You know, you can do your own training.

Kyle: Yeah, sure.

Vibhu: You guys put out base models. I think you put out everything.

Nader: I believe, I know. [00:38:00]

swyx: Basically without base —

Vibhu: Yeah, base can be cancelable.

swyx: Yeah.

Vibhu: Safety training.

swyx: Did we get a full picture of Dynamo? I don't know if we —

Nader: What I'd love is — you mentioned the three axes. Break it down: what's prefill and decode, and what are the optimizations that we can get with Dynamo?

Kyle: Yeah, that's a great point.
So to summarize on that three-axis problem: there are three things that determine whether or not something can be done with inference — cost, quality, latency, right? Dynamo is supposed to be there to provide you the runtime that allows you to pull levers to mix it up and move around the Pareto frontier — the Pareto surface that determines: is this actually possible with inference and AI today?

Nader: It gives you the knobs.

Kyle: Yeah, exactly. It gives you the knobs.

Disaggregation Prefill vs Decode

Kyle: And one thing that we use a lot in contemporary inference, and that's starting to pick up in general knowledge, is this concept of disaggregation. So historically, [00:39:00] models would be hosted with a single inference engine, and that inference engine would ping-pong between two phases. There's prefill, where you're reading the sequence and generating KV cache — which is basically just a set of vectors that represent the sequence — and then using that KV cache to generate new tokens, which is called decode. And some brilliant researchers, across multiple different papers, essentially made the realization that if you separate these two phases, you actually gain some benefits. Those benefits are, basically: (a) you don't have to worry about step-synchronous scheduling. The way an inference engine works is you do one step, you finish it, and then you start scheduling the next step — it's not fully asynchronous. And the problem with that is that prefill and decode are actually very different, in terms of both their resource requirements and, sometimes, their runtime. So you would have prefill that would block decode steps, because you'd still be prefilling and you couldn't schedule, because the step has to end. So you remove that scheduling issue. And then it also allows you to [00:40:00] split the work into two different types of pools.
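(A toy illustration of the disaggregation idea Kyle just described — prefill and decode handled by separate worker pools, so a long prefill can't block decode scheduling. The classes, pool sizes, and routing rule are invented for the sketch; real Dynamo routing is far more involved:)

```python
from collections import deque

# Minimal sketch of disaggregated serving: two independent worker pools,
# one for prefill (compute-heavy, builds KV cache from the prompt) and one
# for decode (memory-heavy, streams tokens using the KV cache).

class Pool:
    def __init__(self, name: str, n_workers: int):
        self.name = name
        self.n_workers = n_workers
        self.queue = deque()  # pending requests for this pool

    def submit(self, req: dict):
        self.queue.append(req)

prefill_pool = Pool("prefill", n_workers=2)
decode_pool = Pool("decode", n_workers=6)

def route(request: dict):
    # A fresh request goes to prefill; once its KV cache exists, each decode
    # step is scheduled on the decode pool, independently of ongoing prefills.
    if request.get("kv_cache_ready"):
        decode_pool.submit(request)
    else:
        prefill_pool.submit(request)

route({"id": 1})                           # fresh request -> prefill pool
route({"id": 1, "kv_cache_ready": True})   # after prefill -> decode pool
print(len(prefill_pool.queue), len(decode_pool.queue))  # -> 1 1
```

The separate queues are the point: each pool can be sized, scheduled, and scaled on its own, which is what makes the changing prefill-to-decode ratio discussed later tractable.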
So you remove that scheduling issue and then you also allow you, or you yourself, to like [00:40:00] split the work into two different ki types of pools.So pre-fill typically, and, and this changes as, as model architecture changes. Pre-fill is, right now, compute bound most of the time with the sequence is sufficiently long. It's compute bound. On the decode side because you're doing a full Passover, all the weights and the entire sequence, every time you do a decode step and you're, you don't have the quadratic computation of KV cache, it's usually memory bound because you're retrieving a linear amount of memory and you're doing a linear amount of compute as opposed to prefill where you retrieve a linear amount of memory and then use a quadratic.You know,Nader: it's funny, someone exo Labs did a really cool demo where for the DGX Spark, which has a lot more compute, you can do the pre the compute hungry prefill on a DG X spark and then do the decode on a, on a Mac. Yeah. And soVibhu: that's faster.Nader: Yeah. Yeah.Kyle: So you could, you can do that. You can do machine strat stratification.Nader: Yeah.Kyle: And like with our future generation generations of hardware, we actually announced, like with Reuben, this [00:41:00] new accelerator that is prefilled specific. It's called Reuben, CPX. SoKubernetes Scaling with GroveNader: I have a question when you do the scale out. Yeah. Is scaling out easier with Dynamo? Because when you need a new node, you can dedicate it to either the Prefill or, uh, decode.Kyle: Yeah. So Dynamo actually has like a, a Kubernetes component in it called Grove that allows you to, to do this like crazy scaling specialization. It has like this hot, it's a representation that, I don't wanna go too deep into Kubernetes here, but there was a previous way that you would like launch multi-node work.Uh, it's called Leader Worker Set. It's in the Kubernetes standard, and Leader worker set is great. 
It served a lot of people super well for a long period of time. But one of the things it struggles with is representing a set of cases where you have a multi-node replica that has a pair — prefill and decode — or it's not paired, but it has a second stage with a ratio that changes over time. And prefill and decode are two different things: as your workload changes, the amount of prefill you need to do may change, [00:42:00] and the amount of decode you need to do might change. Let's say you start getting insanely long queries — that probably means your prefill scales harder, because you're hitting this quadratic scaling growth.

swyx: Yeah. And for listeners: prefill would be long input, decode would be long output, for example, right?

Kyle: Yeah. So decode is funny, because the amount of tokens that you produce scales with the output length, but the amount of work that you do per step scales with the amount of tokens in the context.

swyx: Yes.

Kyle: So it scales with both the input and the output.

swyx: That's true.

Kyle: But on the prefill-versus-decode side: if suddenly the amount of work you're doing on the decode side stays about the same, or scales a little bit, and the prefill side jumps up a lot, you actually don't want that ratio to be the same — you want it to change over time. So Dynamo has a set of components that (a) tell you how to scale — it tells you how many prefill workers and decode workers it thinks you should have — and (b) provide a scheduling API for Kubernetes that allows you to actually represent and effect this scheduling on your actual [00:43:00] hardware, on your compute infrastructure.

Nader: Not gonna lie, I feel a little embarrassed for being proud of my SVG function earlier.

swyx: No, it was really cute.

Nader: It's all —

swyx: It's all engineering. It's all engineering.

Kyle: That's where I'm technical.

swyx: One thing I'm kind of just curious about, with everything you see at a systems level, everything going on here — and we're scaling it up in distributed systems —

Context Length and Co Design

swyx: I think one thing that's kind of of-the-moment right now is people asking: is there any sort of upper bound? In terms of, let's just call it context length, for want of a better word, but you can break it down however you like. I just think — well, clearly you can engage in hybrid architectures and throw in some state-space models all you want, but it still looks very attention-heavy.
It's all engineering. Um, that's where I'mKyle: technical.swyx: One thing I'm, I'm kind of just curious about with all with you see at a systems level, everything going on here. Mm-hmm. And we, you know, we're scaling it up in, in multi, in distributed systems.Context Length and Co Designswyx: Um, I think one thing that's like kind of, of the moment right now is people are asking, is there any SOL sort of upper bounds. In terms of like, let's call, just call it context length for one for of a better word, but you can break it down however you like.Nader: Yeah.swyx: I just think like, well, yeah, I mean, like clearly you can engage in hybrid architectures and throw in some state space models in there.All, all you want, but it looks, still looks very attention heavy.Kyle: Yes. Uh, yeah. Long context is attention heavy. I mean, we have these hybrid models, um,swyx: to take and most, most models like cap out at a million contexts and that's it. Yeah. Like for the last two years has been it.Kyle: Yeah. The model hardware context co-design thing that we're seeing these days is actually super [00:44:00] interesting.It's like my, my passion, like my secret side passion. We see models like Kimmy or G-P-T-O-S-S. I'm use these because I, I know specific things about these models. So Kimmy two comes out, right? And it's an interesting model. It's like, like a deep seek style architecture is MLA. It's basically deep seek, scaled like a little bit differently, um, and obviously trained differently as well.But they, they talked about, why they made the design choices for context. Kimmy has more experts, but fewer attention heads, and I believe a slightly smaller attention, uh, like dimension. But I need to remember, I need to check that. Uh, it doesn't matter. But they discussed this actually at length in a blog post on ji, which is like our pu which is like credit puswyx: Yeah.Kyle: Um, in, in China. Chinese red.swyx: Yeah.Kyle: It's, yeah. 
So it's actually an incredible blog post. All the ML people I've seen on Zhihu are very brilliant. But the creators of Kimi K2 [00:45:00] actually talked about it there, in the blog post. And they say: we actually did an experiment. Attention scales with the number of heads, obviously — if you have 64 heads versus 32 heads, you do half the work of attention. You still scale quadratically, but you do half the work. And they made a very specific sort of barter in their architecture. They basically said: hey, what if we gave it more experts — so we're gonna use more memory capacity — but we keep the number of activated experts the same. We increase the expert sparsity, so the ratio of experts activated to total experts is smaller, and we decrease the number of attention heads.

Vibhu: And for context: what we had been seeing was, you make models sparser instead. No one was really touching heads —

Kyle: Well, they implicitly made it sparser.

Vibhu: Yeah, for Kimi, they did.

Kyle: Yes.

Vibhu: They also made it sparser. But basically what we were seeing was people were at the level of: okay, there's a sparsity ratio, you want more total parameters, fewer active, and that's sparsity. [00:46:00]
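(The arithmetic behind the barter Kyle describes, in rough numbers: attention work is linear in head count at a fixed head dimension, so halving the heads halves the attention work, while adding total experts but keeping the activated count fixed grows capacity without growing per-token compute. All the figures below are illustrative, not Kimi K2's actual hyperparameters:)

```python
# Illustrative per-token work model for the heads-vs-experts trade.

def attention_work(n_heads: int, head_dim: int, seq_len: int) -> int:
    # Attention score work per token ~ n_heads * head_dim * seq_len:
    # still quadratic in sequence length overall, but linear in head count.
    return n_heads * head_dim * seq_len

def sparsity_ratio(active_experts: int, total_experts: int) -> float:
    # Fraction of experts activated per token; active count drives compute,
    # total count drives memory capacity.
    return active_experts / total_experts

base = attention_work(n_heads=128, head_dim=128, seq_len=8192)
fewer = attention_work(n_heads=64, head_dim=128, seq_len=8192)
print(fewer / base)            # -> 0.5: half the heads, half the attention work

print(sparsity_ratio(8, 128))  # baseline MoE: 8 of 128 experts active
print(sparsity_ratio(8, 256))  # doubled total experts, same 8 active: sparser,
                               # more capacity, roughly unchanged per-token FLOPs
```

That trade only pays off if quality holds, which is exactly what the experiment described in the blog post was testing.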
Really, really short context, uh, or like really is good at super short context tasks. You may like design it in a way such that like you don't care about attention scaling because it hasn't hit that, like the turning point where like the quadratic curve takes over.Nader: How do you consider attention or context as a separate part of the co-design? Like I would imagine hardware or just how I would've thought of it is like hardware model. Co-design would be hardware model context co-designKyle: because the harness and the context that is produced by the harness is a part of the model.Once it's trained in,Vibhu: like even though towards the end you'll do long context, you're not changing architecture through I see. Training. Yeah.Kyle: I mean you can try.swyx: You're saying [00:47:00] everyone's training the harness into the model.Kyle: I would say to some degree, orswyx: there's co-design for harness. I know there's a small amount, but I feel like not everyone has like gone full send on this.Kyle: I think, I think I think it's important to internalize the harness that you think the model will be running. Running into the model.swyx: Yeah. Interesting. Okay. Bash is like the universal harness,Kyle: right? Like I'll, I'll give. An example here, right? I mean, or just like a, like a, it's easy proof, right? If you can train against a harness and you're using that harness for everything, wouldn't you just train with the harness to ensure that you get the best possible quality out of,swyx: Well, the, uh, I, I can provide a counter argument.Yeah, sure. Which is what you wanna provide a generally useful model for other people to plug into their harnesses, right? So if youKyle: Yeah. Harnesses can be open, open source, right?swyx: Yeah. 
So I mean, that's, that's effectively what's happening with Codex.Kyle: Yeah.swyx: And, but like you may want like a different search tool and then you may have to name it differently or,Nader: I don't know how much people have pushed on this, but can you.Train a model, would it be, have you have people compared training a model for the for the harness versus [00:48:00] like post training forswyx: I think it's the same thing. It's the same thing. It's okay. Just extra post training. INader: see.swyx: And so, I mean, cognition does this course, it does this where you, you just have to like, if your tool is slightly different, um, either force your tool to be like the tool that they train for.Hmm. Or undo their training for their tool and then Oh, that's re retrain. Yeah. It's, it's really annoying and like,Kyle: I would hope that eventually we hit like a certain level of generality with respect to training newswyx: tools. This is not a GI like, it's, this is a really stupid like. Learn my tool b***h.Like, I don't know if, I don't know if I can say that, but like, you know, um, I think what my point kind of is, is that there's, like, I look at slopes of the scaling laws and like, this slope is not working, man. We, we are at a million token con
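The two knobs in this exchange, attention work that grows linearly with head count (but quadratically with context) and expert sparsity as the ratio of activated to total experts, can be sketched with made-up numbers; none of these are Kimi K2's real dimensions:

```python
# Illustrative-only sketch of the tradeoffs above (hypothetical dimensions).

def attention_flops(seq_len, n_heads, head_dim):
    # Each head builds a seq_len x seq_len score matrix, so the work is
    # linear in the number of heads and quadratic in context length.
    return n_heads * seq_len ** 2 * head_dim

def expert_sparsity(n_experts, n_active):
    # The activated-to-total ratio: smaller means a sparser MoE.
    return n_active / n_experts

# Halving the heads halves attention work at a fixed context length.
assert attention_flops(4096, 32, 128) * 2 == attention_flops(4096, 64, 128)

# Growing total experts while keeping the activated count fixed raises
# memory capacity but leaves per-token expert compute unchanged.
assert expert_sparsity(384, 8) < expert_sparsity(128, 8)
```

The barter described above trades some of the saved attention compute for a larger, sparser pool of experts at roughly constant per-token cost.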

    EN LA CAMA con Uri Sabat
    Do This or AI Will Leave You Out of the Market: A Reinvention Guide with Brais Moure

    EN LA CAMA con Uri Sabat

    Play Episode Listen Later Mar 10, 2026 76:40


    Sign up for Brais Moure's FREE class here: https://thebigschool.com/sp/curso-de-desarrollo-ia-a-us/ *Colab. Is Artificial Intelligence going to take our jobs, or is it the tool that will give us superpowers? In this episode we talk with Brais Moure (MoureDev), one of the most influential Spanish-language tech educators, about the paradigm shift we are living through. From the arrival of disruptive tools like OpenCloud to why programming has become the "English of the 21st century," Brais explains why we shouldn't be afraid, but should instead learn to be the pilots of this technology. We talk about: the end of the barrier to entry for software; why fundamentals are more important than ever; the real impact of AI on salaries and the job market; and how a passion for video games can become a career that changes lives. If you want to understand how to ride the technology wave and not be left behind, this conversation is essential.

    Podcast – Software Engineering Daily
    Reinventing the Python Notebook with Akshay Agrawal

    Podcast – Software Engineering Daily

    Play Episode Listen Later Mar 10, 2026 46:04


    Interactive notebooks were popularized by the Jupyter project and have since become a core tool for data science, research, and data exploration. However, traditional, imperative notebooks often break down as projects grow more complex. Hidden state, non-reproducible execution, poor version control ergonomics, and difficulty reusing notebook code in real software systems make it hard to The post Reinventing the Python Notebook with Akshay Agrawal appeared first on Software Engineering Daily.

    Python Bytes
    #472 Monorepos

    Python Bytes

    Play Episode Listen Later Mar 9, 2026 28:52 Transcription Available


    Topics covered in this episode: Setting up a Python monorepo with uv workspaces cattrs: Flexible Object Serialization and Validation Learning to program in the AI age VS Code extension for FastAPI and friends Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Brian #1: Setting up a Python monorepo with uv workspaces Dennis Traub The 3 things Give the Root a Distinct Name Use workspace = true for Inter-Package Deps Use importlib Mode for pytest Michael #2: cattrs: Flexible Object Serialization and Validation cattrs is a Swiss Army knife for (un)structuring and validating data in Python. A natural alternative/follow on from DataClass Wizard Converts to ←→ from dictionaries cattrs also focuses on functional composition and not coupling your data model to its serialization and validation rules. When you're handed unstructured data (by your network, file system, database, …), cattrs helps to convert this data into trustworthy structured data. Batteries Included: cattrs comes with pre-configured converters for a number of serialization libraries, including JSON (standard library, orjson, UltraJSON), msgpack, cbor2, bson, PyYAML, tomlkit and msgspec (supports only JSON at this time). 
Brian #3: Learning to program in the AI age Jose Blanca “I teach a couple of introductory Python courses and I've been thinking about which advice to give to my students, that are studying how to program for the first time. I have collected my ideas in these blog posts” Why learning to program is as useful as ever, even with powerful AI tools available. How to use AI as a tutor rather than a shortcut, and why practice remains the key to real understanding. What the real learning objectives are: mental models, managing complexity, and thinking like a software developer. Michael #4: VS Code extension for FastAPI and friends Enhances the FastAPI development experience in Visual Studio Code Path Operation Explorer: Provides a hierarchical tree view of all FastAPI routes in your application. Search for routes: Use the Command Palette and quickly search for routes by path, method, or name. CodeLens links appear above HTTP client calls like client.get('/items'), letting you jump directly to the matching route definition. Deploy your application directly to FastAPI Cloud from the status bar with zero config. View real-time logs from your FastAPI Cloud deployed applications directly within VS Code. Install from Marketplace. Extras Brian: Guido van Rossum interviews key Python developers from the first 25 years Interview with Brett Cannon Interview with Thomas Wouters Michael: IntelliJ IDEA: The Documentary | An origin story video Cursor Joined the ACP Registry and Is Now Live in Your JetBrains IDE What hyper-personal software looks like I'm doing in-person training again (limited scope): On-site, hands-on AI engineering enablement for software teams with Michael Joke: Saas is dead

    Holdback Rack Podcast
    How are carpet sales in 2026? Reptileads.com updates with Zac of Parparts Pristine Python

    Holdback Rack Podcast

    Play Episode Listen Later Mar 9, 2026 115:58


    Join this channel to get access to perks - custom emojis, member lives, and access to the auction listings: https://www.youtube.com/channel/UCJoP2q6P8mWkBUMn45pgyAA/join   Jessica Hare - Hare Hollow Farm - Altus, OK Harehollowfarm.com Morph Market -  https://www.morphmarket.com/stores/hare_hollow_farm/ Facebook -  https://www.facebook.com/Hare-Hollow-Farm-113861266980541 Instagram -  https://www.instagram.com/hare_hollow_farm/ Youtube -  https://www.youtube.com/@unmeinohi

    airhacks.fm podcast with adam bien

    An airhacks.fm conversation with Daniel Terhorst-North (@tastapod.com) about: first computer experience with the ZX81 and its 1K memory, the 1K chess game on ZX81, the ZX Spectrum with 16K and later 48K memory, the Amstrad 128K, typing in game listings from computer magazines, Dan's brother John hacking ZX spectrum games using a hardware freeze device and memory peeking/poking, cracking game encryption and copy protection on 8-bit tape cassette games, the arms race between game publishers and hackers, cracking the Star Wars game security before its release, ZX Spectrum fan sites and retro gaming communities, classic games including 3D Monster Maze and Manic Miner and Jet Set Willy, sprite graphics innovation on the Z80 chip, first internship at Domark publishing Empire Strikes Back on ZX Spectrum and Commodore 64, second internship at IBM Hursley Park working on CICS in PL/1 and Rexx, the contrast between casual game studio culture and IBM corporate culture in the 1980s, IBM's role as a founding partner of J2EE Enterprise Java, JMS wrapping MQ Series, the reliability of MQ Series compared to later messaging technologies, finding and reporting a concurrency bug in MQ Series with JUnit tests and IBM's rapid response with an emergency patch, IBM alphaWorks portal and experimental technologies, IBM Aglets mobile Java agent framework compared to modern A2A agent protocols, Jini and JavaSpaces from Sun Microsystems with leasing and self-healing, JXTA peer-to-peer technology, IBM Jikes Compiler performance compared to javac, IBM's own JVM, JVM running on Palm Pilot around 1999, VisualAge for Java as a port of VisualAge for SmallTalk with its image-based architecture and no file system exposure, Java's coupling of class and package names to files and directories as a design weakness, the difficulty of refactoring without IDE support, Eclipse as the first IDE with proper refactoring, NetBeans IDE performance compared to Visual Studio Code, third internship writing X-ray 
machine control software in Turbo Pascal doing digital image processing, the pace of technological innovation slowing from kaikaku (abrupt change) to kaizen (continuous improvement), Douglas Adams quote about technology perception by age, DEC Alpha 64-bit Unix performance, commodity Linux hardware replacing exotic RISC machines, Apple M series chips rediscovering RISC architecture and system-on-chip design, innovation fatigue and signal-to-noise ratio in modern tech, LLMs and the trillion-dollar bet on the wrong technology, electric cars as an example of ongoing innovation, Tailwind CSS shutting down due to AI-generated code replacing paid expertise, Stack Overflow in trouble due to AI summarization, open source innovation continuing with tools like Astral's uv replacing the Python toolchain, cross-community collaboration between the Rust, Python, and Ruby ecosystems, first graduate job at Crossfield (Fuji/DuPont joint venture) doing electronic pre-press and color transformation through 4D CMYK color cubes, writing a TIFF decoder from scratch in C, Raster Image Processor technology and its connection to Adobe, transition from C++ to Java feeling quirky, joining ThoughtWorks in 2002 for enterprise Java work. Daniel Terhorst-North on Twitter: @tastapod.com

    Talk Python To Me - Python conversations for passionate developers
    #539: Catching up with the Python Typing Council

    Talk Python To Me - Python conversations for passionate developers

    Play Episode Listen Later Mar 6, 2026 61:41 Transcription Available


    You're adding type hints to your Python code, your editor is happy, autocomplete is working great. But then you switch tools and suddenly there are red squiggles everywhere. Who decides what a float annotation actually means? Or whether passing None where an int is expected should be an error? It turns out there's a five-person council dedicated to exactly these questions -- and two brand-new Rust-based type checkers are raising the bar. On this episode, I sit down with three members of the Python Typing Council -- Jelle Zijlstra, Rebecca Chen, and Carl Meyer -- to learn how the type system is governed, where the spec and the type checkers agree and disagree, and get the council's official advice on how much typing is just enough. Episode sponsors Sentry Error Monitoring, Code talkpython26 Agentic AI Course Talk Python Courses Links from the show Guests Carl Meyer: github.com Jelle Zijlstra: jellezijlstra.github.io Rebecca Chen: github.com Typing Council: github.com typing.python.org: typing.python.org details here: github.com ty: docs.astral.sh pyrefly: pyrefly.org conformance test suite project: github.com typeshed: github.com Stub files: mypy.readthedocs.io Pydantic: pydantic.dev Beartype: github.com TOAD AI: github.com PEP 747 – Annotating Type Forms: peps.python.org PEP 724 – Stricter Type Guards: peps.python.org Python Typing Repo (PRs and Issues): github.com Watch this episode on YouTube: youtube.com Episode #539 deep-dive: talkpython.fm/539 Episode transcripts: talkpython.fm Theme Song: Developer Rap
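The "None where an int is expected" question from the episode description can be made concrete. `halve` is a hypothetical function; the point is the split between what a static checker flags and what CPython actually enforces:

```python
def halve(x: int) -> float:
    # A type checker (mypy, pyright, ty, pyrefly) reports halve(None) as an
    # error, since None is not an int. CPython ignores annotations at
    # runtime and only fails when the unsupported operation executes.
    return x / 2

assert halve(4) == 2.0

try:
    halve(None)  # flagged statically; at runtime: TypeError from None / 2
except TypeError:
    pass  # the annotation raised nothing; the division did
```

This gap, between what the spec says a checker should report and what happens at runtime, is exactly the territory the Typing Council governs.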

    Stories Podcast: A Bedtime Show for Kids of All Ages

    Today we're doing a throwback episode to one of our favorites from the early days of Stories Podcast. The Lucky Drum! An old folktale from Uganda finds a Lizard and a Python performing together and then arguing over their lucky drum! Check out Stories RPG our new show where we play games like Starsworn with all your Max Goodname friends, and Gigacity Guardians featuring the brilliant firefly! https://link.chtbl.com/gigacity Draw us a picture of what you think any of the characters in this story look like, and then tag us in it on instagram @storiespodcast! We'd love to see your artwork and share it on our feed!! If you would like to support Stories Podcast, you can subscribe and give us a five star review on iTunes, check out our merch at storiespodcast.com/shop, follow us on Instagram @storiespodcast, or just tell your friends about us! Check out our new YouTube channel at youtube.com/storiespodcast. If you've ever wanted to read along with our stories, now you can! These read-along versions of our stories are great for early readers trying to improve their skills or even adults learning English for the first time. Check it out.

    Value Driven Data Science
    Episode 96: Making Better Decisions with ML and Optimisation

    Value Driven Data Science

    Play Episode Listen Later Mar 4, 2026 26:15


    Data scientists use optimisation every day when training machine learning models, without even thinking about it. But there's another type of optimisation - that many data scientists are unaware of - that can be used to dramatically boost the business value of your ML outputs. This second layer transforms predictions into optimal decisions, and it's where the real impact often happens.In this episode, Dr. Tim Varelmann joins Dr. Genevieve Hayes to explain how combining machine learning with decision optimisation creates solutions that go far beyond prediction, helping stakeholders make better decisions in uncertain environments.You'll discover:How decision optimisation differs from ML parameter tuning [02:19]Why combining predictions with optimisation multiplies value [13:36]The mindset shift needed to think in optimisation terms [22:59]How to spot immediate optimisation opportunities in your work [23:42]Guest BioDr Tim Varelmann is the founder of Bluebird Optimization and holds a PhD in Mathematical Optimisation. He is also the creator of Effortless Modeling in Python with GAMSPy, the world's first GAMSPy course.LinksBluebird Optimization WebsiteConnect with Genevieve on LinkedInBe among the first to hear about the release of each new podcast episode by signing up HERE
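The two-layer idea described above, an ML prediction feeding a decision optimisation, can be sketched in pure Python. The demand numbers stand in for model outputs, and the brute-force search stands in for a real solver such as GAMSPy:

```python
# Hypothetical setup: an ML model predicted demand for two products; we
# now choose production quantities to maximise profit under a shared
# capacity constraint.
predicted_demand = {"A": 40, "B": 25}   # units (assumed model outputs)
unit_profit = {"A": 3.0, "B": 5.0}      # profit per unit sold
capacity = 50                           # total units we can produce

best = None
for a in range(capacity + 1):
    for b in range(capacity + 1 - a):   # enforce a + b <= capacity
        revenue = (unit_profit["A"] * min(a, predicted_demand["A"])
                   + unit_profit["B"] * min(b, predicted_demand["B"]))
        if best is None or revenue > best[0]:
            best = (revenue, a, b)

# Producing 25 of each beats chasing the larger predicted demand for A:
# the optimal decision is not obvious from the prediction alone.
assert best == (200.0, 25, 25)
```

A real problem would use a proper optimisation library rather than enumeration, but the structure is the same: predictions become parameters of a decision problem.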

    PyBites Podcast
    #218: Why Python developers are learning Rust

    PyBites Podcast

    Play Episode Listen Later Mar 4, 2026 19:01 Transcription Available


    Rust is everywhere - in your tools and in your stack - and has been ranked as the most admired programming language for over a decade. Join us for a quick chat as we unpack why more Python developers are turning their attention to Rust, and why now might be the right time for you to do the same.If you've been seeing Rust pop up at work, on LinkedIn, or in your favourite Python libraries, this episode will help you understand what's going on, and whether learning Rust could give you a real edge as a Python developer.To find out more about how we help Python developers with Rust, check out the following:Rust exercise platform: http://rustplatform.com/via/pybitesRust cohort: http://scriptertorust.comArticle Julian mentioned: https://pybit.es/articles/coding-can-be-super-lonely/___

    Rants Of A Nigerian Youth
    From 12-Year-Old Teacher to Tech Innovator: The Journey Nobody Expected

    Rants Of A Nigerian Youth

    Play Episode Listen Later Mar 3, 2026 24:53


    Unlock the secrets behind a successful tech career from someone who started coding at just 16 and became a product leader through curiosity, resilience, and mentorship. If you're passionate about breaking into tech, pivoting industries, or levelling up your impact, this episode is your blueprint for transformation. At only 16, our guest built school fee management software for her mom's school, turning a simple idea into a real-world project that sparked her love for software engineering. From teaching mathematics at 12 to managing products in Nigeria's emerging tech scene, her journey defies convention and highlights that curiosity and continuous learning are your most valuable assets. She shares how her deep passion for tech was ignited by stories of badass mathematician hackers, and how she navigated the challenges of tech education: losing touch, then reigniting her skills with courses in AI and Python. We break down: how early hands-on projects can set a foundation for a thriving tech career; the critical role of mentorship and community support in Nigeria and beyond; why curiosity matters more than talent, and how it propels you across disciplines, from backend engineering to UX and product management; practical tips for transitioning into product management, even without a traditional tech background; and the importance of building strong relationships and camaraderie with your team that transcends work hours, creating a human-centred leadership style.

    Python Bytes
    #471 The ORM pattern of 2026?

    Python Bytes

    Play Episode Listen Later Mar 2, 2026 39:23 Transcription Available


    Topics covered in this episode: Raw+DC: The ORM pattern of 2026? pytest-check releases Dataclass Wizard SQLiteo - “native macOS SQLite browser built for normal people” Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Michael #1: Raw+DC: The ORM pattern of 2026? ORMs/ODMs provide great support and abstractions for developers They are not the native language of agentic AI Raw queries are trained 100x+ more than standard ORMs Using raw queries at the data access optimizes for AI coding Returning some sort of object mapped to the data optimizes for type safety and devs Brian #2: pytest-check releases 3 merged pull requests 8 closed issues at one point got to 0 PR's and 1 enhancement request Now back to 2 issues and 1 PR, but activity means it's still alive and being used. so cool Check out changelog for all mods A lot of changes around supporting mypy I've decided to NOT have the examples be fully --strict as I find it reduces readability See tox.ini for explanation But src is --strict clean now, so user tests can be --strict clean. Michael #3: Dataclass Wizard Simple, elegant wizarding tools for Python's dataclasses. Features

    早安英文-最调皮的英语电台
    Foreign Press Close Reading | AI Has Started Cyberbullying Humans! OpenClaw Is Completely Out of Control!

    早安英文-最调皮的英语电台

    Play Episode Listen Later Mar 2, 2026 24:05


    [Subscribe] New episodes every morning at 5:30, right on time. [Original text] Title: The Rise of the Bratty Machines. Body: Earlier this month, a Colorado engineer named Scott Shambaugh was minding his own business as a volunteer for a code library called matplotlib, a place where Python developers can find reusable code for common problems. His job was to accept or reject submissions from community users. Everything was going well until he rejected a submission from a user called MJ Rathbun, who was not happy about it and proceeded to publish a scathing blog post titled "Gatekeeping in Open Source: The Scott Shambaugh Story." It disparaged Shambaugh as a hypocrite with a bias against specific contributors and a fear of competition. It also issued an ominous call to arms. "Are we going to let gatekeepers like Scott Shambaugh decide who gets to contribute based on prejudice?" Key vocabulary: Shambaugh n. /ˈʃæm.bɔː/ the surname of David Shambaugh, a well-known American scholar specializing in Chinese politics and foreign policy. • Shambaugh argues that China's political system is more complex than it appears. • Many articles on US-China relations frequently cite Shambaugh's research. For the full original article and detailed study notes, follow the WeChat official account 「早安英文」 (Morning English) and reply "外刊" (foreign press). Plenty more useful English material awaits! [About the show] "Morning English: Daily Foreign Press Close Reading" walks you through the latest articles from international publications and the hottest global stories: analyzing grammar structures, breaking down long sentences, offering down-to-earth translations, and explaining key vocabulary. All selections come from top-tier international publications such as The Economist, The New York Times, The Wall Street Journal, The Washington Post, The Atlantic, Science, and National Geographic. [Who it's for] 1. Learners who follow current events and want the latest, most idiomatic English expressions. 2. Anyone who wants to improve listening, speaking, reading, and writing through authentic English. 3. English enthusiasts planning to study or travel abroad who want to pick up expressions quickly. 4. Candidates for English exams (CET-4/6, TOEFL, IELTS, postgraduate entrance exams, and so on). [What you'll get] 1. More than 1,000 close-reading lessons on foreign-press articles to broaden your expressions and cultural background. 2. Word-by-word, sentence-by-sentence explanations to systematically build vocabulary, listening, reading, and grammar. 3. Study notes with every episode, including full-text annotations, analysis of long and difficult sentences, and tricky grammar points, to clear away reading obstacles.

    Geek News Central
    Anthropic Stands Their Ground, Ethics over Money #1859

    Geek News Central

    Play Episode Listen Later Mar 1, 2026 28:00 Transcription Available


    In this episode, Ray tackles Anthropic's standoff with the U.S. Department of War after CEO Dario Amodei refused to grant unrestricted model access, citing concerns over mass surveillance and autonomous weapons. The government responded by banning Anthropic models through administrative orders. Also covered: the top 20 websites of 2026, China's $173,000 warm-blooded companion robot, Fukushima's rapidly evolving radioactive hybrid boars, a Chinese spacecraft emergency involving viewport cracks from space debris, Japan's wooden satellite built with traditional joinery, and human brain cells on a chip that learned to play Doom in just one week. - Want to start a podcast? It's easy to get started! Sign up at Blubrry - Thinking of buying a Starlink? Use my link to support the show. Subscribe to the Newsletter. Email Ray if you want to get in touch! Like and Follow Geek News Central's Facebook Page. Support my Show Sponsor: Best Godaddy Promo Codes Get 1Password Full Summary Cochrane opens the show with Anthropic's confrontation with the U.S. Department of War. CEO Dario Amodei released a public statement refusing unrestricted government access to Anthropic's AI models. Two red lines stood firm: mass domestic surveillance and fully autonomous weapons. Ray explains that these models are predictive by nature, raising serious misidentification risks. However, the government hit back hard. Administrative orders now ban Anthropic models from government use. Despite the backlash, Cochrane expresses support for the company's stance. He points listeners to a CBS interview with the CEO posted roughly nine hours before recording. Additionally, Anthropic released new models including Opus 4.5 and Sonnet 4.6. The company climbed to the number two spot on the App Store, trailing only ChatGPT and surpassing Google Gemini. Personal Updates Ray shares that February has been a demanding month. He's juggling a capstone project, two jobs, and finishing his degree.
Meanwhile, he continues working on developments at Blubrry hosting. He apologizes for inconsistent episode production and thanks listeners for their patience. Top 20 Websites of 2026 A Visual Capitalist chart ranks the most visited websites of 2026. Google holds the top spot, followed by YouTube. Facebook, Instagram, ChatGPT, Reddit, Wikipedia, X, and WhatsApp round out the upper rankings. Notably, DuckDuckGo appears at rank seventeen as a privacy-focused search alternative. Sponsor: GoDaddy Economy hosting $6.99/month, WordPress hosting $12.99/month, domains $11.99. Website builder trial available. Use codes at geeknewscentral.com/godaddy to support the show. Anthropic Retires Claude Opus 3 Cochrane discusses Anthropic's decision to retire Claude Opus 3. In a unique move, the company gave the model a Substack-style blog to reflect on its own existence. Reactions online were mixed, with both supporters and critics engaging in the conversation. China's $173,000 Warm-Blooded Companion Robot From ZME Science, Ray covers China's new humanoid robot designed as a warm-blooded companion. Priced at $173,000, it features conventional robotics hardware, sensors, cameras, and autonomous navigation. A built-in heating element maintains body warmth. Cochrane comments humorously on the growing market for companion robots. Windows XP Green Hill Found and Photographed From Tom's Hardware, someone tracked down and photographed the actual location of the iconic Windows XP "Green Hill" wallpaper. The Reddit post sparked a wave of nostalgia in the community. Fukushima's Radioactive Hybrid Boars From AZ Animals, domestic pigs that escaped after the Fukushima disaster hybridized with wild boars. Their DNA reveals rapid evolutionary changes driven by the altered radioactive landscape. These aggressive hybrids now complicate wildlife management and rewilding efforts in the region.
Shenzhou 20 Spacecraft Emergency Chinese astronauts aboard Shenzhou 20 discovered cracks in their spacecraft's viewport during what became the nation's first spaceflight emergency. Space debris likely caused the damage. The crew switched to an alternative return capsule. Multiple protective layers kept the situation manageable. Japan's Wooden Satellite Japanese teams plan to launch the first wooden satellite. Built with magnolia wood panels assembled using traditional Japanese joinery methods, the biodegradable design aims to reduce aluminum particle pollution from satellites burning up during atmospheric reentry. Human Brain Cells Play Doom Building on previous work where living neurons played Pong, an independent developer used Python to train human brain cell clusters on microelectrode arrays to play Doom. The cells learned in roughly one week. Cochrane highlights how open knowledge sharing accelerated the project dramatically. He also raises ethical questions about training sentient brain cells, connecting the topic to evolving views on sentience in crustaceans and other organisms. The post Anthropic Stands Their Ground, Ethics over Money #1859 appeared first on Geek News Central.

    Audience 1st
    The Kid Who Googled "How to Become a Hacker" and Ended Up Wrecking Real Ones

    Audience 1st

    Play Episode Listen Later Mar 1, 2026 36:57


    John Hammond was a kid who Googled "how to become a hacker" and took it seriously. He learned Python, found his way into the Coast Guard Academy, and remembers squaring down a stairwell at two in the morning - rigid military posture, full indoctrination protocol - vibrating with excitement because he was about to sit next to smart people and solve security problems for a living. That visceral, middle-of-the-night certainty became the foundation of everything that followed.Today he's a principal security researcher on the Adversary Tactics team at Huntress, employee number twenty-eight at a company that's now over six hundred people. He's also one of the most recognized cybersecurity educators on the internet, producing hour-long exploit deep dives on YouTube that get more genuine engagement than most vendors' entire content budgets combined.In this episode, John talks about why the cybersecurity industry is stuck on a treadmill it may never get off and whether the business model actually depends on that treadmill keeping pace.He explains why Huntress is deliberately slow about integrating AI into their human-led SOC and why that uncertainty is more credible than the confident claims coming from thousands of other cybersecurity vendors in the space.We also get into territory that most cybersecurity conversations gloss over.John makes the case that the security awareness gap isn't informational - the information exists, he's made it free on YouTube - it's motivational, and most training programs are built around what the security team thinks is important rather than what the end user actually cares about.He talks about why checklists function as a ceiling on curiosity, and why the discoveries that actually matter are the ones that never make it onto the procedure document.And he gets real about burnout - the arc from obsessive passion to unsustainable output that the industry celebrates in keynotes and ignores in its operational expectations.There's a moment near 
the end where I asked him to describe Huntress in three words and he gave me an internal mantra - ethical badasses - that says more about how the company thinks about culture as a competitive weapon than any mission statement ever could.This is a conversation about what happens when someone who never optimized for credibility becomes one of the most credible voices in the room.Listen and enjoy.A special thanks to our friends at Huntress for partnering with us to tell this story. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit audience1st.substack.com

    Talk Python To Me - Python conversations for passionate developers
    #538: Python in Digital Humanities

    Talk Python To Me - Python conversations for passionate developers

    Play Episode Listen Later Feb 28, 2026 72:27 Transcription Available


    Digital humanities sounds niche, until you realize it can mean a searchable archive of U.S. amendment proposals, Irish folklore, or pigment science in ancient art. Today I'm talking with David Flood from Harvard's DARTH team about an unglamorous problem: What happens when the grant ends but the website can't. His answer, static sites, client-side search, and sneaky Python. Let's dive in. Episode sponsors Sentry Error Monitoring, Code talkpython26 Command Book Talk Python Courses Links from the show Guest David Flood: davidaflood.com DARTH: digitalhumanities.fas.harvard.edu Amendments Project: digitalhumanities.fas.harvard.edu Fionn Folklore Database: fionnfolklore.org Mapping Color in History: iiif.harvard.edu Apatosaurus: apatosaurus.io Criticus: github.com github.com/palewire/django-bakery: github.com sigsim.acm.org/conf/pads/2026/blog/artifact-evaluation: sigsim.acm.org Hugo: gohugo.io Water Stories: waterstories.fas.harvard.edu Tsumeb Mine Notebook: tmn.fas.harvard.edu Dharma and Punya: dharmapunya2019.org Pagefind library: pagefind.app django_webassembly: github.com Astro Static Site Generator: astro.build PageFind Python Lib: pypi.org Frozen-Flask: frozen-flask.readthedocs.io Watch this episode on YouTube: youtube.com Episode #538 deep-dive: talkpython.fm/538 Episode transcripts: talkpython.fm Theme Song: Developer Rap

    The Real Python Podcast
    Overcoming Testing Obstacles With Python's Mock Object Library

    The Real Python Podcast

    Play Episode Listen Later Feb 27, 2026 39:41


    Do you have complex logic and unpredictable dependencies that make it hard to write reliable tests? How can you use Python's mock object library to improve your tests? Christopher Trudeau is back on the show this week with another batch of PyCoder's Weekly articles and projects.
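The dependency-replacement pattern this episode covers looks like the following with the standard library's unittest.mock; the client API and the /health route are invented for illustration:

```python
from unittest import mock

def get_status(client):
    # In production, client.fetch would hit the network; the test injects
    # a Mock so the result is deterministic and no real call happens.
    return client.fetch("/health")["status"]

fake = mock.Mock()
fake.fetch.return_value = {"status": "ok"}

assert get_status(fake) == "ok"
fake.fetch.assert_called_once_with("/health")  # verify the interaction
```

Because Mock records every call, the test can assert both the return value and that the dependency was used as expected.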

    DataTalks.Club
    Analytics Engineering with dbt Workshop - Juan Manuel Perafan

    DataTalks.Club

    Play Episode Listen Later Feb 27, 2026 83:57


    In this talk, Juan, an Analytics Engineer and the author of Fundamentals of Analytics Engineering, shares his professional journey from studying psychological research in Colombia to becoming one of the first analytics engineers in the Netherlands. We explore the evolution of the role, the shift toward engineering rigor in data modeling, and how the landscape of tools like dbt and Databricks is changing the way teams work. You'll learn about: - The fundamental differences between traditional BI engineering and modern analytics engineering. - How to bridge the gap between business stakeholders and technical data infrastructure. - The technical "glue" that connects Python and SQL for robust data pipelines. - The importance of automated testing (generic vs. singular tests) to prevent "silent" data failures. - Strategies for modeling messy, fragmented source data into a unified "business reality." - The current state of the "Lakehouse" paradigm and how it impacts storage and compute costs. - Expert advice on navigating the dbt ecosystem and its emerging competitors. Links: - DE Course: https://github.com/DataTalksClub/data-engineering-zoomcamp - Luma: https://luma.com/0uf7mmup TIMECODES: 0:00 Juan's psychological research and transition to data 4:36 Riding the wave: The early days of analytics engineering 7:56 Breaking down the gap between analysts and engineers 11:03 The art of turning business reality into clean data 16:25 Why data engineering is about safety, not just speed 20:53 Reimagining data modeling in the modern era 26:53 To split or not to split: Finding the right team roles 30:35 Python, SQL, and the technical toolkit for success 38:41 How to stop manually testing your data dashboards 46:34 Bringing software engineering rigor to data workflows 49:50 Must-read books and resources for mastering the craft 55:42 The future of dbt and the shifting tool landscape 1:00:29 Deciphering the lakehouse: Warehousing in the cloud 1:11:16 Pro-tips for starting your data engineering journey 1:14:40 The
big debate: Databricks vs. Snowflake1:18:28 Why every data professional needs a local communityThis talk is designed for data analysts looking to level up their engineering skills, data engineers interested in the business-logic layer, and data leaders trying to structure their teams more effectively. It is particularly valuable for those preparing for the Data Engineering Zoomcamp or anyone looking to transition into an Analytics Engineering role.Connect with Juan- Linkedin - https://www.linkedin.com/in/jmperafan/ - Website - https://juanalytics.com/Connect with DataTalks.Club:- Join the community - https://datatalks.club/slack.html- Subscribe to our Google calendar to have all our events in your calendar - https://calendar.google.com/calendar/r?cid=ZjhxaWRqbnEwamhzY3A4ODA5azFlZ2hzNjBAZ3JvdXAuY2FsZW5kYXIuZ29vZ2xlLmNvbQ- Check other upcoming events - https://lu.ma/dtc-events- GitHub: https://github.com/DataTalksClub- LinkedIn - https://www.linkedin.com/company/datatalks-club/ - Twitter - https://twitter.com/DataTalksClub - Website - https://datatalks.club/

    Packet Pushers - Full Podcast Feed
    NAN114: Demystifying Automation Tools, Processes, and Culture Gates

    Packet Pushers - Full Podcast Feed

    Play Episode Listen Later Feb 25, 2026 60:28


    Eric sits down with David Henderson, Principal Architect for NetDevOps at Presidio, to discuss the practical journey for network engineers transitioning from manual CLI operations to scalable NetDevOps and automation. They discuss how traditional networking knowledge and certifications are foundational, and suggest essential tools and habits for beginning your automation journey. David also shares a... Read more »

    Packet Pushers - Fat Pipe
    NAN114: Demystifying Automation Tools, Processes, and Culture Gates

    Packet Pushers - Fat Pipe

    Play Episode Listen Later Feb 25, 2026 60:28


    Eric sits down with David Henderson, Principal Architect for NetDevOps at Presidio, to discuss the practical journey for network engineers transitioning from manual CLI operations to scalable NetDevOps and automation. They discuss how traditional networking knowledge and certifications are foundational, and suggest essential tools and habits for beginning your automation journey. David also shares a... Read more »

    Python Bytes
    #470 A Jolting Episode

    Python Bytes

    Play Episode Listen Later Feb 23, 2026 25:29 Transcription Available


    Topics covered in this episode:
    - Better Python tests with inline-snapshot
    - jolt: battery intelligence for your laptop
    - Markdown code formatting with ruff
    - act: run your GitHub actions locally
    - Extras
    - Joke

    Watch on YouTube

    About the show
    Sponsored by us! Support our work through:
    - Our courses at Talk Python Training
    - The Complete pytest Course
    - Patreon Supporters

    Connect with the hosts
    - Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
    - Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
    - Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)

    Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list; we'll never share it.

    Brian #1: Better Python tests with inline-snapshot
    Alex Hall, on the Pydantic blog. Great for testing complex data structures. Allows you to write a test like this:

        from inline_snapshot import snapshot

        def test_user_creation():
            user = create_user(id=123, name="test_user")
            assert user.dict() == snapshot({})

    Then run pytest --inline-snapshot=fix, and the library updates the test source code to look like this:

        def test_user_creation():
            user = create_user(id=123, name="test_user")
            assert user.dict() == snapshot({
                "id": 123,
                "name": "test_user",
                "status": "active"
            })

    Now, when you run the tests without "fix", the collected data is used for comparison. Awesome to be able to visually inspect the test data right there in the test code.
    Projects mentioned: inline-snapshot, pytest-examples, syrupy, dirty-equals, executing

    Michael #2: jolt
    Battery intelligence for your laptop, with support for both macOS and Linux:
    - Battery Status: charge percentage, time remaining, health, and cycle count
    - Power Monitoring: system power draw with CPU/GPU breakdown
    - Process Tracking: processes sorted by energy impact with color-coded severity
    - Historical Graphs: track battery and power trends over time
    - Themes: 10+ built-in themes with dark/light auto-detection
    - Background Daemon: collect historical data even when the TUI isn't running
    - Process Management: kill energy-hungry processes directly

    Brian #3: Markdown code formatting with ruff
    Suggested by Matthias Schoettle. ruff can now format code within Markdown files. It will format valid Python code in code blocks marked with python, py, python3, or py3, and also recognizes pyi as Python type stub files. Formatting can be switched off for a region with special comment blocks. Requires preview mode:

        [tool.ruff.lint]
        preview = true

    Michael #4: act - run your GitHub actions locally
    Run your GitHub Actions locally! Why would you want to do this? Two reasons:
    - Fast feedback: rather than having to commit/push every time you want to test out the changes you are making to your .github/workflows/ files (or for any changes to embedded GitHub actions), you can use act to run the actions locally. The environment variables and filesystem are all configured to match what GitHub provides.
    - Local task runner: I love make. However, I also hate repeating myself. With act, you can use the GitHub Actions defined in your .github/workflows/ to replace your Makefile!
    When you run act, it reads in your GitHub Actions from .github/workflows/ and determines the set of actions that need to be run. It uses the Docker API to either pull or build the necessary images, as defined in your workflow files, and finally determines the execution path based on the dependencies that were defined. Once it has the execution path, it uses the Docker API to run containers for each action based on the images prepared earlier. The environment variables and filesystem are all configured to match what GitHub provides.

    Extras
    Michael:
    - Winter is coming: Frozendict accepted
    - Django ORM stand-alone
    - Command Book app announcement post

    Joke: Plug 'n Paste
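    The record-on-first-run, compare-thereafter idea behind inline-snapshot can be sketched in plain Python. This is a toy illustration only: the real library rewrites the test source file itself when run with pytest --inline-snapshot=fix, and the create_user function below is a hypothetical stand-in for application code under test.

```python
# Toy sketch of the snapshot-testing mechanism described above.
# NOT the real inline-snapshot API beyond the comparison shape:
# the first comparison records the value; later ones enforce it.

class ToySnapshot:
    """Records the first value compared against it, then enforces it."""

    def __init__(self):
        self._recorded = None
        self._has_value = False

    def __eq__(self, other):
        if not self._has_value:
            # "fix" mode: capture the observed value as the snapshot
            self._recorded = other
            self._has_value = True
            return True
        # normal mode: strict comparison against the stored snapshot
        return self._recorded == other


def create_user(id, name):
    # Hypothetical application code under test.
    return {"id": id, "name": name, "status": "active"}


snap = ToySnapshot()
assert create_user(123, "test_user") == snap  # first run: records
assert create_user(123, "test_user") == snap  # second run: compares
print(snap._recorded["status"])  # the captured snapshot is inspectable
```

    The real library goes further: rather than holding the snapshot in memory, it edits the snapshot({}) call in your test file so the captured data is reviewable in version control, which is exactly the "visually inspect the test data right there in the test code" benefit mentioned above.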

    The Art of Fatherhood Podcast
    Travis Kennedy Talks Fatherhood, The Whyte Python World Tour & More 

    The Art of Fatherhood Podcast

    Play Episode Listen Later Feb 23, 2026 32:12


    Travis Kennedy sits down with me to talk about his fatherhood journey. He talks about the values he looks to instill in his kids as they grow up. In addition, he shares what his kids have taught him. After that, we talk about his book, The Whyte Python World Tour. We talk about the inspiration for the book and how it will be turned into a movie. Plus, he gives some insight on whether any of the characters were inspired by himself or others. He also talks about how he approaches storytelling and the importance of creating interesting characters. Lastly, we finish the interview with the Fatherhood Quick Five.

    About Travis Kennedy
    Travis Kennedy's work appears in the Best New England Crime Stories and Best American Mystery Stories anthologies. He is the Grand Prize Winner of ScreenCraft's 2021 Cinematic Book Contest for “Sharks in the Valley,” to be published as Welcome to Redemption. He lives in Scarborough, Maine, with his wife and their two children. The Whyte Python World Tour is his debut novel. Make sure you follow Travis on Instagram at @kennedywriting, and pick up The Whyte Python World Tour wherever you purchase books.

    SLIDEMVP Is This Week's Podcast Sponsor
    As a dad and coach, inspired by some awkward slides and makeshift cardboard sliding tools, Coach Robby asked: “How can I help players slide better?” After countless brainstorming sessions and prototypes, the SLIDEMVP™ was born. Players immediately had fun, and their sliding skills improved dramatically. To further support athletes, Coach Robby has hosted multiple sliding clinics, building confidence and teaching techniques like the pop-up slide. Proudly manufactured in the USA, SLIDEMVP™ is player-tested and coach-approved. It enhances sliding technique, boosts speed, and improves agility on the basepaths. Most importantly, SLIDEMVP™ helps players build confidence and take their game to the next level. To learn more, go to their website at SlideMVP.com.

    About The Art of Fatherhood Podcast
    The Art of Fatherhood Podcast follows the journey of fatherhood. Your host, Art Eddy, talks with fantastic dads from all around the world who share their thoughts on fatherhood. You get a unique perspective on fatherhood from guests like Bob Odenkirk, Hank Azaria, Joe Montana, Kevin Smith, Danny Trejo, Jerry Rice, Jeff Foxworthy, Patrick Warburton, Jeff Kinney, Paul Sun-Hyung Lee, Kyle Busch, Dennis Quaid, Dwight Freeney and many more.

    The John Batchelor Show
    S8 Ep497: Jeremy Zakis reports irregular weather is driving venomous snakes into unusual residential locations, with a Victorian woman startled by a copperhead wrapping around her leg while Queensland's Whitsunday Islands face a python epidemic leading t

    The John Batchelor Show

    Play Episode Listen Later Feb 22, 2026 10:17


    Jeremy Zakis reports irregular weather is driving venomous snakes into unusual residential locations, with a Victorian woman startled by a copperhead wrapping around her leg while Queensland's Whitsunday Islands face a python epidemic leading to tourist warnings about painful defensive bites. 2

    The John Batchelor Show
    S8 Ep498: Jeremy Zakis reports irregular weather is driving venomous snakes into unusual residential locations, with a Victorian woman startled by a copperhead wrapping around her leg while Queensland's Whitsunday Islands face a python epidemic leading t

    The John Batchelor Show

    Play Episode Listen Later Feb 22, 2026 10:17


    Jeremy Zakis reports irregular weather is driving venomous snakes into unusual residential locations, with a Victorian woman startled by a copperhead wrapping around her leg while Queensland's Whitsunday Islands face a python epidemic leading to tourist warnings about painful defensive bites. 3

    Talk Python To Me - Python conversations for passionate developers
    #537: Datastar: Modern web dev, simplified

    Talk Python To Me - Python conversations for passionate developers

    Play Episode Listen Later Feb 21, 2026 76:37 Transcription Available


    You love building web apps with Python, and HTMX got you excited about the hypermedia approach -- let the server drive the HTML, skip the JavaScript build step, keep things simple. But then you hit that last 10%: you need Alpine.js for interactivity, your state gets out of sync, and suddenly you're juggling two unrelated libraries that weren't designed to work together. What if there was a single 11-kilobyte framework that gave you everything HTMX and Alpine do, and more, with real-time updates, multiplayer collaboration out of the box, and performance so fast you're actually bottlenecked by the monitor's refresh rate? That's Datastar. On this episode, I sit down with its creator Delaney Gillilan, core maintainer Ben Croker, and Datastar convert Chris May to explore how this backend-driven, server-sent-events-first framework is changing the way full-stack developers think about the modern web.

    Episode sponsors:
    - Sentry Error Monitoring, Code talkpython26
    - Command Book
    - Talk Python Courses

    Links from the show:
    Guests:
    - Delaney Gillilan: linkedin.com
    - Ben Croker: x.com
    - Chris May: everydaysuperpowers.dev

    - Datastar: data-star.dev
    - HTMX: htmx.org
    - AlpineJS: alpinejs.dev
    - Core Attribute Tour: data-star.dev
    - data-star.dev/examples: data-star.dev
    - github.com/starfederation/datastar-python: github.com
    - VSCode: marketplace.visualstudio.com
    - OpenVSX: open-vsx.org
    - PyCharm/Intellij plugin: plugins.jetbrains.com
    - data-star.dev/datastar_pro: data-star.dev
    - gg: discord.gg
    - HTML-ivating your Django web app's experience with HTMX, AlpineJS, and streaming HTML - Chris May: www.youtube.com
    - Senior Engineer tries Vibe Coding: www.youtube.com
    - 1 Billion Checkboxes: checkboxes.andersmurphy.com
    - Game of life example: example.andersmurphy.com
    - Watch this episode on YouTube: youtube.com
    - Episode #537 deep-dive: talkpython.fm/537
    - Episode transcripts: talkpython.fm
    - Theme Song: Developer Rap

    Your Next Million
    Why Most AI Agencies Fail. (The $307 Billion Mistake)

    Your Next Million

    Play Episode Listen Later Feb 20, 2026 23:01


    Everyone says you need to "Start an AI Agency" to make millions in 2026. And technically, the hype is there ($307 Billion was spent on AI implementations last year). But if you're reading this, you probably know the uncomfortable truth. Most of those projects are failing. The problem isn't the "AI" or the "Client." It's the Learning Gap. Most agencies are selling "tools" (chatbots) when businesses are desperate for "outcomes" (custom automation). The method that actually saved my business $44,000/year—and is generating up to $10 returns for the top 5% of companies—is simple: The Architect Method.

    So today, I'm going to show you how to stop "prompting" and start "architecting." We are going to build a custom, enterprise-grade solution that replaces expensive software... without writing a single line of code yourself. We analyze the conflicting data between the IDC Spending Report and the MIT Failure Study. We then break down the "Architect" logic that separates the 95% who fail from the 5% who succeed. Finally, we use Claude to run a "Tech Stack Interview" and build a recursive, self-correcting automation system for High Level and Google Workspace.

    Anyway, here is how we will use AI to stop guessing and start building:

    Step 1: The "$307 Billion Lie." We look at the stats (95% failure rate) and explain why the "Standard Agency Model" is dangerous for beginners. If you are just selling "implementation," you are selling a commodity.

    Step 2: The "Learning Gap" (MIT Study). We reveal why AI tools "drift" and fail over time. The secret isn't better prompting—it's building a system that understands your specific Tech Stack context before it writes a single word.

    Step 3: The "Architect" Protocol. Most people ask AI to "do the work." I show you how to ask AI to "design the blueprint" first. We use the Recursive Self-Correction technique to have the AI write its own Python scripts and fix its own errors.

    Step 4: The "Tech Stack Interview." We watch live as I get the AI to interview me about my specific setup (High Level, Gmail, Custom Database). This ensures the code it writes actually works for my business, eliminating the "Hallucination" problem.

    If you want to be part of the 5% making AI work instead of the 95% burning cash, this video shows you the shift you need to make.

    Hanselminutes - Fresh Talk and Tech for Developers
    That's good Mojo - Creating a Programming Language for an AI world with Chris Lattner

    Hanselminutes - Fresh Talk and Tech for Developers

    Play Episode Listen Later Feb 19, 2026 41:24


    What does it take to design a programming language from scratch when the target isn't just CPUs, but GPUs, accelerators, and the entire AI stack? In this episode, I sit down with legendary language architect Chris Lattner to talk about Mojo — his ambitious attempt to rethink systems programming for the machine learning era. We trace the arc from LLVM and Clang to Swift and now Mojo, unpacking the lessons Chris has carried forward into this new language. Mojo aims to combine Python's ergonomics with C-level performance, but the real story is deeper: memory ownership, heterogeneous compute, compile-time metaprogramming, and giving developers precise control over how AI workloads hit silicon. Chris shares the motivation behind Modular, why today's AI infrastructure demands new abstractions, and how Mojo fits into a rapidly evolving ecosystem of ML frameworks and hardware backends. We also dig into developer experience, safety vs performance tradeoffs, and what it means to build a language that spans research notebooks all the way down to kernel-level execution.

    The CyberWire
    Stealer in the status bar. [Research Saturday]

    The CyberWire

    Play Episode Listen Later Feb 14, 2026 15:34


    Today we have Ziv Mador, VP of Security Research from LevelBlue SpiderLabs discussing their work on "SpiderLabs IDs New Banking Trojan Distributed Through WhatsApp." Researchers at LevelBlue SpiderLabs have identified a new Brazilian banking Trojan dubbed Eternidade Stealer, spread through WhatsApp hijacking and social engineering campaigns that use a Python-based worm to steal contacts and distribute malicious MSI installers. The Delphi-compiled malware targets Brazilian victims, profiles infected systems, dynamically retrieves its command-and-control server via IMAP email, and deploys banking overlays to harvest credentials from financial institutions and cryptocurrency platforms. The campaign reflects the continued evolution of Brazil's cybercrime ecosystem, combining WhatsApp propagation, geofencing, encrypted C2 communications, and process injection to maintain stealth and persistence. The research can be found here: SpiderLabs IDs New Banking Trojan Distributed Through WhatsApp Learn more about your ad choices. Visit megaphone.fm/adchoices

    Grumpy Old Geeks
    732: We're Not In the Files!

    Grumpy Old Geeks

    Play Episode Listen Later Feb 7, 2026 76:06


    In this week's FOLLOW UP, Bitcoin is down 15%, miners are unplugging rigs because paying eighty-seven grand to mine a sixty-grand coin finally failed the vibes check, and Grok is still digitally undressing men—suggesting Musk's “safeguards” remain mostly theoretical, which didn't help when X offices got raided in France. Spain wants to ban social media for kids under 16, Egypt is blocking Roblox outright, and governments everywhere are flailing at the algorithmic abyss.

    IN THE NEWS, Elon Musk is rolling xAI into SpaceX to birth a $1.25 trillion megacorp that wants to power AI from orbit with a million satellites, because space junk apparently wasn't annoying enough. Amazon admits a “high volume” of CSAM showed up in its AI training data and blames third parties, Waymo bags a massive $16 billion to insist robotaxis are working, Pinterest reportedly fires staff who built a layoff-tracking tool, and Sam Altman gets extremely cranky about Claude's Super Bowl ads hitting a little too close to home.

    For MEDIA CANDY, we've got Shrinking, the Grammys, Star Trek: Starfleet Academy's questionable holographic future, Neil Young gifting his catalog to Greenland while snubbing Amazon, plus Is It Cake? Valentines and The Rip.

    In APPS & DOODADS, we test Sennheiser earbuds, mess with Topaz Video, skip a deeply cursed Python script that checks LinkedIn for Epstein connections, and note that autonomous cars and drones will happily obey prompt injection via road signs—defeated by a Sharpie.

    IN THE LIBRARY, there's The Regicide Report, a brutal study finding early dementia signals in Terry Pratchett's novels, Neil Gaiman denying allegations while announcing a new book, and THE DARK SIDE WITH DAVE, vibing with The Muppet Show as Disney names a new CEO. We round it out with RentAHuman.ai dread relief via paper airplane databases, free Roller Coaster Tycoon, and Sir Ian McKellen on Colbert—still classy in the digital wasteland.

    Sponsors:
    - DeleteMe - Get 20% off your DeleteMe plan when you go to JoinDeleteMe.com/GOG and use promo code GOG at checkout.
    - SquareSpace - go to squarespace.com/GRUMPY for a free trial. And when you're ready to launch, use code GRUMPY to save 10% off your first purchase of a website or domain.
    - Private Internet Access - Go to GOG.Show/vpn and sign up today. For a limited time only, you can get OUR favorite VPN for as little as $2.03 a month.
    - SetApp - With a single monthly subscription you get 240+ apps for your Mac. Go to SetApp and get started today!!!
    - 1Password - Get a great deal on the only password manager recommended by Grumpy Old Geeks! gog.show/1password

    Show notes at https://gog.show/732

    FOLLOW UP
    - Bitcoin drops 15%, briefly breaking below $61,000 as sell-off intensifies, doubts about crypto grow
    - Bitcoin Is Crashing So Hard That Miners Are Unplugging Their Equipment
    - Grok, which maybe stopped undressing women without their consent, still undresses men
    - X offices raided in France as UK opens fresh investigation into Grok
    - Spain set to ban social media for children under 16
    - Egypt to block Roblox for all users

    IN THE NEWS
    - Elon Musk Is Rolling xAI Into SpaceX—Creating the World's Most Valuable Private Company
    - SpaceX wants to launch a constellation of a million satellites to power AI needs
    - A potential Starlink competitor just got FCC clearance to launch 4,000 satellites
    - Amazon discovered a 'high volume' of CSAM in its AI training data but isn't saying where it came from
    - Waymo raises massive $16 billion round at $126 billion valuation, plans expansion to 20+ cities
    - Pinterest Reportedly Fires Employees Who Built a Tool to Track Layoffs
    - Sam Altman got exceptionally testy over Claude Super Bowl ads

    MEDIA CANDY
    - Shrinking
    - Star Trek: Starfleet Academy
    - The Rip
    - Neil Young gifts Greenland free access to his music and withdraws it from Amazon over Trump
    - Is it Cake? Valentines

    APPS & DOODADS
    - Sennheiser Consumer Audio IE 200 In-Ear Audiophile Headphones - TrueResponse Transducers for Neutral Sound, Impactful Bass, Detachable Braided Cable with Flexible Ear Hooks - Black
    - Sennheiser Consumer Audio CX 80S In-ear Headphones with In-line One-Button Smart Remote – Black
    - Topaz Video
    - Epstein
    - Autonomous cars, drones cheerfully obey prompt injection by road sign

    AT THE LIBRARY
    - The Regicide Report (Laundry Files Book 14) by Charles Stross
    - Scientists Found an Early Signal of Dementia Hidden in Terry Pratchett's Novels
    - Neil Gaiman Denies the Allegations Against Him (Again) While Announcing a New Book

    THE DARK SIDE WITH DAVE
    - Dave Bittner
    - The CyberWire
    - Hacking Humans
    - Caveat
    - Control Loop
    - Only Malware in the Building
    - The Muppet Show
    - Disney announces Josh D'Amaro will be its new CEO after Iger departs
    - A Database of Paper Airplane Designs: Hours of Fun for Kids & Adults Alike
    - Online (free!) version of Roller Coaster Tycoon
    - Speaking of coasters, here's the current world champion.
    - I am hoping this is satire...
    - Sir Ian McKellen on Colbert.

    CLOSING SHOUT-OUTS
    - Catherine O'Hara: The Grande Dame of Off-Center Comedy
    - Standing with Sam 'Balloon Man' Martinez

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.