POPULARITY
Categories
In this sponsored episode, Eric sits down with Lee Peterson, VP of Product Management for Secure WAN, at Cisco. Together they discuss how Cisco Unified Branch is helping organizations scale, automate, and secure their distributed environments. They also define the Branch Network, discuss the major challenges facing network teams, and walk through Cisco’s vision of... Read more »
In this episode, Nick talks about Trump Takes On Ungrateful Ilhan, Nicotine Naysayers Eat It, FL Man Kills Tourists, Swalwell Can't Run, Doomed Delivery, and A Python's Nap! The FULL SHOW is live streaming & FREE-ONLY on Rumble! Join our LIVE CHAT at 6pm ET every Mon-Thu or watch the FULL EPISODE anytime on demand after 7pm ET. Follow my Channel and get notified! https://rumble.com/c/TheNickDiPaoloShow MERCH - Grab some mugs, hats, hoodies, shirts, stickers etc… https://shop.nickdip.com/ PERSONAL VIDEO FROM ME – Send someone a personal video from me! Go to https://shoutout.us/nickdipaolo or www.cameo.com/nickdipaolo SOCIALS/COMEDY- Follow me on Socials or Stream some of my Comedy! https://nickdipaolo.komi.io/
Topics covered in this episode: Better Django management commands with django-click and django-typer PSF Lands a $1.5 million sponsorship from Anthropic How uv got so fast PyView Web Framework Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Brian #1: Better Django management commands with django-click and django-typer Lacy Henschel Extend Django manage.py commands for your own project, for things like data operations API integrations complex data transformations development and debugging Extending is built into Django, but it looks easier, less code, and more fun with either django-click or django-typer, two projects supported through Django Commons Michael #2: PSF Lands a $1.5 million sponsorship from Anthropic Anthropic is partnering with the Python Software Foundation in a landmark funding commitment to support both security initiatives and the PSF's core work. The funds will enable new automated tools for proactively reviewing all packages uploaded to PyPI, moving beyond the current reactive-only review process. The PSF plans to build a new dataset of known malware for capability analysis The investment will sustain programs like the Developer in Residence initiative, community grants, and infrastructure like PyPI. Brian #3: How uv got so fast Andrew Nesbitt It's not just be cause “it's written in Rust”. Recent-ish standards, PEPs 518 (2016), 517 (2017), 621 (2020), and 658 (2022) made many uv design decisions possible And uv drops many backwards compatible decisions kept by pip. Dropping functionality speeds things up. “Speed comes from elimination. Every code path you don't have is a code path you don't wait for.” Some of what uv does could be implemented in pip. Some cannot. Andrew discusses different speedups, why they could be done in Python also, or why they cannot. I read this article out of interest. But it gives me lots of ideas for tools that could be written faster just with Python by making design and support decisions that eliminate whole workflows. Michael #4: PyView Web Framework PyView brings the Phoenix LiveView paradigm to Python Recently interviewed Larry on Talk Python Build dynamic, real-time web applications using server-rendered HTML Check out the examples. See the Maps demo for some real magic How does this possibly work? See the LiveView Lifecycle. Extras Brian: Upgrade Django, has a great discussion of how to upgrade version by version and why you might want to do that instead of just jumping ahead to the latest version. And also who might want to save time by leapfrogging Also has all the versions and dates of release and end of support. The Lean TDD book 1st draft is done. Now available through both pythontest and LeanPub I set it as 80% done because of future drafts planned. I'm working through a few submitted suggestions. Not much feedback, so the 2nd pass might be fast and mostly my own modifications. It's possible. I'm re-reading it myself and already am disappointed with page 1 of the introduction. I gotta make it pop more. I'll work on that. Trying to decide how many suggestions around using AI I should include. It's not mentioned in the book yet, but I think I need to incorporate some discussion around it. Michael: Python: What's Coming in 2026 Python Bytes rewritten in Quart + async (very similar to Talk Python's journey) Added a proper MCP server at Talk Python To Me (you don't need a formal MCP framework btw) Example one: latest-episodes-mcp.png Example two: which-episodes-mcp.webp Implmented /llms.txt for Talk Python To Me (see talkpython.fm/llms.txt ) Joke: Reverse Superman
Travis Thompson is a lifelong Floridian outdoorsman who serves as the Executive Director of All Florida. This conservation group's focus is to "develop a coalition of advocates and artists, hunters and hikers, fishermen and families, to ensure that Florida's future is wild and wonderful." During today's discussion, Travis talks about the work All Florida is [...]
BRISBANE WOMAN WAKES TO FIND 8-FOOT PYTHON ON CHEST Colleague Jeremy Zakis. Zakis recounts a shocking story from Brisbane where a sleeping woman mistook an 8-foot python curled on her chest for her dog. Her husband alerted her, and a snake catcher removed the heavy, non-venomous intruder, which likely entered the home through plantation shutters seeking warmth and comfort.
How often have you heard about the speed of Python? What's actually being measured, where are the bottlenecks---development time or run time---and which matters more for productivity? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.
We follow up on episode 104 from September last year when we promised to tackle some Linux projects including moving to Immich and Jellyfin, learning about Docker Compose and Python, and ditching Synology. Support us on Patreon and get an ad-free RSS feed with early episodes sometimes See our contact page for ways to get in touch. Subscribe to the RSS feed.
Eric Chou is joined by Dr. Levi Perigo, Scholar in Residence and Professor of Network Engineering at the University of Colorado, Boulder. They discuss Levi's non-traditional career path from being in the network automation industry for 20 years before shifting to academia and co-founding QuivAR. Levi also dives into the success of the CU Boulder... Read more »
Eric Chou is joined by Dr. Levi Perigo, Scholar in Residence and Professor of Network Engineering at the University of Colorado, Boulder. They discuss Levi's non-traditional career path from being in the network automation industry for 20 years before shifting to academia and co-founding QuivAR. Levi also dives into the success of the CU Boulder... Read more »
JSON seems to be everywhere these days. Many application developers like it across all sorts of languages, C#, JAVA, Python, and more. They use it for transferring information between systems, and are comfortable serializing hierarchical object data into JSON from text and de-serializing it back into its various elements. For those of us working in relational databases, JSON seems like a blob of information that isn't easily queried, indexed, or stored. We prefer working with a relational set of data, which brings us into conflict with software developers. We'd like them to convert their objects to a relational structure, and they'd like us to just work with JSON. Read the rest of JSON Has a Cost
In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss analyzing survey data using generative artificial intelligence tools. You will discover how to use new AI functions embedded in spreadsheets to code hundreds of open-ended survey responses instantly. You’ll learn the exact prompts needed to perform complex topic clustering and sentiment analysis without writing any custom software. You will understand why establishing a calibrated, known good dataset is essential before trusting any automated qualitative data analysis. You’ll find out the overwhelming trend in digital marketing content that will shape future strategies for growing your business. Watch now to revolutionize how you transform raw feedback into powerful strategy! Watch the video here: Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-processing-survey-data-with-generative-ai.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn: In this week’s In Ear Insights, let’s talk about surveys and processing survey data. Now, this is something that we’ve talked about. Gosh, I think since the founding of the company, we’ve been doing surveys of some kind. And Katie, you and I have been running surveys of some form since we started working together 11 years ago because something that the old PR agency used to do a ton of—not necessarily well, but they used to do it well. Katie Robbert: When they asked us to participate, it would go well. Christopher S. Penn: Yes, exactly. Christopher S. Penn: And this week we’re talking about how do you approach survey analysis in the age of generative AI where it is everywhere now. And so this morning you discovered something completely new and different. Katie Robbert: Well, I mean, I discovered it via you, so credit where credit is due. But for those who don’t know, we have been a little delinquent in getting it out. But we typically run a one-question survey every quarter that just, it helps us get a good understanding of where our audience is, where people’s heads are at. Because the worst thing you can possibly do as business owners, as marketers, as professionals, is make assumptions about what people want. And that’s something that Chris and I work very hard to make sure we’re not doing. And so one of the best ways to do that is just to ask people. We’re a small company, so we don’t have the resources unfortunately to hold a lot of one-on-one meetings. But what we can do is ask questions virtually. And that’s what we did. So we put out a one-question survey. And in the survey, the question was around if you could pick a topic to deep dive on in 2026 to learn about, what would it be. Now keep in mind, I didn’t say about AI or about marketing because that’s where—and Chris was sort of alluding to—surveys go wrong. When we worked at the old shop, the problem was that people would present us with, “and this is the headline that my client wants to promote.” So how do we run a survey around it? Without going too far in the weeds, that’s called bias, and that’s bad. Bias equals bad. You don’t want to lead with what you want people to respond with. All of that being said, we’ve gotten almost 400 responses over the weekend, which is a fantastic number of responses. That gives us a lot of data to work with. But now we have to do something with it. What Chris discovered and then shared with me, which I’m very excited about, is you don’t have to code anything to do this. There were and there still are a lot of data analysis platforms for market research data, which is essentially what this is for: unstructured, qualitative, sentence structured data, which is really hard to work with if you don’t know what you’re looking for. And the more you have of it, the harder it is to figure out where the trends are. But now people are probably thinking, “oh, I just bring it into generative AI and say, summarize this for me.” Well, that’s not good enough. First of all, let’s just don’t do that. But there are ways to do it, no code, that you can really work with the data. So without further ado, Chris, do you want to talk about what you’ve been working on this morning? And we’re going to do a deep dive on our livestream on Thursday, which you can join us every Thursday at 1:00 PM Eastern. Go to Trust Insights AI TI podcast. Nope, that’s us today. Wait a second. TrustInsights AI YouTube, and you can follow live or catch the replay. And we’ll do a deep dive into how this works, both low code and high tech. But I think it’s worth at least acknowledging, Chris, what you have discovered this morning, and then we can sort of talk about some of the findings that we’re getting. Christopher S. Penn: So one of the most useful things that AI companies have done in the last 6 months is put generative AI into the tools that we already use. So Google has done this. They’ve put Gemini in Google Sheets, Google Docs, in your Gmail. Finally, by the way—slight tangent. They finally put it in Google Analytics. Three years later. Microsoft has put Copilot into all these different places as well. In Excel, in Word, in PowerPoint, and so on and so forth. And so what you can do inside of these tools is they now have formulas that essentially invoke an AI agent. So inside of Google Sheets you can type equals Gemini, then give it a prompt and then give it a cell to work on and have it do its thing. Christopher S. Penn: So what I did naturally was to say, “Okay, let’s write a prompt to do topic analysis.” “Okay, here’s 7 different topics you can choose from.” Gemini, tell me for this cell, this one survey response, which of the 7 topics does it fit in? And then it returns just the topic name and puts it in that cell. And so what used to be a very laborious hand coding—”okay, this is about this”—now you can just drag and fill the column and you’ve got all 400 responses classified. You can do sentiment analysis, you can do all sorts of stuff. Katie Robbert: I remember a quick anecdote, and I think I’ve told this story before. When I was doing clinical trial research, we were trying to develop an automated system to categorize sentiment for online posts about the use and abuse of opiates and stimulants. So, is it a positive sentiment? Is it a negative sentiment? With the goal of trying to understand the trends of, “oh, this is a pharmaceutical that just hit the market. People love it. The sentiment is super positive in the wrong places.” Therefore, it’s something that we should keep an eye on. All to say, I remember sitting there with stacks and stacks of printed out online conversation hand coding. One positive, two negative. And it’s completely subjective because we had to have 4 or 5 different hand coders doing the sentiment analysis over and over again until we came to agreement, and then we could start to build the computer program. So to see that you did this all in the span of maybe 20 minutes this morning is just—it’s mind blowing to me. Christopher S. Penn: Yeah. And the best part is you just have to be able to write good prompts. Katie Robbert: Well, therein lies the caveat. And I think that this is worth repeating. Critical thinking is something that AI is not going to do for you. You still have to think about what it is you want. Giving a spreadsheet to AI and saying, “summarize this,” you’re going to get crappy results. Christopher S. Penn: Exactly. So, and we’ll show this on the live stream. We’re going to walk through the steps on how do you build this? Very simple, no tech way of doing it, but at the very least, one of the things you’ll want to do. And we’ve done this. In fact, we did this not too long ago for an enterprise client building a sentiment analysis system: you have to have a known, good starting data set of stuff that has been coded that you agree with. And it can be 3 or 4 or 5 things, but ideally you start with that. So you can say, this is examples of what good and bad sentiment is, or positive and negative, or what the topic is. Write a prompt to essentially get these same results. It’s what the tech folks would call back testing, just calibration, saying, “This is a note, it still says, ‘I hate Justin Zeitzac, man, all this and stuff.’ Okay, that’s a minus 5.” What do they hate us as a company? Oh, okay. “That annoying Korean guy,” minus 5. So you’d want to do that stuff too. So that’s the mechanics of getting into this. Now, one of the things that I think we wanted to chat about was kind of at a very high level, what we saw. Katie Robbert: Yeah. Christopher S. Penn: So when we put all the big stuff into the big version of Gemini to try and get a sense of what are the big topics, really, 6 different topics popped out: Generative AI, broadly, of course; people wanting to learn about agentic AI; content marketing; attribution and analytics; use cases in general; and best practices in general. Although, of course, a lot of those had overlap with the AI portion. And when we look at the numbers, the number one topic by a very large margin is agentic AI. People want to know, what do we do with this thing, these things? How do we get them going? What is it even? And one of the things I think is worth pointing out is having Gemini in your spreadsheet, by definition, is kind of an agent in the sense that you don’t have to go back to an AI system and say, “I’ll do this.” Then copy-paste results back and forth. It’s right there as a utility. Katie Robbert: And I think that I’m not surprised by the results that we’re seeing. I assumed that there would be a lot of questions around agentic AI, generative AI in general. What I am happy to see is that it’s not all AI, that there is still a place for non-AI. So, one of the questions was what to measure and why, which to be fair, is very broad. But you can make assumptions that since they’re asking us, it’s around digital marketing or business operations. I think that there’s one of the things that we try to ask in our free Slack group, Analytics for Marketers, which you can join for free at trustinsights.ai/analyticsformarketers. We chatting in there every day is to make sure that we have a good blend of AI-related questions, but also non-AI-related questions because there is still a lot of work being done without AI, or AI is part of the platform, but it’s not the reason you’re doing it. We know that most of these tools at this day and age include AI, but people still need to know the fundamentals of how do I build KPIs, what do I need to measure, how do I manage my team, how do I put together a content calendar based on what people want. You can use AI as a supporting role, but it’s not AI forward. Christopher S. Penn: And I think the breakout, it’s about, if you just do back of the envelope, it’s about 70/30. 70% of the responses we got really were about AI in some fashion, either regular or agentic. And the 30% was in the other category. And that kind of fits nicely to the two themes that we’ve had. Last year’s theme was rooted, and this year’s theme is growth. So the rooted is that 30% of how do we just get basic stuff done? And the 70% is the growth. To say, this is where things are and are likely going. How do we grow to meet those challenges? That’s what our audience is asking of us. That’s what you folks listening are saying is, we recognize this is the growth opportunity. How do we take advantage of it? Katie Robbert: And so if we just look at all of these questions, it feels daunting to me, anyway. I don’t know about you, Chris—you don’t really get phased by much—but I feel a little overwhelmed: “Wow, do you really know the answers to all of these questions?” And the answer is yes, which is also a little overwhelming. Oh wait, when did that happen? But yeah, if you’re going to take the time to ask people what they’re thinking, you then have to take the time to respond and acknowledge what they’ve asked. And so our—basically our mandate—is to now do something with all of this information, which we’re going to figure out. It’s going to be a combination of a few things. But Chris, if you had your druthers, which you don’t, but if you did. Where would you start with answering some of these questions? Christopher S. Penn: What if I had my druthers? I would put. Take the entire data set one piece at a time and take the conclusion, the analysis that we’ve done, and put it into Claude Code with 4 different agents, which is actually something I did with my own newsletter this past weekend. I’d have a revenue agent saying, “How can we make some money?” I’d have a voice of the customer agent based on our ICP saying, “Hey, you gotta listen to the customer. This is what we’re saying. This is literally what we said. You gotta listen to us.” “Hey, your revenue agent, you can’t monetize everything. I’m not gonna pay for everything.” You would have a finance and operations agent to say, “Hey, let’s. What can we do?” “Here’s the limitations.” “We’re only this many people. We only have this much time in the day. We can’t do everything.” “We gotta pick the things that make sense.” And then I would have the Co-CEO agent (by virtual Katie) as the overseer and the orchestrator to say, “Okay, Revenue Agent, Customer Agent, Operations Agent, you guys tell me, and I’m going to make some executive decisions as to what makes the most sense for the company based on the imperatives.” I would essentially let them duke it out for about 20 minutes in Claude Code, sort of arguing with each other, and eventually come back with a strategy, tactics, execution, and measurement plan—which are the 4 pieces that the Co-CEO agent would generate—to say, “Okay, out of these hundreds of survey responses, we know agentic AI is the thing.” “We know these are the kinds of questions people are asking.” “We know what capabilities we have, we know limitations we have.” “Here’s the plan,” or perhaps, because it’s programmed after you, “Here’s 3 plans: the lowest possible, highest possible, middle ground.” And then we as the humans can look at it and go, “All right, let’s take some of what’s in this plan and most of what’s in this plan, merge that together, and now we have our plan for this content.” Because I did that this weekend with my newsletter, and all 4 of the agents were like, “Dude, you are completely missing all the opportunities. You could be making this a million-dollar business, and you are just ignoring it completely.” Yeah, Co-CEO was really harsh. She was like, “Dude, you are missing the boat here.” Katie Robbert: I need to get my avatar for the Co-CEO with my one eyebrow. Thanks, Dad. That’s a genetic thing. I mean, that’s what I do. Well, so first of all, I read your newsletter, and I thought that was a very interesting thing, which I’m very interested to see. I would like you to take this data and follow that same process. I’m guessing maybe you already have or are in the process of it in the background. But I think that when we talk about low tech and high tech, I think that this is really sort of what we’re after. So the lower tech version—for those who don’t want to build code, for those who don’t want to have to open up Python or even learn what it is—you can get really far without having to do that. And again, we’ll show you exactly the steps on the live stream on Thursday at 1:00 PM Eastern to do that. But then you actually have to do something with it, and that’s building a plan. And Chris, to your point, you’ve created synthetic versions of basically my brain and your brain and John’s brain and said, “Let’s put a plan together.” Or if you don’t have access to do that, believe it or not, humans still exist. And you can just say, “Hey Katie, we have all this stuff. People want to get answers to these questions based on what we know about our growth plans and the business models and all of those things. Where should we start?” And then we would have a real conversation about it and put together a plan. Because there’s so much data on me, so much data on you and John, etc., I feel confident—because I’ve helped build the Co-CEO—I feel confident that whatever we get back is going to be pretty close to what we as the humans would say. But we still want that human intervention. We would never just go, “Okay, that’s the plan, execute it.” We would still go, “Well, what the machines don’t know is what’s happening in parallel over here.” “So it’s missing that context.” “So let’s factor that in.” And so I’m really excited about all of it. I think that this is such a good use of the technology because it’s not replacing the human critical thinking—it’s just pattern matching for us so that we can do the critical thinking. Christopher S. Penn: Exactly. And the key really is for that advanced use case of using multiple agents for that scenario, the agents themselves really do have to be rock solid. So you built the ideal customer profile for the almost all the time in the newsletter. You built… Yeah, the Co-CEO. We’ve enhanced it over time, but it is rooted in who you are. So when it makes those recommendations and says those things, there was one point where it was saying, “Stop with heroics. Just develop a system and follow the system.” Huh, that sounds an awful lot. Katie Robbert: I mean, yeah, I can totally see. I can picture a few instances where that phrase would actually come out of my mouth. Christopher S. Penn: Yep, exactly. Christopher S. Penn: So that’s what we would probably do with this is take that data, put it through the smartest models we have access to with good prompts, with good data. And then, as you said, build some plans and start doing the thing. Because if you don’t do it, then you just made decorations for your office, which is not good. Katie Robbert: I think all too often that’s what a lot of companies find themselves in that position because analyzing qualitative data is not easy. There’s a reason: it’s a whole profession, it’s a whole skill set. You can’t just collect a bunch of feedback and go, “Okay, so we know what.” You need to actually figure out a process for pulling out the real insights. It’s voice of customer data. It’s literally, you’re asking your customers, “What do you want?” But then you need to do it. The number one mistake that companies make by collecting voice of customer data is not doing anything with it. Number 2 is then not going back to the customer and acknowledging it and saying, “We heard you.” “Here’s now what we’re going to do.” Because people take the time to respond to these things, and I would say 99% of the responses are thoughtful and useful and valuable. You’re always going to get a couple of trolls, and that’s normal. But then you want to actually get back to people, “I heard you.” Your voice is valuable because you’re building that trust, which is something machines can’t do. You’re building that human trust in those relationships so that when you go back to that person who gave you that feedback and said, “I heard you, I’m doing something with it.” “Here’s an acknowledgment.” “Here’s the answer.” “Here’s whatever it is.” Guess what? Think about your customer buyer’s journey. You’re building those loyalists and then eventually those evangelists. I’m sort of going on a tangent. I’m very tangential today. A lot of companies stop at the transactional purchase, but you need to continue. If you want that cycle to keep going and have people come back or to advocate on your behalf, you need to actually give them a reason to do that. And this is a great opportunity to build those loyalists and those evangelists of your brand, of your services, of your company, of whatever it is you’re doing by just showing up and acknowledging, “Hey, I heard you, I see you.” “Thank you for the feedback.” “We’re going to do something with it.” “Hey, here’s a little token of appreciation,” or “Here’s answer to your question.” It doesn’t take a lot. Our good friend Brook Sellis talks about this when she’s talking about the number one mistake brands make in online social conversations is not responding to comments. Yeah, doesn’t take a lot. Christopher S. Penn: Yeah. Doesn’t cost anything either. Katie Robbert: No. I am very tangential today. That’s all right. I’m trying not to lose the plot. Christopher S. Penn: Well, the plot is: We’ve got the survey data. We now need to do something about it. And the people have spoken, to the extent that you can make that claim, that Agentic AI and AI agents is the thing that they want to learn the most about. And if you have some thoughts about this, if you agree or disagree and you want to let us know, pop on by our free Slack, come on over to Trust Insights AI/analytics for marketers. I think we’re probably gonna have some questions about the specifics of agentic AI—what kinds of agents? I think it’s worth pointing out that, and we’ve covered this in the past on the podcast, there are multiple different kinds of AI agents. There’s everything from what are essentially GPTs, because Microsoft Copilot calls Copilot GPTs Copilot agents, which is annoying. There are chatbots and virtual customer service agents. And then there’s the agentic AI of, “this machine is just going to go off and do this thing without you.” Do you want it to do that? And so we’ll want to probably dig into the survey responses more and figure out which of those broad categories of agents do people want the most of, and then from there start making stuff. So you’ll see things in our, probably, our learning management system. You’ll definitely see things at the events that folks bring us in to speak at. And yeah, and hopefully there’ll be some things that as we build, we’ll be like, “Oh, we should probably do this ourselves.” Katie Robbert: But it’s why we ask. It’s too easy to get stuck in your own bubble and not look outside of what you’re doing. If you are making decisions on behalf of your customers of what you think they want, you’re doing it wrong. Do something else. Christopher S. Penn: Yeah, exactly. So pop on by to our free Slack. Go to TrustInsights.ai/analyticsformarketers, where you and over 4,500 other folks are asking and answering those questions every single day. And wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on, check out TrustInsights.ai/tipodcast. You can find us in all the places fine podcasts are served. Thanks for tuning in. We’ll talk to you on the next one. Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insight services span the gamut from developing comprehensive data strategies and conducting deep dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology and MarTech selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members such as CMO or data scientists to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the *In Ear Insights* podcast, the *Inbox Insights* newsletter, the *So What* Livestream, webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights are adept at leveraging cutting-edge generative AI techniques like large language models and diffusion models, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations, data storytelling. This commitment to clarity and accessibility extends to Trust Insights’ educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
It's a new year of R Weekly Highlights! In this episode we learn of the positive intellectual diversity impacting R and Python communities, a great starting point to create your first RAG-powered knowledge store, and a new package enters the Shiny ecosystem that is certainly not tiny in its ambitions.Episode Links This week's curator: Eric Nantz: @rpodcast@podcastindex.social (Mastodon) & @rpodcast.bsky.social (BlueSky) & @theRcast (X/Twitter)R + Python: From polyglot to pluralism: A very targeted recap on posit::conf(2025) and a broader reflection on how the best design and devtools ideas from R and python have influenced each others' stacks for the betterRetrieval-Augmented Generation: Setting up a Knowledge Store in R{tinyshinyserver} 0.1.0: Tiny 'shiny' Server - Lightweight Multi-App 'shiny' ProxyEntire issue available at rweekly.org/2026-W03Supplement Resourcestinyshinyserver GitHub repository: https://github.com/lab1702/tinyshinyserverYann Tourman's R Weekly episode length analysis https://bsky.app/profile/yannco.bsky.social/post/3mbarm75kik2jSupporting the showUse the contact page at https://serve.podhome.fm/custompage/r-weekly-highlights/contact to send us your feedbackR-Weekly Highlights on the Podcastindex.org - You can send a boost into the show directly in the Podcast Index. First, top-up with Alby, and then head over to the R-Weekly Highlights podcast entry on the index.A new way to think about value: https://value4value.infoGet in touch with us on social mediaEric Nantz: @rpodcast@podcastindex.social (Mastodon), @rpodcast.bsky.social (BlueSky) and @theRcast (X/Twitter)Mike Thomas: @mike_thomas@fosstodon.org (Mastodon), @mike-thomas.bsky.social (BlueSky), and @mike_ketchbrook (X/Twitter) Music credits powered by OCRemixWe're the Fugawe Forest - Legend of Zelda: A Link to the Past - XPRTNovice feat. Weisty - https://ocremix.org/remix/OCR03597Zelda's Graceful Nightmare - 25YEARLEGEND: A Legend of Zelda Indie Game Composer Tribute - Matheus Manente - https://zelda25.ocremix.org/index.html
Talk Python To Me - Python conversations for passionate developers
Your cloud SSD is sitting there, bored, and it would like a job. Today we're putting it to work with DiskCache, a simple, practical cache built on SQLite that can speed things up without spinning up Redis or extra services. Once you start to see what it can do, a universe of possibilities opens up. We're joined by Vincent Warmerdam to dive into DiskCache. Episode sponsors Talk Python Courses Python in Production Links from the show diskcache docs: grantjenks.com LLM Building Blocks for Python course: training.talkpython.fm JSONDisk: grantjenks.com Git Code Archaeology Charts: koaning.github.io Talk Python Cache Admin UI: blobs.talkpython.fm Litestream SQLite streaming: litestream.io Plash hosting: pla.sh Watch this episode on YouTube: youtube.com Episode #534 deep-dive: talkpython.fm/534 Episode transcripts: talkpython.fm Theme Song: Developer Rap
TestTalks | Automation Awesomeness | Helping YOU Succeed with Test Automation
Performance testing often fails for one simple reason: teams can't see where the slowdown actually happens. In this episode, we explore Locust load testing and why Python-based performance testing is becoming the go-to choice for modern DevOps, QA, and SRE teams. You'll learn how Locust enables highly realistic user behavior, massive concurrency, and distributed load testing — without the overhead of traditional enterprise tools. We also dive into: Why Python works so well for AI-assisted load testing How Locust fits naturally into CI/CD and GitHub Actions The real difference between load testing vs performance testing How observability and end-to-end tracing eliminate guesswork Common performance testing mistakes even experienced teams make Whether you're a software tester, automation engineer, or QA leader looking to shift-left performance testing, this conversation will help you design smarter tests and catch scalability issues before your users do.
R. Tyler Croy, a principal engineer at Scribd, joins Corey Quinn to explain what happens when simple tasks cost $100,000. Checking if files are damaged? $100K. Using newer S3 tools? Way too expensive. Normal solutions don't work anymore. Tyler shares how with this much data, you can't just throw money at the problem, but rather you have to engineer your way out.About R. Tyler: R. Tyler Croy leads infrastructure architecture at Scribd and has been an open source developer for over 14 years. His work spans the FreeBSD, Python, Ruby, Puppet, Jenkins, and Delta Lake communities. Under his leadership, Scribd's Infrastructure Engineering team built Delta Lake for Rust to support a wide variety of high performance data processing systems. That experience led to Tyler developing the next big iteration of storage architecture to power large-scale fulltext compute challenges facing the organization.Show Highlights:01:48 Scribd's 18-Year History04:00 One Document Becomes Billions of Files05:47 When Normal Physics Stop Working08:02 Why S3 Metadata Costs Too Much10:50 How AI Made Old Documents Valuable13:30 From 100 Billion to 100 Million Objects15:05 The Curse of Retail Pricing 19:17 How Data Scientists Create Growth21:18 De-Normalizing Data Problems25:29 Evolving Old Systems27:45 Billions Added Since Summer29:29 Underused S3 Features31:48 Where to Find TylerLinks: Scribd: https://tech.scribd.comMastodon: https://hacky.town/@rtylerGitHub: https://github.com/rtylerSponsored by: duckbillhq.com
An airhacks.fm conversation with Thomas Wuerthinger (@thomaswue) about: clarification of GraalVM release cadence changes and decoupling from openJDK releases, GraalVM focusing on LTS Java releases only (skipping non-LTS like Java 26), GraalVM as a multi-vendor polyglot project with community edition and third-party vendors like Red Hat BellSoft and microdoc, increased focus on python support due to AI popularity, GraalVM team alignment with Oracle Database organization, Oracle Multilingual Engine (MLE) for running JavaScript and Python in Oracle Database, MySQL MLE integration, native image support for stored procedures in Oracle Database, shipping lambda functions from client applications to database for temporary execution, treating Oracle Database as an operating system for running business logic, serverless workloads directly in Oracle Database, application snapshotting similar to CRaC but running in user space without kernel privileges, efficient scale-to-zero capabilities with native images, Oracle REST Data Services service generalization for serverless execution platform, database triggers for workflow systems and application wake-up, durable functions with transactional state storage in Oracle Database, comparison to AS400 architecture with transaction manager database and operating system in same memory, memory price increases making GraalVM native image more attractive, lower memory consumption benefits of native image beyond just startup time, CPU-based inference support with SIMD and Vector API, TornadoVM for GPU-based inference built on Graal compiler, WebAssembly compilation target for native images, edge function deployment with WebAssembly, Intel memory protection keys for sandboxed native image execution, native image layers for shared base libraries similar to docker layers, profile-guided optimizations for size reduction, upx binary compression for 3x size reduction, memory savings from eliminated class metadata and profiling data not garbage collector differences, 32-bit object headers in serial GC smaller than HotSpot, polyglot integration allowing Python and JavaScript embedding in Java applications, Micronaut framework compile-time annotation processing, quarkus framework best alignment with native image for smallest binaries, GraalVM roadmap focused on database synergies and serverless innovation Thomas Wuerthinger on twitter: @thomaswue
More than a pleasure to ‘get back' (see what I did there?) with friend and podcast comrade Luke Thompson to round off our sort-of trilogy of Monty Python films (originally recorded on video, see below).This time round, Antony and Luke look at the flawed smorgasbord of ideas that is 1983's ‘Monty Python's The Meaning of Life', going scene-by-scene through the good, bad, ugly, extremely violent, completely gross and filler, and trying to find just what ismissing from this Python outing that made the previous two films close to flawless masterpieces (actually we say it right at the beginning but don't tell anyone…)In this part we give our general opinions of the film and look at some of the early scenes, so expect to hear about elderly pirates, sacred sperms and French ticklers. This is preceded by a bit of Python/Beatles comparisoning (that may not be a word) for our crossover audiencesMake your own mind up, even though I've already told you what to think…Most of all, enjoy!'Film Gold' is on all the main podcast platforms. Feedback to contrafib2001@gmail.com Facebookhttps://www.facebook.com/filmgoldpod Twitterhttps://twitter.com/FilmGold75 Antony's website(blog, music, podcasts) https://www.antonyrotunno.com Antony's JohnLennon/Beatles and Psychology/Alt. Media podcasts https://glassoniononjohnlennon.comhttps://lifeandlifeonly.podbean.com Support Antony's podcast work (Film Gold, Glass Onion: On John Lennon and Life And Life Only) athttps://www.paypal.com/paypalme/antonyrotunnoORhttps://www.buymeacoffee.com/antonyrotunno Luke's English Podcast (main site and you tube channel)https://teacherluke.co.uk/https://www.youtube.com/LukesEnglishPodcast Luke & Antony's previous Python film collaborations:Holy Grailhttps://www.youtube.com/watch?v=YMtOOBpRBaA Life of Brianhttps://www.youtube.com/watch?v=lYNW-fBpUoc&t=5673s The original video version of this review https://www.youtube.com/watch?v=ecb9nT1ZcAMepisode links'The Meaning of Life' film pageshttps://en.wikipedia.org/wiki/Monty_Python%27s_The_Meaning_of_Lifehttps://www.imdb.com/title/tt0085959/ Original trailer for the filmhttps://www.youtube.com/watch?v=wMJ24QwHdCY‘Meaning of Life' documentary with the Pythonshttps://www.youtube.com/watch?v=g_IpHCCyAC4&t=765s Every Sperm Is Sacredhttps://www.youtube.com/watch?v=fUspLVStPbk Steve Coogan's ‘The c**t song'https://www.youtube.com/watch?v=9cH0elKDOzs&rco=1 The Protestants having lots of theoretical sexhttps://www.youtube.com/watch?v=PDBjsFAyiwA Python/Beatles connections (clip from ‘Glass Onion: On John Lennon' podcast)https://www.youtube.com/watch?v=kMRj_y8g7Ck Boris Johnson gets a bit too ‘enthusiastic' playing rugby andfootballhttps://www.youtube.com/watch?v=T5NN5S9sPFMhttps://www.youtube.com/watch?v=0UYrLfxvfhM
Topics covered in this episode: port-killer How we made Python's packaging library 3x faster CodSpeed Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Michael #1: port-killer A powerful cross-platform port management tool for developers. Monitor ports, manage Kubernetes port forwards, integrate Cloudflare Tunnels, and kill processes with one click. Features:
This week on Enjoy Stuff, Jay and Shua tumble head-first into the wonderfully absurd world of Monty Python, celebrating the comedy troupe that made nonsense an art form. From silly walks to spam, it's a fast-paced love letter to ridiculous genius. It's…….Monty Python's Flying Circus! Let's dive head first into the pepper pot of silliness with the classic BBC comedy show that influenced generations. News Ghostbusters II's River of Slime being turned into a five-hour cinematic ambience video, plunging viewers into the eerie tunnels beneath New York City. Dark Chocolate Reese's Puffs and Cinnamon Toast Crunch Peanut Butter hit the shelves A Kansas public library has recreated some iconic scenes from The Breakfast Club to promote reading and community Check out our TeePublic store for some enjoyable swag and all the latest fashion trends What we're Enjoying Jay has been diving deep into the imaginative worlds of Andy Weir, enjoying The Martian, Artemis, and Project Hail Mary for their clever science, humor, and heartfelt storytelling. He shares how Weir's blend of problem-solving and character makes each book feel like a cinematic adventure. Shua is loving the Harry Potter: The Full-Cast Audio Editions, now available on Audible, praising the huge ensemble of recognizable voices that bring the wizarding world to life. The immersive performances make it feel like listening to a blockbuster movie in audiobook form. Sci-Fi Saturdays - This week on Sci-Fi Saturdays Jay gives ARQ (2016) the spotlight as he discusses this tense, time-looping science-fiction thriller. It's an obscure one that is definitely worth a watch. Read his article on RetroZap.com. And make sure to play around with the interactive map on MCULocationScout.com. Plus, you can tune in to SHIELD: Case Files where Jay and Shua talk about great stuff in the MCU. Enjoy Spam! Digging into the history of Monty Python's Flying Circus, exploring how six writers, one animator, and one fearless Carol Cleveland changed comedy forever through sketch, satire, and surreal animation. Through 4 seasons of absurdist comedy, they broke the mold of what comedy was supposed to be. Then we wrap things up with a playful game in which Jay reads the opening line of an iconic Python quote and challenges Shua to guess what comes next. Test yourself if you're a Python fan to see if you can do better than Shua. . Are you a Monty Python fan? Do you like Spam? Who won the English football cup in 1949? Let us know! First person that emails me with the subject line, "This is a dead parrot" will get a special mention on the show. Let us know. Come talk to us in the Discord channel or send us an email to EnjoyStuff@RetroZap.com
The public has been fascinated by the experience of interacting with large language models, or LLMs, like OpenAI's ChatGPT and Google's Gemini. In this episode we will look at current work with LLMs that plays to their strengths and involves a lower risk of inaccurate outputs. In particular we will look at the use of LLMs to translate between languages. Software teams generally operate in their native language. Once they have finished building their system, they often want to make it available in other languages to access other markets. The process of making a program that was originally written for one language usable by people who speak other languages is called internationalization. Historically internationalization has been a slow and expensive process. Today we will be talking with Archie McKenzie, the founder of a Silicon Valley startup that is offering internationalization services to software teams. Archie is atypical in various ways. A Briton, Archie came to the US to study Classics at Princeton. He ventured into a course taught by a famous computer scientist, Brian Kernighan, whose teaching inspired Archie to switch from Ancient Greek and Latin to Java, Python, and Typescript. After graduating from Princeton in 2024, Archie started a company called General Translation to develop and commercialize internationalization automation for software development projects.
Jurandir Filho, Felipe Mesquita, Evandro de Freitas e Bruno Carvalho batem um papo sobre "Clair Obscur: Expedition 33", jogo que já nasceu com status de clássico moderno. Mesmo sendo um título recente, o game conquistou algo raro na indústria: reconhecimento quase unânime de crítica, público e premiações, entrando rapidamente para o hall dos grandes nomes dos RPGs contemporâneos. Com uma narrativa envolvente, o jogo francês une arte, trilha e jogabilidade de maneira coesa e ousada. Seu universo melancólico, inspirado em uma estética que mistura fantasia sombria com influências artísticas europeias, cria uma identidade visual marcante, daquelas que ficam gravadas na memória do jogador. Cada cenário, personagem e trilha sonora parece cuidadosamente pensado para transmitir emoção, estranhamento e beleza ao mesmo tempo. Quais as maiores inovações? Como sse jogo conquistou tanta gente? É um jogo indie ou não?=- IMERSÃO ALURA | Bora começar 2026 fazendo aulas gratuitas de Python? CORRE PRA SE INSCREVER! https://alura.tv/99vidas-imersao-dados-2
• Sponsor read for MyEternalVitality.com with Dr. Powers • Gut health testing to identify individual histamine triggers • Relief that shrimp is not a histamine trigger • "Healthy" foods like spinach and kale causing inflammation • Improving digestion, regularity, and reducing stomach discomfort • Food reactions differing by individual body chemistry • Hormone testing becoming more important with age • Declining testosterone levels in men • Men getting hormone testing through Dr. Powers • Benefits of hormone replacement therapy • Improved libido, energy, and mental clarity • Symptoms of imbalance: fatigue, brain fog, hot flashes, low libido • Hormones discussed: estrogen, progesterone, testosterone, cortisol • Free Dr. Powers consultation for Tom & Dan listeners • Dr. Powers as a fan of the show and BDM member • New year framed as a time to address health • Show intro from the Just Call Moe Studio • Welcome to the Friday Free Show of A Mediocre Time • First show of 2026 and confusion adjusting to the year • Show running 17 years since 2009 • Jokes about reaching the 20th anniversary • Commitment to continuing the show regardless of profit • Guest Savannah appearing on the first show of 2026 • Being more cautious about what's said on air • Forgetting how large the audience actually is • Anxiety about saying something regrettable • Joke about an old onion-skin fart story • Comparing influencer audiences to radio audiences • Discussion of online backlash and hate comments • Wanting reactions but rarely receiving criticism • Shoutout to video editor Melissa • Opening Christmas gifts from Melissa on air • Melissa's self-deprecating note and affectionate appreciation • Big Johnson Key West shirt gift • Jokes about wearing tiny or "baby" shirts • "Where's Bumfardo?" shirt explained • Bumfardo described as a legendary Key West grifter • Reference to a podcast episode about Bumfardo • Clarifying Bumfardo as a criminal firefighter • Gratitude and appreciation for Melissa • Living in Key West after California • Living in an Airstream on sponsor property • Romantic idea vs reality of Airstream living • Millionaires hosting guests in RVs or guest houses • Restored and comfortable Airstream • Living with a pet monitor lizard • Joking about the start of a "lizard journey" • Lizard eating pulled pork and seafood • Joke comparing lizard diet to Jeff Foxworthy • Lizard free-roaming inside the Airstream • Lizard unusually clean and well-behaved • Lizard now living at Gatorland • Using a doggie door and daily routine • Monitor lizard about six feet long • Question about reptile cleanliness myths • Hygiene concerns when handling reptiles • Lizard attacked at night in Key West • Iguanas or raccoons suspected • Bringing the lizard indoors for safety • Emergency super glue used to close a wound • Super glue working on reptile scales • Owning many exotic pets over the years • Large python kept in a one-bedroom apartment • Python named Benji • Hybrid reticulated/Burmese python • Python reaching 13–14 feet long • Bathing a python in a bathtub • Snake suddenly becoming aggressive • Snake striking when door opened • Trapping the snake in the bathroom • Child reacting to apex predators in the apartment • Sending the kid outside for safety • Question of whether pythons can seriously injure people • Preventing snake escape through a window • Subduing the snake with a quilt • Wrestling and restraining the python • Snake aggression being a one-time incident • Snakes being unpredictable • Gateway exotic pets like Pac-Man frogs • Still owning a frog • Childhood fascination with reptiles • Catching and keeping reptiles in South Carolina • Childhood "zoo" with animals in drawers • Joke about kids now having digital pets instead of real ones • Feeding large pythons big rats • Debate over live vs pre-killed feeding • Some snakes needing movement to eat • Parenting rule against exotic pets for kids • Requiring responsibility before allowing pets • Travel complications of pet ownership • Personal hamster care experience • Dad raising guinea pigs • Guinea pigs named after dictators and NASCAR drivers • Greg Biffle and Waltrip jokes • Comedy bit about guinea pig personalities • Story about Jim Colbert's Daryl Waltrip impression • Late-night drunk texts from Jim Colbert • Joke about inappropriate texts and photos • Clarifying a misspoken offensive term • Transition to Savannah's Jamaica trip • Comparison to a past Australia trip • Savannah described as highly traveled • Gatorland Global raising nearly $10,000 for hurricane relief • Shipping aid supplies to Jamaica • Bottlenecks at Jamaican ports • Long-term recovery continuing after news cycle moves on • Using funds in practical ways • Helping communities near Hope Zoo in Kingston • Providing water storage and bathroom supplies • Kids previously walking long distances for water • Purchasing a water truck • "Practical conservation" approach • Helping people so animals can be cared for • Zoo animals surviving the hurricane • Oxygen mask analogy • Dark humor about survival priorities • One-week stay in Jamaica • Challenges traveling post-hurricane • Relying on local relationships • Praise for Jamaican kindness • Airbnb hosts offering help and discounts • Importance of global relationships • Transition to friendship with Jackie Siegel • Clarifying which Jackie is being discussed • Jokes about famous Jackies • How Savannah met Jackie Siegel • Savannah's ease connecting with people • Standing out due to appearance and style • Personal recognizability as a brand • Jokes about recognizability • Fascination with ultra-wealthy lifestyles • Meeting Jackie through Real Radio • Seeing Jackie at Runway to Hope • Runway to Hope supporting kids with cancer • Walking the runway with sponsored children • Jackie filming at Gatorland • Friendship forming through time together • Difficulty wealthy people have making friends • Trust and motive issues around rich people • Jackie portrayed as kind and trusting • Idea of rich people seen as "lottery tickets" • Influence of who you spend time with • Being around Jackie compared to a soap opera • Observing Jackie's priorities and behavior • Jackie's Broadway show ending • Show based on Jackie's life • Proving critics wrong theme • Love story with David Siegel • Interest in Broadway and musicals • Wanting to take Maisie to NYC shows • Connecting Maisie's dance to Broadway interest • Kristen Chenoweth playing Jackie • Primer on Kristen Chenoweth • Wicked, Glinda, and Ariana Grande comparison • Stephen Schwartz writing the show • Jackie focused on crew losing jobs • Wanting to help displaced cast and crew • Listing backstage jobs affected • Empathy for workers over producers • Learning about Jackie's past domestic violence • Public perception not matching her full story • Misconceptions about billionaires • Assumption wealthy people should give endlessly • Overlooking effort behind wealth • Jackie having many children • Incorrect belief she married into money • Comparison to Melinda Gates • Emphasis on partnerships building wealth • David Siegel's death last year • Attending his celebration of life • Repeated cycles of success and bankruptcy • Successful people often failing many times • How David built his fortune • Origin of Westgate • David's early acting dreams • Buying land near Disney World • Purchasing a rundown hotel • Discovering the timeshare concept • Starting his own timeshare business • Joke about stealing ideas • Shoutout to women who support the show • Transition to music segment • Punk band Paradox featured • Song "I'm the Outside" • Call-in number and email plug • Sponsor read for BudDocs • Medical marijuana card process explained • Same-day appointments and telemedicine follow-ups • Dispensary deals and education • Cannabis for pain after hip replacement • Using marijuana to reduce alcohol • Return from break with Savannah • Plug for visiting Gatorland • New attractions constantly added • Arrival of Siamese crocodiles • Crocodiles kept separately • Transport from Korea to Gatorland • Animal relocation to avoid euthanasia • Cultural differences in cleanliness and order • "Tokyo depression" concept • Driving and horn etiquette differences • Safari travel mention • South Africa affordability note • Wealth spectrum discussion • Story about driving a Maserati to Walmart • Navigating wealthy social spaces authentically • Jackie's daughter Victoria's overdose • Victoria's Voice organization • Addiction treatment and Narcan advocacy • Turning tragedy into public good • Playing the clown at rich dinners • Observing human behavior like animal behavior • Studying power, money, and authority • Press box story with Phil Rawlins • Meeting Cedric the Entertainer and George Lopez • Importance of introductions and social proof • Savannah blending into elite spaces • Declaring 2026 a takeover year • Goal to make Gatorland the top park globally • Growth plans for conservation, YouTube, and TV • Using affirmations despite mocking them • Reading motivational books • Social media burnout and algorithm frustration • Thumbnails mattering more than content • AI-generated animal videos misleading audiences • Desire for human-made content spaces • Posting more freely without chasing algorithms • Encouraging visits to Gatorland • Promoting BDM Appreciation Week • Wrapping the show with gratitude ### Social [https://tomanddan.com](https://tomanddan.com) [https://twitter.com/tomanddanlive](https://twitter.com/tomanddanlive) [https://facebook.com/amediocretime](https://facebook.com/amediocretime) [https://instagram.com/tomanddanlive](https://instagram.com/tomanddanlive) Listen AMT Apple: [https://podcasts.apple.com/us/podcast/a-mediocre-time/id334142682](https://podcasts.apple.com/us/podcast/a-mediocre-time/id334142682) AMT Google: [https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2FtZWRpb2NyZXRpbWUvcG9kY2FzdC54bWw](https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2FtZWRpb2NyZXRpbWUvcG9kY2FzdC54bWw) AMT TuneIn: [https://tunein.com/podcasts/Comedy/A-Mediocre-Time-p364156/](https://tunein.com/podcasts/Comedy/A-Mediocre-Time-p364156/) ACT (Real Radio 104.1) Apple: [https://podcasts.apple.com/us/podcast/a-corporate-time/id975258990](https://podcasts.apple.com/us/podcast/a-corporate-time/id975258990) Google: [https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2Fjb3Jwb3JhdGV0aW1lL3BvZGNhc3QueG1s](https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkLnBvZGJlYW4uY29tL2Fjb3Jwb3JhdGV0aW1lL3BvZGNhc3QueG1s) TuneIn: [https://tunein.com/podcasts/Comedy/A-Corporate-Time-p1038501/](https://tunein.com/podcasts/Comedy/A-Corporate-Time-p1038501/) Exclusive: [https://tomanddan.com/registration](https://tomanddan.com/registration) Merch: [https://tomanddan.myshopify.com/](https://tomanddan.myshopify.com/)
Join us LIVE on Mondays, 4:30pm EST.A weekly Podcast with BHIS and Friends. We discuss notable Infosec, and infosec-adjacent news stories gathered by our community news team.https://www.youtube.com/@BlackHillsInformationSecurityChat with us on Discord! - https://discord.gg/bhis
Are you looking for that solid foundation to begin your Python journey? Would the accountability of scheduled group classes help you get through the basics and start building something? This week, two members of the Python for Beginners live course discuss their experiences.
01-08-26 - Traffic Will Be Nightmare All Day Trying To Get To Fiesta Bowl - Florida Python Hunts Now Involve Fake Robot Rabbits w/Heat Sensors Giving John An Idea For ThemSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
01-08-26 - Traffic Will Be Nightmare All Day Trying To Get To Fiesta Bowl - Florida Python Hunts Now Involve Fake Robot Rabbits w/Heat Sensors Giving John An Idea For ThemSee Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
In this week’s In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss generative engine marketing, or GEM, the AI equivalent of SEM. Just as SEO became GEO, so too is SEM likely to become GEM. Learn what it is, how it might manifest, and what you should be considering. Watch the video here: Can’t see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-what-is-generative-engine-marketing-sem-gem.mp3 Download the MP3 audio here. Need help with your company’s data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! [podcastsponsor] Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn: In this week’s In-Ear Insights. Welcome back. Happy new year. It’s 2026. I have just begun to realize as I was cleaning out my pantry over the holidays, oh yeah, all these things expire in 2026. That’s this year. A lot happened over the holidays. A lot of changes in AI. But one thing that hasn’t happened yet but has been in discussion that I think is—Katie, you wanted to talk about—was SEO for good or ill, sort of centered on this GEO acronym, Generative Engine Optimization, and all of its brethren: AIO and AEO and whatever. SEO’s companion has always been SEM, also known as Pay Per Click marketing, and that has its alphabet soup like rlsa, remarketing lists for search ads, and all these acronyms, part of the paid version of search marketing. Well, Katie, you asked a very relevant… Katie Robbert: …question, which was, when is GEM coming? So as a little plug, I’m doing a Friday session with our good friends over at Marketing Profs on GEO and ROI, which I have to practice saying over and over again so I don’t stumble over it. But basically the idea is what can B2B marketers measure in GEO to demonstrate their return on investment so that they can argue for more budget. And so what we were talking about this morning is that GEO is really just an amped up version of brand search. If you know SEO, brand search is a part of SEO. And so basically it’s like how well recognized is my brand or my influencers or whatever. If I type in Katie Robbert or if I type in Trust Insights, what comes back? And so all of the same tactics that you do for branded search, you do for GEO plus a little bit more. So it’s the same end result, but you need to figure out sort of where all of that fits. So I’ll go over all of that. But it then naturally progressed into the conversation of, well, part of brand search is paid campaigns. You pay money to Google AdWords, if that’s still what it’s called, or whatever ad system you’re using, you put money behind your branded terms so that when someone’s looking for certain things, your name comes up. And I was like, well, that’s the SEM version of SEO. When are we getting the paid version of GEO? So basically GEM, or whatever you would want to call it, the way that I kind of envision it. So right now these systems like ChatGPT and Gemini and Claude, they’re not running ads. They’re making their money from usage. So they’re using tokens, which Chris, you’ve talked about extensively. But I can envision a world where they’re like, okay, here’s the free version of this. But every other query that you run, you get an ad for something, or at the end of every result, you get an ad for something. And so I would not be surprised if that was coming. So that was sort of what I was wondering, what I was thinking. I’m not trying to plant the idea that they should do that. I’m just assuming based on patterns of how these companies operate, they’re looking for the next way to make a revenue stream. So Chris, when I mentioned this to you this morning, I couldn’t see your face, but I assumed that there was an eye roll. So what are your thoughts on GEM? Christopher S. Penn: Here’s what we know. We know that on the back end for all these tools, what they’re doing when they use their web search tools is they’re writing their own web queries. They literally kick off their own web searches, and they do 5, 10, 20, or 100 different searches. This is something that Google calls query fan out. You can actually see this happening behind the scenes. When you use Google, you’ll see it list out summarized in Gemini, for example. You’ll see it in ChatGPT with its sources and stuff. We know—and if you’re using tools like Claude code or Gemini code—you will actually see the searches themselves. It is a very small leap of the imagination to say, okay, what’s really happening is the LLM is just doing searches, which means that the infrastructure exists—which it does for Google Ads—to say, when somebody searches for this set of keywords, show this ad. The difference is that AI searches tend to be eight to 10 words long. When you look at how Claude code does searches, it will say “docker configuration YAML file 2025” as an example of a very long term, or “best hotels under $1,000 Ibiza 2025 travel guide” would be an example of a more generic term that is a very specific, high-intent search phrase that it’s typing in. So for a system like Google to say, “You know what, inside of your search results, when it does query fan out, we’re just going to send a copy of the searches to our existing Google Ad system, and it’s going to spit back, ‘Hey, here’s some ads to go with your AI generated summary.'” I would say initially for marketers, you have to be thinking about how Gemini in particular does query fan out, how it does its own searches. We actually built a tool for this last year for ourselves that can measure how Gemini just does its own searches. We have not published because it’s still got a bunch of rough edges. But once you see those query fan out actions being taken, if you’re a Google Ads person, you can start going, “Huh? I think I need to start making sure my Google Ads have those longer, more detailed, more specific phrases.” Not necessarily because I think any human is going to search for them, but because that’s the way AI is going to search them. I think if you are using systems like ChatGPT, you should be—to the extent that you can, because you can see this in the developer API, not the consumer product, but the developer side on OpenAI’s platform—you can see what it searches for. You should be making notes on that and maybe even going so far as to say, “I’m going to type in, ‘recommend a Boston based AI consulting firm.'” See what ChatGPT does for its searches. And then if you’re the Google Ads manager, guess you better be running those ads. And probably Bing, probably Google. OpenAI said they’re going to build their own ad system—they probably will. But as many folks, including Will Reynolds and Rand Fishkin, have all said, Google still owns 95% of the search market. So if you’re going to put your bets anywhere, bet on the Google Ads system and put your efforts there. Katie Robbert: So it sounds like my theory wasn’t so far fetched this morning to assume that GEM is coming. Christopher S. Penn: Absolutely it’s coming. I mean, everyone and their cousin is burning money running AI, right? It costs so much to do inference. Even Google itself. Yes, they have their own hardware, yes, they have their own data centers and stuff. It still costs them resources to run Gemini, and they have new versions of Gemini out that came out just before the holidays, but still not cheap, and they have to monetize it. And the easiest way to monetize it is to not reinvent the wheel and just tie Gemini’s self-generated searches into Google Ads. Katie Robbert: So, I think one of the questions that people have is, well, do we know what people are searching for? And you mentioned for at least OpenAI, you can see in the developer console what the system searches for, but that’s not what people are searching for. Where do tools like Google Search Console fit in? For someone who doesn’t have the ability to tap into a developer API, could they use something like a Google Search Console as a proxy to at least start refining? I mean, they should be doing this anyway. But for generative AI, for what people are searching for? Because the reason I’m thinking of it is because what the system searches for is not what the person searches for. We still want to be tackling at least 50% of what the person searches for, and then we can start to make assumptions about what the system is going to be searching for. So where does a tool like Google Search Console fit in? Christopher S. Penn: The challenge with the tool, Google Search Console, is that it is reporting on what people type before Gemini rewrites it. So, I would say you could use that in combination with Gemini’s API to say, okay, how would Gemini transform this into a query fan out? Katie Robbert: But that’s my point: what if someone—a small business or just a marketing team that is siloed off from IT—doesn’t have access to tap into the API? Christopher S. Penn: Hire Trust Insights. Katie Robbert: Fair. If you want to do that, you can go to TrustInsights.ai/contact. But in all seriousness, I think we need to be making sure we’re educating appropriately. So yes, obviously the path of least resistance is to tap in the API to see what the system is doing. If that’s not accessible—because it is not accessible to everybody—what can they be doing? Christopher S. Penn: That’s really—it’s a challenging question. I’m not trying to be squirrely on purpose, but knowing how the AI overviews work, Gemini in Google is intercepting the user’s intent and trying to figure out what is the likely intent behind the query. So when you go into your Google search now, you will see a couple of quick results, which is what your Google Search Console will report on. And then you’re going to see all of the AI stuff, and that is the stuff that is much more difficult to predict. So as a very simple example, let me just go ahead and share my screen. For folks who are listening, you can catch us on our YouTube channel at trustinsights.ai/youtube. So I typed in “Python synth ID code,” right, which is a reference to something coding-wise. You can see, here’s the initial search term; this will show up in your Google Search Console. If the user clicks one of the two quick results, then once you get into webguide here, now this is all summarized. This is all written by Gemini. So none of this here is going to show up in Google Search Console. What happened between here and here is that Gemini went and did 80 to 100 different searches to assemble this very nice handy guide, which is completely rewritten. This is not what the original pages say. This is none of the content from these sites. It is what Gemini pulled from and generated on its own. Katie Robbert: So let me ask you this question, and this might be a little kooky, so follow me for a second. So let’s say I don’t have access to the API, so I can’t pull what the system is searching, but I do have access to something like a Google Search Console or I have my keyword list that I optimize for. Could I give Generative AI my keyword list and say, “Hey, these are the keywords or these are the phrases that humans search for. Can you help me transform these into longer-term, longer-tail keywords that a machine would search for?” Is that a process that someone who doesn’t have API access could follow? Christopher S. Penn: Yeah, because that’s exactly what’s going on inside Google software. They basically have, “Here’s the original thing. Determine the intent of the query, and then run 50 to 100 searches, variations of that, and then look at the results and sort of aggregate them, come back with what it came up with.” That’s exactly what’s happening behind the scenes. You could replicate that. It would just be a lot of manual labor. Katie Robbert: But for some, I mean, some people, some companies have to start somewhere, right? I could see—I mean, you’re saying it’s a lot of manual labor—I could even see it as a starting point. Just for simple math, here are the top 10 phrases that Trust Insights wants to rank for. “Hey, Gemini, can you help me determine the intent and give me three variations of each of these phrases that I can then build into my AdWords account?” I feel like that at least gives people a little bit more of a leg up than just waiting to see if anything comes up in search. Christopher S. Penn: Yeah, you absolutely could do that. And that would be a perfectly acceptable way to at least get started. Here’s the other wrinkle: it depends on which model of Gemini. There are three of them that exist. There’s Gemini Pro, which is the heavy duty model that almost never gets used in AI Overview. Does get used to AI mode, but AI Overviews, no. There’s Gemini Flash, and then there’s Gemini Flashlight. One of the things that is a challenge for marketers is to figure out which version Google is going to use and when they swap them in and out based on the difficulty of the query. So if you typed in, “best hotels under $1,000 Ibiza Spain,” right? That’s something that Flashlight is probably going to get because it’s an easy query. It requires no thinking. It can just dump a result very quickly, deliver very high performance, get a good result for the user, and not require a lot of mental benchmarks. On the other hand, if you type something like, “My dog has this weird bump on his leg, what should I do about it?” For a more complex query, it’s probably going to jump to Flash and go into thinking mode so it can generate a more accurate answer. It’s a higher risk query. So one of the things that, if you’re doing that exercise, you would want to test your ideas in both Flashlight and Flash to see how they differ and what results it comes back with for the search terms, because they will be different based on the model. Katie Robbert: But again, you have to start somewhere. It reminds me of when the smart devices all rolled out into the market. So everybody was yelling at their home speakers, which I’m not going to start doing because mine will go off. But from there, we as marketers were learning that people speaking into a voice, if they’re using the voice option on a Google search or if they’re using their smart home devices, they’re speaking in these complete sentences. The way that we had to think about search changed then and there. I feel like these generative AI systems are akin to the voice search, to the smart devices, to using the microphone and yelling into your phone, but coming up with Google results. If you aren’t already doing that, then get in your DeLorean, go back to, what, 2015, and start optimizing for smart devices and voice search. And then you can go ahead and start optimizing for GEO and GEM, because I feel like if you’re not doing that, then you’re at a serious disadvantage. Christopher S. Penn: Yeah, no, you absolutely are. So, I would say if you’re going to start somewhere, start with Gemini Flash. If you know your way around Google’s AI Studio, which is the developer version, that’s the best place to start because the consumer version of the web interface has a lot of extra stuff in it that Google’s back end will not have that the raw Gemini will not have because it slows it down. They build in, for example, a lot of safety stuff into the consumer web interface that is there for a good reason, but the search version of it doesn’t use because it’s a much more constrained use. So I would say start by reading up on how Google does this stuff. Then go into AI Studio, choose Gemini 3 Flash, and start having it generate those longer search queries, and then figure out, okay, is this stuff that we should be putting into our Google Ads as the keyword matches? The other thing is, from an advertising perspective, obviously we know the systems are going to be tailored to extract as much money from you as possible, but that also means having more things that are available as inventory for it to use. So we have been saying for three years now, if you are not creating content for places like YouTube, you have missed the boat. You really need to be doing that now because Google makes it pretty clear you can run ads on multiple parts of their platform. If you have your own content that you can turn into shorts and things, you can repurpose some of that within Google Ads and then help use that as fodder for your ad campaigns. It’s a no-brainer. Katie Robbert: To be clear, we’re talking about the Google ecosystem. Some companies aren’t using that. You can use a Google search engine without being part of the ecosystem. But some companies aren’t using Gemini, therefore they’re not using Developer Studio. If they’re using OpenAI, which is ChatGPT or Claude, or a lot of companies are Microsoft Shops. So a lot of them are using Copilot. I think taking the requirement to tap into the API or Developer Studio out of the conversation, that’s what I’m trying to get at. Not everybody has access to this stuff. So we need to provide those alternate routes, especially for all of our friends who are suffering through Copilot. Christopher S. Penn: Yes. The other thing is, if you haven’t already done this—it’s on the Trust Insights website, it’s in our Inbox Insight section. If you have not already gotten your Google Analytics Explore Dashboard set up to look at where you’re currently getting traffic from generative AI, you need to do that because this is also a good benchmark to say, “Okay, when this ad system rolls out for ChatGPT, for example, should we put money in it for Trust Insights?” The answer is yes, because ChatGPT currently is still the largest direct referrer of traffic to us. You can see in this last 28 days. Now granted this is the holidays, there wasn’t a ton happening, but ChatGPT is still the largest source of AI-generated direct clicked-on stuff to our website. If OpenAI says, “Hey, ads are open,” as we know with all these systems in the initial days, it will probably either be outlandishly expensive or ridiculously cheap. One of the two. If it errs on the ridiculously cheap side, that would be the first system for us to test because we’re already getting traffic from that model. Katie Robbert: So I think the big takeaway in 2026 is what is old is new again. Everyone is going to slap an AI label on it. If you think SEO is dead, if you think search is dead, well, you have another thing coming. If you think SEM is dead, you definitely have another thing coming. The basic tenets of good SEO and SEM are still essential, if not more so, because every conversation you have this year and moving forward, I guarantee, is going to come back to something with generative AI. How do we show up more? How do we measure it? So it really comes down to really smart SEO and SEM and then slapping an AI label on it. Am I wrong? I’m not wrong. So if you know really good SEO, if you know really good SEM, you already have a leg up on your competition. If you’re like, “Oh, I didn’t realize SEO and SEM were important.” Now, like today, no hesitation, now is the time to start getting skilled up on those things. Forget the label, forget GEO, forget GEMs, forget all that stuff. Just do really good intent-based content. Content that’s helpful, content that answers questions. If you have started nowhere and need to start somewhere today, take a look at the questions that your audience is asking about what you do, about what you sell. For example, Chris, a question that we might answer is, “How do I get started with change management?” Or, “How do I get started with good prompt engineering?” We could create a ton of content around that, and that’s going to give us an opportunity to rank, quote, unquote, rank in these systems for that content. Because it will be good, high-quality content that answers questions that might get picked up by some of our peer publications. And that’s how it all gets into it. But that’s a whole other side of the conversation. Christopher S. Penn: It is. It absolutely is. And again, if you would like to have a discussion about getting the more technical stuff implemented, like running query fan out things to see how Gemini rewrites your stuff, and you don’t want to do it yourself, hit us up. We’re more than happy to have the initial conversation and potentially do it for you because that’s what we do. You can always find us at trustinsights.ai/contact. If you have comments or questions—things that you’re thinking about with GEM—hop on our free Slack group. Go to trustinsights.ai/analyticsformarketers, where you and over 4,500 marketers are lamenting these acronyms every single day. Wherever you watch or listen to the show, if there’s a channel you’d rather have it instead, go to trustinsights.ai/tipodcast. You can find us at all the places fine podcasts are served. Happy new year. Happy 2026, and we’ll talk to you on the next one. *** Speaker 3: Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies. Trust Insights also offers expert guidance on social media analytics, marketing technology (MarTech) selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members such as CMO or Data Scientist to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the In-Ear Insights Podcast, the Inbox Insights newsletter, the So What Livestream webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights are adept at leveraging cutting-edge generative AI techniques like large language models and diffusion models, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations, data storytelling. This commitment to clarity and accessibility extends to Trust Insights educational resources which empower marketers to become more data driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information. Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.
BACK ON FIGG EP 346 Python P Learn more about your ad choices. Visit megaphone.fm/adchoices
Mike ‘Trapper Mike' nee ‘Python Cowboy' Kimmel joins Robbie on the podcast recorded on-location in sunny South Florida. Mike, who has made a name for himself as a prolific python catcher using German wire-haired pointers, talks about sustainability, invasive species, and what drives him to hunt. Get to know the guest: https://www.instagram.com/pythoncowboy/?hl=en Do you have questions we can answer? Send it via DM on IG or through email at info@theoriginsfoundation.org Support our Conservation Club Members! Maartens Safaris: https://maartenssafaris.com/ Grayl: https://grayl.com/ The Sun Project: https://theoriginsfoundation.org/conservation-projects/the-sun-project/ See more from Blood Origins: https://bit.ly/BloodOrigins_Subscribe Music: Migration by Ian Post (Winter Solstice), licensed through artlist.io This podcast is brought to you by Bushnell, who believes in providing the highest quality, most reliable & affordable outdoor products on the market. Your performance is their passion. https://www.bushnell.com This podcast is also brought to you by Silencer Central, who believes in making buying a silencer simple and they handle the paperwork for you. Shop the largest silencer dealer in the world. Get started today! https://www.silencercentral.com This podcast is brought to you by Safari Specialty Importers. Why do serious hunters use Safari Specialty Importers? Because getting your trophies home to you is all they do. Find our more at: https://safarispecialtyimporters.com Learn more about your ad choices. Visit megaphone.fm/adchoices
Do you wake up in the middle of the night, wondering if your business is in the #1 position in AI searches? Join Brad Friedman and Andreas Voniatis as they discuss whether today's SEO playbook is becoming obsolete as people increasingly use AI to research their needs and solutions. They discuss using data science to decode what AI wants to uprank your business everywhere your customers are searching. Andreas Voniatis is the founder of Artios, helping business leaders in ambitious B2B and technology companies achieve breakthrough website results. He's the author of Data-Driven SEO with Python and has been featured in Search Engine Watch. By combining advanced math, data science, and proprietary AI, Andreas says he delivers 3–5x more non-brand organic traffic and guarantees a 2x ROI—far beyond what traditional SEO and content marketing can provide. His journey began when he began thinking that traditional SEO and content marketing were becoming guesswork. Rather than leave the field, he retrained in data science. That shift changed everything — like Neo seeing the Matrix. With Python and new mental models, he began approaching organic growth with scientific precision and scale. The Digital Slice Podcast is brought to you by Magai. Up your AI game at https://friedmansocialmedia.com/magai And, if it's your first time purchasing, use BRAD30 at checkout to get 30% off your first 3 months. Visit thedigitalslicepodcast.com for complete show notes of every podcast episode.
Talk Python To Me - Python conversations for passionate developers
Today on Talk Python, the creators behind FastAPI, Flask, Django, Quart, and Litestar get practical about running apps based on their framework in production. Deployment patterns, async gotchas, servers, scaling, and the stuff you only learn at 2 a.m. when the pager goes off. For Django, we have Carlton Gibson and Jeff Triplet. For Flask, we have David Lord and Phil Jones, and on team Litestar we have Janek Nouvertné and Cody Fincher, and finally Sebastián Ramírez from FastAPI is here. Let's jump in. Episode sponsors Talk Python Courses Python in Production Links from the show Carlton Gibson - Django: github.com Sebastian Ramirez - FastAPI: github.com David Lord - Flask: davidism.com Phil Jones - Flask and Quartz(async): pgjones.dev Yanik Nouvertne - LiteStar: github.com Cody Fincher - LiteStar: github.com Jeff Triplett - Django: jefftriplett.com Django: www.djangoproject.com Flask: flask.palletsprojects.com Quart: quart.palletsprojects.com Litestar: litestar.dev FastAPI: fastapi.tiangolo.com Coolify: coolify.io ASGI: asgi.readthedocs.io WSGI (PEP 3333): peps.python.org Granian: github.com Hypercorn: github.com uvicorn: uvicorn.dev Gunicorn: gunicorn.org Hypercorn: hypercorn.readthedocs.io Daphne: github.com Nginx: nginx.org Docker: www.docker.com Kubernetes: kubernetes.io PostgreSQL: www.postgresql.org SQLite: www.sqlite.org Celery: docs.celeryq.dev SQLAlchemy: www.sqlalchemy.org Django REST framework: www.django-rest-framework.org Jinja: jinja.palletsprojects.com Click: click.palletsprojects.com HTMX: htmx.org Server-Sent Events (SSE): developer.mozilla.org WebSockets (RFC 6455): www.rfc-editor.org HTTP/2 (RFC 9113): www.rfc-editor.org HTTP/3 (RFC 9114): www.rfc-editor.org uv: docs.astral.sh Amazon Web Services (AWS): aws.amazon.com Microsoft Azure: azure.microsoft.com Google Cloud Run: cloud.google.com Amazon ECS: aws.amazon.com AlloyDB for PostgreSQL: cloud.google.com Fly.io: fly.io Render: render.com Cloudflare: www.cloudflare.com Fastly: www.fastly.com Watch this episode on YouTube: youtube.com Episode #533 deep-dive: talkpython.fm/533 Episode transcripts: talkpython.fm Theme Song: Developer Rap
Topics covered in this episode: ty: An extremely fast Python type checker and LSP Python Supply Chain Security Made Easy typing_extensions MI6 chief: We'll be as fluent in Python as we are in Russian Extras Joke Watch on YouTube About the show Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Brian #1: ty: An extremely fast Python type checker and LSP Charlie Marsh announced the Beta release of ty on Dec 16 “designed as an alternative to tools like mypy, Pyright, and Pylance.” Extremely fast even from first run Successive runs are incremental, only rerunning necessary computations as a user edits a file or function. This allows live updates. Includes nice visual diagnostics much like color enhanced tracebacks Extensive configuration control Nice for if you want to gradually fix warnings from ty for a project Also released a nice VSCode (or Cursor) extension Check the docs. There are lots of features. Also a note about disabling the default language server (or disabling ty's language server) so you don't have 2 running Michael #2: Python Supply Chain Security Made Easy We know about supply chain security issues, but what can you do? Typosquatting (not great) Github/PyPI account take-overs (very bad) Enter pip-audit. Run it in two ways: Against your installed dependencies in current venv As a proper unit test (so when running pytest or CI/CD). Let others find out first, wait a week on all dependency updates: uv pip compile requirements.piptools --upgrade --output-file requirements.txt --exclude-newer "1 week" Follow up article: DevOps Python Supply Chain Security Create a dedicated Docker image for testing dependencies with pip-audit in isolation before installing them into your venv. Run pip-compile / uv lock --upgrade to generate the new lock file Test in a ephemeral pip-audit optimized Docker container Only then if things pass, uv pip install / uv sync Add a dedicated Docker image build step that fails the docker build step if a vulnerable package is found. Brian #3: typing_extensions Kind of a followup on the deprecation warning topic we were talking about in December. prioinv on Mastodon notified us that the project typing-extensions includes it as part of the backport set. The warnings.deprecated decorator is new to Python 3.13, but with typing-extensions, you can use it in previous versions. But typing_extesions is way cooler than just that. The module serves 2 purposes: Enable use of new type system features on older Python versions. Enable experimentation with type system features proposed in new PEPs before they are accepted and added to the typing module. So cool. There's a lot of features here. I'm hoping it allows someone to use the latest typing syntax across multiple Python versions. I'm “tentatively” excited. But I'm bracing for someone to tell me why it's not a silver bullet. Michael #4: MI6 chief: We'll be as fluent in Python as we are in Russian "Advances in artificial intelligence, biotechnology and quantum computing are not only revolutionizing economies but rewriting the reality of conflict, as they 'converge' to create science fiction-like tools,” said new MI6 chief Blaise Metreweli. She focused mainly on threats from Russia, the country is "testing us in the grey zone with tactics that are just below the threshold of war.” This demands what she called "mastery of technology" across the service, with officers required to become "as comfortable with lines of code as we are with human sources, as fluent in Python as we are in multiple other languages." Recruitment will target linguists, data scientists, engineers, and technologists alike. Extras Brian: Next chapter of Lean TDD being released today, Finding Waste in TDD Still going to attempt a Jan 31 deadline for first draft of book. That really doesn't seem like enough time, but I'm optimistic. SteamDeck is not helping me find time to write But I very much appreciate the gift from my fam Send me game suggestions on Mastodon or Bluesky. I'd love to hear what you all are playing. Michael: Astral has announced the Beta release of ty, which they say they are "ready to recommend to motivated users for production use." Blog post Release page Reuven Lerner has a video series on Pandas 3 Joke: Error Handling in the age of AI Play on the inversion of JavaScript the Good Parts
We talk with Matt Trentini, Principal Software Engineer at Planet Innovation, about using MicroPython for professional embedded development—including medical devices. Matt shares how he was drawn back to embedded development after becoming jaded with traditional C-based workflows, and explains why MicroPython's interactive REPL and rapid development cycle have become game-changers for his team.We explore the practical realities of using an interpreted language on microcontrollers: how Planet Innovation uses it for Class B medical devices, what the performance trade-offs actually look like, and how features like the Unix port enable robust testing. Matt walks us through deployment considerations, explains how to integrate C code when needed, and shares compelling stories about real-time client demos that would be impossible in C++.Whether you're skeptical about high-level languages in embedded systems or curious about alternatives to traditional development workflows, this conversation offers a grounded, engineering-focused look at what MicroPython can—and can't—do in production environments.Key Topics[03:30] Matt's background and why he left embedded development before MicroPython brought him back[08:45] What MicroPython is: a complete re-implementation of Python for microcontrollers with REPL, filesystem, and machine module[13:20] How Planet Innovation introduced MicroPython through an OpenMV vision processing project[17:15] The game-changing power of the REPL for interactive hardware development and testing[21:40] Running MicroPython code on x86 for testing, and the mock machine library approach[26:30] Python library compatibility: what works, what doesn't, and memory considerations[29:50] Integrating C and C++ code through extension modules for performance-critical sections[33:10] Performance realities: 10-100x slower in interpreter, but can always drop to C speed when needed[37:45] Tooling: MPRemote, the magical mount feature, and development workflow[42:20] When NOT to use MicroPython: cost-sensitive high-volume products and resource constraints[45:30] Using MicroPython in Class B medical devices and safety-critical applications[49:15] Garbage collection: simple, predictable, and controllable—can be disabled when needed[52:40] Real-time client demo story: modifying state machines during a call and showing results immediately[56:20] Deployment: frozen code, disabling REPL and filesystem, and OTA considerations[01:01:30] Common mistakes: logic errors and inadvertent allocations rather than memory corruption[01:05:45] Threading, AsyncIO, and the Global Interpreter Lock across different ports[01:08:20] State machine frameworks: StateChart, Yasme, and PyTransitions[01:11:40] Junior developer productivity: faster onboarding compared to C/C++ embedded development[01:15:10] Getting started: board bring-up as an ideal first use case for MicroPython[01:17:50] Hardware-in-the-loop testing as a low-risk way to try MicroPythonNotable Quotes"It's hard to overstate how game changing the REPL is. Particularly as an embedded engineer, once you see that you can interactively talk to a peripheral, you can generate your own I2C, squirt it across and see what the peripheral does with it—suddenly driver development has just become easy to experiment with." — Matt Trentini"My trite answer is that MicroPython is slow—10 to 100 times slower than C in the interpreter. But my flip side answer is that it can always be made as fast as C because you can always drop into C to write things." — Matt Trentini"There was a moment in a recent project where we were discussing the workflow of a state machine with the client, and while we were on a call, another engineer was actually making changes to MicroPython code. Literally a couple minutes after we'd been hashing out the details, they showed the changes in the state machine using the REPL. The client was blown away—in 25 years of development, I have never had that kind of turnaround in C and C++." — Matt Trentini"If you want to make a good friend of your electronics engineers, give them a build of MicroPython that can run on their custom board. In the past, they would typically be waiting for weeks or sometimes months before a software resource could be assigned. Now I can turn around a MicroPython build in a day or two, and they can test I2C, GPIOs, and UARTs themselves." — Matt Trentini"The irony is that the people who have embedded C knowledge are actually the people that can benefit the most from MicroPython. It's like having a superpower—you understand what MicroPython is doing in the background, you know you're just effectively writing a lot less code." — Matt TrentiniResources MentionedMicroPython Official Site - The official MicroPython project website with documentation and downloadsOpenMV - Computer vision project using MicroPython for camera-based applicationsMPRemote - Tool for interacting with MicroPython devices, including the magical mount featurePlanet Innovation - Medical device consultancy using MicroPython in production devicesStateChart - State machine library compatible with Python and MicroPythonYasme - Yet another state machine library developed at Planet InnovationPyTransitions - Popular Python state machine library being ported to MicroPythonCircuitPython - Adafruit's fork of MicroPython with additional features and CPython compatibility focus You can find Jeff at https://jeffgable.com.You can find Luca at https://luca.engineer.Want to join the agile Embedded Slack? Click hereAre you looking for embedded-focused trainings? Head to https://agileembedded.academy/Ryan Torvik and Luca have started the Embedded AI podcast, check it out at https://embeddedaipodcast.com/
We're kicking off 2026 with two coding resources inspired by winter-break. This week, we feature Coddy.tech, a goal-based coding platform where students can choose a language (Python, JavaScript, HTML, and more) and a skill level, then follow a step-by-step learning path with built-in feedback and challenges. We also talk about projects.raspberrypi.org, a free project hub from the Raspberry Pi Foundation that offers Scratch, Python, AI, web dev, and more, all organized by student interests like nature, space, art, music, and science. Together, these tools support problem-solving, creativity, typing skills, and persistence through productive trial-and-error.Mentioned in this episode:Education Podcast NetworkTech Tools for Teachers is part of the Education Podcast Network. https://www.edupodcastnetwork.com/This podcast uses the following third-party services for analysis: Podcorn - https://podcorn.com/privacyPodtrac - https://analytics.podtrac.com/privacy-policy-gdrp
This is a recap of the top 10 posts on Hacker News on January 01, 2026. This podcast was generated by wondercraft.ai (00:30): Linux is good nowOriginal post: https://news.ycombinator.com/item?id=46457770&utm_source=wondercraft_ai(01:51): Bluetooth Headphone Jacking: A Key to Your Phone [video]Original post: https://news.ycombinator.com/item?id=46453204&utm_source=wondercraft_ai(03:12): A website to destroy all websitesOriginal post: https://news.ycombinator.com/item?id=46457784&utm_source=wondercraft_ai(04:33): Show HN: OpenWorkers – Self-hosted Cloudflare workers in RustOriginal post: https://news.ycombinator.com/item?id=46454693&utm_source=wondercraft_ai(05:54): Cameras and Lenses (2020)Original post: https://news.ycombinator.com/item?id=46455872&utm_source=wondercraft_ai(07:15): iOS allows alternative browser engines in JapanOriginal post: https://news.ycombinator.com/item?id=46453950&utm_source=wondercraft_ai(08:36): I rebooted my social lifeOriginal post: https://news.ycombinator.com/item?id=46453114&utm_source=wondercraft_ai(09:58): Finland detains ship and its crew after critical undersea cable damagedOriginal post: https://news.ycombinator.com/item?id=46456797&utm_source=wondercraft_ai(11:19): ACM Is Now Open AccessOriginal post: https://news.ycombinator.com/item?id=46454763&utm_source=wondercraft_ai(12:40): Python numbers every programmer should knowOriginal post: https://news.ycombinator.com/item?id=46454470&utm_source=wondercraft_aiThis is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
En 2025, une nouvelle expression s'est imposée dans le vocabulaire de la tech : le « vibe coding ». Derrière ce terme intrigant se cache une pratique qui transforme en profondeur la manière de développer des logiciels.Le vibe coding, que l'on peut traduire par « programmation intuitive », désigne une approche où le développeur ne code plus ligne par ligne, mais décrit simplement ce qu'il souhaite obtenir à une intelligence artificielle. Popularisé par Andrei Karpathy, ancien responsable de l'IA chez Tesla et cofondateur d'OpenAI, ce concept est né dans les communautés de développeurs avant de se diffuser largement dans l'écosystème numérique.Concrètement, il suffit désormais de formuler une demande en langage naturel : créer un script Python, concevoir une page web avec un formulaire, modifier l'interface d'une application ou même développer un jeu ou une application mobile complète. Cette méthode permet un gain de temps spectaculaire et ouvre la création logicielle à des non-développeurs, capables de produire des outils fonctionnels pour le web, le mobile ou des usages métiers comme des CMS ou des ERP.De nombreux outils incarnent cette tendance, à commencer par GitHub Copilot, mais aussi Cursor, Windsurf ou des assistants généralistes comme ChatGPT, Claude ou Gemini, qui génèrent du code à intégrer ensuite de manière classique. D'autres solutions vont plus loin encore, en produisant directement des applications prêtes à l'emploi, comme le propose la startup suédoise Lovable.Dans cet épisode, Sébastien Stormacq, responsable des relations développeurs chez AWS, partage une expérience concrète : la création, en une heure et sans écrire une seule ligne de code, d'un jeu inspiré de Pac-Man grâce au vibe coding. Un exemple révélateur de la puissance, mais aussi des limites de cette approche.Le phénomène soulève des questions cruciales : qualité et sécurité du code généré, risques de bugs majeurs, mais aussi impact sur l'emploi. Si le vibe coding accélère le travail des équipes et augmente la productivité des développeurs expérimentés, il fragilise davantage les profils juniors. Une chose est sûre : plus qu'un simple outil, le vibe coding redéfinit en profondeur le métier de développeur.-----------♥️ Soutien : https://mondenumerique.info/don
A $20 billion AI deal while you were away?
Talk Python To Me - Python conversations for passionate developers
Python in 2025 is in a delightfully refreshing place: the GIL's days are numbered, packaging is getting sharper tools, and the type checkers are multiplying like gremlins snacking after midnight. On this episode, we have an amazing panel to give us a range of perspectives on what matter in 2025 in Python. We have Barry Warsaw, Brett Cannon, Gregory Kapfhammer, Jodie Burchell, Reuven Lerner, and Thomas Wouters on to give us their thoughts. Episode sponsors Seer: AI Debugging, Code TALKPYTHON Talk Python Courses Links from the show Python Software Foundation (PSF): www.python.org PEP 810: Explicit lazy imports: peps.python.org PEP 779: Free-threaded Python is officially supported: peps.python.org PEP 723: Inline script metadata: peps.python.org PyCharm: www.jetbrains.com JetBrains: www.jetbrains.com Visual Studio Code: code.visualstudio.com pandas: pandas.pydata.org PydanticAI: ai.pydantic.dev OpenAI API docs: platform.openai.com uv: docs.astral.sh Hatch: github.com PDM: pdm-project.org Poetry: python-poetry.org Project Jupyter: jupyter.org JupyterLite: jupyterlite.readthedocs.io PEP 690: Lazy Imports: peps.python.org PyTorch: pytorch.org Python concurrent.futures: docs.python.org Python Package Index (PyPI): pypi.org EuroPython: tickets.europython.eu TensorFlow: www.tensorflow.org Keras: keras.io PyCon US: us.pycon.org NumFOCUS: numfocus.org Python discussion forum (discuss.python.org): discuss.python.org Language Server Protocol: microsoft.github.io mypy: mypy-lang.org Pyright: github.com Pylance: marketplace.visualstudio.com Pyrefly: github.com ty: github.com Zuban: docs.zubanls.com Jedi: jedi.readthedocs.io GitHub: github.com PyOhio: www.pyohio.org Watch this episode on YouTube: youtube.com Episode #532 deep-dive: talkpython.fm/532 Episode transcripts: talkpython.fm Theme Song: Developer Rap
¿Qué hace realmente un usuario de Linux durante todo un año? En este episodio no teorizamos: auditamos. He analizado mi historial de comandos de 2025 y los datos no mienten.Desde el dominio absoluto de Rust hasta la sorprendente eficiencia de uv en Python, hoy te desvelo las 20 herramientas que han vertebrado mi flujo de trabajo. Hablamos de productividad real, de cómo Neovim ha desplazado definitivamente a mis antiguos editores y de por qué herramientas como just o yadm son las joyas ocultas que deberías estar usando ya.En este episodio descubrirás:El Stack de la Eficiencia: Mi top 20 analizado por categorías (Desarrollo, Sistema y Navegación).La transición a Rust: ¿Por qué cargo es el motor de mi día a día?Adiós a la fricción: Cómo herramientas modernas están sustituyendo a los comandos clásicos de toda la vida.Telemetría personal: El método para que tú también audites tu terminal.Si quieres llevar tu productividad en Linux al siguiente nivel y conocer qué software está marcando la diferencia en 2025, este episodio es tu hoja de ruta.
This is a recap of the top 10 posts on Hacker News on December 25, 2025. This podcast was generated by wondercraft.ai (00:30): We invited a man into our home at Christmas and he stayed with us for 45 yearsOriginal post: https://news.ycombinator.com/item?id=46383552&utm_source=wondercraft_ai(01:52): Ruby 4.0.0Original post: https://news.ycombinator.com/item?id=46382011&utm_source=wondercraft_ai(03:14): Alzheimer's disease can be reversed in animal models? StudyOriginal post: https://news.ycombinator.com/item?id=46384919&utm_source=wondercraft_ai(04:36): I sell onions on the Internet (2019)Original post: https://news.ycombinator.com/item?id=46385308&utm_source=wondercraft_ai(05:58): Maybe the default settings are too highOriginal post: https://news.ycombinator.com/item?id=46387657&utm_source=wondercraft_ai(07:20): Mattermost restricted access to old messages after 10000 limit is reachedOriginal post: https://news.ycombinator.com/item?id=46383675&utm_source=wondercraft_ai(08:42): Python 3.15's interpreter for Windows x86-64 should hopefully be 15% fasterOriginal post: https://news.ycombinator.com/item?id=46384167&utm_source=wondercraft_ai(10:04): Who Watches the Waymos? I do [video]Original post: https://news.ycombinator.com/item?id=46380758&utm_source=wondercraft_ai(11:26): Asahi Linux with Sway on the MacBook Air M2 (2024)Original post: https://news.ycombinator.com/item?id=46384565&utm_source=wondercraft_ai(12:48): Free Software Foundation receives historic private donationsOriginal post: https://news.ycombinator.com/item?id=46382134&utm_source=wondercraft_aiThis is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Send us a textIn which we've talked about all the other Python projects, so why not?
This is a recap of the top 10 posts on Hacker News on December 23, 2025. This podcast was generated by wondercraft.ai (00:30): Inside CECOT – 60 Minutes [video]Original post: https://news.ycombinator.com/item?id=46361024&utm_source=wondercraft_ai(01:51): Fabrice Bellard Releases MicroQuickJSOriginal post: https://news.ycombinator.com/item?id=46367224&utm_source=wondercraft_ai(03:13): Meta is using the Linux scheduler designed for Valve's Steam Deck on its serversOriginal post: https://news.ycombinator.com/item?id=46366998&utm_source=wondercraft_ai(04:35): Instant database clones with PostgreSQL 18Original post: https://news.ycombinator.com/item?id=46363360&utm_source=wondercraft_ai(05:57): Ask HN: What are the best engineering blogs with real-world depth?Original post: https://news.ycombinator.com/item?id=46363921&utm_source=wondercraft_ai(07:18): We replaced H.264 streaming with JPEG screenshots (and it worked better)Original post: https://news.ycombinator.com/item?id=46367475&utm_source=wondercraft_ai(08:40): Snitch – A friendlier ss/netstatOriginal post: https://news.ycombinator.com/item?id=46361229&utm_source=wondercraft_ai(10:02): X-ray: a Python library for finding bad redactions in PDF documentsOriginal post: https://news.ycombinator.com/item?id=46369923&utm_source=wondercraft_ai(11:24): Show HN: CineCLI – Browse and torrent movies directly from your terminalOriginal post: https://news.ycombinator.com/item?id=46362655&utm_source=wondercraft_ai(12:45): 10 years bootstrapped: €6.5M revenue with a team of 13Original post: https://news.ycombinator.com/item?id=46363319&utm_source=wondercraft_aiThis is a third-party project, independent from HN and YC. Text and audio generated using AI, by wondercraft.ai. Create your own studio quality podcast with text as the only input in seconds at app.wondercraft.ai. Issues or feedback? We'd love to hear from you: team@wondercraft.ai
Dr Mark Palmeri is a professor at Duke University in the Biomedical Engineering (BME) field. He joins Chris to talk about using open tools (KiCad, ngspice, Zephyr, Jupyter notebooks, Python) to build educational resources and how he shares those courses with the world outside of Duke. He also walks through the Tympanometer project, built with Duke BME Design Fellows.
Topics covered in this episode: Has the cost of building software just dropped 90%? More on Deprecation Warnings How FOSS Won and Why It Matters Should I be looking for a GitHub alternative? Extras Joke Watch on YouTube About the show Sponsored by us! Support our work through: Our courses at Talk Python Training The Complete pytest Course Patreon Supporters Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. HEADS UP: We are taking next week off, happy holiday everyone. Michael #1: Has the cost of building software just dropped 90%? by Martin Alderson Agentic coding tools are collapsing “implementation time,” so the cost curve of shipping software may be shifting sharply Recent programming advancements haven't been that great of a true benefit: Cloud, TDD, microservices, complex frontends, Kubernetes, etc. Agentic AI's big savings are not just code generation, but coordination overhead reduction (fewer handoffs, fewer meetings, fewer blocks). Thinking, product clarity, and domain decisions stay hard, while typing and scaffolding get cheap. Is it the end of software dev? Not really, see Jevons paradox: when production gets cheaper, total demand can rise rather than spending simply falling. (Historically: the efficiency of coal use led to the increased consumption of coal) Pushes back on “only good for greenfield” by arguing agents also help with legacy code comprehension and bug-fixing. I 100% agree. #Legacy code for the win. Brian #2: More on Deprecation Warnings How are people ignoring them? yep, it's right in the Python docs: -W ignore::DeprecationWarning Don't do that! Perhaps the docs should give the example of emitting them only once -W once::::DeprecationWarning See also -X dev mode , which sets -W default and some other runtime checks Don't use warn, use the @warnings.deprecated decorator instead Thanks John Hagen for pointing this out Emits a warning It's understood by type checkers, so editors visually warn you You can pass in your own custom UserWarning with category mypy also has a command line option and setting for this --enable-error-code deprecated or in [tool.mypy] enable_error_code = ["deprecated"] My recommendation Use @deprecated with your own custom warning and test with pytest -W error Michael #3: How FOSS Won and Why It Matters by Thomas Depierre Companies are not cheap, companies optimize cost control. They do this by making purchasing slow and painful. FOSS is/was a major unlock hack to skip procurement, legal, etc. Example is months to start using a paid “Add to calendar” widget! It “works both ways”: the same bypass lowers the barrier for maintainers too, no need for a legal entity, lawyers, liability insurance, or sales motion. Proposals that “fix FOSS” by reintroducing supply-chain style controls (he name-checks SBOMs and mandated processes) risk being rejected or gamed, because they restore the very friction FOSS sidesteps. Brian #4: Should I be looking for a GitHub alternative? Pricing changes for GitHub Actions The self-hosted runner pricing change caused a kerfuffle. It's has been postponed But… if you were to look around, maybe pay attention to These 4 GitHub alternatives are just as good—or better Codeburg, BitBucket, GitLab, Gitea And a new-ish entry, Tangled Extras Brian: End of year sale for The Complete pytest Course Use code XMAS2025 for 50% off before Dec 31 Writing work on Lean TDD book on hold for holidays Will pick up again in January Michael: PyCharm has better Ruff support now out of the box, via Daniel Molnar This is from the release notes of 2025.3: "PyCharm 2025.3 expands its LSP integration with support for Ruff, ty, Pyright, and Pyrefly.” If you check out the LSP section it will land you on this page and you can go to Ruff. The Ruff doc site was also updated. Previously it was only available external tools and a third party plugin, this feels like a big step. Fun quote I saw on ExTwitter: May your bug tracker be forever empty. Joke: Try/Catch/Stack Overflow Create a super annoying linkedin profile - From Tim Kellogg, submitted by archtoad
Building a Fiction Editing Pipeline with Regex and Python — Part 1 by Eddie Tonkoi Support the Show CCATP #826 — Adam Engst on The Phone App in iOS 26 Transcript of NC_2025_12_25 Join the Conversation: allison@podfeet.com podfeet.com/slack Support the Show: Patreon Donation Apple Pay or Credit Card one-time donation PayPal one-time donation Podfeet Podcasts Mugs at Zazzle NosillaCast 20th Anniversary Shirts Referral Links: Setapp - 1 month free for you and me PETLIBRO - 30% off for you and me Parallels Toolbox - 3 months free for you and me Learn through MacSparky Field Guides - 15% off for you and me Backblaze - One free month for me and you Eufy - $40 for me if you spend $200. Sadly nothing in it for you. PIA VPN - One month added to Paid Accounts for both of us CleanShot X - Earns me $25%, sorry nothing in it for you but my gratitude
Talk Python To Me - Python conversations for passionate developers
Have you ever thought about getting your small product into production, but are worried about the cost of the big cloud providers? Or maybe you think your current cloud service is over-architected and costing you too much? Well, in this episode, we interview Michael Kennedy, author of "Talk Python in Production," a new book that guides you through deploying web apps at scale with right-sized engineering. Episode sponsors Seer: AI Debugging, Code TALKPYTHON Agntcy Talk Python Courses Links from the show Christopher Trudeau - guest host: www.linkedin.com Michael's personal site: mkennedy.codes Talk Python in Production Book: talkpython.fm glances: github.com btop: github.com Uptimekuma: uptimekuma.org Coolify: coolify.io Talk Python Blog: talkpython.fm Hetzner (€20 credit with link): hetzner.cloud OpalStack: www.opalstack.com Bunny.net CDN: bunny.net Galleries from the book: github.com Pandoc: pandoc.org Docker: www.docker.com Watch this episode on YouTube: youtube.com Episode #531 deep-dive: talkpython.fm/531 Episode transcripts: talkpython.fm Theme Song: Developer Rap
Today Eric Chou dives deep into network automation and operational simplicity with guest Hardik Ajmera, VP of Product Management at Extreme Networks. In this sponsored episode, they talk about the ‘network fabric', Extreme Platform ONE, and, of course, what's next with AI in the world of enterprise networking. Hardik also shares how customers in complex... Read more »
Gabe Kunda (Marvel Rivals, Apex Legends) and Natasha Rothwell (The White Lotus, Insecure) co-star in a Ugandan legend about friendship, rivalry, and making music. Sign up for our monthly newsletter, "The Lion's Roar", here.
Topics covered in this episode: Deprecations via warnings docs PyAtlas: interactive map of the top 10,000 Python packages on PyPI. Buckaroo Extras Joke Watch on YouTube About the show Connect with the hosts Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky) Brian: @brianokken@fosstodon.org / @brianokken.bsky.social Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky) Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 10am PT. Older video versions available there too. Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form? Add your name and email to our friends of the show list, we'll never share it. Brian #1: Deprecations via warnings Deprecations via warnings don't work for Python libraries Seth Larson How to encourage developers to fix Python warnings for deprecated features Ines Panker Michael #2: docs A collaborative note taking, wiki and documentation platform that scales. Built with Django and React. Made for self hosting Docs is the result of a joint effort led by the French
Talk Python To Me - Python conversations for passionate developers
For years, building interactive widgets in Python notebooks meant wrestling with toolchains, platform quirks, and a mountain of JavaScript machinery. Most developers took one look and backed away slowly. Trevor Manz decided that barrier did not need to exist. His idea was simple: give Python users just enough JavaScript to unlock the web's interactivity, without dragging along the rest of the web ecosystem. That idea became anywidget, and it is quickly becoming the quiet connective tissue of modern interactive computing. Today we dig into how it works, why it has taken off, and how it might change the way we explore data. Episode sponsors Seer: AI Debugging, Code TALKPYTHON PyCharm, code STRONGER PYTHON Talk Python Courses Links from the show Trevor on GitHub: github.com anywidget GitHub: github.com Trevor's SciPy 2024 Talk: www.youtube.com Marimo GitHub: github.com Myst (Markdown docs): mystmd.org Altair: altair-viz.github.io DuckDB: duckdb.org Mosaic: uwdata.github.io ipywidgets: ipywidgets.readthedocs.io Tension between Web and Data Sci Graphic: blobs.talkpython.fm Quak: github.com Walk through building a widget: anywidget.dev Widget Gallery: anywidget.dev Video: How do I anywidget?: www.youtube.com PyCharm + PSF Fundraiser: pycharm-psf-2025 code STRONGER PYTHON Watch this episode on YouTube: youtube.com Episode #530 deep-dive: talkpython.fm/530 Episode transcripts: talkpython.fm Theme Song: Developer Rap