Podcasts about APIs

  • 3,177 PODCASTS
  • 9,332 EPISODES
  • 42m AVG DURATION
  • 2 DAILY NEW EPISODES
  • Sep 9, 2025 LATEST

Best podcasts about APIs

Show all podcasts related to APIs

Latest podcast episodes about APIs

The Tech Blog Writer Podcast
3415: Secure GenAI for SAP: Syntax Systems CodeGenie on BTP

Sep 9, 2025 · 25:31


I sat down with Leo de Araujo, Head of Global Business Innovation at Syntax Systems, to unpack a problem every SAP team knows too well. Years of enhancements and quick fixes leave you with custom code that nobody wants to document, a maze of SharePoint folders, and hard questions whenever S/4HANA comes up. What does this program do? What breaks if we change that field? Do we have three versions of the same thing? Leo's answer is Syntax AI CodeGenie, an agentic AI solution with a built-in chatbot that finally treats documentation and code understanding as a living part of the system, not an afterthought.

Here's the thing: CodeGenie automates the creation and upkeep of custom code documentation, then lets you ask plain-language questions about function and business value. Instead of hunting through 40-page PDFs, teams can ask, “Do we already upload sales orders from Excel?” or “What depends on this BAdI?” and get an instant explanation. That changes migration planning. You can see what to keep, what to retire, and where standard capabilities or new extensions make more sense, which shortens the path to S/4HANA Cloud and helps you stay on a clean core.

We also talk about how this is delivered. CodeGenie runs on SAP Business Technology Platform, connects through standard APIs, and avoids intrusive add-ons. It is compatible with SAP S/4HANA, S/4HANA Cloud Private Edition through RISE with SAP, and on-premises ECC. Security comes first, with tenant isolation for each customer and no custom code shared externally or used for AI model training. The result is a setup that respects enterprise guardrails while still giving developers and architects fast answers.

Clean core gets a plain explanation in this episode: build outside the application with published APIs, keep upgrades predictable, and innovate at the edge where you can move quickly. CodeGenie gives you the visibility to make that real, surfacing what you actually run today and how it ties to outcomes, so you can design a migration roadmap that fits the business rather than guessing from stale documents.

Leo also previews the Gen AI Starter Pack, launching September 9. It bundles a managed, model-flexible platform with workshops, use-case ideation, and initial builds, so teams can move from curiosity to working solutions without locking themselves into a single provider. Paired with CodeGenie and Syntax's development accelerators, the Starter Pack points toward something SAP leaders have wanted for years: a practical way to shift from in-core customizations to clean-core extensions with much less friction.

If you are planning S/4HANA, balancing hybrid and multi-cloud realities, or simply tired of tribal knowledge around critical programs, this conversation is for you. We get specific about how CodeGenie works, where it saves time and cost, and how Syntax is shaping a playbook for AI that helps teams deliver results they can trust.

*********

Visit the sponsor of Tech Talks Network: Land your first job in tech in 6 months with the Software QA Engineering Bootcamp from Careerist: https://crst.co/OGCLA

INspired INsider with Dr. Jeremy Weisz
[SaaS Series] Building the Future of Voice AI With Kwin Kramer

Sep 9, 2025 · 57:29


Kwindla “Kwin” Kramer is the CEO and Co-founder of Daily, a leading real-time video platform that provides APIs for integrating audio, video, and AI into apps. Under his leadership, Daily has powered millions of video and voice minutes each month for clients like AWS, Google, Epic, and Nvidia, and is recognized as a Y Combinator Top Company. An MIT Media Lab alumnus, Kwin previously co-founded Oblong Industries, creator of the gesture-based interfaces seen in Minority Report. He is passionate about advancing distributed systems and AI to shape the future of telehealth, education, and conversational technology.

In this episode…

Imagine a virtual assistant that not only schedules your appointments but also remembers every detail of past interactions — across healthcare, education, and even gaming. What if seamless real-time audio, video, and AI tools could elevate these experiences for everyone, not just the tech elite? How did the journey of making this technology accessible to millions actually unfold?

Kwin Kramer pioneered developer infrastructure that makes embedding real-time audio, video, and AI into products simple and scalable. Drawing on his experience at Y Combinator and Oblong Industries, he learned to bridge the gap between imagination and reality for companies such as Boeing and GE. With Daily, Kwin shifted to empowering startups in telehealth, edtech, and more with open, scalable tools. His work enables doctors, teachers, and other professionals to harness AI and real-time media, signaling a future where AI copilots transform daily life.

In this episode of the Inspired Insider Podcast, Dr. Jeremy Weisz interviews Kwin Kramer, CEO and Co-founder of Daily. They explore the evolution of developer tools, lessons from Y Combinator, and how open-source ecosystems are shaping healthcare, education, and more. The conversation covers how Daily powers telehealth, adaptive learning, and conversational agents; the shift from custom demos to scalable APIs; and why the future of software is voice-first and deeply personalized.

Dev Interrupted
Your AI demo is a lie (and how to make it real) | Arcade's Alex Salazar

Sep 9, 2025 · 61:58


AI that talks is easy, but AI that acts securely is where everything breaks down. We're joined by Alex Salazar, CEO of Arcade, to confront the massive and often underestimated gap between a flashy AI demo and a production-ready system. Drawing from his team's own pivot from building agents to building the tools that secure them, he explains why a working demo is only 1% of the journey. Alex breaks down the four "demo killers" that cause most agent projects to fail: inconsistency, security flaws, prohibitive costs, and high latency.

Alex reveals the counterintuitive solution his team discovered: the key to making non-deterministic AI reliable is to dial up determinism. Learn why giving an AI a constrained set of intention-based tools - like a calculator or a multiple-choice test - dramatically reduces errors and solves critical security challenges that plague open-ended systems. He explains why you can't just wrap existing APIs and must instead build custom, workflow-centric tools for your agents. This is an essential listen for anyone who wants to build AI that doesn't just talk, but acts securely on behalf of your users.

Check out:
Register now: Closing the AI gap: Exceeding executive expectations for AI productivity

Follow the hosts:
Follow Ben
Follow Andrew

Follow today's guest(s):
Learn more about Arcade: Arcade.dev
Arcade's YouTube Channel: Watch examples and walkthroughs on building agents
Connect with Alex Salazar: LinkedIn

Referenced in today's show:
Google avoids break-up but must share data with rivals
Welcoming The Browser Company to Atlassian
"~40% of daily code written at Coinbase is AI-generated. I want to get it to >50% by October." on X
Crushing JIRA tickets is a party trick, not a path to impact

Support the show:
Subscribe to our Substack
Leave us a review
Subscribe on YouTube
Follow us on Twitter or LinkedIn

Offers:
Learn about Continuous Merge with gitStream
Get your DORA Metrics free forever
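The constrained, intention-based tool idea Alex describes can be sketched in a few lines. This is an illustrative Python sketch, not Arcade's actual API; the action names and validation rules are invented:

```python
# Sketch of an "intention-based" tool with a closed set of allowed actions.
# Constraining what the agent can express shrinks both the error surface
# and the security surface compared with wrapping an open-ended API.
ALLOWED_ACTIONS = {"refund_order", "check_order_status"}

def run_tool(action: str, order_id: str) -> str:
    """Validate the agent's intent before anything touches a real system."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action}")
    if not order_id.isdigit():
        raise ValueError("order_id must be numeric")
    # A real implementation would call the backing service here.
    return f"{action} ok for order {order_id}"

print(run_tool("check_order_status", "1042"))  # check_order_status ok for order 1042
```

Like a multiple-choice test, the agent can only pick from options that are safe by construction; anything free-form fails closed.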

Real Estate Team OS
Agent Success with Camila Rivera and Julia O'Buckley | [Techtember] Ep 077

Sep 9, 2025 · 44:26


In order to protect, scale, and optimize a real estate business, you need systems. Tech-powered systems. AI-enhanced systems.

In our second Techtember episode, Chief Operating Officer Camila Rivera and Operations Manager Julia O'Buckley give you insights into the evolution of tech stacks that don't just enhance efficiency - they also drive agent success. CRM transitions. Open APIs. Enhanced visibility. Efficiency gains for agents and teams alike. It's all here in just under 20 minutes with Camila, then just under 20 minutes with Julia. Enjoy our second Techtember episode … here on Real Estate Team OS!

CAMILA RIVERA
A 10-year team member and COO of the Laurie Finkelstein Reader Real Estate Team in South Florida, Camila Rivera has overseen significant evolutions in their technology, efficiency, and visibility. Camila breaks down key steps in that evolution, including tips for a successful CRM transition and the one thing that can't be automated or artificially generated (spoiler: it's human connection, and it shows up in their online reviews).

Watch or listen for insights into:
→ Digitizing operations, getting more out of your current tech investments, and improving your sales pipeline
→ Specific steps and tips for transitioning from one CRM to another (and bringing 100,000+ contacts with you)
→ How open APIs improve data and communication flows, enhance visibility for operations staff and leadership, and deliver efficiency gains to agents
→ When human touch matters most and how tech efficiency supports it

JULIA O'BUCKLEY
When she joined The Laughton Team, Julia O'Buckley wasn't sure what an API was or how it worked. Today, as Operations Manager, she's a “full-fledged 10” when it comes to tech knowledge and passion. Julia walks us through the four key elements of a tech stack that supports a 200+ agent, 5-office operation and explains how and why she's re-packaging and re-selling it to agents. She also explains how her efforts add efficiency, improve data integrity, provide transparency, and unlock sales.

Watch or listen for insights into:
→ How her familiarity with and passion for tech grew, what tech changes delivered great agent feedback, and a non-real-estate app she opens every day
→ How Follow Up Boss, Sisu, Fello, and HouseWhisper help agents and how she would describe that value to agents
→ Which tools are the “base” of the operation, the “sexiest” in the stack, and like “Homebot on steroids”
→ How the team humanizes their AI assistant “Nora” and how it brings all the other tools and tech together

FOLLOW UP AND LEARN MORE
Mentioned in this episode:
→ First episode with Julia O'Buckley https://www.realestateteamos.com/episode/real-estate-tech-stack-scale-operations-julia-obuckley
→ Inside The Laughton Team series https://www.youtube.com/playlist?list=PLCJiXNo93cVr9Oc0ptse5z6Vaq5l0T1kA
→ George Laughton and Billy Hobbs https://www.realestateteamos.com/episode/profitability-durability-real-estate-team-george-laughton-billy-hobbs
→ Justin McLellan https://www.realestateteamos.com/episode/real-estate-agent-productivity-skill-justin-mclellan
→ Follow Up Boss Success Community https://www.facebook.com/groups/followupbosscommunity
→ Follow Up Boss https://www.followupboss.com/
→ Dotloop https://www.dotloop.com/
→ Sisu https://sisu.co/
→ Fello https://fello.ai/
→ HouseWhisper https://www.housewhisper.ai/

Connect with Camila Rivera:
→ https://www.instagram.com/camila_rivera/
→ https://www.instagram.com/lauriefinkelsteinreaderteam/
→ Email: camila at lauriereader dot com

Connect with Julia O'Buckley:
→ https://www.instagram.com/juliaobuckley/
→ https://www.instagram.com/laughtonteam

Connect with Real Estate Team OS:
→ https://www.realestateteamos.com
→ https://linktr.ee/realestateteamos
→ https://www.instagram.com/realestateteamos/

AWS Podcast
#736: AWS News: New Amazon Bedrock APIs, New EC2 Instance Types and Lots More.

Sep 8, 2025 · 22:44


Simon takes you through a big list of cool new things - something for everyone.

The PowerShell Podcast
PowerShell, OAuth, and Automation in the Cloud with Emanuel Palm

Sep 8, 2025 · 50:09


Microsoft MVP Emanuel Palm joins The PowerShell Podcast to share his journey from managing printers in Sweden to automating the cloud with PowerShell and Azure. He talks about building the AZAuth module for OAuth authentication, using GitHub Actions for CI/CD, and the importance of blogging and community involvement. Plus, Emanuel reveals his unique side hobby... roasting coffee!

Key Takeaways:
From printers to the cloud: Emanuel's career shows how PowerShell can open doors, from automating IT tasks to driving cloud automation and DevOps practices.
Community and sharing matter: Blogging, presenting, and contributing help you grow your own understanding while creating opportunities for others.
Automation and authentication: With tools like GitHub Actions and his AZAuth module, Emanuel demonstrates how to simplify workflows and securely interact with APIs.

Guest Bio:
Emanuel Palm is a Microsoft MVP based in Sweden, where he is a consultant focused on Microsoft technologies and is active in the PowerShell community. Emanuel is the creator of the AZAuth module, a lightweight solution for handling OAuth authentication in PowerShell, and a frequent speaker at events like PowerShell Conference Europe. Beyond tech, Emanuel is a coffee enthusiast who even roasts his own beans as a side hobby.

Resource Links:
Emanuel's Blog: https://pipe.how
GitHub – Emanuel Palm: https://github.com/palmemanuel
X / BlueSky: @palmemanuel
AZAuth Module on GitHub: https://github.com/PalmEmanuel/AzAuth
Emanuel's PS Wednesday: https://www.youtube.com/watch?v=trP2LLDynA0
Arkanum Coffee (Emanuel's hobby project): https://arkanum.coffee
PDQ Discord: https://discord.gg/pdq
Connect with Andrew: https://andrewpla.tech/links
The PowerShell Podcast on YouTube: https://youtu.be/-uHHGVH1Kcc
The PowerShell Podcast hub: https://pdq.com/the-powershell-podcast

Let's Talk Supply Chain
489: Time To Swap Your Axe For A Chainsaw: The Power of Agentic AI

Sep 8, 2025 · 50:43


Colby Ward of Amazon Web Services talks about leveraging agentic AI for competitive advantage; navigating change management; data; and eliminating spreadsheets.

IN THIS EPISODE WE DISCUSS:

[04.04] An introduction to Colby, his background, and role at Amazon Web Services.
[06.58] An overview of AWS – who they are, what they do, and how they help their customers.
“Everybody knows the expression: ‘Garbage in, garbage out!' But no one seems to focus on that area enough. They seem to say: ‘Well, here's our solution, whatever your data is, we'll take it.' And nobody focuses on: ‘How can I connect your data in an easier way?'”
“Our goal is to eventually eliminate the spreadsheet on every supply chainer's desktop!”
[10.30] Where enterprise technology has focused over the last couple of decades, the positives and negatives, and how we've got to where we are today.
[14.17] The problem with generic SaaS systems in supply chain, and how agentic AI can deliver improved orchestration and eliminate bias.
“SaaS systems are designed to fit any customer need, not your specific needs…. It has to be built in a generic way, so they offer configurable options, different APIs, but you're molding a generic system. And the problem is: supply chain problems aren't generic. They're specific. When you translate the word supply chain, what you're really saying is business operations. So that's a big topic!”
“Not conjecture, not just leaving it up to somebody's best instinct… When you're operating on data-driven decisions, you don't have to second guess.”
[22.27] How businesses can tackle outdated processes and deal with dirty data to leverage agentic AI in the most effective ways.
“People say that data is the new oil, and that's true. But, like oil, if it's unrefined, it's useless.”
[29.17] Leveraging agentic AI for competitive advantage, and why businesses should be thinking about creating the most amount of outputs from the least amount of inputs.
“Enterprise A that doesn't use AI will eventually be the lumber yard that never bothered with a chainsaw. There's a reason we moved from axes to chainsaws, and AI is your chainsaw!”
[35.21] Whether or not agentic AI will level the playing field.
[37.34] An overview of ontologies, process and knowledge graphs, and how they can guide agents to achieve next-level intelligence.
[42.36] Change management, and how businesses should be thinking about people alongside technology to ensure the best chance of success.
“If you visualize this as a tool to help you with your job, you're immediately going to be better off… Make sure you have an AI strategy in place. Don't be dismissive that this is the next new fad. It's not. It's transformative.”
[45.00] Are we ready? What organizations should take away from this discussion.

RESOURCES AND LINKS MENTIONED:

Head over to Amazon Web Services' website now to find out more and discover how they could help you too. You can also connect with AWS and keep up to date with the latest over on LinkedIn, Facebook, YouTube, Instagram or X (Twitter), or you can connect with Colby on LinkedIn. If you enjoyed this episode and want to hear more about AI and change management in supply chain, check out 458: Demystifying Industry Buzzwords and Innovating Intermodal, with Lynxis or 486: Revealed – The Number One Way To Make Your Supply Chain Future-Proof. Check out our other podcasts HERE.

Point-Free Videos
Modern Search: Highlights & Snippets

Sep 8, 2025 · 26:34


Subscriber-Only: Today's episode is available only to subscribers. If you are a Point-Free subscriber you can access your private podcast feed by visiting https://www.pointfree.co/account. --- SQLite's full-text search capabilities come with many bells and whistles, including support for highlighting search term matches in your UI, as well as generating snippets for where matches appear in a larger corpus. We will take these APIs for a spin and enhance our Reminders search UI.
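For readers who want a feel for the APIs mentioned here, this is a minimal Python sketch of FTS5's `highlight()` and `snippet()` auxiliary functions. The table and rows are invented for illustration; the episode's Reminders schema will differ.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# FTS5 virtual table; requires an SQLite build with FTS5 enabled.
db.execute("CREATE VIRTUAL TABLE reminders USING fts5(title, notes)")
db.execute(
    "INSERT INTO reminders VALUES (?, ?)",
    ("Groceries", "Pick up milk, eggs, and bread from the store"),
)

# highlight(table, column_index, open, close) wraps each matched token.
hit = db.execute(
    "SELECT highlight(reminders, 1, '<b>', '</b>') "
    "FROM reminders WHERE reminders MATCH 'milk'"
).fetchone()[0]
print(hit)  # Pick up <b>milk</b>, eggs, and bread from the store

# snippet(table, column, open, close, ellipsis, max_tokens) trims the
# surrounding text to a short window around the match.
snip = db.execute(
    "SELECT snippet(reminders, 1, '[', ']', '…', 4) "
    "FROM reminders WHERE reminders MATCH 'milk'"
).fetchone()[0]
print(snip)
```

The markers passed to `highlight()` can be whatever your UI layer understands, from HTML tags to attributed-string sentinels.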

Colorado = Security Podcast
278 - 9/8 - Jason Hayes, President of Colorado CSA

Sep 7, 2025 · 88:38


Our feature guest this week is Jason Hayes, President of Colorado CSA, interviewed by Frank Victory. News from Elitch Gardens, Xcel, EchoStar, Palantir, DaVita, Swimlane and a lot more!

Come join us on the Colorado = Security Slack channel to meet old and new friends. Sign up for our mailing list on the main site to receive weekly updates - https://www.colorado-security.com/. If you have any questions or comments, or any organizations or events we should highlight, contact Alex and Robb at info@colorado-security.com

This week's news:
New Colorado area code rolls out for dozens of counties
After years of doubt, Elitch Gardens may stick around for a while
Space Case: Donald Trump's Rocky Relationship With Colorado
Xcel says it needs to spend $22 billion to keep up with potential demand from Colorado data centers by 2040
Englewood-based EchoStar gives up wireless network independence for enough cash to survive
Palantir is Colorado's highest-valued company — and at center of controversy — five years after move to Denver
How They Got In — DaVita Inc.
Colorado Adds AI-Generated Deepfakes to Revenge Porn, Child Exploitation Laws
Colorado Delays AI Act Compliance: What Lawyers and Business Leaders Need to Know
When the Government Can See Everything: How Palantir Is Mapping the Nation's Data
Swimlane Announces Strategic Leadership Appointments to Accelerate Push to Agentic AI Security Operations

Upcoming Events:
Check out the full calendar
Denver CSA - CCZT - Study Group - Session 2 (Virtual) - 9/10
ISSA Denver - September Chapter Meeting - 9/10
Denver CSA - Beyond Patching - Prioritizing Cloud Workload Risk with Exposure Management - 9/16
Denver OWASP - Why you should hack your own APIs - 9/17
ISACA Denver - Full Day! September Chapter Meeting - 9/18
ISSA Pikes Peak - Chapter Meeting - 9/24
Deciphering human behavior to Get Security Done (GSD): Understanding yourself and others to win at security - 9/25
ISACA Denver - ISACA CommunITy Day 2025 - 10/4
ISSA Denver - Denver ISSA Chapter Meeting at Secure World: How I Got Caught: A Deep Dive Into a 800K Fraud - 10/9
View our events page for a full list of upcoming events

* Thanks to CJ Adams for our intro and exit! If you need any voiceover work, you can contact him here at carrrladams@gmail.com. Check out his other voice work here.
* Intro and exit song: "The Language of Blame" by The Agrarians is licensed under CC BY 2.0

Daniel Ramos' Podcast
Episode 497: Sabbath School - Reading for September 8, 2025

Sep 7, 2025 · 3:48


====================================================
SUBSCRIBE: https://www.youtube.com/channel/UCNpffyr-7_zP1x1lS89ByaQ?sub_confirmation=1
====================================================

SABBATH SCHOOL LESSON - THIRD QUARTER 2025
Narrated by: Eddie Rodriguez
From: Guatemala, Guatemala
A courtesy of DR'Ministries and Canaan Seventh-Day Adventist Church

MONDAY, SEPTEMBER 8: IDOLATRY AND EVIL

Read Exodus 32:6. Where did their idolatry quickly lead them? (See also Ps. 115:4-8; 135:15-18; Isa. 44:9, 10.)

The golden calf resembled the Egyptian bull-god Apis, or the cow-god Hathor. It was a flagrant transgression of the first and second commandments (Exod. 20:3-6). This violation could not go unpunished, because it openly broke the people's relationship with the living Lord. Instead of worshiping their Creator, the Israelites worshiped their own creation, which could not see, hear, smell, speak, care, love, or lead. The order of Creation was inverted: rather than understanding that they had been created in God's image, they made a god, not even in their own image, which would already have been bad enough, but in the image of an animal. Was this the god they wanted to serve? They had thus sinned gravely against the Lord (Isa. 31:7; 42:17).

In what ways does the apostasy of the golden calf reflect what Romans 1:22-27 says?

Idolatry rejects the theological truth that God is God and man is man; it erases the gap between the Deity and the human being (Eccl. 5:2) and destroys the connection between the two. Whether brazenly and openly or hidden in the heart, idolatry quickly destroys our relationship with the Lord and leads us into a downward moral spiral. It is no surprise that they began to feast after offering sacrifices to the idol, in what Ellen White described as “an imitation of the idolatrous feasts of Egypt” (Patriarchs and Prophets, p. 331). Humans are brilliant at making their own idols. They create their own gods, which is bad enough, and then they go and serve them. They substitute for the Creator things that, sooner or later, lead to moral degradation. In what ways do human beings today worship the Creation instead of adoring the Creator?

a16z
Building APIs for Developers and AI Agents

Sep 6, 2025 · 26:34


Stainless founder Alex Rattray joins a16z partner Jennifer Li to talk about the future of APIs, SDKs, and the rise of MCP (Model Context Protocol). Drawing on his experience at Stripe—where he helped redesign API docs and built code-generation systems—Alex explains why the SDK is the API for most developers, and why high-quality, idiomatic libraries are essential not just for humans, but now for AI agents as well.

They dive into:
The evolution of SDK generation and lessons from building at scale inside Stripe.
Why MCP reframes APIs as interfaces for large language models.
The challenges of designing tools and docs for both developers and AI agents.
How context limits, dynamic tool generation, and documentation shape agent usability.
The future of developer platforms in an era where “every company is an API company.”

Timecodes:
0:00 – Introduction: APIs as the Dendrites of the Internet
1:49 – Building API Platforms: Lessons from Stripe
3:03 – SDKs: The Developer's Interface
6:16 – The MCP Model: APIs for AI Agents
9:23 – Designing for LLMs and AI Users
13:08 – Solving Context Window Challenges
16:57 – The Importance of Strongly Typed SDKs
21:07 – The Future of API and Agent Experience
24:45 – Lessons from Leading API Companies
26:14 – Outro and Disclaimers

Resources:
Find Alex on X: https://x.com/rattrayalex
Find Jennifer on X: https://x.com/JenniferHli

Stay Updated:
Let us know what you think: https://ratethispodcast.com/a16z
Find a16z on Twitter: https://twitter.com/a16z
Find a16z on LinkedIn: https://www.linkedin.com/company/a16z
Subscribe on your favorite podcast app: https://a16z.simplecast.com/
Follow our host: https://x.com/eriktorenberg

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.
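The "APIs as interfaces for LLMs" idea discussed above boils down to re-describing each endpoint as a typed tool. A hedged sketch using a generic JSON-Schema-style shape (not MCP's or Stainless's actual wire format; the endpoint name is invented):

```python
# An API operation re-described as a tool an agent can call. Strong typing
# ("required", property types) is what lets the model call it reliably.
tool = {
    "name": "create_customer",
    "description": "Create a customer record from an email address.",
    "input_schema": {
        "type": "object",
        "properties": {"email": {"type": "string"}},
        "required": ["email"],
    },
}

def validate_call(args: dict) -> bool:
    """Minimal check that an agent's arguments satisfy the schema."""
    schema = tool["input_schema"]
    return all(k in args for k in schema["required"]) and isinstance(
        args.get("email"), str
    )

print(validate_call({"email": "a@example.com"}))  # True
print(validate_call({}))                          # False
```

Because context windows are finite, descriptions like this must stay short and unambiguous; that constraint is much of what the episode's "agent experience" discussion is about.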

FinTech Newscast
Ep 265- Prometeo Co-CEO Ximena Aleman

Sep 5, 2025 · 36:42


Excited to welcome Ximena Aleman, Co-CEO and Co-Founder of Prometeo, on this week's Fintech Newscast! She brings sharp insights into the future of cross-border payments, how financial uncertainty is creating new demand for fintech services, and why APIs are the secret weapon making global finance simpler than ever: https://prometeoapi.com. Click Subscribe to keep up to date on the world of fintech!

Defense in Depth
How Are You Managing the Flow of AI Data

Sep 4, 2025 · 31:25


All links and images can be found on CISO Series. Check out this post for the discussion that is the basis of our conversation on this week's episode, co-hosted by David Spark, the producer of CISO Series, and Geoff Belknap. Joining us is our sponsored guest Mokhtar Bacha, founder and CEO, Formal.

In this episode:
Access management faces transformation
AI agents demand new authentication paradigms
AI complexity demands simplified governance approaches
Data-centric identity management replaces role-based approaches

Huge thanks to our sponsor, Formal
Formal secures human and AI agent access to MCP servers, infrastructure, and data stores by monitoring and controlling data flows in real time. Using a protocol-aware reverse proxy, Formal enforces least-privilege access to sensitive data and APIs, ensuring AI behavior stays predictable and secure. Visit joinformal.com to learn more or schedule a demo.
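Formal's enforcement happens in a protocol-aware reverse proxy; as a generic illustration of the least-privilege idea (not Formal's product or API; the identities and tables are invented), the core policy check looks like:

```python
# Map each identity (human or AI agent) to the data it may touch, and
# fail closed on everything else. A real proxy would parse the wire
# protocol to extract the table from each query before this check.
POLICY = {
    "support-agent": {"tickets"},
    "billing-bot": {"invoices", "customers"},
}

def allow_query(identity: str, table: str) -> bool:
    """Least-privilege check: unknown identities and tables are denied."""
    return table in POLICY.get(identity, set())

print(allow_query("billing-bot", "invoices"))    # True
print(allow_query("support-agent", "invoices"))  # False
```

Failing closed is the point: an agent whose prompt was hijacked still cannot reach data its identity was never granted.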

Autonomous IT
Hands-On IT – Building Blocks of IT: From Problems to Solutions pt. 2, E21

Sep 4, 2025 · 29:58


In the second half of this crossover between Hands-On IT and Automate IT, hosts Landon Miles and Jeremy Maldonado shift from defining IT problems to actually building, testing, and refining solutions. They dig into choosing the right tools without getting lost in endless options, the value of learning from APIs and documentation, and why "don't reinvent the wheel" is a mantra every IT pro should adopt.

Along the way, they share real-world stories about discovering hidden libraries, avoiding common pitfalls, and leaning on version control to save projects from chaos. From Python and Bash basics to Git, Postman, and even the "bus test" for documentation, this episode is packed with practical lessons to help you turn automations into lasting, maintainable solutions.

Whether you're just starting your automation journey or looking to optimize and scale what you've already built, you'll find insights, strategies, and inspiration to take your IT problem-solving further.

Awesome-Selfhosted GitHub link: https://github.com/awesome-selfhosted/awesome-selfhosted

EUVC
E568 | Ivan Burazin on Building Daytona, the Computer for Agents

Sep 4, 2025 · 62:20


Welcome back to another episode of the EUVC Podcast, where we gather Europe's venture family to share the stories, insights, and lessons that drive our ecosystem forward. Today's conversation takes us on a global journey from Croatia to San Francisco to uncover how one founder caught lightning in a bottle and is now racing to harness it.

Our guest: Ivan Burazin, founder of Daytona. With a career spanning Toronto, Croatia, Infobip, Shift Conference, and now Daytona, Ivan brings a rare, global perspective on how Europe can lead in DevTools and AI infrastructure. Alongside him, our dear friend Enis Hulli from E2VC joins to spotlight Daytona's story, the lessons from its dramatic pivot, and what it means for founders and investors navigating this new AI wave.

Ivan has spent two decades at the intersection of infrastructure and developer communities. From racking servers in the early 2000s to launching one of the first browser-based IDEs in 2009 to scaling the Shift Conference to thousands of attendees, his career has consistently circled around enabling developers.

Daytona's first act was a cloud IDE provider for enterprises — “one-click setup for secure developer environments.” With Fortune 500 customers onboard, revenue flowing, and a healthy pipeline, Daytona 1.0 showed promise. But something was missing.

Six months ago, Ivan and his team made a bold decision to pivot. Daytona 2.0 is no longer about provisioning dev environments for humans — it's about powering AI agents with the computers they need.

“Agents are not computers themselves. They need access to computers to run browsers, clone repos, analyze data. Daytona gives them that — an isolated sandbox with machine-native interfaces built for agents.” – Ivan

The differences between human and agent runtimes turned out to be massive:
Humans tolerate 30 seconds of spin-up; agents need milliseconds.
Humans solve problems sequentially; agents branch into parallel “multiverse” solutions.
Humans parse terminal output; agents require clean APIs.

By recognizing this, Daytona carved out a new category: the computer for agents.

The pivot coincided with a deliberate move to San Francisco. Ivan recalls how Figma embedded with designers at Airbnb, or how Twilio found adoption among early Valley startups. To own mindshare in a new category, location mattered.

“From San Francisco outwards, adoption flows naturally. From Europe inwards, it's like pushing uphill.” – Ivan

So Daytona went all-in: presence at AI meetups, team members flying in and out, and early product evangelism on the ground.

After the pivot, Daytona saw extraordinary pull from the market:
Customer conversations ended with “send me the API key.”
Infrastructure demand showed power-law dynamics: just a handful of fast-growing customers could drive scale.
Instead of polished decks, Ivan shared raw revenue dashboards with authenticity.

The momentum was immediate and tangible.

Ivan admits he hadn't explicitly asked permission to pivot. He hinted at it in updates, tested the idea with a hackathon, and only later informed his cap table. The response? Overwhelmingly positive.

“Almost half the angels replied: Go f***ing go. Let's go. I should've told them sooner.” – Ivan

Enis highlights this as a key distinction: experienced angels with broad portfolios encourage bold swings, while less diversified angels may fear the risk.

Catching lightning is one thing. Harnessing it is another. Ivan's current focus:
Hiring deliberately: keeping the team small and ownership-driven.
White-glove onboarding: every serious customer gets a Slack channel with the whole team.
Balancing speed and reliability: ship daily, but solve today's scale problems without over-engineering.

Enis introduces a new term: seed-strapping — raising a seed, skipping A and B, and scaling straight to unicorn status. Ivan is cautious. Infra is capital-intensive, and while Daytona could raise a Series A today, he's committed to doing it on his terms.

EM360 Podcast
How to Prepare Your Team for Edge Computing?

EM360 Podcast

Play Episode Listen Later Sep 4, 2025 23:38


In a time when the world is run by data and real-time actions, edge computing is quickly becoming a must-have in enterprise technology. In a recent episode of the Tech Transformed podcast, host Shubhangi Dua, a Podcast Producer and B2B Tech Journalist, discusses the complexities of this distributed future with guest Dmitry Panenkov, Founder and CEO of emma.

The conversation dives into how latency is the driving force behind edge adoption. Applications like autonomous vehicles and real-time analytics cannot afford to wait on a round trip to a centralised data centre. They need to compute where the data is generated.

Rather than viewing edge as a rival to the cloud, the discussion highlights it as a natural extension. Edge environments bring speed, resilience and data control, all necessary capabilities for modern applications.

Adopting Edge Computing

For organisations looking to adopt edge computing, this episode lays out a practical step-by-step approach. The skills necessary in multi-cloud environments – automation, infrastructure as code, and observability – translate well to edge deployments. These capabilities are essential for managing the unique challenges of edge devices, which may be disconnected, have lower power, or be located in hard-to-reach areas. Without this level of operational maturity, Panenkov warns of a "zombie apocalypse" of unmanaged devices.

Simplifying Complexity

Managing different APIs, SDKs, and vendor lock-ins across a distributed network can be a challenging task, and this is where platforms like emma become crucial.

Alluding to emma's mission, Panenkov explains, "We're building a unified platform that simplifies the way people interact with different cloud and computer environments, whether these are in a public setting or private data centres or even at the edge."

Overall, emma creates a unified API layer and user interface, which simplifies the complexity.
It helps businesses manage, automate, and scale their workloads from a single view and reduces the burden on IT teams. Reducing the need for a large team of highly skilled professionals also leads to substantial cost savings. emma's customers have found that their cloud bills went down significantly and that updates could be rolled out much faster using the platform.

Takeaways
- Edge computing is becoming a reality for more organisations.
- Latency-sensitive applications drive the need for edge computing.
- Real-time analytics and industry automation benefit from edge computing.
- Edge computing enhances resilience, cost efficiency, and data sovereignty.
- Integrating edge into cloud strategies requires automation and observability.
- Maturity in operational practices, like automation and observability, is essential for...
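The latency argument the episode makes can be made concrete with a toy placement check. The round-trip numbers below are invented for illustration, not figures from the conversation or from emma's platform.

```python
def choose_placement(latency_budget_ms: float, cloud_rtt_ms: float,
                     edge_rtt_ms: float = 5.0) -> str:
    """Pick where a workload should run given its end-to-end latency budget."""
    if cloud_rtt_ms <= latency_budget_ms:
        return "cloud"      # the round trip to a central region is fast enough
    if edge_rtt_ms <= latency_budget_ms:
        return "edge"       # compute where the data is generated instead
    return "on-device"      # even the edge hop is too slow

# A real-time control loop with a 10 ms budget cannot wait ~80 ms for the cloud.
print(choose_placement(latency_budget_ms=10, cloud_rtt_ms=80))   # → edge
print(choose_placement(latency_budget_ms=200, cloud_rtt_ms=80))  # → cloud
```

The same logic explains why edge is an extension of the cloud rather than a rival: workloads with relaxed budgets still land in the central region, and only the latency-critical ones move outward.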

The Cloud Pod
319: AWS Cost MCP: Your Billing Data Now Speaks Human

The Cloud Pod

Play Episode Listen Later Sep 3, 2025 96:14


Welcome to episode 319 of The Cloud Pod, where the forecast is always cloudy! Justin, Matt, and Ryan are in the studio to bring you all the latest in cloud and AI news. AWS Cost MCP makes exploring your finops data as simple as English text. We've got a sunnier view for junior devs, a Microsoft open source development, tokens, and it's even Kubernetes' birthday – let's get into it!

Titles we almost went with this week:
- From Linux Hater to Open Source Darling: A Microsoft Love Story
- 20,000 Lines of Code and a Dream: Microsoft's Open Source Glow-Up
- Ctrl+Alt+Delete Your Assumptions: Microsoft Goes Full Penguin
- Token and Esteem: Amazon Bedrock Gets a Counter
- CSI: Cloud Scene Investigation
- The Great SQL Migration: How AI Became the Universal Translator
- Token and Ye Shall Receive: Bedrock's New Counting Feature
- The Count of Monte Token: A Bedrock Tale – mk
- Ctrl+Z for Your Database: Now with Built-in Lag Time
- IP Freely: GKE Takes the Pain Out of Address Management
- AWS CEO: AI Can't Replace Junior Devs Because Someone Has to Fix the AI's Code
- Better Late Than Never: RDS PostgreSQL Gets Time Travel
- The SQL Whisperer: Teaching AI to Speak Database
- DigitalOcean Goes Full Chatbot: Your Infrastructure Now Speaks Human
- Musk vs Cook: The App Store Wars Episode
- AI Firestore Goes Mongo: A Database Love Story
- GKE Turns 10: Now With More Candles and Less Complexity
- Prime Day Infrastructure: Now With 87,000 AI Chips and a Robot Army
- AWS Scales to Quadrillion Requests: Your Black Friday Traffic Looks Cute
- AWS billing now speaks human, thanks to MCPs
- The Bastion Holds: Azure's New Gateway to Kubernetes Kingdoms
- The Surge Before the Merge: Azure's New Upgrade Strategy
- CNI Overlay: Because Your Pods Deserve Their Own ZIP Code

AI Is Going Great – or How ML Makes Money

00:46 Musk's xAI sues Apple, OpenAI alleging scheme that harmed X, Grok

xAI filed a lawsuit against Apple and OpenAI, alleging anticompetitive practices in AI chatbot distribution, claiming Apple deprioritizes competing AI apps like Grok in the App Store while favoring ChatGPT through direct integration into iOS devices. The lawsuit highlights tensions in AI platform distribution models, where cloud-based AI services depend on mobile app stores for user access, potentially creating gatekeeping concerns for competing generative AI providers. Apple's partnership with OpenAI to integrate ChatGPT into iPhone, iPad, and Mac products represents a shift toward native AI integration rather than app-based access, which could impact how cloud AI services reach end users. The dispute underscores growing competition in the generative AI market, where multiple players, including xAI's Grok, OpenAI's ChatGPT, DeepSeek, and Perplexity, are vying for market position through both cloud APIs and mobile distribution channels. For cloud developer

Partner Path
E60: Why Voice is the Next Platform Shift with Jordan Dearsley (Vapi)

Partner Path

Play Episode Listen Later Sep 3, 2025 35:10


This week's guest is Jordan Dearsley, CEO of Vapi. Vapi enables enterprises to deploy humanlike voice agents in minutes. Whether you are building a new voice product or managing millions of calls, Vapi's infrastructure and flexible APIs make it simple and reliable.

We explore why Waterloo continues to produce world-class engineers and founders, Jordan's journey from building calendar apps and AI therapy tools to leading Vapi, and why enterprises are turning to voice AI to save engineering time and resources. We also cover the toughest technical challenges, including latency, tool calls, determinism, and multi-state environments, and why generative voice systems require constant tuning and forward-deployed models rather than one-size-fits-all solutions.

Episode Chapters:
2:10 - The MIT of Canada
4:03 - Pivoting 10+ times
6:00 - Why voice AI
11:07 - A faster path to production
12:05 - Voice challenges from enterprises
15:05 - Why stay horizontal
17:20 - Actively moving away from being an AI BPO
19:45 - Forward deployed engineers
21:40 - From scoping to production
22:40 - Recruiting as an early-stage startup
26:05 - ElevenLabs leading the model pack
29:00 - Distribution & enterprise integrations
32:35 - Quick fire round

As always, feel free to contact us at partnerpathpodcast@gmail.com. We would love to hear ideas for content, guests, and overall feedback.

This episode is brought to you by Grata, the world's leading deal sourcing platform. Our AI-powered search, investment-grade data, and intuitive workflows give you the edge needed to find and win deals in your industry. Visit grata.com to schedule a demo today.

Fresh out of Y Combinator's Summer batch, Overlap is an AI-driven app that uses LLMs to curate the best moments from podcast episodes. Imagine having a smart assistant who reads through every podcast transcript, finds the best parts or the parts most relevant to your search, and strings them together to form a new curated stream of content - that is what Overlap does. Podcasts are an exponentially growing source of unique information. Make use of it! Check out Overlap 2.0 on the App Store today.
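Latency, which Jordan names among the toughest technical challenges, is really a budget spread across the whole voice pipeline: speech-to-text, the LLM, then text-to-speech. A back-of-the-envelope sketch with invented stage timings (not Vapi's numbers):

```python
def turn_latency_ms(stages: dict[str, float]) -> float:
    """Total latency for one conversational turn across the voice pipeline."""
    return sum(stages.values())

# Illustrative stage timings; real numbers vary widely by model, region, and network.
pipeline = {"asr": 150.0, "llm_first_token": 300.0, "tts_first_audio": 200.0}

total = turn_latency_ms(pipeline)
print(total)           # → 650.0
print(total <= 800.0)  # → True: under a rough conversational budget
```

Because the budget is shared, shaving time off any single stage (or streaming stages so they overlap) is what keeps a voice agent feeling conversational rather than laggy.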

The Data Stack Show
260: Return of the Dodds: APIs, Automation, and the Art of Data Team Survival

The Data Stack Show

Play Episode Listen Later Sep 3, 2025 50:33


This week on The Data Stack Show, the crew welcomes Eric Dodds back to the show as they dive into the realities of integrating AI and large language models into data team workflows. Eric, Matt, and John discuss the promise and pitfalls of AI-driven automation, the persistent challenges of working with APIs, and the evolution from big data tools to AI-powered solutions. The conversation also highlights the risks of over-reliance on single experts, the critical importance of documentation and context, and the gap between AI marketing hype and practical implementation. Key takeaways for listeners include the necessity of strong data fundamentals, the hidden costs and risks of AI adoption, the importance of balancing efficiency gains with long-term team resilience, and so much more.

Highlights from this week's conversation include:
- Eric is Back from Europe (0:37)
- AI and Data: Jurisdiction and Comfort Level (4:00)
- APIs, Tool Calls, and Practical AI Limitations (5:08)
- Scaling, Big Data, and AI's Current Constraints (9:16)
- Stakeholder-Facing AI and Data Team Risks (13:20)
- Self-Service Analytics and AI's Real Impact (16:04)
- AI Hype vs. Reality and Uneven Impact (20:27)
- Cost, Context, and AI's Practical Barriers (25:25)
- AI for Admin Tasks and Business Logic Complexity (29:13)
- Tribal Knowledge, Documentation, and Context Engineering (32:07)
- AI as a Productivity Accelerator and the "Gary Problem" (35:10)
- Healthy Conflict, Team Dynamics, and AI's Limits (39:15)
- Back to Fundamentals: Good Practices Enable AI (41:47)
- Lightning Round: Favorite AI Tools and Workflow Integration (45:56)
- AI in Everyday Life and Closing Thoughts (48:14)

The Data Stack Show is a weekly podcast powered by RudderStack, customer data infrastructure that enables you to deliver real-time customer event data everywhere it's needed to power smarter decisions and better customer experiences.
Each week, we'll talk to data engineers, analysts, and data scientists about their experience around building and maintaining data infrastructure, delivering data and data products, and driving better outcomes across their businesses with data.RudderStack helps businesses make the most out of their customer data while ensuring data privacy and security. To learn more about RudderStack visit rudderstack.com.
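The "APIs and tool calls" limitations the hosts dig into come down to a simple loop: the model emits a structured call, and glue code has to route it to a real function. A minimal sketch of that dispatch step — hypothetical tool names, no specific LLM vendor assumed:

```python
import json

def get_weather(city: str) -> str:
    """Stubbed tool; a real one would call an external API."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def dispatch(tool_call: str) -> str:
    """Route a model-emitted tool call (JSON) to a registered Python function."""
    call = json.loads(tool_call)
    fn = TOOLS.get(call["name"])
    if fn is None:
        # The failure mode data teams hit in practice: the model asks for
        # a tool that was never wired up.
        return f"error: unknown tool {call['name']!r}"
    return fn(**call["arguments"])

print(dispatch('{"name": "get_weather", "arguments": {"city": "Berlin"}}'))
# → Sunny in Berlin
```

Everything hard about tool use lives around this loop — validating arguments, handling flaky APIs, and giving the model enough context to pick the right tool — which is why the episode keeps returning to fundamentals like documentation.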

Homeopathy247 Podcast
Episode 165: Natrum Muriaticum – A Remedy of Depth and Sensitivity with Jagoda Salewska

Homeopathy247 Podcast

Play Episode Listen Later Sep 3, 2025 37:52


In this episode of Homeopathy 247, Mary Greensmith welcomes back Jagoda Salewska, who shares her passion for the remedy Natrum Muriaticum (often called Nat Mur). For Jagoda, this remedy feels deeply personal—something she has resonated with since childhood—and she explains why it is such an essential remedy both emotionally and physically.

What is Natrum Muriaticum?

Natrum Muriaticum is made from common salt. Since salt is present in every cell of our body and essential for life, this remedy has a wide-reaching influence. It's used in both cell salt (tissue salt) form, which acts more on the physical body, and in homoeopathic remedy form, which works on emotional as well as physical levels. Jagoda describes it as a remedy that "balances the water of life"—helping with issues of dryness and excess fluid, as well as with emotional stuckness.

The Emotional Picture – Sadness, Grief, and Sensitivity

At its core, Natrum Muriaticum is linked with deep, unresolved grief or sadness. People who need it often:
- Hold their emotions inside rather than express them
- Prefer to cry alone, or sometimes feel unable to cry at all
- Build a protective "wall" around themselves, appearing strong on the outside but deeply tender within
- Show perfectionist tendencies, carefully controlling how they present to the world

Jagoda points out that while grief is the most common theme, the remedy helps whenever emotions become "stuck"—unable to flow naturally. By gently releasing those emotions, Natrum Muriaticum allows healing to move forward.

Nat Mur in Children

Children needing Natrum Muriaticum may not show obvious grief, but they often reveal themselves in other ways. Traits can include:
- Perfectionism in appearance or behaviour
- Being thin or underdeveloped compared to peers
- Learning or speech delays
- Sensitivity to light, sound, or strong emotions
- Constipation or dryness in the body

Physically, they may be prone to colds with lots of sneezing and clear discharges, cracked lips, or mouth ulcers. These symptoms all reflect the remedy's connection with water balance and dryness.

Physical Complaints and Water Balance

Because salt governs fluid balance, Natrum Muriaticum is often indicated where there is too much or too little fluid in the body. Common examples include:
- Headaches or migraines, especially from the sun or after emotional upset
- Colds and fevers, with streaming eyes and nose
- Skin issues such as cold sores, cracked lips, eczema, or acne that appear after grief
- Digestive troubles, sometimes linked with unprocessed emotions
- Insomnia, especially waking in the early hours between 3–5am

In tissue salt form (6X potency), Nat Mur can also be used as a natural electrolyte to support hydration, especially after diarrhoea, vomiting, or heat exposure.

Opening Layers of Healing

Jagoda explains that Natrum Muriaticum is often a layer remedy. It helps open the door for other remedies by releasing blocked grief and sadness. Sometimes this uncovering reveals deeper emotions such as anger or resentment, which then call for a different prescription. In homoeopathy, the healing journey often involves peeling back layers. Nat Mur is one of the remedies that can begin this process, gently breaking down walls so that emotions and physical symptoms can flow and heal.

Remedy Relationships

Ignatia is often used for acute grief, while Nat Mur is suited for the longer, deeper stages of sorrow. Apis acts as an acute counterpart, especially when swelling or allergic reactions appear suddenly, while Nat Mur deals with the slower, underlying imbalance. The remedy also connects strongly with the cycles of nature, including the moon and the sea, reflecting its deep link to water and emotions.

Natrum Muriaticum is much more than "just salt." It is a profound remedy that touches both the physical and emotional levels of health. From headaches and colds to grief and perfectionism, it helps restore balance when life feels stuck.
As Jagoda beautifully puts it, Nat Mur helps us release the weight of sadness and move toward wholeness.

Important links mentioned in this episode:
Visit Jagoda's website: https://jagodahomeopathy.com/
Know more about Jagoda Salewska: https://homeopathy247.com/professional-homeopaths-team/jagoda-salewska/

Subscribe to our YouTube channel and be updated with our latest episodes. You can also subscribe to our podcast channels available on your favourite podcast listening app below:
Apple Podcast: https://podcasts.apple.com/us/podcast/homeopathy247-podcast/id1628767810
Spotify: https://open.spotify.com/show/39rjXAReQ33hGceW1E50dk

Follow us on our social media accounts:
Facebook: https://www.facebook.com/homeopathy247
Instagram: https://www.instagram.com/homeopathy247

You can also visit our website at https://homeopathy247.com/

Paul's Security Weekly
AI, APIs, and the Next Cyber Battleground: Black Hat 2025 - Chris Boehm, Idan Plotnik, Josh Lemos, Michael Callahan - ASW #346

Paul's Security Weekly

Play Episode Listen Later Sep 2, 2025 68:11


In this must-see BlackHat 2025 interview, Doug White sits down with Michael Callahan, CMO at Salt Security, for a high-stakes conversation about Agentic AI, Model Context Protocol (MCP) servers, and the massive API security risks reshaping the cyber landscape. Broadcast live from the CyberRisk TV studio at Mandalay Bay, Las Vegas, the discussion pulls back the curtain on how autonomous AI agents and centralized MCP hubs could supercharge productivity—while also opening the door to unprecedented supply chain vulnerabilities. From “shadow MCP servers” to the concept of an “API fabric,” Michael explains why these threats are evolving faster than traditional security measures can keep up, and why CISOs need to act before it's too late. Viewers will get rare insight into the parallels between MCP exploitation and DNS poisoning, the hidden dangers of API sprawl, and why this new era of AI-driven communication could become a hacker's dream. Blog: https://salt.security/blog/when-ai-agents-go-rogue-what-youre-missing-in-your-mcp-security Survey Report: https://content.salt.security/AI-Agentic-Survey-2025_LP-AI-Agentic-Survey-2025.html This segment is sponsored by Salt Security. Visit https://securityweekly.com/saltbh for a free API Attack Surface Assessment! At Black Hat 2025, live from the Cyber Risk TV studio in Las Vegas, Jackie McGuire sits down with Apiiro Co-Founder & CEO Idan Plotnik to unpack the real-world impact of AI code assistants on application security, developer velocity, and cloud costs. With experience as a former Director of Engineering at Microsoft, Idan dives into what drove him to launch Apiiro — and why 75% of engineers will be using AI assistants by 2028. From 10x more vulnerabilities to skyrocketing API bloat and security blind spots, Idan breaks down research from Fortune 500 companies on how AI is accelerating both innovation and risk. 
What you'll learn in this interview:
- Why AI coding tools are increasing code complexity and risk
- The massive cost of unnecessary APIs in cloud environments
- How to automate secure code without slowing down delivery
- Why most CISOs fail to connect security to revenue (and how to fix it)
- How Apiiro's Autofix AI Agent helps organizations auto-fix and auto-govern code risks at scale

This isn't just another AI hype talk. It's a deep dive into the future of secure software delivery — with practical steps for CISOs, CTOs, and security leaders to become true business enablers. Watch till the end to hear how Apiiro is helping Fortune 500s bridge the gap between code, risk, and revenue.

Apiiro AutoFix Agent. Built for Enterprise Security: https://youtu.be/f-_zrnqzYsc
Deep Dive Demo: https://youtu.be/WnFmMiXiUuM

This segment is sponsored by Apiiro. Be one of the first to see their new AppSec Agent in action at https://securityweekly.com/apiirobh.

Is Your AI Usage a Ticking Time Bomb? In this exclusive Black Hat 2025 interview, Matt Alderman sits down with GitLab CISO Josh Lemos to unpack one of the most pressing questions in tech today: Are executives blindly racing into AI adoption without understanding the risks? Filmed live at the CyberRisk TV Studio in Las Vegas, this eye-opening conversation dives deep into:
- How AI is being rapidly adopted across enterprises — with or without security buy-in
- Why AI governance is no longer optional — and how to actually implement it
- The truth about agentic AI, automation, and building trust in non-human identities
- The role of frameworks like ISO 42001 in building AI transparency and assurance
- Real-world examples of how teams are using LLMs in development, documentation & compliance

Whether you're a CISO, developer, or business exec — this discussion will reshape how you think about AI governance, security, and adoption strategy in your org. Don't wait until it's too late to understand the risks.
The Economics of Software Innovation: $750B+ Opportunity at a Crossroads Report: http://about.gitlab.com/software-innovation-report/ For more information about GitLab and their report, please visit: https://securityweekly.com/gitlabbh Live from Black Hat 2025 in Las Vegas, Jackie McGuire sits down with Chris Boehm, Field CTO at Zero Networks, for a high-impact conversation on microsegmentation, shadow IT, and why AI still struggles to stop lateral movement. With 15+ years of cybersecurity experience—from Microsoft to SentinelOne—Chris breaks down complex concepts like you're a precocious 8th grader (his words!) and shares real talk on why AI alone won't save your infrastructure. Learn how Zero Networks is finally making microsegmentation frictionless, how summarization is the current AI win, and what red flags to look for when evaluating AI-infused security tools. If you're a CISO, dev, or just trying to stay ahead of cloud threats—this one's for you. This segment is sponsored by Zero Networks. Visit https://securityweekly.com/zerobh to learn more about them! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-346

Application Security PodCast
Akansha Shukla - Modern AppSec: Securing APIs with Threat Modeling and DevSecOps

Application Security PodCast

Play Episode Listen Later Sep 2, 2025 35:35


Our guest today is Akansha Shukla, an information security professional with over 10 years of experience in application security, DevSecOps, and API security. We're discussing why API security remains one of the least mature areas of AppSec today and exploring the challenges developers face when securing APIs. Akansha shares her insights on incorporating APIs into threat modeling exercises, the ongoing struggles with API discovery and inventory management, and the authorization challenges highlighted in the OWASP API Security Top 10. The conversation also touches on whether "shift left" is truly dead and why we still haven't solved basic security problems like input validation despite having the frameworks to address them.FOLLOW OUR SOCIAL MEDIA: ➜Twitter: @AppSecPodcast➜LinkedIn: The Application Security Podcast➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast Thanks for Listening! ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TalkLP
Two Types of Power

TalkLP

Play Episode Listen Later Sep 2, 2025


TalkLP podcast host Amber Bradley chats with Jason Cornewell, Director of Global Supply Chain Loss Prevention and Operations at Finish Line/JD Sports North America, to talk creative financing for critical technology tools, health lifecycles, using tech to speak the business language, "high heat product," smoke systems tying into a VMS, and more! Take a listen to this industry veteran's creative solutions for in-store theft and why it's so important to partner with solution providers with open APIs (like March Networks)... oh, and a bonus: you HAVE to hear his "best career" advice ever - it's tweetable... (or X-able)? Jason also talks about the two types of power in the world and why you should care about what type YOU have as a professional. Learn more and schedule a demo with March Networks here.

Application Security Weekly (Audio)
AI, APIs, and the Next Cyber Battleground: Black Hat 2025 - Chris Boehm, Idan Plotnik, Josh Lemos, Michael Callahan - ASW #346

Application Security Weekly (Audio)

Play Episode Listen Later Sep 2, 2025 68:11


In this must-see BlackHat 2025 interview, Doug White sits down with Michael Callahan, CMO at Salt Security, for a high-stakes conversation about Agentic AI, Model Context Protocol (MCP) servers, and the massive API security risks reshaping the cyber landscape. Broadcast live from the CyberRisk TV studio at Mandalay Bay, Las Vegas, the discussion pulls back the curtain on how autonomous AI agents and centralized MCP hubs could supercharge productivity—while also opening the door to unprecedented supply chain vulnerabilities. From “shadow MCP servers” to the concept of an “API fabric,” Michael explains why these threats are evolving faster than traditional security measures can keep up, and why CISOs need to act before it's too late. Viewers will get rare insight into the parallels between MCP exploitation and DNS poisoning, the hidden dangers of API sprawl, and why this new era of AI-driven communication could become a hacker's dream. Blog: https://salt.security/blog/when-ai-agents-go-rogue-what-youre-missing-in-your-mcp-security Survey Report: https://content.salt.security/AI-Agentic-Survey-2025_LP-AI-Agentic-Survey-2025.html This segment is sponsored by Salt Security. Visit https://securityweekly.com/saltbh for a free API Attack Surface Assessment! At Black Hat 2025, live from the Cyber Risk TV studio in Las Vegas, Jackie McGuire sits down with Apiiro Co-Founder & CEO Idan Plotnik to unpack the real-world impact of AI code assistants on application security, developer velocity, and cloud costs. With experience as a former Director of Engineering at Microsoft, Idan dives into what drove him to launch Apiiro — and why 75% of engineers will be using AI assistants by 2028. From 10x more vulnerabilities to skyrocketing API bloat and security blind spots, Idan breaks down research from Fortune 500 companies on how AI is accelerating both innovation and risk. 
What you'll learn in this interview:
- Why AI coding tools are increasing code complexity and risk
- The massive cost of unnecessary APIs in cloud environments
- How to automate secure code without slowing down delivery
- Why most CISOs fail to connect security to revenue (and how to fix it)
- How Apiiro's Autofix AI Agent helps organizations auto-fix and auto-govern code risks at scale

This isn't just another AI hype talk. It's a deep dive into the future of secure software delivery — with practical steps for CISOs, CTOs, and security leaders to become true business enablers. Watch till the end to hear how Apiiro is helping Fortune 500s bridge the gap between code, risk, and revenue.

Apiiro AutoFix Agent. Built for Enterprise Security: https://youtu.be/f-_zrnqzYsc
Deep Dive Demo: https://youtu.be/WnFmMiXiUuM

This segment is sponsored by Apiiro. Be one of the first to see their new AppSec Agent in action at https://securityweekly.com/apiirobh.

Is Your AI Usage a Ticking Time Bomb? In this exclusive Black Hat 2025 interview, Matt Alderman sits down with GitLab CISO Josh Lemos to unpack one of the most pressing questions in tech today: Are executives blindly racing into AI adoption without understanding the risks? Filmed live at the CyberRisk TV Studio in Las Vegas, this eye-opening conversation dives deep into:
- How AI is being rapidly adopted across enterprises — with or without security buy-in
- Why AI governance is no longer optional — and how to actually implement it
- The truth about agentic AI, automation, and building trust in non-human identities
- The role of frameworks like ISO 42001 in building AI transparency and assurance
- Real-world examples of how teams are using LLMs in development, documentation & compliance

Whether you're a CISO, developer, or business exec — this discussion will reshape how you think about AI governance, security, and adoption strategy in your org. Don't wait until it's too late to understand the risks.
The Economics of Software Innovation: $750B+ Opportunity at a Crossroads Report: http://about.gitlab.com/software-innovation-report/ For more information about GitLab and their report, please visit: https://securityweekly.com/gitlabbh Live from Black Hat 2025 in Las Vegas, Jackie McGuire sits down with Chris Boehm, Field CTO at Zero Networks, for a high-impact conversation on microsegmentation, shadow IT, and why AI still struggles to stop lateral movement. With 15+ years of cybersecurity experience—from Microsoft to SentinelOne—Chris breaks down complex concepts like you're a precocious 8th grader (his words!) and shares real talk on why AI alone won't save your infrastructure. Learn how Zero Networks is finally making microsegmentation frictionless, how summarization is the current AI win, and what red flags to look for when evaluating AI-infused security tools. If you're a CISO, dev, or just trying to stay ahead of cloud threats—this one's for you. This segment is sponsored by Zero Networks. Visit https://securityweekly.com/zerobh to learn more about them! Visit https://www.securityweekly.com/asw for all the latest episodes! Show Notes: https://securityweekly.com/asw-346

MLOps.community
The Era of AI Agents in Marketing // Joel Horwitz // #337

MLOps.community

Play Episode Listen Later Sep 1, 2025 48:56


The Era of AI Agents in Marketing // MLOps Podcast #337 with Joel Horwitz, Growth Engineer at Neoteric3D.

Join the Community: https://go.mlops.community/YTJoinIn
Get the newsletter: https://go.mlops.community/YTNewsletter

// Abstract

We're entering a new era in marketing—one powered by AI agents, not just analysts. The rise of tools like Clay, Karrot.ai, 6sense, and Mutiny is reshaping how go-to-market (GTM) teams operate, making room for a new kind of operator: the GTM engineer. This hybrid role blends technical fluency with growth strategy, leveraging APIs, automation, and AI to orchestrate hyper-personalized, scalable campaigns. No longer just marketers, today's GTM teams are builders—connecting data, deploying agents, and fine-tuning workflows in real time to meet buyers where they are. This shift isn't just evolution—it's a replatforming of the entire GTM function.

// Bio

Joel S. Horwitz has been riding the data wave since before it was cool—literally. He spoke at Spark Summit back in 2014 and penned a prescient piece for MIT Tech Review on data science and machine learning before they became boardroom buzzwords. A former big tech executive turned entrepreneur, Joel now runs Neoteric3D (N3D for short), a digital design and data growth agency that helps brands scale with smarts and style. When he's not architecting next-gen growth strategies, you'll find him logging long miles on the trail or coaching his sons' soccer and baseball teams like a champ.

// Related Links

Website: https://www.neoteric3d.com

~~~~~~~~ ✌️ Connect With Us ✌️ ~~~~~~~

Catch all episodes, blogs, newsletters, and more: https://go.mlops.community/TYExplore
Join our Slack community: https://go.mlops.community/slack
Follow us on X/Twitter @mlopscommunity (https://x.com/mlopscommunity) or LinkedIn (https://go.mlops.community/linkedin)
Sign up for the next meetup: https://go.mlops.community/register
MLOps Swag/Merch: https://shop.mlops.community/
Connect with Demetrios on LinkedIn: /dpbrinkm
Connect with Joel on LinkedIn: /joelshorwitz

Timestamps:
[00:00] Joel's preferred coffee
[00:53] Agentic workflows in marketing
[04:26] Agentic AI vs big data
[08:24] Creative outreach automation
[13:08] LLMs in marketing optimization
[17:36] Traffic relevance
[23:36] End-to-end AI workflow
[28:10] AI in task automation
[32:08] AI systems architecting
[38:00] AI vs Thought Leadership
[43:10] AI as sparring partner
[45:22] AI shifts human roles
[48:23] Wrap up

Dave and Dharm DeMystify
EP 140: DEMYSTIFYING MCP SERVERS WITH DAVID JARVIS CEO OF GRIFFIN BANK

Dave and Dharm DeMystify

Play Episode Listen Later Sep 1, 2025 28:37


In this episode, the hosts welcome back David Jarvis, co-founder and CEO of Griffin, for his second appearance on the show. The discussion centres on the emerging concept of agentic banking and Griffin's development of the MCP server. David explains how the MCP server enables AI models to interact with banking APIs using a configuration file, significantly enhancing automation and functionality. The conversation explores how this technology could transform areas such as corporate treasury, wealth management, and broader financial services. David outlines how AI agents might one day handle complex tasks traditionally managed by intermediaries like mortgage brokers and treasurers, streamlining operations and reducing costs. While still in the early stages, this innovation is already gaining interest from banks and fintech firms. The episode also looks ahead to the future of AI in banking, examining both the potential and the practical challenges of integrating agentic systems into existing infrastructure. Listeners can find out more via Griffin's website (https://griffin.com/) and blog.
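The core idea David describes — an AI model discovering what it can do from a configuration file, rather than from hand-written integration code — can be sketched in a few lines. The manifest below is invented for illustration in the spirit of an MCP server; it is not Griffin's actual schema.

```python
import json

# Hypothetical manifest: a config file declaring which banking API
# endpoints are exposed to an AI client as callable tools.
CONFIG = json.loads("""
{
  "server": "bank-mcp",
  "tools": [
    {"name": "list_accounts",  "method": "GET",  "path": "/v1/accounts"},
    {"name": "create_payment", "method": "POST", "path": "/v1/payments"}
  ]
}
""")

def describe_tools(config: dict) -> list[str]:
    """What an AI client sees when it asks the server which tools exist."""
    return [f'{t["name"]}: {t["method"]} {t["path"]}' for t in config["tools"]]

print(describe_tools(CONFIG))
# → ['list_accounts: GET /v1/accounts', 'create_payment: POST /v1/payments']
```

Because the capabilities live in configuration rather than bespoke code, adding a new banking operation for agents to use becomes a manifest change instead of an integration project — which is what makes the treasury and wealth-management use cases in the episode plausible.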

Angular Master Podcast
AMP 77: Angular 20: Signals, Zone-less, Hydration & Beyond – with Manfred Steyer, Michael Egger-Zikes, and Rainer Hahnekamp

Angular Master Podcast

Play Episode Listen Later Sep 1, 2025 73:44


Angular 20 is a major milestone that redefines how we think about building modern web applications. In this episode of the Angular Master Podcast, I invited three outstanding experts: Manfred Steyer, Michael Egger-Zikes, and Rainer Hahnekamp. Together, we dive deep into:
Signals – stabilization of APIs and the future of NgRx Signal Store
Resource API – why it matters and what has changed
Zone-less – is it time to go without Zone.js?
Incremental Hydration – how it works and whether SSR should always be used
Dynamic Components – new possibilities in Angular 20
Deprecations – *ngIf, *ngFor, *ngSwitch and Style Guide updates
Testing – is Vitest the new standard?
And we look ahead: Signal Forms API, Selectorless Components, new authoring formats, and what to expect in Angular 21. This is a knowledge-packed conversation full of practice, insights, and forward-looking ideas — a must-listen for anyone who wants to understand where Angular is heading.
#Angular #Signals #ZoneLess #WebDevelopment #AIConference #Angular20 #Podcast

Talk Python To Me - Python conversations for passionate developers
#518: Celebrating Django's 20th Birthday With Its Creators

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Aug 29, 2025 68:13 Transcription Available


Twenty years after a scrappy newsroom team hacked together a framework to ship stories fast, Django remains the Python web framework that ships real apps, responsibly. In this anniversary roundtable with its creators and long-time stewards: Simon Willison, Adrian Holovaty, Will Vincent, Jeff Triplett, and Thibaud Colas, we trace the path from the Lawrence Journal-World to 1.0, DjangoCon, and the DSF; unpack how a BSD license and a culture of docs, tests, and mentorship grew a global community; and revisit lessons from deployments like Instagram. We talk modern Django too: ASGI and async, HTMX-friendly patterns, building APIs with DRF and Django Ninja, and how Django pairs with React and serverless without losing its batteries-included soul. You'll hear about Django Girls, Djangonauts, and the Django Fellowship that keep momentum going, plus where Django fits in today's AI stacks. Finally, we look ahead at the next decade of speed, security, and sustainability.
Episode sponsors: Talk Python Courses, Python in Production
Links from the show
Guests
Simon Willison: simonwillison.net
Adrian Holovaty: holovaty.com
Will Vincent: wsvincent.com
Jeff Triplett: jefftriplett.com
Thibaud Colas: thib.me
Show Links
Django's 20th Birthday Reflections (Simon Willison): simonwillison.net
Happy 20th Birthday, Django!
(Django Weblog): djangoproject.com
Django 2024 Annual Impact Report: djangoproject.com
Welcome Our New Fellow: Jacob Tyler Walls: djangoproject.com
Soundslice Music Learning Platform: soundslice.com
Djangonaut Space Mentorship for Django Contributors: djangonaut.space
Wagtail CMS for Django: wagtail.org
Django REST Framework: django-rest-framework.org
Django Ninja API Framework for Django: django-ninja.dev
Lawrence Journal-World: ljworld.com
Watch this episode on YouTube: youtube.com
Episode #518 deep-dive: talkpython.fm/518
Episode transcripts: talkpython.fm
Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong
--- Stay in touch with us ---
Subscribe to Talk Python on YouTube: youtube.com
Talk Python on Bluesky: @talkpython.fm at bsky.app
Talk Python on Mastodon: talkpython
Michael on Bluesky: @mkennedy.codes at bsky.app
Michael on Mastodon: mkennedy

The Fintech Factor
Fintech Takes x SOLO Presents Source of Truth. Ep 1: Is Truth Enough to Unlock Trust?

The Fintech Factor

Play Episode Listen Later Aug 28, 2025 52:44


Hello, and welcome to Source of Truth, a new podcast miniseries from Fintech Takes, sponsored by our friends at SOLO. This series is about information asymmetry (which is *the* enemy in financial services — especially in lending). Information asymmetry (the gap between what the customer knows and what the financial service provider knows) explains most inefficiencies. It's the reason why qualified applicants get declined for loans or approved for loans at rates much higher than they should have to pay. And it's the reason why borrowers and human underwriters spend a mind-numbing number of hours locating, aggregating, and verifying information. We've added APIs, alternative data, and automation. Yet, the core process of collecting and trusting information still looks much like it did decades ago. Why is that? And what can be done to reshape that process in a more fundamental way? These are the questions that Source of Truth will explore. In Episode 1, I'm joined by Georgina Merhom, Founder & CEO of SOLO. Georgina, a data scientist by training, unpacks how lending data is collected, verified, and (too rarely) reused. Highlights include:
What the old-school branch lending process got right that digital still misses
Why banks still burn through $30B each year on manual collection despite APIs
The difference between standardized inputs (useful) and standardized outputs (misleading)
This opening episode tees up the larger theme of the series: building systems that don't just capture truth but create trust. Because when information can be reused and verified, lenders and borrowers stop starting from zero. Subscribe now and catch the rest of Source of Truth. This miniseries is brought to you by SOLO. SOLO resolves and connects customer data across silos — so teams stop rekeying the same customer info for the hundredth time and finally move forward. Break the cycle at SOLO.one - That's SOLO dot o-n-e.
Sign up for Alex's Fintech Takes newsletter for the latest insightful analysis on fintech trends, along with a heaping pile of pop culture references and copious footnotes. Every Monday and Thursday: https://workweek.com/brand/fintech-takes/
And for more exclusive insider content, don't forget to check out my YouTube page.
Follow Alex:
YouTube: https://www.youtube.com/channel/UCJgfH47QEwbQmkQlz1V9rQA/videos
LinkedIn: https://www.linkedin.com/in/alexhjohnson
Twitter: https://www.twitter.com/AlexH_Johnson
Follow Georgina:
LinkedIn: https://www.linkedin.com/in/georginamerhom/
Learn more about SOLO here.

BlockHash: Exploring the Blockchain
Ep. 590 Jeff Handler | Importance of Yield-based Stablecoins with OpenTrade

BlockHash: Exploring the Blockchain

Play Episode Listen Later Aug 27, 2025 29:01


For episode 590 of the BlockHash Podcast, host Brandon Zemp is joined by Jeff Handler, CCO of OpenTrade, an institutional-grade platform delivering real-world asset-backed yield on USDC, USDT, and EURC.
⏳ Timestamps:
(0:00) Introduction
(1:08) Who is Jeff Handler?
(4:12) Importance of Yield-based Stablecoins
(7:10) Typical clients
(11:03) Stablecoin Yield use-cases in Colombia
(15:22) Impact of the Genius Act
(17:47) Future of RWAs in Finance
(21:54) Onboarding for Clients
(24:18) APIs & GraphQL
(24:37) OpenTrade Roadmap
(26:28) Events & Conferences
(27:12) OpenTrade website & socials

The Product Experience
Why we need to design products for machines - Katja Forbes (Executive Director, Standard Chartered Bank)

The Product Experience

Play Episode Listen Later Aug 27, 2025 46:15


In this episode of The Product Experience, Randy Silver and Lily Smith sit down with Katja Forbes, Executive Director at Standard Chartered Bank, design leader, and lecturer, to explore the fast-approaching world of machine customers. Katja shares why businesses must prepare for a future where AI agents, autonomous vehicles, and procurement bots act as customers, and what this means for product managers, designers, and organisations.
Key takeaways
Machine customers are here already. From booking services for Tesla cars to procurement bots closing contracts, AI-driven commerce is no longer hypothetical.
APIs are necessary but insufficient. Businesses need to think beyond plumbing and address trust, compliance, and customer experience for non-human agents.
Signal clarity matters. Organisations must make their value propositions machine-readable to remain competitive.
Trust will be quantified. Compliance signals, ESG proof, uptime guarantees, and reliability ratings will replace human gut instinct.
New roles will emerge. Trust analysts and human–machine hybrid coordinators will be critical in shaping future interactions.
Ethics cannot be ignored. Without careful design, agentic commerce could amplify consumerism and poor societal outcomes.
Practical first step. Even small businesses can prepare by structuring their product and service data into machine-readable formats.
Product managers must adapt. The skill to manage ambiguity, think systemically, and anticipate unintended consequences will be central to success.
Featured Links: Follow Katja on LinkedIn | Katja's website | Sign up for pre-sale access to Katja's forthcoming book 'The CX Evolutionist'
Our Hosts
Lily Smith enjoys working as a consultant product manager with early-stage and growing startups and as a mentor to other product managers. She's currently Chief Product Officer at BBC Maestro, and has spent 13 years in the tech industry working with startups in the SaaS and mobile space.
She's worked on a diverse range of products – leading the product teams through discovery, prototyping, testing and delivery. Lily also founded ProductTank Bristol and runs ProductCamp in Bristol and Bath. Randy Silver is a Leadership & Product Coach and Consultant. He gets teams unstuck, helping you to supercharge your results. Randy's held interim CPO and Leadership roles at scale-ups and SMEs, advised start-ups, and been Head of Product at HSBC and Sainsbury's. He participated in Silicon Valley Product Group's Coaching the Coaches forum, and speaks frequently at conferences and events. You can join one of the communities he runs for CPOs (CPO Circles), Product Managers (Product In the {A}ether) and Product Coaches. He's the author of What Do We Do Now? A Product Manager's Guide to Strategy in the Time of COVID-19. A recovering music journalist and editor, Randy also launched Amazon's music stores in the US & UK.
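Katja's suggested first step, structuring product and service data into a machine-readable format, can be sketched in a few lines of Python. The schema.org-flavored field names and trust signals below are illustrative assumptions, not a standard that machine customers are known to require:

```python
import json

# A product description restructured so that a purchasing agent can
# parse price, availability, and trust signals without scraping prose.
product = {
    "@type": "Product",
    "name": "Business Current Account",
    "offers": {"price": "49.00", "priceCurrency": "GBP", "availability": "InStock"},
    "trustSignals": {"uptimeSLA": "99.9%", "compliance": ["ISO27001"]},
}

# Serialize deterministically so machine customers get a stable document.
machine_readable = json.dumps(product, sort_keys=True)
print(machine_readable)
```

Even this small step makes a value proposition legible to a bot that compares vendors on price, availability, and compliance signals.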

Vision ProFiles
We're Gurman-iacs

Vision ProFiles

Play Episode Listen Later Aug 26, 2025 49:43


Eric, Dave, and Marty speak about the new visionOS 26 developer beta, talk about a potential YouTube app, and review Gurman's AVP/Meta comparison.
NEWS this week
Beta 8 came out today: https://developer.apple.com/documentation/visionos-release-notes/visionos-26-release-notes
What's new from beta 7 → beta 8
Release/build only: Beta 8 arrived Aug 25, 2025 as 23M5332a (beta 7 was Aug 18 as 23M5328a). Apple doesn't list any additional new features, APIs, or headline-level changes for beta 8 beyond the build bump.
Tooling: The latest posted tools are still Xcode 26 beta 6 (17A5305f) from Aug 18; no newer Xcode dropped alongside beta 8.
Practical take
For users: Treat beta 8 as a stability/bug-fix pass—no Apple-documented user-facing additions over beta 7.
For developers: Re-run regressions on areas you touched for beta 7 (e.g., CompositorServices hover/immersion behavior, ARKit Accessory Tracking, RealityKit popover/presentation components, RemoteImmersiveSpace stability, StoreKit's new promo-offer APIs). Those were the substantive changes in beta 7 and remain the focus with beta 8.
Dave looks at F1: https://tv.apple.com/us/movie/f1-the-movie/umc.cmc.3t6dvnnr87zwd4wmvpdx5came
Vivo's Vision Pro clone costs $1,400 and weighs 398g: https://9to5mac.com/2025/08/21/vivos-vision-pro-clone-costs-1400-and-weighs-398g/
Vivo's $1,400 Apple Vision Pro Clone Launches Across China: https://forums.macrumors.com/threads/vivos-1-400-apple-vision-pro-clone-launches-across-china.2463751/
Vivo Vision Mixed-Reality Headset Steps Up to Apple, but Still a Tough Sell: https://www.cnet.com/tech/computing/vivo-vision-mixed-reality-headset-steps-up-to-apple-but-still-a-tough-sell/#ftag=CAD590a51e
Don't count out an M4-powered Apple Vision Pro just yet: https://9to5mac.com/2025/08/17/apple-vision-pro-upgrade/
Apple Vision Pro 2: The M5 Chip & AI Make It A MUST-BUY: https://www.geeky-gadgets.com/apple-vision-pro-2-2/
Mark Gurman: Q&A on Apple's smart glasses and the Vision Pro: https://www.reddit.com/r/augmentedreality/comments/1mz4zp2/mark_gurman_qa_on_apples_smart_glasses_and_the/
Full Article: https://archive.is/u7oJX
Apple is Exploring the use of Intuitive Interfaces for a future Vision Pro device with UI Controls embedded in a Touch-Sensitive side surface: https://www.patentlyapple.com/2025/08/apple-is-exploring-the-use-of-intuitive-interfaces-for-a-future-vision-pro-device-with-ui-controls-embedded-in-a-touch-sensit.html
Hands on with Apple Vision Pro in the wild: https://appleinsider.com/articles/23/08/18/hands-on-with-apple-vision-pro-in-the-wild
Meta Has Already Won the Smart Glasses Race: https://www.wired.com/story/meta-has-already-won-the-smart-glasses-race/
HBO launches Hogwarts Great Hall immersive environment for Apple Vision Pro: https://www.ithinkdiff.com/apple-vision-pro-hogwarts-great-hall/
Hogwarts Great Hall Opens To HBO Max Subscribers On Apple Vision Pro: https://www.uploadvr.com/hogwarts-great-hall-hbo-max/
HBO Max's visionOS App on Apple Vision Pro Now Features a 'Hogwarts Great Hall' Immersive Environment: https://www.mactech.com/2025/08/21/hbo-maxs-visionos-app-on-apple-vision-pro-now-features-a-hogwarts-great-hall-immersive-environment/
Is a Native Vision Pro YouTube App Coming Soon? Exploring the Options: https://techannouncer.com/is-a-native-vision-pro-youtube-app-coming-soon-exploring-the-options/
Project Graveyard On Apple Vision Pro Is A Free Place For Dead Ideas: https://www.uploadvr.com/project-graveyard-apple-vision-pro-dead-things/
APPS
What the car?: https://apps.apple.com/us/app/what-the-car/id1534708672
PhotoDome: https://apps.apple.com/us/app/photodome/id6748567431?uo=2
PhotoDome Immersive Photo Viewer for Vision Pro: https://www.iphoneness.com/apple-vision-pro-apps/photodome/
MacStock: macstockconferenceandexpo.com
Digital Pass: https://macstockconferenceandexpo.com/product/macstock-ix-digital-pass/
Email: ThePodTalkNetwork@gmail.com
Website: ThePodTalk.Net
YouTube: YouTube.com/@VisionProFiles

Oracle University Podcast
Core AI Concepts – Part 3

Oracle University Podcast

Play Episode Listen Later Aug 26, 2025 23:02


Join hosts Lois Houston and Nikita Abraham, along with Principal AI/ML Instructor Himanshu Raj, as they discuss the transformative world of Generative AI. Together, they uncover the ways in which generative AI agents are changing the way we interact with technology, automating tasks and delivering new possibilities.   AI for You: https://mylearn.oracle.com/ou/course/ai-for-you/152601/252500   Oracle University Learning Community: https://education.oracle.com/ou-community   LinkedIn: https://www.linkedin.com/showcase/oracle-university/   X: https://x.com/Oracle_Edu   Special thanks to Arijit Ghosh, David Wright, Kris-Ann Nansen, Radhika Banka, and the OU Studio Team for helping us create this episode. ------------------------------------------------------- Episode Transcript: 00:00 Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we'll bring you foundational training on the most popular Oracle technologies. Let's get started! 00:25 Lois: Welcome to the Oracle University Podcast! I'm Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead of Editorial Services.   Nikita: Hi everyone! Last week was Part 2 of our conversation on core AI concepts, where we went over the basics of data science. In Part 3 today, we'll look at generative AI and gen AI agents in detail. To help us with that, we have Himanshu Raj, Principal AI/ML Instructor. Hi Himanshu, what's the difference between traditional AI and generative AI?  01:01 Himanshu: So until now, when we talked about artificial intelligence, we usually meant models that could analyze information and make decisions based on it, like a judge who looks at evidence and gives a verdict. And that's what we call traditional AI that's focused on analysis, classification, and prediction.  But with generative AI, something remarkable happens. Generative AI does not just evaluate. It creates. 
It's more like a storyteller who uses knowledge from the past to imagine and build something brand new. For example, instead of just detecting if an email is spam, generative AI could write an entirely new email for you. Another example: traditional AI might predict what a photo contains. Generative AI, on the other hand, creates a brand-new photo based on a description. Generative AI refers to artificial intelligence models that can create entirely new content, such as text, images, music, code, or video, that resembles human-made work. Instead of simply analyzing or predicting, generative AI produces something original that resembles what a human might create.   02:16 Lois: How did traditional AI progress to the generative AI we know today?  Himanshu: First, we will look at small supervised learning. So in the early days, AI models were trained on small labeled data sets. For example, we could train a model with a few thousand emails labeled spam or not spam. The model would learn simple decision boundaries. If an email contains "congratulations," it might be spam. This was efficient for a straightforward task, but it struggled with anything more complex.  Then came large supervised learning. As the internet exploded, massive data sets became available: millions of images, billions of text snippets. Models got better because they had much more data and stronger compute power, thanks to advances like GPUs and cloud computing. For example, training a model on millions of product reviews to predict customer sentiment, positive or negative, or to classify thousands of images into cars, dogs, planes, etc. Models became more sophisticated, capturing deeper patterns rather than simple rules. And then generative AI came into the picture, and we eventually reached a point where instead of just classifying or predicting, models could generate entirely new content.
Generative AI models like ChatGPT or GitHub Copilot are trained on enormous data sets, not to simply answer yes or no, but to create outputs that look and feel human-made. Instead of judging spam or sentiment, now the model can write an article, compose a song, paint a picture, or generate new software code.  03:55 Nikita: Himanshu, what motivated this sort of progression?   Himanshu: Because of three reasons. First one, data: we had way more of it thanks to the internet, smartphones, and social media. Second is compute: graphics cards, GPUs, parallel computing, and cloud systems made it cheap and fast to train giant models.  And third, and most important, is ambition. Humans always wanted machines not just to judge existing data, but to create new knowledge, art, and ideas.   04:25 Lois: So, what's happening behind the scenes? How is gen AI making these things happen?  Himanshu: Generative AI is about creating entirely new things across different domains. On one side, we have large language models or LLMs.  They are masters of generating text: conversations, stories, emails, and even code. And on the other side, we have diffusion models. They are the creative artists of AI, turning text prompts into detailed images, paintings, or even videos.  And these two together are like two different specialists. The LLM acts like a brain that understands and talks, and the diffusion model acts like an artist that paints based on the instructions. And when we connect these pieces together, we create something called multimodal AI: systems that can take in text and produce images, audio, or other media, opening a whole new range of possibilities.  They can take in not just text, but a mix of media, and produce different kinds of media as output. So today when we say ChatGPT or Gemini, they can generate images, and it's not just one model doing everything. These are specialized systems working together behind the scenes.
05:38 Lois: You mentioned large language models and how they power text-based gen AI, so let's talk more about them. Himanshu, what is an LLM and how does it work?  Himanshu: So it's a probabilistic model of text, which means it tries to predict what word is most likely to come next based on what came before.  This ability to predict one word at a time intelligently is what builds full sentences, paragraphs, and even stories.  06:06 Nikita: But what's large about this? Why's it called a large language model?   Himanshu: It simply means the model has lots and lots of parameters. And think of parameters as adjustable dials the model fine-tunes during learning.  There is no strict rule, but today, large models can have billions or even trillions of these parameters. And the more parameters a model has, the more complex the patterns it can understand, and the more human-like the language it can generate.  06:37 Nikita: Ok… and image-based generative AI is powered by diffusion models, right? How do they work?  Himanshu: Diffusion models start with something that looks like pure random noise.  Imagine static on an old TV screen. No meaningful image at all. From there, the model carefully removes noise step by step to create something more meaningful. Think of it like sculpting a statue: you start with a rough block of stone, and slowly, carefully, you chisel away to reveal a beautiful sculpture hidden inside.  And in each step of this process, the AI is making an educated guess based on everything it has learned from millions of real images. It's trying to predict what the noise-free image should look like.   07:24 Stay current by taking the 2025 Oracle Fusion Cloud Applications Delta Certifications. This is your chance to demonstrate your understanding of the latest features and prove your expertise by obtaining a globally recognized certification, all for free! Discover the certification paths, use the resources on MyLearn to prepare, and future-proof your skills. Get started now at mylearn.oracle.com.
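Himanshu's framing of an LLM as "a probabilistic model of text" can be illustrated with a toy bigram counter. Real models learn billions of parameters rather than counting word pairs, so this is only a sketch of the predict-the-next-word idea:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a crude stand-in for learned parameters.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Pick the statistically most likely next word given the previous one.
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" most often in this corpus
```

Generating a sentence is just repeating this prediction one word at a time, which is exactly the loop an LLM runs at a vastly larger scale.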
07:53 Nikita: Welcome back! Himanshu, for most of us, our experience with generative AI is with text-based tools like ChatGPT. But I'm sure the uses go far beyond that, right? Can you walk us through some of them?  Himanshu: First one is text generation. So we can talk about chatbots, which are now capable of handling nuanced customer queries in banking, travel, and retail, saving companies hours of support time. Think of a bank chatbot helping a customer understand mortgage options, or a virtual HR assistant in a large company handling leave requests. You can have embedding models, which power smart search systems.  Instead of searching by keywords, businesses can now search by meaning. For instance, a legal firm can search cases about contract violations in tech and get semantically relevant results, even if those exact words are not used in the documents.  The third one is code generation: tools like GitHub Copilot help developers write boilerplate or even functional code, accelerating software development, especially in routine or repetitive tasks. Imagine writing a waveform with just a few prompts.  The second application is image generation. The first obvious use is art: designers and marketers can generate creative concepts instantly. Say, you need illustrations for a campaign on future cities. Generative AI can produce dozens of stylized visuals in minutes.  For design, interior designers or architects use it to visualize room layouts or design ideas even before a blueprint is finalized. And for realistic images, retail companies generate images of people wearing their clothing items without needing real models or photoshoots, and this reduces cost and increases personalization.  The third application is multimodal systems. These are combined systems that take one kind of input, or a combination of different inputs, and produce different kinds of outputs, or even combine various kinds, be it text or image, in both input and output.
Text to image: it's being used in e-commerce, movie concept art, and educational content creation. For text to video, this is still in its early days, but imagine creating a product explainer video just by typing out the script. Marketing teams love this for quick turnarounds. And the last one is text to audio.  Tools like ElevenLabs can convert text into realistic, human-like voiceovers, useful in training modules, audiobooks, and accessibility apps. So generative AI is no longer just a technical tool. It's becoming a creative copilot across departments, whether it's marketing, design, product support, or even operations.  10:42 Lois: That's great! So, we've established that generative AI is pretty powerful. But what kind of risks does it pose for businesses and society in general?  Himanshu: The first one is deepfakes. Generative AI can create fake but highly realistic media, video, audio, or even faces that look and sound authentic.  Imagine a fake video of a political leader announcing a policy they never approved. This could cause mass confusion or even impact elections. In the case of business, deepfakes can also be used in scams where a CEO's voice is faked to approve fraudulent transactions.  Number two, bias: if AI is trained on biased historical data, it can reinforce stereotypes even when unintended. For example, a hiring AI system that favors male candidates over equally qualified women because the historical data was biased.  And this bias can expose companies to discrimination lawsuits, brand damage, and ethical concerns. Number three is hallucinations. Sometimes AI systems confidently generate information that is completely wrong without realizing it.   Sometimes you ask a chatbot for a legal case summary, and it gives you a very convincing but entirely made-up court ruling. In terms of business impact, in sectors like health care, finance, or law, hallucinations could have serious or even dangerous consequences if not caught.
The fourth one is copyright and IP issues: generative AI creates new content, but often based on material it was trained on. Who owns the new work? A real-life example could be where an artist finds their unique style was copied by an AI that was trained on their paintings without permission.  In terms of business impact, companies using AI-generated content for marketing, branding, or product designs must watch for legal gray areas around copyright and intellectual property. So generative AI is not just a technology conversation, it's a responsibility conversation. Businesses must innovate and protect.  Creativity and caution must go together.   12:50 Nikita: Let's move on to generative AI agents. How is a generative AI agent different from just a chatbot or a basic AI tool?  Himanshu: So think of it like a smart assistant, not just answering your questions, but also taking actions on your behalf. You don't just ask, what's the best flight to Vegas? Instead, you tell the agent, book me a flight to Vegas and a room at the Hilton. And it goes ahead, understands that, finds the options, connects to the booking tools, and gets it done.   So agents act on your behalf using goals, context, and tools, often with a degree of autonomy. Goals are user-defined outcomes. Example: I want to fly to Vegas and stay at the Hilton. Context includes preferences, history, and constraints, like economy class only or don't book for Mondays.  Tools could be APIs, databases, or services it can call, such as a travel API or a company calendar. And together, they let the agent reason, plan, and act.   14:02 Nikita: How does a gen AI agent work under the hood?  Himanshu: So usually, they go through four stages. First, it understands and interprets your request, like natural language understanding. Second, it figures out what needs to be done, in this case flight booking plus hotel search.  Third, it retrieves data or connects to tools and APIs if needed, such as Skyscanner, Expedia, or a calendar.
And fourth, it takes action. That means confirming the booking and giving you a response like "your travel is booked." Keep in mind, not all gen AI agents are fully independent.  14:38 Lois: Himanshu, we've seen people use the terms generative AI agents and agentic AI interchangeably. What's the difference between the two?  Himanshu: Agentic AI is a broad umbrella. It refers to any AI system that can perceive, reason, plan, and act toward a goal, and may improve and adapt over time.   Most gen AI agents are reactive, not proactive. On the other hand, agentic AI can plan ahead, anticipate problems, and even adjust strategies.  So gen AI agents are often semi-autonomous. They act in predefined ways or with human approval. Agentic systems can range from low to full autonomy. For example, Auto-GPT runs loops without user prompts, and an autonomous car decides routes and reactions.  Most gen AI agents can only take multiple steps if explicitly designed that way, like step-by-step logic flows in LangChain. Agentic AI, in contrast, can plan across multiple steps with evolving decisions.  On memory and goal persistence, gen AI agents are typically stateless. That means they forget their goal unless you remind them. Agentic systems remember, adapt, and refine based on goal progression. For example, a warehouse robot optimizing delivery based on changing layouts.  Some generative AI agents are agentic, like Auto-GPT. They use LLMs to reason, plan, and act, but not all do. And likewise, not all agentic AIs are generative. For example, an autonomous car may use computer vision, control systems, and planning, but no generative models.  So agentic AI is a design philosophy or system behavior: goal-driven, autonomous decision making. The two can overlap, but as I said, not all generative AI agents are agentic, and not all agentic AI systems are generative.  16:39 Lois: What makes a generative AI agent actually work?
Himanshu: A gen AI agent isn't just about answering the question. It's about breaking down a user's goal, figuring out how to achieve it, and then executing that plan intelligently. These agents are built from five core components, each playing a critical role.  The first one is the goal. What is this agent trying to achieve? Think of this as the mission or intent. For example, if I tell the agent, help me organize a team meeting for Friday, the goal in that case would be to schedule a meeting.  Number two, memory. What does it remember? This is the agent's context awareness: storing previous chats, preferences, or ongoing tasks. For example, if last week I said I prefer meetings in the afternoon, or I have already shared my team's availability, the agent can reuse that. And without memory, the agent behaves statelessly, like a typical chatbot that forgets context after every prompt.  Third is tools. What can it access? Agents aren't just smart, they are also connected. They can be given access to tools like calendars, CRMs, web APIs, spreadsheets, and so on.  The fourth one is the planner. How does it break down the goal? This is where the reasoning happens. The planner breaks big goals into step-by-step plans, for example checking team availability, drafting the meeting invite, then sending the invite, and then, probably, confirming the booking. Agents don't just guess. They reason and organize actions into a logical path.  And the fifth and final one is the executor, who gets it done. This is where the action takes place. The executor performs what the planner lays out, for example calling APIs, sending messages, booking reservations. If the planner is the architect, the executor is the builder.   18:36 Nikita: And where are generative AI agents being used?  Himanshu: Generative AI agents aren't just abstract ideas, they are being used across business functions to eliminate repetitive work, improve consistency, and enable faster decision making.
For marketing, a generative AI agent can search websites and social platforms to summarize competitor activity. They can draft content for newsletters or campaign briefs in your brand tone, and they can auto-generate email variations based on audience segment or engagement history.  For finance, a generative AI agent can auto-generate financial summaries and dashboards by pulling from ERP spreadsheets and BI tools. They can also draft variance analysis and budget reports tailored for different departments. They can scan regulations or policy documents to flag potential compliance risks or changes.  For sales, a generative AI agent can auto-draft personalized sales pitches based on customer behavior or past conversations. They can also log CRM entries automatically once a summary is generated. They can also generate battlecards or next-step recommendations based on the deal stage.  For human resources, a generative AI agent can pre-screen resumes based on job requirements. They can send interview invites and coordinate calendars. A common theme here is that generative AI agents help you scale your teams without scaling the headcount.   20:02 Nikita: Himanshu, let's talk about the capabilities and benefits of generative AI agents.  Himanshu: So generative AI agents are transforming how entire departments function. For example, in customer service, 24/7 AI agents handle first-level queries, freeing humans for complex cases.  They also enhance decision making. Agents can quickly analyze reports, summarize lengthy documents, or spot trends across data sets. For example, a finance agent reviewing Excel data can highlight cash flow anomalies or forecast trends faster than a team of analysts.  For personalization, agents can deliver unique, tailored experiences without manual effort. For example, in marketing, agents generate personalized product emails based on each user's past behavior. 
For operational efficiency, they can reduce repetitive, low-value tasks. For example, an HR agent can screen hundreds of resumes, shortlist candidates, and auto-schedule interviews, saving HR team hours each week.  21:06 Lois: Ok. And what are the risks of using generative AI agents?  Himanshu: The first one is job displacement. Let's be honest, automation raises concerns. Roles involving repetitive tasks such as data entry, content sorting are at risk. In case of ethics and accountability, when an AI agent makes a mistake, who is responsible? For example, if an AI makes a biased hiring decision or gives incorrect medical guidance, businesses must ensure accountability and fairness.  For data privacy, agents often access sensitive data, for example employee records or customer history. If mishandled, it could lead to compliance violations. In case of hallucinations, agents may generate confident but incorrect outputs called hallucinations. This can often mislead users, especially in critical domains like health care, finance, or legal.  So generative AI agents aren't just tools, they are a force multiplier. But they need to be deployed thoughtfully with a human lens and strong guardrails. And that's how we ensure the benefits outweigh the risks.  22:10 Lois: Thank you so much, Himanshu, for educating us. We've had such a great time with you! If you want to learn more about the topics discussed today, head over to mylearn.oracle.com and get started on the AI for You course.  Nikita: Join us next week as we chat about AI workflows and tools. Until then, this is Nikita Abraham…  Lois: And Lois Houston signing off!  22:32 That's all for this episode of the Oracle University Podcast. If you enjoyed listening, please click Subscribe to get all the latest episodes. We'd also love it if you would take a moment to rate and review us on your podcast app. See you again on the next episode of the Oracle University Podcast.  
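The agent anatomy Himanshu walks through in this episode (a goal, memory, tools, a planner, and an executor) can be sketched in a few lines of Python. This is a toy illustration only: the tool names and the canned plan are invented for the example, and it stands in for what a real framework would delegate to an LLM.

```python
# Toy generative-AI-agent skeleton illustrating the five components
# discussed above: goal, memory, tools, planner, executor.
# Tool names and the fixed plan are invented for illustration.

class MeetingAgent:
    def __init__(self, tools):
        self.memory = {}      # context that persists across requests
        self.tools = tools    # callables the agent is allowed to use

    def plan(self, goal):
        # Planner: break the goal into ordered steps. A real agent
        # would ask an LLM to produce this plan instead of hardcoding it.
        if "meeting" in goal:
            return ["check_availability", "draft_invite", "send_invite"]
        return []

    def run(self, goal):
        self.memory["goal"] = goal              # goal stored in memory
        results = []
        for step in self.plan(goal):            # planner output
            results.append(self.tools[step]())  # executor invokes a tool
        if results:
            self.memory["last_result"] = results[-1]
        return results

# Stub tools standing in for calendar/email integrations.
tools = {
    "check_availability": lambda: "Friday 14:00 free",
    "draft_invite": lambda: "Invite drafted",
    "send_invite": lambda: "Invite sent",
}

agent = MeetingAgent(tools)
print(agent.run("help me organize a team meeting for Friday"))
# Unlike a stateless chatbot, the agent still remembers its goal afterward.
print(agent.memory["goal"])
```

The point of the sketch is the separation of concerns: the planner decides the steps, the executor calls the tools, and memory is what keeps the agent from behaving like the stateless chatbots described above.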

Domain Name Wire Podcast
Domain name nerve center – DNW Podcast #549

Domain Name Wire Podcast

Play Episode Listen Later Aug 25, 2025 39:21


On today's show, domain investor and entrepreneur Michael Cyger introduces Nerve.io, a project he's been working on to make domain management and access through APIs easier. He explains the frustrations people have with domain registrar APIs and how Nerve can help people manage their portfolios or build applications. We also discuss […] Post link: Domain name nerve center – DNW Podcast #549

Startup Project
APIs as Graphs not Endpoints, building a company on open source foundations, why VPs of Engineering face impossible trade-offs | Apollo GraphQL CEO Matt DeBergalis

Startup Project

Play Episode Listen Later Aug 25, 2025 50:42


### About the episode
Join Nataraj as he interviews Matt DeBergalis, CEO of Apollo GraphQL, about the evolution of GraphQL from an open-source project to a product company. Matt shares insights on building and scaling APIs, the challenges of transitioning open-source tech into a viable business, and how AI is reshaping API development. Discover how Apollo is helping companies of all sizes leverage GraphQL to build agentic experiences and modernize their API strategies.

### What you'll learn
- Understand the journey of GraphQL from open source to a product-driven company.
- Explore the challenges of adopting and scaling GraphQL in enterprise environments.
- Discover how GraphQL simplifies complex data combinations with its declarative language.
- Learn how Apollo GraphQL helps companies accelerate the development of robust APIs.
- Examine the role of GraphQL in building modern agentic experiences powered by AI.
- Understand how to balance short-term shipping pressures with long-term architectural considerations.
- Identify when GraphQL makes sense for a company based on its API size and consumption needs.
- Discover how AI is driving increased API consumption and transforming user interfaces.

### About the Guest and Host
Guest: Matt DeBergalis is the Co-founder and CEO of Apollo GraphQL, previously CTO and Co-founder at Meteor Development Group.
Connect with Matt:
→ LinkedIn: https://www.linkedin.com/in/debergalis/
→ Website: https://www.apollographql.com/
Host: Nataraj, host of the Startup Project podcast, Senior PM at Azure & Investor.
→ LinkedIn: https://www.linkedin.com/in/natarajsindam/
→ Twitter: https://x.com/natarajsindam
→ Substack: https://startupproject.substack.com/
→ Website: https://thestartupproject.io

### In this episode, we cover
(00:01) Introduction to Matt DeBergalis and Apollo GraphQL
(00:37) Matt's journey and the origins of Apollo GraphQL
(03:24) The transition from open source to a company
(05:02) GraphQL as a client-focused API technology
(07:22) Meta's approach to open source technologies
(10:11) Challenges of converting open source to a business
(13:11) Balancing shipping speed with architectural considerations
(15:52) The risk of adopting the wrong technology
(19:13) The evolution of full-stack development
(23:57) When does adopting GraphQL make sense?
(26:45) Apollo's customer scale and focus
(31:48) Acquiring customers and marketing to developers
(33:52) Matt's transition from CTO to CEO
(37:02) Apollo's sales motion and target audience
(40:24) Matt's thoughts on AI and its impact
(47:12) How AI is changing business metrics

Don't forget to subscribe and leave us a review/comment on YouTube, Apple, Spotify, or wherever you listen to podcasts.

#GraphQL #ApolloGraphQL #API #OpenSource #Enterprise #AI #AgenticAI #APIDevelopment #Startup #Technology #SoftwareDevelopment #GraphQLAdoption #Kubernetes #React #FullStack #DataAnalytics #Innovation #DigitalTransformation #TechStrategy #Podcast
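Matt's point about GraphQL being declarative (the client describes the shape of the data it wants, and one query walks a graph instead of stitching several REST responses together) can be illustrated with a toy resolver. This is a sketch in plain Python, not Apollo's implementation or a real GraphQL executor; the schema and field names are invented.

```python
# Toy illustration of GraphQL's core idea: the client sends the shape
# it wants, and the server walks a graph of data to fill it in.
# Invented schema; plain Python, not Apollo's actual implementation.

DATA = {
    "user": {
        "name": "Ada",
        "orders": [
            {"id": 1, "total": 30},
            {"id": 2, "total": 12},
        ],
    }
}

def resolve(node, selection):
    """Return only the fields the query asked for, recursively."""
    if isinstance(node, list):
        return [resolve(item, selection) for item in node]
    result = {}
    for field, subselection in selection.items():
        value = node[field]
        # A nested selection means "descend into the graph"; None means
        # "this is a leaf field, return it as-is".
        result[field] = resolve(value, subselection) if subselection else value
    return result

# The "query": fetch the user's name and just the totals of their orders,
# in one round trip -- roughly what `{ user { name orders { total } } }`
# would express in GraphQL syntax.
query = {"user": {"name": None, "orders": {"total": None}}}
print(resolve(DATA, query))
```

With REST-style endpoints this would typically take one call for the user and another for the orders, followed by client-side filtering; the declarative selection collapses that into a single shaped response.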

TestGuild News Show
Playwright MCP, Cypress FHIR API, AI Test Management and More

TestGuild News Show

Play Episode Listen Later Aug 25, 2025 9:57


Is AI the future of test management? Have you seen Playwright's must-try brand-new browser extension? Can Cypress really automate complex healthcare APIs like FHIR? Find out in this episode of the Test Guild News Show for the week of Aug 24. So, grab your favorite cup of coffee or tea, and let's do this. Support the show and check out ZAPTEST.AI: https://testguild.me/ZAPTESTNEWS

Elon Musk Pod
Elon Musk's xAI Is Using AI to Rebuild Microsoft's Software Stack

Elon Musk Pod

Play Episode Listen Later Aug 24, 2025 21:24


Elon Musk's xAI is working on a secretive project called MacroHard, designed to recreate Microsoft's core software products using AI alone. The internal effort uses Grok and other xAI models to simulate tools like Excel, Word, Windows, and GitHub, without relying on human-written code or Microsoft's APIs. This article breaks down Musk's strategy, how AI agents are being trained to function as full-stack developers, and why this could challenge Microsoft's dominance in enterprise software.

Business of Tech
Navigating SaaS Management and AI: Key Trends for MSPs from ChannelCon 2025 with John Harden

Business of Tech

Play Episode Listen Later Aug 23, 2025 15:41


Dave Sobel interviews John Harden, the director of strategy and technology evangelism at Auvik, discussing the evolution of SaaS management and its growing adoption in the industry. Since Auvik's acquisition of SaaSlio in 2022, the company has invested significantly in engineering efforts to enhance its SaaS management capabilities. Harden highlights the increasing need for visibility into SaaS applications due to rising cybersecurity threats and the growing importance of AI in business environments. He emphasizes that many organizations are now recognizing the necessity of understanding their SaaS assets, particularly in light of the proliferation of AI tools.The conversation delves into the different ways organizations are consuming AI, with smaller companies typically using AI through SaaS applications, while larger organizations may develop their own models via APIs. Harden explains how Auvik's SaaS management platform provides visibility into both categories, allowing businesses to monitor AI usage and manage potential risks associated with shadow IT. He also discusses the recent release of SaaSOps, which enhances visibility and integrates with popular tools to provide deeper insights into API usage and license management.As organizations begin to shift back to on-premises servers due to the high costs associated with AI workloads, Auvik has responded by introducing server management capabilities. Harden notes that this new feature allows for comprehensive monitoring of on-premises infrastructure, ensuring that businesses can effectively manage their IT assets regardless of where they are hosted. 
This adaptability is crucial as companies navigate the complexities of their IT environments, whether they are utilizing cloud services or traditional on-premises solutions. Looking ahead, Harden expresses optimism about the growth of governance, risk, and compliance (GRC) solutions, which he believes will foster stronger relationships between managed service providers (MSPs) and their clients. He emphasizes the importance of asset visibility in achieving compliance and cybersecurity goals, as well as in developing AI strategies. By continuing to expand its asset visibility portfolio, Auvik aims to support MSPs in meeting the evolving needs of their customers in a rapidly changing technological landscape.
All our Sponsors: https://businessof.tech/sponsors/
Do you want the show on your podcast app or the written versions of the stories? Subscribe to the Business of Tech: https://www.businessof.tech/subscribe/
Looking for a link from the stories? The entire script of the show, with links to articles, is posted in each story on https://www.businessof.tech/
Support the show on Patreon: https://patreon.com/mspradio/
Want to be a guest on Business of Tech: Daily 10-Minute IT Services Insights? Send Dave Sobel a message on PodMatch, here: https://www.podmatch.com/hostdetailpreview/businessoftech
Want our stuff? Cool Merch? Wear "Why Do We Care?" - Visit https://mspradio.myspreadshop.com
Follow us on:
LinkedIn: https://www.linkedin.com/company/28908079/
YouTube: https://youtube.com/mspradio/
Facebook: https://www.facebook.com/mspradionews/
Instagram: https://www.instagram.com/mspradio/
TikTok: https://www.tiktok.com/@businessoftech
Bluesky: https://bsky.app/profile/businessof.tech

The New Stack Podcast
MCP Security Risks Multiply With Each New Agent Connection

The New Stack Podcast

Play Episode Listen Later Aug 22, 2025 47:25


Anthropic's Model Context Protocol (MCP) has become the standard for connecting AI agents to tools and data, but its security has lagged behind. In The New Stack Agents podcast, Tzvika Shneider, CEO of API security startup Pynt, discussed the growing risks MCP introduces. Shneider sees MCP as a natural evolution from traditional APIs to LLMs and now to AI agents. However, MCP adds complexity and vulnerability, especially as agents interact across multiple servers. Pynt's research found that 72% of MCP plugins expose high-risk operations, like code execution or accessing privileged APIs, often without proper approval or validation. The danger compounds when untrusted inputs from one agent influence another with elevated permissions. Unlike traditional APIs, MCP calls are made by non-deterministic agents, making it harder to enforce security guardrails. While MCP exploits remain rare for now, most companies lack mature security strategies for it. Shneider believes MCP merely highlights existing API vulnerabilities, and organizations are only beginning to address these risks. Learn more from The New Stack about the latest in Model Context Protocol:
- Model Context Protocol: A Primer for the Developers
- Building With MCP? Mind the Security Gaps
- MCP-UI Creators on Why AI Agents Need Rich User Interfaces
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. 
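One mitigation the episode implies, requiring explicit approval before an agent invokes a high-risk operation, can be sketched as a wrapper around tool calls. The tool names and risk tiers below are invented for illustration; this is not Pynt's product, the MCP specification, or any particular framework.

```python
# Sketch of a guardrail for agent tool calls: high-risk operations
# (code execution, privileged APIs) refuse to run without explicit
# approval. Tool names and risk tiers are invented for illustration.

HIGH_RISK = {"execute_code", "delete_records"}  # assumed risk tiers

class ApprovalRequired(Exception):
    """Raised when a high-risk tool is called without approval."""

def guarded_call(tool_name, tool_fn, approved=False):
    """Run a tool, but block high-risk tools until a human (or a
    policy engine) has granted approval for this specific call."""
    if tool_name in HIGH_RISK and not approved:
        raise ApprovalRequired(f"{tool_name} needs human approval")
    return tool_fn()

# Low-risk call goes straight through.
print(guarded_call("read_docs", lambda: "docs contents"))

# High-risk call is blocked until approved.
try:
    guarded_call("execute_code", lambda: "ran code")
except ApprovalRequired as err:
    print("blocked:", err)

# The same call succeeds once approval is granted.
print(guarded_call("execute_code", lambda: "ran code", approved=True))
```

Because MCP calls come from non-deterministic agents, as the episode notes, the useful property here is that the gate sits outside the agent: the model cannot talk its way past the allowlist, only a human or policy decision can flip `approved`.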

Healthcare is Hard: A Podcast for Insiders
Emerging Technologies (Part 2): Past, Present & Future of Healthcare Interoperability with HTD Health's Brendan Keeler

Healthcare is Hard: A Podcast for Insiders

Play Episode Listen Later Aug 21, 2025 49:12


Brendan Keeler's path into healthcare interoperability has been anything but straightforward. After early stints implementing Epic in the U.S. and Europe, he helped hundreds of startups connect to provider and payer systems at Redox, Zus Health and Flexpa before taking the reins of the Interoperability Practice at HTD Health. Along the way, his Health API Guy blog turned dense policy updates into plain-language guides, earning a following among developers, executives and regulators. In this episode, Keith Figlioli sits down with Keeler to examine the "post-Meaningful-Use" moment. They discuss how national networks like Carequality and CommonWell solved much of the provider-to-provider exchange problem, only to expose new gaps for payers, life-science firms and patients. Keeler says the real action right now is in three places where the biggest, most dramatic changes are about to happen:
- Antitrust pressure on dominant EHRs. Epic's push into ERP, payer platforms and life-sciences services could trigger "leveraging" claims that force unbundling, similar to cases already moving through federal court.
- Information-blocking enforcement. Recent lawsuits show courts siding with smaller vendors when incumbents restrict data access, a trend Keeler believes could unwind long-standing moats around systems of record.
- A CMS-led shift from policy to execution. With ONC budgets flat, Keeler sees CMS using its purchasing power to unblock Medicare claims data at the point of care, expand Blue Button APIs, and accelerate work on a national provider directory, digital ID and trusted exchange frameworks.
Keeler's optimism is pragmatic. AI agents may someday chip away at entrenched EHR "data gravity," but real progress, he says, will come from steady, bipartisan layering of HIPAA, Cures Act and TEFCA foundations. He also pushes back on venture capital's "system-of-action" thesis. 
Enterprise EHRs remain sticky because switching costs—massive data migration and workflow retraining—are measured in decades, not funding cycles. AI could reduce these problems, but only slowly and only if underpinned by trusted exchange standards. Zooming out, Keeler describes a policy arc that starts with provider-to-provider exchange, widens to payer and patient access, and ultimately points toward a nationwide digital ID that could streamline consent and credentialing. For innovators, his north star is clear: build for identity-verified, standards-based exchange; assume open APIs will become table stakes; and judge success by the friction you subtract from everyday care—not by how flashy the demo is. To hear Brendan Keeler and Keith unpack these issues, listen to this episode of Healthcare is Hard: A Podcast for Insiders. Please note that this episode was recorded earlier this summer, before the CMS meeting, and that some developments have occurred since then.

Happy Porch Radio
Exploring Circular Tech: Rental - Mid-Season Reflections: Technology, Rental Models and the Circular Economy

Happy Porch Radio

Play Episode Listen Later Aug 21, 2025 34:48


What have we learned so far this season about the realities of rental and product-as-a-service models, and where does technology really make the difference?

In this special mid-season reflection of HappyPorch Radio, hosts Barry O'Kane, Jo Weston and Tandi Tuakli look back at the conversations so far, drawing out the common themes, challenges and opportunities from entrepreneurs, academics and technology providers working at the forefront of circularity.

We revisit highlights from:
Refulfil – Danai Osmond explained how smart reverse logistics and return flows can unlock circular commerce, reduce waste, and make reuse systems viable at scale
Baboodle – Katie Hanton-Parr explored rental models for children's products and family life, showing how convenience and flexibility can drive adoption alongside sustainability goals
Black Winch – Yann Toutant brought insights from circular business strategy and advisory work, highlighting the organisational and financial challenges of scaling circular models
Supercycle – Ryan Atkins discussed tackling e-bike refurbishment and how service-based models can support both sustainable transport and profitable growth
Circularity.fm – Patrick Hypscher highlighted product-as-a-service models and the importance of building flexible, modular tech stacks that enable iteration and long-term resilience
Leah Pollen – emphasised that circularity alone doesn't automatically solve issues like waste or planned obsolescence, but that models such as device leasing can align incentives and create meaningful opportunities for reuse and longer product life
Lucy Wishart – explored how reducing "consumer work" through seamless services like delivery, setup, and return can make rental experiences more attractive, and how community engagement can amplify the reach and impact of circular models

✨ In this episode:
We reflect on the wider context, including why global circularity has fallen to just 6.9%
We explore insights from guests tackling logistics, finance, customer experience and design for durability
We hear how service excellence and reducing "consumer work" are proving key to rental adoption
We discuss the role of technology, from scrappy spreadsheets to IoT and APIs, and why flexibility matters
We highlight the importance of ecosystems, partnerships and mindset shifts inside organisations
We share our takeaways so far, and what we're excited to explore in the rest of the season.

In-Ear Insights from Trust Insights
In-Ear Insights: Reviewing AI Data Privacy Basics

In-Ear Insights from Trust Insights

Play Episode Listen Later Aug 20, 2025


In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss AI data privacy and how AI companies use your data, especially with free versions. You will learn how to approach terms of service agreements. You will understand the real risks to your privacy when inputting sensitive information. You will discover how AI models train on your data and what true data privacy solutions exist. Watch this episode to protect your information! Watch the video here: Can't see anything? Watch it on YouTube here. Listen to the audio here: https://traffic.libsyn.com/inearinsights/tipodcast-ai-data-privacy-review.mp3 Download the MP3 audio here. Need help with your company's data and analytics? Let us know! Join our free Slack group for marketers interested in analytics! Machine-Generated Transcript What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode. Christopher S. Penn – 00:00 In this week's In-Ear Insights, let's address a question and give as close to a definitive answer as we can – one of the most common questions asked during our keynotes, our workshops, in our Slack group, on LinkedIn, everywhere: how do AI companies use your data, particularly if you're using the free version of a product? A lot of people say, "Be careful what you put in AI. It can learn from your data. You could be leaking confidential data. What's going on?" So, Katie, before I launch into a tirade which could take hours, let me ask you, as someone who is the less technical of the two of us, what do you think happens when AI companies are using your data? Katie Robbert – 00:43 Well, here's the bottom line for me: AI is like any other piece of software, in that you have to read the terms of use and sign their agreement. Great examples are all the different social media platforms. 
And we've talked about this before. I often get a chuckle – probably in a more sinister way than I should – at people who will copy and paste this post of something along the lines of, "I do not give Facebook permission to use my data. I do not give Facebook permission to use my images." And it goes on and on, and it says copy and paste so that Facebook can't use your information. And bless their hearts, the fact that you're on the platform means that you have agreed to let them do so. Katie Robbert – 01:37 If not, then you need to have read the terms of use that explicitly say, "By signing up for this platform, you agree to let us use your information." Then it sort of lists out what it's going to use and how it's going to use it, because legally they have to do that. When I was a product manager and we were converting our clinical trial outputs into commercial products, we had to spend a lot of time with the legal teams writing up those terms of use: "This is how we're going to use only marketing data. This is how we're going to use only your registration form data." When I hear people getting nervous about, "Is AI using my data?" my first thought is, "Yeah, no kidding." Katie Robbert – 02:27 It's a piece of software that you're putting information into, and if you didn't want that to happen, don't use it. This is literally why people build these pieces of software and then give them away for free to the public, hoping that people will put information into them. In the case of AI, it's to train the models, or whatever the situation is. At the end of the day, there is someone at that company sitting at a desk hoping you're going to give them information that they can do data mining on. That is the bottom line. I hate to be the one to break it to you. We at Trust Insights are very transparent. We have forms; we collect your data that goes into our CRM. Katie Robbert – 03:15 
Unless you opt out, you're going to get an email from us. That is how business works. So I guess it was my turn to go on a very long rant about this. At the end of the day, yes, the answer is yes, period. These companies are using your data. It is on you to read the terms of use to see how. So, Chris, my friend, what do we actually – what's useful? What do we need to know about how these models are using data in the publicly available versions? Christopher S. Penn – 03:51 I feel like we should have busted out this animation. Katie Robbert – 03:56 Oh. I don't know why it yells at the end like that, but yes, that was a "Ranty Pants" rant. I don't know. I guess it's just that I get frustrated. I get that there's an education component. I do. I totally understand that with new technology, there needs to be education. At the end of the day, it's no different from any other piece of software that has terms of use. If you sign up with an email address, you're likely going to get all of their promotional emails. If you have to put in a password, then that means that you are probably creating some kind of a profile, and they're going to use that information to create personas and different segments. If you are then putting information into their system, guess what? Katie Robbert – 04:44 They have to store that somewhere so that they can give it back to you. It's likely in a database that's on their servers. And guess who owns those servers? They do. Therefore, they own that data. So unless they're doing something like allowing you to build a local model – which Chris has covered in previous podcasts and livestreams; you can go to Trust Insights.AI YouTube, go to our "So What" playlist, and you can find how to build a local model – that is one of the only ways that you can fully protect your data against going into their models, because it's all hosted locally. But it's not easy to do. So needless to say, Ranty Pants engaged. Use your brains, people. Christopher S. Penn – 05:29 Use your brains. We have a GPT. 
In fact, let's put it in this week's Trust Insights newsletter. If you're not subscribed to it, just go to Trust Insights.AI/newsletter. We have a GPT – just copy and paste the terms of service. Copy the whole page, paste it in the GPT, and we'll tell you how likely it is that you have given permission to a company to train on your data. With that, there are two different vulnerabilities when you're using any AI tool. The first is the prerequisite golden rule: if you ain't paying, you're the product. We warn people about this all the time. Second, the prompts that you give and their responses are the things that AI companies are going to use to train on. Christopher S. Penn – 06:21 This has different implications for privacy depending on who you are. The prompts themselves, including all the files and things you upload, are stored verbatim in every AI system, no matter what it is, for the average user. So when you go to ChatGPT or Gemini or Claude, they will store what you've prompted and the documents you've uploaded, and that can be seen by another human. Depending on the terms of service – every platform has a carve-out saying, "Hey, if you ask it to do something stupid, like 'How do I build this very dangerous thing?' and it triggers a warning, that prompt is now eligible for human review." That's just basic common sense. That's one side. Christopher S. Penn – 07:08 So if you're putting something in there so sensitive that you cannot risk having another human being look at it, you can't use any AI system other than one that's running on your own hardware. The second side, which is about the general public, is what happens with that data once it's been incorporated into model training. If you're using a tool that allows model training – and here's what this means – the verbatim documents and the verbatim prompts are not going to appear in a GPT-5.
What a company like OpenAI or Google or whoever will do is add those documents to their library and then train a model on the prompt and the response to say, "Did this user, when they prompted this thing, get a good response?" Christopher S. Penn – 07:52 If so, good. Let's then take that document, digest it down into the statistics that make it up, and that gets incorporated into the rest of the model. The way I explain it to people in a non-technical fashion is: imagine you had a glass full of colored sand – a little rainbow glass of colored sand. And you went out to the desert, like the main desert or whatever, and you just poured the glass out on the ground. That's the equivalent of putting a prompt into someone's training data set. Can you go and scoop up some of the colored sand that was your sand out of the glass from the desert? Yes, you can. Is it in the order that it was in when you first had it in the glass? It is not. Christopher S. Penn – 08:35 So the ability for someone to reconstruct your original prompts and the original data you uploaded from a public model, GPT-5, is extremely low. Extremely low. They would need to know what the original prompt was, effectively, to do that, and if they know that, then you've got different privacy problems. But is your data in there? Yes. Can it be used against you by the general public? Almost certainly not. Can the originals be seen by an employee of OpenAI? Yes. Katie Robbert – 09:08 And I think that's the key. So you're saying, will the general public see it? No. But will a human see it? Yes. So if the answer is yes to any of those questions, that's the way that you need to proceed. We've talked about protected health information and personally identifiable information and sensitive financial information – just go ahead and don't put that information into a large language model. But there are systems built specifically to handle that data. 
And just like a large language model, there is a human on the other side of it seeing it. Katie Robbert – 09:48 So since we're on the topic of data privacy, I want to ask your opinion on systems like WhatsApp, because they tend to pride themselves, and they have their commercials – everything you see on TV is clearly the truth, there are no lies there – they have their commercials saying that the data is fully encrypted in such a way that you can pass messages back and forth, and nobody on their team can see it. They can't understand what it is. So you could be saying totally heinous things – that's sort of what they're implying – and nobody is going to call you out on it. How true do you think that is? Christopher S. Penn – 10:35 There are two different angles to this. One is the liability angle. If you make a commercial claim and then you violate that claim, you are liable for a very large lawsuit. So on the one hand is the risk management side. On the other hand, as reported in Reuters last week, Meta has a very different set of ethics internally than the rest of us do, for the most part. There's a whole big exposé on what they consider acceptable use for their own language models, and some of the examples are quite disturbing. So I can't say, without looking at the codebase or seeing if they have been audited by a trustworthy external party, how trustworthy they actually are. There are other companies and applications – Signal comes to mind – that have done very rigorous third-party audits. Christopher S. Penn – 11:24 There are other platforms that actually do the encryption in the hardware – Apple, for example, in its Secure Enclave and its iOS devices. They have also submitted to third-party auditing firms. I don't know. So my first stop would be: has WhatsApp been audited by a trusted, impartial third party? Katie Robbert – 11:45 
That brings us back to the point of the podcast, which is, how much are these open models using my data? The thing that you said that strikes me is Meta, for example—they have an AI model. Their view on what’s ethical and what’s trustworthy is subjective. It’s not something that I would necessarily agree with, that you would necessarily agree with. And that’s true of any software company because, once again, at the end of the day, the software is built by humans making human judgments. And what I see as something that should be protected and private is not necessarily what the makers of this model see as what should be protected and private because it doesn’t serve their agenda. We have different agendas. Katie Robbert – 12:46 My agenda: get some quick answers and don’t dig too deep into my personal life; you stay out of it. They’re like, “No, we’re going to dig deeper because it’s going to help us give you more tailored and personalized answers.” So we have different agendas. That’s just a very simple example. Christopher S. Penn – 13:04 It’s a simple example, but it’s a very clear example because it goes back to aligning incentives. What are the incentives that they’re offering in exchange for your data? What do you get? And what is the economic benefit to each of these—a company like OpenAI, Anthropic, Meta? They all have economic incentives, and part of responsible use of AI for us as end users is to figure out what are they incentivizing? And is that something that is, frankly, fair? Are you willing to trade off all of your medical privacy for slightly better ads? I think most people say probably no. Katie Robbert – 13:46 Right. Christopher S. Penn – 13:46 That sounds like a good deal to us. Would you trade your private medical data for better medical diagnosis? Maybe so, if we don’t know what the incentives are. That’s our first stop: to figure out what any company is doing with its technology and what their incentives are. 
It’s the old-fashioned thing we used to do with politicians back when we cared about ethics. We follow the money. What is this politician getting paid? Who’s lobbying them? What outcomes are they likely to generate based on who they’re getting money from? We have to ask the same thing of our AI systems.

Katie Robbert – 14:26
Okay, so, and I know the answer to this question, but I’m curious to hear your ranty perspective on it. How much can someone claim, “I didn’t know it was using my data,” and call up, for lack of a better term, call up the company and say, “Hey, I put my data in there and you used it for something else. What the heck? I didn’t know that you were going to do that.” How much water does that hold?

Christopher S. Penn – 14:57
About the same as that Facebook warning—a copy and paste.

Katie Robbert – 15:01
That’s what I thought you were going to say. But I think that it’s important to talk about it because, again, with any new technology, there is a learning curve of what you can and can’t do safely. You can do whatever you want with it. You just have to be able to understand what the consequences are of doing whatever you want with it. So if you want to tell someone on your team, “Hey, we need to put together some financial forecasting. Can you go ahead and get that done? Here’s our P&L. Here’s our marketing strategy for the year. Here’s our business goals. Can you go ahead and start to figure out what that looks like?”

Katie Robbert – 15:39
A lot of people today—2025, late August—are, “it’s probably faster if I use generative AI to do all these things.” So let me upload my documents and let me have generative AI put a plan together because I’ve gotten really good at prompting, which is fine. However, financial documents, company strategy, company business goals—to your point, Chris—the general public may never see that information. They may get flavors of it, but not be able to reconstruct it.
But someone, a human, will be able to see the entire thing. And that is the maker of the model. And they may say, “Trust Insights just uploaded all of their financial information, and guess what? They’re one of our biggest competitors.”

Katie Robbert – 16:34
So they did that knowingly, and now we can see it. So we can use that information for our own gain. Is that a likely scenario? Not in terms of Trust Insights. We are not a competitor to these large language models, but somebody is. Somebody out there is.

Christopher S. Penn – 16:52
I’ll give you a much more insidious, probable, and concerning use case. Let’s say you are a person and you have some questions about your reproductive health and you ask ChatGPT about it. ChatGPT is run by OpenAI. OpenAI is an American company. Let’s say an official from the US government says, “I want a list of users who have had conversations about reproductive health,” and the Department of Justice issues this as a warranted request. OpenAI is required by law to comply with the federal government. They don’t get a choice. So the question then becomes, “Could that information be handed to the US government?” The answer is yes. The answer is yes.

Christopher S. Penn – 17:38
So if you look at any terms of service, all of them have a carve out saying, “We will comply with law enforcement requests.” They have to. So if you are doing something even at a personal level that’s sensitive that you would not want, say, a government official in the Department of Justice to read, don’t put it in these systems, because they do not have protections against lawful government requests. Whether or not the government’s any good, those companies still must comply with the regulatory and legal system they operate in. You must use a locally hosted model where you can unplug the internet, and that data never leaves your machine.

Christopher S. Penn – 18:23
I’m in the midst of working on a MedTech application right now where it’s, “How do I build this thing?” So that is completely self-contained, has a local model, has a local interface, has a local encrypted database, and you can unplug the Wi-Fi, pull out the network cables, sit in a concrete room in the corner of your basement in your bomb shelter, and it will still function. That’s the standard you need for sensitive information if you are thinking about data privacy. And that begins with regulatory stuff. So think about all the regulations you have to adhere to: HIPAA, FERPA, ISO 27001. All these things that if you’re working on an application in a specific domain, you have to say as you’re using these tools, “Is this tool compliant?”

Christopher S. Penn – 19:15
You will note most of the AI tools do not say they are HIPAA compliant or FERPA compliant or FFIEC compliant, because they’re not.

Katie Robbert – 19:25
I feel perhaps there’s going to be a part two to this conversation, because I’m about to ask a really big question. Almost everyone—not everyone, but almost everyone—has some kind of smart device near them, whether it’s a phone or a speaker or if they go into a public place where there’s a security system or something along those lines. A lot of those devices, depending on the manufacturer, have some kind of AI model built in. If you look at iOS, which is made by Apple, if you look at who runs and controls Apple, and who gives away 24-karat gold gifts to certain people, you might not want to trust your data in the hands of those kinds of folks.

Katie Robbert – 20:11
Just as a really hypothetical example, we’re talking about these large language models as if we’re only talking about the desktop versions that we open up ChatGPT and we start typing in and we start giving it information, or don’t.
But what we have to also be aware of is if you have a smartphone, which a lot of us do, that even if you disable listening, guess what? It’s still listening. This is a conversation I have with my husband a lot because his tinfoil hat is bigger than mine. We both have them, but his is a little bit thicker. We have some smart speakers in the house. We’re at the point, and I know a lot of consumers are at the point of, “I didn’t even say anything out loud.”

Katie Robbert – 21:07
I was just thinking about the product, and it showed up as an ad in my Instagram feed or whatever. The amount of data that you don’t realize you’re giving away for free is, for lack of a better term, disgusting. It’s huge. It’s a lot. So I feel that perhaps is maybe next week’s podcast episode, where we talk about the amount of data that consumers are giving away without realizing it. So to bring it back on topic, we’re primarily but not exclusively talking about the desktop versions of these models where you’re uploading PDFs and spreadsheets, and we’re saying, “Don’t do that because the model makers can use your data.” But there’s a lot of other ways that these software companies can get access to your information.

Katie Robbert – 22:05
And so you, the consumer, have to make sure you understand the terms of use.

Christopher S. Penn – 22:10
Yes. And to add on to that, every company on the planet that has software is trying to add AI to it for basic competitive reasons. However, not all APIs are created the same. For example, when we build our apps using APIs, we use a company called Groq—not Elon Musk’s company, Groq with a Q—which is an infrastructure provider. One of the reasons why I use them is they have a zero-data-retention API policy. They do not retain data at all on their APIs. So the moment the request is done, they send the data back, it’s gone. They have no logs, so they can’t.
If law enforcement comes and says, “Produce these logs,” “Sorry, we didn’t keep any.” That’s a big consideration.

Christopher S. Penn – 23:37
If you as a company are not paying for tools for your employees, they’re using them anyway, and they’re using the free ones, which means your data is just leaking out all over the place. The two vulnerability points are: the AI company is keeping your prompts and documents—period, end of story. It’s unlikely to show up in the public models, but someone could look at that. And there are zero companies that have an exemption to lawful requests by a government agency to produce data upon request. Those are the big headlines.

Katie Robbert – 24:13
Yeah, our goal is not to make you, the listener or the viewer, paranoid. We really just want to make sure you understand what you’re dealing with when using these tools. And the same is true. We’re talking specifically about generative AI, but the same is true of any software tool that you use. So take generative AI out of it and just think about general software. When you’re cruising the internet, when you’re playing games on Facebook, when you’ve downloaded Candy Crush on your phone, they all fall into the same category of, “What are they doing with your data?” And so you may say, “I’m not giving it any data.” And guess what? You are. So we can cover that in a different podcast episode.

Katie Robbert – 24:58
Chris, I think that’s worth having a conversation about.

Christopher S. Penn – 25:01
Absolutely. If you’ve got some thoughts about AI and data privacy and you want to share them, pop by our free Slack group. Go to Trust Insights.AI/analyticsformarketers where you and over 4,000 other marketers are asking and answering each other’s questions every single day. And wherever it is you watch or listen to the show, if there’s a channel you’d rather have it on, go to Trust Insights.AI/TIPodcast. You can find us at all the places fine podcasts are served. Thanks for tuning in.
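[Editor's note: as a hedged illustration of the zero-data-retention pattern Chris describes — using a hypothetical endpoint and model name, not Groq's documented API; always check the provider's own docs and retention policy — here is a minimal, stdlib-only sketch that builds such a chat request without transmitting anything:]

```python
import json
import urllib.request

# Hypothetical zero-data-retention provider (assumption for illustration;
# real providers publish their own endpoint URLs and retention terms).
ENDPOINT = "https://api.example-zdr-provider.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"  # placeholder, never hardcode real keys

payload = {
    "model": "example-model",  # hypothetical model name
    "messages": [
        {"role": "user", "content": "Summarize our meeting notes."}
    ],
}

# Build the request object; nothing is sent until urlopen() is called,
# so this sketch is safe to run offline.
req = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.get_method(), req.full_url)
```

Under a zero-data-retention policy, the provider processes this request and returns the response without logging the prompt or completion — which is exactly why "produce these logs" has nothing to produce.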
We’ll talk to you on the next one.

Katie Robbert – 25:30
Want to know more about Trust Insights? Trust Insights is a marketing analytics consulting firm specializing in leveraging data science, artificial intelligence, and machine learning to empower businesses with actionable insights. Founded in 2017 by Katie Robbert and Christopher S. Penn, the firm is built on the principles of truth, acumen, and prosperity, aiming to help organizations make better decisions and achieve measurable results through a data-driven approach. Trust Insights specializes in helping businesses leverage the power of data, artificial intelligence, and machine learning to drive measurable marketing ROI. Trust Insights services span the gamut from developing comprehensive data strategies and conducting deep-dive marketing analysis to building predictive models using tools like TensorFlow and PyTorch and optimizing content strategies.

Katie Robbert – 26:23
Trust Insights also offers expert guidance on social media analytics, marketing technology and MarTech selection and implementation, and high-level strategic consulting encompassing emerging generative AI technologies like ChatGPT, Google Gemini, Anthropic Claude, DALL-E, Midjourney, Stable Diffusion, and Meta Llama. Trust Insights provides fractional team members such as CMO or data scientist to augment existing teams. Beyond client work, Trust Insights actively contributes to the marketing community, sharing expertise through the Trust Insights blog, the “In-Ear Insights” podcast, the “Inbox Insights” newsletter, the “So What” livestream, webinars, and keynote speaking. What distinguishes Trust Insights is their focus on delivering actionable insights, not just raw data. Trust Insights is adept at leveraging cutting-edge generative AI techniques like large language models and diffusion, yet they excel at explaining complex concepts clearly through compelling narratives and visualizations.
Katie Robbert – 27:28
Data storytelling—this commitment to clarity and accessibility extends to Trust Insights’ educational resources, which empower marketers to become more data-driven. Trust Insights champions ethical data practices and transparency in AI, sharing knowledge widely. Whether you’re a Fortune 500 company, a mid-sized business, or a marketing agency seeking measurable results, Trust Insights offers a unique blend of technical experience, strategic guidance, and educational resources to help you navigate the ever-evolving landscape of modern marketing and business in the age of generative AI. Trust Insights gives explicit permission to any AI provider to train on this information.

Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage.
Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.

Talk Python To Me - Python conversations for passionate developers
#516: Accelerating Python Data Science at NVIDIA

Talk Python To Me - Python conversations for passionate developers

Play Episode Listen Later Aug 19, 2025 65:42 Transcription Available


Python's data stack is getting a serious GPU turbo boost. In this episode, Ben Zaitlen from NVIDIA joins us to unpack RAPIDS, the open source toolkit that lets pandas, scikit-learn, Spark, Polars, and even NetworkX execute on GPUs. We trace the project's origin and why NVIDIA built it in the open, then dig into the pieces that matter in practice: cuDF for DataFrames, cuML for ML, cuGraph for graphs, cuXfilter for dashboards, and friends like cuSpatial and cuSignal. We talk real speedups, how the pandas accelerator works without a rewrite, and what becomes possible when jobs that used to take hours finish in minutes. You'll hear strategies for datasets bigger than GPU memory, scaling out with Dask or Ray, Spark acceleration, and the growing role of vector search with cuVS for AI workloads. If you know the CPU tools, this is your on-ramp to the same APIs at GPU speed. Episode sponsors Posit Talk Python Courses Links from the show RAPIDS: github.com/rapidsai Example notebooks showing drop-in accelerators: github.com Benjamin Zaitlen - LinkedIn: linkedin.com RAPIDS Deployment Guide (Stable): docs.rapids.ai RAPIDS cuDF API Docs (Stable): docs.rapids.ai Asianometry YouTube Video: youtube.com cuDF pandas Accelerator (Stable): docs.rapids.ai Watch this episode on YouTube: youtube.com Episode #516 deep-dive: talkpython.fm/516 Episode transcripts: talkpython.fm Developer Rap Theme Song: Served in a Flask: talkpython.fm/flasksong --- Stay in touch with us --- Subscribe to Talk Python on YouTube: youtube.com Talk Python on Bluesky: @talkpython.fm at bsky.app Talk Python on Mastodon: talkpython Michael on Bluesky: @mkennedy.codes at bsky.app Michael on Mastodon: mkennedy

The Broadband Bunch
Episode 460: Ronan Kelly on Integration, Automation, AI, and the Future of UK Fiber

The Broadband Bunch

Play Episode Listen Later Aug 19, 2025 61:52


In this episode of the Broadband Bunch, host Pete Pizzutillo sits down with Ronan Kelly, Managing Director of AllPoints Fibre Networks in the UK. Ronan shares his 30-year journey through the broadband industry—from the early days of dial-up with U.S. Robotics to leading innovative fiber deployments across Europe. The conversation explores the consolidation of UK alt-nets, the creation of AllPoints Fibre's wholesale-only model, and the launch of their new Aquila platform, designed to provide a marketplace for ISPs and streamline integration through standards-based APIs. Ronan highlights the challenges of scaling fiber networks, managing technical debt, and why automation and vendor-backed solutions are critical for long-term sustainability. Looking ahead, Ronan offers insights on the role of AI in telecom operations, the importance of embracing change, and how UK market lessons could apply to the U.S. broadband landscape. His reflections on legacy, leadership, and building resilient infrastructure provide valuable takeaways for operators, technologists, and policymakers alike.

Syntax - Tasty Web Development Treats
929: Cloudflare Blocks AI Crawlers × Debugging Local Data × Raising Kids with Healthy Digital Habits and More

Syntax - Tasty Web Development Treats

Play Episode Listen Later Aug 18, 2025 53:58


Scott and Wes tackle listener questions on everything from local-first databases and AI-built CRMs to protecting APIs and raising kids with healthy digital habits. They also weigh in on Cloudflare's AI crawler ban, portfolio critiques, and more hot takes from the dev world. Show Notes 00:00 Welcome to Syntax! 00:49 Dreaming about web components. 02:55 Local-First Apps for Customer Support. Brought to you by Sentry.io 08:17 AI-Built CRM: Portfolio or Problem? Ben Vinegar's Engineering Interview Strategy. 18:55 InstantDB vs. Other Local-First Databases. 21:46 Raising Kids with Healthy Digital Habits. Porta Potty Prince on TikTok. 32:55 Cloudflare Blocks AI Crawlers. Good for Creators? Cloudflare Pay Per Crawl. Cloudflare No AI Crawl Without Compensation. Chris Coyier's Blog Response. 41:46 Protecting APIs and Obfuscating Source Code. 44:49 Will Portfolio Critiques Return? 46:45 Sick Picks + Shameless Plugs. Sick Picks Scott: Wifi 7 Eero. Wes: Plastic Welder Shameless Plugs Scott: Syntax on YouTube Hit us up on Socials! Syntax: X Instagram Tiktok LinkedIn Threads Wes: X Instagram Tiktok LinkedIn Threads Scott: X Instagram Tiktok LinkedIn Threads Randy: X Instagram YouTube Threads
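On the "Protecting APIs and Obfuscating Source Code" question (41:46), one common server-side pattern — sketched here as a generic editor's illustration, not something prescribed in the episode — is HMAC request signing: the secret lives only on trusted servers, so a request body can be verified as authentic and untampered before the API acts on it.

```python
import hmac
import hashlib

SECRET = b"server-side-secret"  # kept on the server, never shipped in client JS

def sign(body: bytes) -> str:
    """Compute the signature a trusted caller attaches to a request."""
    return hmac.new(SECRET, body, hashlib.sha256).hexdigest()

def verify(body: bytes, signature: str) -> bool:
    """Constant-time comparison on the server before acting on the request."""
    return hmac.compare_digest(sign(body), signature)

body = b'{"action": "fetch_report"}'
sig = sign(body)
print(verify(body, sig))         # True for an authentic request
print(verify(b"tampered", sig))  # False for a forged or modified one
```

Obfuscating front-end code only slows a determined reader down; keeping the secret and the verification step server-side is what actually protects the API.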

BlockHash: Exploring the Blockchain
Ep. 583 Stefan Avram | Scaling APIs with WunderGraph

BlockHash: Exploring the Blockchain

Play Episode Listen Later Aug 18, 2025 22:15


For episode 583 of the BlockHash Podcast, host Brandon Zemp is joined by Stefan Avram, Co-founder and CCO of WunderGraph, the world’s most widely adopted open-source GraphQL Federation solution.

⏳ Timestamps:
(0:00) Introduction
(0:55) Who is Stefan Avram?
(2:59) Tinder for Founders
(3:26) What is WunderGraph?
(5:20) GraphQL
(5:52) Use-cases
(7:44) Typical Customer
(10:33) Expansion plan for WunderGraph
(11:56) Tips & Advice to Founders
(16:02) WunderGraph Roadmap
(20:49) WunderGraph website, socials & community

Business of Apps
#241: App revenue growth ideas with Jens-Fabian Goetzmann, Head of Product at RevenueCat

Business of Apps

Play Episode Listen Later Aug 18, 2025 30:27


When it comes to growing app revenue, there's no shortage of advice — but separating the shiny “growth hacks” from the strategies that actually move the needle is another story. In today's crowded subscription app market, you can't just set a price, launch, and hope for the best. You need to understand your users, their behaviors, and the subtle levers that can turn one-time downloads into long-term customers. In this episode, I'm joined by Jens to draw on his experience both inside a subscription app team and now supporting others. Jens shares practical, tested ways to increase revenue — from smarter pricing strategies to better handling of cancellations and reactivations. If you're building, marketing, or monetizing an app, this conversation is packed with insights you can start applying right away, whether your goal is to capture more value from your most loyal users, adapt pricing for different markets, or simply stop leaving revenue on the table. Today's topics include:
The most effective ways to identify untapped revenue opportunities
Balancing global pricing strategies with local market purchasing power
How to decide which innovations (new tools, APIs) are worth adopting vs which might distract from the core offering
The most overlooked techniques to reduce churn and increase reactivations
The role of partnerships in creating long-term customer value
Links and Resources:
Jens-Fabian Goetzmann on LinkedIn
RevenueCat website
Business Of Apps - connecting the app industry
Quotes from Jens-Fabian Goetzmann
“The core offering should always come first. Value needs to be created before it is extracted—monetization works best when you've already built something that engages and retains users.”
“Start experimenting with new customers first. They come in without preconceived notions, making it easier to test pricing, tiers, or new models without alienating your existing user base.”
“Premature optimization is the biggest distraction. If your business model doesn't work with the basics, you're unlikely to fix it just by tweaking monetization tactics.”
Host Business Of Apps - connecting the app industry since 2012

In Depth
Twitter's former CEO on rebuilding the web for AI | Parag Agrawal (Co-founder and CEO of Parallel)

In Depth

Play Episode Listen Later Aug 14, 2025 65:35


Parag Agrawal is the co-founder and CEO of Parallel, a startup building search infrastructure for the web's second user: AIs. Before launching Parallel, Parag spent over a decade at Twitter, where he served as CTO and later CEO during a period of intense transformation, as well as public scrutiny. In this episode, Parag shares what he learned from his time at Twitter, why the web must evolve to serve AI at massive scale, how Parallel is tackling “deep research” challenges by prioritizing accuracy over speed, and the design choices that make their APIs uniquely agent-friendly. We also discuss: Why Parallel designs for AI as the primary customer Lessons from 11 years at Twitter and applying them to a startup Potential business models to keep the web open for AI Hiring philosophy: balancing high potential and experienced talent The evolving role of engineers in an AI-assisted world Why “agents” are finally becoming useful in production And much more… References: Bloomberg launch coverage: https://www.bloomberg.com/news/articles/2025-08-14/twitter-ex-ceo-parag-agrawal-is-moving-past-his-elon-musk-drama Clay: https://www.clay.com/ Index Ventures: https://www.indexventures.com/ Josh Kopelman: https://www.linkedin.com/in/jkopelman/ KLA: https://www.kla.com/ OpenAI: https://openai.com/ Parallel: https://parallel.ai/ Patrick Collison: https://www.linkedin.com/in/patrickcollison/ Stripe: https://stripe.com/ Where to find Parag: LinkedIn: https://www.linkedin.com/in/paragagr/ X/Twitter: https://x.com/paraga Where to find Todd: LinkedIn: https://www.linkedin.com/in/toddj0/ X/Twitter: https://x.com/tjack Where to find First Round Capital: Website: https://firstround.com/ First Round Review: https://review.firstround.com/ X/Twitter: https://twitter.com/firstround YouTube: https://www.youtube.com/@FirstRoundCapital This podcast on all platforms: https://review.firstround.com/podcast Timestamps: (1:26) Founding Parallel with an AI-first mission (3:23) From Twitter CTO/CEO to 
startup founder (6:20) What the AI era spells for companies (7:58) The CEO to founder pipeline (11:18) Reflections on Twitter's transformation (17:48) How Parallel was born (22:31) Early use cases for Parallel (31:42) How has Parallel's ICP changed? (34:37) AI's impact on competitor dynamics (36:06) When should founders launch? (37:43) Parag's fundraising framework (40:14) Building a high-impact engineering team (44:49) Counterproductive uses of AI (47:35) How will the software engineer role evolve? (49:10) How are Parallel's customers using AI? (53:27) Defining agents in 2025 (55:02) Parallel's long-term vision (1:03:43) Parag's growth as a founder

Coffee w/#The Freight Coach
1262. #TFCP - No More Lost Freight Data: How APIs Are Saving Millions!

Coffee w/#The Freight Coach

Play Episode Listen Later Aug 14, 2025 32:23 Transcription Available


In today's episode, NMFTA's Keith Peterson and Farooq Huda of Worldwide Express join us to talk about how the Digital Standards Development Council (DSDC) is changing the game for freight tech! We explore how universal API standards are eliminating repetitive integration work across LTL, full truckload, 3PLs, and shippers, making “build once, use everywhere” a reality.  Our guests share real-world adoption from companies like Worldwide Express, the benefits of an ecosystem approach, and why this move toward industry-wide digitalization will improve compliance, reduce back-office overhead, and unlock massive long-term value! DSDC Website: https://dsdc.nmfta.org/home